Facial Expression Transfer with Machine Learning

COMP4801 Final Year Project 2018 - 2019

About

This project studies the use of Deep Learning and the Facial Action Coding System (FACS) for facial expression generation, and the application of both to facial expression transfer. We develop software that uses a Generative Adversarial Network conditioned on Action Unit (AU) annotations to achieve photorealistic facial expression synthesis. In particular, the software is built on top of the GANimation model and is further combined with the OpenFace toolkit to achieve facial expression transfer.
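
At a high level, the transfer pipeline chains the two components: OpenFace estimates AU intensities from a source face, and the GANimation-based generator re-renders a target face under those intensities. The sketch below illustrates that flow only; both helper functions are placeholders standing in for the real components, not the project's actual API.

```python
def extract_aus(source_image_path):
    """Placeholder for running OpenFace AU detection on the source face."""
    return {"AU06": 0.8, "AU12": 1.0}  # e.g. a smile

def render_expression(target_image_path, au_intensities):
    """Placeholder for the GANimation-based generator conditioned on AUs."""
    raise NotImplementedError

def transfer_expression(source_image_path, target_image_path):
    """Copy the source face's expression onto the target identity."""
    aus = extract_aus(source_image_path)
    return render_expression(target_image_path, aus)
```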

FACS

FACS is an anatomical system developed by Ekman and Friesen in 1978 for measuring facial expressions. In this system, a facial expression is decomposed into several independent muscle movements. The concept of Action Units (AUs) is introduced to represent the correspondence between each independent region of a facial expression and the facial muscles involved. For example, AU1 represents the “Inner Brow Raiser”, which involves the frontalis muscle (pars medialis).
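
An expression can therefore be encoded as a vector of AU intensities. The sketch below uses a hand-picked subset of AUs and a normalized [0, 1] intensity scale (both are illustrative conventions, not part of FACS itself) to encode a smile, which typically combines AU6 (Cheek Raiser) and AU12 (Lip Corner Puller).

```python
import numpy as np

# FACS action units used in this illustration (a small subset of the system).
AU_CODES = ["AU01", "AU02", "AU04", "AU06", "AU12", "AU25"]

def au_vector(active):
    """Build an AU intensity vector from a {code: intensity} mapping.

    Intensities are normalized to [0, 1], where 0 means the unit is
    inactive and 1 is the strongest FACS level (level E).
    """
    return np.array([active.get(code, 0.0) for code in AU_CODES])

# A smile: AU6 (cheek raiser) plus AU12 (lip corner puller).
smile = au_vector({"AU06": 0.8, "AU12": 1.0})
print(smile)  # [0.  0.  0.  0.8 1.  0. ]
```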

GANimation

In 2018, Pumarola et al. introduced a novel GAN scheme conditioned on AU annotations for facial expression generation. The model learns a mapping from a single facial image and an AU intensity vector to a new image of the same identity under the desired expression. The model performs unpaired image-to-image translation: instead of pairs of images of the same person under different expressions, only images with AU annotations are required for training, which makes the model more general and flexible. Moreover, they made the network robust to changes in background and lighting by adding an attention mechanism that restricts the network to manipulating only those regions of the image that are relevant to producing the new expression.
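
The attention idea amounts to a per-pixel blend: the generator regresses both a color image C and an attention mask A, and the final output keeps the original pixels wherever A is close to 1, so only expression-relevant regions are altered. Below is a minimal PyTorch sketch of this composition step; the surrounding generator architecture is omitted, and the tensor names and shapes are our own assumptions.

```python
import torch

def compose_output(original, color, attention):
    """Blend the generator's color regression with the input image.

    original:  (B, 3, H, W) input face image
    color:     (B, 3, H, W) per-pixel color regression from the generator
    attention: (B, 1, H, W) mask in [0, 1]; values near 1 keep the
               original pixel, so only expression-relevant regions change
    """
    return attention * original + (1.0 - attention) * color

# Toy example with random tensors standing in for real network outputs.
img = torch.rand(1, 3, 128, 128)
color = torch.rand(1, 3, 128, 128)
mask = torch.rand(1, 1, 128, 128)
out = compose_output(img, color, mask)
print(out.shape)  # torch.Size([1, 3, 128, 128])
```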

OpenFace

OpenFace is a popular open-source toolkit for facial behavior analysis, including facial action unit detection based on the research of Baltrušaitis et al. It can extract 18 kinds of facial action units, each with five discrete levels of intensity.
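
OpenFace's FeatureExtraction tool writes per-frame results to a CSV file whose AU intensity columns are named like AU01_r, on a 0 to 5 scale. The sketch below turns those columns into the normalized intensity values a generator can consume; the file path is a placeholder, and header names are stripped because some OpenFace versions pad them with spaces.

```python
import csv

def read_au_intensities(csv_path):
    """Read AU intensity columns (AU??_r, scale 0-5) from an OpenFace CSV
    and return them normalized to [0, 1], one dict per frame."""
    frames = []
    with open(csv_path, newline="") as f:
        reader = csv.DictReader(f)
        for row in reader:
            # Some OpenFace versions pad header names with spaces.
            row = {k.strip(): v for k, v in row.items()}
            aus = {
                k: float(v) / 5.0
                for k, v in row.items()
                if k.startswith("AU") and k.endswith("_r")
            }
            frames.append(aus)
    return frames

# Placeholder path: a CSV produced by OpenFace's FeatureExtraction tool.
for frame in read_au_intensities("processed/video.csv"):
    print(frame["AU01_r"], frame["AU12_r"])
```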

Schedule

Phase 1
  September 1 – 30, 2018: Detailed project plan; project web page

Phase 2
  Stage 1
    October 1 – 21, 2018: Data preprocessing; training and testing of the GANimation model developed by Pumarola et al.
    October 22 – November 18, 2018: Build a desktop application that accepts AU intensity input and modifies an image accordingly
  Stage 2
    November 19 – December 31, 2018: Chain the output of OpenFace into the GANimation model
    January 6 – 20, 2019: Detailed interim report
    January 21 – February 14, 2019: Deliver an application that uses OpenFace to achieve facial expression transfer
  Extra Stage
    February 15 – April 5, 2019: A mobile application demo that works by calling an API
    April 6 – 14, 2019: Final report

Phase 3
  April 14, 2019: Deliverables:
    Finalized, tested implementation: the desktop application and/or a mobile version of it
    Project report

Our team

Supervisor: Dr. Dirk Schnieders
Student: CHOI Wai Yiu
Student: CHEUNG Tak Kin