DeepSketch2Face: A Deep Learning Based Sketching System for 3D Face and Caricature Modeling

Xiaoguang Han    Chang Gao    Yizhou Yu
The University of Hong Kong

ACM Transactions on Graphics (Proceedings of SIGGRAPH 2017)
Fig. 1. Using our sketching system, an amateur user can create 3D face or caricature models with complicated shapes and expressions in a few minutes. Both models shown here were created in less than 10 minutes by a user without any prior drawing or modeling experience.
Abstract
Face modeling has received considerable attention in the field of visual computing. In many scenarios, including cartoon characters, avatars for social media, 3D face caricatures, and face-related art and design, low-cost interactive face modeling is a popular approach, especially among amateur users. In this paper, we propose a deep learning based sketching system for 3D face and caricature modeling. The system has a labor-efficient sketching interface that allows the user to draw freehand, imprecise yet expressive 2D lines representing the contours of facial features. A novel CNN-based deep regression network is designed for inferring 3D face models from 2D sketches. Our network fuses both CNN and shape-based features of the input sketch, and has two independent branches of fully connected layers generating independent subsets of coefficients for a bilinear face representation. Our system also supports gesture-based interactions for users to further manipulate initial face models. Both user studies and numerical results indicate that our sketching system can help users create face models quickly and effectively. A significantly expanded face database with diverse identities, expressions, and levels of exaggeration is constructed to promote further research and evaluation of face modeling techniques.
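To make the bilinear face representation mentioned above concrete, here is a minimal NumPy sketch of how a core tensor can be contracted with identity and expression coefficient vectors to produce a face mesh. The tensor dimensions and variable names (core, w_id, w_exp) are illustrative assumptions, not the exact values used in the paper.

```python
import numpy as np

# Illustrative dimensions only; the actual database sizes may differ.
N_VERTS, N_ID, N_EXP = 1000, 50, 25

# Core tensor of the bilinear model:
# (flattened vertex coordinates, identity modes, expression modes).
core = np.random.randn(3 * N_VERTS, N_ID, N_EXP)

def bilinear_face(core, w_id, w_exp):
    """Contract the core tensor with identity and expression
    coefficients to obtain one 3D face mesh."""
    # verts[v] = sum_{i,e} core[v, i, e] * w_id[i] * w_exp[e]
    verts = np.einsum('vie,i,e->v', core, w_id, w_exp)
    return verts.reshape(-1, 3)   # (N_VERTS, 3) vertex positions

# The regression network predicts w_id and w_exp from the sketch;
# here they are random stand-ins.
mesh = bilinear_face(core, np.random.randn(N_ID), np.random.randn(N_EXP))
print(mesh.shape)   # (1000, 3)
```

Because the network's two fully connected branches each regress one of these coefficient subsets, identity and expression remain separately controllable.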
Download
Paper  Supplemental  Demo  Code  Data

Video
Media Coverage
SIGGRAPH 2017 Technical Papers Preview Trailer; NVIDIA Newsletter
Workflow
Fig. 2. Our sketching system has three interaction modes: the initial sketching mode, the follow-up sketching mode, and the gesture-based refinement mode. In the initial sketching mode, the 3D face is updated immediately after each operation. The follow-up sketching mode starts when an output model (a) from the initial sketching mode is rendered as a sketch (b). A sequence of operations in this mode is shown from (b) to (h). Users can switch in real time from 2D sketching to 3D model viewing (e.g., (d) to (i), (g) to (j), and (h) to (k)). The created shape (k) can be refined in the gesture-based refinement mode. (l) and (m) show the gestures used for depth depressing and bulging, and the corresponding results after each operation are shown in (n) and (o). A solid red arrow indicates a single operation, a dashed red arrow indicates several operations, and a blue arrow indicates model updating.
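As a rough illustration of the gesture-based depth editing shown in (l)-(o), the sketch below pushes mesh vertices whose screen-space projections lie near a drawn stroke along (or against) the view direction, with a Gaussian falloff. The function names, the falloff scheme, and all parameters are hypothetical; the paper's actual gesture handling may differ.

```python
import numpy as np

def apply_depth_gesture(verts, stroke_pts_2d, project, view_dir,
                        strength=0.01, radius=0.05, bulge=True):
    """Displace mesh vertices near a gesture stroke along the view
    direction (bulge) or against it (depress).

    verts:         (N, 3) mesh vertex positions
    stroke_pts_2d: (M, 2) sampled screen-space stroke points
    project:       callable mapping (N, 3) verts -> (N, 2) screen coords
    view_dir:      (3,) unit view direction
    """
    screen = project(verts)                                   # (N, 2)
    # Distance from each projected vertex to its nearest stroke point.
    d = np.linalg.norm(screen[:, None, :] - stroke_pts_2d[None, :, :],
                       axis=2).min(axis=1)                    # (N,)
    falloff = np.exp(-(d / radius) ** 2)                      # Gaussian weight
    sign = 1.0 if bulge else -1.0
    return verts + sign * strength * falloff[:, None] * view_dir

# Example with a trivial orthographic projection along +z.
verts = np.random.rand(100, 3)
stroke = np.array([[0.5, 0.5], [0.55, 0.5]])
depressed = apply_depth_gesture(verts, stroke,
                                project=lambda v: v[:, :2],
                                view_dir=np.array([0.0, 0.0, 1.0]),
                                bulge=False)                  # depress
```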
Network Architecture
Fig. 3. Our network architecture.
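The architecture figure is available only as an image on the original page. Based on the description in the abstract, the following is a hypothetical PyTorch reconstruction of the regression network: a CNN encodes the rasterized sketch, its features are fused with hand-crafted shape features of the sketched curves, and two independent fully connected branches regress the identity and expression coefficients of the bilinear model. Layer sizes, the backbone, and the shape-feature dimension are assumptions, not the paper's specification.

```python
import torch
import torch.nn as nn

class SketchToFaceNet(nn.Module):
    """Two-branch regression network: fuses CNN features of the
    rasterized sketch with a shape-feature vector, then regresses
    identity and expression coefficients for the bilinear model."""

    def __init__(self, n_id=50, n_exp=25, shape_feat_dim=64):
        super().__init__()
        # Small CNN over the 2D sketch image (assumed 1 x 128 x 128).
        self.cnn = nn.Sequential(
            nn.Conv2d(1, 32, 5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(32, 64, 5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(64, 128, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        fused_dim = 128 + shape_feat_dim
        # Two independent fully connected branches.
        self.id_branch = nn.Sequential(
            nn.Linear(fused_dim, 256), nn.ReLU(), nn.Linear(256, n_id))
        self.exp_branch = nn.Sequential(
            nn.Linear(fused_dim, 256), nn.ReLU(), nn.Linear(256, n_exp))

    def forward(self, sketch_img, shape_feats):
        fused = torch.cat([self.cnn(sketch_img), shape_feats], dim=1)
        return self.id_branch(fused), self.exp_branch(fused)

# Example forward pass with dummy inputs.
net = SketchToFaceNet()
img = torch.randn(4, 1, 128, 128)   # batch of rasterized sketches
feats = torch.randn(4, 64)          # hand-crafted shape features
w_id, w_exp = net(img, feats)
print(w_id.shape, w_exp.shape)      # [4, 50] and [4, 25]
```

Keeping the two branches independent mirrors the abstract's statement that the network generates independent subsets of coefficients for the bilinear face representation.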
Results Gallery
Fig. 4. A gallery of results created using our sketching system. On average, each model was created in around 8 minutes.
Acknowledgements
The authors would like to thank the reviewers for their constructive comments, and the participants of our user study for their precious time.
Bibtex
@article{HanGY17,
  author  = {Xiaoguang Han and Chang Gao and Yizhou Yu},
  title   = {DeepSketch2Face: A Deep Learning Based Sketching System for 3D Face and Caricature Modeling},
  journal = {ACM Transactions on Graphics},
  volume  = {36},
  number  = {4},
  year    = {2017}
}
Copyright © 2017 Xiaoguang Han