Learn to synthesize and synthesize to learn
Attribute-guided face image synthesis aims to manipulate attributes on a face image. Most existing methods for image-to-image translation can either perform only a fixed translation between any two image domains using a single attribute, or require training data with the attributes of interest for each subject. Consequently, these methods can only train one specific model for each pair of image domains, which limits their ability to deal with more than two domains. Another disadvantage of these methods is that they often suffer from mode collapse, which degrades the quality of the generated images. To overcome these shortcomings, we propose an attribute-guided face image generation method using a single model, capable of synthesizing multiple photo-realistic face images conditioned on the attributes of interest. In addition, we adopt the proposed model to increase the realism of simulated face images while preserving the face characteristics. Compared to existing models, synthetic face images generated by our method exhibit good photorealistic quality on several face datasets. Finally, we demonstrate that the generated facial images can be used for synthetic data augmentation and improve the performance of a classifier used for facial expression recognition.
Comment: Accepted to Computer Vision and Image Understanding (CVIU)
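A single generator can handle multiple attribute domains by conditioning its input on a target-attribute vector. The sketch below illustrates the common conditioning scheme (tiling the attribute vector into spatial maps and stacking it with the image); the function name and shapes are illustrative, not the paper's actual architecture.

```python
import numpy as np

def condition_on_attributes(image, attrs):
    """Tile a binary attribute vector into constant spatial maps and stack
    them onto the input image, so one generator can be steered toward any
    target attribute combination (illustrative sketch only)."""
    h, w, _ = image.shape
    attr_maps = np.broadcast_to(attrs, (h, w, len(attrs)))  # one map per attribute
    return np.concatenate([image, attr_maps], axis=-1)      # generator input

# usage: a 64x64 RGB face with 5 target attributes (e.g. smiling = 1)
x = np.zeros((64, 64, 3), dtype=np.float32)
a = np.array([1, 0, 0, 1, 0], dtype=np.float32)
g_in = condition_on_attributes(x, a)
print(g_in.shape)  # (64, 64, 8)
```

Because the attributes enter as extra input channels, switching domains only means changing the vector, not retraining a per-domain model.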
Literature Review
Students should learn to develop a social scientific research question; distinguish scholarly from non-scholarly sources; locate and evaluate relevant scholarly literature; and synthesize multiple scholarly sources.
Learning Flow-based Feature Warping for Face Frontalization with Illumination Inconsistent Supervision
Despite recent advances in deep learning-based face frontalization methods, photo-realistic and illumination-preserving frontal face synthesis remains challenging due to the large pose and illumination discrepancies during training. We propose a novel Flow-based Feature Warping Model (FFWM) which can learn to synthesize photo-realistic and illumination-preserving frontal images under illumination-inconsistent supervision. Specifically, an Illumination Preserving Module (IPM) is proposed to learn illumination-preserving image synthesis from illumination-inconsistent image pairs. The IPM includes two pathways which collaborate to ensure that the synthesized frontal images preserve illumination and fine details. Moreover, a Warp Attention Module (WAM) is introduced to reduce the pose discrepancy at the feature level, and hence to synthesize frontal images more effectively and preserve more details of the profile images. The attention mechanism in WAM helps reduce the artifacts caused by the displacements between the profile and frontal images. Quantitative and qualitative experimental results show that our FFWM can synthesize photo-realistic and illumination-preserving frontal images and performs favorably against the state of the art.
Comment: ECCV 2020. Code is available at: https://github.com/csyxwei/FFW
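The core operation in flow-based feature warping is resampling a feature map along a dense 2D flow field. A minimal sketch, assuming nearest-neighbour sampling for brevity (FFWM itself uses differentiable bilinear sampling inside a network):

```python
import numpy as np

def warp_features(feat, flow):
    """Warp a feature map with a dense 2D flow field.
    feat: (H, W, C) feature map; flow: (H, W, 2) per-pixel (dy, dx) offsets.
    Nearest-neighbour sampling is used here only to keep the sketch short."""
    h, w, _ = feat.shape
    ys, xs = np.mgrid[0:h, 0:w]
    src_y = np.clip(np.round(ys + flow[..., 0]).astype(int), 0, h - 1)
    src_x = np.clip(np.round(xs + flow[..., 1]).astype(int), 0, w - 1)
    return feat[src_y, src_x]

# shift a toy feature map one pixel to the right via a constant flow
f = np.arange(16, dtype=np.float32).reshape(4, 4, 1)
flow = np.zeros((4, 4, 2), dtype=np.float32)
flow[..., 1] = -1.0            # each output pixel samples its left neighbour
warped = warp_features(f, flow)
```

In FFWM the flow aligns profile-pose features to the frontal pose, and the attention weights then down-weight unreliably warped locations.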
Modular Supervisory Synthesis for Unknown Plant Models Using Active Learning
This paper proposes an approach to synthesize a modular discrete-event supervisor to control a plant whose behavior model is unknown, so as to satisfy given specifications. To this end, the Modular Supervisor Learner (MSL) is presented, which, based on the known specifications and the structure of the system, defines the configuration of the supervisors to learn. Then, by actively querying the simulation and interacting with the specification, it explores the state space of the system to learn a set of maximally permissive controllable supervisors.
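Active exploration of an unknown plant can be pictured as breadth-first querying of a black-box simulator. The sketch below shows only this exploration loop under assumed names (`step`, `explore_plant` are illustrative); the actual MSL additionally composes the learned model with the specifications to synthesize controllable supervisors.

```python
from collections import deque

def explore_plant(step, init, events):
    """Learn the reachable transition structure of an unknown plant by
    actively querying a simulator: step(state, event) returns the next
    state, or None if the event is disabled in that state."""
    transitions, frontier, seen = {}, deque([init]), {init}
    while frontier:
        s = frontier.popleft()
        for e in events:
            t = step(s, e)
            if t is None:
                continue                    # event disabled here
            transitions[(s, e)] = t
            if t not in seen:               # enqueue newly discovered states
                seen.add(t)
                frontier.append(t)
    return transitions

# toy plant: a counter that can be incremented up to 2 and reset
def sim(s, e):
    if e == "inc" and s < 2:
        return s + 1
    if e == "reset" and s > 0:
        return 0
    return None

model = explore_plant(sim, 0, ["inc", "reset"])
```

The resulting transition map is the learned plant model over which supervisor synthesis would then be performed.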
3D Face Reconstruction from Light Field Images: A Model-free Approach
Reconstructing 3D facial geometry from a single RGB image has recently instigated wide research interest. However, it is still an ill-posed problem, and most methods rely on prior models, hence undermining the accuracy of the recovered 3D faces. In this paper, we exploit the Epipolar Plane Images (EPI) obtained from light field cameras and learn CNN models that recover horizontal and vertical 3D facial curves from the respective horizontal and vertical EPIs. Our 3D face reconstruction network (FaceLFnet) comprises a densely connected architecture to learn accurate 3D facial curves from low-resolution EPIs. To train the proposed FaceLFnets from scratch, we synthesize photo-realistic light field images from 3D facial scans. The curve-by-curve 3D face estimation approach allows the networks to learn from only 14K images of 80 identities, which still comprise over 11 million EPIs/curves. The estimated facial curves are merged into a single point cloud, to which a surface is fitted to obtain the final 3D face. Our method is model-free, requires only a few training samples to learn FaceLFnet, and can reconstruct 3D faces with high accuracy from single light field images under varying poses, expressions and lighting conditions. Comparisons on the BU-3DFE and BU-4DFE datasets show that our method reduces reconstruction errors by over 20% compared to the recent state of the art.
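The curve-by-curve estimates can be merged into one point cloud before surface fitting. A minimal sketch, assuming one fused depth per pixel from the horizontal and vertical curve estimates (the averaging fusion and shapes here are illustrative, not the paper's exact merging procedure):

```python
import numpy as np

def curves_to_pointcloud(h_curves, v_curves):
    """Merge per-row (horizontal) and per-column (vertical) depth-curve
    estimates into a single 3D point cloud; a surface would then be
    fitted to this cloud to obtain the final face.
    h_curves, v_curves: (H, W) depth estimates on the same pixel grid."""
    h, w = h_curves.shape
    z = 0.5 * (h_curves + v_curves)          # fuse the two depth estimates
    ys, xs = np.mgrid[0:h, 0:w]
    # one (x, y, z) point per pixel -> (H*W, 3) cloud
    return np.stack([xs.ravel(), ys.ravel(), z.ravel()], axis=1)

cloud = curves_to_pointcloud(np.ones((4, 4)), 3 * np.ones((4, 4)))
print(cloud.shape)  # (16, 3)
```

Fusing the two orthogonal curve families is what lets per-curve networks produce a consistent dense reconstruction.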
What’s the Weather? [1st grade]
Students will engage in a twelve-day unit delving into different facets of weather and the seasons. Lessons give students the chance to explore the differences between hot or cold, clear or cloudy, rainy or icy, and calm or windy. Students will learn to observe and record changes in the sky during the day and at night. They also learn how to interpret a thermometer. Students also have the chance to learn about the seasons, realizing that there is a difference between the seasons in Texas and the seasons in New England. At the end of the unit, students have the opportunity to synthesize their learning by pretending to go on a trip for which they need to predict the weather and appropriately prepare for the vacation.