Neural Face Editing with Intrinsic Image Disentangling
Traditional face editing methods often require a number of sophisticated,
task-specific algorithms to be applied one after the other, a process that
is tedious, fragile, and computationally intensive. In this paper, we propose
an end-to-end generative adversarial network that infers a face-specific
disentangled representation of intrinsic face properties, including shape (i.e.
normals), albedo, and lighting, and an alpha matte. We show that this network
can be trained on "in-the-wild" images by incorporating an in-network
physically-based image formation module and appropriate loss functions. Our
disentangled latent representation allows for semantically relevant edits,
where one aspect of facial appearance can be manipulated while keeping
orthogonal properties fixed, and we demonstrate its use for a number of facial
editing applications.
Comment: CVPR 2017 oral
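The in-network image formation the abstract describes can be sketched as Lambertian shading under low-order spherical-harmonics lighting, with the shaded face alpha-matted over a background. A minimal NumPy sketch (function names are illustrative, not from the paper, and the SH basis omits normalization constants):

```python
import numpy as np

def sh_shading(normals, light):
    """Shading from 9-dim spherical-harmonics lighting coefficients.

    normals: (H, W, 3) unit surface normals
    light:   (9,) SH coefficients for one color channel
    """
    nx, ny, nz = normals[..., 0], normals[..., 1], normals[..., 2]
    # Unnormalized second-order SH basis evaluated at each normal
    basis = np.stack([
        np.ones_like(nx), nx, ny, nz,
        nx * ny, nx * nz, ny * nz,
        nx**2 - ny**2, 3 * nz**2 - 1,
    ], axis=-1)                                       # (H, W, 9)
    return basis @ light                              # (H, W)

def compose(albedo, normals, light, matte, background):
    """Image formation: face = albedo * shading, then alpha-matted."""
    shading = sh_shading(normals, light)[..., None]   # (H, W, 1)
    face = albedo * shading                           # per-pixel product
    return matte * face + (1.0 - matte) * background
```

Because each step is differentiable, losses on the reconstructed image can propagate back to the inferred normals, albedo, lighting, and matte.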
Physics and Chemistry from Parsimonious Representations: Image Analysis via Invariant Variational Autoencoders
Electron, optical, and scanning probe microscopy methods are generating
ever-increasing volumes of image data containing information on atomic and
mesoscale structures and functionalities. This necessitates the development of
machine learning methods for the discovery of physical and chemical phenomena
from the data, such as manifestations of symmetry breaking in electron and
scanning tunneling microscopy images and the variability of nanoparticles.
Variational autoencoders (VAEs) are emerging as a powerful paradigm for
unsupervised data analysis, allowing one to disentangle the factors of
variability and discover optimal parsimonious representations. Here, we
summarize recent developments in
VAEs, covering the basic principles and intuition behind them. Invariant
VAEs are introduced as an approach to accommodate the scale and translation
invariances present in imaging data and to separate known factors of
variation from the ones to be discovered. We further describe the
opportunities enabled by the control over VAE architecture, including
conditional, semi-supervised, and joint VAEs. Several case studies of VAE
applications for toy models and experimental data sets in Scanning Transmission
Electron Microscopy are discussed, emphasizing the deep connection between VAEs
and basic physical principles. All the code used here is available at
https://github.com/saimani5/VAE-tutorials, and this article can serve as an
application guide when applying these methods to one's own data sets.
Comment: 55 pages, 16 figures
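The disentangling the abstract refers to rests on two standard VAE ingredients: the reparameterization trick for sampling the latent code, and a closed-form KL term that pulls the approximate posterior toward a standard normal prior. A minimal NumPy sketch of both pieces (function names are illustrative, not taken from the linked tutorials):

```python
import numpy as np

rng = np.random.default_rng(0)

def reparameterize(mu, log_var):
    """Sample z = mu + sigma * eps, keeping the sampling step differentiable
    with respect to the encoder outputs mu and log_var."""
    eps = rng.standard_normal(mu.shape)
    return mu + np.exp(0.5 * log_var) * eps

def kl_divergence(mu, log_var):
    """KL(q(z|x) || N(0, I)) per sample; closed form for diagonal Gaussians."""
    return -0.5 * np.sum(1 + log_var - mu**2 - np.exp(log_var), axis=-1)
```

The full training objective adds a reconstruction term on the decoder output; invariant VAE variants additionally transform the input (e.g. by an inferred shift or scale) before encoding, so those known factors are kept out of the discovered latents.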