Disentangling Factors of Variation by Mixing Them
We propose an approach to learn image representations that consist of
disentangled factors of variation without exploiting any manual labeling or
data domain knowledge. A factor of variation corresponds to an image attribute
that can be discerned consistently across a set of images, such as the pose or
color of objects. Our disentangled representation consists of a concatenation
of feature chunks, each chunk representing a factor of variation. It supports
applications such as transferring attributes from one image to another, by
simply mixing and unmixing feature chunks, and classification or retrieval
based on one or several attributes, by considering a user-specified subset of
feature chunks. We learn our representation without any labeling or knowledge
of the data domain, using an autoencoder architecture with two novel training
objectives: first, we propose an invariance objective that encourages the
encoding of each attribute, and the decoding of each chunk, to be invariant to
changes in the other attributes and chunks, respectively; second, we include a
classification objective, which ensures that each chunk corresponds to a
consistently discernible attribute in the represented image, hence avoiding
degenerate feature mappings where some chunks are completely ignored. We
demonstrate the effectiveness of our approach on the MNIST, Sprites, and CelebA
datasets.
Comment: CVPR 2018
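As an illustration only (not the authors' code), the chunk-mixing idea can be sketched as follows; the encoder, decoder, and layer sizes are hypothetical stand-ins, and attribute transfer amounts to swapping one chunk between two encoded images:

```python
import torch
import torch.nn as nn

# Minimal sketch of attribute transfer by chunk mixing (hypothetical
# architecture; the paper's actual networks and objectives differ).
class ChunkedAutoencoder(nn.Module):
    def __init__(self, image_dim=784, num_chunks=4, chunk_dim=16):
        super().__init__()
        self.num_chunks, self.chunk_dim = num_chunks, chunk_dim
        feat_dim = num_chunks * chunk_dim
        self.encoder = nn.Sequential(nn.Linear(image_dim, 256), nn.ReLU(),
                                     nn.Linear(256, feat_dim))
        self.decoder = nn.Sequential(nn.Linear(feat_dim, 256), nn.ReLU(),
                                     nn.Linear(256, image_dim))

    def encode(self, x):
        # Feature vector viewed as a concatenation of chunks,
        # one chunk per factor of variation.
        return self.encoder(x).view(-1, self.num_chunks, self.chunk_dim)

    def decode(self, chunks):
        return self.decoder(chunks.flatten(start_dim=1))

def transfer_attribute(model, x_a, x_b, chunk_idx):
    """Copy one factor (chunk) from image b into image a's representation."""
    z_a, z_b = model.encode(x_a), model.encode(x_b)
    mixed = z_a.clone()
    mixed[:, chunk_idx] = z_b[:, chunk_idx]
    return model.decode(mixed)

model = ChunkedAutoencoder()
x_a, x_b = torch.rand(1, 784), torch.rand(1, 784)
edited = transfer_attribute(model, x_a, x_b, chunk_idx=2)
```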
Challenges in Disentangling Independent Factors of Variation
We study the problem of building models that disentangle independent factors
of variation. Such models could be used to encode features that can efficiently
be used for classification and to transfer attributes between different images
in image synthesis. As data we use a weakly labeled training set. Our weak
labels indicate what single factor has changed between two data samples,
although the relative value of the change is unknown. This labeling is of
particular interest as it may be readily available without annotation costs. To
make use of weak labels we introduce an autoencoder model and train it through
constraints on image pairs and triplets. We formally prove that without
additional knowledge there is no guarantee that two images with the same factor
of variation will be mapped to the same feature. We call this issue the
reference ambiguity. Moreover, we show the role of the feature dimensionality
and adversarial training. We demonstrate experimentally that the proposed model
can successfully transfer attributes on several datasets, but show also cases
when the reference ambiguity occurs.
Comment: Submitted to ICLR 2018
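As a hedged sketch of how the weak pair labels could be used (the paper's full objective also involves triplet constraints and adversarial training, omitted here), two images known to differ in a single factor can be pushed to agree on all other feature chunks; the stand-in encoder and shapes below are hypothetical:

```python
import torch
import torch.nn.functional as F

# Hypothetical pair constraint: x1 and x2 differ only in factor
# `changed_chunk`, so all other feature chunks should match.
def pair_invariance_loss(encode, x1, x2, changed_chunk):
    z1, z2 = encode(x1), encode(x2)   # shape: (batch, num_chunks, chunk_dim)
    mask = torch.ones(z1.shape[1], dtype=torch.bool)
    mask[changed_chunk] = False       # the changed factor may differ freely
    return F.mse_loss(z1[:, mask], z2[:, mask])

encode = lambda x: x.view(-1, 4, 16)  # stand-in encoder producing 4 chunks
x1, x2 = torch.rand(8, 64), torch.rand(8, 64)
loss = pair_invariance_loss(encode, x1, x2, changed_chunk=1)
```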
FaceShop: Deep Sketch-based Face Image Editing
We present a novel system for sketch-based face image editing, enabling users
to edit images intuitively by sketching a few strokes on a region of interest.
Our interface features tools to express a desired image manipulation by
providing both geometry and color constraints as user-drawn strokes. As an
alternative to direct user input, our proposed system naturally supports a
copy-paste mode, which allows users to edit a given image region using parts
of another exemplar image, without any hand-drawn sketching at all. The
proposed interface runs in real-time and facilitates an interactive and
iterative workflow to quickly express the intended edits. Our system is based
on a novel sketch domain and a convolutional neural network trained end-to-end
to automatically learn to render image regions corresponding to the input
strokes. To achieve high quality and semantically consistent results we train
our neural network on two simultaneous tasks, namely image completion and image
translation. To the best of our knowledge, we are the first to combine these
two tasks in a unified framework for interactive image editing. Our results
show that the proposed sketch domain, network architecture, and training
procedure generalize well to real user input and enable high quality synthesis
results without additional post-processing.
Comment: 13 pages, 20 figures
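To make the setup concrete, here is a minimal, hypothetical sketch of how such a conditional network input might be assembled; the channel layout (erased image, region mask, sketch strokes, color strokes) is an assumption for illustration, not the paper's exact format:

```python
import torch

# Assemble the editing network's input: the image with the edit region
# erased, plus the user strokes as extra conditioning channels.
def build_editing_input(image, region_mask, sketch_strokes, color_strokes):
    """image: (3,H,W) in [0,1]; region_mask: (1,H,W) with 1 inside the edit
    region; sketch_strokes: (1,H,W) binary; color_strokes: (3,H,W)."""
    erased = image * (1.0 - region_mask)  # region to be completed/translated
    return torch.cat([erased, region_mask, sketch_strokes, color_strokes], dim=0)

H, W = 256, 256
net_in = build_editing_input(torch.rand(3, H, W), torch.zeros(1, H, W),
                             torch.zeros(1, H, W), torch.zeros(3, H, W))
print(net_in.shape)  # torch.Size([8, 256, 256])
```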
Probabilistic spatial analysis in quantitative microscopy with uncertainty-aware cell detection using deep Bayesian regression
The investigation of biological systems with three-dimensional microscopy demands automatic cell identification methods that are not only accurate but can also indicate the uncertainty in their predictions. Regressing density maps with deep learning and then extracting cell coordinates from local peaks in a postprocessing step is a popular and successful approach, but the postprocessing precludes any meaningful probabilistic output. We propose a framework that can operate on large microscopy images and output probabilistic predictions (i) by integrating deep Bayesian learning for the regression of uncertainty-aware density maps, where peak detection algorithms generate cell proposals, and (ii) by learning a mapping from prediction proposals to a probabilistic space that accurately represents the chances of a successful prediction. Using these calibrated predictions, we propose a probabilistic spatial analysis with Monte Carlo sampling. We demonstrate this on a bone marrow dataset, where our proposed methods reveal spatial patterns that are otherwise undetectable.
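One concrete instance of such uncertainty-aware density regression is Monte Carlo dropout, shown below as a minimal sketch; the tiny network and sample count are placeholders, and the paper's Bayesian model and calibration step are more involved:

```python
import torch
import torch.nn as nn

# Toy density regressor with dropout; keeping dropout active at test time
# yields Monte Carlo samples that approximate a predictive distribution.
model = nn.Sequential(
    nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.Dropout2d(0.2),
    nn.Conv2d(16, 1, 3, padding=1),
)

def mc_density(model, image, num_samples=20):
    model.train()  # keep dropout stochastic at inference time
    with torch.no_grad():
        samples = torch.stack([model(image) for _ in range(num_samples)])
    # Mean density map plus a per-pixel uncertainty estimate.
    return samples.mean(dim=0), samples.std(dim=0)

mean_map, std_map = mc_density(model, torch.rand(1, 1, 64, 64))
```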
Modality Attention and Sampling Enables Deep Learning with Heterogeneous Marker Combinations in Fluorescence Microscopy
Fluorescence microscopy allows for a detailed inspection of cells, cellular
networks, and anatomical landmarks by staining with a variety of
carefully-selected markers visualized as color channels. Quantitative
characterization of structures in acquired images often relies on automatic
image analysis methods. Despite the success of deep learning methods in other
vision applications, their potential for fluorescence image analysis remains
underexploited. One reason lies in the considerable workload required to train
accurate models, which are normally specific to a given combination of
markers, and therefore applicable to a very restricted number of experimental
settings. We herein propose Marker Sampling and Excite, a neural network
approach with a modality sampling strategy and a novel attention module that
together enable (i) flexible training with heterogeneous datasets with
combinations of markers and (ii) successful utility of learned models on
arbitrary subsets of markers prospectively. We show that our single neural
network solution performs comparably to an upper bound scenario where an
ensemble of many networks is naïvely trained for each possible marker
combination separately. In addition, we demonstrate the feasibility of our
framework in high-throughput biological analysis by revising a recent
quantitative characterization of bone marrow vasculature in 3D confocal
microscopy datasets. Not only can our work substantially ameliorate the use of
deep learning in fluorescence microscopy analysis, but it can also be utilized
in other fields with incomplete data acquisitions and missing modalities.
Comment: 17 pages, 5 figures, 3 pages supplement (3 figures)
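A minimal sketch of the marker-sampling idea (hypothetical shapes and probabilities; the paper's attention module is not reproduced here): randomly zero out marker channels during training and expose the availability mask so the network can adapt to arbitrary marker subsets:

```python
import torch

# Randomly drop input markers per sample so a single network learns to
# cope with heterogeneous marker combinations; the availability mask can
# additionally condition an attention module.
def sample_markers(batch, keep_prob=0.7):
    """batch: (B, num_markers, H, W). Zero out a random subset of markers,
    always keeping at least one per sample."""
    B, M = batch.shape[:2]
    avail = torch.rand(B, M) < keep_prob
    none_kept = ~avail.any(dim=1)                     # guarantee >= 1 marker
    avail[none_kept, torch.randint(M, (int(none_kept.sum()),))] = True
    masked = batch * avail[:, :, None, None].float()
    return masked, avail.float()  # mask is also fed to the network

x = torch.rand(4, 5, 32, 32)
masked, avail = sample_markers(x)
```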
Deep Learning-based Image Synthesis using Sketching and Example-based Techniques
The large amount of digital visual data produced by humanity every day creates demand for efficient computational tools to manage this data. In this thesis we develop and study sketch- and example-based image synthesis and image-retrieval techniques that support users in creating a photorealistic image from a visual concept they have in mind.
The sketch-based image retrieval system that we introduce in this thesis is designed to answer arbitrary queries that may go beyond searching for predefined object or scene categories. Our key idea is to combine sketch-based queries with interactive, semantic re-ranking of query results by leveraging deep feature representations learned for image classification. This allows us to cluster semantically similar images, re-rank based on the clusters, and present more meaningful results to the user. We report on two large-scale benchmarks and demonstrate that our re-ranking approach leads to significant improvements over the state of the art. A user study designed to evaluate a practical use case confirms the benefits of our approach.
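A hedged sketch of the re-ranking step, assuming precomputed deep classification features for the top-N results; the k-means clustering and the single user-tagged relevant result below are simplifications of the interactive procedure:

```python
import numpy as np
from sklearn.cluster import KMeans

# Cluster deep features of the top sketch-retrieval results, then promote
# the semantic cluster the user marks as relevant.
def rerank(result_features, result_ids, relevant_idx, num_clusters=5):
    """result_features: (N, D) deep features of the top-N retrieved images;
    relevant_idx: index of a result the user tags as semantically relevant."""
    labels = KMeans(n_clusters=num_clusters, n_init=10).fit_predict(result_features)
    target = labels[relevant_idx]
    # Stable sort: results in the chosen cluster come first, original
    # sketch-similarity order is preserved within each group.
    order = np.argsort(labels != target, kind="stable")
    return [result_ids[i] for i in order]

feats = np.random.rand(50, 128)
reranked = rerank(feats, list(range(50)), relevant_idx=0)
```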
Next, we develop a representation for fine-grained, example-based image retrieval. Given a query, we want to retrieve data items of the same class, and, in addition, rank these items according to intra-class similarity. In our training data we assume partial knowledge: class labels are available, but the intra-class attributes are not. To compensate for this knowledge gap we propose using autoencoders that can be trained to produce features both with and without labels. Our main hypothesis is that network architectures that incorporate an autoencoder can learn features that meaningfully cluster data based on the intra-class variability. We propose and compare three different architectures to construct our features. We perform experiments on four datasets and find that these architectures indeed improve fine-grained retrieval. In particular, we obtain state-of-the-art performance on fine-grained sketch retrieval.
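One plausible instance of such an architecture (layer sizes and loss weighting are assumptions, and only one of the three compared variants is hinted at): a shared encoder trained jointly with a classification head on the labels and a decoder on unlabeled reconstruction, so the feature also reflects intra-class variability:

```python
import torch
import torch.nn as nn

# Shared encoder with two heads: classification uses the available labels,
# reconstruction needs none, so the feature z can capture intra-class detail.
class RetrievalNet(nn.Module):
    def __init__(self, in_dim=784, feat_dim=64, num_classes=10):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(in_dim, 256), nn.ReLU(),
                                     nn.Linear(256, feat_dim))
        self.decoder = nn.Sequential(nn.Linear(feat_dim, 256), nn.ReLU(),
                                     nn.Linear(256, in_dim))
        self.classifier = nn.Linear(feat_dim, num_classes)

    def forward(self, x):
        z = self.encoder(x)
        return self.classifier(z), self.decoder(z), z

model = RetrievalNet()
x, y = torch.rand(8, 784), torch.randint(10, (8,))
logits, recon, z = model(x)  # z is the retrieval feature
loss = nn.functional.cross_entropy(logits, y) + nn.functional.mse_loss(recon, x)
```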
In the second part of this thesis we develop systems for interactive image editing applications. First, we present a novel system for sketch-based face image editing, enabling users to edit images intuitively by sketching a few strokes on a region of interest. Our interface features tools to express a desired image manipulation by providing both geometry and color constraints as user-drawn strokes. The proposed interface runs in real-time and facilitates an interactive and iterative workflow to quickly express the intended edits.
Our system is based on a novel sketch domain and a convolutional neural network trained end-to-end to automatically learn to render image regions corresponding to the input strokes. To achieve high quality and semantically consistent results we train our neural network on two simultaneous tasks, namely image completion and image translation. Our results show that the proposed sketch domain, network architecture, and training procedure generalize well to real user input and enable high-quality synthesis results without additional post-processing.
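The two-task training can be sketched as a combined objective; the L1 losses, the weighting, and the stand-in network below are assumptions for illustration, not the thesis' exact formulation:

```python
import torch
import torch.nn.functional as F

# One shared network optimized on two batch types: completion (fill an
# erased region from context) and translation (render user strokes).
def two_task_loss(net, completion_batch, translation_batch, w=1.0):
    in_c, target_c = completion_batch   # erased-image input -> original image
    in_t, target_t = translation_batch  # sketch-domain input -> photo
    return F.l1_loss(net(in_c), target_c) + w * F.l1_loss(net(in_t), target_t)

net = torch.nn.Conv2d(3, 3, 3, padding=1)  # stand-in for the editing network
batch = lambda: (torch.rand(2, 3, 64, 64), torch.rand(2, 3, 64, 64))
loss = two_task_loss(net, batch(), batch())
```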
Finally, we propose novel systems for smart copy-paste, enabling the synthesis of high-quality results given a masked source image content and a target image context as input. Our systems naturally resolve both shading and geometric inconsistencies between source and target image, resulting in a merged output image in which the source content appears seamlessly pasted into the target context. We introduce a novel training image transformation procedure that allows us to train a deep convolutional neural network end-to-end to automatically learn a representation suitable for copy-pasting. Our training procedure works with any image dataset without additional information such as labels, and we demonstrate the effectiveness of our systems on multiple datasets.
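As a hedged illustration of the pair-construction idea (a simple color jitter stands in for the thesis' actual transformation procedure): perturb the masked region of an image and train the network to recover the clean original, so it learns to harmonize pasted content with its context:

```python
import torch

# Build a self-supervised training pair: input is a "badly pasted"
# composite, target is the clean original image.
def make_copy_paste_pair(image, mask, max_shift=0.3):
    """image: (3,H,W) in [0,1]; mask: (1,H,W), 1 inside the pasted region."""
    shift = (torch.rand(3, 1, 1) * 2 - 1) * max_shift  # per-channel color jitter
    perturbed = (image + shift).clamp(0, 1)
    net_input = image * (1 - mask) + perturbed * mask  # inconsistent composite
    return net_input, image                            # target: clean original

img = torch.rand(3, 128, 128)
m = torch.zeros(1, 128, 128); m[:, 32:96, 32:96] = 1
x, y = make_copy_paste_pair(img, m)
```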