57 research outputs found
Disentangling Factors of Variation by Mixing Them
We propose an approach to learn image representations that consist of
disentangled factors of variation without exploiting any manual labeling or
data domain knowledge. A factor of variation corresponds to an image attribute
that can be discerned consistently across a set of images, such as the pose or
color of objects. Our disentangled representation consists of a concatenation
of feature chunks, each chunk representing a factor of variation. It supports
applications such as transferring attributes from one image to another, by
simply mixing and unmixing feature chunks, and classification or retrieval
based on one or several attributes, by considering a user-specified subset of
feature chunks. We learn our representation without any labeling or knowledge
of the data domain, using an autoencoder architecture with two novel training
objectives: first, we propose an invariance objective that encourages the
encoding of each attribute, and the decoding of each chunk, to be invariant to
changes in the other attributes and chunks, respectively; second, we include a
classification objective, which ensures that each chunk corresponds to a
consistently discernible attribute in the represented image, hence avoiding
degenerate feature mappings where some chunks are completely ignored. We
demonstrate the effectiveness of our approach on the MNIST, Sprites, and CelebA
datasets.
Comment: CVPR 201
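The attribute-transfer operation described above reduces to swapping feature chunks between two encoded images. As a toy illustration (not the authors' code; the function name and chunk layout are made up for this sketch), mixing chunk 1 of one representation into another looks like:

```python
import numpy as np

def mix_chunks(f_a, f_b, chunk_size, swap_idx):
    """Transfer one attribute from f_b into f_a by replacing the feature
    chunk at swap_idx. Illustrative of the mixing idea only: in the actual
    method the mixed code is decoded back into an image."""
    mixed = f_a.copy()
    start = swap_idx * chunk_size
    mixed[start:start + chunk_size] = f_b[start:start + chunk_size]
    return mixed

# two toy 6-d representations, each split into 3 chunks of size 2
f_a = np.array([1., 1., 2., 2., 3., 3.])
f_b = np.array([9., 9., 8., 8., 7., 7.])
mixed = mix_chunks(f_a, f_b, chunk_size=2, swap_idx=1)
print(mixed)  # → [1. 1. 8. 8. 3. 3.]
```

Classification or retrieval on a subset of attributes is the same idea in reverse: compare only the user-specified chunks and ignore the rest.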
Challenges in Disentangling Independent Factors of Variation
We study the problem of building models that disentangle independent factors
of variation. Such models could be used to encode features that can efficiently
be used for classification and to transfer attributes between different images
in image synthesis. As data we use a weakly labeled training set. Our weak
labels indicate which single factor has changed between two data samples,
although the relative value of the change is unknown. This labeling is of
particular interest as it may be readily available without annotation costs. To
make use of weak labels we introduce an autoencoder model and train it through
constraints on image pairs and triplets. We formally prove that without
additional knowledge there is no guarantee that two images with the same factor
of variation will be mapped to the same feature. We call this issue the
reference ambiguity. Moreover, we show the role of the feature dimensionality
and adversarial training. We demonstrate experimentally that the proposed model
can successfully transfer attributes on several datasets, but show also cases
when the reference ambiguity occurs.
Comment: Submitted to ICLR 201
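The reference ambiguity can be seen in a toy setting (a sketch of the argument, not the paper's proof): two encoders can be equally consistent with every weak label, yet assign different feature values to the same underlying factor. The encoders below are hypothetical stand-ins.

```python
def changed_factor(x, y):
    """Weak label: the index of the single factor that differs
    between two samples (the relative value of the change is unused)."""
    diffs = [i for i in range(len(x)) if x[i] != y[i]]
    assert len(diffs) == 1, "weak label assumes exactly one factor changed"
    return diffs[0]

# Two encoders that both respect every weak label ...
enc_a = lambda v: [c for c in v]    # identity
enc_b = lambda v: [-c for c in v]   # per-factor sign flip

x, y = [1, 5], [1, 7]               # only factor 1 changed
assert changed_factor(x, y) == 1
# ... both report the same changed factor on encoded samples,
assert changed_factor(enc_a(x), enc_a(y)) == changed_factor(enc_b(x), enc_b(y))
# yet they map the same factor value to different features:
assert enc_a(x)[1] != enc_b(x)[1]
```

Since the weak labels cannot distinguish `enc_a` from `enc_b`, there is no guarantee that two images sharing a factor value are mapped to the same feature.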
FaceShop: Deep Sketch-based Face Image Editing
We present a novel system for sketch-based face image editing, enabling users
to edit images intuitively by sketching a few strokes on a region of interest.
Our interface features tools to express a desired image manipulation by
providing both geometry and color constraints as user-drawn strokes. As an
alternative to direct user input, our proposed system naturally supports a
copy-paste mode, which allows users to edit a given image region using parts
of another exemplar image without any hand-drawn sketching at all. The
proposed interface runs in real-time and facilitates an interactive and
iterative workflow to quickly express the intended edits. Our system is based
on a novel sketch domain and a convolutional neural network trained end-to-end
to automatically learn to render image regions corresponding to the input
strokes. To achieve high quality and semantically consistent results we train
our neural network on two simultaneous tasks, namely image completion and image
translation. To the best of our knowledge, we are the first to combine these
two tasks in a unified framework for interactive image editing. Our results
show that the proposed sketch domain, network architecture, and training
procedure generalize well to real user input and enable high quality synthesis
results without additional post-processing.
Comment: 13 pages, 20 figures
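A system of this kind conditions the network on the masked image plus the user's geometry and color constraints. As a rough sketch of how such conditioning input might be assembled (the channel layout and function name here are assumptions for illustration, not the paper's exact format):

```python
import numpy as np

def build_network_input(image, mask, sketch, color_strokes):
    """Assemble a conditioning tensor for an editing network: the region
    under `mask` is erased, and the network must re-render it from the
    sketch (geometry) and color-stroke (appearance) constraints."""
    masked = image * (1 - mask)  # erase the region of interest
    return np.concatenate([masked, mask, sketch, color_strokes], axis=-1)

h = w = 4
image = np.ones((h, w, 3))                 # toy RGB image
mask = np.zeros((h, w, 1)); mask[1:3, 1:3] = 1.0   # region to edit
sketch = np.zeros((h, w, 1))               # user-drawn geometry strokes
colors = np.zeros((h, w, 3))               # user-drawn color strokes
inp = build_network_input(image, mask, sketch, colors)
print(inp.shape)  # → (4, 4, 8)
```

Training the network jointly on image completion (fill the erased region) and image translation (respect the strokes) is what the abstract's two simultaneous tasks refer to.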
Biological Activities of Chinese Propolis and Brazilian Propolis on Streptozotocin-Induced Type 1 Diabetes Mellitus in Rats
Propolis is a bee-collected natural product and has been proven to have various bioactivities. This study tested the effects of Chinese propolis and Brazilian propolis on streptozotocin-induced type 1 diabetes mellitus in Sprague-Dawley rats. The results showed that Chinese propolis and Brazilian propolis significantly inhibited body weight loss and blood glucose increase in diabetic rats. In addition, Chinese propolis-treated rats showed an 8.4% reduction of glycated hemoglobin levels compared with untreated diabetic rats. Measurement of blood lipid metabolism showed dyslipidemia in diabetic rats, and Chinese propolis helped to reduce the total cholesterol level by 16.6%. Moreover, oxidative stress in blood, liver and kidney was improved to various degrees by both Chinese propolis and Brazilian propolis. An apparent reduction in levels of alanine transaminase, aspartate transaminase, blood urea nitrogen and urine microalbuminuria-excretion rate demonstrated the beneficial effects of propolis on hepatorenal function. All these results suggested that Chinese propolis and Brazilian propolis can alleviate symptoms of diabetes mellitus in rats, and these effects may partially be due to their antioxidant ability.
Not All Negatives Are Worth Attending to: Meta-Bootstrapping Negative Sampling Framework for Link Prediction
The rapid development of graph neural networks (GNNs) has spurred progress in
link prediction, which achieves promising performance across various
applications. Unfortunately, a comprehensive analysis reveals that current link
predictors with dynamic negative samplers (DNSs) suffer from a migration
phenomenon between "easy" and "hard" samples, which runs counter to the DNS
preference for choosing "hard" negatives and thus severely hinders their
capability. To address this, we propose the MeBNS framework, a general plugin
that can improve current negative-sampling-based link predictors. In
particular, we devise a Meta-learning Supported Teacher-student GNN (MST-GNN)
that is not only built upon a teacher-student architecture to alleviate the
migration between "easy" and "hard" samples but is also equipped with a
meta-learning-based sample re-weighting module that helps the student GNN
distinguish "hard" samples in a fine-grained manner. To effectively guide the
learning of the MST-GNN, we prepare a Structure enhanced Training Data
Generator (STD-Generator) and an Uncertainty based Meta Data Collector
(UMD-Collector) to support the teacher and student GNNs, respectively.
Extensive experiments show that MeBNS achieves remarkable performance across
six link prediction benchmark datasets.
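The dynamic negative sampling that MeBNS builds on can be sketched in a few lines: score a pool of candidate negatives with the current model and keep the highest-scoring ("hardest") ones. This is a hypothetical minimal version of the base sampler only; the teacher-student training and meta-learned re-weighting described above sit on top of it.

```python
import numpy as np

def hard_negative_sample(scores, k):
    """Dynamic negative sampling: return the indices of the k candidate
    negatives the current model scores highest, i.e. the "hardest" ones.
    (The migration phenomenon arises because which samples count as
    "hard" shifts as the model trains.)"""
    return np.argsort(scores)[::-1][:k]

# toy model scores for 5 candidate negative edges
cand_scores = np.array([0.1, 0.9, 0.4, 0.8, 0.2])
hard = hard_negative_sample(cand_scores, k=2)
print(hard.tolist())  # → [1, 3]
```

A re-weighting module would then assign a per-sample loss weight to these negatives rather than treating them uniformly.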
- …