Deep Reflectance Maps
Undoing the image formation process and thereby decomposing appearance into
its intrinsic properties is a challenging task due to the under-constrained
nature of this inverse problem. While significant progress has been made on
inferring shape, materials and illumination from images only, progress in an
unconstrained setting is still limited. We propose a convolutional neural
architecture to estimate reflectance maps of specular materials in natural
lighting conditions. We achieve this in an end-to-end learning formulation that
directly predicts a reflectance map from the image itself. We show how to
improve estimates by exploiting additional supervision in an indirect scheme
that first predicts surface orientation and then predicts the reflectance
map via learning-based sparse-data interpolation.
In order to analyze performance on this difficult task, we propose a new
challenge of Specular MAterials on SHapes with complex IllumiNation (SMASHINg)
using both synthetic and real images. Furthermore, we show the application of
our method to a range of image-based editing tasks on real images.
Comment: project page: http://homes.esat.kuleuven.be/~krematas/DRM
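The indirect scheme above can be illustrated with a toy sketch: given per-pixel surface normals and observed colors, scatter the observations into a reflectance map indexed by normal direction, then fill the unobserved cells. The nearest-neighbour fill below is a deliberately simple stand-in for the paper's learning-based sparse-data interpolation, and all names are illustrative.

```python
import numpy as np

def build_reflectance_map(normals, colors, grid=32):
    """Scatter observed colors into a (grid x grid) reflectance map indexed
    by the x/y components of the unit surface normal, then fill empty cells
    by nearest-neighbour interpolation (a stand-in for the paper's
    learning-based sparse-data interpolation)."""
    rmap = np.zeros((grid, grid, 3))
    count = np.zeros((grid, grid))
    # Map normal components (nx, ny) in [-1, 1] to integer grid coordinates.
    ix = np.clip(((normals[:, 0] + 1) / 2 * (grid - 1)).astype(int), 0, grid - 1)
    iy = np.clip(((normals[:, 1] + 1) / 2 * (grid - 1)).astype(int), 0, grid - 1)
    for x, y, c in zip(ix, iy, colors):
        rmap[y, x] += c
        count[y, x] += 1
    filled = count > 0
    rmap[filled] /= count[filled][:, None]
    # Nearest-neighbour fill of empty cells from the sparse observations.
    ys, xs = np.nonzero(filled)
    ey, ex = np.nonzero(~filled)
    if len(ys) and len(ey):
        d = (ey[:, None] - ys[None, :]) ** 2 + (ex[:, None] - xs[None, :]) ** 2
        rmap[ey, ex] = rmap[ys[d.argmin(axis=1)], xs[d.argmin(axis=1)]]
    return rmap

# Toy example: two observed normals, one red and one blue.
normals = np.array([[-0.9, 0.0, 0.4], [0.9, 0.0, 0.4]])
colors = np.array([[1.0, 0.0, 0.0], [0.0, 0.0, 1.0]])
rm = build_reflectance_map(normals, colors, grid=8)
```

Cells on the left half of the map inherit the red observation and cells on the right inherit the blue one, mimicking how a dense reflectance map is recovered from sparse per-pixel evidence.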
Causal graphical models in systems genetics: A unified framework for joint inference of causal network and genetic architecture for correlated phenotypes
Causal inference approaches in systems genetics exploit quantitative trait
loci (QTL) genotypes to infer causal relationships among phenotypes. The
genetic architecture of each phenotype may be complex, and poorly estimated
genetic architectures may compromise the inference of causal relationships
among phenotypes. Existing methods assume QTLs are known or inferred without
regard to the phenotype network structure. In this paper we develop a
QTL-driven phenotype network method (QTLnet) to jointly infer a causal
phenotype network and associated genetic architecture for sets of correlated
phenotypes. Randomization of alleles during meiosis and the unidirectional
influence of genotype on phenotype allow the inference of QTLs causal to
phenotypes. Causal relationships among phenotypes can be inferred using these
QTL nodes, enabling us to distinguish among phenotype networks that would
otherwise be distribution equivalent. We jointly model phenotypes and QTLs
using homogeneous conditional Gaussian regression models, and we derive a
graphical criterion for distribution equivalence. We validate the QTLnet
approach in a simulation study. Finally, we illustrate with simulated data and
a real example how QTLnet can be used to infer both direct and indirect effects
of QTLs and phenotypes that co-map to a genomic region.
Comment: Published at http://dx.doi.org/10.1214/09-AOAS288 in the Annals of
Applied Statistics (http://www.imstat.org/aoas/) by the Institute of
Mathematical Statistics (http://www.imstat.org)
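A minimal simulation shows why a QTL node breaks distribution equivalence between phenotype networks. Under the true chain Q → P1 → P2, the genotype is independent of P2 given P1, but not independent of P1 given P2, so the two orientations are distinguishable. This is an illustrative toy using partial correlations, not the paper's homogeneous conditional Gaussian regression machinery.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 20000
q = rng.integers(0, 2, n).astype(float)   # QTL genotype (randomized alleles)
p1 = 2.0 * q + rng.normal(size=n)         # genotype -> phenotype 1
p2 = 1.5 * p1 + rng.normal(size=n)        # phenotype 1 -> phenotype 2

def partial_corr(x, y, z):
    """Correlation of x and y after regressing each on z (with intercept)."""
    Z = np.column_stack([np.ones_like(z), z])
    rx = x - Z @ np.linalg.lstsq(Z, x, rcond=None)[0]
    ry = y - Z @ np.linalg.lstsq(Z, y, rcond=None)[0]
    return np.corrcoef(rx, ry)[0, 1]

# Under the true model Q -> P1 -> P2, Q is independent of P2 given P1 ...
c_true = partial_corr(q, p2, p1)
# ... but Q remains dependent on P1 given P2, ruling out P2 -> P1.
c_rev = partial_corr(q, p1, p2)
```

The first partial correlation is near zero while the second is clearly not, which is the asymmetry QTLnet exploits to orient edges among otherwise distribution-equivalent phenotype networks.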
A Genetic Programming Approach to Designing Convolutional Neural Network Architectures
The convolutional neural network (CNN), which is one of the deep learning
models, has seen much success in a variety of computer vision tasks. However,
designing CNN architectures still requires expert knowledge and a lot of trial
and error. In this paper, we attempt to automatically construct CNN
architectures for an image classification task based on Cartesian genetic
programming (CGP). In our method, we adopt highly functional modules, such as
convolutional blocks and tensor concatenation, as the node functions in CGP.
The CNN structure and connectivity represented by the CGP encoding method are
optimized to maximize the validation accuracy. To evaluate the proposed method,
we constructed a CNN architecture for the image classification task with the
CIFAR-10 dataset. The experimental results show that the proposed method can
automatically find a CNN architecture that is competitive with
state-of-the-art models.
Comment: This is the revised version of the GECCO 2017 paper. The code of our
method is available at https://github.com/sg-nm/cgp-cn
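The CGP encoding described above can be sketched in a few lines: a genome is a list of integer genes (function id plus input connections), and only the nodes reachable from the output are "active" and decoded into layers. The function set here (ConvBlock, MaxPool, Concat) is a hypothetical stand-in for the paper's highly functional modules.

```python
# Each node gene is (function_id, input_a, input_b); node 0 is the image input.
# Only nodes reachable from the output gene are "active" -- a key CGP property.
FUNCTIONS = {0: "ConvBlock(32)", 1: "ConvBlock(64)", 2: "MaxPool", 3: "Concat"}
ARITY = {0: 1, 1: 1, 2: 1, 3: 2}

def decode(genome, output):
    """Walk back from the output gene, collecting active nodes, then emit
    the decoded layers in topological (node-index) order."""
    active, stack = set(), [output]
    while stack:
        node = stack.pop()
        if node == 0 or node in active:
            continue   # node 0 is the raw input; skip already-visited nodes
        active.add(node)
        f, a, b = genome[node - 1]
        stack.extend((a, b)[: ARITY[f]])
    layers = []
    for node in sorted(active):
        f, a, b = genome[node - 1]
        layers.append((node, FUNCTIONS[f], (a, b)[: ARITY[f]]))
    return layers

# Genome with 4 candidate nodes; node 3 is never referenced, so it is inactive.
genome = [(0, 0, 0),   # node 1: ConvBlock(32)(input)
          (2, 1, 0),   # node 2: MaxPool(node 1)
          (1, 1, 0),   # node 3: ConvBlock(64)(node 1)  -- inactive
          (3, 2, 1)]   # node 4: Concat(node 2, node 1)
arch = decode(genome, output=4)
```

Evolution then mutates the integer genes; mutations in inactive nodes are neutral, which is what makes the CGP representation well suited to architecture search.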
BodyNet: Volumetric Inference of 3D Human Body Shapes
Human shape estimation is an important task for video editing, animation, and
the fashion industry. Predicting 3D human body shape from natural images, however,
is highly challenging due to factors such as variation in human bodies,
clothing and viewpoint. Prior methods addressing this problem typically attempt
to fit parametric body models with certain priors on pose and shape. In this
work we argue for an alternative representation and propose BodyNet, a neural
network for direct inference of volumetric body shape from a single image.
BodyNet is an end-to-end trainable network that benefits from (i) a volumetric
3D loss, (ii) a multi-view re-projection loss, and (iii) intermediate
supervision of 2D pose, 2D body part segmentation, and 3D pose. Each of them
results in performance improvement as demonstrated by our experiments. To
evaluate the method, we fit the SMPL model to our network output and show
state-of-the-art results on the SURREAL and Unite the People datasets,
outperforming recent approaches. Besides achieving state-of-the-art
performance, our method also enables volumetric body-part segmentation.
Comment: Appears in: European Conference on Computer Vision 2018 (ECCV 2018).
27 pages
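The multi-view re-projection idea can be sketched with orthographic projections of a binary voxel grid: a silhouette pixel is filled if any voxel along the viewing axis is occupied, and the loss compares projected and target silhouettes. This is a simplified stand-in for BodyNet's differentiable re-projection loss, with illustrative names throughout.

```python
import numpy as np

def reproject(voxels, axis):
    """Orthographic silhouette: a pixel is filled if any voxel along
    the viewing axis is occupied (max-projection)."""
    return voxels.max(axis=axis)

def reprojection_loss(voxels, target_silhouettes):
    """Mean squared error between projected and target silhouettes,
    summed over the three viewing axes (e.g. front/side/top)."""
    return sum(np.mean((reproject(voxels, ax) - sil) ** 2)
               for ax, sil in enumerate(target_silhouettes))

# Toy 4^3 grid with a 2x2x2 occupied cube in one corner.
vox = np.zeros((4, 4, 4))
vox[:2, :2, :2] = 1.0
targets = [reproject(vox, ax) for ax in range(3)]
loss = reprojection_loss(vox, targets)  # matches its own projections
```

Occupancy that violates any target silhouette raises the loss, which is how the 2D views constrain the volumetric shape during training.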