Channel-Recurrent Autoencoding for Image Modeling
Despite recent successes in synthesizing faces and bedrooms, existing
generative models struggle to capture more complex image types, potentially due
to the oversimplification of their latent space constructions. To tackle this
issue, building on Variational Autoencoders (VAEs), we integrate recurrent
connections across channels into both the inference and generation steps,
allowing high-level features to be captured in a global-to-local,
coarse-to-fine manner. Combined with an adversarial loss, our
channel-recurrent VAE-GAN (crVAE-GAN) outperforms VAE-GAN in generating a
diverse spectrum of high-resolution images while maintaining the same level of
computational efficiency.
Our model produces interpretable and expressive latent representations to
benefit downstream tasks such as image completion. Moreover, we propose two
novel regularizations, namely the KL objective weighting scheme over time steps
and mutual information maximization between transformed latent variables and
the outputs, to enhance training.
Comment: Code: https://github.com/WendyShang/crVAE. Supplementary Materials:
http://www-personal.umich.edu/~shangw/wacv18_supplementary_material.pd
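
To make the channel-recurrent construction concrete, the following is a minimal PyTorch sketch, not the authors' implementation (see the repo above): the encoder feature map is split into channel blocks, and an LSTM runs across the blocks so each latent group is inferred conditioned on the earlier, coarser ones. The class name, layer sizes, and block size are illustrative assumptions.

import torch
import torch.nn as nn

class ChannelRecurrentLatent(nn.Module):
    # Hypothetical sketch of recurrence across channel blocks of a VAE latent.
    def __init__(self, channels=256, block=32, spatial=4, z_dim=16):
        super().__init__()
        assert channels % block == 0
        self.block = block
        feat = block * spatial * spatial       # flattened size of one channel block
        self.rnn = nn.LSTM(feat, 256, batch_first=True)
        self.to_mu = nn.Linear(256, z_dim)     # per-step posterior mean
        self.to_logvar = nn.Linear(256, z_dim) # per-step posterior log-variance

    def forward(self, h):
        # h: (B, C, H, W) encoder feature map; treat channel blocks as a sequence
        B, C, H, W = h.shape
        steps = h.reshape(B, C // self.block, self.block * H * W)
        out, _ = self.rnn(steps)               # each block conditions on the preceding ones
        mu, logvar = self.to_mu(out), self.to_logvar(out)
        z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()  # reparameterization trick
        return z, mu, logvar                   # (B, T, z_dim) latent sequence

The per-step KL weighting mentioned in the abstract would then attach a different weight to the KL term of each of the T channel-block steps.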
A recurrent neural network for classification of unevenly sampled variable stars
Astronomical surveys of celestial sources produce streams of noisy time
series measuring flux versus time ("light curves"). Unlike in many other
physical domains, however, large (and source-specific) temporal gaps in data
arise naturally due to intranight cadence choices as well as diurnal and
seasonal constraints. With nightly observations of millions of variable stars
and transients from upcoming surveys, efficient and accurate discovery and
classification techniques on noisy, irregularly sampled data must be employed
with minimal human-in-the-loop involvement. Machine learning for inference
tasks on such data traditionally requires the laborious hand-coding of
domain-specific numerical summaries of raw data ("features"). Here we present a
novel unsupervised autoencoding recurrent neural network (RNN) that makes
explicit use of sampling times and known heteroskedastic noise properties. When
trained on optical variable star catalogs, this network produces supervised
classification models that rival other best-in-class approaches. We find that
autoencoded features learned on one time-domain survey perform nearly as well
when applied to another survey. These networks can continue to learn from new
unlabeled observations and may be used in other unsupervised tasks such as
forecasting and anomaly detection.
Comment: 23 pages, 14 figures. The published version is at Nature Astronomy
(https://www.nature.com/articles/s41550-017-0321-z). Source code for models,
experiments, and figures at
https://github.com/bnaul/IrregularTimeSeriesAutoencoderPaper (Zenodo Code
DOI: 10.5281/zenodo.1045560)
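
As a rough illustration of the two inputs the abstract emphasizes, sampling times and known per-point noise, here is a hedged PyTorch sketch of an autoencoding GRU over light curves; the published architecture and training details differ (see the repo above), and all names and sizes here are placeholders.

import torch
import torch.nn as nn

class LightCurveAutoencoder(nn.Module):
    # Hypothetical sketch: an RNN autoencoder that sees the irregular time gaps.
    def __init__(self, hidden=64, embed=16):
        super().__init__()
        self.encoder = nn.GRU(2, hidden, batch_first=True)          # input: (delta_t, flux)
        self.to_embed = nn.Linear(hidden, embed)
        self.decoder = nn.GRU(embed + 1, hidden, batch_first=True)  # embedding + delta_t
        self.out = nn.Linear(hidden, 1)

    def forward(self, t, flux):
        dt = torch.diff(t, prepend=t[:, :1], dim=1)       # irregular sampling, made explicit
        x = torch.stack([dt, flux], dim=-1)               # (B, T, 2)
        _, h = self.encoder(x)
        z = self.to_embed(h[-1])                          # fixed-length feature vector
        z_seq = z.unsqueeze(1).expand(-1, t.size(1), -1)  # broadcast over time steps
        dec, _ = self.decoder(torch.cat([z_seq, dt.unsqueeze(-1)], dim=-1))
        return self.out(dec).squeeze(-1), z

def weighted_mse(pred, flux, sigma):
    # Heteroskedastic reconstruction loss: noisier points count less.
    return (((pred - flux) / sigma) ** 2).mean()

The fixed-length embedding z is the kind of learned feature vector that would replace hand-coded features in a downstream classifier.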
FlowFormer: A Transformer Architecture and Its Masked Cost Volume Autoencoding for Optical Flow
This paper introduces a novel transformer-based network architecture,
FlowFormer, along with the Masked Cost Volume AutoEncoding (MCVA) for
pretraining it to tackle the problem of optical flow estimation. FlowFormer
tokenizes the 4D cost-volume built from the source-target image pair and
iteratively refines flow estimation with a cost-volume encoder-decoder
architecture. The cost-volume encoder derives a cost memory with
alternate-group transformer (AGT) layers in a latent space, and the decoder
recurrently decodes flow from the cost memory with dynamic positional cost
queries. On the Sintel benchmark, the FlowFormer architecture achieves 1.16 and
2.09 average end-point-error (AEPE) on the clean and final passes, 16.5% and
15.5% error reductions from GMA (1.388 and 2.47). MCVA enhances FlowFormer
by pretraining the cost-volume encoder with a masked autoencoding scheme, which
further unleashes the capability of FlowFormer with unlabeled data. This is
especially critical in optical flow estimation because ground truth flows are
more expensive to acquire than labels in other vision tasks. MCVA improves
FlowFormer across the board, and FlowFormer+MCVA ranks 1st among all published
methods on both the Sintel and KITTI-2015 benchmarks and achieves the best
generalization performance. Specifically, FlowFormer+MCVA achieves 1.07 and
1.94 AEPE on the Sintel benchmark, 7.76% and 7.18% error reductions from
FlowFormer.
Comment: arXiv admin note: substantial text overlap with arXiv:2203.16194,
arXiv:2303.0123
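
The masked-pretraining recipe can be sketched as follows; this is an assumed, simplified illustration of masked cost-volume autoencoding in PyTorch, not FlowFormer's actual tokenization, masking ratio, or encoder (the real model uses alternate-group transformer layers).

import torch
import torch.nn as nn

def cost_volume(f1, f2):
    # f1, f2: (B, C, H, W) source/target features -> (B, H*W, H*W) all-pairs costs
    B, C, H, W = f1.shape
    a = f1.flatten(2).transpose(1, 2)   # (B, H*W, C)
    b = f2.flatten(2)                   # (B, C, H*W)
    return torch.bmm(a, b) / C ** 0.5

class MaskedCostAutoencoder(nn.Module):
    # Hypothetical sketch: mask cost-volume tokens, reconstruct them from context.
    def __init__(self, token_dim, mask_ratio=0.5):
        super().__init__()
        self.mask_ratio = mask_ratio
        self.mask_token = nn.Parameter(torch.zeros(token_dim))
        layer = nn.TransformerEncoderLayer(token_dim, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        self.head = nn.Linear(token_dim, token_dim)   # predicts the original costs

    def forward(self, tokens):
        # tokens: (B, N, D) rows of a cost volume treated as tokens
        mask = torch.rand(tokens.shape[:2], device=tokens.device) < self.mask_ratio
        x = torch.where(mask.unsqueeze(-1), self.mask_token.expand_as(tokens), tokens)
        recon = self.head(self.encoder(x))
        return ((recon - tokens) ** 2)[mask].mean()   # loss only on masked tokens

Here tokens would come from cost_volume(f1, f2), so token_dim equals the number of target positions H*W (chosen divisible by nhead in this toy setup); the pretrained encoder would then be reused as the flow network's cost-volume encoder.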