Recycle-GAN: Unsupervised Video Retargeting
We introduce a data-driven approach for unsupervised video retargeting that
translates content from one domain to another while preserving the style native
to a domain, i.e., if contents of John Oliver's speech were to be transferred
to Stephen Colbert, then the generated content/speech should be in Stephen
Colbert's style. Our approach combines both spatial and temporal information
along with adversarial losses for content translation and style preservation.
In this work, we first study the advantages of using spatiotemporal constraints
over spatial constraints for effective retargeting. We then demonstrate the
proposed approach on problems where information in both space and time
matters, such as face-to-face translation, flower-to-flower translation, wind
and cloud synthesis, and sunrise and sunset.

Comment: ECCV 2018; please refer to the project webpage for videos -
http://www.cs.cmu.edu/~aayushb/Recycle-GA
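The spatiotemporal idea described above can be illustrated with a toy "recycle" loss: translate a frame to the target domain, predict the next frame there with a temporal model, translate back, and penalize the mismatch with the true next frame. This is only a minimal sketch; `G_xy`, `G_yx`, and `P_y` below are illustrative stand-ins, not the paper's learned networks.

```python
import numpy as np

# Hypothetical stand-ins for the learned networks (illustrative only):
# G_xy translates domain X -> Y, G_yx translates Y -> X,
# P_y predicts the next frame within domain Y.
G_xy = lambda x: 2.0 * x          # toy "translator" X -> Y
G_yx = lambda y: 0.5 * y          # toy inverse translator Y -> X
P_y = lambda y: y                 # toy temporal predictor in Y

def recycle_loss(frames):
    """Recycle loss: translate x_t to Y, predict the next Y-frame,
    translate back, and compare with the true next X-frame."""
    total = 0.0
    for t in range(len(frames) - 1):
        y_t = G_xy(frames[t])              # x_t mapped into Y
        y_next_pred = P_y(y_t)             # temporal prediction in Y
        x_next_rec = G_yx(y_next_pred)     # mapped back into X
        total += np.mean((frames[t + 1] - x_next_rec) ** 2)
    return total / (len(frames) - 1)

static = [np.ones((4, 4))] * 3   # a constant toy sequence
print(recycle_loss(static))      # 0.0: the toy maps invert each other
```

In the full method this term is combined with adversarial losses in each domain; the key point is that the supervision comes from temporal ordering rather than paired data.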
Adversarial nets with perceptual losses for text-to-image synthesis
Recent approaches in generative adversarial networks (GANs) can automatically
synthesize realistic images from descriptive text. Despite the overall fair
quality, the generated images often expose visible flaws that lack structural
definition for an object of interest. In this paper, we aim to extend the
state of the art for GAN-based text-to-image synthesis by improving the perceptual quality
of generated images. Differentiated from previous work, our synthetic image
generator optimizes on perceptual loss functions that measure pixel, feature
activation, and texture differences against a natural image. We present
visually more compelling synthetic images of birds and flowers generated from
text descriptions in comparison to some of the most prominent existing work.
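The three perceptual terms named above (pixel, feature activation, and texture differences) can be sketched as a single weighted objective. The `feat_fn` below stands in for a pretrained feature extractor, and the Gram matrix captures texture statistics; all names and weights are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def gram(feat):
    """Gram matrix of a (C, H*W) feature map; captures texture statistics."""
    c, n = feat.shape
    return feat @ feat.T / (c * n)

def perceptual_loss(gen, real, feat_fn, w_pix=1.0, w_feat=1.0, w_tex=1.0):
    """Weighted sum of pixel, feature-activation, and texture (Gram)
    differences between a generated image and a natural image."""
    f_gen, f_real = feat_fn(gen), feat_fn(real)
    pix = np.mean((gen - real) ** 2)                     # pixel difference
    feat = np.mean((f_gen - f_real) ** 2)                # activation difference
    tex = np.mean((gram(f_gen) - gram(f_real)) ** 2)     # texture difference
    return w_pix * pix + w_feat * feat + w_tex * tex

# Toy "feature extractor": flatten the image into a 1-channel feature map.
feat_fn = lambda img: img.reshape(1, -1)
img = np.random.rand(8, 8)
print(perceptual_loss(img, img, feat_fn))   # identical images -> 0.0
```

In practice the feature extractor would be a pretrained network, and this loss would be added to the adversarial objective rather than replacing it.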
Fader Networks: Manipulating Images by Sliding Attributes
This paper introduces a new encoder-decoder architecture that is trained to
reconstruct images by disentangling the salient information of the image and
the values of attributes directly in the latent space. As a result, after
training, our model can generate different realistic versions of an input image
by varying the attribute values. By using continuous attribute values, we can
choose how much a specific attribute is perceivable in the generated image.
This property could allow for applications where users can modify an image
using sliding knobs, like faders on a mixing console, to change the facial
expression of a portrait, or to update the color of some objects. Compared to
the state of the art, which mostly relies on training adversarial networks in
pixel space by altering attribute values at train time, our approach results in
much simpler training schemes and nicely scales to multiple attributes. We
present evidence that our model can significantly change the perceived value of
the attributes while preserving the naturalness of images.

Comment: NIPS 201
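The "fader" interaction described above amounts to encoding an image once into an attribute-free latent code and then decoding it with different continuous attribute values. The encoder/decoder pair below is a toy illustration of that interface, not the paper's networks.

```python
import numpy as np

# Hypothetical encoder/decoder pair (illustrative, not the paper's models):
# the encoder maps an image to a latent code with attribute info removed;
# the decoder reconstructs from (latent code, attribute value).
encode = lambda img: img - img.mean()    # toy attribute-free latent code
decode = lambda z, a: z + a              # toy attribute-conditioned decoder

def slide_attribute(img, values):
    """Generate variants of `img` by decoding a single latent code with
    different continuous attribute values (the 'fader' knob)."""
    z = encode(img)
    return [decode(z, a) for a in values]

img = np.full((2, 2), 0.5)
variants = slide_attribute(img, [0.0, 0.5, 1.0])
print([v.mean() for v in variants])   # the attribute value shifts the output
```

The design point is that the image is encoded once; only the attribute input changes, which is what makes continuous, multi-attribute sliders cheap at inference time.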
Time-series Doppler images and surface differential rotation of the effectively-single rapidly-rotating K-giant KU Pegasi
According to most stellar dynamo theories, differential rotation (DR) plays a
crucial role for the generation of toroidal magnetic fields. Numerical models
predict surface differential rotation to be anti-solar for rapidly-rotating
giant stars, i.e., their surface angular velocity could increase with stellar
latitude. However, surface differential rotation has been derived only for a
handful of individual giant stars to date.
The spotted surface of the K-giant KU Pegasi is investigated in order to
detect its time evolution and quantify surface differential rotation.
We present altogether 11 Doppler images from spectroscopic data collected
with the robotic telescope STELLA between 2006 and 2011. All maps are obtained
with the surface reconstruction code iMap. Differential rotation is extracted
from these images by detecting systematic (latitude-dependent) spot
displacements. We apply a cross-correlation technique to find the best
differential rotation law.
The surface of KU Peg shows cool spots at all latitudes and one persistent
warm spot at high latitude. A small cool polar spot exists for most but not all
of the epochs. Re-identification of spots in at least two consecutive maps is
mostly possible only at mid and high latitudes and thus restricts the
differential-rotation determination mainly to these latitudes. Our
cross-correlation analysis reveals solar-like differential rotation with a
surface shear approximately five times weaker than on the Sun. We also derive
a more accurate and consistent set of stellar parameters for KU Peg, including
a low Li abundance, roughly ten times less than solar.

Comment: 13 pages, 12 figures, accepted for publication in A&
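The cross-correlation technique mentioned above can be sketched as follows: for each latitude strip of two Doppler maps taken at different epochs, find the longitudinal shift that best aligns the strips; latitude-dependent shifts then trace the differential rotation law. The toy maps and pixel shifts below are assumed for illustration.

```python
import numpy as np

def strip_shift(strip1, strip2):
    """Longitudinal shift (in pixels) that best aligns two latitude
    strips, found via circular cross-correlation."""
    cc = [np.dot(strip1, np.roll(strip2, -s)) for s in range(len(strip1))]
    return int(np.argmax(cc))

# Toy maps: one spot per latitude strip, displaced more at low latitude
# (solar-like differential rotation: the equator rotates faster).
n_lon = 36
map1 = np.zeros((3, n_lon))
map2 = np.zeros((3, n_lon))
shifts_true = [6, 4, 2]                  # assumed pixel shifts per latitude
for i, s in enumerate(shifts_true):
    map1[i, 10] = 1.0
    map2[i, (10 + s) % n_lon] = 1.0

shifts = [strip_shift(map1[i], map2[i]) for i in range(3)]
print(shifts)   # recovers the latitude-dependent displacements
```

In the actual analysis, these per-latitude shifts would be fit with a solar-like law of the form Ω(b) = Ω_eq − ΔΩ sin²(b) to extract the surface shear.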
Making history: intentional capture of future memories
'Lifelogging' technology makes it possible to amass digital data about every aspect of our everyday lives. Instead of focusing on such technical possibilities, here we investigate the way people compose long-term mnemonic representations of their lives. We asked 10 families to create a time capsule, a collection of objects used to trigger remembering in the distant future. Our results show that, contrary to the lifelogging view, people are less interested in exhaustively digitally recording their past than in reconstructing it from carefully selected cues that are often physical objects. Time capsules were highly expressive and personal; many objects were made explicitly for inclusion, yet with little object annotation. We use these findings to propose principles for designing technology that supports the active reconstruction of our future past.
Learning Temporal Transformations From Time-Lapse Videos
Based on life-long observations of physical, chemical, and biological phenomena
in the natural world, humans can often easily picture in their minds what an
object will look like in the future. But, what about computers? In this paper,
we learn computational models of object transformations from time-lapse videos.
In particular, we explore the use of generative models to create depictions of
objects at future times. These models explore several different prediction
tasks: generating a future state given a single depiction of an object,
generating a future state given two depictions of an object at different times,
and generating future states recursively in a recurrent framework. We provide
both qualitative and quantitative evaluations of the generated results, and
also conduct a human evaluation to compare variations of our models.

Comment: ECCV201
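The recurrent prediction task described above can be sketched in a few lines: a learned transition model maps the current object state to the next one, and recursive application of that model yields a sequence of future states. The linear map `W` below is an illustrative stand-in for the learned generator, not the paper's model.

```python
import numpy as np

# Toy transition model: a fixed linear step standing in for a learned
# network that maps the current object state to its next state.
W = np.array([[0.9, 0.1],
              [0.0, 1.0]])

def predict_recursively(state, n_steps):
    """Generate future states recursively: each prediction is fed back
    in as input, as in the recurrent variant described above."""
    states = [state]
    for _ in range(n_steps):
        states.append(W @ states[-1])   # next state from previous prediction
    return states

future = predict_recursively(np.array([1.0, 1.0]), 3)
print(len(future))   # the initial state plus 3 predicted states
```

The single-input and two-input variants in the abstract differ only in how the first state is formed; the recursive loop is the same.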