Recycle-GAN: Unsupervised Video Retargeting
We introduce a data-driven approach for unsupervised video retargeting that
translates content from one domain to another while preserving the style native
to a domain, i.e., if contents of John Oliver's speech were to be transferred
to Stephen Colbert, then the generated content/speech should be in Stephen
Colbert's style. Our approach combines both spatial and temporal information
along with adversarial losses for content translation and style preservation.
In this work, we first study the advantages of using spatiotemporal constraints
over spatial constraints for effective retargeting. We then demonstrate the
proposed approach for the problems where information in both space and time
matters such as face-to-face translation, flower-to-flower, wind and cloud
synthesis, sunrise and sunset.
Comment: ECCV 2018; Please refer to project webpage for videos -
http://www.cs.cmu.edu/~aayushb/Recycle-GA
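The spatiotemporal constraint described above can be sketched as a "recycle" loss: translate a frame sequence to the target domain, predict the next target-domain frame, map it back, and compare with the true next source frame. The functions `G`, `F`, and `P_Y` below are toy stand-ins (a scale map and a last-frame predictor), not the paper's learned networks; this is a minimal sketch of the loss structure only, under those assumptions.

```python
import numpy as np

# Hypothetical stand-ins for the learned mappings: G translates domain
# X -> Y, F translates Y -> X, and P_Y predicts the next frame in Y from
# the previous ones. Toy choices here, purely for illustration.
def G(x):
    return 2.0 * x            # toy generator X -> Y

def F(y):
    return 0.5 * y            # toy generator Y -> X

def P_Y(frames):
    return frames[-1]         # toy temporal predictor: repeat last frame

def recycle_loss(xs):
    """Mean L1 recycle loss over a list of consecutive frames x_1..x_T.

    For each t: translate x_1..x_t to Y, predict the next Y frame,
    map it back to X, and compare with the true next frame x_{t+1}.
    """
    loss = 0.0
    for t in range(1, len(xs)):
        ys = [G(x) for x in xs[:t]]
        x_back = F(P_Y(ys))
        loss += np.abs(xs[t] - x_back).mean()
    return loss / (len(xs) - 1)
```

With the toy maps above, a static sequence yields zero loss, since mapping forward and back is the identity and the predictor repeats the last frame.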
Relaxing the Parity Conditions of Asymptotically Flat Gravity
Four-dimensional asymptotically flat spacetimes at spatial infinity are
defined from first principles without imposing parity conditions or
restrictions on the Weyl tensor. The Einstein-Hilbert action is shown to be a
correct variational principle when it is supplemented by an anomalous
counter-term which breaks asymptotic translation, supertranslation and
logarithmic translation invariance. Poincar\'e transformations as well as
supertranslations and logarithmic translations are associated with finite and
conserved charges which represent the asymptotic symmetry group. Lorentz
charges as well as logarithmic translations transform anomalously under a
change of regulator. Lorentz charges are generally non-linear functionals of
the asymptotic fields but reduce to well-known linear expressions when parity
conditions hold. We also define a covariant phase space of asymptotically flat
spacetimes with parity conditions but without restrictions on the Weyl tensor.
In this phase space, the anomaly plays classically no dynamical role.
Supertranslations are pure gauge and the asymptotic symmetry group is the
expected Poincar\'e group.
Comment: Four equations corrected. Two references added.
Computing motion in the primate's visual system
Computing motion on the basis of the time-varying image intensity is a difficult problem for both artificial and biological vision systems. We will show how one well-known gradient-based computer algorithm for estimating visual motion can be implemented within the primate's visual system. This relaxation algorithm computes the optical flow field by minimizing a variational functional of a form commonly encountered in early vision, and is performed in two steps. In the first stage, local motion is computed, while in the second stage spatial integration occurs. Neurons in the second stage represent the optical flow field via a population-coding scheme, such that the vector sum of all neurons at each location codes for the direction and magnitude of the velocity at that location. The resulting network maps onto the magnocellular pathway of the primate visual system, in particular onto cells in the primary visual cortex (V1) as well as onto cells in the middle temporal area (MT). Our algorithm mimics a number of psychophysical phenomena and illusions (perception of coherent plaids, motion capture, motion coherence) as well as electrophysiological recordings. Thus, a single unifying principle, "the final optical flow should be as smooth as possible" (except at isolated motion discontinuities), explains a large number of phenomena and links single-cell behavior with perception and computational theory.
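The two-stage relaxation scheme described above (local gradient-based motion, then spatial integration) is in the spirit of classic Horn-Schunck flow. Below is a minimal NumPy sketch under that reading; the parameter names, the simple neighbor-averaging, and the periodic boundary handling via `np.roll` are assumptions for illustration, not the paper's neural implementation.

```python
import numpy as np

def horn_schunck(I1, I2, alpha=1.0, n_iter=200):
    """Estimate optical flow (u, v) between frames I1 and I2 by relaxation,
    minimizing a brightness-constancy + smoothness functional."""
    Ix = np.gradient(I1, axis=1)   # spatial derivative in x
    Iy = np.gradient(I1, axis=0)   # spatial derivative in y
    It = I2 - I1                   # temporal derivative
    u = np.zeros_like(I1)
    v = np.zeros_like(I1)
    for _ in range(n_iter):
        # Stage 2: spatial integration -- average flow over neighbors.
        u_avg = 0.25 * (np.roll(u, 1, 0) + np.roll(u, -1, 0)
                        + np.roll(u, 1, 1) + np.roll(u, -1, 1))
        v_avg = 0.25 * (np.roll(v, 1, 0) + np.roll(v, -1, 0)
                        + np.roll(v, 1, 1) + np.roll(v, -1, 1))
        # Stage 1: local motion from the gradient (brightness-constancy)
        # constraint, pulling the averaged flow toward the data.
        num = Ix * u_avg + Iy * v_avg + It
        den = alpha ** 2 + Ix ** 2 + Iy ** 2
        u = u_avg - Ix * num / den
        v = v_avg - Iy * num / den
    return u, v
```

The smoothness weight `alpha` plays the role of the "as smooth as possible" principle: larger values propagate motion estimates further across the image, filling in regions with weak gradients.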
Optical Flow in Mostly Rigid Scenes
The optical flow of natural scenes is a combination of the motion of the
observer and the independent motion of objects. Existing algorithms typically
focus on either recovering motion and structure under the assumption of a
purely static world or optical flow for general unconstrained scenes. We
combine these approaches in an optical flow algorithm that estimates an
explicit segmentation of moving objects from appearance and physical
constraints. In static regions we take advantage of strong constraints to
jointly estimate the camera motion and the 3D structure of the scene over
multiple frames. This allows us to also regularize the structure instead of the
motion. Our formulation uses a Plane+Parallax framework, which works even under
small baselines, and reduces the motion estimation to a one-dimensional search
problem, resulting in more accurate estimation. In moving regions the flow is
treated as unconstrained, and computed with an existing optical flow method.
The resulting Mostly-Rigid Flow (MR-Flow) method achieves state-of-the-art
results on both the MPI-Sintel and KITTI-2015 benchmarks.
Comment: 15 pages, 10 figures; accepted for publication at CVPR 201
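The final composition step implied by the abstract, selecting the structure-based rigid flow in static regions and an unconstrained flow in independently moving ones, can be sketched as a per-pixel blend. The function name, array layout, and mask convention below are illustrative assumptions, not the MR-Flow implementation.

```python
import numpy as np

def compose_flow(rigid_flow, general_flow, moving_mask):
    """Per-pixel selection between two flow estimates.

    rigid_flow, general_flow: (H, W, 2) arrays of (u, v) vectors,
    from the rigid (camera-motion + structure) model and from an
    unconstrained optical flow method, respectively.
    moving_mask: (H, W) boolean, True where a pixel belongs to an
    independently moving object.
    """
    return np.where(moving_mask[..., None], general_flow, rigid_flow)
```

Broadcasting the mask with `[..., None]` applies the same static/moving decision to both flow components at each pixel.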