Neural View-Interpolation for Sparse Light Field Video
We suggest representing light field (LF) videos as "one-off" neural networks (NNs), i.e., a learned mapping from view-plus-time coordinates to high-resolution color values, trained on sparse views. Initially, this sounds like a bad idea for three main reasons: First, an NN LF will likely have lower quality than a same-sized pixel-basis representation. Second, only a few training exemplars, e.g., nine per frame, are available for sparse LF videos. Third, there is no generalization across LFs, but across view and time instead; consequently, a network needs to be trained for each LF video. Surprisingly, these problems can turn into substantial advantages: Unlike the linear pixel basis, an NN has to come up with a compact, non-linear, i.e., more intelligent, explanation of color, conditioned on the sparse view and time coordinates. As observed for many NNs, however, this representation is interpolatable: if the image output is plausible for the sparse view coordinates, it is plausible for all intermediate, continuous coordinates as well. Our specific network architecture involves a differentiable occlusion-aware warping step, which leads to a compact set of trainable parameters and consequently fast learning and fast execution.
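The core idea above, a continuous coordinate-to-color mapping that can be evaluated at any in-between view or time, can be sketched as a toy, untrained network. All weights, dimensions, and names here are invented for illustration and are not the paper's implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for the paper's "one-off" network: a tiny MLP mapping
# a continuous (view_u, view_v, time, pixel_x, pixel_y) coordinate to RGB.
W1 = rng.normal(0, 1, (5, 32))
b1 = np.zeros(32)
W2 = rng.normal(0, 1, (32, 3))
b2 = np.zeros(3)

def lf_color(view_u, view_v, t, px, py):
    """Evaluate the coordinate network at a continuous view/time coordinate."""
    x = np.array([view_u, view_v, t, px, py], dtype=float)
    h = np.tanh(x @ W1 + b1)                  # non-linear "explanation" of color
    return 1.0 / (1.0 + np.exp(-(h @ W2 + b2)))  # RGB squashed into [0, 1]

# Because the mapping is continuous, coordinates between the sparse training
# views (e.g. view_u = 0.5, halfway between two cameras) are equally valid inputs.
c = lf_color(0.5, 0.5, 0.25, 0.1, 0.9)
```

Training such a network on the nine sparse views per frame would then make the intermediate evaluations the interpolated result.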
Long-wavelength excitations of Higgs condensates
Quite independently of the Goldstone phenomenon, recent lattice data suggest the existence of gapless modes in the spontaneously broken phase of a theory. This result is a direct consequence of the quantum nature of the `Higgs condensate' that cannot be treated as a purely classical c-number field. Comment: 6 pages
Short-time critical dynamics and universality on a two-dimensional triangular lattice
Critical scaling and universality in short-time dynamics for spin models on a
two-dimensional triangular lattice are investigated by using Monte Carlo
simulation. Emphasis is placed on the dynamic evolution from fully ordered initial states to show that universal scaling already exists in the short-time regime in the form of power-law behavior of the magnetization and Binder cumulant. The measured dynamic and static critical exponents confirm explicitly that the Potts models on the triangular lattice and the square lattice belong to the same universality class. Our critical scaling analysis strongly suggests that simulation of the dynamic relaxation can be used to determine the universality class numerically. Comment: LaTeX, 11 pages and 10 figures, to be published in Physica
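The scaling analysis described above amounts to reading an exponent off as the slope of a log-log fit of an observable against time. A minimal sketch on synthetic, noise-free data; the exponent value is assumed for the toy and is not a result from the paper:

```python
import numpy as np

# Illustration of short-time scaling analysis: from an ordered start an
# observable such as the magnetization follows a power law M(t) ~ t^x, so
# the exponent x is the slope of log M versus log t.
t = np.arange(1, 101, dtype=float)
true_exponent = -0.1              # assumed value for the toy data
M = 2.0 * t ** true_exponent      # synthetic, noise-free "measurement"

# Linear fit in log-log space recovers the exponent as the slope.
slope, intercept = np.polyfit(np.log(t), np.log(M), 1)
```

In a real study, `M` would come from Monte Carlo averages over many runs, and error bars on the slope would be estimated from the fit.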
Deep Appearance Maps
We propose a deep representation of appearance, i.e. the relation of color, surface orientation, viewer position, material and illumination. Previous approaches have used deep learning to extract classic appearance representations relating to reflectance model parameters (e.g. Phong) or illumination (e.g. HDR environment maps). We suggest directly representing appearance itself as a network we call a deep appearance map (DAM). This is a 4D generalization over 2D reflectance maps, which hold the view direction fixed. First, we show how a DAM can be learned from images or video frames and later used to synthesize appearance, given new surface orientations and viewer positions. Second, we demonstrate how another network can be used to map from an image or video frames to a DAM network that reproduces this appearance, without using a lengthy optimization such as stochastic gradient descent (learning-to-learn). Finally, we generalize this to an appearance estimation-and-segmentation task, where we map from an image showing multiple materials to multiple networks reproducing their appearance, as well as a per-pixel segmentation.
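A deep appearance map is, abstractly, a learned 4D function from surface orientation and view direction to color. A toy, untrained stand-in (parameterizing each direction by two angles; all weights and names are invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical toy DAM: a small network mapping the 4D input
# (surface orientation, viewer direction), each given by two angles, to RGB.
W1 = rng.normal(0, 0.5, (4, 16)); b1 = np.zeros(16)
W2 = rng.normal(0, 0.5, (16, 3)); b2 = np.zeros(3)

def dam(theta_n, phi_n, theta_v, phi_v):
    """Color for a surface normal (theta_n, phi_n) seen from (theta_v, phi_v)."""
    x = np.array([theta_n, phi_n, theta_v, phi_v])
    h = np.tanh(x @ W1 + b1)
    return 1.0 / (1.0 + np.exp(-(h @ W2 + b2)))   # RGB in [0, 1]

# A classic 2D reflectance map is the special case with a fixed view direction:
def reflectance_map(theta_n, phi_n):
    return dam(theta_n, phi_n, 0.0, 0.0)

c = dam(0.3, 1.0, 0.2, -0.5)
r = reflectance_map(0.3, 1.0)
```

Training this function per material on observed pixels would yield the appearance representation the abstract describes; the learning-to-learn variant would have a second network predict the DAM weights directly.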
First lattice evidence for a non-trivial renormalization of the Higgs condensate
General arguments related to ``triviality'' predict that, in the broken phase of the theory, the condensate re-scales by a factor $Z_{\phi}$ different from the conventional wavefunction-renormalization factor, $Z_{prop}$. Using a lattice simulation in the Ising limit we measure $Z_{\phi}=m^2 \chi$ from the physical mass and susceptibility, and $Z_{prop}$ from the residue of the shifted-field propagator. We find that the two $Z$'s differ, with the difference increasing rapidly as the continuum limit is approached. Since $Z_{\phi}$ affects the relation of the condensate to the Fermi constant, it can sizeably affect the present bounds on the Higgs mass. Comment: 10 pages, 3 figures, 1 table, Latex2
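Once the physical mass and the zero-momentum susceptibility have been extracted from the lattice data, the measurement $Z_{\phi}=m^2\chi$ is a one-line computation. A sketch with invented placeholder numbers (not values from the paper):

```python
# Hypothetical numbers standing in for lattice measurements: Z_phi is obtained
# as m^2 * chi from the physical mass m and the susceptibility chi, while
# Z_prop would come from the residue of the shifted-field propagator
# (not modeled here).
m = 0.2          # assumed physical mass in lattice units
chi = 30.0       # assumed zero-momentum susceptibility
Z_phi = m**2 * chi
```

The paper's claim is then that this quantity and the propagator-residue $Z_{prop}$ diverge from each other as the continuum limit is approached.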
Monte Carlo Simulation of the Short-time Behaviour of the Dynamic XY Model
Dynamic relaxation of the XY model quenched from a high temperature state to
the critical temperature or below is investigated with Monte Carlo methods.
When a non-zero initial magnetization is given, a critical initial increase of the magnetization is observed in the short-time regime of the dynamic evolution. The dynamic exponent governing this initial increase is directly determined, and the results show that it varies with the temperature. Furthermore, it is demonstrated that this initial increase of the magnetization is universal, i.e. independent of the microscopic details of the initial configurations and of the algorithms. Comment: 14 pages with 5 figures in postscript
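The setup described above, a quench of the XY model evolved with local Monte Carlo dynamics from a state with small non-zero magnetization, can be sketched minimally. This is a toy (tiny lattice, one sweep, assumed parameters), not the paper's simulation:

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy Metropolis dynamics for the 2D XY model: spins are angles theta on an
# L x L periodic lattice; the quench starts from a disordered state with a
# small net magnetization; single-spin moves are accepted with
# probability min(1, exp(-dE / T)).
L, T = 8, 0.7
theta = rng.normal(0.0, 0.3, (L, L))   # small initial magnetization near theta = 0

def local_energy(th, i, j):
    """Energy of spin (i, j) with its four periodic neighbors: -sum cos(dtheta)."""
    s = th[i, j]
    nb = [th[(i + 1) % L, j], th[(i - 1) % L, j],
          th[i, (j + 1) % L], th[i, (j - 1) % L]]
    return -sum(np.cos(s - n) for n in nb)

def sweep(th):
    """One Metropolis sweep over the lattice."""
    for i in range(L):
        for j in range(L):
            old = th[i, j]
            e_old = local_energy(th, i, j)
            th[i, j] = old + rng.uniform(-0.5, 0.5)
            d_e = local_energy(th, i, j) - e_old
            if rng.random() >= np.exp(min(0.0, -d_e / T)):
                th[i, j] = old          # reject the move
    return th

def magnetization(th):
    """Magnitude of the mean spin vector, in [0, 1]."""
    return np.hypot(np.cos(th).mean(), np.sin(th).mean())

m0 = magnetization(theta)
sweep(theta)
```

A study of the short-time regime would record `magnetization(theta)` sweep by sweep, average over many independent runs, and fit the early-time growth.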
X-Fields: Implicit Neural View-, Light- and Time-Image Interpolation
We suggest representing an X-Field (a set of 2D images taken across different view, time or illumination conditions, i.e., video, light field, reflectance fields or combinations thereof) by learning a neural network (NN) that maps their view, time or light coordinates to 2D images. Executing this NN at new coordinates results in joint view, time and light interpolation. The key idea that makes this workable is an NN that already knows the "basic tricks" of graphics (lighting, 3D projection, occlusion) in a hard-coded and differentiable form. The NN represents the input to that rendering as an implicit map that, for any view, time or light coordinate and for any pixel, can quantify how the pixel will move if the view, time or light coordinates change (the Jacobian of pixel position with respect to view, time, illumination, etc.). Our X-Field representation is trained for one scene within minutes, leading to a compact set of trainable parameters and hence real-time navigation in view, time and illumination.
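The Jacobian idea above says that, to first order, a pixel moves linearly under a small change of the view/time/light coordinate. A toy sketch with invented numbers (the actual method learns these Jacobians per pixel; here one is hard-coded for illustration):

```python
import numpy as np

# Toy first-order warp: given the Jacobian of a pixel's position with respect
# to the (view, time, light) coordinate, a small coordinate change delta moves
# the pixel by J @ delta. All values below are invented for illustration.
def warp_pixel(p, jacobian, delta_coord):
    """First-order motion of pixel p under a coordinate change delta_coord."""
    return p + jacobian @ delta_coord

p = np.array([120.0, 64.0])             # pixel position (x, y)
J = np.array([[3.0, 0.5, 0.0],          # d(px)/d(view), d(px)/d(time), d(px)/d(light)
              [0.2, 1.5, 0.1]])         # d(py)/d(view), d(py)/d(time), d(py)/d(light)

# Move 0.1 along the view coordinate only:
new_p = warp_pixel(p, J, np.array([0.1, 0.0, 0.0]))
```

In the full method, such per-pixel flows are predicted by the network and fed into a differentiable, occlusion-aware warp of the nearest captured images.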
Resonating Experiences of Self and Others enabled by a Tangible Somaesthetic Design
Digitalization is penetrating every aspect of everyday life, including a human's heartbeat, which can easily be sensed by wearable sensors and displayed for others to see, feel, and potentially "bodily resonate" with. Previous work studying human interactions and interaction designs with physiological data, such as a heart's pulse rate, has argued that feeding it back to users may, for example, support their mindfulness and self-awareness during various everyday activities and ultimately support their wellbeing. Inspired by Somaesthetics as a discipline, which focuses on an appreciation of the living body's role in all our experiences, we designed and explored mobile tangible heart beat displays, which enable rich forms of bodily experiencing oneself and others in social proximity. In this paper, we first report on the design process of the tangible heart displays and then present results of a field study with 30 pairs of participants. Participants were asked to use the tangible heart displays while watching movies together and to report their experience in three different heart display conditions (i.e., displaying their own heart beat, their partner's heart beat, and watching a movie without a heart display). We found, for example, that participants reported significant effects on experienced sensory immersion when they felt their own heart beats, compared to the condition without any heart beat display, and that feeling their partner's heart beats resulted in significant effects on social experience. We refer to resonance theory to discuss the results, highlighting the potential of how ubiquitous technology could utilize physiological data to provide resonance in a modern society facing social acceleration. Comment: 18 pages