
    Neural View-Interpolation for Sparse Light Field Video

    We suggest representing light field (LF) videos as "one-off" neural networks (NNs), i.e., a learned mapping from view-plus-time coordinates to high-resolution color values, trained on sparse views. Initially, this sounds like a bad idea for three main reasons: First, an NN LF will likely have lower quality than a pixel-basis representation of the same size. Second, only little training data is available for sparse LF videos, e.g., nine exemplars per frame. Third, there is no generalization across LFs, only across view and time, so a network needs to be trained for each LF video. Surprisingly, these problems can turn into substantial advantages: Unlike the linear pixel basis, an NN has to come up with a compact, non-linear, i.e., more intelligent, explanation of color, conditioned on the sparse view and time coordinates. As observed for many NNs, however, this representation is interpolatable: if the image output is plausible for the sparse view coordinates, it is plausible for all intermediate, continuous coordinates as well. Our specific network architecture involves a differentiable occlusion-aware warping step, which leads to a compact set of trainable parameters and consequently fast learning and fast execution.
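    As a hedged illustration of the core idea (not the paper's exact architecture, which adds the differentiable occlusion-aware warping step), a minimal per-scene coordinate network could look as follows, assuming PyTorch and a hypothetical 5D input of view, time, and pixel coordinates:

        import torch
        import torch.nn as nn

        class OneOffLightFieldNet(nn.Module):
            """Per-scene MLP: (view_x, view_y, t, px, py) -> RGB."""
            def __init__(self, hidden=256):
                super().__init__()
                self.mlp = nn.Sequential(
                    nn.Linear(5, hidden), nn.ReLU(),
                    nn.Linear(hidden, hidden), nn.ReLU(),
                    nn.Linear(hidden, 3), nn.Sigmoid(),  # RGB in [0, 1]
                )

            def forward(self, coords):  # coords: (N, 5)
                return self.mlp(coords)

        # Fit to the sparse views (e.g., nine per frame); querying intermediate,
        # continuous coordinates afterwards interpolates novel views.
        net = OneOffLightFieldNet()
        rgb = net(torch.rand(1024, 5))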

    Short-time critical dynamics and universality on a two-dimensional Triangular Lattice

    Critical scaling and universality in the short-time dynamics of spin models on a two-dimensional triangular lattice are investigated using Monte Carlo simulation. Emphasis is placed on the dynamic evolution from fully ordered initial states, showing that universal scaling already exists in the short-time regime in the form of power-law behavior of the magnetization and the Binder cumulant. The measured values of the dynamic and static critical exponents θ, z, β, and ν confirm explicitly that the Potts models on the triangular lattice and on the square lattice belong to the same universality class. Our critical scaling analysis strongly suggests that simulations of the dynamic relaxation can be used to determine the universality class numerically.
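    A hedged sketch of such a short-time measurement, assuming NumPy, Metropolis dynamics, and the q = 2 Potts model on a triangular lattice (embedded in a square array with two extra diagonal bonds); the critical coupling below is obtained from the standard Ising mapping and should be treated as an assumption:

        import numpy as np

        L, q, sweeps = 64, 2, 100
        rng = np.random.default_rng(0)
        s = np.zeros((L, L), dtype=int)  # fully ordered initial state

        def neighbours(i, j):
            # six neighbours of the triangular lattice, periodic boundaries
            return [((i + 1) % L, j), ((i - 1) % L, j),
                    (i, (j + 1) % L), (i, (j - 1) % L),
                    ((i + 1) % L, (j + 1) % L), ((i - 1) % L, (j - 1) % L)]

        def sweep(beta):
            for _ in range(L * L):
                i, j = rng.integers(L), rng.integers(L)
                old, new = s[i, j], rng.integers(q)
                dE = sum(int(s[n] == old) - int(s[n] == new)
                         for n in neighbours(i, j))
                if dE <= 0 or rng.random() < np.exp(-beta * dE):
                    s[i, j] = new

        beta_c = np.log(3) / 2  # q=2 Potts on the triangular lattice via the
                                # Ising mapping (assumption, J = 1)
        for t in range(1, sweeps + 1):
            sweep(beta_c)
            m = (q * np.mean(s == 0) - 1) / (q - 1)  # order parameter
            print(t, m)  # from the ordered state, m(t) should decay as a
                         # power law, with exponent set by beta/(nu*z)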

    Microscopic Non-Universality versus Macroscopic Universality in Algorithms for Critical Dynamics

    We study relaxation processes in spin systems near criticality after a quench from a high-temperature initial state. Special attention is paid to the stage where universal behavior, with an increasing order parameter, emerges from an early non-universal period. We compare various algorithms, lattice types, and updating schemes and find in each case the same universal behavior at macroscopic times, despite surprising differences during the early non-universal stages.
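    For concreteness, a hedged sketch (assuming NumPy and Ising spins s = ±1) of two single-spin updating schemes of the kind such comparisons involve; they differ microscopically but are expected to share the same universal behavior at macroscopic times:

        import numpy as np

        rng = np.random.default_rng(0)

        def metropolis_flip(s_ij, h_local, beta):
            # h_local: sum of neighbouring spins; flipping changes E by 2*s*h
            dE = 2.0 * s_ij * h_local
            return -s_ij if (dE <= 0 or rng.random() < np.exp(-beta * dE)) else s_ij

        def heatbath_draw(h_local, beta):
            # draw the new spin from its conditional Boltzmann distribution,
            # independently of its current value
            p_up = 1.0 / (1.0 + np.exp(-2.0 * beta * h_local))
            return 1 if rng.random() < p_up else -1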

    Monte Carlo Simulation of the Short-time Behaviour of the Dynamic XY Model

    Dynamic relaxation of the XY model, quenched from a high-temperature state to the critical temperature or below, is investigated with Monte Carlo methods. When a non-zero initial magnetization is given, the critical initial increase of the magnetization is observed in the short-time regime of the dynamic evolution. The dynamic exponent θ is determined directly. The results show that the exponent θ varies with temperature. Furthermore, it is demonstrated that this initial increase of the magnetization is universal, i.e., independent of the microscopic details of the initial configurations and of the algorithms.
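    The exponent is typically read off a log-log plot: with a small initial magnetization m0, the short-time increase M(t) ~ m0 * t**theta is a straight line in log-log space. A hedged NumPy sketch with an illustrative exponent (not the paper's measured value):

        import numpy as np

        def fit_theta(t, m):
            # slope of log M versus log t estimates theta
            slope, _ = np.polyfit(np.log(t), np.log(m), 1)
            return slope

        t = np.arange(1, 101)
        m = 0.02 * t**0.25          # synthetic data with theta = 0.25
        print(fit_theta(t, m))      # ~0.25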

    X-Ray Scattering at FeCo(001) Surfaces and the Crossover between Ordinary and Normal Transitions

    In a recent experiment by Krimmel et al. [PRL 78, 3880 (1997)], the critical behavior of FeCo near a (001) surface was studied by x-ray scattering. Here the experimental data are reanalyzed, taking into account recent theoretical results on order-parameter profiles in the crossover regime between the ordinary and normal transitions. Excellent agreement between theoretical expectations and the experimental results is found.
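    For context, the standard short-distance forms of the order-parameter profile m(z) at distance z from the surface (textbook surface critical phenomena, with bulk exponents β and ν, surface exponent β₁, lattice spacing a, and correlation length ξ; a sketch of the ingredients, not a reproduction of the paper's fits):

        \[
          m(z) \sim
          \begin{cases}
            z^{(\beta_1-\beta)/\nu}, & \text{ordinary transition},\\
            z^{-\beta/\nu},          & \text{normal transition},
          \end{cases}
          \qquad a \ll z \ll \xi,
        \]

    with the crossover regime interpolating between these two power laws.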

    Scaling property of variational perturbation expansion for general anharmonic oscillator

    We prove a powerful scaling property for the extremality condition in the recently developed variational perturbation theory, which converts divergent perturbation expansions into exponentially fast convergent ones. The proof is given for the energy eigenvalues of an anharmonic oscillator with an arbitrary x^p potential. The scaling property greatly increases the accuracy of the results.
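    As a hedged numerical illustration of the lowest-order machinery (assuming SciPy; the paper's scaling property concerns the extremality condition at general order and for general x^p potentials), the first-order variational energy of the quartic oscillator H = p²/2 + x²/2 + g x⁴ is E₁(Ω) = Ω/4 + 1/(4Ω) + 3g/(4Ω²), to be optimized over the trial frequency Ω:

        from scipy.optimize import minimize_scalar

        def E1(omega, g=1.0):
            # Gaussian-trial-state expectation value of H at frequency omega
            return omega / 4 + 1 / (4 * omega) + 3 * g / (4 * omega**2)

        res = minimize_scalar(E1, bounds=(0.1, 10.0), method="bounded")
        print(res.x, res.fun)   # Omega = 2 and E1 = 0.8125 for g = 1; a
                                # Rayleigh-Ritz upper bound on the true energy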

    X-Fields: Implicit Neural View-, Light- and Time-Image Interpolation

    We suggest representing an X-Field (a set of 2D images taken across different view, time, or illumination conditions, i.e., video, light fields, reflectance fields, or combinations thereof) by learning a neural network (NN) that maps their view, time, or light coordinates to 2D images. Executing this NN at new coordinates results in joint view, time, and light interpolation. The key idea to make this workable is an NN that already knows the "basic tricks" of graphics (lighting, 3D projection, occlusion) in a hard-coded and differentiable form. The NN represents the input to that rendering as an implicit map that, for any view, time, or light coordinate and for any pixel, can quantify how the pixel will move if the view, time, or light coordinates change (the Jacobian of pixel position with respect to view, time, illumination, etc.). Our X-Field representation is trained for one scene within minutes, leading to a compact set of trainable parameters and hence real-time navigation in view, time, and illumination.
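    A hedged sketch of the differentiable-warping ingredient, assuming PyTorch: a network behind the flow would predict, per pixel, how the pixel moves as the capture coordinate changes, and the image at a new coordinate is obtained by warping a captured image along that flow with a differentiable sampler:

        import torch
        import torch.nn.functional as F

        def warp(image, flow):
            # image: (1, 3, H, W); flow: (1, H, W, 2) offsets in [-1, 1] units
            _, _, H, W = image.shape
            ys, xs = torch.meshgrid(torch.linspace(-1, 1, H),
                                    torch.linspace(-1, 1, W), indexing="ij")
            base = torch.stack([xs, ys], dim=-1).unsqueeze(0)  # identity grid
            return F.grid_sample(image, base + flow, align_corners=True)

        img = torch.rand(1, 3, 64, 64)
        flow = 0.01 * torch.randn(1, 64, 64, 2, requires_grad=True)
        out = warp(img, flow)   # differentiable w.r.t. the predicted flow
        out.mean().backward()   # gradients reach the flow (and any NN behind it)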

    Transformation-aware Perceptual Image Metric

    Predicting human visual perception has several applications, such as compression, rendering, editing, and retargeting. Current approaches, however, ignore the fact that the human visual system compensates for geometric transformations, e.g., we see that an image and a rotated copy of it are identical; instead, they report a large, false-positive difference. At the same time, if the transformations become too strong or too spatially incoherent, comparing two images becomes increasingly difficult. Between these two extrema, we propose a system to quantify the effect of transformations, not only on the perception of image differences, but also on saliency and motion parallax. To this end, we first fit local homographies to a given optical flow field and then convert this field into a field of elementary transformations, such as translation, rotation, scaling, and perspective. We conduct a perceptual experiment quantifying the increase in difficulty when compensating for elementary transformations. Transformation entropy is proposed as a measure of complexity in a flow field. This representation is then used for applications such as the comparison of non-aligned images, where transformations cause threshold elevation, the detection of salient transformations, and a model of perceived motion parallax. Applications of our approach include a perceptual level-of-detail for real-time rendering and viewpoint selection based on perceived motion parallax.
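    One building block, sketched under the assumption of NumPy and a hypothetical helper name: after fitting a local homography to the flow, its affine part can be split into elementary transformations via an SVD-based polar decomposition, with translation and perspective read off the matrix directly:

        import numpy as np

        def decompose_homography(H):
            H = H / H[2, 2]          # normalize
            A = H[:2, :2]            # linear (affine) part
            U, S, Vt = np.linalg.svd(A)
            return {"translation": H[:2, 2],
                    "rotation": np.arctan2((U @ Vt)[1, 0], (U @ Vt)[0, 0]),
                    "scale": S,      # singular values: anisotropic scaling
                    "perspective": H[2, :2]}

        H = np.array([[0.98, -0.05,  2.0],
                      [0.05,  0.98, -1.0],
                      [1e-4,  0.00,  1.0]])
        print(decompose_homography(H))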

    Deep Shading: Convolutional Neural Networks for Screen-Space Shading

    In computer vision, Convolutional Neural Networks (CNNs) have recently achieved new levels of performance for several inverse problems where RGB pixel appearance is mapped to attributes such as positions, normals, or reflectance. In computer graphics, screen-space shading has recently increased the visual quality of interactive image synthesis, where per-pixel attributes such as positions, normals, or reflectance of a virtual 3D scene are converted into RGB pixel appearance, enabling effects like ambient occlusion, indirect light, scattering, depth-of-field, motion blur, or anti-aliasing. In this paper we consider the diagonal problem: synthesizing appearance from given per-pixel attributes using a CNN. The resulting Deep Shading simulates all screen-space effects, as well as arbitrary combinations thereof, at competitive quality and speed, while being learned from example images rather than programmed by human experts.
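    A hedged sketch of the interface, assuming PyTorch: a fully convolutional network maps a stack of per-pixel attribute channels (a deferred-shading buffer) to RGB. The actual Deep Shading network is a U-Net-style encoder-decoder; this tiny stand-in only fixes the input/output shapes, and the channel layout is assumed:

        import torch
        import torch.nn as nn

        in_channels = 3 + 3 + 3   # e.g., position, normal, diffuse reflectance
        net = nn.Sequential(
            nn.Conv2d(in_channels, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 3, 3, padding=1), nn.Sigmoid(),  # RGB in [0, 1]
        )

        gbuffer = torch.rand(1, in_channels, 128, 128)  # screen-space attributes
        rgb = net(gbuffer)                              # shaded image (1, 3, H, W)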

    HDR Denoising and Deblurring by Learning Spatio-temporal Distortion Model

    We seek to reconstruct sharp and noise-free high-dynamic-range (HDR) video from a dual-exposure sensor that records different low-dynamic-range (LDR) information in different pixel columns: odd columns provide low-exposure, sharp, but noisy information; even columns complement this with less noisy, high-exposure, but motion-blurred data. Previous LDR work learns to deblur and denoise (DISTORTED->CLEAN), supervised by pairs of CLEAN and DISTORTED images. Regrettably, capturing DISTORTED sensor readings is time-consuming, and there is a lack of CLEAN HDR videos. We suggest a method to overcome both limitations. First, we learn the opposite function, CLEAN->DISTORTED, which generates samples containing correlated pixel noise, row and column noise, and motion blur from a small number of CLEAN sensor readings. Second, as there is not enough CLEAN HDR video available, we devise a method to learn from LDR video instead. Our approach compares favorably to several strong baselines and can boost existing methods when they are retrained on our data. Combined with spatial and temporal super-resolution, it enables applications such as relighting with low noise or blur.
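    A hedged sketch of the CLEAN->DISTORTED direction, assuming NumPy and purely illustrative (uncalibrated) parameters: a clean video clip is turned into one plausible dual-exposure readout by blurring the high-exposure columns over time and adding pixel, row, and column noise to the sharp columns:

        import numpy as np

        rng = np.random.default_rng(0)

        def clean_to_distorted(frames, blur_len=5, noise_sigma=0.05):
            # frames: (T, H, W) clean video; returns one (H, W) distorted frame
            blurred = frames[-blur_len:].mean(axis=0)      # temporal motion blur
            sharp = frames[-1]
            H, W = sharp.shape
            noise = (noise_sigma * rng.standard_normal((H, W))
                     + 0.01 * rng.standard_normal((H, 1))  # row-correlated noise
                     + 0.01 * rng.standard_normal((1, W))) # column-correlated noise
            out = np.empty((H, W))
            out[:, 0::2] = (sharp + noise)[:, 0::2]  # sharp but noisy columns
            out[:, 1::2] = blurred[:, 1::2]          # blurred, less noisy columns
            return out

        video = rng.random((8, 64, 64))
        print(clean_to_distorted(video).shape)       # (64, 64)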