    Context-aware Synthesis for Video Frame Interpolation

    Video frame interpolation algorithms typically estimate optical flow or its variations and then use it to guide the synthesis of an intermediate frame between two consecutive original frames. To handle challenges like occlusion, bidirectional flow between the two input frames is often estimated and used to warp and blend the input frames. However, how to effectively blend the two warped frames remains a challenging problem. This paper presents a context-aware synthesis approach that warps not only the input frames but also their pixel-wise contextual information and uses them to interpolate a high-quality intermediate frame. Specifically, we first use a pre-trained neural network to extract per-pixel contextual information for the input frames. We then employ a state-of-the-art optical flow algorithm to estimate bidirectional flow between them and pre-warp both the input frames and their context maps. Finally, unlike common approaches that blend the pre-warped frames, our method feeds them and their context maps to a video frame synthesis neural network to produce the interpolated frame in a context-aware fashion. Our neural network is fully convolutional and is trained end-to-end. Our experiments show that our method can handle challenging scenarios such as occlusion and large motion, and outperforms representative state-of-the-art approaches.
    Comment: CVPR 2018, http://graphics.cs.pdx.edu/project/ctxsy
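    The pipeline described above can be summarized in a minimal PyTorch sketch. This is an illustration, not the paper's implementation: the ResNet-18 conv1 context extractor, the backward-warping helper, and the placeholder synthesis network synth_net are assumptions standing in for the abstract's unnamed pre-trained feature extractor, flow-based pre-warping, and learned synthesis network.

        import torch
        import torch.nn.functional as F
        import torchvision

        def backward_warp(x, flow):
            # Sample x (N,C,H,W) at positions displaced by flow (N,2,H,W), (x,y) order.
            n, _, h, w = x.shape
            ys, xs = torch.meshgrid(torch.arange(h), torch.arange(w), indexing="ij")
            base = torch.stack((xs, ys)).float().to(x.device)      # (2,H,W)
            coords = base.unsqueeze(0) + flow
            gx = 2.0 * coords[:, 0] / (w - 1) - 1.0                # normalize to [-1,1]
            gy = 2.0 * coords[:, 1] / (h - 1) - 1.0
            return F.grid_sample(x, torch.stack((gx, gy), dim=-1), align_corners=True)

        # Context extractor: conv1 of a pre-trained ResNet-18 (an assumption; the
        # abstract specifies only "a pre-trained neural network").
        resnet = torchvision.models.resnet18(weights="IMAGENET1K_V1").eval()

        def interpolate(frame0, frame1, flow_t0, flow_t1, synth_net):
            # Per-pixel context maps; conv1 halves the resolution, so upsample back.
            ctx0 = F.interpolate(resnet.conv1(frame0), size=frame0.shape[-2:],
                                 mode="bilinear", align_corners=False)
            ctx1 = F.interpolate(resnet.conv1(frame1), size=frame1.shape[-2:],
                                 mode="bilinear", align_corners=False)
            # Pre-warp each frame together with its context map to time t
            # (backward warping is a simplification; the abstract does not fix
            # the warping scheme).
            warped0 = backward_warp(torch.cat([frame0, ctx0], 1), flow_t0)
            warped1 = backward_warp(torch.cat([frame1, ctx1], 1), flow_t1)
            # A learned synthesis network, not a fixed blend, produces the frame.
            return synth_net(torch.cat([warped0, warped1], 1))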

    Video Frame Interpolation via Adaptive Separable Convolution

    Standard video frame interpolation methods first estimate optical flow between input frames and then synthesize an intermediate frame guided by motion. Recent approaches merge these two steps into a single convolution process by convolving the input frames with spatially adaptive kernels that account for motion and re-sampling simultaneously. These methods require large kernels to handle large motion, which limits the number of pixels whose kernels can be estimated at once due to the large memory demand. To address this problem, this paper formulates frame interpolation as local separable convolution over the input frames using pairs of 1D kernels. Compared to regular 2D kernels, the 1D kernels require significantly fewer parameters to be estimated. Our method develops a deep fully convolutional neural network that takes two input frames and estimates pairs of 1D kernels for all pixels simultaneously. Since our method is able to estimate kernels and synthesize the whole video frame at once, it allows for the incorporation of perceptual loss to train the neural network to produce visually pleasing frames. This deep neural network is trained end-to-end using widely available video data without any human annotation. Both qualitative and quantitative experiments show that our method provides a practical solution to high-quality video frame interpolation.
    Comment: ICCV 2017, http://graphics.cs.pdx.edu/project/sepconv
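    The core operation admits a compact sketch (again an illustration, not the released code): each output pixel is synthesized from an n-by-n patch via the outer product of per-pixel vertical and horizontal 1D kernels, so only 2n coefficients per pixel are estimated instead of n^2. In the full method a CNN predicts the kernel maps for both input frames and the two filtered results are summed; here the kernel maps are taken as given.

        import torch
        import torch.nn.functional as F

        def separable_local_conv(frame, kv, kh):
            # frame: (N,C,H,W); kv, kh: (N,n,H,W) per-pixel vertical/horizontal
            # 1D kernels with n odd. The effective 2D kernel at each pixel is the
            # outer product of its kv and kh columns.
            N, C, H, W = frame.shape
            n = kv.shape[1]
            patches = F.unfold(frame, kernel_size=n, padding=n // 2)  # (N, C*n*n, H*W)
            patches = patches.view(N, C, n, n, H, W)
            k2d = kv.unsqueeze(2) * kh.unsqueeze(1)                   # (N,n,n,H,W)
            return (patches * k2d.unsqueeze(1)).sum(dim=(2, 3))       # (N,C,H,W)

        # One interpolated frame from two inputs, given predicted kernel maps:
        # frame_t = separable_local_conv(frame0, kv0, kh0) \
        #         + separable_local_conv(frame1, kv1, kh1)

    As a concrete count, a 51-tap kernel pair costs 102 coefficients per pixel and per input frame, versus 2,601 for a full 51x51 2D kernel, which is what makes estimating kernels for all pixels at once feasible.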

    Quasi-phase-matching of high-order-harmonic generation using multimode polarization beating

    The generalization of quasi-phase-matching using polarization beating and multimode quasi-phase-matching (MMQPM) for the generation of high-order harmonics is explored, and a method for achieving polarization beating is proposed. If two (and in principle more) modes of a waveguide are excited, the intensity, phase, and/or polarization of the guided radiation will be modulated along the waveguide. By appropriately matching the period of this modulation to the coherence length, quasi-phase-matching of the high-order-harmonic radiation generated by the guided wave can occur. We show that multimode quasi-phase-matching can achieve conversion efficiencies greater than those of ideal square-wave modulation. We present a Fourier treatment of QPM and use it to show that phase modulation, rather than amplitude modulation, plays the dominant role in MMQPM. The experimental parameters and optimal conditions for this scheme are explored.
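    The Fourier picture can be checked numerically with a short sketch (all values illustrative, not from the paper): the harmonic amplitude is proportional to the integral of a(z) exp(i dk z), so the Fourier coefficient of the source modulation a(z) at the spatial frequency dk sets the enhancement. Pure phase modulation exp(i b sin(dk z)) contributes a Bessel factor J1(b), whose maximum (~0.58 at b = 1.84) exceeds the 1/pi coefficient of ideal on/off square-wave modulation, consistent with the claim above.

        import numpy as np

        dk = 2 * np.pi / 50e-6                    # phase mismatch (rad/m), illustrative
        Lc = np.pi / dk                           # coherence length
        z = np.linspace(0, 40 * Lc, 200001)       # 20 modulation periods

        def harmonic_yield(a):
            # |integral of a(z) exp(i*dk*z) dz|^2 for a modulated source term a(z).
            return np.abs(np.trapz(a * np.exp(1j * dk * z), z)) ** 2

        # Ideal on/off square-wave modulation, period 2*Lc: its Fourier
        # coefficient at spatial frequency dk has magnitude 1/pi.
        square = harmonic_yield(0.5 * (1 + np.sign(np.sin(dk * z))))
        # Pure phase modulation: by Jacobi-Anger, exp(1j*b*sin(dk*z)) has a
        # coefficient of magnitude J1(b) at dk, maximal (~0.58) near b = 1.84.
        phase = harmonic_yield(np.exp(1.84j * np.sin(dk * z)))

        print(phase / square)                     # ~3.3: phase beats square wave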

    Unitary and non-unitary N=2 minimal models

    The unitary N=2 superconformal minimal models have a long history in string theory and mathematical physics, while their non-unitary (and logarithmic) cousins have recently attracted interest from mathematicians. Here, we give an efficient and uniform analysis of all these models as an application of a type of Schur-Weyl duality, as it pertains to the well-known Kazama-Suzuki coset construction. The results include straightforward classifications of the irreducible modules, branching rules, (super)characters and (Grothendieck) fusion rules.
    Comment: 32 pages
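    For orientation, the Kazama-Suzuki coset underlying these models and its central charge can be stated in a standard schematic form (well-known facts, not details from this paper; u(1) level conventions vary between references):

        % Kazama-Suzuki coset presentation of the N=2 minimal models (schematic):
        \[
          \frac{\widehat{\mathfrak{su}}(2)_k \oplus \widehat{\mathfrak{u}}(1)}
               {\widehat{\mathfrak{u}}(1)}\,,
          \qquad
          c \;=\; \frac{3k}{k+2} \;=\; 3 - \frac{6}{k+2}\,.
        \]
        % Unitary models: k a non-negative integer, so 0 <= c < 3. The non-unitary
        % cousins arise at admissible fractional levels k = -2 + u/v with
        % coprime u >= 2, v >= 1.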