
    The energy-critical defocusing NLS on T^3

    We prove global well-posedness in H^1(T^3) for the energy-critical defocusing NLS.
    Comment: 24 pages

    Nonlinear cross Gramians and gradient systems

    We study the notion of cross Gramians for nonlinear gradient systems, using the characterization in terms of the prolongation and gradient extension associated with the system. The cross Gramian is given for the variational system associated with the original nonlinear gradient system. We obtain linearization results that precisely correspond to the notion of a cross Gramian for symmetric linear systems. Furthermore, we take first steps towards relating these to the singular value functions of the nonlinear Hankel operator, with promising results.

    On the particle paths and the stagnation points in small-amplitude deep-water waves

    In order to obtain quite precise information about the shape of the particle paths below small-amplitude gravity waves travelling on irrotational deep water, analytic solutions of the nonlinear differential equation system describing the particle motion are provided. None of these solutions are closed curves. Some particle trajectories are peakon-like; others can be expressed in terms of Jacobi elliptic functions or hyperelliptic functions. Remarks on the stagnation points of small-amplitude irrotational deep-water waves are also made.
    Comment: to appear in J. Math. Fluid Mech. arXiv admin note: text overlap with arXiv:1106.382

    ContextVP: Fully Context-Aware Video Prediction

    Video prediction models based on convolutional networks, recurrent networks, and their combinations often produce blurry predictions. We identify an important contributing factor to imprecise predictions that has not been studied adequately in the literature: blind spots, i.e., lack of access to all relevant past information needed to accurately predict the future. To address this issue, we introduce a fully context-aware architecture that captures the entire available past context for each pixel using Parallel Multi-Dimensional LSTM units and aggregates it using blending units. Our model outperforms a strong baseline network of 20 recurrent convolutional layers and yields state-of-the-art performance for next-step prediction on three challenging real-world video datasets: Human3.6M, Caltech Pedestrian, and UCF-101. Moreover, it does so with fewer parameters than several recently proposed models, and does not rely on deep convolutional networks, multi-scale architectures, separation of background and foreground modeling, motion flow learning, or adversarial training. These results highlight that full awareness of past context is of crucial importance for video prediction.
    Comment: 19 pages. ECCV 2018 oral presentation. Project webpage is at https://wonmin-byeon.github.io/publication/2018-ecc