22,044 research outputs found

    Transport-Based Neural Style Transfer for Smoke Simulations

    Full text link
    Artistically controlling fluids has always been a challenging task. Optimization techniques rely on approximating simulation states towards target velocity or density field configurations, which are often handcrafted by artists to indirectly control smoke dynamics. Patch synthesis techniques transfer image textures or simulation features to a target flow field. However, these are either limited to adding structural patterns or to augmenting coarse flows with turbulent structures, and hence cannot capture the full spectrum of different styles and semantically complex structures. In this paper, we propose the first Transport-based Neural Style Transfer (TNST) algorithm for volumetric smoke data. Our method transfers features from natural images to smoke simulations, enabling general content-aware manipulations ranging from simple patterns to intricate motifs. The proposed algorithm is physically inspired, since it computes the density transport from a source input smoke to a desired target configuration. Our transport-based approach allows direct control over the divergence of the stylization velocity field by optimizing incompressible and irrotational potentials that transport smoke towards stylization. Temporal consistency is ensured by transporting and aligning subsequent stylized velocities, and 3D reconstructions are computed by seamlessly merging stylizations from different camera viewpoints.
    Comment: ACM Transactions on Graphics (SIGGRAPH Asia 2019); additional materials: http://www.byungsoo.me/project/neural-flow-styl
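The incompressibility control mentioned in the abstract can be illustrated with a toy sketch: if the stylization velocity is parameterized as the curl of a scalar potential, it is divergence-free by construction. This is a minimal 2D stand-in, not the paper's 3D implementation; the grid size, periodic boundaries, and random potential are assumptions for illustration:

```python
import numpy as np

def curl_of_potential(psi):
    """Velocity (u, v) = (d psi/dy, -d psi/dx) via central differences.

    Any velocity built this way is divergence-free, which is the property
    a transport-based stylization can exploit: optimizing a scalar
    potential instead of raw velocities keeps the flow incompressible.
    """
    u = (np.roll(psi, -1, axis=1) - np.roll(psi, 1, axis=1)) / 2.0   # d psi/dy
    v = -(np.roll(psi, -1, axis=0) - np.roll(psi, 1, axis=0)) / 2.0  # -d psi/dx
    return u, v

def divergence(u, v):
    """Discrete divergence du/dx + dv/dy with central differences."""
    du_dx = (np.roll(u, -1, axis=0) - np.roll(u, 1, axis=0)) / 2.0
    dv_dy = (np.roll(v, -1, axis=1) - np.roll(v, 1, axis=1)) / 2.0
    return du_dx + dv_dy

rng = np.random.default_rng(0)
psi = rng.standard_normal((32, 32))  # stand-in for an optimized potential
u, v = curl_of_potential(psi)
print(np.abs(divergence(u, v)).max())  # ~0 up to floating-point error
```

With central differences the discrete divergence of the discrete curl cancels exactly, so the check holds to machine precision; an optimizer can then adjust `psi` freely without ever violating incompressibility.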

    Style Transfer and Extraction for the Handwritten Letters Using Deep Learning

    Full text link
    How can we learn, transfer and extract handwriting styles using deep neural networks? This paper explores these questions using a deep conditioned autoencoder on the IRON-OFF handwriting dataset. We perform three experiments that systematically explore the quality of our style extraction procedure. First, we compare our model to handwriting benchmarks using multidimensional performance metrics. Second, we explore the quality of style transfer, i.e. how the model performs on new, unseen writers. In both experiments, we improve the metrics of state-of-the-art methods by a large margin. Lastly, we analyze the latent space of our model, and we see that it consistently separates writing styles.
    Comment: Accepted in ICAART 201
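The conditioning mechanism behind such a style transfer can be sketched as follows: the encoder sees an image together with its character label and produces a style code, and the decoder renders a *different* character conditioned on that code. This is only a wiring diagram with random linear layers, not the paper's trained model; all dimensions and the one-hot labels are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative dimensions, not the paper's.
IMG, COND, LATENT = 64, 10, 8

# Random weights stand in for trained parameters.
W_enc = rng.standard_normal((LATENT, IMG + COND)) * 0.1
W_dec = rng.standard_normal((IMG, LATENT + COND)) * 0.1

def encode(x, c):
    """Extract a style code from image x written as character c (one-hot)."""
    return np.tanh(W_enc @ np.concatenate([x, c]))

def decode(z, c):
    """Render character c in the style captured by z."""
    return np.tanh(W_dec @ np.concatenate([z, c]))

x = rng.standard_normal(IMG)  # a handwritten letter image, flattened
c_src = np.eye(COND)[3]       # its character label
c_tgt = np.eye(COND)[7]       # a different character to render

z = encode(x, c_src)          # style extraction
y = decode(z, c_tgt)          # style transfer: same writer, new character
print(y.shape)                # (64,)
```

Because the character identity is supplied to both encoder and decoder, the latent `z` is pushed to carry only writer style, which is what makes the latent space separate writing styles.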

    Nilpotent normal form for divergence-free vector fields and volume-preserving maps

    Full text link
    We study the normal forms for incompressible flows and maps in the neighborhood of an equilibrium or fixed point with a triple eigenvalue. We prove that when a divergence-free vector field in $\mathbb{R}^3$ has a nilpotent linearization with a maximal Jordan block then, to arbitrary degree, coordinates can be chosen so that the nonlinear terms occur as a single function of two variables in the third component. The analogue for volume-preserving diffeomorphisms gives an optimal normal form in which the truncation of the normal form at any degree gives an exactly volume-preserving map whose inverse is also polynomial of the same degree.
    Comment: LaTeX, 20 pages, 1 figure
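The divergence-free property of the stated normal form is easy to verify numerically: with the nilpotent linear part (y, z, 0) and the nonlinear terms collected into one function F of two variables in the third component, no component depends on the variable it is differentiated in. The specific F below is an arbitrary smooth example of my choosing:

```python
import math

def field(x, y, z, F):
    """Normal-form field: nilpotent linear part with maximal Jordan
    block, plus nonlinear terms as a single function F(x, y) in the
    third component."""
    return (y, z, F(x, y))

def divergence(F, p, h=1e-5):
    """Central-difference divergence of the field at point p."""
    x, y, z = p
    d1 = (field(x + h, y, z, F)[0] - field(x - h, y, z, F)[0]) / (2 * h)
    d2 = (field(x, y + h, z, F)[1] - field(x, y - h, z, F)[1]) / (2 * h)
    d3 = (field(x, y, z + h, F)[2] - field(x, y, z - h, F)[2]) / (2 * h)
    return d1 + d2 + d3

F = lambda x, y: x**3 - 2 * x * y + math.sin(y)  # any smooth F works
print(divergence(F, (0.3, -0.7, 1.2)))  # 0.0
```

The first component depends only on y, the second only on z, and the third only on x and y, so each partial derivative in the divergence vanishes identically, for every choice of F.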


    Reversible GANs for Memory-efficient Image-to-Image Translation

    Full text link
    The Pix2pix and CycleGAN losses have vastly improved the qualitative and quantitative visual quality of results in image-to-image translation tasks. We extend this framework by exploring approximately invertible architectures which are well suited to these losses. These architectures are approximately invertible by design and thus partially satisfy cycle-consistency before training even begins. Furthermore, since invertible architectures have constant memory complexity in depth, these models can be built arbitrarily deep. We demonstrate superior quantitative output on the Cityscapes and Maps datasets at a nearly constant memory budget.
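The invertible-by-design idea can be sketched with an additive coupling block of the NICE/RevNet family: the block is exactly invertible whatever its internal subnetworks compute, which is also why activations can be recomputed on the fly for constant memory in depth. The subnetworks f and g below are arbitrary stand-ins, not the paper's models:

```python
import random

def coupling_forward(x1, x2, f, g):
    """Additive coupling block: invertible by construction,
    regardless of what f and g are."""
    y1 = [a + b for a, b in zip(x1, f(x2))]
    y2 = [a + b for a, b in zip(x2, g(y1))]
    return y1, y2

def coupling_inverse(y1, y2, f, g):
    """Exact inverse: subtract the same subnetwork outputs in
    reverse order -- no stored activations needed."""
    x2 = [a - b for a, b in zip(y2, g(y1))]
    x1 = [a - b for a, b in zip(y1, f(x2))]
    return x1, x2

# Stand-ins for learned subnetworks; they need not be invertible themselves.
f = lambda v: [t * t for t in v]
g = lambda v: [0.5 * t for t in v]

random.seed(0)
x1 = [random.uniform(-1, 1) for _ in range(4)]
x2 = [random.uniform(-1, 1) for _ in range(4)]
y1, y2 = coupling_forward(x1, x2, f, g)
r1, r2 = coupling_inverse(y1, y2, f, g)
print(max(abs(a - b) for a, b in zip(x1 + x2, r1 + r2)))  # ~0
```

Since the inverse only re-evaluates f and g, stacking many such blocks keeps peak memory flat: intermediate activations can be reconstructed from the output during backpropagation instead of being stored.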