747 research outputs found

    Transport-Based Neural Style Transfer for Smoke Simulations

    Full text link
Artistically controlling fluids has always been a challenging task. Optimization techniques rely on steering simulation states towards target velocity or density field configurations, which are often handcrafted by artists to indirectly control smoke dynamics. Patch synthesis techniques transfer image textures or simulation features to a target flow field. However, these are either limited to adding structural patterns or to augmenting coarse flows with turbulent structures, and hence cannot capture the full spectrum of different styles and semantically complex structures. In this paper, we propose the first Transport-based Neural Style Transfer (TNST) algorithm for volumetric smoke data. Our method is able to transfer features from natural images to smoke simulations, enabling general content-aware manipulations ranging from simple patterns to intricate motifs. The proposed algorithm is physically inspired, since it computes the density transport from a source input smoke to a desired target configuration. Our transport-based approach allows direct control over the divergence of the stylization velocity field by optimizing incompressible and irrotational potentials that transport smoke towards stylization. Temporal consistency is ensured by transporting and aligning subsequent stylized velocities, and 3D reconstructions are computed by seamlessly merging stylizations from different camera viewpoints. Comment: ACM Transactions on Graphics (SIGGRAPH ASIA 2019), additional materials: http://www.byungsoo.me/project/neural-flow-styl
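    A minimal 2D sketch of the transport-based idea described in the abstract: the stylization velocity is parameterized as the sum of an incompressible part (derived from a stream function) and an irrotational part (the gradient of a scalar potential), so the divergence of the field is controlled by construction, and both potentials are optimized against a style objective. The style loss below is a stand-in for the paper's neural feature loss, and all names are illustrative assumptions, not the authors' API.

```python
import torch

N = 64
density = torch.rand(1, 1, N, N)                    # source smoke density
psi = torch.zeros(1, 1, N, N, requires_grad=True)   # stream function -> incompressible part
phi = torch.zeros(1, 1, N, N, requires_grad=True)   # scalar potential -> irrotational part

def grad2d(f):
    # central differences, replicate-padded back to the original size
    fx = (f[..., :, 2:] - f[..., :, :-2]) / 2.0
    fy = (f[..., 2:, :] - f[..., :-2, :]) / 2.0
    fx = torch.nn.functional.pad(fx, (1, 1, 0, 0), mode="replicate")
    fy = torch.nn.functional.pad(fy, (0, 0, 1, 1), mode="replicate")
    return fx, fy

def velocity(psi, phi):
    # incompressible: u = (d psi/dy, -d psi/dx); irrotational: grad(phi)
    px, py = grad2d(psi)
    gx, gy = grad2d(phi)
    return torch.cat([py + gx, -px + gy], dim=1)    # channels: (vx, vy)

def advect(d, v):
    # semi-Lagrangian advection: backtrace sample positions by one step
    ys, xs = torch.meshgrid(torch.linspace(-1, 1, N),
                            torch.linspace(-1, 1, N), indexing="ij")
    base = torch.stack([xs, ys], dim=-1).unsqueeze(0)
    back = base - (2.0 / N) * v.permute(0, 2, 3, 1)
    return torch.nn.functional.grid_sample(d, back, align_corners=True)

def style_loss(d):
    return ((d - d.mean()) ** 2).mean()             # placeholder for a neural style loss

opt = torch.optim.Adam([psi, phi], lr=1e-2)
for step in range(200):
    opt.zero_grad()
    loss = style_loss(advect(density, velocity(psi, phi)))
    loss.backward()
    opt.step()
```

    Because the incompressible part comes from a stream function and the irrotational part from a scalar potential, trading off the two terms gives the direct divergence control the abstract refers to.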

    Lagrangian Neural Style Transfer for Fluids

    Full text link
Artistically controlling the shape, motion, and appearance of fluid simulations poses major challenges in visual effects production. In this paper, we present a neural style transfer approach from images to 3D fluids formulated in a Lagrangian viewpoint. Using particles for style transfer has unique benefits compared to grid-based techniques. Attributes are stored on the particles and hence are trivially transported by the particle motion. This intrinsically ensures temporal consistency of the optimized stylized structure and notably improves the resulting quality. At the same time, the expensive, recursive alignment of stylization velocity fields required by grid-based approaches becomes unnecessary, reducing the computation time to less than an hour and making neural flow stylization practical in production settings. Moreover, the Lagrangian representation improves artistic control, as it allows for multi-fluid stylization and consistent color transfer from images, and the generality of the method enables stylization of smoke and liquids alike. Comment: ACM Transactions on Graphics (SIGGRAPH 2020), additional materials: http://www.byungsoo.me/project/lnst/index.htm
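    A minimal sketch of the Lagrangian idea: stylized attributes (here a per-particle color) live on the particles themselves, so carrying them through time is just moving the particles; no recursive alignment of velocity fields across frames is needed. The color optimization itself is elided, and the names below are assumptions for illustration, not the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
positions = rng.random((1000, 3))   # particle positions
colors = rng.random((1000, 3))      # stylized attribute, optimized once up front

def swirl(p):
    # toy velocity field standing in for the fluid solver's velocities
    x, y, z = p[:, 0] - 0.5, p[:, 1] - 0.5, p[:, 2]
    return np.stack([-y, x, np.zeros_like(z)], axis=1)

def step(positions, velocity_field, dt=0.02):
    # advect particles; attributes need no update because they are
    # stored per particle and simply move with it
    return positions + dt * velocity_field(positions)

for frame in range(100):
    positions = step(positions, swirl)
    # render(positions, colors)  # colors stay attached -> temporal consistency
```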

    Blending liquids

    Get PDF
We present a method for smoothly blending between existing liquid animations. We introduce a semi-automatic method for matching two existing liquid animations, which we use to create new fluid motion that plausibly interpolates the input. Our contributions include a new space-time non-rigid iterative closest point algorithm that incorporates user guidance, a subsampling technique for efficient registration of meshes with millions of vertices, and a fast surface extraction algorithm that produces 3D triangle meshes from a 4D space-time surface. Our technique can be used to instantly create hundreds of new simulations, or to interactively explore complex parameter spaces. Our method is guaranteed to produce output that does not deviate from the input animations, and it generalizes to multiple dimensions. Because our method runs at interactive rates after the initial precomputation step, it has potential applications in games and training simulations.
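    A rough sketch of the matching-plus-interpolation idea: one nearest-neighbor correspondence step (a crude stand-in for the paper's space-time non-rigid ICP with user guidance) followed by linear blending of matched vertex positions to produce an in-between surface. The real method operates on 4D space-time surfaces; the names here are illustrative assumptions.

```python
import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(1)
verts_a = rng.random((5000, 3))   # vertices sampled from animation A
verts_b = rng.random((5000, 3))   # vertices sampled from animation B

def match(source, target):
    # closest-point correspondences; the paper layers non-rigid
    # deformation and user guidance on top of this basic step
    tree = cKDTree(target)
    _, idx = tree.query(source)
    return target[idx]

def blend(source, target, t):
    # linear interpolation along the matched correspondences
    return (1.0 - t) * source + t * match(source, target)

halfway = blend(verts_a, verts_b, 0.5)   # plausible in-between liquid shape
```

    Because every blended vertex lies on the segment between two matched input vertices, the output stays bounded by the input animations, consistent with the guarantee the abstract states.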

    Bidirectionally Deformable Motion Modulation For Video-based Human Pose Transfer

    Full text link
Video-based human pose transfer is a video-to-video generation task that animates a plain source human image based on a series of target human poses. Given the difficulty of transferring highly structured garment patterns and discontinuous poses, existing methods often generate unsatisfactory results such as distorted textures and flickering artifacts. To address these issues, we propose a novel Deformable Motion Modulation (DMM) that utilizes geometric kernel offsets with adaptive weight modulation to simultaneously perform feature alignment and style transfer. Unlike the standard style modulation used in style transfer, the proposed modulation mechanism adaptively reconstructs smoothed frames from style codes according to the object shape through an irregular receptive field of view. To enhance spatio-temporal consistency, we leverage bidirectional propagation to extract the hidden motion information from a warped image sequence generated by noisy poses. The proposed feature propagation significantly enhances motion prediction ability through forward and backward propagation. Both quantitative and qualitative experimental results demonstrate superiority over state-of-the-art methods in terms of image fidelity and visual continuity. The source code is publicly available at github.com/rocketappslab/bdmm. Comment: ICCV 202
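    A minimal sketch of the deformable-modulation idea the abstract describes: predicted offsets give the irregular receptive field (feature alignment), while a per-kernel mask predicted from a style code modulates the sampling weights (style transfer). This is an illustration built on torchvision's deform_conv2d, not the authors' implementation; all module names and dimensions are assumptions.

```python
import torch
import torch.nn as nn
from torchvision.ops import deform_conv2d

class DeformableModulation(nn.Module):
    def __init__(self, channels=64, style_dim=128, k=3):
        super().__init__()
        self.k = k
        self.weight = nn.Parameter(torch.randn(channels, channels, k, k) * 0.01)
        self.to_offset = nn.Conv2d(channels, 2 * k * k, 3, padding=1)  # geometric kernel offsets
        self.to_mask = nn.Linear(style_dim, k * k)                     # adaptive modulation from style code

    def forward(self, feat, style):
        b, _, h, w = feat.shape
        offset = self.to_offset(feat)                       # (B, 2*k*k, H, W)
        mask = torch.sigmoid(self.to_mask(style))           # (B, k*k)
        mask = mask.view(b, -1, 1, 1).expand(-1, -1, h, w)  # broadcast over space
        return deform_conv2d(feat, offset, self.weight,
                             padding=self.k // 2, mask=mask)

feat = torch.randn(2, 64, 32, 32)            # features of a warped frame
style = torch.randn(2, 128)                  # style code
out = DeformableModulation()(feat, style)    # (2, 64, 32, 32)
```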

    A Hip-Hop Joint: Thinking Architecturally About Blackness

    Get PDF
“A Hip-Hop Joint: Thinking Architecturally About Blackness” begins by recognizing that hip-hop visual culture’s rapid global expansion over the last four decades complicates its lasting connection to blackness. Instead of arguing that blackness is the content of contemporary hip-hop, this project considers blackness as the aesthetic that coheres the diffuse genre. Thus, blackness serves a distinctly architectural function in hip-hop visual culture: it is the architectonic logic of the genre. This project therefore illustrates the value of alternative definitions of blackness; specifically, this dissertation approaches blackness as a distinct set of spatial relations that can be observed in the many places and spaces where hip-hop is produced and consumed. “A Hip-Hop Joint” argues that blackness and hip-hop exist in a recursive loop: blackness generates the spatial organization of hip-hop, and hip-hop is so racially charged that it produces blackness. As a result, hip-hop images can serve as sites for unexpected encounters with blackness; specifically, they can visualize blackness in spaces that are not occupied by actual black bodies. Because visual culture organizes space through the positioning of the black body, this dissertation argues that hip-hop images which defy the presumed appearance and visibility of blackness are capable of reconfiguring not only image relations but also the aesthetics of anti-blackness. This project draws on black studies, visual culture studies, and architectural theory. The visual objects analyzed include music videos directed by Hype Williams, Beyoncé’s “Formation,” WorldStarHipHop.com, William Pope.L’s “Claim,” the trailer for Apollo Brown’s Thirty Eight album, and “Until the Quiet Comes” directed by Kahlil Joseph.

    The flesh and blood of embodied understanding

    Full text link

    VELUM: A 3D Puzzle/Exploration Game Designed Using Crowdsourced AI Facial Analysis

    Get PDF
Velum is a first-person 3D puzzle/exploration game set in a timeless version of the Boston Public Garden. The project’s narrative framework and aesthetics are based on one of the Garden’s most prominent features, the Ether Monument, which commemorates the 1846 discovery of diethyl ether’s effectiveness as a medical anesthetic. A sequence of nine abstract challenges is rewarded by the progressive revelation of the player’s mysterious identity and purpose. The puzzle design was informed by crowdsourced playtesting with 300+ volunteers, combining standard data telemetry with AI-based facial image analysis capable of mapping player emotions to gameplay events.
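    A small sketch of the kind of alignment the abstract describes: matching timestamped gameplay telemetry against per-frame emotion estimates from facial analysis, so each puzzle event can be annotated with the player's nearest emotional reading. The column names and the join tolerance are assumptions, not details from the project.

```python
import pandas as pd

telemetry = pd.DataFrame({
    "t": [12.4, 33.0, 57.8],   # seconds into the session
    "event": ["puzzle_start", "hint_used", "puzzle_solved"],
})
emotions = pd.DataFrame({
    "t": [12.0, 12.5, 33.1, 57.5],
    "emotion": ["neutral", "concentration", "frustration", "joy"],
    "confidence": [0.71, 0.64, 0.82, 0.90],
})

# nearest-in-time join: annotate each gameplay event with the closest
# emotion estimate within a one-second window
annotated = pd.merge_asof(telemetry.sort_values("t"), emotions.sort_values("t"),
                          on="t", direction="nearest", tolerance=1.0)
print(annotated)
```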