
    2.5D cartoon models

    We present a way to bring cartoon objects and characters into the third dimension by giving them the ability to rotate and be viewed from any angle. We show how 2D vector art drawings of a cartoon from different views can be used to generate a novel structure, the 2.5D cartoon model, which can be used to simulate 3D rotations and generate plausible renderings of the cartoon from any view. 2.5D cartoon models are easier to create than full 3D models, and they retain the 2D nature of hand-drawn vector art, supporting a wide range of stylizations that need not correspond to any real 3D shape.
    MathWorks, Inc. (Fellowship)
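    To make the construction concrete, here is a minimal sketch of the core 2.5D idea under simplifying assumptions: each 2D shape gets a 3D anchor fitted by least squares from its drawn positions in a few key views (orthographic cameras are assumed here), and a novel view is rendered by re-projecting that anchor. The function names and camera model are illustrative, not the paper's actual formulation.

    # Minimal sketch of the 2.5D idea: recover a 3D anchor position for each
    # 2D shape from its hand-placed positions in a few key views, then project
    # that anchor into an arbitrary new view. Orthographic cameras and the
    # function names here are illustrative assumptions, not the paper's API.
    import numpy as np

    def view_matrix(yaw):
        """Top 2x3 rows of an orthographic camera rotated by `yaw` about y."""
        c, s = np.cos(yaw), np.sin(yaw)
        return np.array([[c, 0.0, s],
                         [0.0, 1.0, 0.0]])

    def fit_anchor(yaws, positions_2d):
        """Least-squares 3D point whose projections match the drawn 2D anchors."""
        A = np.vstack([view_matrix(y) for y in yaws])              # (2V, 3)
        b = np.concatenate([np.asarray(p) for p in positions_2d])  # (2V,)
        x, *_ = np.linalg.lstsq(A, b, rcond=None)
        return x                                                   # 3D anchor

    def project(anchor_3d, yaw):
        return view_matrix(yaw) @ anchor_3d

    # Anchor of one shape drawn in front (0 deg) and side (90 deg) views:
    p3 = fit_anchor([0.0, np.pi / 2], [(0.2, 0.5), (-0.1, 0.5)])
    print(project(p3, np.pi / 4))  # plausible anchor position at 45 degrees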

    Electric currents in flare ribbons: observations and 3D standard model

    We present for the first time the evolution of the photospheric electric currents during an eruptive X-class flare, accurately predicted by the standard 3D flare model. We analyze this evolution for the February 15, 2011 flare using HMI/SDO magnetic observations and find that localized currents in J-shaped ribbons increase to double their pre-flare intensity. Our 3D flare model, developed with the OHM code, suggests that these current ribbons, which develop at the location of EUV brightenings seen in AIA imagery, are driven by the collapse of the flare's coronal current layer. These findings of increased currents restricted to localized ribbons are consistent with the overall free-energy decrease during a flare, and the shape of the ribbons also indicates how twisted the erupting flux rope is. Finally, this study further strengthens the close correspondence between the theoretical predictions of the standard 3D model and flare observations, indicating that the key physical elements are incorporated in the model.
    Comment: 12 pages, 7 figures
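    For reference, photospheric current maps of this kind are commonly derived from a vector magnetogram via Ampere's law, j_z = (dB_y/dx - dB_x/dy) / mu_0. The sketch below illustrates this on a toy sheared field; the array names and pixel scale are assumptions for illustration, not the paper's pipeline.

    # Sketch of the standard way photospheric currents are mapped from a
    # vector magnetogram such as HMI's: the vertical current density follows
    # from Ampere's law, j_z = (dBy/dx - dBx/dy) / mu0. Array names and the
    # pixel scale are assumptions for illustration.
    import numpy as np

    MU0 = 4e-7 * np.pi          # vacuum permeability, T m / A

    def vertical_current_density(bx, by, pixel_size_m):
        """j_z (A/m^2) from the horizontal field components on a uniform grid."""
        dby_dx = np.gradient(by, pixel_size_m, axis=1)
        dbx_dy = np.gradient(bx, pixel_size_m, axis=0)
        return (dby_dx - dbx_dy) / MU0

    # Toy field: a sheared patch produces a localized ribbon of current.
    y, x = np.mgrid[-50:50, -50:50] * 1.0
    bx = 1e-2 * np.tanh(y / 10.0)           # Tesla (100 G), shear across y=0
    by = np.zeros_like(bx)
    jz = vertical_current_density(bx, by, pixel_size_m=360e3)  # ~HMI pixel
    print(np.abs(jz).max())   # current density concentrated along the shear line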

    3D performance capture for facial animation

    This work describes how a photogrammetry-based 3D capture system can be used as an input device for animation. The 3D Dynamic Capture System is used to capture the motion of a human face, which is extracted from a sequence of 3D models captured at TV frame rate. Initially, the positions of a set of landmarks on the face are extracted. These landmarks are then used to provide motion data in two different ways. First, a high-level description of the movements is extracted, which can be used as input to a procedural animation package (e.g., CreaToon). Second, the landmarks can be used as registration points for a conformation process in which the model to be animated is modified to match the captured model. This approach gives a new sequence of models, which have the structure of the drawn model but the movement of the captured sequence.
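    As an illustration of the conformation step, one generic way to realize it is to interpolate the landmark displacements over all mesh vertices with a radial basis function. The sketch below uses SciPy's RBFInterpolator on stand-in data; it is not necessarily the system's exact solver.

    # Sketch of the conformation step: landmark displacements between the
    # captured model and the model to be animated are interpolated over all
    # vertices with a radial basis function, warping one mesh onto the other.
    # This is a generic scattered-data approach, not the paper's exact solver.
    import numpy as np
    from scipy.interpolate import RBFInterpolator

    def conform(vertices, src_landmarks, dst_landmarks):
        """Warp `vertices` so that src landmarks land on dst landmarks."""
        offsets = dst_landmarks - src_landmarks            # (L, 3)
        warp = RBFInterpolator(src_landmarks, offsets,
                               kernel="thin_plate_spline")
        return vertices + warp(vertices)

    rng = np.random.default_rng(0)
    verts = rng.normal(size=(500, 3))       # stand-in for a face mesh
    src = verts[:12]                        # 12 tracked facial landmarks
    dst = src + rng.normal(scale=0.05, size=src.shape)  # captured frame
    print(conform(verts, src, dst).shape)   # (500, 3) deformed mesh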

    PAI-Diffusion: Constructing and Serving a Family of Open Chinese Diffusion Models for Text-to-image Synthesis on the Cloud

    Text-to-image synthesis for the Chinese language poses unique challenges due to its large vocabulary size and intricate character relationships. While existing diffusion models have shown promise in generating images from textual descriptions, they often neglect domain-specific contexts and lack robustness in handling the Chinese language. This paper introduces PAI-Diffusion, a comprehensive framework that addresses these limitations. PAI-Diffusion incorporates both general and domain-specific Chinese diffusion models, enabling the generation of contextually relevant images. It explores the potential of using LoRA and ControlNet for fine-grained image style transfer and image editing, empowering users with enhanced control over image generation. Moreover, PAI-Diffusion seamlessly integrates with Alibaba Cloud's Machine Learning Platform for AI, providing accessible and scalable solutions. All the Chinese diffusion model checkpoints, LoRAs, and ControlNets, including domain-specific ones, are publicly available. A user-friendly Chinese WebUI and the diffusers-api elastic inference toolkit, also open-sourced, further facilitate the easy deployment of PAI-Diffusion models in various environments, making it a valuable resource for Chinese text-to-image synthesis.
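    As a usage sketch, a released checkpoint of this kind can be loaded with the Hugging Face diffusers library roughly as follows; the model ID below is a placeholder assumption and should be replaced with the actual PAI-Diffusion checkpoint name.

    # Minimal sketch of serving one of the released Chinese checkpoints with
    # the Hugging Face diffusers library. The model ID below is a placeholder
    # assumption; substitute the actual PAI-Diffusion checkpoint name.
    import torch
    from diffusers import StableDiffusionPipeline

    pipe = StableDiffusionPipeline.from_pretrained(
        "alibaba-pai/pai-diffusion-general-large-zh",  # assumed checkpoint ID
        torch_dtype=torch.float16,
    )
    pipe = pipe.to("cuda")

    # Chinese prompt: "a Chinese landscape painting, mountains and flowing water"
    image = pipe("一幅中国山水画，高山流水", num_inference_steps=30).images[0]
    image.save("landscape.png")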

    Ink-and-Ray: Bas-Relief Meshes for Adding Global Illumination Effects to Hand-Drawn Characters

    We present a new approach for generating global illumination renderings of hand-drawn characters using only a small set of simple annotations. Our system exploits the concept of bas-relief sculptures, making it possible to generate 3D proxies suitable for rendering without requiring side views or extensive user input. We formulate an optimization process that automatically constructs approximate geometry sufficient to evoke the impression of a consistent 3D shape. The resulting renders provide the richer stylization capabilities of 3D global illumination while still retaining the 2D hand-drawn look and feel. We demonstrate our approach on a varied set of hand-drawn images and animations, showing that even in comparison to ground-truth renderings of full 3D objects, our bas-relief approximation is able to produce convincing global illumination effects, including self-shadowing, glossy reflections, and diffuse color bleeding.
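    To give a flavor of how such a 2.5D proxy can be built, the sketch below uses one common inflation technique: pin the silhouette at zero height and solve a Poisson equation for the interior, yielding a smooth bas-relief height field whose normals can drive shading. This is a simplified stand-in with assumed parameters, not the paper's actual optimization.

    # Sketch of one common way to get a bas-relief proxy from a drawing: keep
    # the silhouette fixed at zero height and "inflate" the interior by solving
    # a Poisson equation with Jacobi iterations. This illustrates the idea of a
    # 2.5D proxy; the paper's actual optimization is more elaborate.
    import numpy as np

    def inflate(mask, pressure=1.0, iters=2000):
        """Height field h with laplacian(h) = -pressure inside mask, h=0 outside."""
        h = np.zeros(mask.shape, dtype=float)
        for _ in range(iters):
            avg = 0.25 * (np.roll(h, 1, 0) + np.roll(h, -1, 0) +
                          np.roll(h, 1, 1) + np.roll(h, -1, 1))
            h = np.where(mask, avg + 0.25 * pressure, 0.0)
        return h

    # A circular "character" mask inflates into a smooth dome-like relief.
    yy, xx = np.mgrid[-32:32, -32:32]
    mask = xx**2 + yy**2 < 28**2
    relief = inflate(mask)
    print(relief.max())   # apex height of the dome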

    Disc wind models for FU Ori objects

    We present disc wind models aimed at reproducing the main features of the strong Na I resonance-line P-Cygni profiles in the rapidly accreting pre-main-sequence FU Ori objects. We conducted Monte Carlo radiative transfer simulations for a standard magnetocentrifugally driven (MHD) wind model and our own "Genwind" models, which allow for a more flexible wind parameterisation. We find that the fiducial MHD wind and similar Genwind models, which have flows emerging outward from the inner disc edge and thus have polar cavities with no absorbing gas, cannot reproduce the deep, wide Na I absorption lines in FU Ori objects viewed at low inclination. We find that it is necessary to include an "inner wind" filling this polar cavity to reproduce the observations. In addition, our models assuming pure scattering source functions in the Sobolev approximation at intermediate viewing angles ($30^{\circ} \lesssim i \lesssim 60^{\circ}$) do not yield sufficiently deep line profiles. Assuming complete absorption yields better agreement with observations, but simple estimates strongly suggest that pure scattering should be a much better approximation. The discrepancy may indicate that the Sobolev approximation is not applicable, possibly due to turbulence or non-monotonic velocity fields; there is some observational evidence for the latter. Our results provide guidance for future attempts to constrain FU Ori wind properties using full MHD wind simulations, by pointing to the importance of the boundary conditions necessary to give rise to an inner wind, and by suggesting that the winds must be turbulent to produce sufficiently deep line profiles.
    Comment: 12 pages, 17 figures, accepted for publication in MNRAS
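    For context on the scattering-versus-absorption point, the Sobolev quantities at stake can be written down directly: in a monotonic flow the line optical depth is tau = (pi e^2 / m_e c) f lambda n_l / |dv/ds|, and the photon escape probability is beta = (1 - e^{-tau}) / tau. The sketch below evaluates them for assumed, order-of-magnitude Na I D inputs, not fitted FU Ori parameters.

    # Worked sketch of the Sobolev quantities behind the discussion above: in a
    # monotonic flow the line optical depth depends only on the local velocity
    # gradient, tau = (pi e^2 / m_e c) f lambda n_l / |dv/ds|, and the photon
    # escape probability is beta = (1 - exp(-tau)) / tau. Inputs are assumed,
    # order-of-magnitude values, not fitted FU Ori parameters.
    import numpy as np

    # CGS constants
    E_CHARGE, M_E, C = 4.803e-10, 9.109e-28, 2.998e10

    def sobolev_tau(f_osc, lambda_cm, n_lower, dv_ds):
        """Sobolev optical depth of a resonance line."""
        return (np.pi * E_CHARGE**2 / (M_E * C)) * f_osc * lambda_cm * n_lower / dv_ds

    def escape_probability(tau):
        """Fraction of line photons escaping a resonant zone of depth tau."""
        return np.where(tau < 1e-6, 1.0, -np.expm1(-tau) / tau)

    # Na I D2: f ~ 0.64, lambda ~ 5890 A; assumed wind values:
    tau = sobolev_tau(0.64, 5890e-8, n_lower=1e2, dv_ds=1e-6)  # cm^-3, (cm/s)/cm
    print(tau, escape_probability(tau))   # optically thick -> photons trapped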