    Text-guided Image-and-Shape Editing and Generation: A Short Survey

    Image and shape editing are ubiquitous in digital artwork. Graphics algorithms help artists and designers achieve their editing intents without tedious manual retouching. With recent advances in machine learning, artists' editing intents can even be driven by text, using a variety of well-trained neural networks. These networks have seen extensive success in tasks such as generating photorealistic images, artworks, and human poses, stylizing meshes from text, and performing auto-completion given image and shape priors. In this short survey, we provide an overview of over 50 papers on state-of-the-art (text-guided) image-and-shape generation techniques. We start with an overview of recent editing algorithms in the introduction. We then provide a comprehensive review of text-guided editing techniques for 2D and 3D independently, where each sub-section begins with a brief background introduction. We also contextualize editing algorithms under recent implicit neural representations. Finally, we conclude the survey with a discussion of existing methods and potential research ideas.

    ODE-Driven Sketch-Based Organic Modelling

    How to efficiently create 3D models from 2D sketches is an important problem. In this paper, we propose a sketch-based and ordinary differential equation (ODE) driven modelling technique to tackle it. We first generate 2D silhouette contours of a 3D model. Then, we select proper primitives for each of the corresponding silhouette contours. After that, we develop an ODE-driven and sketch-guided deformation method, which uses ODE-based deformations to deform the primitives to exactly match the generated 2D silhouette contours in one view plane. Our experiments demonstrate that the proposed approach can create 3D models from 2D silhouette contours easily and efficiently.
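
    As a rough illustration of the deformation step, the sketch below (our own simplification, not the authors' formulation) treats silhouette matching as an ODE: sampled points on a primitive contour flow toward matched points on a hypothetical target silhouette under dp/dt = q - p, integrated with SciPy.

        # Minimal sketch: deform a circular primitive contour toward a target
        # silhouette by integrating dp/dt = q - p for each sampled boundary point.
        # The elliptical target and the one-to-one point matching are assumptions
        # made purely for illustration.
        import numpy as np
        from scipy.integrate import solve_ivp

        n = 200
        t = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
        circle = np.stack([np.cos(t), np.sin(t)], axis=1)              # primitive silhouette
        target = np.stack([1.5 * np.cos(t), 0.8 * np.sin(t)], axis=1)  # hypothetical target silhouette

        def flow(_, p_flat):
            p = p_flat.reshape(-1, 2)
            return (target - p).ravel()          # velocity field pulling points toward the target

        sol = solve_ivp(flow, (0.0, 5.0), circle.ravel(), t_eval=[5.0])
        deformed = sol.y[:, -1].reshape(-1, 2)
        print(np.abs(deformed - target).max())   # residual shrinks as integration time grows

    The paper's method deforms the primitives themselves and matches the silhouettes exactly; the linear pull field above only conveys the ODE-driven flavour of the approach.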

    Simulation of perspective by nonlinear transformations

    A feature of the brain's processing of visual objects is that objects much farther from the eye look smaller than objects closer to the eye. We show that a family of nonlinear transformations, which we also call compactifications, qualitatively simulates this property of keeping objects in perspective. These transformations project objects in a plane onto a spherical shell. It is then shown that an observer located at a fixed point on the axis of the sphere sees the projected objects on the sphere in perspective; namely, objects that are farther from the observation point appear smaller. Examples are provided. This is a departure from traditional approaches, which use linearity and project objects from one plane onto another plane.
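
    The sketch below is a numerical illustration of this idea under our own assumptions (the inverse stereographic projection stands in for the compactification, and the observer sits above the sphere on its axis): unit-length segments drawn farther out in the plane subtend smaller angles at the observer once projected onto the sphere.

        # Minimal sketch: map plane points onto the unit sphere via the inverse
        # stereographic projection (projection point at the north pole), then
        # measure the angle a unit segment subtends at an observer on the axis.
        import numpy as np

        def to_sphere(x, y):
            r2 = x * x + y * y
            return np.array([2.0 * x, 2.0 * y, r2 - 1.0]) / (1.0 + r2)

        observer = np.array([0.0, 0.0, 2.0])     # assumed viewpoint on the sphere's axis

        def subtended_angle(p, q):
            u, v = p - observer, q - observer
            c = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
            return np.arccos(np.clip(c, -1.0, 1.0))

        for d in [1.0, 5.0, 25.0, 125.0]:
            a, b = to_sphere(d, 0.0), to_sphere(d + 1.0, 0.0)   # unit segment at distance d
            print(d, subtended_angle(a, b))                     # angle decreases as d grows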

    Abstract Controlled-Topology Filtering

    Many applications require the extraction of isolines and isosurfaces from scalar functions defined on regular grids. These scalar functions may have many different origins, from MRI and CT scan data to terrain data or simulation results. As a result of noise and other artifacts, curves and surfaces obtained by standard extraction algorithms often suffer from topological irregularities and geometric noise. While it is possible to remove topological and geometric noise as a post-processing step, when a large number of isolines are of interest there is a considerable advantage in filtering the scalar function directly. While most smoothing filters result in gradual simplification of the topological structure of contours, new topological features typically emerge and disappear during the smoothing process. In this paper, we describe an algorithm for filtering functions defined on regular 2D grids with controlled topology changes, which ensures that the topological structure of the set of contour lines of the function is progressively simplified.
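
    To illustrate the problem the paper targets (this is not its algorithm), the sketch below applies ordinary Gaussian smoothing to a noisy scalar grid and counts connected components of one superlevel set: the topology of the contour set changes as the filter width varies, rather than being simplified in a controlled way.

        # Minimal sketch: uncontrolled smoothing changes contour-set topology.
        # The synthetic field, noise level, and threshold are arbitrary choices.
        import numpy as np
        from scipy.ndimage import gaussian_filter, label

        rng = np.random.default_rng(0)
        y, x = np.mgrid[0:128, 0:128]
        f = np.sin(x / 10.0) * np.cos(y / 10.0) + 0.3 * rng.standard_normal((128, 128))

        _, n_raw = label(f > 0.5)                # components of the superlevel set {f > 0.5}
        print("raw", n_raw)
        for sigma in [1.0, 2.0, 4.0]:
            g = gaussian_filter(f, sigma)
            _, n_smooth = label(g > 0.5)
            print(sigma, n_smooth)               # component count varies with the filter width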

    Structured Annotations for 2D-to-3D Modeling

    We present a system for 3D modeling of free-form surfaces from 2D sketches. Our system frees users to create 2D sketches from arbitrary angles using their preferred tool, which may include pencil and paper. A 3D model is created by placing primitives and annotations on the 2D image. Our primitives are based on commonly used sketching conventions and allow users to maintain a single view of the model. This eliminates the frequent view changes inherent in existing 3D modeling tools, both traditional and sketch-based, and enables users to match input to the 2D guide image. Our annotations (same-length and same-angle constraints, alignment, mirror symmetry, and connection curves) allow the user to communicate higher-level semantic information; through them our system builds a consistent model even in cases where the original image is inconsistent. We present the results of a user study comparing our approach to a conventional “sketch-rotate-sketch” workflow.
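
    As a rough sketch of how such annotations could be recorded (the names and fields below are our own, not the paper's implementation), each annotation can be stored as a typed constraint over the primitives placed on the 2D image and enforced later when the 3D model is solved for.

        # Minimal, illustrative data model for primitives and annotation constraints.
        from dataclasses import dataclass, field

        @dataclass
        class Primitive:
            kind: str                 # e.g. "generalized cylinder", "ellipsoid"
            points_2d: list           # image-space control points placed by the user

        @dataclass
        class Annotation:
            kind: str                 # "same_length", "same_angle", "alignment", "mirror", "connection"
            targets: list             # indices of the primitives (or parts) being constrained

        @dataclass
        class SketchModel:
            primitives: list = field(default_factory=list)
            annotations: list = field(default_factory=list)

            def add_same_length(self, i, j):
                # record that two primitive parts should end up with equal length
                # in 3D, even if the 2D drawing is inconsistent
                self.annotations.append(Annotation("same_length", [i, j]))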