92 research outputs found

    2.5D cartoon models

    We present a way to bring cartoon objects and characters into the third dimension by giving them the ability to rotate and be viewed from any angle. We show how 2D vector art drawings of a cartoon from different views can be used to generate a novel structure, the 2.5D cartoon model, which can be used to simulate 3D rotations and generate plausible renderings of the cartoon from any view. 2.5D cartoon models are easier to create than full 3D models, and they retain the 2D nature of hand-drawn vector art, supporting a wide range of stylizations that need not correspond to any real 3D shape.
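    At its core, such a model can be sketched as per-part view interpolation: the artist's drawings pin a part's 2D anchor position at a few key view angles, and a novel view blends between the two nearest keys. The sketch below is illustrative only (the function name and the purely linear scheme are assumptions; the actual system interpolates anchor positions and shapes over the full view sphere):

```python
import bisect

def interpolate_part_position(key_views, angle):
    """Interpolate a part's 2D anchor position for a novel view angle.

    key_views: list of (angle_degrees, (x, y)) pairs taken from the
    artist's drawings, sorted by angle. Angles outside the covered
    range clamp to the nearest drawn view.
    """
    angles = [a for a, _ in key_views]
    i = bisect.bisect_right(angles, angle)
    if i == 0:                       # before the first drawn view
        return key_views[0][1]
    if i == len(key_views):         # past the last drawn view
        return key_views[-1][1]
    (a0, (x0, y0)) = key_views[i - 1]
    (a1, (x1, y1)) = key_views[i]
    t = (angle - a0) / (a1 - a0)    # blend factor between the two keys
    return (x0 + t * (x1 - x0), y0 + t * (y1 - y0))
```

    For example, a part anchored at (0, 0) in a front view (0 degrees) and at (10, 2) in a side view (90 degrees) would land at (5, 1) when rendered from 45 degrees.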

    Ink-and-Ray: Bas-Relief Meshes for Adding Global Illumination Effects to Hand-Drawn Characters

    We present a new approach for generating global illumination renderings of hand-drawn characters using only a small set of simple annotations. Our system exploits the concept of bas-relief sculptures, making it possible to generate 3D proxies suitable for rendering without requiring side views or extensive user input. We formulate an optimization process that automatically constructs approximate geometry sufficient to evoke the impression of a consistent 3D shape. The resulting renders provide the richer stylization capabilities of 3D global illumination while still retaining the 2D hand-drawn look-and-feel. We demonstrate our approach on a varied set of hand-drawn images and animations, showing that even in comparison to ground truth renderings of full 3D objects, our bas-relief approximation is able to produce convincing global illumination effects, including self-shadowing, glossy reflections, and diffuse color bleeding.
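    A common way to build an inflation-style proxy of this kind (used here only as an illustrative stand-in for the paper's annotation-driven optimization, which additionally handles layering and depth ordering) is to inflate the character's 2D mask into a height field by solving a Poisson equation, laplacian(h) = -c inside the mask with h = 0 outside:

```python
import numpy as np

def inflate_mask(mask, c=1.0, iters=500):
    """Approximate a bas-relief height field from a binary mask by
    Jacobi iteration on laplacian(h) = -c inside the mask, h = 0
    elsewhere. A deliberately simplified sketch, not the paper's solver."""
    h = np.zeros(mask.shape, dtype=float)
    for _ in range(iters):
        # Jacobi update: average of the 4 neighbours plus the source term,
        # applied only inside the mask; outside stays pinned to zero.
        avg = 0.25 * (np.roll(h, 1, 0) + np.roll(h, -1, 0)
                      + np.roll(h, 1, 1) + np.roll(h, -1, 1))
        h = np.where(mask, avg + 0.25 * c, 0.0)
    return h
```

    The result bulges smoothly toward the middle of the shape, which is often enough geometry to evoke plausible shading even though it matches no real 3D object.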

    Sketch-based character prototyping by deformation

    Master's thesis (Master of Science)

    User Directed Multi-view-stereo

    Depth reconstruction from video footage and image collections is a fundamental part of many modelling and image-based rendering applications. However, real-world scenes often contain limited texture information, repeated elements and other ambiguities which remain challenging for fully automatic algorithms. This paper presents a technique that combines intuitive user constraints with dense multi-view stereo reconstruction. By providing annotations in the form of simple paint strokes, a user can guide a multi-view stereo algorithm and avoid common failure cases. We show how smoothness, discontinuity and depth ordering constraints can be incorporated directly into a variational optimization framework for multi-view stereo. Our method avoids the need for heuristic approaches that edit a depth map in a sequential process, and avoids requiring the user to accurately segment object boundaries or to directly model geometry. We show how, with a small amount of intuitive input, a user may create improved depth maps in challenging cases for multi-view stereo.
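    The flavour of such stroke-based guidance can be sketched on a toy 1D signal: a "smooth here" paint stroke simply boosts the smoothness weight in the energy being minimized. All names, weights and the gradient-descent solver below are hypothetical; the paper couples terms like these into a dense variational multi-view stereo solver, not a toy signal:

```python
import numpy as np

def refine_depth(d_obs, stroke, lam=0.1, lam_stroke=10.0,
                 iters=2000, step=0.02):
    """Minimize E(d) = sum_i (d_i - d_obs_i)^2 + sum_i w_i (d_i - d_{i+1})^2
    by gradient descent, where the smoothness weight w_i jumps from lam to
    lam_stroke on edges covered by the user's 'smooth' stroke."""
    d = d_obs.astype(float).copy()
    # per-edge weight: strong smoothing where the stroke covers both pixels
    w = np.where(stroke[:-1] & stroke[1:], lam_stroke, lam)
    for _ in range(iters):
        grad = 2.0 * (d - d_obs)          # data term
        diff = d[:-1] - d[1:]
        grad[:-1] += 2.0 * w * diff       # smoothness term, left pixel
        grad[1:] -= 2.0 * w * diff        # smoothness term, right pixel
        d -= step * grad
    return d
```

    On a depth signal with a one-pixel outlier, a stroke over the region flattens the spike, while without the stroke the weak default smoothness leaves it largely intact.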

    Interactive Space-Time Reconstruction in Computer Graphics

    High-quality dense spatial and/or temporal reconstructions and correspondence maps from camera images, be it optical flow, stereo or scene flow, are an essential prerequisite for a multitude of computer vision and graphics tasks, e.g. scene editing or view interpolation in visual media production. Due to the ill-posed nature of the estimation problem in typical setups (i.e. limited number of cameras, limited frame rate), automated estimation approaches are prone to erroneous correspondences and subsequent quality degradation in many non-trivial cases such as occlusions, ambiguous movements, long displacements, or low texture. While improving estimation algorithms is one obvious possible direction, this thesis complementarily concerns itself with creating intuitive, high-level user interactions that lead to improved correspondence maps and scene reconstructions. Where visually convincing results are essential, rendering artifacts resulting from estimation errors are usually repaired by hand with image editing tools, which is time-consuming and therefore costly. My new user interactions, which integrate human scene recognition capabilities to guide a semi-automatic correspondence or scene reconstruction algorithm, save considerable effort and enable faster and more efficient production of visually convincing rendered images.

    Generation of 3D characters from existing cartoons and a unified pipeline for animation and video games.

    Despite the remarkable growth of 3D animation in the last twenty years, 2D is still popular today and is often employed for both films and video games; in fact, 2D offers important economic and artistic advantages in production. This thesis introduces an innovative system to generate 3D characters from 2D cartoons while maintaining important 2D features in 3D as well. Handling 2D characters and animation in a 3D environment is not a trivial task, however, as they do not possess any depth information. Three different solutions are proposed: a 2.5D modelling method, which exploits billboarding, parallax scrolling and 2D shape interpolation to simulate depth between the different body parts of a character; and two full 3D solutions, one based on inflation and supported by a surface registration method, and one that produces more accurate approximations by using information from the side views to solve an optimization problem. These methods have been integrated into a new unified pipeline, built around a game engine, that can be used for both animation and video game production. A unified pipeline brings several benefits to the production of both 2D and 3D content: on one hand, assets can be shared across productions and media; on the other, real-time rendering for animated films allows immediate previews of scenes and offers artists a way to experiment more during the making of a scene.
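    The parallax-scrolling ingredient of the 2.5D method can be illustrated with the classic inverse-depth rule: under horizontal camera motion, nearer billboard layers shift more on screen than farther ones, which is what creates the impression of depth between body parts. A minimal sketch (the function and its parameters are hypothetical, not the thesis's actual model):

```python
def parallax_offset(camera_shift, layer_depth, focal=1.0):
    """Screen-space shift of a billboard layer for a given horizontal
    camera shift: offset scales with 1 / depth, so close layers
    (small depth) move more than distant ones."""
    return focal * camera_shift / layer_depth
```

    Applying this per body-part layer, with each part assigned its own depth, yields the characteristic 2.5D sliding effect without any true 3D geometry.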

    Single View Modeling and View Synthesis

    This thesis develops new algorithms to produce 3D content from a single camera. Today, amateurs can use hand-held camcorders to capture and display the 3D world in 2D, using mature technologies. However, there is a strong desire to record and re-explore the 3D world in 3D. To achieve this goal, current approaches usually make use of a camera array, which suffers from tedious setup and calibration processes, as well as a lack of portability, limiting its application to lab experiments. In this thesis, I produce 3D content using a single camera, making it as simple as shooting pictures. This requires a new front-end capture device rather than a regular camcorder, as well as more sophisticated algorithms. First, in order to capture highly detailed object surfaces, I designed and developed a depth camera based on a novel technique called light fall-off stereo (LFS). The LFS depth camera outputs color+depth image sequences at 30 fps, which is necessary for capturing dynamic scenes. Based on the output color+depth images, I developed a new approach that builds 3D models of dynamic and deformable objects. While the camera can only capture part of a whole object at any instant, partial surfaces are assembled into a complete 3D model by a novel warping algorithm. Inspired by the success of single-view 3D modeling, I extended my exploration to 2D-to-3D video conversion that does not use a depth camera. I developed a semi-automatic system that converts monocular videos into stereoscopic videos via view synthesis. It combines motion analysis with user interaction, aiming to transfer as much of the depth-inference work as possible from the user to the computer. I developed two new methods that analyze the optical flow in order to provide additional qualitative depth constraints. The automatically extracted depth information is presented in the user interface to assist with user labeling work.
    In summary, this thesis develops new algorithms to produce 3D content from a single camera. Depending on the input data, my algorithms can build high-fidelity 3D models of dynamic and deformable objects if depth maps are provided; otherwise, they can turn video clips into stereoscopic video.
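    The principle behind light fall-off stereo follows from the inverse-square law: capturing the same surface point with a point light at distance r and then at r + b, the surface albedo cancels in the intensity ratio, leaving i_near / i_far = ((r + b) / r)^2, so r = b / (sqrt(i_near / i_far) - 1). A sketch of this idealized derivation (real systems must additionally cope with noise and non-point light sources):

```python
import math

def depth_from_falloff(i_near, i_far, baseline):
    """Recover depth r from two intensity measurements of the same point,
    taken with the light at distance r and at r + baseline. The albedo
    cancels in the ratio, so only the 1/r^2 fall-off remains."""
    ratio = i_near / i_far                    # ((r + b) / r)^2
    return baseline / (math.sqrt(ratio) - 1.0)
```

    For instance, intensities of 1/4 and 1/9 with a light moved back by 1 unit imply the point sits 2 units from the first light position.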

    Adding Depth to Cartoons Using Sparse Depth (In)equalities

    This paper presents a novel interactive approach for adding depth information to hand-drawn cartoon images and animations. In comparison to previous depth assignment techniques, our solution requires minimal user effort and enables the creation of consistent pop-ups in a matter of seconds. Inspired by perceptual studies, we formulate a custom-tailored optimization framework that tries to mimic the way a human reconstructs depth information from a single image. Its key advantage is that it completely avoids inputs requiring knowledge of absolute depth and instead uses a set of sparse depth (in)equalities that are much easier to specify. Since these constraints lead to a solution based on quadratic programming that is time-consuming to evaluate, we propose a simple approximate algorithm yielding similar results with much lower computational overhead. We demonstrate its usefulness in the context of a cartoon animation production pipeline, including applications such as enhancement, registration, composition, 3D modelling and stereoscopic display.
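    The spirit of such an approximation can be sketched as a simple relaxation over the sparse constraints: each inequality "region a is closer than region b" pushes the farther region back until every constraint holds. This toy version (hypothetical names; the paper's actual approximation differs) behaves like longest-path propagation and assumes the constraint graph is acyclic:

```python
def depths_from_inequalities(n, pairs, margin=1.0, iters=100):
    """Assign relative depths to n regions from sparse ordering
    constraints: each pair (a, b) means 'region a is closer than
    region b'. Repeatedly push the farther region back by at least
    `margin` until all constraints are satisfied."""
    d = [0.0] * n
    for _ in range(iters):
        changed = False
        for a, b in pairs:
            if d[b] < d[a] + margin:   # constraint violated: fix it
                d[b] = d[a] + margin
                changed = True
        if not changed:                # all (in)equalities hold
            break
    return d
```

    Note that the user never specifies an absolute depth: only the ordering matters, which is exactly what makes the annotations cheap to provide.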

    Gender dysphoria and autism spectrum condition: the development of gender identity

    There is a paucity of research into how gender identity develops, particularly in those with gender dysphoria (GD) and autism spectrum condition (ASC). This study aimed to explore how gender identity develops in assigned males with GD and ASC. Thematic analysis was used to analyse data from eight interviews. Seven main themes were identified: narrative, relational, paradoxical, experience, media, identity, and ASC. Findings suggested participants equate gender identity with biological sex and therefore see it as not open to external influence. Isolation may drive their desire to transition genders, and the media was used to understand GD. Participants related to the genders through gender-stereotypical interests, and it appeared their gender identity had been influenced by their experiences of cis males and females. However, there appeared to be a paradox between how participants perceived females and what it meant for them to be female. Participants felt that ASC largely did not affect their gender identity development.