299 research outputs found

    Sketch-based character prototyping by deformation

    Master's thesis (Master of Science).

    Stereoscopic Sketchpad: 3D Digital Ink

    --Context-- This project looked at the development of a stereoscopic 3D environment in which a user is able to draw freely in all three dimensions. The main focus was on the storage and manipulation of the ‘digital ink’ with which the user draws. For a drawing and sketching package to be effective it must not only have an easy-to-use interface, it must also handle all input data quickly and efficiently so that the user can focus fully on their drawing.

    --Background-- When it comes to sketching in three dimensions, the majority of applications currently available rely on vector-based drawing methods. This is primarily because the applications are designed to take a user's two-dimensional input and transform it into a three-dimensional model. Having the sketch represented as vectors makes it simpler for the program to act upon its geometry and thus convert it to a model. There are a number of methods to achieve this aim, including gesture-based modelling, reconstruction, and blobby inflation. Other vector-based applications focus on the creation of curves, allowing the user to draw within or on existing 3D models; they also allow the user to create wireframe-style models. These stroke-based applications bring the user closer to traditional sketching than the more structured modelling methods detailed above. While the field is currently inundated with vector-based applications focused mainly on sketch-based modelling, there are significantly fewer voxel-based applications. The majority of these focus on the deformation and sculpting of voxmaps, almost the opposite of drawing and sketching, and on the creation of three-dimensional voxmaps from standard two-dimensional pixmaps. How to sketch freely within a scene represented by a voxmap has rarely been explored, which is surprising given that so many of the standard 2D drawing programs in use today are pixel based.

    --Method-- As part of this project a simple three-dimensional drawing program, known as Sketch3D, was designed and implemented in C and C++ using a Model View Controller (MVC) architecture. Due to the modular nature of Sketch3D's system architecture, a range of different data structures can be plugged into the program to represent the ink in a variety of ways. A series of data structures were implemented and tested for efficiency: a simple list, a 3D array, and an octree. They were tested for the time it takes to insert or remove points, how easily points can be manipulated once stored, and how the number of stored points affects draw and rendering times. One of the key issues raised by this project was devising a means by which a user can draw in three dimensions while using only two-dimensional input devices. The method settled upon and implemented uses a mouse or digital pen to sketch as one would in a standard 2D drawing package, with the up and down keyboard keys linked to the current depth, allowing the user to move in and out of the scene as they draw. A couple of user-interface tools were also developed to assist the user: a 3D cursor, and a toggle which, when on, highlights all of the points intersecting the depth plane on which the cursor currently resides. These tools allow the user to see exactly where they are drawing in relation to previously drawn lines.
    --Results-- The tests conducted on the data structures clearly revealed that the octree was the most effective data structure. While not the most efficient in every area, it avoids the major pitfalls of the other structures. The list was extremely quick to render and draw to the screen but suffered severely when finding and manipulating points already stored. In contrast, the three-dimensional array was able to erase or manipulate points effectively, but its draw time rendered the structure effectively useless, taking huge amounts of time to draw each frame. The focus of this research was on how a 3D sketching package would store and access the digital ink. This is just a basis for further research in this area, and many issues touched upon in this paper will require a more in-depth analysis. The primary area of this future research would be the creation of an effective user interface and the introduction of regular sketching-package features such as the saving and loading of images.
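    The octree result above lends itself to a brief illustration. The following C++ sketch shows one plausible shape for such a structure storing digital-ink samples; it is not the Sketch3D code, and the class name, point type, and bucket size are assumptions made purely for illustration.

    // Minimal point-octree sketch (illustrative, not the thesis implementation).
    #include <array>
    #include <cstddef>
    #include <memory>
    #include <vector>

    struct Point3 { float x, y, z; };

    class InkOctree {
    public:
        InkOctree(Point3 lo, Point3 hi) : min_(lo), max_(hi) {}

        void insert(const Point3& p) {
            if (!children_[0] && points_.size() < kBucketSize) {
                points_.push_back(p);           // leaf with room: store directly
                return;
            }
            if (!children_[0]) subdivide();     // full leaf: split into octants
            children_[childIndex(p)]->insert(p);
        }

    private:
        static constexpr std::size_t kBucketSize = 16;   // assumed bucket size
        // (a real implementation would also cap depth to handle coincident points)

        Point3 center() const {
            return { (min_.x + max_.x) * 0.5f, (min_.y + max_.y) * 0.5f, (min_.z + max_.z) * 0.5f };
        }
        int childIndex(const Point3& p) const {
            Point3 c = center();
            return (p.x > c.x) | ((p.y > c.y) << 1) | ((p.z > c.z) << 2);
        }
        void subdivide() {
            Point3 c = center();
            for (int i = 0; i < 8; ++i) {
                Point3 lo = { (i & 1) ? c.x : min_.x, (i & 2) ? c.y : min_.y, (i & 4) ? c.z : min_.z };
                Point3 hi = { (i & 1) ? max_.x : c.x, (i & 2) ? max_.y : c.y, (i & 4) ? max_.z : c.z };
                children_[i] = std::make_unique<InkOctree>(lo, hi);
            }
            for (const Point3& p : points_) children_[childIndex(p)]->insert(p);
            points_.clear();
        }

        Point3 min_, max_;
        std::vector<Point3> points_;                          // points held at this leaf
        std::array<std::unique_ptr<InkOctree>, 8> children_;  // null until subdivided
    };

    Each level of such a tree halves the region containing a point, which is what gives the octree the balance between insertion, lookup, and rendering cost reported in the results.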

    Embodied Interactions for Spatial Design Ideation: Symbolic, Geometric, and Tangible Approaches

    Computer interfaces are evolving from mere aids for number crunching into active partners in creative processes such as art and design. This is, to a great extent, the result of the mass availability of new interaction technology such as depth sensing, sensor integration in mobile devices, and increasing computational power. We are now witnessing the emergence of a maker culture that can elevate art and design beyond the purview of enterprises and professionals such as trained engineers and artists. Materializing this transformation is not trivial; everyone has ideas but only a select few can bring them to reality. The challenge is the recognition, and the subsequent interpretation, of human actions into design intent.

    Interactions gestuelles multi-point et géométrie déformable pour l’édition 3D sur écran tactile (Multi-touch gestural interaction and deformable geometry for 3D editing on touch screens)

    Despite advances in the capture of existing objects and in procedural generation, content creation for virtual worlds cannot be performed without human interaction. This thesis proposes to exploit new touch devices ("multi-touch" screens) to provide easy, intuitive 2D interaction for navigating a virtual environment and for manipulating, positioning, and deforming 3D objects. First, we study the possibilities and limitations of hand and finger gestures on a touch screen in order to discover which gestures are best suited to editing 3D scenes and environments. In particular, we evaluate the effective number of degrees of freedom of the human hand when its motion is constrained to a planar surface. We also develop a new phase-based gesture-analysis method to identify key motions of the hand and fingers in real time. These results, combined with several dedicated user studies, lead to a gestural design pattern that handles not only navigation (camera positioning) but also object positioning, rotation, and global scaling. This pattern is then extended to complex deformations (such as adding and deleting material, or bending and twisting parts of objects, with local control). Using these results, we propose and evaluate a 3D world-editing interface offering natural touch interaction, in which mode selection (i.e. navigation, object positioning, or object deformation) and task selection are handled automatically by the system based on the gesture and the interaction context (without any menus or buttons). Finally, we extend this interface to more complex deformations by adapting garment transfer from one character to another, allowing interactive deformation of the garment while the wearing character is itself deformed.
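    As a loose illustration of the menu-free mode selection described above, the short C++ sketch below infers an editing mode from how many fingers touch the screen and whether they land on an object. The enum, the struct, and the thresholds are assumptions made for this example, not the classifier actually used in the thesis.

    // Illustrative sketch only: inferring the interaction mode from gesture context.
    #include <cstddef>
    #include <vector>

    struct Touch { float x, y; bool onObject; };   // screen position + hit-test result

    enum class Mode { Navigate, PositionObject, DeformObject };

    Mode inferMode(const std::vector<Touch>& touches) {
        std::size_t onObject = 0;
        for (const Touch& t : touches)
            if (t.onObject) ++onObject;

        if (onObject == 0)
            return Mode::Navigate;            // fingers on empty space move the camera
        if (onObject == touches.size() && touches.size() <= 2)
            return Mode::PositionObject;      // one or two fingers on an object: rigid placement
        return Mode::DeformObject;            // mixed or many-finger contact: local deformation
    }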

    Interactive freeform editing techniques for large-scale, multiresolution level set models

    Level set methods provide a volumetric implicit surface representation with automatic smooth blending properties and no self-intersections. They handle arbitrary topology changes easily, and the volumetric implicit representation does not require the surface to be re-adjusted after extreme deformations. Even though they have found some use in movie productions and in some medical applications, level set models are not widely utilized in either the special effects industry or medical science: the lack of interactive modeling tools makes working with level set models difficult for people in these application areas.

    This dissertation describes techniques and algorithms for interactive freeform editing of large-scale, multiresolution level set models. Algorithms are developed to map intuitive user interactions into level set speed functions that produce specific, desired surface movements. Data structures for efficient representation of very high resolution volume datasets, and associated algorithms for rapid access and processing of the information within them, are explained. A hierarchical, multiresolution representation of level set models that allows rapid decomposition and reconstruction of the complete full-resolution model is created for an editing framework that supports level-of-detail editing. We have also developed a framework that identifies surface details prior to editing and reintroduces them afterwards; combining these two features provides a detail-preserving level set editing capability that may be used for multiresolution modeling and texture transfer. Given the complex data structures required to represent large-scale, multiresolution level set models and the compute-intensive numerical methods needed to evaluate them, optimization techniques and algorithms have been developed to evaluate and display the dynamic isosurface embedded in the volumetric data.

    Ph.D., Computer Science -- Drexel University, 201
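    To make the notion of a speed function concrete, the sketch below takes one explicit time step of the level set equation dphi/dt = -F |grad phi| on a small dense grid. It is only an illustration under simplifying assumptions (dense storage, central differences, a user-supplied F); the dissertation's own system relies on narrow-band, multiresolution data structures and more careful numerics.

    // One explicit level set evolution step (illustrative sketch).
    #include <cmath>
    #include <vector>

    struct Grid {
        int nx, ny, nz;
        std::vector<float> phi;                     // signed distance values (negative inside)
        float& at(int i, int j, int k) { return phi[(k * ny + j) * nx + i]; }
    };

    // speed(i, j, k): user-driven speed function F; a positive value under the brush
    // grows the surface outward, a negative value carves it away (assumed convention).
    template <class SpeedFn>
    void levelSetStep(Grid& g, SpeedFn speed, float dt, float h) {
        std::vector<float> next = g.phi;
        for (int k = 1; k + 1 < g.nz; ++k)
            for (int j = 1; j + 1 < g.ny; ++j)
                for (int i = 1; i + 1 < g.nx; ++i) {
                    float dx = (g.at(i + 1, j, k) - g.at(i - 1, j, k)) / (2 * h);
                    float dy = (g.at(i, j + 1, k) - g.at(i, j - 1, k)) / (2 * h);
                    float dz = (g.at(i, j, k + 1) - g.at(i, j, k - 1)) / (2 * h);
                    float gradMag = std::sqrt(dx * dx + dy * dy + dz * dz);
                    next[(k * g.ny + j) * g.nx + i] =
                        g.at(i, j, k) - dt * speed(i, j, k) * gradMag;
                }
        g.phi = next;
    }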

    3D mesh animation system targeted for multi-touch environments

    Ankara: The Department of Computer Engineering and the Institute of Engineering and Science of Bilkent University, 2009. Thesis (Master's) -- Bilkent University, 2009. Includes bibliographical references, leaves 74-78.

    Fast developments in computer technology have given rise to different application areas such as multimedia, computer games, and Virtual Reality. All of these application areas are based on the animation of 3D models of real-world objects, and many tools have been developed to enable computer modeling and animation. Yet most of these tools require a certain amount of experience with geometric modeling and animation principles, which creates a handicap for inexperienced users. This thesis introduces a solution to this problem by presenting a mesh animation system targeted specifically at novice users. The main approach is based on one of the fundamental model representation concepts, the Laplacian framework, which has been used successfully in model editing applications. The presented solution perceives a model as a combination of smaller salient parts and uses the Laplacian framework to allow these parts to be manipulated simultaneously to produce a sense of movement. The interaction techniques developed enable users to carry out manipulation and global transformation actions at the same time to create more pleasing results. Furthermore, the approach utilizes multi-touch screen technology and direct manipulation principles to increase the usability of the system. The methods described are evaluated by creating simple animations with several 3D models, demonstrating the advantages of the proposed solution.

    Ceylan, Duygu. M.S.
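    The Laplacian framework mentioned above can be sketched in a few lines: the editor records each vertex's differential (Laplacian) coordinate and later solves for positions that preserve it while handle vertices are moved. The fragment below shows only the first half, computing uniform-weight Laplacian coordinates; the mesh layout and names are assumptions for illustration, not the thesis code.

    // Uniform Laplacian (differential) coordinates for a mesh (illustrative sketch).
    #include <cstddef>
    #include <vector>

    struct Vec3 { float x, y, z; };

    // neighbors[i] lists the one-ring vertex indices of vertex i.
    std::vector<Vec3> laplacianCoordinates(const std::vector<Vec3>& positions,
                                           const std::vector<std::vector<int>>& neighbors) {
        std::vector<Vec3> delta(positions.size());
        for (std::size_t i = 0; i < positions.size(); ++i) {
            Vec3 avg{0, 0, 0};
            for (int n : neighbors[i]) {
                avg.x += positions[n].x;
                avg.y += positions[n].y;
                avg.z += positions[n].z;
            }
            float w = neighbors[i].empty() ? 0.0f : 1.0f / neighbors[i].size();
            delta[i] = { positions[i].x - w * avg.x,
                         positions[i].y - w * avg.y,
                         positions[i].z - w * avg.z };
        }
        return delta;
    }

    In an actual editing session these delta vectors become the right-hand side of a sparse linear system whose unknowns are the deformed vertex positions.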

    Physically Interacting With Four Dimensions

    Thesis (Ph.D.) -- Indiana University, Computer Sciences, 2009.

    People have long been fascinated with understanding the fourth dimension. While making pictures of 4D objects by projecting them to 3D can help reveal basic geometric features, 3D graphics images by themselves are of limited value. For example, just as 2D shadows of 3D curves may have lines crossing one another in the shadow, 3D graphics projections of smooth 4D topological surfaces can be interrupted where one surface intersects another. The research presented here creates physically realistic models for simple interactions with objects and materials in a virtual 4D world. We provide methods for the construction, multimodal exploration, and interactive manipulation of a wide variety of 4D objects. One basic achievement of this research is to exploit the free motion of a computer-based haptic probe to support a continuous motion that follows the local continuity of a 4D surface, allowing collision-free exploration in the 3D projection. In 3D, this interactive probe follows the full local continuity of the surface as though we were in fact physically touching the actual static 4D object. Our next contribution is to support dynamic 4D objects that can move, deform, and collide with other objects as well as with themselves. By combining graphics, haptics, and collision-sensing physical modeling, we can thus enhance our 4D visualization experience. Since we cannot actually place interaction devices in 4D, we develop fluid methods for interacting with a 4D object in its 3D shadow image, using adapted reduced-dimension 3D tools for manipulating objects embedded in 4D. By physically modeling the correct properties of 4D surfaces, their bending forces, and their collisions in the 3D interactive or haptic controller interface, we can support full-featured physical exploration of 4D mathematical objects in a manner that is otherwise far beyond the real-world experience accessible to human beings.
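    The "3D shadow" mentioned above is simply a projection from 4D to 3D. Below is a minimal C++ sketch, assuming a perspective projection along the w axis with the 4D eye at w = d; the actual system pairs such a projection with haptics and physical modeling, and the struct and names here are illustrative assumptions.

    // Perspective projection of a 4D point to its 3D shadow (illustrative sketch).
    struct Vec4  { double x, y, z, w; };
    struct Vec3d { double x, y, z; };

    Vec3d projectTo3D(const Vec4& p, double d /* distance of the 4D eye along w */) {
        double scale = d / (d - p.w);   // assumes p.w < d, i.e. the point lies in front of the eye
        return { p.x * scale, p.y * scale, p.z * scale };
    }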

    Feature-rich distance-based terrain synthesis

    This thesis describes a novel terrain synthesis method based on distances in a weighted graph. The method begins with a regular lattice with arbitrary edge weights; heights are determined by path cost from a set of generator nodes. The shapes of individual terrain features, such as mountains, hills, and craters, are specified by a monotonically decreasing profile describing the cross-sectional shape of a feature, while the locations of features in the terrain are specified by placing the generators. Pathing places ridges whose initial locations have a dendritic shape. The method is robust and easy to control, making it possible to create pareidolia effects. It can produce a wide range of realistic synthetic terrains such as mountain ranges, craters, faults, cinder cones, and hills. The algorithm incorporates random graph edge weights, permits the inclusion of multiple topography profiles, and allows precise control over the placement of terrain features and their heights. These properties allow the artist to create highly heterogeneous terrains that compare quite favorably to those produced by existing methods.
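    The core idea as described (path cost from generator nodes, mapped through a decreasing profile) can be sketched as a multi-source shortest-path pass. The fragment below assumes an adjacency-list graph and an arbitrary profile callback; the names and the exp(-cost) example are illustrative, not the thesis implementation.

    // Multi-source Dijkstra + profile mapping for distance-based terrain (illustrative sketch).
    #include <cmath>
    #include <cstddef>
    #include <functional>
    #include <limits>
    #include <queue>
    #include <vector>

    struct Edge { int to; float weight; };

    std::vector<float> terrainHeights(const std::vector<std::vector<Edge>>& graph,
                                      const std::vector<int>& generators,
                                      const std::function<float(float)>& profile) {
        const float inf = std::numeric_limits<float>::infinity();
        std::vector<float> cost(graph.size(), inf);
        using Item = std::pair<float, int>;                       // (path cost, node)
        std::priority_queue<Item, std::vector<Item>, std::greater<Item>> pq;

        for (int g : generators) { cost[g] = 0.0f; pq.push({0.0f, g}); }

        while (!pq.empty()) {
            auto [c, u] = pq.top(); pq.pop();
            if (c > cost[u]) continue;                            // stale queue entry
            for (const Edge& e : graph[u]) {
                float nc = c + e.weight;
                if (nc < cost[e.to]) { cost[e.to] = nc; pq.push({nc, e.to}); }
            }
        }

        std::vector<float> height(graph.size());
        for (std::size_t i = 0; i < graph.size(); ++i)
            height[i] = profile(cost[i]);                         // e.g. std::exp(-cost) for a hill
        return height;
    }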

    Toward Controllable and Robust Surface Reconstruction from Spatial Curves

    Reconstructing a surface from a set of spatial curves is a fundamental problem in computer graphics and computational geometry. It arises in many applications across various disciplines, such as industrial prototyping, artistic design, and biomedical imaging. While the problem has been widely studied for years, challenges remain in handling different types of curve input while satisfying various constraints. We studied three related computational tasks in this thesis. First, we propose an algorithm for reconstructing multi-labeled material interfaces from cross-sectional curves that allows for explicit topology control. Second, we address consistency restoration, a critical but overlooked problem in applying surface reconstruction algorithms to real-world cross-section data. Lastly, we propose the Variational Implicit Point Set Surface, which allows us to robustly handle noisy, sparse, and non-uniform inputs, such as samples from spatial curves.