31 research outputs found

    A Thesis on Sketch-Based Techniques for Mesh Deformation and Editing

    The goal of this research is to develop new and more intuitive ways of editing a mesh from a static camera angle. I present two ways to edit a mesh via a simple sketching system. The first method is a gray-scale editor which allows the user to specify a fall-off function for the region being deformed. The second method is a profile editor in which the user can re-sketch a mesh's profile. Lastly, the types of edits possible will be discussed and our results presented.
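
As an illustration of the fall-off idea described above, the following minimal sketch applies a sketched displacement to mesh vertices, weighted by a radial fall-off. The Gaussian kernel, the NumPy vertex-array representation and the `radius` parameter are assumptions made for this example, not the editor's actual formulation.

```python
import numpy as np

def falloff_deform(vertices, handle, offset, radius):
    """Translate vertices near `handle` by `offset`, blended by a
    Gaussian fall-off so the edit fades smoothly to zero at `radius`.

    vertices : (N, 3) array of mesh vertex positions
    handle   : (3,) picked point the user drags
    offset   : (3,) displacement sketched by the user
    radius   : float, extent of the affected region
    """
    dist = np.linalg.norm(vertices - handle, axis=1)
    # Gaussian weight in [0, 1]; clamp to zero outside the radius.
    weight = np.exp(-(dist / radius) ** 2)
    weight[dist > radius] = 0.0
    return vertices + weight[:, None] * offset

# Example: pull the top of a unit sphere sampled as a point cloud.
pts = np.random.randn(1000, 3)
pts /= np.linalg.norm(pts, axis=1, keepdims=True)
deformed = falloff_deform(pts, handle=np.array([0.0, 0.0, 1.0]),
                          offset=np.array([0.0, 0.0, 0.3]), radius=0.5)
```

Swapping the Gaussian for another kernel (linear, cosine, or a user-painted gray-scale map) changes how sharply the edit blends into the surrounding surface.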

    Mediating Cognitive Transformation with VR 3D Sketching during Conceptual Architectural Design Process

    Communication for information synchronization during the conceptual design phase requires designers to employ more intuitive digital design tools. This paper presents the findings of a feasibility study on using a VR 3D sketching interface to replace current non-intuitive CAD tools. We used a sequential mixed-method research methodology comprising a qualitative case study and a cognition-based quantitative protocol-analysis experiment. First, the case study was conducted to understand how novice designers make intuitive decisions. It documented the failure of conventional sketching methods in articulating complicated design ideas and the shortcomings of current CAD tools in intuitive ideation. The case study's findings then became the theoretical foundation for testing the feasibility of using a VR 3D sketching interface during design. The latter phase of the study evaluated the designers' spatial cognition and collaboration at six levels: "physical-actions", "perceptual-actions", "functional-actions", "conceptual-actions", "cognitive synchronizations", and "gestures". The results and confirmed hypotheses showed that the tangible 3D sketching interface improved novice designers' cognitive and collaborative design activities. In summary, this paper presents the influence of current external representation tools on designers' cognition and collaboration and provides the theoretical foundations for implementing a VR 3D sketching interface. It contributes towards transforming the conceptual architectural design phase from analogue to digital by proposing a new VR design interface, filling the existing gap between the analogue conceptual architectural design process and the remaining digital engineering parts of the building design process, and hence expediting the digital design process.

    Efficient sketch-based 3D character modelling.

    Sketch-based modelling (SBM) has been the subject of substantial research over the past two decades. In the early days, researchers aimed at developing techniques for modelling architectural and mechanical models through sketching. With the advancement of technology used in designing visual effects for film, TV and games, the demand for highly realistic 3D character models has skyrocketed. To allow artists to create 3D character models quickly, researchers have proposed several techniques for efficient character modelling from sketched feature curves. Moreover, several research groups have developed 3D shape databases for retrieving 3D models from sketched inputs. Unfortunately, the current state of the art in sketch-based organic modelling (3D character modelling) still has many gaps and limitations. To bridge these gaps and improve current sketch-based modelling techniques, this research aims to develop an approach that allows direct and interactive modelling of 3D characters from sketched feature curves and also makes use of 3D shape databases to guide the artist in creating his or her desired models. The research involved finding a fusion of 3D shape retrieval, shape manipulation, and shape reconstruction/generation techniques, backed by an extensive literature review, experimentation and results. The outcome of this research is a novel and improved technique for sketch-based modelling and a software interface that allows the artist to quickly and easily create realistic 3D character models with comparatively little effort and learning. The proposed work provides tools to draw 3D shape primitives and manipulate them using simple gestures, which leads to a better modelling experience than existing state-of-the-art SBM systems.

    A feasibility study for developing 3D sketching concept in virtual reality (VR) environment

    Limited digital media is available to support conceptual design, which requires spontaneous and flexible design tools. This constraint reduces digital integration between the architectural conceptual and engineering design stages. This paper presents the results of an ethnographic study of how design collaboration, design transactions and knowledge-flow characteristics between studio masters and their students are supported by available technologies in a studio project in Malaysia. The study found three types of external representation modes used by designers: Full Manual, Mixed and Full Digital. It revealed the inflexibility of traditional geometric modeling tools during intuitive ideation. On the other hand, it also observed the shortcomings of conventional manual sketching tools for articulating design ideas and translating tacit knowledge into explicit knowledge in complex design problems. The results support further studies towards implementing 3D sketching in a Virtual Reality (VR) environment to digitally integrate the conceptual architectural-engineering design process.

    Conceptual free-form styling in virtual environments

    This dissertation introduces tools for designing complete models from scratch directly in a head-tracked, table-like virtual work environment. The models consist of free-form surfaces and are constructed by drawing a network of curves directly in space using a tracked pen-like input device. Interactive deformation tools for curves and surfaces are proposed, based on variational methods. By aligning the model with the left hand, editing is carried out with the right hand, corresponding to a natural distribution of tasks between both hands. Furthermore, in the emerging field of 3D interaction in virtual environments, particularly with regard to system control, this work uses novel methods to integrate system-control tasks, such as selecting tools, into the workflow of shape design. The aim of this work is to propose more suitable user interfaces for computer-supported conceptual shape design applications, a field that lacks adequate support from standard desktop systems.
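
For orientation, variational deformation of the kind referenced above is typically posed as minimizing a fairness energy subject to interpolation constraints; the formulation below is a standard textbook version, not necessarily the exact functional used in the dissertation.

```latex
% Fairness energy of a curve c(t): a weighted blend of stretching
% (first-derivative) and bending (second-derivative) terms, minimized
% while the curve is pinned to user-specified points p_i.
\min_{c}\; \int \alpha\,\lVert c'(t)\rVert^{2} + \beta\,\lVert c''(t)\rVert^{2}\, dt
\quad\text{subject to}\quad c(t_i) = p_i .
```

Dragging a point simply changes one of the constraints p_i; re-minimizing the energy then redistributes the deformation smoothly along the rest of the curve.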

    Efficient and detailed sketch-based character modelling with composite generalized elliptic curves and ODE surface creators.

    Sketch-based modelling (SBM), dating back to the 1980s, has attracted much research attention due to its ease of use and high efficiency in generating 3D models. However, existing sketch-based modelling approaches are incapable of creating detailed and realistic 3D character models. This project aims to propose new techniques which can create more detailed 3D character models with ease and efficiency. The basic idea is to fit primitives to sketches consisting of front-view contours, side-view contours and cross-section curves to obtain more detailed shapes, to propose ODE (ordinary differential equation)-driven deformation to create more realistic shapes, and to use surfaces defined by cross-sectional curves to represent sketch-based and ODE-driven 3D character models. In order to achieve this aim, the thesis first investigates curve fitting of cross-sectional shapes and solves the problem of representing cross-sectional curves with generalized ellipses or composite generalized elliptic segments. Then, it proposes a new mathematical formula for defining a surface from the cross-sectional curves. A new sketch-guided and ODE-driven character modelling technique is proposed, consisting of two main components: a primitive deformer and a detail generator. With this technique, I first draw the 2D silhouette contours of a character model. Then, I select proper primitives and align them with the corresponding silhouette contours. After that, I develop a sketch-guided and ODE-driven primitive deformer, which uses ODE-based deformations to deform the cross-section curves of the primitive to exactly match the generated 2D silhouette contours in one view plane; with the curve-fitting and surface-reconstruction methods mentioned above, a base mesh of a character model consisting of the deformed primitives is obtained. In order to add various 3D details, I develop a local shape generator which uses sketches in different view planes to define a local shape and employs ODE-driven deformations to create a local surface passing through all the sketches. The experimental results demonstrate that the proposed approach can create 3D character models with 3D details from 2D sketches easily, quickly and precisely. Cross-section contours are important in defining cross-section shapes and creating detailed models. In order to develop a cross-section contour-based modelling approach, the question of how to mathematically represent cross-section curves must first be solved. The second aim of this project is therefore to propose composite generalized elliptic curves and introduce them into character modelling to achieve an analytical and compact mathematical representation of cross-section contours. Current template-based character modelling, which creates 3D character models from sketches, retrieves and then uses 3D template models directly. Since retrieving 3D models from sketches is not an easy task, the third aim of this project is to extract 2D cross-section contours from template models and use the extracted contours as templates to assist the creation of 3D character models, simplifying and accelerating the modelling process. Although there are many different approaches to interpreting shapes from sketch strokes, to our knowledge none utilises 2D template cross-section contours to quickly generate the shapes of human characters in a sketch-based system; this is one of the contributions of this project.
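
A rough feel for the cross-section fitting step can be given with a small sketch. The axis-aligned least-squares ellipse fit and the uniform resampling below are deliberate simplifications for illustration; the composite generalized elliptic segments used in the thesis are more general.

```python
import numpy as np

def fit_ellipse_axis_aligned(points):
    """Least-squares fit of an axis-aligned ellipse
    (x-cx)^2/a^2 + (y-cy)^2/b^2 = 1 to 2D cross-section samples.
    Returns (cx, cy, a, b); a simplified stand-in for generalized
    elliptic cross-section fitting.
    """
    cx, cy = points.mean(axis=0)          # crude centre estimate
    u, v = points[:, 0] - cx, points[:, 1] - cy
    # Solve A*u^2 + B*v^2 = 1 for A = 1/a^2, B = 1/b^2.
    M = np.column_stack([u ** 2, v ** 2])
    (A, B), *_ = np.linalg.lstsq(M, np.ones(len(points)), rcond=None)
    return cx, cy, 1.0 / np.sqrt(A), 1.0 / np.sqrt(B)

def sample_ellipse(cx, cy, a, b, n=64):
    """Resample the fitted cross-section as n points, ready to be
    stacked with other cross-sections into a lofted surface mesh."""
    t = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
    return np.column_stack([cx + a * np.cos(t), cy + b * np.sin(t)])

# Example: noisy samples of a 3-by-1 ellipse.
t = np.linspace(0, 2 * np.pi, 200)
pts = np.column_stack([3 * np.cos(t), np.sin(t)]) + 0.01 * np.random.randn(200, 2)
print(fit_ellipse_axis_aligned(pts))
```

Stacking several resampled cross-sections along a limb and connecting corresponding samples gives a simple lofted base mesh of the kind that the deformation stage can then refine.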

    Multi-touch gestural interactions and deformable geometry for 3D editing on touch screens

    Despite the advances made in capturing existing objects and in procedural generation, content for virtual worlds cannot be created without human interaction. This thesis proposes to exploit new touch devices ("multi-touch" screens) to provide easy, intuitive 2D interaction for navigating a virtual environment and for manipulating, positioning and deforming 3D objects. First, we study the possibilities and limitations of hand and finger gestures while interacting on a touch screen, in order to discover which gestures are best suited to editing 3D scenes and environments. In particular, we evaluate the effective number of degrees of freedom of the human hand when constrained to a planar surface. We also develop a new phase-based gesture analysis method to identify key motions of the hand and fingers in real time. These results, combined with several specific user studies, lead to a gestural design pattern that handles not only navigation (camera positioning) but also object positioning, rotation and global scaling. This pattern is then extended to complex deformations (such as adding and deleting material, and bending or twisting parts of objects, with local control). Using these results, we propose and evaluate a 3D world editing interface that supports natural touch interaction, in which mode selection (i.e. navigation, object positioning or object deformation) and the corresponding tasks are handled automatically by the system based on the gesture and the interaction context (without any menus or buttons). Finally, we extend this interface to integrate more complex deformations, adapting garment transfer from one character to another so that the garment is deformed interactively while the character wearing it is deformed.
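
The menu-free mode selection described above can be suggested with a toy sketch: the rule that the contact count and the object under the fingers determine the mode is a hypothetical illustration, not the classifier actually used in the thesis.

```python
def infer_mode(contacts, touched_object):
    """Guess the editing mode from the touch context alone, with no
    menus or buttons: a toy stand-in for the context-driven mode
    selection described in the abstract.

    contacts       : list of (x, y) finger positions currently down
    touched_object : the 3D object under the first contact, or None
    """
    if touched_object is None:
        return "navigation"            # fingers on empty space move the camera
    if len(contacts) == 1:
        return "object-positioning"    # one finger on an object drags it
    return "object-deformation"        # several fingers on an object deform it

print(infer_mode([(120, 80)], touched_object=None))          # navigation
print(infer_mode([(120, 80), (200, 90)], "character_mesh"))  # object-deformation
```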

    Hybrid sketching: a new middle ground between 2- and 3-D

    Thesis (Ph.D.)--Massachusetts Institute of Technology, Dept. of Architecture, 2005. Includes bibliographical references (leaves 124-133). This thesis investigates the geometric representation of ideas during the early stages of design. When a designer's ideas are still in gestation, the exploration of form is more important than its precise specification. Digital modelers facilitate such exploration, but only for forms built with discrete collections of high-level geometric primitives; we introduce techniques that operate on designers' medium of choice, 2-D sketches. Designers' explorations also shift between 2-D and 3-D, yet 3-D form must also be specified with these high-level primitives, requiring an entirely different mindset from 2-D sketching. We introduce a new approach to transform existing 2-D sketches directly into a new kind of sketch-like 3-D model. Finally, we present a novel sketching technique that removes the distinction between 2-D and 3-D altogether. This thesis makes five contributions: point-dragging and curve-drawing techniques for editing sketches; two techniques to help designers bring 2-D sketches to 3-D; and a sketching interface that dissolves the boundaries between 2-D and 3-D representation. The first two contributions of this thesis introduce smooth exploration techniques that work on sketched form composed of strokes, in 2-D or 3-D. First, we present a technique, inspired by classical painting practices, whereby the designer can explore a range of curves with a single stroke. As the user draws near an existing curve, our technique automatically and interactively replaces sections of the old curve with the new one. Second, we present a method to enable smooth exploration of sketched form by point-dragging. The user constructs a high-level "proxy" description that can be used, somewhat like a skeleton, to deform a sketch independent of the internal stroke description. Next, we leverage the proxy deformation capability to help the designer move directly from existing 2-D sketches to 3-D models. Our reconstruction techniques generate a novel kind of 3-D model which maintains the appearance and stroke structure of the original 2-D sketch. One technique transforms a single sketch with help from annotations by the designer; the other combines two sketches. Since these interfaces are user-guided, they can operate on ambiguous sketches, relying on the designer to choose an interpretation. Finally, we present an interface to build an even sparser, more suggestive, type of 3-D model, either from existing sketches or from scratch. "Camera planes" provide a complex 3-D scaffolding on which to hang sketches, which can still be drawn as rapidly and freely as before. A sparse set of 2-D sketches placed on planes provides a novel visualization of 3-D form, with enough information present to suggest 3-D shape, but enough missing that the designer can 'read into' the form, seeing multiple possibilities. This unspecified information--this empty space--can spur the designer on to new ideas. by John Alex. Ph.D.
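
The curve-oversketching behaviour described above (replacing the nearby section of an existing curve with a newly drawn stroke) can be sketched minimally as follows; the endpoint-matching rule and the absence of blending are simplifications assumed for this example.

```python
import numpy as np

def oversketch(old_curve, new_stroke):
    """Replace the span of `old_curve` nearest to the endpoints of
    `new_stroke` with the new stroke itself: a minimal, unblended
    stand-in for the oversketching technique described above.

    Both arguments are (N, 2) arrays of polyline points.
    """
    def nearest_index(point):
        return int(np.argmin(np.linalg.norm(old_curve - point, axis=1)))

    i = nearest_index(new_stroke[0])
    j = nearest_index(new_stroke[-1])
    if i > j:                              # keep the replaced span ordered
        i, j = j, i
        new_stroke = new_stroke[::-1]
    return np.vstack([old_curve[:i], new_stroke, old_curve[j + 1:]])

# Example: redraw the middle of a straight line as a bump.
old = np.column_stack([np.linspace(0, 10, 11), np.zeros(11)])
stroke = np.array([[4.0, 0.0], [5.0, 1.0], [6.0, 0.0]])
print(oversketch(old, stroke))
```

A production version would blend the stroke ends into the old curve and trigger the replacement only while the new stroke stays within a proximity threshold of the existing curve.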