
    Physically Interacting With Four Dimensions

    Thesis (Ph.D.) - Indiana University, Computer Sciences, 2009

    People have long been fascinated with understanding the fourth dimension. While making pictures of 4D objects by projecting them to 3D can help reveal basic geometric features, 3D graphics images by themselves are of limited value. For example, just as 2D shadows of 3D curves may have lines crossing one another in the shadow, 3D graphics projections of smooth 4D topological surfaces can be interrupted where one surface intersects another. The research presented here creates physically realistic models for simple interactions with objects and materials in a virtual 4D world. We provide methods for the construction, multimodal exploration, and interactive manipulation of a wide variety of 4D objects. One basic achievement of this research is to exploit the free motion of a computer-based haptic probe to support a continuous motion that follows the local continuity of a 4D surface, allowing collision-free exploration in the 3D projection. In 3D, this interactive probe follows the full local continuity of the surface as though we were in fact physically touching the actual static 4D object. Our next contribution is to support dynamic 4D objects that can move, deform, and collide with other objects as well as with themselves. By combining graphics, haptics, and collision-sensing physical modeling, we can thus enhance our 4D visualization experience. Since we cannot actually place interaction devices in 4D, we develop fluid methods for interacting with a 4D object in its 3D shadow image using adapted reduced-dimension 3D tools for manipulating objects embedded in 4D. By physically modeling the correct properties of 4D surfaces, their bending forces, and their collisions in the 3D interactive or haptic controller interface, we can support full-featured physical exploration of 4D mathematical objects in a manner that is otherwise far beyond the real-world experience accessible to human beings.
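    The projection the abstract opens with (a 4D object casting a 3D "shadow") can be sketched in a few lines. This is the generic perspective projection one dimension up from the familiar pinhole camera, not the thesis's actual pipeline; the function name, the `eye_w` viewpoint parameter and the tesseract example are illustrative assumptions.

```python
def project_4d_to_3d(p, eye_w=2.0):
    """Perspective-project a 4D point onto the w = 0 hyperplane.

    The viewpoint sits at (0, 0, 0, eye_w) on the w-axis; points with
    larger w appear "closer" and are scaled up, exactly as a 3D point
    closer to a pinhole camera casts a larger 2D image.
    """
    x, y, z, w = p
    scale = 1.0 / (eye_w - w)   # assumes w < eye_w (point in front of eye)
    return (x * scale, y * scale, z * scale)

# The 16 vertices of a tesseract (4-cube) with coordinates +/-1.
tesseract = [(x, y, z, w)
             for x in (-1, 1) for y in (-1, 1)
             for z in (-1, 1) for w in (-1, 1)]

# The shadow is a small cube (w = -1 face) nested inside a larger
# cube (w = +1 face), the classic wireframe picture of a tesseract.
shadow = [project_4d_to_3d(v, eye_w=3.0) for v in tesseract]
```

    The "interrupted surface" problem the abstract describes shows up here too: distinct 4D points can land on the same 3D shadow point, which is why the thesis adds haptics to disambiguate them.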

    Embodied Interactions for Spatial Design Ideation: Symbolic, Geometric, and Tangible Approaches

    Computer interfaces are evolving from mere aids for number crunching into active partners in creative processes such as art and design. This is, to a great extent, the result of the mass availability of new interaction technology such as depth sensing, sensor integration in mobile devices, and increasing computational power. We are now witnessing the emergence of a maker culture that can elevate art and design beyond the purview of enterprises and professionals such as trained engineers and artists. Materializing this transformation is not trivial; everyone has ideas, but only a select few can bring them to reality. The challenge is the recognition, and the subsequent interpretation, of human actions as design intent.

    Rotation-Based Mixed Formulations for an Elasticity-Poroelasticity Interface Problem

    In this paper we introduce a new formulation for the stationary poroelasticity equations, written using the rotation vector and the total fluid-solid pressure as additional unknowns, and we also present an extension to the elasticity-poroelasticity problem. The transmission conditions are imposed naturally in the weak formulation, and the analysis of the effective governing equations is conducted by an application of the Fredholm alternative. We also propose a monolithically coupled mixed finite element method for the numerical solution of the problem. Its convergence properties are rigorously derived and subsequently confirmed by a set of computational tests that include applications to subsurface flow in reservoirs as well as to dentistry-oriented problems.

    Funding: FONDECYT/Chile [11160706]; PIA/Chile [AFB170001].
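    For context, the classical stationary Biot system that such rotation-based formulations start from can be written as below; the notation (displacement u, fluid pressure p, Lamé parameters μ and λ, Biot coefficient α, storage c₀, permeability κ, viscosity η) is the standard one and is not necessarily the paper's, which additionally introduces the rotation vector and the total fluid-solid pressure as unknowns.

```latex
% Stationary Biot poroelasticity, standard notation (background only):
-\operatorname{\mathbf{div}}\bigl(2\mu\,\varepsilon(u)
    + \lambda(\operatorname{div}u)\,I - \alpha\,p\,I\bigr) = f,
\qquad
c_0\,p + \alpha\operatorname{div}u
    - \operatorname{div}\!\Bigl(\tfrac{\kappa}{\eta}\,\nabla p\Bigr) = g.
```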

    Toward Controllable and Robust Surface Reconstruction from Spatial Curves

    Reconstructing surfaces from a set of spatial curves is a fundamental problem in computer graphics and computational geometry. It arises in many applications across various disciplines, such as industrial prototyping, artistic design and biomedical imaging. While the problem has been widely studied for years, challenges remain in handling different types of curve inputs while satisfying various constraints. We study three related computational tasks in this thesis. First, we propose an algorithm for reconstructing multi-labeled material interfaces from cross-sectional curves that allows for explicit topology control. Second, we address consistency restoration, a critical but overlooked problem in applying surface reconstruction algorithms to real-world cross-section data. Lastly, we propose the Variational Implicit Point Set Surface, which allows us to robustly handle noisy, sparse and non-uniform inputs, such as samples from spatial curves.
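    As one concrete instance of the implicit-surface machinery this line of work builds on, the classic variational-implicit idea (interpolate zero at surface samples and a small positive value at points pushed along the normals, using a polyharmonic kernel plus a linear polynomial) can be sketched as follows. This is a much-simplified stand-in, not the thesis's VIPSS formulation; the function name and the `offset` parameter are assumptions.

```python
import numpy as np

def fit_implicit(points, normals, offset=0.1):
    """Fit a smooth implicit function f with f ~ 0 on the samples.

    Interpolates value 0 at each surface sample and value `offset` at a
    point displaced along the (outward) normal, with the triharmonic
    kernel phi(r) = r^3 in 3D plus a linear polynomial term.
    """
    P = np.vstack([points, points + offset * normals])
    vals = np.concatenate([np.zeros(len(points)),
                           np.full(len(points), offset)])
    n = len(P)
    r = np.linalg.norm(P[:, None, :] - P[None, :, :], axis=2)
    A = np.zeros((n + 4, n + 4))
    A[:n, :n] = r ** 3                       # kernel block
    Q = np.hstack([np.ones((n, 1)), P])      # 1, x, y, z polynomial part
    A[:n, n:] = Q
    A[n:, :n] = Q.T
    b = np.concatenate([vals, np.zeros(4)])
    sol = np.linalg.solve(A, b)
    w, c = sol[:n], sol[n:]

    def f(x):
        d = np.linalg.norm(P - x, axis=1)
        return w @ d ** 3 + c[0] + c[1:] @ x
    return f
```

    The zero level set of the returned function is then the reconstructed surface; the thesis's contribution is making this kind of fit robust when the samples are noisy, sparse and non-uniform.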

    A new 3D modelling paradigm for discrete model

    Until a few years ago, 3D modelling was a topic confined to professional environments. Nowadays technological innovations, most notably the 3D printer, have attracted novice users to this application field. This sudden breakthrough has not been supported by adequate software solutions. The 3D editing tools currently available do not assist non-expert users during the various stages of generation, interaction and manipulation of 3D virtual models. This is mainly due to the current paradigm, which largely relies on two-dimensional input/output devices and is strongly affected by obvious geometrical constraints. We identified three main phases that characterize the creation and management of 3D virtual models. We investigated these directions, evaluating and simplifying classic editing techniques in order to propose more natural and intuitive tools in a pure 3D modelling environment. In particular, we focused on freehand sketch-based modelling to create 3D virtual models, on interaction and navigation in a 3D modelling environment, and on advanced editing tools for free-form deformation and object composition. To pursue these goals, we asked how new gesture-based interaction technologies can be successfully employed in 3D modelling environments, how we could improve depth perception and interaction in 3D environments, and which operations could be developed to simplify the classical model-editing paradigm. Our main aim was to propose a set of solutions with which a common user can realize an idea as a 3D virtual model, drawing in the air just as he would on paper. Moreover, we tried to use gestures and mid-air movements to explore and interact with 3D virtual environments, and we studied simple and effective 3D form transformations. The work was carried out adopting the discrete representation of models, thanks to its intuitiveness, but especially because it is full of open challenges.

    Geotechnical stability analysis using student versions of FLAC, PLAXIS and SLOPE/W

    Slope stability analysis is of particular importance to geotechnical engineers, as slope failures can have devastating social and economic impacts. Several software packages have been developed for stability analysis that utilise the Limit Equilibrium (LE), Finite Element (FE) and Finite Difference (FD) methods. The majority of published information concerns these analysis methods themselves rather than the software packages that implement them. Several studies have suggested that the FE and FD methods provide greater benefits than the LE method; other studies have suggested that the simplicity of the LE method outweighs the complexity of the FE and FD methods. The purpose of this research project is to compare the student versions of FLAC, PLAXIS and SLOPE/W and their use in geotechnical stability analysis. FLAC is a software package using the FD method, PLAXIS the FE method and SLOPE/W the LE method. From this report it can be concluded that, for software packages using the FE or FD method, the type of mesh generated and used in calculating the factor of safety (FOS) has a significant effect on the accuracy of the results. Because the FLAC student version limits the number of zones and in general only allows a coarse mesh analysis, the FOS values it calculates can be considered less accurate than those from the student versions of PLAXIS and SLOPE/W. Each package has its own benefits and limitations, and it is recommended that users choose the package that best suits the model's requirements and complexity. The student versions should be used as an indication only; any detailed analysis requires a fully licensed version of the chosen software package.
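    To make the FOS quantity concrete, the textbook limit-equilibrium expression for an infinite slope (the simplest case SLOPE/W-style LE analysis generalises) can be computed directly. This is a generic illustration; the function name and the example numbers are not taken from the project.

```python
import math

def infinite_slope_fos(c, phi_deg, gamma, depth, beta_deg):
    """Factor of safety of an infinite slope (dry, no seepage).

    Textbook limit-equilibrium ratio of resisting to driving shear
    stress on a plane parallel to the slope face: c is effective
    cohesion in kPa, gamma unit weight in kN/m^3, depth the failure
    plane depth in m, phi and beta the friction and slope angles.
    """
    beta = math.radians(beta_deg)
    phi = math.radians(phi_deg)
    resisting = c + gamma * depth * math.cos(beta) ** 2 * math.tan(phi)
    driving = gamma * depth * math.sin(beta) * math.cos(beta)
    return resisting / driving

# For cohesionless soil the expression collapses to tan(phi)/tan(beta):
print(round(infinite_slope_fos(0.0, 30.0, 18.0, 5.0, 20.0), 2))  # 1.59
```

    FE and FD packages such as PLAXIS and FLAC instead obtain a comparable number by strength reduction, which is why mesh coarseness affects their result while it does not enter the LE formula at all.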

    Tools for fluid simulation control in computer graphics

    Physics-based animation can generate dynamic systems with very complex and realistic behaviors. Unfortunately, controlling them is a daunting task, and fluid simulation poses particularly difficult problems for the control process. Although many methods and tools have been developed to convincingly simulate and render fluids, too few provide efficient and intuitive control over a simulation. Since control often adds computation on top of the simulation cost, art-directing a high-resolution simulation leads to long iterations of the creative process. To shorten this process, editing can be performed on a faster, low-resolution model. The process of generating an art-directed fluid can therefore be split into two stages: a control stage, during which an artist modifies the behavior of a low-resolution simulation, and an upresolution stage, during which a final high-resolution version of this simulation is driven. This thesis presents two projects, each improving on the state of the art in one of these two stages. First, we introduce a new particle-based liquid control system. Using this system, an artist selects patches of precomputed liquid animation from a database and places them in a simulation to modify its behavior. At each simulation time step, our system uses these entities to control the simulation so as to reproduce the artist's vision. An intuitive graphical user interface inspired by video editing tools has been developed, allowing a non-technical user to simply edit a liquid animation. Second, a tracking solution for smoke upresolution is described. We propose to add an extra tracking step after the projection step of a classical Eulerian smoke simulation. During this step, we solve for a divergence-free velocity perturbation field, resulting in a better match of the low-frequency density distribution between the low-resolution guide and the high-resolution simulation. The resulting smoke animation faithfully reproduces the coarse aspect of the low-resolution input, while being enhanced with simulated small-scale details.
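    The divergence-free requirement mentioned above is the same constraint enforced by the pressure-projection step of any Eulerian solver. A minimal collocated-grid version (a generic illustration, not the thesis's tracking method; the grid size, iteration count and damping factor are arbitrary choices) might look like this:

```python
import numpy as np

def project_divergence_free(u, v, iters=600):
    """Remove the divergent part of a 2D periodic velocity field.

    Standard pressure projection: solve lap(p) = div(u, v) with damped
    Jacobi iterations, then subtract grad(p). Backward differences for
    the divergence and forward differences for the gradient make their
    composition the usual 5-point Laplacian (unit grid spacing).
    """
    d = u - np.roll(u, 1, 1) + v - np.roll(v, 1, 0)   # divergence
    p = np.zeros_like(u)
    for _ in range(iters):
        jac = (np.roll(p, 1, 0) + np.roll(p, -1, 0)
             + np.roll(p, 1, 1) + np.roll(p, -1, 1) - d) * 0.25
        p = 0.2 * p + 0.8 * jac        # damping kills the checkerboard mode
    u2 = u - (np.roll(p, -1, 1) - p)   # subtract grad(p)
    v2 = v - (np.roll(p, -1, 0) - p)
    return u2, v2
```

    Production solvers use a staggered grid and a faster Poisson solver, but the structure (divergence, Poisson solve, gradient subtraction) is the same one the tracking step augments.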

    A Framework for the Semantics-aware Modelling of Objects

    The evolution of 3D visual content calls for innovative methods for modelling shapes based on their intended usage, function and role in a complex scenario. Although various attempts have been made in this direction, shape modelling still focuses mainly on geometry. However, 3D models have a structure, given by the arrangement of salient parts, and shape and structure are deeply related to semantics and functionality. Changing geometry without semantic clues may invalidate such functionalities or the meaning of objects or their parts. We approach the problem by considering semantics as the formalised knowledge related to a category of objects; the geometry can vary provided that the semantics is preserved. We represent the semantics and the variable geometry of a class of shapes through the parametric template: an annotated 3D model whose geometry can be deformed provided that some semantic constraints remain satisfied. In this work, we design and develop a framework for the semantics-aware modelling of shapes, offering the user a single application environment where the whole workflow of defining the parametric template and applying semantics-aware deformations can take place. In particular, the system provides tools for the selection and annotation of geometry based on formalised contextual knowledge; shape analysis methods to derive new knowledge implicitly encoded in the geometry, and possibly enrich the given semantics; a set of constraints that the user can apply to salient parts; and a deformation operation that takes the semantic constraints into account and provides an optimal solution. The framework is modular, so that new tools can be added continuously. While producing some innovative results in specific areas, the goal of this work is the development of a comprehensive framework combining state-of-the-art techniques and new algorithms, thus enabling the user to conceptualise her/his knowledge and model geometric shapes. The original contributions concern the formalisation of the concept of annotation, with attached properties, and of the relations between significant parts of objects; a new technique for guaranteeing the persistence of annotations after significant changes in a shape's resolution; the exploitation of shape descriptors for the extraction of quantitative information and the assessment of shape variability within a class; and the extension of popular cage-based deformation techniques to include constraints on the allowed displacement of vertices. In this thesis, we report the design and development of the framework as well as results in two application scenarios, namely product design and archaeological reconstruction.
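    Plain cage-based deformation, the family of techniques the thesis extends with semantic constraints, can be illustrated in 2D with mean value coordinates: each point is expressed as a weighted combination of cage vertices, and moving the cage moves the point. The function names are hypothetical and no constraint handling is included.

```python
import math

def mean_value_coords(cage, p):
    """2D mean value coordinates of p w.r.t. a polygon cage (CCW).

    w_i = (tan(a_{i-1}/2) + tan(a_i/2)) / |v_i - p|, where a_i is the
    angle at p between cage vertices v_i and v_{i+1}; p must lie
    strictly inside the cage.
    """
    n = len(cage)
    d = [(vx - p[0], vy - p[1]) for vx, vy in cage]
    r = [math.hypot(dx, dy) for dx, dy in d]
    ang = [math.atan2(dy, dx) for dx, dy in d]
    tans = []
    for i in range(n):
        a = ang[(i + 1) % n] - ang[i]
        a = (a + math.pi) % (2 * math.pi) - math.pi   # wrap to (-pi, pi]
        tans.append(math.tan(a / 2))
    w = [(tans[i - 1] + tans[i]) / r[i] for i in range(n)]
    s = sum(w)
    return [wi / s for wi in w]

def deform(cage_src, cage_dst, p):
    """Carry p along as the cage moves, via its fixed MVC weights."""
    w = mean_value_coords(cage_src, p)
    return (sum(wi * vx for wi, (vx, _) in zip(w, cage_dst)),
            sum(wi * vy for wi, (_, vy) in zip(w, cage_dst)))
```

    Constrained variants, like the one the thesis develops, restrict how far individual cage (or model) vertices may move so that the semantic annotations of the parts stay valid.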

    An evaluation of user experience with a sketch-based 3D modeling system

    With the availability of pen-enabled digital hardware, sketch-based 3D modeling is becoming an increasingly attractive alternative to traditional methods in many design environments. To date, a variety of methodologies and implemented systems have been proposed that all seek to make sketching the primary interaction method for 3D geometric modeling. While many of these methods are promising, a general lack of end-user evaluations makes it difficult to assess and improve upon them. Based on our ongoing work, we present the usage and a user evaluation of a sketch-based 3D modeling tool we have been developing for industrial styling design. The study investigates the usability of our techniques in the hands of non-experts by gauging (1) the speed with which users can comprehend and adapt to the constituent modeling steps, and (2) how effectively users can utilize the newly learned skills to design 3D models. Our observations and users' feedback indicate that overall users could learn the investigated techniques relatively easily and put them to use immediately. However, users pointed out several usability and technical issues, such as difficulty in mode selection and the lack of sophisticated surface modeling tools, as some of the key limitations of the current system. We believe the lessons learned from this study can be used in the development of more powerful and satisfying sketch-based modeling tools in the future.

    Interactions gestuelles multi-point et géométrie déformable pour l’édition 3D sur écran tactile

    Despite advances in the capture of existing objects and in procedural generation, creating content for virtual worlds cannot be done without human interaction. This thesis proposes exploiting new touch devices ("multi-touch" screens) to obtain an easy, intuitive 2D interaction for navigating a virtual environment and for manipulating, positioning and deforming 3D objects. First, we study the possibilities and limitations of hand and finger gestures while interacting on a touch screen, in order to discover which gestures are best adapted to editing 3D scenes and environments. In particular, we evaluate the effective number of degrees of freedom of the human hand when its motion is constrained to a planar surface. We also develop a new phase-based gesture analysis method to identify key motions of the hand and fingers in real time. These results, combined with several specific user studies, lead to a gestural design pattern that handles not only navigation (camera positioning), but also object positioning, rotation and global scaling. This pattern is then extended to complex deformations (such as adding and deleting material, or bending and twisting parts of objects, with local control). Using these results, we propose and evaluate a 3D world editing interface that supports natural touch interaction, in which mode selection (navigation, object positioning or object deformation) and task selection are handled automatically by the system, relying on the gesture and the interaction context (without any menu or button). Finally, we extend this interface to integrate more complex deformations, adapting garment transfer from one character to another in order to support interactive deformation of the garment while the wearing character is deformed.