
    An efficient active B-spline/NURBS model for virtual sculpting

    This thesis presents an Efficient Active B-Spline/NURBS Model for Virtual Sculpting. Despite the ongoing rapid development of computer graphics and computer-aided design tools, 3D graphics designers still rely on non-intuitive modelling procedures for the creation and manipulation of freeform virtual content. The 'Virtual Sculpting' paradigm is a well-established mechanism for shielding designers from the complex mathematics that underpin freeform shape design. The premise is to emulate familiar elements of traditional clay sculpting within the virtual design environment. Purely geometric techniques can mimic some physical properties, while more exact, energy-based approaches struggle to do so at interactive rates. This thesis establishes a unified approach for the representation of physically aware, energy-based, deformable models across the domains of Computer Graphics, Computer-Aided Design and Computer Vision, and formalises the theoretical relationships between them. A novel reformulation of the computer vision approach of Active Contour Models (ACMs) is proposed for the domain of Virtual Sculpting. The proposed ACM-based model offers novel interaction behaviours and strikes a compromise between purely geometric and more exact energy-based approaches, facilitating physically plausible results at interactive rates. Predefined shape primitives provide features of interest, acting like sculpting tools, such that the overall deformation of an Active Surface Model is analogous to traditional clay modelling. The thesis develops a custom approach to provide full support for B-Splines, the de facto industry-standard representation of freeform surfaces, which have not previously benefited from the seamless embodiment of a true Virtual Sculpting metaphor. A novel, generalised, computationally efficient mathematical framework for the energy minimisation of an Active B-Spline Surface is established. The resulting algorithm is shown to significantly reduce computation times and has broader applications across the domains of Computer-Aided Design, Computer Graphics, and Computer Vision. A prototype 'Virtual Sculpting' environment encapsulating each of the outlined approaches is presented that demonstrates their effectiveness in addressing the long-standing need for a computationally efficient and intuitive solution to the problem of interactive computer-based freeform shape design.
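
    As a rough illustration of the energy-minimisation idea behind an active B-spline model, the sketch below iterates the control points of a cubic B-spline curve toward feature points that stand in for a sculpting tool, balancing a data term against stretching and bending penalties. It is a minimal, assumed formulation in Python/NumPy, not the generalised framework established in the thesis; names such as alpha, beta, gamma and tool_targets are illustrative.

```python
# Minimal sketch (assumed formulation, not the thesis's framework) of energy
# minimisation for an "active" B-spline curve: control points are updated so the
# curve is pulled toward feature points (a stand-in for a sculpting tool) while
# stretching and bending terms resist deformation.
import numpy as np
from scipy.interpolate import BSpline

k = 3                                    # cubic B-spline
n_ctrl = 12                              # number of control points
# Clamped knot vector on [0, 1].
t = np.concatenate(([0.0] * k, np.linspace(0.0, 1.0, n_ctrl - k + 1), [1.0] * k))

u = np.linspace(0.0, 1.0, 200)           # parameter samples along the curve
# Collocation matrix: B[i, j] = N_j(u_i), built by evaluating each basis function.
B = np.column_stack([BSpline(t, np.eye(n_ctrl)[j], k)(u) for j in range(n_ctrl)])

# First/second finite-difference operators on the sampled curve (stretch/bend terms).
D1 = np.diff(np.eye(len(u)), 1, axis=0)
D2 = np.diff(np.eye(len(u)), 2, axis=0)

alpha, beta, gamma = 0.1, 0.01, 1.0      # stretching, bending, data weights (assumed)

# Internal "stiffness" matrix acting on the control points, assembled once.
A = gamma * B.T @ B + alpha * (D1 @ B).T @ (D1 @ B) + beta * (D2 @ B).T @ (D2 @ B)

def minimise(P0, tool_targets, steps=100, dt=0.05):
    """Semi-implicit descent: (I + dt*A) P_new = P + dt*gamma*B^T X."""
    P = P0.copy()
    lhs = np.eye(n_ctrl) + dt * A
    for _ in range(steps):
        P = np.linalg.solve(lhs, P + dt * gamma * (B.T @ tool_targets))
    return P

# Toy usage: pull a flat curve toward a bump (the "tool" feature points).
P0 = np.column_stack([np.linspace(0.0, 1.0, n_ctrl), np.zeros(n_ctrl)])
X = np.column_stack([u, 0.3 * np.exp(-((u - 0.5) ** 2) / 0.01)])  # target samples
print(minimise(P0, X).round(3))
```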

    Tools for fluid simulation control in computer graphics

    Physics-based animation can generate dynamic systems with very complex and realistic behaviors. Unfortunately, controlling them is a daunting task, and fluid simulation poses particularly difficult problems for the control process. Although many methods and tools have been developed to convincingly simulate and render fluids, too few provide efficient and intuitive control over a simulation. Since control often comes with extra computation on top of the simulation cost, art-directing a high-resolution simulation leads to long iterations of the creative process. To shorten this process, editing can be performed on a faster, low-resolution model. The creation of an art-directed fluid can therefore be split into two stages: a control stage, during which an artist modifies the behavior of a low-resolution simulation, and an upresolution stage, during which a final high-resolution version of this simulation is generated. This thesis presents two projects, each improving on the state of the art related to one of these two stages.
First, we introduce a new particle-based liquid control system. Using this system, an artist selects patches of precomputed liquid animation from a database and places them in a simulation to modify its behavior. At each simulation time step, our system uses these patches to control the simulation and reproduce the artist's vision. An intuitive graphical user interface, inspired by video editing tools, has been developed, allowing a non-technical user to easily edit a liquid animation. Second, a tracking solution for smoke upresolution is described. We propose to add an extra tracking step after the projection step of a classical Eulerian smoke simulation. During this step, we solve for a divergence-free velocity perturbation field, resulting in a better match of the low-frequency density distribution between the low-resolution guide and the high-resolution simulation. The resulting smoke animation faithfully reproduces the coarse aspect of the low-resolution input while being enhanced with simulated small-scale details.
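
    To make the upresolution tracking step concrete, the sketch below (a simplified, assumed 2D formulation, not the method developed in the thesis) derives a velocity perturbation from the density mismatch between the high-resolution field and an upsampled low-resolution guide, then projects it to be divergence-free using a periodic FFT Helmholtz-Hodge projection. The mismatch-gradient force and the strength parameter are illustrative choices.

```python
# Minimal 2D sketch (assumed, simplified) of a divergence-free tracking perturbation:
# push high-resolution density toward an upsampled low-resolution guide, then remove
# the curl-free part so the perturbation does not create or destroy mass.
import numpy as np

def project_divergence_free(u, v):
    """Remove the curl-free component of (u, v) on a periodic grid via FFT."""
    n, m = u.shape
    kx = np.fft.fftfreq(m) * 2.0 * np.pi
    ky = np.fft.fftfreq(n) * 2.0 * np.pi
    KX, KY = np.meshgrid(kx, ky)                  # KX varies along columns, KY along rows
    u_hat, v_hat = np.fft.fft2(u), np.fft.fft2(v)
    k2 = KX**2 + KY**2
    k2[0, 0] = 1.0                                # avoid dividing by zero for the mean mode
    div_hat = 1j * KX * u_hat + 1j * KY * v_hat
    u_hat -= 1j * KX * div_hat / k2
    v_hat -= 1j * KY * div_hat / k2
    return np.fft.ifft2(u_hat).real, np.fft.ifft2(v_hat).real

def tracking_perturbation(rho_high, rho_guide, strength=0.5):
    """Velocity perturbation driving rho_high toward the low-resolution guide."""
    mismatch = rho_guide - rho_high               # positive where the guide wants more density
    gy, gx = np.gradient(mismatch)                # simple mismatch-gradient force (assumed)
    return project_divergence_free(strength * gx, strength * gy)

# Toy usage: the guide is a centred blob, the high-resolution state a shifted one.
n = 64
y, x = np.mgrid[0:n, 0:n] / n
guide = np.exp(-((x - 0.50)**2 + (y - 0.5)**2) / 0.02)
high  = np.exp(-((x - 0.55)**2 + (y - 0.5)**2) / 0.02)
du, dv = tracking_perturbation(high, guide)
print("max perturbation magnitude:", float(np.hypot(du, dv).max()))
```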

    Doctor of Philosophy

    Volumetric parameterization is an emerging field in computer graphics, where volumetric representations that have a semi-regular tensor-product structure are desired in applications such as three-dimensional (3D) texture mapping and physically-based simulation. At the same time, volumetric parameterization is also needed in the Isogeometric Analysis (IA) paradigm, which uses the same parametric space for representing geometry, simulation attributes and solutions. One of the main advantages of the IA framework is that the user gets feedback directly as attributes of the NURBS model representation, which can represent geometry exactly, avoiding both the need to generate a finite element mesh and the need to reverse engineer the simulation results from the finite element mesh back into the model. Research in this area has largely been concerned with the quality of the analysis and simulation results, assuming the existence of a high-quality volumetric NURBS model that is appropriate for simulation. However, there are currently no generally applicable approaches to generating such a model or visualizing the higher-order smooth isosurfaces of the simulation attributes, either as part of current Computer Aided Design or Reverse Engineering systems and methodologies. Furthermore, even though the mesh generation pipeline is circumvented in IA, the quality of the model still significantly influences the analysis result. This work presents a pipeline to create, analyze and visualize NURBS geometries. Based on the concept of analysis-aware modeling, this work focuses in particular on methodologies to decompose a volumetric domain into simpler pieces based on appropriate midstructures while respecting other relevant interior material attributes. The domain is decomposed such that a tensor-product-style parameterization can be established on the subvolumes, where the parameterization matches along subvolume boundaries. The volumetric parameterization is optimized using gradient-based nonlinear optimization algorithms, and data-fitting methods are introduced to fit trivariate B-splines to the parameterized subvolumes with a guaranteed order of accuracy. A visualization method is then proposed that allows direct inspection of isosurfaces of attributes, such as the results of analysis, embedded in the NURBS geometry. Finally, the various methodologies proposed in this work are demonstrated on complex representations arising in practice and research.
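
    The tensor-product structure referred to above can be illustrated with a short sketch that evaluates a trivariate B-spline volume V(u, v, w) = sum_ijk N_i(u) N_j(v) N_k(w) C_ijk on a sample grid; the resulting samples are the kind of data from which attribute isosurfaces would be extracted. The degrees, knot vectors and control values below are illustrative assumptions, not data from the dissertation.

```python
# Minimal sketch of evaluating a trivariate tensor-product B-spline volume.
import numpy as np
from scipy.interpolate import BSpline

def clamped_knots(n_ctrl, degree):
    """Open/clamped knot vector on [0, 1] for n_ctrl control points."""
    interior = np.linspace(0.0, 1.0, n_ctrl - degree + 1)
    return np.concatenate(([0.0] * degree, interior, [1.0] * degree))

def basis_matrix(params, n_ctrl, degree, knots):
    """N[i, j] = value of the j-th basis function at params[i]."""
    return np.column_stack(
        [BSpline(knots, np.eye(n_ctrl)[j], degree)(params) for j in range(n_ctrl)])

def evaluate_volume(C, u, v, w, degree=2):
    """Evaluate a trivariate tensor-product B-spline at a grid of (u, v, w) samples."""
    nu, nv, nw = C.shape
    Bu = basis_matrix(u, nu, degree, clamped_knots(nu, degree))
    Bv = basis_matrix(v, nv, degree, clamped_knots(nv, degree))
    Bw = basis_matrix(w, nw, degree, clamped_knots(nw, degree))
    # Contract each parametric direction against its basis matrix.
    return np.einsum('ai,bj,ck,ijk->abc', Bu, Bv, Bw, C)

# Toy usage: a 5x5x5 grid of scalar control values (e.g. a simulation attribute),
# evaluated on a 20^3 sample grid; isosurfaces of `vals` could then be contoured.
C = np.random.default_rng(0).random((5, 5, 5))
s = np.linspace(0.0, 1.0, 20)
vals = evaluate_volume(C, s, s, s)
print(vals.shape, float(vals.min()), float(vals.max()))
```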

    MPEG-4 content creation: integration of MPEG-4 content creation tools into an existing animation tool

    This thesis provides a complete framework that enables the creation of photorealistic 3D human models in real-world environments. The approach allows a non-expert user to use any digital capture device to obtain four images of an individual and create a personalised 3D model for multimedia applications. To achieve this, the system must be automatic, and the reconstruction process must be flexible enough to account for information that is unavailable or incorrectly captured. In this approach the individual is automatically extracted from the environment using constrained active B-spline templates that are scaled and automatically initialised using only image information. These templates incorporate the energy-minimising framework of Active Contour Models, providing a suitable and flexible method for dealing with the adjustments in pose an individual can adopt. The final states of the templates describe the individual's shape. The contours in each view are combined to form a 3D B-spline surface that characterises an individual's maximal silhouette equivalent. The surface provides a mould that contains sufficient information to allow for the active deformation of an underlying generic human model. This modelling approach is performed using a novel technique that evolves active meshes to 3D for deforming the underlying human model while adaptively constraining it to preserve its existing structure. The active-mesh approach incorporates internal constraints that maintain the structural relationships of the vertices of the human model, while external forces deform the model to conform to the 3D surface mould. The strength of the internal constraints can be reduced to allow the model to adopt the exact shape of the bounding volume, or strengthened to preserve the internal structure, particularly in areas of high detail. This novel implementation provides a uniform framework that can be simply and automatically applied to the entire human model.
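
    A minimal sketch of the internal/external force balance described above, assuming a simplified 2D setting rather than the thesis's implementation: vertices are pulled toward the nearest samples of a target 'mould' (external force) while a graph-Laplacian term preserves the mesh's existing structure (internal constraint); the stiffness parameter plays the role of the adjustable internal-constraint strength. All names and the toy data are illustrative.

```python
# Minimal sketch (assumed formulation) of an active-mesh style deformation: nearest-point
# external forces pull vertices toward a target surface while an internal Laplacian
# constraint resists distortion of the existing structure.
import numpy as np

def laplacian(edges, n_verts):
    """Uniform graph Laplacian L = D - A from an edge list."""
    L = np.zeros((n_verts, n_verts))
    for i, j in edges:
        L[i, i] += 1.0
        L[j, j] += 1.0
        L[i, j] -= 1.0
        L[j, i] -= 1.0
    return L

def deform(verts, edges, target_pts, stiffness=0.5, dt=0.1, steps=200):
    """Semi-implicit update: (I + dt*stiffness*L) V_new = V + dt*(nearest target - V)."""
    V = verts.copy()
    lhs = np.eye(len(V)) + dt * stiffness * laplacian(edges, len(V))
    for _ in range(steps):
        # External force: pull each vertex toward its nearest target sample.
        d = np.linalg.norm(V[:, None, :] - target_pts[None, :, :], axis=2)
        pull = target_pts[d.argmin(axis=1)] - V
        V = np.linalg.solve(lhs, V + dt * pull)
    return V

# Toy usage: a small ring of vertices deforming toward a larger circle of "mould" samples.
n = 16
angles = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
verts = np.column_stack([np.cos(angles), np.sin(angles)]) * 0.5   # initial shape
target = np.column_stack([np.cos(angles), np.sin(angles)]) * 1.0  # mould samples
edges = [(i, (i + 1) % n) for i in range(n)]                      # closed loop
out = deform(verts, edges, target)
print(np.linalg.norm(out, axis=1).round(2))   # radii pulled from 0.5 toward the mould
```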