163 research outputs found

    3D Mesh Simplification Techniques for Enhanced Image Based Rendering

    Get PDF
    Three-dimensional videos and virtual reality applications have gained widespread popularity in recent years. Virtual reality creates the feeling of 'being there' and provides a more realistic experience than conventional 2D media. To achieve an immersive experience, two criteria must be satisfied: high visual quality of the video and timely rendering. However, satisfying both is often impractical, especially on low-capability devices such as mobile phones. Careful analysis and further processing of the depth map can help considerably in achieving these goals. Advances in graphics hardware have dramatically reduced the time required to render images. However, alongside this development, the demand for greater realism keeps increasing the complexity of virtual-environment models. Complex models require millions of primitives, which in turn means millions of polygons to represent them. A wise choice of rendering technique offers one way to reduce the rendering cost. Mesh-based rendering is one technique that renders faster than its counterpart, pixel-based rendering. However, owing to the demand for a richer experience, the number of polygons required always seems to exceed what the graphics hardware can efficiently render. In practice, storing a large number of polygons is not feasible given the storage limitations of mobile-phone hardware. Furthermore, a larger number of polygons increases the rendering time, which would necessitate more powerful devices. Mesh simplification techniques offer a solution for dealing with complex models. These methods simplify unimportant and redundant parts of the model, which helps reduce the rendering cost without negatively affecting the visual quality of the scene. Mesh simplification has been studied extensively; however, it has not been applied to all areas.
For example, depth maps are one area where generally available simplification methods are not well suited, as most of them do not handle depth discontinuities well. Moreover, some state-of-the-art methods cannot handle high-resolution depth maps. This thesis addresses the problem of combining depth maps with mesh simplification. Its aim is to reduce the computational cost of rendering by exploiting the homogeneous and planar areas of the depth map, while still maintaining suitable visual quality of the rendered image. Several depth decimation techniques are implemented and compared with available state-of-the-art methods. We demonstrate that the depth decimation technique that fits planes to depth regions and takes depth discontinuities into account clearly outperforms the state-of-the-art methods.
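The plane-fitting criterion described above can be sketched as follows: fit a least-squares plane to a depth-map patch and decimate the patch (replace it with two triangles) only if every sample lies close to that plane. This is an illustrative sketch under assumed data layout and tolerance, not the thesis's implementation; the helper names are hypothetical.

```python
# Hypothetical sketch: decide whether a depth-map patch is planar enough
# to be decimated. A patch is a list of (x, y, z) samples.

def fit_plane(points):
    """Least-squares fit of z = a*x + b*y + c; returns (a, b, c)."""
    # Accumulate the 3x3 normal equations (A^T A) p = A^T z.
    sxx = sxy = syy = sx = sy = 0.0
    sxz = syz = sz = 0.0
    n = float(len(points))
    for x, y, z in points:
        sxx += x * x; sxy += x * y; syy += y * y
        sx += x; sy += y
        sxz += x * z; syz += y * z; sz += z
    M = [[sxx, sxy, sx], [sxy, syy, sy], [sx, sy, n]]
    v = [sxz, syz, sz]
    # Plain Gaussian elimination on the 3x3 system (assumes a well-posed patch).
    for i in range(3):
        for j in range(i + 1, 3):
            f = M[j][i] / M[i][i]
            for k in range(3):
                M[j][k] -= f * M[i][k]
            v[j] -= f * v[i]
    p = [0.0, 0.0, 0.0]
    for i in (2, 1, 0):  # back substitution
        p[i] = (v[i] - sum(M[i][k] * p[k] for k in range(i + 1, 3))) / M[i][i]
    return tuple(p)

def patch_is_planar(points, tol=0.01):
    """True if all depth samples lie within tol of the fitted plane."""
    a, b, c = fit_plane(points)
    return all(abs(a * x + b * y + c - z) <= tol for x, y, z in points)
```

A decimator built on this test would keep dense triangulation only where `patch_is_planar` fails, e.g. across depth discontinuities.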

    A System for 3D Shape Estimation and Texture Extraction via Structured Light

    Get PDF
    Shape estimation is a crucial problem in the fields of computer vision, robotics, and engineering. This thesis explores a shape-from-structured-light (SFSL) approach using a pyramidal laser projector, together with its application to texture extraction. The specific SFSL system is chosen for its hardware simplicity and efficient software. The shape estimation system can estimate the 3D shape of both static and dynamic objects by relying on a fixed pattern. To eliminate the need for precision hardware alignment and to remove human error, novel calibration schemes were developed. In addition, selecting appropriate system geometry reduces the typical correspondence problem to a labeling problem. Simulations and experiments verify the effectiveness of the built system. Finally, we perform texture extraction by interpolating and resampling sparse range estimates, and subsequently flattening the 3D triangulated graph into a 2D triangulated graph via graph and manifold methods.
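Once correspondences between projected laser features and image pixels are labeled, depth recovery in a structured-light system of this kind reduces to intersecting back-projected camera rays with the known laser geometry. The following is a minimal sketch of that core computation; the intrinsics and plane parameters are illustrative assumptions, not values from the thesis.

```python
# Hedged sketch of structured-light triangulation: camera at the origin,
# laser modeled as a known plane n·X = d (all numbers are assumptions).

def pixel_ray(u, v, fx, fy, cx, cy):
    """Back-project pixel (u, v) through a pinhole camera with the
    given focal lengths and principal point; returns a ray direction."""
    return ((u - cx) / fx, (v - cy) / fy, 1.0)

def ray_plane_intersect(ray_dir, plane_n, plane_d):
    """Intersect a ray from the origin with the plane n·X = d.
    Assumes the ray is not parallel to the plane."""
    denom = sum(n * r for n, r in zip(plane_n, ray_dir))
    t = plane_d / denom  # ray parameter at the intersection
    return tuple(t * r for r in ray_dir)
```

With a labeled correspondence, one call per pixel yields a metric 3D point; calibration supplies the plane parameters that the thesis's schemes estimate.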

    Sketch-based skeleton-driven 2D animation and motion capture.

    Get PDF
    This research is concerned with the development of a set of novel sketch-based, skeleton-driven 2D animation techniques, which allow the user to produce realistic 2D character animation efficiently. The technique consists of three parts: sketch-based skeleton-driven 2D animation production, 2D motion capture, and a cartoon animation filter. Traditionally, 2D animation is produced by experienced animators drawing the key-frames manually, a laborious and time-consuming process. With the proposed techniques, the user only inputs one image of a character and sketches a skeleton for each subsequent key-frame; the system then deforms the character according to the sketches and produces the animation automatically. To perform 2D shape deformation, a variable-length needle model is developed, which divides the deformation into two stages: skeleton-driven deformation and nonlinear deformation in joint areas. This approach preserves the local geometric features and global area during animation. Compared with existing 2D shape deformation algorithms, it reduces the computational complexity while still yielding plausible deformation results. To capture the motion of a character from existing 2D image sequences, a 2D motion capture technique is presented. Since this technique is skeleton-driven, the motion of a 2D character is captured by tracking the joint positions. Using both geometric and visual features, this problem can be solved by optimization, which prevents self-occlusion and feature disappearance. After tracking, the motion data are retargeted to a new character using the deformation algorithm proposed in the first part. This facilitates the reuse of the characteristics of motion contained in existing moving images, making the process of cartoon generation easy for artists and novices alike. Subsequent to the 2D animation production and motion capture, a "Cartoon Animation Filter" is implemented and applied.
Following the animation principles, this filter processes two types of cartoon input: a single frame of a cartoon character and motion capture data from an image sequence. It adds anticipation and follow-through to the motion, with related squash-and-stretch effects.
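An exaggeration filter of the kind described can be sketched as subtracting a smoothed second derivative from a 1D motion curve, which produces a dip before a sharp change (anticipation) and an overshoot after it (follow-through). This is a hedged illustration of the general idea, not the thesis's exact filter; the box kernel and strength parameter are assumptions.

```python
# Illustrative motion-exaggeration filter (assumed formulation):
# out(t) = signal(t) - strength * second_derivative(smoothed_signal)(t)

def cartoon_filter(signal, strength=1.0, radius=2):
    """Exaggerate a 1D motion curve with anticipation and follow-through."""
    n = len(signal)
    # Box smoothing with windows clamped at the sequence ends.
    smooth = []
    for i in range(n):
        lo, hi = max(0, i - radius), min(n, i + radius + 1)
        smooth.append(sum(signal[lo:hi]) / (hi - lo))
    out = list(signal)  # endpoints are left unchanged
    for i in range(1, n - 1):
        # Discrete second derivative (Laplacian) of the smoothed curve.
        lap = smooth[i - 1] - 2.0 * smooth[i] + smooth[i + 1]
        out[i] = signal[i] - strength * lap
    return out
```

Applied to a step-like joint trajectory, the output dips below the start value just before the step and overshoots the end value just after it, which is exactly the anticipation/follow-through behaviour the abstract describes.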

    The VHP-F Computational Phantom and its Applications for Electromagnetic Simulations

    Get PDF
    Modeling of the electromagnetic, structural, thermal, or acoustic response of the human body to various external and internal stimuli is limited by the availability of anatomically accurate and numerically efficient computational models. The models currently approved for use are generally of proprietary or fixed format, preventing new model construction or customization. 1. This dissertation develops a new Visible Human Project - Female (VHP-F) computational phantom, constructed via segmentation of anatomical cryosection images taken in the axial plane of the human body. Its distinguishing property is its superior resolution of the human head. In its current form, the VHP-F model contains 33 separate objects describing a variety of human tissues within the head and torso. Each object is a non-intersecting 2-manifold model composed of contiguous triangular surface elements, making the VHP-F model compatible with major commercial and academic numerical simulators employing the Finite Element Method (FEM), Boundary Element Method (BEM), Finite Volume Method (FVM), and Finite-Difference Time-Domain (FDTD) Method. 2. This dissertation develops a new workflow, used to construct the VHP-F model, that may be employed to build accessible custom models from any medical image data source. The workflow is customizable and flexible, enabling the creation of standard and parametrically varying models and facilitating research on the impact of fluctuating body characteristics (for example, skin thickness) and of dynamic processes such as fluid pulsation. 3. This dissertation identifies, enables, and quantifies three new specific computational bioelectromagnetic problems, each of which is solved with the help of the developed VHP-F model: I. Transcranial Direct Current Stimulation (tDCS) of the human brain motor cortex with extracephalic versus cephalic electrodes; II. RF channel characterization within the cerebral cortex with novel small on-body directional antennas; III. Body Area Network (BAN) characterization and RF localization within the human body using the FDTD method and small antenna models with coincident phase centers. Each of these problems has been (or will be) the subject of a separate dedicated MS thesis.
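The FDTD method named above advances electric and magnetic field samples in a leapfrog loop over a spatial grid. A toy 1D free-space version, unrelated to the VHP-F geometry and with purely illustrative constants, shows the scheme's structure:

```python
import math

def fdtd_1d(steps=200, n=200, src=20):
    """Toy 1D FDTD loop: free space, normalized units, Courant factor 0.5,
    soft Gaussian source injected at cell `src`. Returns the final E field."""
    ez = [0.0] * n  # electric field samples
    hy = [0.0] * n  # magnetic field samples (staggered half a cell)
    c = 0.5         # Courant factor (stability in 1D requires c <= 1)
    for t in range(steps):
        for i in range(n - 1):                # update H from the curl of E
            hy[i] += c * (ez[i + 1] - ez[i])
        for i in range(1, n):                 # update E from the curl of H
            ez[i] += c * (hy[i] - hy[i - 1])
        ez[src] += math.exp(-((t - 30) ** 2) / 100.0)  # soft source
    return ez
```

A 3D solver applied to a phantom follows the same pattern, with material parameters (permittivity, conductivity) per tissue object entering the update coefficients cell by cell.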

    A Comparative Study on Polygonal Mesh Simplification Algorithms

    Get PDF
    Polygonal meshes are a common way of representing three-dimensional surface models in many different areas of computer graphics and geometry processing. However, with the evolution of technology, polygonal models are becoming more and more complex. As the complexity of the models increases, the visual approximation to real-world objects improves, but there is a trade-off between the cost of processing these models and the quality of the visual approximation. To reduce this cost, the number of polygons in a model can be reduced by mesh simplification algorithms. These algorithms are so widely used that nearly all popular mesh-editing libraries include at least one of them. In this work, the polygonal simplification algorithms embedded in the open-source libraries CGAL, VTK, and OpenMesh are compared using the Metro geometric-error measuring tool. In this way, we aim to provide guidance for developers choosing among publicly available mesh libraries when implementing polygonal mesh simplification.
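A Metro-style comparison boils down to measuring the geometric distance between the original and the simplified surface. A point-sampled stand-in for that error, the one-sided Hausdorff distance between sample sets, can be sketched in a few lines; the function name and brute-force sampling are assumptions for illustration, not Metro's actual algorithm.

```python
# Illustrative sketch: one-sided Hausdorff distance between two point
# samplings A and B of two surfaces (e.g. original vs. simplified mesh).

def hausdorff_one_sided(A, B):
    """max over a in A of the distance from a to its nearest point in B."""
    def d2(p, q):  # squared Euclidean distance
        return sum((pi - qi) ** 2 for pi, qi in zip(p, q))
    return max(min(d2(a, b) for b in B) for a in A) ** 0.5
```

Taking the maximum of both one-sided values gives the symmetric Hausdorff error; tools like Metro refine this by sampling the surfaces densely and measuring point-to-triangle distances rather than point-to-point.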

    Contours in Visualization

    Get PDF
    This thesis studies the visualization of set collections via contours and the relations among them. In the first part, dynamic Euler diagrams are used to communicate, and semi-manually improve, the results of clustering methods that allow clusters to overlap arbitrarily. The contours of the Euler diagram are rendered as implicit surfaces, called blobs in computer graphics. The interaction metaphor is moving items into or out of these blobs. The utility of the method is demonstrated on data arising from the analysis of gene expressions. The method works well for small datasets of up to one hundred items and few clusters. In the second part, these limitations are mitigated by employing a GPU-based rendering of Euler diagrams and by mixing textures and colors to resolve overlapping regions better. The GPU-based approach subdivides the screen into triangles on which it performs contour interpolation, i.e., a fragment shader determines for each pixel which zones of the Euler diagram it belongs to. The rendering speed is thus increased to allow several hundred items. The method is applied to an example comparing different document clustering results. The contour tree compactly describes scalar field topology. From the viewpoint of graph drawing, it is a tree with attributes at vertices and optionally on edges. Standard tree drawing algorithms emphasize structural properties of the tree and neglect the attributes. Adapting popular graph drawing approaches to the problem of contour tree drawing, it is found that they are unable to convey this information. Five aesthetic criteria for drawing contour trees are proposed, and a novel algorithm for drawing contour trees in the plane that satisfies four of these criteria is presented. The implementation is fast and effective for contour tree sizes usually encountered in interactive systems, and it also produces readable pictures for larger trees.
Dynamical models that explain the formation of spatial structures of RNA molecules have reached a complexity that requires novel visualization methods to analyze these models' validity. The fourth part of the thesis focuses on the visualization of so-called folding landscapes of a growing RNA molecule. Folding landscapes describe the energy of a molecule as a function of its spatial configuration; they are huge and high-dimensional. Their most salient features are described by their so-called barrier tree, a contour tree for discrete observation spaces. The changing folding landscapes of a growing RNA chain are visualized as an animation of the corresponding barrier tree sequence. The animation is created as an adaptation of the foresight layout with tolerance algorithm for dynamic graph layout. The adaptation requires changes to the concept of the supergraph and its layout. The thesis finishes with some thoughts on how these approaches can be combined, and on how the task the application should support can help inform the choice of visualization modality.

    Deformable Simplicial Complexes

    Get PDF
    In this dissertation we present a novel method for deformable interface tracking in 2D and 3D: deformable simplicial complexes (DSC). Deformable interfaces are used in several applications, such as fluid simulation, image analysis, reconstruction, and structural optimization. In the DSC method, the interface (a curve in 2D; a surface in 3D) is represented explicitly as a piecewise linear curve or surface. However, the domain is also discretized: triangulated in 2D; tetrahedralized in 3D. This way, the interface can alternatively be represented as the set of edges/triangles separating triangles/tetrahedra marked as outside from those marked as inside. Such an approach allows for robust topological adaptivity. Other advantages of deformable simplicial complexes include spatial adaptivity, the ability to handle and preserve sharp features, and the possibility of topology control. We demonstrate these strengths in several applications. In particular, a novel DSC-based fluid dynamics solver was developed during the PhD project. A special feature of this solver is that, because DSC maintains an explicit interface representation, surface tension is dealt with more easily. One particular advantage of DSC is that, as an alternative to topology adaptivity, topology control is also possible. This is exploited in the construction of cut loci on tori, where a front expands from a single point on a torus and stops when it self-intersects.
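The alternative interface representation described above, the edges separating inside triangles from outside ones, can be extracted in a few lines. The data layout (vertex-index triples plus a parallel inside/outside label list) is an assumption for illustration, not the DSC data structure itself.

```python
# Illustrative 2D sketch: recover the interface of an inside/outside
# labeling of a triangulation as the set of edges shared by one inside
# and one outside triangle.

def interface_edges(triangles, inside):
    """triangles: list of (i, j, k) vertex-index triples.
    inside: parallel list of booleans. Returns the separating edges."""
    owner = {}       # edge -> label of the first triangle that used it
    boundary = set()
    for tri, is_in in zip(triangles, inside):
        for i in range(3):
            # Undirected edge, stored with sorted endpoints.
            e = tuple(sorted((tri[i], tri[(i + 1) % 3])))
            if e in owner:
                if owner[e] != is_in:   # labels disagree across the edge
                    boundary.add(e)
            else:
                owner[e] = is_in
    return boundary
```

Because the labels live on the simplices, retriangulating the domain or flipping a label updates the interface implicitly, which is what makes topological changes robust in this representation.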

    3D Mesh Morphing (Métamorphose de maillage 3D)

    Get PDF
    This Ph.D. thesis specifically deals with the issue of the metamorphosis of 3D objects represented as 3D triangular meshes.
The objective is to elaborate a complete 3D mesh morphing methodology that ensures high-quality transition sequences: smooth and gradual, consistent with respect to both geometry and topology, and visually pleasant. Our first contributions concern two different approaches to parameterization: a new barycentric mapping algorithm based on the preservation of mesh length ratios, and a spherical parameterization technique exploiting a Gaussian curvature criterion. The experimental evaluation, carried out on 3D models of various shapes, demonstrated a considerable improvement in terms of mesh distortion for both methods. In order to align the features of the two input models, we considered a warping technique based on the CTPS C2a radial basis function, suitable for deforming the models' embeddings in the parametric domain while maintaining a valid mapping through the entire movement process. We show how this technique has to be adapted in order to warp meshes specified in the parametric domains. A final contribution consists of a novel algorithm for constructing a pseudo-metamesh that avoids the complex process of tracking edge intersections encountered in the state of the art; it also drastically reduces the number of vertices normally required in a supermesh structure. The obtained mesh structure is characterized by a small number of vertices and is able to approximate both the source and target shapes. The entire mesh morphing framework has been integrated into an interactive application that allows the user to control and visualize all the stages of the morphing process.

    Persistent Homology Tools for Image Analysis

    Get PDF
    Topological Data Analysis (TDA) is a young field of mathematics that has emerged rapidly since the first decade of this century from various works in algebraic topology and geometry. The goal of TDA and of its main tool, persistent homology (PH), is to provide topological insight into complex and high-dimensional datasets. We take this premise on board to gain more topological insight in digital image analysis and to quantify tiny low-level distortions that are undetectable except possibly by highly trained persons. Such image distortions could be caused intentionally (e.g., by morphing and steganography) or arise naturally in scan images of abnormal human tissues or organs as a result of the onset of cancer or other diseases. The main objective of this thesis is to design new image analysis tools based on persistent homological invariants representing simplicial complexes built on sets of pixel landmarks over a sequence of distance resolutions. We first propose innovative automatic techniques for selecting image pixel landmarks to build a variety of simplicial topologies from a single image. The effectiveness of each image landmark selection is demonstrated by testing on different image tampering problems such as morphed face detection, steganalysis, and breast tumour detection. Vietoris-Rips simplicial complexes are constructed from the image landmarks at increasing distance thresholds, and topological (homological) features are computed at each threshold and summarized in a form known as persistent barcodes. We vectorize the space of persistent barcodes using a technique known as persistent binning, whose strength we demonstrate for various image analysis purposes. Different machine learning approaches are adopted to develop automatic detection of tiny texture distortions in many image analysis applications. The homological invariants used in this thesis are the 0- and 1-dimensional Betti numbers.
We developed an innovative approach to designing persistent homology (PH) based algorithms for the automatic detection of the types of image distortion described above. In particular, we developed the first PH detector of morphing attacks on passport face biometric images. We demonstrate the significant accuracy of two such morph detection algorithms with four types of automatically extracted image landmarks: Local Binary Patterns (LBP), 8-neighbour super-pixels (8NSP), Radial-LBP (R-LBP), and centre-symmetric LBP (CS-LBP). Using any of these techniques yields several persistent barcodes that summarize persistent topological features, helping to gain insight into complex hidden structures not amenable to other image analysis methods. We also demonstrate the significant success of a similarly developed PH-based universal steganalysis tool capable of detecting secret messages hidden inside digital images. We further argue, through a pilot study, that building PH records from digital images can differentiate malignant from benign breast tumours in digital mammographic images. The research presented in this thesis creates new opportunities for building real applications based on TDA and highlights research challenges in a variety of image processing/analysis tasks. For example, we describe a TDA-based exemplar image inpainting technique (TEBI), superior to the existing exemplar algorithm, for the reconstruction of missing image regions.
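The 0-dimensional persistent barcodes mentioned above can be computed for a point cloud with a union-find pass over the pairwise edges sorted by length: every point is born at scale 0, and a component dies at the scale of the edge that merges it into another. This is a minimal sketch of that single computation, not the thesis's pipeline; landmark selection, 1-dimensional homology, and persistent binning are omitted.

```python
# Sketch: 0-dimensional persistence barcode of the Vietoris-Rips
# filtration of a point cloud, via union-find over sorted edges.
from itertools import combinations

def betti0_barcode(points):
    """Returns (birth, death) bars for connected components."""
    n = len(points)
    parent = list(range(n))
    def find(x):  # union-find with path halving
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x
    def dist(p, q):
        return sum((a - b) ** 2 for a, b in zip(p, q)) ** 0.5
    edges = sorted((dist(points[i], points[j]), i, j)
                   for i, j in combinations(range(n), 2))
    bars = []
    for d, i, j in edges:
        ri, rj = find(i), find(j)
        if ri != rj:                 # the edge merges two components,
            parent[ri] = rj          # so one bar dies at this scale
            bars.append((0.0, d))
    bars.append((0.0, float('inf')))  # the last component never dies
    return bars
```

The finite bars are exactly the minimum-spanning-tree edge lengths of the point cloud; vectorizing such barcodes (e.g. by binning bar lengths) produces the fixed-length features that the machine-learning stages consume.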