
    Surface reconstruction using variational interpolation

    Surface reconstruction of anatomical structures is an integral part of medical modeling. Contour information is extracted from serial cross-sections of tissue data and stored as "slice" files. Although several reasonably efficient triangulation algorithms reconstruct surfaces from slice data, the models they generate have a jagged or faceted appearance due to the large inter-slice distance created by the sectioning process; inconsistencies in user input aggravate the problem. We therefore created a method that reduces the effect of the inter-slice distance and tolerates inconsistencies in the user input. Our method, called piecewise weighted implicit functions, is based on the approach of weighting smaller implicit functions: it takes only a few slices at a time to construct each implicit function. The method builds on a technique called variational interpolation. Other approaches based on variational interpolation have the disadvantage of becoming unstable when the model is large, with more than a few thousand constraint points. Furthermore, tracing the intermediate contours becomes expensive for large models. Even though some fast fitting methods handle the instability problem, they offer no apparent improvement in contour-tracing time, because the value of each data point on the contour boundary is evaluated using a single large implicit function that essentially uses all constraint points. Our method handles both problems using a sliding-window approach. Because it uses only a local domain to construct each implicit function, it achieves a considerable run-time saving over the other methods. The resulting software produces interpolated models from large data sets in a few minutes on an ordinary desktop computer.
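    The abstract does not spell out the exact constraint or weighting scheme, so the following is only a minimal sketch of the general idea: fit a small variational (RBF) interpolant to the constraint points of a few neighbouring slices, then blend the local interpolants with window weights. All names (`fit_local_rbf`, `window_weight`), the triharmonic kernel choice and the use of offset constraints are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def triharmonic(r):
    # Kernel commonly used for 3-D variational interpolation: phi(r) = r^3
    return r ** 3

def fit_local_rbf(points, values):
    """Solve for RBF + affine weights so that f(points[i]) ~= values[i].

    points : (n, 3) constraint locations taken from a few adjacent slices
    values : (n,)   e.g. 0 for on-contour points, +1 for interior offset points
    """
    n = len(points)
    d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    A = triharmonic(d)
    P = np.hstack([np.ones((n, 1)), points])          # affine part (1, x, y, z)
    K = np.block([[A, P], [P.T, np.zeros((4, 4))]])
    rhs = np.concatenate([values, np.zeros(4)])
    return np.linalg.solve(K, rhs)

def eval_local_rbf(w, centers, x):
    """Evaluate one locally fitted implicit function at a 3-D point x."""
    n = len(centers)
    d = np.linalg.norm(x - centers, axis=-1)
    return triharmonic(d) @ w[:n] + w[n] + x @ w[n + 1:]

def eval_piecewise(x, windows, window_weight):
    """Blend the sliding-window implicit functions with spatial weights.

    windows       : list of (w, centers, z_center), one entry per slice window
    window_weight : weight as a function of distance to the window's slices
    """
    num, den = 0.0, 0.0
    for w, centers, z_center in windows:
        a = window_weight(x[2], z_center)
        if a > 0.0:
            num += a * eval_local_rbf(w, centers, x)
            den += a
    return num / max(den, 1e-12)
```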

    Large Model Visualization : Techniques and Applications

    The size of datasets in scientific computing is rapidly increasing. This increase is caused by a boost in processing power in recent years, which in turn was invested in greater accuracy and larger models. A similar trend enabled a significant improvement of medical scanners; modern scanners generate more than 1000 slices at a resolution of 512x512 in daily practice. Even in computer-aided engineering, typical models easily contain several million polygons. Unfortunately, data complexity is growing faster than the rendering performance of modern computer systems. This is not only due to the slower-growing graphics performance of the graphics subsystems, but in particular because of the significantly slower-growing memory bandwidth for transferring geometry and image data from main memory to the graphics accelerator. Large model visualization addresses this growing divide between data complexity and rendering performance. Most methods focus on reducing geometric or pixel complexity, which in turn reduces the memory bandwidth requirements. In this dissertation, we discuss new approaches from three different research areas. All approaches target the reduction of processing complexity to achieve an interactive visualization of large datasets. In the second part, we introduce applications of the presented approaches. Specifically, we introduce the new VIVENDI system for interactive virtual endoscopy, as well as other applications from mechanical engineering, scientific computing, and architecture.
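    As a minimal illustration of the "reduce geometric complexity before it reaches the bus" idea, the sketch below selects a precomputed level of detail from its projected screen-space error. This is a generic LOD heuristic, not one of the dissertation's algorithms, and all parameter names are assumptions.

```python
import math

def select_lod(levels, distance, fov_y, viewport_height, max_error_px=1.0):
    """Pick the coarsest precomputed level of detail whose geometric error,
    projected to the screen, stays below a pixel tolerance.

    levels   : list of (mesh, geometric_error) sorted from finest to coarsest,
               with the error expressed in world units
    distance : distance from the camera to the object
    fov_y    : vertical field of view in radians
    """
    for mesh, err in reversed(levels):                 # try the coarsest first
        # Pinhole projection of a world-space error to screen pixels
        projected = err * viewport_height / (2.0 * distance * math.tan(fov_y / 2.0))
        if projected <= max_error_px:
            return mesh
    return levels[0][0]                                # fall back to the finest level
```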

    Morphometric Otolith Analysis

    Fish otoliths have long played an important role in sustainable fisheries management. Stock assessment models currently in use rely on species-specific age profiles obtained from the seasonal patterns of growth marks that otoliths exhibit. We compare methods widely used in fisheries science (elliptical Fourier) with an industry-standardised encoding method (MPEG-7 Curvature Scale Space) and with a recent addition to shape modelling techniques (time-series shapelets) to determine which performs best. We investigate transform methods that retain size information and whether the boundary encoding method is affected by otolith age, performing tests over three 2-class otolith datasets across six discrete and concurrent age groups. The impact of segmentation methods is assessed to determine whether automated or expert methods of boundary extraction are more advantageous, and whether the constructed classifiers can be used at different institutions. Tests show that neither time-series shapelets nor Curvature Scale Space methods offer any real advantage over Fourier transform methods given mixed-age datasets. However, we show that size indices are most indicative of fisheries stock in younger single-age datasets, with shape holding more discriminatory potential in older samples. Whilst the commonly used Fourier transform methods generally return the best results, we show that classification of otolith boundaries is affected by the method of boundary segmentation. Hand-traced boundaries produce classifiers that are more robust to the segmentation method of the test data and are better suited to distributed classifiers. Additionally, we present a proof-of-concept study showing that high-energy synchrotron scans are a new, non-invasive method of modelling internal otolith structure, allowing comparison of slices along a near-infinite number of virtual complex planes.
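    As a rough illustration of Fourier-based boundary encoding (a simplified complex-coordinate variant, not the exact elliptical Kuhl-Giardina formulation used in fisheries work), here is a minimal sketch. The `keep_size` switch mirrors the abstract's point about retaining size information; everything else is an assumption.

```python
import numpy as np

def fourier_descriptors(boundary_xy, n_coeffs=20, keep_size=False):
    """Boundary shape descriptors via the FFT of the complex contour.

    boundary_xy : (n, 2) ordered points on a closed otolith outline
    keep_size   : if True, skip normalisation by the first harmonic, so the
                  descriptor retains absolute size information
    """
    z = boundary_xy[:, 0] + 1j * boundary_xy[:, 1]
    z = z - z.mean()                       # translation invariance
    mags = np.abs(np.fft.fft(z))
    if not keep_size and mags[1] > 0:
        mags = mags / mags[1]              # scale invariance via first harmonic
    # Using magnitudes discards phase, giving rotation/start-point invariance
    return np.concatenate([mags[1:1 + n_coeffs], mags[-n_coeffs:]])
```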

    Extracting root system architecture from X-ray micro computed tomography images using visual tracking

    X-ray micro computed tomography (µCT) is increasingly applied in plant biology as an imaging system that is valuable for the study of root development in soil, since it allows the three-dimensional and non-destructive visualisation of plant root systems. Variations in the X-ray attenuation values of root material, and the overlap in measured intensity values between roots and soil caused by water and organic matter, represent major challenges to the extraction of root system architecture. We propose a novel technique to recover root system information from X-ray CT data, using a strategy based on a visual tracking framework that embeds a modified level set method evolved using the Jensen-Shannon divergence. The model-guided search arising from the visual tracking approach makes the method less sensitive to the natural ambiguity of X-ray attenuation values in the image data and thus allows a better extraction of the root system. The method is extended by mechanisms that account for the plagiotropic response of roots as well as collisions between root objects originating from different plants grown and interacting within the same soil environment. Experimental results on monocot and dicot plants, grown in different soil textural types, show that root system information can be extracted successfully. Various global root system traits are measured from the extracted data and compared to results obtained with alternative methods.
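    The tracking framework above evolves a level set using the Jensen-Shannon divergence. Below is a minimal, generic sketch of that divergence applied to intensity histograms; how the divergence actually enters the authors' level-set energy is not detailed in the abstract, so the helper `region_histogram` and the scoring comment are illustrative assumptions.

```python
import numpy as np

def jensen_shannon_divergence(p, q, eps=1e-12):
    """JS divergence between two discrete distributions, e.g. grey-level
    histograms of a candidate root region and of the surrounding soil."""
    p = np.asarray(p, dtype=float) + eps
    q = np.asarray(q, dtype=float) + eps
    p /= p.sum()
    q /= q.sum()
    m = 0.5 * (p + q)
    kl = lambda a, b: np.sum(a * np.log(a / b))
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

def region_histogram(slice_img, mask, bins=64):
    """Histogram of CT attenuation values inside a binary region mask."""
    hist, _ = np.histogram(slice_img[mask], bins=bins,
                           range=(float(slice_img.min()), float(slice_img.max())))
    return hist

# A tracking step might score a candidate root cross-section in the next slice
# by how strongly its intensity distribution diverges from the soil background:
# score = jensen_shannon_divergence(region_histogram(img, root_mask),
#                                   region_histogram(img, soil_mask))
```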

    Reconstruction and rendering of time-varying natural phenomena

    While computer performance increases and computer-generated images become ever more realistic, the need for modeling computer graphics content is becoming stronger. To achieve photo-realism, detailed scenes have to be modeled, often with a significant amount of manual labour. Interdisciplinary research combining the fields of Computer Graphics, Computer Vision and Scientific Computing has led to the development of (semi-)automatic modeling tools that free the user from labour-intensive modeling tasks. The modeling of animated content is especially challenging: realistic motion is necessary to convince the audience of computer games, movies with mixed-reality content and augmented-reality applications. The goal of this thesis is to investigate automated modeling techniques for time-varying natural phenomena. The results of the presented methods are animated, three-dimensional computer models of fire, smoke, fluid flows and the motion of heated air.

    From 3D Models to 3D Prints: an Overview of the Processing Pipeline

    Due to the wide diffusion of 3D printing technologies, geometric algorithms for Additive Manufacturing are being invented at an impressive speed. Each single step, in particular along the Process Planning pipeline, can now count on dozens of methods that prepare the 3D model for fabrication, while analysing and optimizing geometry and machine instructions for various objectives. This report provides a classification of this huge state of the art, and elicits the relation between each single algorithm and a list of desirable objectives during Process Planning. The objectives themselves are listed and discussed, along with possible needs for tradeoffs. Additive Manufacturing technologies are broadly categorized to explicitly relate classes of devices and supported features. Finally, this report offers an analysis of the state of the art while discussing open and challenging problems from both an academic and an industrial perspective.
    Comment: European Union (EU); Horizon 2020; H2020-FoF-2015; RIA - Research and Innovation action; Grant agreement N. 68044
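    As a concrete example of one Process Planning step that such algorithms prepare and optimize, here is a minimal sketch of planar slicing of a triangle mesh into layers. This is a generic illustration, not an algorithm from the report, and degenerate cases such as vertices lying exactly on a cutting plane are ignored.

```python
import numpy as np

def slice_triangle(v0, v1, v2, z):
    """Intersect one triangle with the plane Z = z; return a segment or None."""
    pts = []
    for a, b in ((v0, v1), (v1, v2), (v2, v0)):
        da, db = a[2] - z, b[2] - z
        if (da < 0) != (db < 0):                  # edge crosses the plane
            t = da / (da - db)
            pts.append(a + t * (b - a))
    return (pts[0], pts[1]) if len(pts) == 2 else None

def slice_mesh(triangles, layer_height):
    """Cut a triangle mesh into planar layers of line segments.

    triangles : iterable of (3, 3) arrays, one row per vertex
    """
    tris = [np.asarray(t, dtype=float) for t in triangles]
    z_min = min(t[:, 2].min() for t in tris)
    z_max = max(t[:, 2].max() for t in tris)
    layers = []
    z = z_min + layer_height / 2.0                # cut through layer midpoints
    while z < z_max:
        segs = [s for t in tris if (s := slice_triangle(t[0], t[1], t[2], z))]
        layers.append((z, segs))
        z += layer_height
    return layers
```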

    Three dimensional flame reconstruction towards the study of fire-induced transmission line flashovers.

    Thesis (M.Sc.Eng.), University of KwaZulu-Natal, 2007. The work presented in this thesis focuses on the problem of reconstructing three-dimensional models of fire from real images. The intended application of the reconstructions is research into the phenomenon of fire-induced high-voltage flashover which, while a common problem, is not fully understood. As such, the reconstruction must estimate not only the geometry of the flame but also its internal density structure, using only a small set of synchronised images. Current flame reconstruction techniques are investigated, revealing that relatively little work has been done on the subject and that most techniques follow either an exclusively geometric or a tomographic direction. A novel method, termed the 3D Fuzzy Hull method, is proposed, incorporating aspects of tomography, statistical image segmentation and traditional object reconstruction techniques. By using physically based principles, the flame images are related to the relative flame density, allowing the problem to be tackled from a tomographic perspective. A variation of algebraic tomography is then used to estimate the internal density field of the flame. This is done within a geometric framework by integrating the fuzzy c-means image segmentation technique and the visual hull concept into the process. Results are presented using synthetic and real flame image sets.
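    The 3D Fuzzy Hull method combines fuzzy c-means segmentation with the visual hull and algebraic tomography. The sketch below covers only the fuzzy c-means part, applied to 1-D pixel intensities, to show where the soft "fuzzy" memberships come from; the implementation details are assumptions, not taken from the thesis.

```python
import numpy as np

def fuzzy_c_means(values, n_clusters=2, m=2.0, n_iter=100, tol=1e-5, seed=0):
    """Fuzzy c-means on 1-D pixel intensities: each pixel receives a membership
    in [0, 1] for every cluster (e.g. 'flame' vs 'background') rather than a
    hard label."""
    rng = np.random.default_rng(seed)
    x = np.asarray(values, dtype=float).ravel()
    u = rng.random((n_clusters, x.size))
    u /= u.sum(axis=0)                            # memberships sum to 1 per pixel
    for _ in range(n_iter):
        um = u ** m
        centers = (um @ x) / um.sum(axis=1)       # fuzzily weighted cluster means
        d = np.abs(x[None, :] - centers[:, None]) + 1e-12
        inv = d ** (-2.0 / (m - 1.0))
        u_new = inv / inv.sum(axis=0)             # standard FCM membership update
        if np.max(np.abs(u_new - u)) < tol:
            return centers, u_new
        u = u_new
    return centers, u

# The membership of each pixel in the brightest cluster could then serve as a
# relative density estimate before the tomographic step, e.g.:
# centers, u = fuzzy_c_means(image.ravel(), n_clusters=2)
# flame_membership = u[np.argmax(centers)].reshape(image.shape)
```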

    Modeling and Simulation in Engineering

    This book provides an open platform to establish and share knowledge developed by scholars, scientists, and engineers from all over the world about various applications of modeling and simulation in the design process of products in various engineering fields. The book consists of 12 chapters arranged in two sections (3D Modeling and Virtual Prototyping), reflecting the multidimensionality of applications related to modeling and simulation. Some of the most recent modeling and simulation techniques, as well as some of the most accurate and sophisticated software for treating complex systems, are applied. All the original contributions in this book are joined by the basic principle of a successful modeling and simulation process: as complex as necessary, and as simple as possible. The idea is to manipulate the simplifying assumptions in a way that reduces the complexity of the model (in order to enable real-time simulation) without altering the precision of the results.

    From small to large baseline multiview stereo : dealing with blur, clutter and occlusions

    This thesis addresses the problem of reconstructing the three-dimensional (3D) digital model of a scene from a collection of two-dimensional (2D) images taken of it. To address this fundamental computer vision problem, we propose three algorithms, which are the main contributions of this thesis. First, we solve multiview stereo with the off-axis aperture camera. This system has a very small baseline, as images are captured from viewpoints close to each other. The key idea is to change the size or the 3D location of the aperture of the camera so as to extract selected portions of the scene. Our imaging model takes both defocus and stereo information into account and allows shape reconstruction and image restoration to be solved in one go. The off-axis aperture camera can be used in a small-scale space where the camera motion is constrained by the surrounding environment, such as in 3D endoscopy. Second, to solve multiview stereo with a large baseline, we present a framework that poses the problem of recovering a 3D surface in the scene as a regularized minimal partition problem of a visibility function. The formulation is convex and hence guarantees that the solution converges to the global minimum. Our formulation is robust to view-varying extensive occlusions, clutter and image noise. At no stage during the estimation process does the method rely on the visual hull, 2D silhouettes, approximate depth maps, or knowledge of which views are dependent (i.e., overlapping) and which are independent (i.e., non-overlapping). Furthermore, the degenerate solution, the null surface, is not included as a global solution in this formulation. One limitation of this algorithm is that its computational complexity grows with the number of views combined simultaneously. To address this limitation, we propose a third formulation in which the visibility functions are integrated within a narrow band around the estimated surface by setting weights to each point along the optical rays. This thesis presents technical descriptions of each algorithm and detailed analyses showing how these algorithms improve on existing reconstruction techniques.
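    The convex visibility formulation summarised above is well beyond a few lines of code. Purely as a point of reference for the underlying multiview stereo problem, here is a minimal, generic plane-sweep photo-consistency sketch; it assumes calibrated pinhole cameras with known 3x4 projection matrices, it is not the thesis's method, and all names are illustrative.

```python
import numpy as np

def plane_sweep_cost(ref_img, ref_P, src_imgs, src_Ps, depths, pixel):
    """Score each depth hypothesis for one reference pixel by colour agreement
    across the other calibrated views (a generic multiview stereo baseline).

    ref_P, src_Ps : 3x4 camera projection matrices
    depths        : candidate distances along the reference viewing ray
    pixel         : (u, v) integer coordinates in the reference image
    """
    M = ref_P[:, :3]                                   # K @ R
    origin = -np.linalg.inv(M) @ ref_P[:, 3]           # camera centre
    ray = np.linalg.inv(M) @ np.array([pixel[0], pixel[1], 1.0])
    ray /= np.linalg.norm(ray)                         # unit ray direction
    ref_colour = ref_img[pixel[1], pixel[0]].astype(float)
    costs = []
    for d in depths:
        X = origin + d * ray                           # 3-D point hypothesis
        diffs = []
        for img, P in zip(src_imgs, src_Ps):
            x = P @ np.append(X, 1.0)                  # project into source view
            u, v = int(round(x[0] / x[2])), int(round(x[1] / x[2]))
            if 0 <= v < img.shape[0] and 0 <= u < img.shape[1]:
                diffs.append(np.abs(img[v, u].astype(float) - ref_colour).sum())
        costs.append(np.mean(diffs) if diffs else np.inf)
    return np.array(costs)   # argmin over depths gives the photo-consistent depth
```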