26 research outputs found

    Dyadic Splines

    Dyadic splines are a simple and efficient function representation that supports multiresolution design and analysis. These splines are defined as limits of a process that alternately doubles and perturbs a sequence of points, using B-spline subdivision to perform the doubling smoothly. An interval-query algorithm is presented that efficiently and flexibly evaluates a limit function for points and intervals. Methods are given for fitting these functions to input data and for minimizing the energy and redundancy of the representation. Several methods are given for designing dyadic splines by controlling the perturbations of the limit process. Several applications are explored, including shape design, synthesis of terrain and other natural forms, and compression.
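    The doubling step in the abstract is standard B-spline subdivision; as a minimal illustrative sketch (assuming cubic Lane-Riesenfeld subdivision on a closed control polygon; the names and details here are this rewrite's, not the paper's), one level of the limit process doubles the points smoothly and then perturbs them:

```python
import numpy as np

def bspline_double(points, degree=3):
    # One Lane-Riesenfeld step: duplicate every control point, then
    # apply `degree` rounds of midpoint averaging (degree 3 = cubic).
    doubled = np.repeat(points, 2, axis=0)
    for _ in range(degree):
        doubled = 0.5 * (doubled + np.roll(doubled, -1, axis=0))
    return doubled

def dyadic_spline_level(points, perturbation):
    # One level of the dyadic process: smooth doubling followed by a
    # designer- or data-driven offset per new point.
    return bspline_double(np.asarray(points, float)) + perturbation
```

    Iterating this level step with shrinking perturbations converges to the limit function that the paper's interval-query algorithm evaluates.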

    Three-dimensional modeling of natural heterogeneous objects

    In medicine and related fields, when a natural object is to be studied, computed tomography images are taken through several parallel slices. These slices are then stacked into volume data and reconstructed into 3D computer models. To successfully build 3D computer models of natural heterogeneous objects, accurate identification and extraction of all regions comprising the object is important. However, building 3D computer models of natural heterogeneous objects from medical images remains a challenging problem and poses two issues related to the inaccuracies that arise from, and are inherent to, the data acquisition process. The first issue is the appearance of aliasing artifacts at the boundary between regions, a common problem in mesh generation from medical images, also known as stair-stepped artifacts. The second issue is the extraction of smooth 3D multi-region meshes that conform to the region boundaries of the natural heterogeneous objects described in the medical images. To solve these issues, the CAREM method and the RAM method are proposed. The emphasis of this research is placed on the accuracy and shape fidelity needed for biomedical applications. All implicitly represented regions composing the natural heterogeneous object are used to generate meshes adapted to the requirements of finite element methods through a reverse engineering modeling approach; these regions are thus considered as a whole rather than as loosely assembled parts.

    Hierarchical occlusion culling for arbitrarily-meshed height fields

    Many graphics applications today need high-speed 3D visualization of height fields. Most of these applications deal with the display of digital terrain models characterized by a simple, but vast, non-overlapping mesh of triangles. A great deal of research has been done to find methods of optimizing such systems. The goal of this work is to establish an algorithm that efficiently preprocesses a hierarchical height field model, enabling real-time culling of occluded geometry while still allowing for classic terrain-rendering frameworks. By exploiting the planar-monotone characteristics of height fields, it is possible to create a unique and efficient occlusion culling method that is optimized for terrain rendering and similar applications. Previous work has shown that culling is possible with certain regularly-gridded height field models, but not until now has a system been shown to work with all height fields, regardless of how their meshes are constructed. By freeing the system of meshing restrictions, it is possible to incorporate a number of broader height field algorithms into widely-used applications such as flight simulators, GIS, and computer games.
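    The abstract does not spell out the visibility test itself; as a rough sketch of the kind of occlusion reasoning height fields permit (a standard horizon sweep, used here purely for illustration rather than the thesis's hierarchical algorithm), terrain along a view ray occludes everything that stays below the running horizon angle:

```python
import numpy as np

def visible_flags(heights_along_ray, eye_height):
    # Walk outward from the eye along one ray across the height field;
    # a sample is visible only if it rises above the horizon formed by
    # all nearer terrain. Occluded spans can be culled before rendering.
    horizon = -np.inf
    flags = []
    for dist, h in enumerate(heights_along_ray, start=1):
        slope = (h - eye_height) / dist  # tangent of elevation angle
        flags.append(slope > horizon)
        horizon = max(horizon, slope)
    return flags

print(visible_flags([2.0, 1.0, 4.0, 2.5], eye_height=1.5))
# -> [True, False, True, False]
```

    A hierarchy storing per-block height maxima lets whole regions be rejected by the same test at once.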

    Contribution to structural parameters computation: volume models and methods

    Bio-CAD and in-silico experimentation are attracting growing interest in biomedical applications, where scientific data coming from real samples are used to compute structural parameters that allow physical properties to be evaluated. Non-invasive imaging acquisition technologies such as CT, micro-CT (µCT), or MRI, together with the constant growth of computing capabilities, allow the acquisition, processing, and visualization of scientific data of increasing complexity. Structural parameter computation is based on the existence of two phases (or spaces) in the sample: the solid phase, which may correspond to bone or material, and the empty or porous phase; such samples are therefore represented as binary volumes. The most common representation model for these datasets is the voxel model, the natural extension to 3D of 2D bitmaps. In this thesis, the Extreme Vertices Model (EVM) and a newly proposed model, the Compact Union of Disjoint Boxes (CUDB), are used to represent binary volumes in a much more compact way. EVM stores only a sorted subset of the vertices of the object's boundary, whereas CUDB keeps a compact list of boxes. Methods are proposed to compute the following structural parameters: pore-size distribution, connectivity, orientation, sphericity, and roundness. The pore-size distribution helps interpret the characteristics of porous samples by letting users observe the most common pore diameter ranges as peaks in a graph. Connectivity is a topological property related to the genus of the solid space; it measures the level of interconnectivity among elements and is an indicator of the biomechanical characteristics of bone or other materials. The orientation of a shape can be defined by rotation angles around a set of orthogonal axes. Sphericity is a measure of how spherical a particle is, whereas roundness is a measure of the sharpness of a particle's edges and corners. The study of these parameters requires dealing with real samples scanned at high resolution, which usually produce huge datasets that demand large amounts of memory and processing time to analyze. For this reason, a new method is presented to simplify binary volumes in a progressive and lossless way. This method generates a level-of-detail sequence of objects, where each object is a bounding volume of the previous objects. Besides supporting structural parameter computation, this method is practical for tasks such as progressive transmission, collision detection, and volume-of-interest computation. As part of multidisciplinary research, two practical applications have been developed to compute structural parameters of real samples: a program for automatic detection of characteristic viscosity points in basalt rock and glass samples, and another to compute the sphericity and roundness of complex forms in a silica dataset.
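    The abstract leaves the sphericity measure unspecified; a common definition (an assumption here, not necessarily the one used in the thesis) is Wadell's sphericity, the surface area of the volume-equivalent sphere divided by the particle's actual surface area:

```python
import math

def wadell_sphericity(volume, surface_area):
    # Area of a sphere with the particle's volume: pi^(1/3) * (6V)^(2/3).
    sphere_area = math.pi ** (1 / 3) * (6 * volume) ** (2 / 3)
    return sphere_area / surface_area  # 1.0 for a perfect sphere

# A unit cube (volume 1, area 6) gives ~0.806.
print(wadell_sphericity(1.0, 6.0))
```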

    Lazy Image Processing: An Investigation into Applications of Lazy Functional Languages to Image Processing

    The suitability of lazy functional languages for image processing applications is investigated by writing several image processing algorithms. The evaluation is done from an application programmer's point of view, and the criteria include ease of writing and reading, and efficiency. Lazy functional languages are claimed to be easy to write and read, as well as efficient. This is partly because these languages have mechanisms to improve modularity, such as higher-order functions, and partly because no subexpression is evaluated until its value is required: unnecessary operations are automatically eliminated, so programs can be executed efficiently. In image processing the amount of data handled is generally so large that much programming effort is typically spent on tasks such as managing memory and sequencing routine operations in order to improve efficiency. Lazy functional languages should therefore be a good tool for writing image processing applications. However, little practical or experimental evidence on this subject has been reported, since image processing has mostly been written in imperative languages. The discussion starts from the implementation of simple algorithms such as pointwise and local operations. It is shown that a large number of algorithms can be composed from a small number of higher-order functions. Geometric transformations, for which lazy functional languages are considered particularly suitable, are then implemented. As for representations of images, lists and hierarchical data structures including binary trees and quadtrees are implemented. Throughout the discussion, it is demonstrated that the laziness of the languages improves modularity and efficiency. In particular, no pixel calculation is performed unless the user explicitly requests pixels, and consecutive transformations are straightforward and involve no quantisation errors. Other topics discussed include a method to combine pixel images with images expressed as continuous functions. Some benchmarks are also presented.
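    Although the thesis works in a lazy functional language, the demand-driven behaviour it exploits can be loosely illustrated in Python (an illustration by this rewrite, not the thesis's code) by representing an image as a function from continuous coordinates to values: composing transformations does no pixel work, and no quantisation occurs until sampling:

```python
import math

def disc(x, y):
    # An "image" is just a function over continuous coordinates.
    return 1.0 if x * x + y * y < 1.0 else 0.0

def rotate(image, angle):
    # A geometric transform composes coordinate functions; chains of
    # rotations cost nothing per pixel and accumulate no quantisation.
    c, s = math.cos(angle), math.sin(angle)
    return lambda x, y: image(c * x + s * y, -s * x + c * y)

def render(image, size, scale):
    # Pixels are computed only here, when explicitly requested.
    return [[image((i - size / 2) * scale, (j - size / 2) * scale)
             for i in range(size)] for j in range(size)]

img = rotate(rotate(disc, 0.3), -0.3)  # composes, computes nothing yet
pixels = render(img, 8, 0.25)          # evaluation happens on demand
```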

    Analysis and Modular Approach for Text Extraction from Scientific Figures on Limited Data

    Scientific figures are widely used as compact, comprehensible representations of important information. The re-usability of these figures is limited, however, as one can rarely search for them directly: they are mostly indexed by their surrounding text (e.g., publication or website), which often does not convey the full message of the figure. In this thesis, the focus is on making the content of scientific figures accessible by extracting the text from these figures. A modular pipeline for unsupervised text extraction from scientific figures, based on a thorough analysis of the literature, was built to address the problem. This modular pipeline was used to build several unsupervised approaches and to evaluate different methods from the literature as well as new methods and method combinations. Some supervised approaches were built as well for comparison. One challenge in evaluating the approaches was the lack of annotated data, which especially had to be considered when building the supervised approaches. Three existing datasets were used for evaluation, along with two manually created and annotated datasets totalling 241 scientific figures. Additionally, two existing datasets for text extraction from other types of images were used for pretraining the supervised approach. Several experiments showed the superiority of the unsupervised pipeline over common Optical Character Recognition engines and identified the best unsupervised approach. This unsupervised approach was compared with the best supervised approach, which, despite the limited amount of training data available, clearly outperformed the unsupervised approach.

    Scalable Real-Time Rendering for Extremely Complex 3D Environments Using Multiple GPUs

    In 3D visualization, real-time rendering of high-quality meshes in complex 3D environments remains one of the major challenges in computer graphics. New data acquisition techniques like 3D modeling and scanning have drastically increased the demand for more complex models and higher display resolutions in recent years. Most existing acceleration techniques that render on a single GPU suffer from the limited GPU memory budget, time-consuming sequential execution, and finite display resolution. Recently, people have started building commodity workstations with multiple GPUs and multiple displays. As a result, more GPU memory is available across a distributed cluster of GPUs, more computational power is provided through the combination of multiple GPUs, and a higher display resolution can be achieved by connecting each GPU to a display monitor (resulting in a tiled large display configuration). However, a multi-GPU workstation may not always deliver the desired rendering performance, due to imbalanced rendering workloads among GPUs and overheads caused by inter-GPU communication. In this dissertation, I contribute a multi-GPU multi-display parallel rendering approach for complex 3D environments that supports high-performance, high-quality rendering of static and dynamic 3D environments. A novel parallel load balancing algorithm is developed, based on a screen partitioning strategy, to dynamically balance the number of vertices and triangles rendered by each GPU. The overhead of inter-GPU communication is minimized by a novel frame exchanging algorithm that transfers only a small number of image pixels rather than chunks of 3D primitives. State-of-the-art parallel mesh simplification and GPU out-of-core techniques are integrated into the multi-GPU multi-display system to accelerate the rendering process.
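    The load balancer is described only at a high level; as a minimal sketch of the general idea of dynamic screen partitioning (this rewrite's illustration, not the dissertation's algorithm), the screen can be cut into contiguous column spans so that each GPU receives roughly the same primitive load:

```python
import numpy as np

def balance_partitions(tris_per_column, n_gpus):
    # Cut the screen's columns where the running triangle count crosses
    # equal shares of the total (a prefix-sum partition).
    prefix = np.cumsum(tris_per_column)
    total = prefix[-1]
    cuts = [int(np.searchsorted(prefix, total * (k + 1) / n_gpus))
            for k in range(n_gpus - 1)]
    return [0] + cuts + [len(tris_per_column)]

# Skewed per-column loads, 4 GPUs -> boundaries between column spans.
print(balance_partitions(np.array([10, 30, 20, 40, 10, 30, 20, 40]), 4))
```

    Recomputing the cuts each frame from the previous frame's counts keeps the partitions tracking the scene as it changes.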

    A biomechanical approach for real-time tracking of lung tumors during External Beam Radiation Therapy (EBRT)

    Lung cancer is the most common cause of cancer-related death in both men and women. Radiation therapy is widely used for lung cancer treatment, but it can be challenging due to respiratory motion. Motion modeling is a popular method for respiratory motion compensation, and biomechanics-based motion models are believed to be more robust and accurate, as they are based on the physics of motion. In this study, we aim to develop a biomechanics-based lung tumor tracking algorithm that can be used during External Beam Radiation Therapy (EBRT). An accelerated lung biomechanical model can be used during EBRT only if its boundary conditions (BCs) are defined in a way that allows them to be updated in real time. As such, we have developed a lung finite element (FE) model in conjunction with a Neural Network (NN)-based method for predicting the BCs of the lung model from chest surface motion data. To develop the lung FE model for tumor motion prediction, thoracic 4D CT images of lung cancer patients were processed to capture the lung and diaphragm geometry, trans-pulmonary pressure, and diaphragm motion. Next, chest surface motion was obtained by tracking the motion of the ribcage in the 4D CT images; this was done to simulate surface motion data that can be acquired using optical tracking systems. Finally, two feedforward NNs were developed, one for estimating the trans-pulmonary pressure and another for estimating the diaphragm motion from chest surface motion data. The algorithm development consists of four steps: 1) automatic segmentation of the lungs and diaphragm; 2) diaphragm motion modelling using Principal Component Analysis (PCA); 3) development of the lung FE model; and 4) use of two NNs to estimate the trans-pulmonary pressure values and diaphragm motion from chest surface motion data, as sketched below. The results indicate that the Dice similarity coefficient between actual and simulated tumor volumes ranges from 0.76±0.04 to 0.91±0.01, which is favorable. Real-time lung tumor tracking during EBRT using the proposed algorithm is therefore feasible, and further clinical studies involving lung cancer patients to assess the algorithm's performance are justified.
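    As a schematic of the final step only (the shapes, sizes, and data below are placeholders invented for illustration; the abstract does not specify the networks or features), two feedforward regressors map chest surface motion to the two kinds of boundary conditions:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 30))           # stand-in surface motion features
y_pressure = rng.normal(size=(200,))     # stand-in trans-pulmonary pressures
y_diaphragm = rng.normal(size=(200, 3))  # stand-in diaphragm PCA weights

# One feedforward network per boundary condition, as in the abstract.
pressure_net = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000)
diaphragm_net = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000)
pressure_net.fit(X, y_pressure)
diaphragm_net.fit(X, y_diaphragm)

# During EBRT, each new surface sample yields updated BCs for the FE model.
bc_pressure = pressure_net.predict(X[:1])
bc_diaphragm = diaphragm_net.predict(X[:1])
```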

    Diamond-based models for scientific visualization

    Hierarchical spatial decompositions are a basic modeling tool in a variety of application domains, including scientific visualization, finite element analysis, and shape modeling and analysis. A popular class of such approaches is based on the regular simplex bisection operator, which bisects simplices (e.g. line segments, triangles, tetrahedra) along the midpoint of a predetermined edge. Regular simplex bisection produces adaptive simplicial meshes of high geometric quality, while simplifying the extraction of crack-free, or conforming, approximations to the original dataset. Efficient multiresolution representations for such models have been achieved in 2D and 3D by clustering sets of simplices sharing the same bisection edge into structures called diamonds. In this thesis, we introduce several diamond-based approaches for scientific visualization. We first formalize the notion of diamonds in arbitrary dimensions in terms of two related simplicial decompositions of hypercubes. This enables us to enumerate the vertices, simplices, parents, and children of a diamond. In particular, we identify the number of simplices involved in conforming updates to be factorial in the dimension, and we group these into a linear number of subclusters of simplices that are generated simultaneously. The latter form the basis for a compact pointerless representation for conforming meshes generated by regular simplex bisection, and for efficiently navigating the topological connectivity of these meshes. Secondly, we introduce the supercube as a high-level primitive on such nested meshes, based on the atomic units within the underlying triangulation grid. We propose the use of supercubes to associate information with coherent subsets of the full hierarchy, and we demonstrate the effectiveness of such a representation for modeling multiresolution terrain and volumetric datasets. Next, we introduce Isodiamond Hierarchies, a general framework for spatial access structures on a hierarchy of diamonds that exploits the implicit hierarchical and geometric relationships of the diamond model. We use an isodiamond hierarchy to encode irregular updates to a multiresolution isosurface or interval volume in terms of regular updates to diamonds. Finally, we consider nested hypercubic meshes, such as quadtrees, octrees, and their higher-dimensional analogues, through the lens of diamond hierarchies. This allows us to determine the relationships involved in generating balanced hypercubic meshes and to propose a compact pointerless representation of such meshes. We also provide a local diamond-based triangulation algorithm to generate high-quality conforming simplicial meshes.
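    To make the basic operator concrete, here is a minimal 2D sketch (this rewrite's illustration; conventions such as which edge is the refinement edge are assumed, not taken from the thesis) of regular simplex bisection splitting a triangle at the midpoint of its refinement edge:

```python
def bisect_triangle(tri):
    # tri = (v0, v1, v2) with (x, y) vertices; by the convention assumed
    # here, the refinement edge is v0-v1.
    v0, v1, v2 = tri
    mid = ((v0[0] + v1[0]) / 2, (v0[1] + v1[1]) / 2)
    # Both children share the new vertex. In a diamond, every simplex
    # sharing the bisection edge is split together, keeping the mesh
    # conforming (crack-free).
    return (v0, mid, v2), (mid, v1, v2)

left, right = bisect_triangle(((0, 0), (2, 0), (1, 1)))
```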

    GPU data structures for graphics and vision

    Graphics hardware has in recent years become increasingly programmable, and its programming APIs use the stream processor model to expose massive parallelism to the programmer. Unfortunately, the inherent restrictions of the stream processor model, which the GPU relies on to maintain high performance, often pose a problem when porting CPU algorithms for video and volume processing to graphics hardware. Serial data dependencies, which accelerate CPU processing, are counterproductive for the data-parallel GPU. This thesis demonstrates new ways of tackling well-known problems of large-scale video/volume analysis. In some instances, we enable processing on the restricted hardware model by re-introducing algorithms from early computer graphics research. On other occasions, we use newly discovered hierarchical data structures to circumvent the random-access-read/fixed-write restriction that had previously kept sophisticated analysis algorithms from running solely on graphics hardware. For 3D processing, we apply known game graphics concepts such as mip-maps, projective texturing, and dependent texture lookups to show how video/volume processing can benefit algorithmically from being implemented in a graphics API. The novel GPU data structures provide drastically increased processing speed and lift processing-heavy operations to real-time performance levels, paving the way for new and interactive vision/graphics applications.
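    As a rough CPU-side illustration of one of the named concepts (this rewrite's example, not the thesis's code), a mip-map is a pyramid of progressively downsampled images, built level by level; once built, reductions over large regions collapse into a few lookups at coarse levels:

```python
import numpy as np

def build_mipmaps(image):
    # Average-pooling pyramid over a square power-of-two image, each
    # level halving the resolution, mirroring GPU mip-level generation.
    levels = [image.astype(float)]
    while levels[-1].shape[0] > 1:
        a = levels[-1]
        levels.append(0.25 * (a[0::2, 0::2] + a[1::2, 0::2] +
                              a[0::2, 1::2] + a[1::2, 1::2]))
    return levels

pyr = build_mipmaps(np.arange(16.0).reshape(4, 4))
print(pyr[-1])  # the 1x1 top level holds the global mean (7.5)
```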