13 research outputs found

    Contribution to structural parameters computation: volume models and methods

    Bio-CAD and in-silico experimentation are attracting growing interest in biomedical applications, where scientific data from real samples are used to compute structural parameters that allow physical properties to be evaluated. Non-invasive imaging technologies such as CT, micro-CT or MRI, together with the constant growth of computing power, allow the acquisition, processing and visualization of scientific data of increasing complexity. Structural parameter computation relies on the existence of two phases (or spaces) in the sample: the solid phase, which may correspond to bone or another material, and the empty or porous phase; such samples are therefore represented as binary volumes. The most common representation model for these datasets is the voxel model, the natural 3D extension of 2D bitmaps. In this thesis, the Extreme Vertices Model (EVM) and a newly proposed model, the Compact Union of Disjoint Boxes (CUDB), are used to represent binary volumes in a much more compact way. EVM stores only a sorted subset of the vertices of the object's boundary, whereas CUDB keeps a compact list of boxes. Methods are proposed to compute the following structural parameters: pore-size distribution, connectivity, orientation, sphericity and roundness. The pore-size distribution helps to interpret the characteristics of porous samples by letting users observe the most common pore diameter ranges as peaks in a graph. Connectivity is a topological property related to the genus of the solid space; it measures the level of interconnectivity among elements and is an indicator of the biomechanical characteristics of bone or other materials. The orientation of a shape can be defined by rotation angles around a set of orthogonal axes. Sphericity measures how spherical a particle is, whereas roundness measures the sharpness of a particle's edges and corners.
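As a concrete illustration of the sphericity parameter (a minimal sketch on a plain voxel model, not the thesis's EVM/CUDB implementation), one standard definition is Wadell's sphericity, the ratio of the surface area of a sphere of equal volume to the particle's own surface area:

```python
import numpy as np

def wadell_sphericity(volume_voxels: np.ndarray, voxel_size: float = 1.0) -> float:
    """Wadell sphericity of a binary voxel object:
    psi = pi^(1/3) * (6V)^(2/3) / A, where V is the object's volume and
    A its surface area. A perfect sphere gives psi = 1.0."""
    solid = volume_voxels.astype(bool)
    V = solid.sum() * voxel_size ** 3
    # Surface area: count voxel faces exposed to the empty phase by
    # looking for solid/empty transitions along each axis.
    padded = np.pad(solid, 1)
    A = 0
    for axis in range(3):
        A += np.count_nonzero(np.diff(padded.astype(np.int8), axis=axis))
    A *= voxel_size ** 2
    return (np.pi ** (1 / 3)) * (6 * V) ** (2 / 3) / A

# A single cubic voxel: V = 1, A = 6 -> psi = pi^(1/3) * 6^(2/3) / 6 ~ 0.806
print(round(wadell_sphericity(np.ones((1, 1, 1))), 3))  # 0.806
```

Note that the voxel-face surface area overestimates the true smooth surface, so in practice the measure is best used comparatively; it is also scale-invariant, so a 10x10x10 cube gives the same value as a single voxel.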
Studying these parameters requires dealing with real samples scanned at high resolution, which usually produces huge datasets that demand a lot of memory and long processing times. For this reason, a new method to simplify binary volumes in a progressive and lossless way is presented. This method generates a level-of-detail sequence of objects in which each object is a bounding volume of the previous ones. Besides supporting the structural parameter computation, this method is practical for tasks such as progressive transmission, collision detection and volume-of-interest computation. As part of a multidisciplinary research effort, two practical applications have been developed to compute structural parameters of real samples: a software tool for the automatic detection of characteristic viscosity points in basalt rock and glass samples, and another to compute the sphericity and roundness of complex forms in a silica dataset.
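The bounding-volume property of such a level-of-detail sequence can be sketched with OR-downsampling (an illustrative toy, not the thesis's EVM/CUDB method): a coarse voxel is marked solid if any of its 2x2x2 children is solid, so each coarser level is guaranteed to enclose the finer one.

```python
import numpy as np

def or_downsample(vol: np.ndarray) -> np.ndarray:
    """Coarsen a binary volume by 2x per axis: a coarse voxel is solid
    if ANY of its 2x2x2 children is solid, so the coarse object is a
    bounding volume of the finer one."""
    vol = vol.astype(bool)
    z, y, x = vol.shape
    # Pad odd dimensions so the volume splits evenly into 2x2x2 blocks.
    vol = np.pad(vol, ((0, z % 2), (0, y % 2), (0, x % 2)))
    z, y, x = vol.shape
    blocks = vol.reshape(z // 2, 2, y // 2, 2, x // 2, 2)
    return blocks.any(axis=(1, 3, 5))

def lod_sequence(vol: np.ndarray):
    """Full level-of-detail pyramid, finest to coarsest."""
    levels = [vol.astype(bool)]
    while max(levels[-1].shape) > 1:
        levels.append(or_downsample(levels[-1]))
    return levels

vol = np.zeros((8, 8, 8), dtype=bool)
vol[2:6, 2:6, 2:6] = True                    # a 4x4x4 solid cube
pyramid = lod_sequence(vol)
print([lvl.shape for lvl in pyramid])        # (8,8,8) -> (4,4,4) -> (2,2,2) -> (1,1,1)
```

Because the finest level is retained, the sequence is lossless, and a consumer (e.g. progressive transmission or collision detection) can stop refining at any level and still hold a conservative bound of the object.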

    Physiological system modelling

    Computer graphics has a major impact on our day-to-day life. It is used in diverse areas such as displaying the results of engineering and scientific computations, visualization, producing television commercials and feature films, simulation and analysis of real-world problems, computer-aided design, and graphical user interfaces that increase the communication bandwidth between humans and machines. Scientific visualization is a well-established method for the analysis of data originating from scientific computations, simulations or measurements. This report presents the development and implementation of the 3Dgen software, written by the author in C using OpenGL. 3Dgen was used to visualize three-dimensional cylindrical models such as pipes, and also, to a limited extent, for virtual endoscopy. Using the developed software, a model is created from centreline data entered by the user or taken from the output of another program, stored in a plain text file. The model is constructed by drawing surface polygons between each pair of adjacent centreline points. The software allows the user to view the internal and external surfaces of the model. It was designed to run on more than one operating system with minimal installation, and since it is very small it fits on a 1.44-megabyte floppy diskette. Depending on the processing speed of the PC, the software can generate models of any length and size. Compared to other packages, 3Dgen has minimal input procedures and is able to generate models with smooth bends. It has both modelling and virtual exploration features. For models with sharp bends, however, the software generates an overshoot.
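The tube-from-centreline construction described above can be sketched as follows (a hypothetical reimplementation of the idea, not 3Dgen's actual C code): a ring of vertices is placed around each centreline point in a plane perpendicular to the local tangent, and adjacent rings are joined with quads.

```python
import numpy as np

def tube_mesh(centreline, radius=1.0, segments=12):
    """Build a tube surface along a centreline: a ring of `segments`
    vertices around each point, adjacent rings joined with quads.
    Returns (vertices, quads), quads as vertex-index 4-tuples."""
    pts = np.asarray(centreline, dtype=float)
    verts, quads = [], []
    for i, p in enumerate(pts):
        # Tangent from neighbouring points (one-sided at the ends).
        t = pts[min(i + 1, len(pts) - 1)] - pts[max(i - 1, 0)]
        t /= np.linalg.norm(t)
        # Any vector not parallel to t yields a perpendicular frame (u, v).
        helper = (np.array([0.0, 0.0, 1.0]) if abs(t[2]) < 0.9
                  else np.array([1.0, 0.0, 0.0]))
        u = np.cross(t, helper); u /= np.linalg.norm(u)
        v = np.cross(t, u)
        for k in range(segments):
            ang = 2 * np.pi * k / segments
            verts.append(p + radius * (np.cos(ang) * u + np.sin(ang) * v))
    for i in range(len(pts) - 1):
        base, nxt = i * segments, (i + 1) * segments
        for k in range(segments):
            k2 = (k + 1) % segments
            quads.append((base + k, base + k2, nxt + k2, nxt + k))
    return np.array(verts), quads

V, Q = tube_mesh([(0, 0, 0), (0, 0, 5), (0, 0, 10)], radius=0.5)
print(len(V), len(Q))  # 36 24 (3 points x 12 segments; 2 x 12 quad strips)
```

A per-point frame chosen this way can twist between rings when the tangent changes abruptly, which is consistent with the overshoot the abstract reports at sharp bends; production tools typically propagate a rotation-minimizing frame instead.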

    Artificial general intelligence: Proceedings of the Second Conference on Artificial General Intelligence, AGI 2009, Arlington, Virginia, USA, March 6-9, 2009

    Artificial General Intelligence (AGI) research focuses on the original and ultimate goal of AI – to create broad human-like and transhuman intelligence – by exploring all available paths, including theoretical and experimental computer science, cognitive science, neuroscience, and innovative interdisciplinary methodologies. Due to the difficulty of this task, for the last few decades the majority of AI researchers have focused on what has been called narrow AI – the production of AI systems displaying intelligence regarding specific, highly constrained tasks. In recent years, however, more and more researchers have recognized the necessity – and feasibility – of returning to the original goals of the field. Increasingly, there is a call for a transition back to confronting the more difficult issues of human-level intelligence and, more broadly, artificial general intelligence.

    Point based graphics rendering with unified scalability solutions.

    Standard real-time 3D graphics rendering algorithms use brute-force polygon rendering, with complexity linear in the number of polygons and little regard for limiting processing to data that contributes to the image. Modern hardware can now render smaller scenes to pixel levels of detail, relaxing surface connectivity requirements. Sub-linear scalability optimizations are typically self-contained, requiring specific data structures, without shared functions and data. A new point-based rendering algorithm, 'Canopy', is investigated that combines multiple typically sub-linear scalability solutions using a small core of data structures. Specifically, locale management, hierarchical view volume culling, backface culling, occlusion culling, level of detail and depth ordering are addressed. To demonstrate versatility further, shadows and collision detection are examined. Polygon models are voxelized with interpolated attributes to provide points. A scene tree is constructed, based on a BSP tree of points, with compressed attributes. The scene tree is embedded in a compressed, partitioned, procedurally based scene graph architecture that mimics conventional systems with groups, instancing, inlines and basic read-on-demand rendering from backing store. Hierarchical scene tree refinement constructs an image-space equivalent, the image tree, with object-space scene node points projected to form image node equivalents. An image graph of image nodes is maintained, describing image- and object-space occlusion relationships, hierarchically refined in front-to-back order to a specified threshold while occlusion culling with occluder fusion. Visible nodes at medium levels of detail are refined further to rasterization scales. Occlusion culling defines a set of visible nodes that can support caching for temporal coherence. Occlusion culling is approximate, so it may not suit critical applications. Quality and performance are tested against standard rendering.
Although the algorithm has an O(f) upper bound in the scene size f, in practice it is shown to scale sub-linearly. Scenes that would conventionally contain several hundred billion polygons are rendered at interactive frame rates with minimal graphics hardware support.
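The front-to-back ordering that drives the occlusion-culling refinement can be illustrated with the classic BSP traversal (a toy sketch over axis-aligned splitting planes, not Canopy's compressed scene tree): at each split, the half-space containing the eye is visited before the far half-space, so nearer leaves are emitted first and a renderer could stop descending once a subtree is fully occluded.

```python
from dataclasses import dataclass
from typing import List, Optional, Tuple

@dataclass
class BSPNode:
    axis: int = 0          # splitting axis: 0=x, 1=y, 2=z
    offset: float = 0.0    # plane position along that axis
    points: Optional[List[Tuple[float, float, float]]] = None  # leaf payload
    front: Optional["BSPNode"] = None   # half-space with coord >= offset
    back: Optional["BSPNode"] = None    # half-space with coord < offset

def front_to_back(node, eye, out):
    """Append leaf points nearest-first: at each split, descend the
    half-space containing the eye before the far half-space."""
    if node is None:
        return
    if node.points is not None:         # leaf
        out.extend(node.points)
        return
    near, far = ((node.front, node.back)
                 if eye[node.axis] >= node.offset
                 else (node.back, node.front))
    front_to_back(near, eye, out)
    front_to_back(far, eye, out)        # an occlusion test could prune here

tree = BSPNode(axis=0, offset=0.0,
               back=BSPNode(points=[(-1.0, 0.0, 0.0)]),
               front=BSPNode(points=[(1.0, 0.0, 0.0)]))
order = []
front_to_back(tree, (5.0, 0.0, 0.0), order)
print(order)  # [(1.0, 0.0, 0.0), (-1.0, 0.0, 0.0)] -- near leaf first
```

Moving the eye to the other side of the plane reverses the order, which is exactly the property the hierarchical refinement relies on for occluder fusion.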