
    Multi-Material Mesh Representation of Anatomical Structures for Deep Brain Stimulation Planning

    The Dual Contouring algorithm (DC) is a grid-based process used to generate surface meshes from volumetric data. However, DC cannot guarantee 2-manifold and watertight meshes because it produces only one vertex for each grid cube. We present a modified Dual Contouring algorithm that overcomes this limitation. The proposed method decomposes an ambiguous grid cube into a set of tetrahedral cells and uses novel polygon generation rules that produce 2-manifold and watertight surface meshes with good-quality triangles. Being watertight and 2-manifold, these meshes are geometrically correct and can therefore be used to initialize tetrahedral meshes. The 2-manifold DC method has been extended into the multi-material domain. By its multi-material nature, a multi-material surface mesh contains non-manifold elements along material interfaces or shared boundaries. The proposed multi-material DC algorithm can (1) generate multi-material surface meshes where each material sub-mesh is a 2-manifold and watertight mesh, (2) preserve the non-manifold elements along the material interfaces, and (3) ensure that the material interface or shared boundary between materials is consistent. The proposed method is used to generate multi-material surface meshes of deep brain anatomical structures from a digital atlas of the basal ganglia and thalamus. Although deep brain anatomical structures can be labeled as functionally separate, they are in fact continuous tracts of soft tissue in close proximity to each other. The multi-material meshes generated by the proposed DC algorithm can accurately represent the closely-packed deep brain structures as a single mesh consisting of multiple material sub-meshes, where each sub-mesh represents a distinct functional structure of the brain. Printed and/or digital atlases are important tools for medical research and surgical intervention.
    While these atlases can provide guidance in identifying anatomical structures, they do not account for the wide variations in the shape and size of anatomical structures that occur from patient to patient. Accurate, patient-specific representations are especially important for surgical interventions like deep brain stimulation, where even small inaccuracies can result in dangerous complications. The last part of this research effort extends the discrete deformable 2-simplex mesh into the multi-material domain, where geometry-based internal forces and image-based external forces are used in the deformation process. This multi-material deformable framework is used to segment anatomical structures of the deep brain region from Magnetic Resonance (MR) data.
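The 2-manifold guarantee above hinges on where vertices are placed inside each cell. As background, the standard Dual Contouring step places one vertex per cell by minimizing a quadratic error function (QEF) over the cell's Hermite data (edge-intersection points and normals). A minimal 2D sketch of that classic one-vertex-per-cell step (not the authors' tetrahedral decomposition), with hypothetical inputs:

```python
def qef_vertex_2d(points, normals):
    """Place a Dual Contouring cell vertex by minimizing the QEF
    sum_i (n_i . (x - p_i))^2 via the 2x2 normal equations.
    points:  edge-intersection points (px, py) in the cell
    normals: surface normals (nx, ny) sampled at those points
    """
    a11 = a12 = a22 = b1 = b2 = 0.0
    for (px, py), (nx, ny) in zip(points, normals):
        d = nx * px + ny * py          # plane offset n . p
        a11 += nx * nx
        a12 += nx * ny
        a22 += ny * ny
        b1 += nx * d
        b2 += ny * d
    det = a11 * a22 - a12 * a12
    if abs(det) < 1e-12:               # degenerate normals: fall back to centroid
        cx = sum(p[0] for p in points) / len(points)
        cy = sum(p[1] for p in points) / len(points)
        return (cx, cy)
    # Cramer's rule on the 2x2 system
    return ((a22 * b1 - a12 * b2) / det,
            (a11 * b2 - a12 * b1) / det)
```

With two orthogonal hinge planes the QEF minimizer is their intersection, which is what makes DC reproduce sharp features that Marching Cubes rounds off.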

    Surface reconstruction for planning and navigation of liver resections

    Computer-assisted systems for planning and navigation of liver resection procedures rely on patient-specific 3D geometric models obtained from computed tomography. In this work, we propose the application of Poisson surface reconstruction (PSR) to obtain 3D models of the liver surface for planning and navigation of liver surgery. In order to apply PSR, we introduce an efficient transformation of the segmentation data based on the computation of gradient fields. One advantage of PSR is that it requires only one control parameter, allowing the process to be fully automatic once the optimal value is estimated. We validate our results by comparison with 3D models obtained by state-of-the-art Marching Cubes with Laplacian smoothing and decimation (MCSD). Our results show that PSR provides smooth liver models with a better accuracy/complexity trade-off than those obtained by MCSD. After estimating the optimal parameter, automatic reconstruction of liver surfaces using PSR is achieved with processing times similar to MCSD. Models from this automatic approach show an average reduction of 79.59% in polygon count compared to MCSD models with similar smoothness properties. Concerning visual quality, despite this reduction in polygons, clinicians perceive the quality of automatic PSR models to be the same as that of complex MCSD models; moreover, clinicians perceive a significant improvement in visual quality for automatic PSR models compared to optimal (in terms of accuracy/complexity) MCSD models. The median reconstruction error using automatic PSR was as low as 1.03 ± 0.23 mm, which makes the method suitable for clinical applications. Automatic PSR is currently employed at Oslo University Hospital to obtain patient-specific liver models in selected patients undergoing laparoscopic liver resection.
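PSR takes an oriented point cloud as input, and the abstract mentions transforming the segmentation data via gradient fields. As a hedged illustration of that general idea only (a 2D sketch with a hypothetical mask input, not the paper's implementation), boundary samples with outward normals can be read off a binary segmentation with central differences:

```python
import math

def boundary_samples(mask):
    """Extract (point, outward unit normal) samples from a binary mask.

    The central-difference gradient of the occupancy field points toward
    the interior (values increase from 0 outside to 1 inside), so the
    outward normal is the negated, normalized gradient.  2D for brevity;
    the same idea extends to 3D voxel segmentations.
    """
    h, w = len(mask), len(mask[0])
    samples = []
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            gx = (mask[y][x + 1] - mask[y][x - 1]) / 2.0
            gy = (mask[y + 1][x] - mask[y - 1][x]) / 2.0
            norm = math.hypot(gx, gy)
            if norm > 0.0:
                samples.append(((x, y), (-gx / norm, -gy / norm)))
    return samples
```

The resulting oriented samples are exactly the kind of input a Poisson solver fits an indicator function to.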

    Solid modelling for manufacturing: from Voelcker's boundary evaluation to discrete paradigms

    Herb Voelcker and his research team laid the foundations of Solid Modelling, on which Computer-Aided Design is based. He founded the ambitious Production Automation Project, which included Constructive Solid Geometry (CSG) as the basic 3D geometric representation. CSG trees were compact and robust, saving memory space that was scarce in those times. But the main computational problem was Boundary Evaluation: the process of converting CSG trees to Boundary Representations (BReps) with explicit faces, edges and vertices for manufacturing and visualization purposes. This paper presents some glimpses of the history and evolution of ideas that started with Herb Voelcker. We briefly describe the path from “localization and boundary evaluation” to “localization and printing”, with many intermediate steps driven by hardware, software and new mathematical tools: voxel and volume representations, triangle meshes, and many others, observing also that in some applications voxel models no longer require Boundary Evaluation. In this last case, we consider the current research challenges and discuss several avenues for further research. Project TIN2017-88515-C2-1-R funded by MCIN/AEI/10.13039/501100011033/FEDER “A way to make Europe”.

    Graph- and finite element-based total variation models for the inverse problem in diffuse optical tomography

    Total variation (TV) is a powerful regularization method that has been widely applied in different imaging applications, but it is difficult to apply to diffuse optical tomography (DOT) image reconstruction (the inverse problem) due to complex and unstructured geometries, the non-linearity of the data fitting and regularization terms, and the non-differentiability of the regularization term. We develop several approaches to overcome these difficulties by: i) defining discrete differential operators for unstructured geometries using both finite element and graph representations; ii) developing an optimization algorithm based on the alternating direction method of multipliers (ADMM) for the non-differentiable and non-linear minimization problem; iii) investigating isotropic and anisotropic variants of TV regularization, and comparing their finite element- and graph-based implementations. These approaches are evaluated in experiments on simulated data and on real data acquired from a tissue phantom. Our results show that both FEM- and graph-based TV regularization are able to accurately reconstruct both sparse and non-sparse distributions without the over-smoothing effect of Tikhonov regularization or the over-sparsifying effect of L1 regularization. The graph representation was found to outperform the FEM method for low-resolution meshes, while the FEM method was more accurate for high-resolution meshes. Comment: 24 pages, 11 figures. Revised version includes revised figures and improved clarity.
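To make the ADMM splitting concrete, a much-reduced sketch follows: TV denoising with a quadratic data term on a path graph. Everything here is a simplifying assumption for illustration; the paper's actual problem involves a non-linear DOT forward model and FEM/graph difference operators. ADMM alternates a linear x-update, a soft-thresholding z-update on the edge differences, and a dual update:

```python
def solve(A, b):
    """Gauss-Jordan elimination with partial pivoting for small dense systems."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(n):
            if r != col:
                f = M[r][col] / M[col][col]
                M[r] = [mr - f * mc for mr, mc in zip(M[r], M[col])]
    return [M[i][n] / M[i][i] for i in range(n)]

def tv_denoise_admm(y, lam, rho=1.0, iters=200):
    """Minimize 0.5*||x - y||^2 + lam*||D x||_1 by ADMM, where D is the
    forward-difference operator on a path graph: (Dx)_i = x[i+1] - x[i]."""
    n, m = len(y), len(y) - 1
    D = lambda x: [x[i + 1] - x[i] for i in range(m)]
    def Dt(v):                              # adjoint of D
        out = [0.0] * n
        for i, vi in enumerate(v):
            out[i] -= vi
            out[i + 1] += vi
        return out
    # A = I + rho * D^T D  (constant, so built once)
    A = [[0.0] * n for _ in range(n)]
    for i in range(n):
        A[i][i] = 1.0
    for i in range(m):
        A[i][i] += rho;     A[i + 1][i + 1] += rho
        A[i][i + 1] -= rho; A[i + 1][i] -= rho
    x, z, u = y[:], [0.0] * m, [0.0] * m
    soft = lambda a, t: (1.0 if a >= 0 else -1.0) * max(0.0, abs(a) - t)
    for _ in range(iters):
        rhs_v = Dt([zi - ui for zi, ui in zip(z, u)])
        x = solve(A, [yi + rho * ri for yi, ri in zip(y, rhs_v)])  # x-update
        dx = D(x)
        z = [soft(di + ui, lam / rho) for di, ui in zip(dx, u)]    # z-update
        u = [ui + di - zi for ui, di, zi in zip(u, dx, z)]         # dual update
    return x
```

Swapping the path-graph D for an arbitrary edge list gives the graph-TV variant; swapping it for FEM gradient operators gives the other formulation compared in the paper.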

    Three-dimensional modeling of natural heterogeneous objects

    In medicine and other related fields, when a natural object is going to be studied, computed tomography images are taken through several parallel slices. These slices are stacked into volume data and reconstructed into 3D computer models in order to study the structure of the object.
    In order to successfully build 3D computer models of natural heterogeneous objects, accurate identification and extraction of all regions comprising the natural heterogeneous object is important. However, building 3D computer models of natural heterogeneous objects from medical images is still a challenging problem, and poses two issues related to the inaccuracies which arise from, and are inherent to, the data acquisition process. The first issue is the appearance of aliasing artifacts on the boundary between regions, a common issue in mesh generation from medical images, also known as stair-stepped artifacts. The second issue is the extraction of smooth 3D multi-region meshes that conform to the region boundaries of natural heterogeneous objects described in the medical images. To solve these issues, the CAREM method and the RAM method are proposed. The emphasis of this research is placed on the accuracy and shape fidelity needed for biomedical applications. All implicitly represented regions composing the natural heterogeneous object are used to generate meshes adapted to the requirements of finite element methods through a reverse engineering modeling approach; these regions are thus considered as a whole rather than as loosely assembled parts.

    Developing Efficient High-Order Transport Schemes for Cross-Scale Coupled Estuary-Ocean Modeling

    Geophysical fluid dynamics (GFD) models have progressed greatly in simulating the world’s oceans and estuaries in the past three decades, thanks to the development of novel numerical algorithms and the advent of massively parallel high-performance computing platforms. The study of inter-related processes on multiple scales (e.g., between large-scale (remote) processes and small-scale (local) processes) has always been an important theme for GFD modeling. For this purpose, models based on unstructured grids (UG) have shown great potential because of their superior ability to enable multi-resolution and to fit geometry and boundaries. Despite UG models’ successful applications to coastal systems, significant obstacles still exist that have so far prevented UG models from realizing their full cross-scale capability. The pressing issues include the computational overhead resulting from large contrasts in spatial resolution, and the relative lack of skill of UG models in the eddying regime. Specifically for our own implicit UG model (SCHISM), the transport solver often emerges as a major bottleneck for both accuracy and efficiency. The overall goal of this dissertation is two-fold. The first goal is to address the challenges in tracer transport by developing efficient high-order schemes for the transport processes and testing them in the framework of a community-supported modeling system (SCHISM: Semi-implicit Cross-scale Hydroscience Integrated System Model) for cross-scale processes. The second goal is to utilize the new schemes developed in this dissertation and elsewhere to build a bona fide cross-scale Chesapeake Bay model and use it to address some key knowledge gaps in the physical processes of this system and to better assist decision makers in coastal resource management. The work on numerical scheme development has resulted in two new high-order transport solvers.
    The first solver tackles the vertical transport that often imposes the most stringent constraint on model efficiency (Chapter 2). With an implicit method and two flux limiters in both space and time, the new TVD2 solver leads to a speed-up of 1.6-6.0× in various cross-scale applications compared to traditional explicit methods, while achieving 2nd-order accuracy in both space and time. Together with a flexible vertical gridding system, flow over steep slopes can be simulated efficiently and accurately without altering the underlying bathymetry. The second scheme aims at improving model skill in the eddying ocean (Chapter 4). UG coastal models tend to under-resolve features like meso-scale eddies and meanders, an issue partially attributed to the numerical diffusion of transport schemes originally developed for estuarine applications. To address this issue, a 3rd-order transport scheme based on the WENO formulation is developed and is demonstrated to improve the meso-scale features. The new solvers are then tested in the Chesapeake Bay and the adjacent Atlantic Ocean on small, medium and large domains respectively, corresponding to the three main chapters of this dissertation (Chapters 2-4), with the ultimate goal of achieving a seamless cross-scale model from the Gulf Stream to the shallow regions in the Bay tributaries and sub-tributaries. We highlight the dominant role played by bathymetry in nearshore systems and the detrimental effects of the bathymetric smoothing commonly used in many coastal models (Chapter 3). With the new methods developed in this dissertation and elsewhere, the model has enabled analyses of some important processes that are hard to quantify with traditional techniques, e.g., the effect of channel-shoal contrast on lateral circulation and salinity distribution, hypoxia volume, and the influence of realistic bathymetry on the freshwater plume.
    Potential topics for future research are discussed at the end. In addition, the new solvers have also been successfully exported to many other oceanic and nearshore systems around the world via user groups of our community modeling system (cf. ‘Publications’ under ‘schism.wiki’).
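The TVD2 solver itself is implicit, with limiters in both space and time. As a simplified sketch of the flux-limiting idea only (explicit, 1-D, periodic, minmod limiter; this is the textbook scheme, not the SCHISM implementation), one total-variation-diminishing advection step looks like:

```python
def tvd_advect_step(u, c):
    """One explicit finite-volume step of du/dt + a*du/dx = 0 with a
    minmod-limited second-order flux.  c = a*dt/dx is the Courant
    number (0 < c <= 1), boundaries are periodic.  The limiter keeps
    the update total-variation diminishing: mass is conserved and no
    new extrema are created."""
    n = len(u)
    minmod = lambda r: max(0.0, min(1.0, r))
    flux = [0.0] * n                      # flux[i] stands for F_{i+1/2} / a
    for i in range(n):
        um, ui, up = u[i - 1], u[i], u[(i + 1) % n]
        du = up - ui
        # Smoothness ratio r = upwind slope / local slope; limiter phi(r)
        phi = 0.0 if du == 0.0 else minmod((ui - um) / du)
        flux[i] = ui + 0.5 * (1.0 - c) * phi * du
    return [u[i] - c * (flux[i] - flux[i - 1]) for i in range(n)]
```

Near discontinuities the limiter reverts to first-order upwinding (no oscillations); in smooth regions it restores second-order accuracy, which is the trade-off the TVD2 and WENO work in this dissertation refines.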

    Real-time quality visualization of medical models on commodity and mobile devices

    This thesis concerns the visualization of medical models using commodity and mobile devices. Mechanisms for medical image acquisition such as MRI, CT, and micro-CT scanners are continuously evolving, to the point of producing volume datasets of large resolutions (> 512^3). As these datasets grow in resolution, their treatment and visualization become more and more expensive due to their computational requirements. For this reason, special techniques such as data pre-processing (filtering, construction of multi-resolution structures, etc.) and sophisticated algorithms have to be introduced at different points of the visualization pipeline to achieve the best visual quality without compromising performance. The problem of managing big datasets comes from the fact that we have limited computational resources. Not long ago, the only physicians rendering volumes were radiologists. Nowadays, the outcome of diagnosis is the data itself, and medical doctors need to render it on commodity PCs (even patients may want to render the data, and the DVDs are commonly accompanied by DICOM viewer software). Furthermore, with the increasing use of technology in daily clinical tasks, small devices such as mobile phones and tablets can fit the needs of medical doctors in some specific areas. Visualizing diagnostic images of patients becomes more challenging when using these devices instead of desktop computers, as they generally have more restrictive hardware specifications. The goal of this Ph.D. thesis is the real-time, quality visualization of medium to large medical volume datasets (resolutions >= 512^3 voxels) on mobile phones and commodity devices. To address this problem, we use multiresolution techniques that apply downsampling to the full-resolution datasets to produce coarser representations that are easier to handle.
    We have focused our efforts on the application of Volume Visualization in clinical practice, so we have a particular interest in creating solutions that require short pre-processing times to quickly provide specialists with the data outcome, maximize the preservation of features and the visual quality of the final images, achieve high frame rates that allow interactive visualization, and make efficient use of computational resources. The contributions of this thesis comprise improvements in several stages of the visualization pipeline. The techniques we propose are located in the stages of multi-resolution structure generation, transfer function design, and the GPU ray casting algorithm itself.
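The multiresolution strategy described above builds coarser representations by downsampling. A minimal sketch of a pyramid built by 2x2x2 block averaging (a generic illustration; the thesis' actual feature-preserving construction is not reproduced here):

```python
def downsample_2x(vol):
    """Build one coarser pyramid level by averaging 2x2x2 voxel blocks.
    vol: nested lists indexed [z][y][x], with even dimensions."""
    nz, ny, nx = len(vol), len(vol[0]), len(vol[0][0])
    out = []
    for z in range(0, nz, 2):
        plane = []
        for y in range(0, ny, 2):
            row = []
            for x in range(0, nx, 2):
                s = sum(vol[z + dz][y + dy][x + dx]
                        for dz in (0, 1) for dy in (0, 1) for dx in (0, 1))
                row.append(s / 8.0)          # mean of the 8 voxels in the block
            plane.append(row)
        out.append(plane)
    return out

def build_pyramid(vol, levels):
    """Full-resolution volume plus `levels` progressively coarser copies."""
    pyramid = [vol]
    for _ in range(levels):
        vol = downsample_2x(vol)
        pyramid.append(vol)
    return pyramid
```

A renderer can then select the level whose memory footprint and sampling rate match the device, which is how a 512^3 dataset becomes tractable on a mobile GPU.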
