85 research outputs found

    The SHAPE Lab: New Technology and Software for Archaeologists


    Isosurfaces and level-set surface models

    This paper is a set of notes presenting the basic geometry of isosurfaces and the basic methods for using level sets to model deformable surfaces. It begins with a short introduction to isosurface geometry, including curvature, and continues with a short explanation of the level-set partial differential equations. It also gives some practical details on how to solve these equations using an up-wind scheme and sparse calculation methods. The paper then presents a series of examples of how level-set surface models are used to solve problems in graphics and vision, and finally shows example implementations using VISPack, an object-oriented C++ library for volume processing and level-set surface modeling.
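
    To make the up-wind update concrete, here is a minimal NumPy sketch of one explicit time step of the level-set equation phi_t + F|grad(phi)| = 0 using Godunov's upwind scheme. It is an illustration only, not based on VISPack; the grid, speed F and time step are assumptions chosen for the example.

```python
import numpy as np

def upwind_step(phi, F, dt, h=1.0):
    """One explicit step of phi_t + F * |grad(phi)| = 0 with Godunov's upwind scheme.

    phi : 2D array of level-set values; F : scalar speed (F > 0 moves the front
    outward for a signed-distance phi).  Borders are periodic via np.roll,
    which is adequate for a sketch.
    """
    # One-sided (backward D- and forward D+) differences in x and y.
    Dmx = (phi - np.roll(phi, 1, axis=0)) / h
    Dpx = (np.roll(phi, -1, axis=0) - phi) / h
    Dmy = (phi - np.roll(phi, 1, axis=1)) / h
    Dpy = (np.roll(phi, -1, axis=1) - phi) / h

    # Godunov switches: pick the one-sided differences consistent with the
    # direction of information flow, which depends on the sign of F.
    grad_plus = np.sqrt(np.maximum(Dmx, 0.0)**2 + np.minimum(Dpx, 0.0)**2 +
                        np.maximum(Dmy, 0.0)**2 + np.minimum(Dpy, 0.0)**2)
    grad_minus = np.sqrt(np.minimum(Dmx, 0.0)**2 + np.maximum(Dpx, 0.0)**2 +
                         np.minimum(Dmy, 0.0)**2 + np.maximum(Dpy, 0.0)**2)
    return phi - dt * (max(F, 0.0) * grad_plus + min(F, 0.0) * grad_minus)

# Example: expand the zero level set (a circle of radius 0.5) outward with unit speed.
x, y = np.meshgrid(np.linspace(-1, 1, 128), np.linspace(-1, 1, 128), indexing="ij")
h = x[1, 0] - x[0, 0]                          # grid spacing
phi = np.sqrt(x**2 + y**2) - 0.5               # signed distance to the circle
for _ in range(50):
    phi = upwind_step(phi, F=1.0, dt=0.005, h=h)   # dt chosen below the CFL limit h/|F|
```

    In practice the same update is restricted to a thin band of voxels around the zero level set, which is what the sparse calculation methods mentioned above provide and what makes volume-scale models tractable.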

    Computational Topology Methods for Shape Modelling Applications

    This thesis deals with computational topology, a recent branch of research that involves both mathematics and computer science, and tackles the problem of discretizing Morse theory for functions defined on a triangle mesh. The application context of Morse theory in general, and of Reeb graphs in particular, is the analysis of geometric shapes and the extraction of skeletal structures that synthetically represent a shape while preserving its topological properties and main morphological characteristics.

    In Computer Graphics, a shape, that is, a one-, two- or higher-dimensional connected, compact space having a visual appearance, is typically approximated by a digital model. Since topology focuses on the qualitative properties of spaces, such as connectedness and the number and type of holes, it is the natural tool for describing the shape of a mathematical model at a high level of abstraction. Geometry, conversely, is mainly related to the quantitative characteristics of a shape. The combination of topology and geometry thus yields a new generation of tools that provide a computational description of the most representative features of a shape along with their relationships. Extracting qualitative information, that is, information related to the semantics of the shape and its morphological structure, from discrete models is a central goal in shape modeling.

    In this thesis a conceptual model is proposed that represents a given surface through a topological coding which defines a sketch of the surface, discarding irrelevant details and classifying its topological type. The approach is based on Morse theory and Reeb graphs, which provide a very useful shape-abstraction method for analyzing and structuring the information contained in the geometry of the discrete shape model. To fully develop the method, both theoretical and computational aspects have been considered, related to the definition and the extension of the Reeb graph to the discrete domain. For the definition and automatic construction of the conceptual model, a new method has been developed that analyzes and characterizes a triangle mesh with respect to the behavior of a real-valued, at least continuous, function defined on the mesh. The proposed solution also handles degenerate critical points, such as non-isolated critical points. To do so, the surface model is characterized using a contour-based strategy that recognizes critical areas instead of critical points and codes the evolution of the contour levels in a graph-like structure, named the Extended Reeb Graph (ERG), a high-level abstract model suitable for representing and manipulating piecewise-linear surfaces. The descriptive power of the ERG has also been augmented by adding geometric information to the topological one, and the relation between the extracted topological and morphological features and the actual characteristics of the surface has been studied, giving an evaluation of the size of the discarded details. Finally, the effectiveness of the description framework has been evaluated in several application contexts.
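
    As a concrete entry point into the discrete setting, the sketch below classifies the vertices of a closed triangle mesh as minima, maxima, saddles or regular points of a scalar function by counting connected components of the lower and upper link, the standard piecewise-linear analogue of Morse critical points. This is a hedged illustration only, not the thesis's contour-based ERG construction, which works with critical areas rather than isolated critical points; the mesh representation and function are assumptions.

```python
import numpy as np
from collections import defaultdict

def classify_vertices(n_vertices, faces, f):
    """Classify mesh vertices as piecewise-linear critical points of f.

    faces : iterable of vertex-index triples of a closed triangle mesh
    f     : per-vertex function values (ties broken by vertex index)
    For each vertex v, the link is the set of edges opposite v in its incident
    triangles.  Counting connected components of the lower / upper link gives:
      empty lower link          -> minimum
      empty upper link          -> maximum
      one lower and one upper   -> regular
      two or more lower         -> saddle
    """
    link_edges = defaultdict(list)                 # v -> edges opposite v
    for a, b, c in faces:
        link_edges[a].append((b, c))
        link_edges[b].append((a, c))
        link_edges[c].append((a, b))

    def below(v, w):                               # total order with tie-break
        return (f[w], w) < (f[v], v)

    def n_components(v, keep):
        """Connected components of the kept part of v's link (union-find)."""
        parent = {}
        def find(x):
            while parent[x] != x:
                parent[x] = parent[parent[x]]
                x = parent[x]
            return x
        for a, b in link_edges[v]:
            for w in (a, b):
                if keep(w) and w not in parent:
                    parent[w] = w
            if keep(a) and keep(b):
                parent[find(a)] = find(b)
        return len({find(x) for x in parent})

    labels = []
    for v in range(n_vertices):
        lower = n_components(v, lambda w: below(v, w))
        upper = n_components(v, lambda w: not below(v, w))
        if lower == 0:
            labels.append("minimum")
        elif upper == 0:
            labels.append("maximum")
        elif lower == 1 and upper == 1:
            labels.append("regular")
        else:
            labels.append("saddle")
    return labels
```

    On a torus mesh with f taken as the height coordinate, for example, this classification yields one minimum, one maximum and two saddles, the usual starting point from which Reeb-graph-style structures are built.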

    Computational modelling of the human heart and multiscale simulation of its electrophysiological activity aimed at the treatment of cardiac arrhythmias related to ischaemia and Infarction

    Cardiovascular diseases represent the main cause of morbidity and mortality worldwide, causing around 18 million deaths every year. Among these diseases, the most common is ischaemic heart disease, usually referred to as myocardial infarction (MI). After surviving an MI, a considerable number of patients develop life-threatening ventricular tachycardias (VT) during the chronic stage of the MI, that is, weeks, months or even years after the initial acute phase. This particular type of VT is typically sustained by reentry through slow conducting channels (CC), which are filaments of surviving myocardium that cross the non-conducting fibrotic infarct scar. When anti-arrhythmic drugs are unable to prevent recurrent VT episodes, radiofrequency ablation (RFA), a minimally invasive procedure performed by catheterization in the electrophysiology (EP) laboratory, is commonly used to permanently interrupt the electrical conduction through the CCs responsible for the VT. However, besides being invasive, risky and time-consuming, in cases of VT related to chronic MI up to 50% of patients continue to suffer recurrent VT episodes after the RFA procedure. Therefore, there is a need to develop novel pre-procedural strategies to improve RFA planning and thereby increase this relatively low success rate.

    First, we conducted an exhaustive review of the literature on existing 3D cardiac models in order to gain a deep knowledge of their main features and the methods used for their construction, with special focus on models oriented to simulation of cardiac EP. Then, using a clinical dataset of a chronically infarcted patient with a history of infarct-related VT, we designed and implemented a number of strategies and methodologies to (1) build patient-specific 3D computational models of infarcted ventricles that can be used to perform simulations of cardiac EP at the organ level, including the infarct scar and the surrounding region known as the border zone (BZ); (2) construct 3D torso models that enable computation of the simulated ECG; and (3) carry out pre-procedural, personalized in-silico EP studies that try to replicate the actual EP studies conducted in the EP laboratory prior to ablation. The goal of these methodologies is to locate the CCs in the 3D ventricular model in order to help define the optimal ablation targets for the RFA procedure. Lastly, as a proof of concept, we performed a retrospective simulation case study in which we were able to induce an infarct-related reentrant VT using different modelling configurations for the BZ.

    We validated our results by reproducing, with reasonable accuracy, the patient's ECG during VT, as well as in sinus rhythm; in the latter case, validation was based on the endocardial activation maps invasively recorded with electroanatomical mapping systems. This allowed us to find the location and analyse the features of the CC responsible for the clinical VT. Importantly, such an in-silico EP study could have been conducted prior to the RFA procedure, since our approach is based entirely on non-invasive clinical data acquired before the real intervention. These results confirm the feasibility of performing useful pre-procedural, personalized in-silico EP studies, as well as the potential of the proposed approach to become, in the future, a helpful tool for RFA planning in cases of infarct-related reentrant VT. Nevertheless, the developed methodology requires further improvements and validation by means of simulation studies including large cohorts of patients.

    During the development of this doctoral thesis, the author, Alejandro Daniel López Pérez, was financially supported by the Ministerio de Economía, Industria y Competitividad of Spain through the programme Ayudas para contratos predoctorales para la formación de doctores, grant number BES-2013-064089.

    López Pérez, AD. (2019). Computational modelling of the human heart and multiscale simulation of its electrophysiological activity aimed at the treatment of cardiac arrhythmias related to ischaemia and Infarction [Doctoral thesis]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/124973
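
    For readers unfamiliar with organ-level EP simulation, the toy sketch below runs a 2D reaction-diffusion model with FitzHugh-Nagumo-like kinetics in which a non-conducting patch crossed by a slow-conducting channel loosely mimics an infarct scar and its border-zone conducting channel. Every element of it (geometry, kinetics, parameters, boundary handling) is an illustrative assumption; the thesis itself uses patient-specific 3D ventricular and torso models with detailed ionic models, which this sketch does not attempt to reproduce.

```python
import numpy as np

# Toy 2D reaction-diffusion model with FitzHugh-Nagumo-like kinetics.
# A non-conducting block with a narrow slow-conducting channel stands in,
# very loosely, for an infarct scar crossed by a border-zone conducting
# channel.  All values are illustrative, not patient-specific.
N, dt, dx = 200, 0.05, 1.0
a, b, eps = 0.1, 0.5, 0.01

D = np.full((N, N), 1.0)          # local "conductivity" (diffusion coefficient)
D[80:120, 40:160] = 0.0           # scar: no conduction
D[98:102, 40:160] = 0.1           # slow-conducting channel through the scar

u = np.zeros((N, N))              # excitation variable (voltage-like)
w = np.zeros((N, N))              # recovery variable
u[:, :5] = 1.0                    # planar stimulus along the left edge

def laplacian(x):
    # 5-point Laplacian with periodic borders (np.roll), adequate for a sketch.
    return (np.roll(x, 1, 0) + np.roll(x, -1, 0) +
            np.roll(x, 1, 1) + np.roll(x, -1, 1) - 4.0 * x) / dx**2

for step in range(4000):
    # Heterogeneous diffusion is handled crudely as D * laplacian(u); a proper
    # monodomain model would discretize div(D grad u) and use ionic kinetics.
    du = D * laplacian(u) + u * (1.0 - u) * (u - a) - w
    dw = eps * (u - b * w)
    u, w = u + dt * du, w + dt * dw
```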

    Skeletonization and segmentation of binary voxel shapes

    Preface. This dissertation is the result of research that I conducted between January 2005 and December 2008 in the Visualization research group of the Technische Universiteit Eindhoven. I am pleased to have the opportunity to thank a number of people who made this work possible. I owe my sincere gratitude to Alexandru Telea, my supervisor and first promotor. I did not consider pursuing a PhD until my Master's project, which he also supervised. Thanks to our pleasant collaboration, from which I learned quite a lot, I became convinced that becoming a doctoral student would be the right thing for me; indeed, it has greatly increased my knowledge and professional skills. Alex, thank you for our interesting discussions and the freedom you gave me in conducting my research. You made these four years a pleasant experience. I am further grateful to Jack van Wijk, my second promotor. Our monthly discussions were insightful, and he continuously encouraged me to take a more formal and scientific stance. I would also like to thank Prof. Jan de Graaf from the department of mathematics for our discussions on some of my conjectures; his mathematical rigor was inspiring. I am greatly indebted to the Netherlands Organisation for Scientific Research (NWO) for funding my PhD project (grant number 612.065.414). I thank Prof. Kaleem Siddiqi, Prof. Mark de Berg, and Dr. Remco Veltkamp for taking part in the core doctoral committee, and Prof. Deborah Silver and Prof. Jos Roerdink for participating in the extended committee. Our Visualization group provides a great atmosphere to do research in. In particular, I would like to thank my fellow doctoral students Frank van Ham, Hannes Pretorius, Lucian Voinea, Danny Holten, Koray Duhbaci, Yedendra Shrinivasan, Jing Li, Niels Willems, and Romain Bourqui. They enabled me to take my mind off research from time to time by discussing political and economic affairs, and more trivial topics. Furthermore, I would like to thank the senior researchers of our group, Huub van de Wetering, Kees Huizing, and Michel Westenberg. In particular, I thank Andrei Jalba for our fruitful collaboration in the last part of my work. On a personal level, I would like to thank my parents and sister for their love and support over the years, my friends for providing distractions outside of the office, and Michelle for her unconditional love and ability to light up my mood when needed.

    Detection and elimination of rock face vegetation from terrestrial LIDAR data using the virtual articulating conical probe algorithm

    A common use of terrestrial lidar is to conduct studies involving change detection of natural or engineered surfaces. Change detection involves many technical steps beyond the initial data acquisition: data structuring, registration, and elimination of data artifacts such as parallax errors, near-field obstructions, and vegetation. Of these, vegetation detection and elimination with terrestrial lidar scanning (TLS) presents a completely different set of issues compared with vegetation elimination from aerial lidar scanning (ALS). With ALS, the ground footprint of the lidar laser beam is very large, and the data acquisition hardware supports multi-return waveforms; moreover, the underlying surface topography is relatively smooth compared to the overlying vegetation, which has a high spatial frequency. With most TLS systems, on the other hand, the width of the lidar laser beam is very small, and the data acquisition hardware supports only first-return signals. When vegetation covers a rock face, the underlying rock surface is not smooth: rock joints and sharp block edges have a high spatial frequency very similar to that of the overlying vegetation. Traditional ALS approaches to vegetation elimination take advantage of the contrast in spatial frequency between the underlying ground surface and the overlying vegetation. When the ALS approach is used on vegetated rock faces, the algorithm, as expected, eliminates the vegetation, but it also digitally erodes the sharp corners of the underlying rock. A new method that analyzes the slope of a surface along with relative depth and contiguity information is proposed as a way of differentiating high-spatial-frequency vegetative cover from similarly high-spatial-frequency rock surfaces. This method, named the Virtual Articulating Conical Probe (VACP) algorithm, offers a solution for detecting and eliminating rock face vegetation from TLS point cloud data without affecting the geometry of the underlying rock surface. Such a tool could prove invaluable to the geotechnical engineer for quantifying rates of vertical-face rock loss that impact civil infrastructure safety.
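
    The VACP algorithm itself is only outlined above, so the following sketch illustrates just the underlying idea of combining local slope (planarity) with relative depth to separate vegetation from rock in a TLS point cloud. The function name, neighbourhood radius and thresholds are assumptions, and contiguity analysis, a key part of the actual method, is omitted.

```python
import numpy as np
from scipy.spatial import cKDTree

def flag_vegetation(points, radius=0.15, max_offset=0.05, max_thickness=0.02):
    """Crude vegetation flag for a TLS point cloud (points: N x 3 array, metres).

    For each point, fit a plane to its neighbourhood by PCA and test
    (a) how "thick" (non-planar) the neighbourhood is, and
    (b) how far the point sits off the local plane.
    Rock faces, even with sharp joints, tend to give thin, locally planar
    patches; vegetation gives fluffy neighbourhoods and offset points.
    Thresholds and the neighbourhood radius are illustrative only.
    """
    tree = cKDTree(points)
    is_veg = np.zeros(len(points), dtype=bool)
    for i, p in enumerate(points):
        idx = tree.query_ball_point(p, radius)
        if len(idx) < 8:
            is_veg[i] = True                      # isolated returns: likely vegetation
            continue
        nbrs = points[idx]
        centred = nbrs - nbrs.mean(axis=0)
        evals, evecs = np.linalg.eigh(centred.T @ centred / len(nbrs))
        normal = evecs[:, 0]                      # direction of least variance
        thickness = np.sqrt(max(evals[0], 0.0))   # RMS spread off the local plane
        offset = abs((p - nbrs.mean(axis=0)) @ normal)
        is_veg[i] = (thickness > max_thickness) or (offset > max_offset)
    return is_veg
```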

    Enrichment of a 3D building model with windows using oblique-view ALS and façade textures

    A wide range of applications using 3D building models exists, such as computer games, city marketing, disaster management, tourist information systems, simulations of noise propagation and surveillance of sustainable construction. Complete acquisition of large urban scenes has become feasible using multi-aspect oblique-view ALS; however, automated generation of detailed 3D models, the main focus of this thesis, still poses a significant challenge.

    To enable enrichment of a 3D building model with windows, the 3D wire-frame building model and the ALS point cloud are first automatically co-registered. The novel approach to window extraction presented in this thesis exploits evidence about window positions in the processed oblique-view ALS point cloud and in façade image textures. The laser beam penetrates glassy window areas, so points found behind a segmented façade plane, projected onto that plane, give reliable evidence about intrusion positions. On the other hand, high gradient values in a texture are usually due to window frames. These two facts are exploited when extracting initial window patches. Additionally, binary masks, obtained by region growing of homogeneous parts of the façade textures, are used to eliminate certain façade artefacts and to improve the shape of the window patches. The assumption that many windows of the same kind lie on the same floor is used for the refinement procedure. First, façade textures are divided into horizontal blocks representing floors. Second, a search for non-similar window patch templates is performed within each block. Third, to obtain additional window patch positions, the chosen templates are cross-correlated along the respective block.

    Eleven façade planes of an existing 3D wire-frame building model are textured with the extracted patches, representing windows and other intrusions. Despite different arrangements of windows, varying window sizes, and a relatively strict evaluation method, the method achieves a 63% detection rate. Moreover, the method is mostly data-driven, and the detection rate outperforms the method using only oblique-view ALS (Tuttas & Stilla, 2013). The windows are well defined, since most window patches are based on connected components of edges belonging to window frames.
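
    The sketch below illustrates how the two kinds of evidence described above, laser points behind the façade plane and strong image gradients from window frames, could be fused on a façade grid. The grid parametrization, function name and thresholds are assumptions; the thesis's full method additionally uses region-growing masks and floor-wise template cross-correlation, which are not shown.

```python
import numpy as np
from scipy import ndimage

def window_evidence(points, plane_point, normal, u_axis, v_axis,
                    texture, cell=0.1, depth_thresh=0.2):
    """Fuse laser-intrusion and image-gradient evidence on a facade grid.

    points              : N x 3 ALS points already assigned to this facade
    plane_point, normal : a point on the segmented facade plane and its unit normal
    u_axis, v_axis      : unit vectors spanning the plane (define the grid axes)
    texture             : greyscale facade texture registered to the same grid
    Returns a boolean grid that is True where points lie behind the plane
    (laser went through the glass) AND strong image gradients (window frames)
    are present.  Names, grid mapping and thresholds are illustrative.
    """
    rel = points - plane_point
    depth = rel @ normal                          # signed distance to the facade plane
    h, w = texture.shape
    iu = np.clip((rel @ u_axis / cell).astype(int), 0, w - 1)
    iv = np.clip((rel @ v_axis / cell).astype(int), 0, h - 1)

    intrusion = np.zeros((h, w), dtype=bool)
    behind = depth < -depth_thresh
    intrusion[iv[behind], iu[behind]] = True

    grad = np.hypot(ndimage.sobel(texture, axis=0), ndimage.sobel(texture, axis=1))
    frames = grad > np.percentile(grad, 90)       # strong edges: candidate window frames
    frames = ndimage.binary_dilation(frames, iterations=2)

    return intrusion & frames
```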

    Heritage documentation techniques and methods

    The methodology notebook series "Heritage documentation techniques and methods" contains:
    • 3D modelling, digital photography and information dissemination
    • Creation of 3D models by using scanners
    • Low-cost desktop scanner
    • Photography notes: Exposure
    • Photography notes: Focal length, lenses and cross-polarization
    • White adjustment and colour calibration
    • Image-Based Modelling Systems
    • Focus stacking technique
    • Rollout photography and DStretch filter
    • Information dissemination
    • 3D diagram blocks
    • Simple animations of 3D models
    This series of notebooks aims to describe a set of techniques used mainly to construct and document three-dimensional (3D) models and high-resolution photographs of archaeological objects. These techniques make it possible to build models with verified metric quality, calibrated colour and high resolution, which are disseminated on the Internet using various platforms and web services. Part of the production of these notebooks was funded through project GR18028 (research group RNM026), co-financed by the European Regional Development Fund (FEDER) and the Government of Extremadura.

    Computational processing and analysis of ear images

    Master's thesis. Biomedical Engineering. Faculdade de Engenharia, Universidade do Porto. 201