
    Algorithms for the reconstruction, analysis, repairing and enhancement of 3D urban models from multiple data sources

    Over the last few years, there has been remarkable growth in the digitization of 3D buildings and urban environments. Substantial improvements in both scanning hardware and reconstruction algorithms have led to representations of buildings and cities that can be transmitted remotely and inspected in real-time. Applications that implement these technologies include several GPS navigators and virtual globes such as Google Earth, as well as the tools provided by the Institut Cartogràfic i Geològic de Catalunya. In this thesis, we conceptualize cities as collections of individual buildings; hence, we focus on processing one structure at a time rather than on the larger-scale processing of urban environments. Nowadays, there is a wide diversity of digitization technologies, and choosing the appropriate one is key for each particular application. Roughly, these techniques can be grouped into three main families:
    - Time-of-flight (terrestrial and aerial LiDAR).
    - Photogrammetry (street-level, satellite, and aerial imagery).
    - Human-edited vector data (cadastre and other map sources).
    Each of these has its advantages in terms of covered area, data quality, economic cost, and processing effort. Plane- and car-mounted LiDAR devices are optimal for sweeping large areas, but acquiring and calibrating such devices is not a trivial task. Moreover, capture proceeds by scan lines, which must be registered using GPS and inertial data. As an alternative, terrestrial LiDAR devices are more accessible but cover smaller areas, and their sampling strategy usually produces massive point clouds with over-represented planar regions. A more inexpensive option is street-level imagery: a dense set of images captured with a commodity camera can be fed to state-of-the-art multi-view stereo algorithms to produce sufficiently realistic reconstructions.
Another advantage of this approach is that it captures high-quality color data, although the resulting geometric information is usually of lower quality. In this thesis, we analyze in depth some of the shortcomings of these data-acquisition methods and propose new ways to overcome them. We focus mainly on the technologies that allow high-quality digitization of individual buildings: terrestrial LiDAR for geometric information and street-level imagery for color information. Our main goal is the processing and completion of detailed 3D urban representations. For this, we work with multiple data sources and combine them when possible to produce models that can be inspected in real-time. Our research has focused on the following contributions:
    - Effective and feature-preserving simplification of massive point clouds.
    - Normal estimation algorithms explicitly designed for LiDAR data.
    - Low-stretch panoramic representation for point clouds.
    - Semantic analysis of street-level imagery for improved multi-view stereo reconstruction.
    - Color improvement through heuristic techniques and the registration of LiDAR and imagery data.
    - Efficient and faithful visualization of massive point clouds using image-based techniques.
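The first contribution listed above is feature-preserving simplification of massive point clouds, which addresses the over-represented planar regions produced by terrestrial LiDAR. As an illustrative baseline only (not the method developed in the thesis), a uniform voxel-grid downsampling in NumPy shows the basic idea of collapsing dense regions to a bounded number of representatives; the function name and cell size are assumptions:

```python
import numpy as np

def voxel_downsample(points, cell_size):
    """Reduce a point cloud by keeping one centroid per occupied voxel.

    points: (N, 3) float array of coordinates.
    cell_size: edge length of the cubic voxel grid.
    Returns an (M, 3) array of centroids, M <= N.
    """
    # Map each point to an integer voxel index.
    keys = np.floor(points / cell_size).astype(np.int64)
    # Group points sharing a voxel and average them.
    _, inverse, counts = np.unique(keys, axis=0,
                                   return_inverse=True, return_counts=True)
    sums = np.zeros((counts.size, 3))
    np.add.at(sums, inverse, points)   # accumulate per-voxel coordinate sums
    return sums / counts[:, None]      # per-voxel centroids

# Example: 1000 random points in the unit cube collapse to one point
# per occupied 0.2-sized voxel (at most 5^3 = 125 representatives).
rng = np.random.default_rng(0)
cloud = rng.random((1000, 3))
reduced = voxel_downsample(cloud, 0.2)
```

A feature-preserving variant would adapt the cell size or keep extra samples near detected edges rather than averaging uniformly, which is the harder problem the thesis targets.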

    Automated Vascular Smooth Muscle Segmentation, Reconstruction, Classification and Simulation on Whole-Slide Histology

    Histology of the microvasculature depicts detailed characteristics relevant to tissue perfusion. One important histologic feature is the smooth muscle component of the microvessel wall, which is responsible for controlling vessel caliber. Abnormalities can cause disease and organ failure, as seen in hypertensive retinopathy, diabetic ischemia, Alzheimer’s disease and improper cardiovascular development. However, assessments of smooth muscle cell content are conventionally performed on selected fields of view on 2D sections, which may lead to measurement bias. We have developed a software platform for automated (1) 3D vascular reconstruction, (2) detection and segmentation of muscularized microvessels, (3) classification of vascular subtypes, and (4) simulation of function through blood flow modeling. Vessels were stained for α-actin using 3,3'-Diaminobenzidine, assessing both normal (n=9 mice) and regenerated vasculature (n=5 at day 14, n=4 at day 28). 2D locally adaptive segmentation involved vessel detection, skeletonization, and fragment connection. 3D reconstruction was performed using our novel nucleus landmark-based registration. Arterioles and venules were categorized using supervised machine learning based on texture and morphometry. Simulation of blood flow for the normal and regenerated vasculature was performed at baseline and during demand, based on the structural measures obtained from the above tools. Vessel medial area and vessel wall thickness were found to be greater in the normal vasculature than in the regenerated vasculature (p<0.001), and a higher density of arterioles was found in the regenerated tissue (p<0.05). Validation showed a Dice coefficient of 0.88 (compared to manual) for the segmentations, a 3D reconstruction target registration error of 4 μm, and an area under the receiver operating characteristic curve of 0.89 for vessel classification.
Compared to the normal vasculature, blood flow through the regenerated network during increased oxygen demand was reduced by 89% at 14 days and 67% at 28 days post-ischemia. We developed a software platform for automated vascular histology analysis involving 3D reconstruction, segmentation, and arteriole vs. venule classification. This work advanced our understanding of the bias of conventional histology sampling relative to whole-slide analysis, of the morphological and density differences in the regenerated vasculature, and of the effect of those differences on blood flow and function.
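The validation above reports a Dice coefficient of 0.88 against manual segmentation. For readers unfamiliar with the metric, a minimal sketch of how a Dice score is computed on two binary masks (illustrative only; not the platform's code, and the toy masks are assumptions):

```python
import numpy as np

def dice_coefficient(mask_a, mask_b):
    """Dice similarity between binary masks: 2|A ∩ B| / (|A| + |B|)."""
    a = np.asarray(mask_a, dtype=bool)
    b = np.asarray(mask_b, dtype=bool)
    intersection = np.logical_and(a, b).sum()
    total = a.sum() + b.sum()
    # Two empty masks are defined here as perfectly matching.
    return 2.0 * intersection / total if total else 1.0

# Two overlapping toy segmentations of a 10x10 image, 36 pixels each.
auto = np.zeros((10, 10), dtype=bool)
auto[2:8, 2:8] = True
manual = np.zeros((10, 10), dtype=bool)
manual[3:9, 3:9] = True
score = dice_coefficient(auto, manual)   # intersection is 25 px -> 50/72
```

A Dice score of 1.0 means perfect overlap and 0.0 means none, so 0.88 indicates close agreement with the manual reference.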

    Aiding the conservation of two wooden Buddhist sculptures with 3D imaging and spectroscopic techniques

    The conservation of Buddhist sculptures that were transferred to Europe at some point during their lifetime raises numerous questions: while these objects historically served a religious, devotional purpose, many of them currently belong to museums or private collections, where they are detached from their original context and often adapted to western taste. A scientific study was carried out to address questions from curators at the Museo d'Arte Orientale of Turin as to whether these artifacts might be forgeries or replicas, and how they may have been transformed over time. Several analytical techniques were used to identify the materials and study the production technique, ultimately aiming to distinguish the original materials from those added in later interventions.

    Digital Techniques for Documenting and Preserving Cultural Heritage

    In this unique collection the authors present a wide range of interdisciplinary methods to study, document, and conserve material cultural heritage. The methods used serve as exemplars of best practice with a wide variety of cultural heritage objects having been recorded, examined, and visualised. The objects range in date, scale, materials, and state of preservation and so pose different research questions and challenges for digitization, conservation, and ontological representation of knowledge. Heritage science and specialist digital technologies are presented in a way approachable to non-scientists, while a separate technical section provides details of methods and techniques, alongside examples of notable applications of spatial and spectral documentation of material cultural heritage, with selected literature and identification of future research. This book is an outcome of interdisciplinary research and debates conducted by the participants of the COST Action TD1201, Colour and Space in Cultural Heritage, 2012–16 and is an Open Access publication available under a CC BY-NC-ND licence.

    Digital Techniques for Documenting and Preserving Cultural Heritage

    This book presents interdisciplinary approaches to the examination and documentation of material cultural heritage, using non-invasive spatial and spectral optical technologies.

    3D Quantification and Description of the Developing Zebrafish Cranial Vasculature

    Background: Zebrafish are an excellent model for studying cardiovascular development and disease. Transgenic reporter lines and state-of-the-art microscopy allow 3D visualization of the vasculature in vivo. Previous studies relied on subjective visual interpretation of vascular topology without objective quantification; thus, there is a need for analysis approaches that model and quantify the zebrafish vasculature to understand the effects of development, genetic manipulation, or drug treatment. Aim: To establish an image analysis pipeline to extract quantitative 3D parameters describing the shape and topology of the zebrafish vasculature, and to examine how these are affected during development and disease, and by chemicals. Methods: Experiments were performed in zebrafish embryos, conforming with UK Home Office regulations. Image acquisition of transgenic zebrafish was performed using a Zeiss Z.1 light-sheet fluorescence microscope. Pre-processing, enhancement, registration, segmentation, and quantification methods were developed and optimised using the open-source software Fiji (1.51p; National Institutes of Health, Bethesda, USA). Results: Motion correction was successfully applied using the Scale Invariant Feature Transform (SIFT), and vascular enhancement based on vessel tubularity (Sato filter) outperformed general-purpose filters. Following evaluation and optimisation of a variety of segmentation methods, intensity-based segmentation (Otsu thresholding) was found to be the most reliable, allowing 3D vascular volume measurement. Following successful segmentation of the cerebral vasculature, a workflow to quantify left-right intra-sample symmetry was developed, finding no difference from 2 to 5 days post-fertilization (dpf). Next, the first vascular inter-sample registration, using a manual landmark-based approach, was developed, and conjugate direction search was found to enable automatic inter-sample registration.
This enabled extraction of age-specific regions of similarity and variability between individual embryos from 2 to 5 dpf. A workflow was developed to quantify vascular network length, branching points, diameter, and complexity, showing reductions in zebrafish without blood flow. I also discovered and characterised a previously undescribed endothelial cell membrane behaviour termed kugeln. Conclusion: A workflow was developed that successfully extracts the zebrafish vasculature and enables detailed quantification of a wide variety of vascular parameters.
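The pipeline above selects intensity-based Otsu thresholding as the most reliable segmentation. A minimal NumPy sketch of Otsu's method (illustrative only; the thesis used Fiji's implementation, and the toy bimodal data below is an assumption) shows how the threshold that maximizes between-class variance is found:

```python
import numpy as np

def otsu_threshold(image, nbins=256):
    """Return the intensity threshold maximizing between-class variance."""
    hist, bin_edges = np.histogram(np.ravel(image), bins=nbins)
    bin_centers = (bin_edges[:-1] + bin_edges[1:]) / 2
    # Cumulative pixel counts below / above each candidate split.
    weight1 = np.cumsum(hist)
    weight2 = np.cumsum(hist[::-1])[::-1]
    # Cumulative class means (the first and last bins always contain the
    # data minimum and maximum, so the denominators are nonzero).
    mean1 = np.cumsum(hist * bin_centers) / weight1
    mean2 = (np.cumsum((hist * bin_centers)[::-1]) / weight2[::-1])[::-1]
    # Between-class variance at each split; pick the best one.
    variance = weight1[:-1] * weight2[1:] * (mean1[:-1] - mean2[1:]) ** 2
    return bin_centers[:-1][np.argmax(variance)]

# Toy bimodal "image": dim background around 0.2, bright vessels around 0.8.
rng = np.random.default_rng(1)
pixels = np.concatenate([rng.normal(0.2, 0.05, 500),
                         rng.normal(0.8, 0.05, 500)])
threshold = otsu_threshold(pixels)
mask = pixels > threshold   # binary segmentation of the bright class
```

Because the method needs only the image histogram, it is fast even on the large light-sheet volumes described above, which is one reason intensity-based thresholding is a common default.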