3,310 research outputs found

    Using Hierarchical EM to Extract Planes from 3D Range Scans

    Get PDF
    ©2005 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works for resale or redistribution to servers or lists, or reuse of any copyrighted components of this work in other works. Presented at the 2005 IEEE International Conference on Robotics and Automation (ICRA), 18-22 April 2005, Barcelona, Spain. DOI: 10.1109/ROBOT.2005.1570803
    Recently, the acquisition of three-dimensional maps has become more and more popular. This is motivated by the fact that robots act in the three-dimensional world, and several tasks such as path planning or localizing objects can be carried out more reliably using three-dimensional representations. In this paper we consider the problem of extracting planes from three-dimensional range data. In contrast to previous approaches, our algorithm uses a hierarchical variant of the popular Expectation Maximization (EM) algorithm [1] to simultaneously learn the main directions of the planar structures. These main directions are then used to correct the position and orientation of the planes. In practical experiments carried out with real data and in simulations, we demonstrate that our algorithm can accurately extract planes and their orientation from range data.
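    To make the core idea concrete, here is a minimal sketch in Python/NumPy of plane extraction with a flat (non-hierarchical) EM, assuming Gaussian point-to-plane noise with a fixed sigma. The paper's hierarchical variant additionally couples the plane normals through shared main directions, which this sketch omits.

    import numpy as np

    def em_planes(points, k=4, iters=30, sigma=0.05, seed=0):
        """Fit k planes {x : n.x = d} to an (m, 3) point array with EM."""
        rng = np.random.default_rng(seed)
        m = len(points)
        # Initialize each plane from a random point triple.
        normals, offsets = np.empty((k, 3)), np.empty(k)
        for j in range(k):
            p = points[rng.choice(m, 3, replace=False)]
            nv = np.cross(p[1] - p[0], p[2] - p[0])
            normals[j] = nv / np.linalg.norm(nv)
            offsets[j] = normals[j] @ p[0]

        for _ in range(iters):
            # E-step: soft assignments from signed point-to-plane distances.
            dist = points @ normals.T - offsets              # (m, k)
            resp = np.exp(-0.5 * (dist / sigma) ** 2)
            resp /= resp.sum(axis=1, keepdims=True) + 1e-12

            # M-step: weighted PCA per plane; the normal is the eigenvector
            # of the weighted covariance with the smallest eigenvalue.
            for j in range(k):
                w = resp[:, j]
                mu = (w[:, None] * points).sum(0) / w.sum()
                cov = (w[:, None] * (points - mu)).T @ (points - mu)
                normals[j] = np.linalg.eigh(cov)[1][:, 0]
                offsets[j] = normals[j] @ mu
        return normals, offsets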

    Shape Completion using 3D-Encoder-Predictor CNNs and Shape Synthesis

    Full text link
    We introduce a data-driven approach to complete partial 3D shapes through a combination of volumetric deep neural networks and 3D shape synthesis. From a partially-scanned input shape, our method first infers a low-resolution -- but complete -- output. To this end, we introduce a 3D-Encoder-Predictor Network (3D-EPN) which is composed of 3D convolutional layers. The network is trained to predict and fill in missing data, and operates on an implicit surface representation that encodes both known and unknown space. This allows us to predict global structure in unknown areas at high accuracy. We then correlate these intermediary results with 3D geometry from a shape database at test time. In a final pass, we propose a patch-based 3D shape synthesis method that imposes the 3D geometry from these retrieved shapes as constraints on the coarsely-completed mesh. This synthesis process enables us to reconstruct fine-scale detail and generate high-resolution output while respecting the global mesh structure obtained by the 3D-EPN. Although our 3D-EPN outperforms state-of-the-art completion methods, the main contribution of our work lies in the combination of a data-driven shape predictor and analytic 3D shape synthesis. In our results, we show extensive evaluations on a newly-introduced shape completion benchmark for both real-world and synthetic data.
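    As a rough illustration of the encoder-predictor idea, here is a minimal volumetric network sketch in PyTorch, assuming a 32^3 input grid with two channels (a truncated signed distance field plus a known/unknown mask). The layer sizes are illustrative placeholders; this is not the published 3D-EPN architecture.

    import torch
    import torch.nn as nn

    class EncoderPredictor3D(nn.Module):
        def __init__(self):
            super().__init__()
            # Encoder: compress the partial volume into a global shape code.
            self.enc = nn.Sequential(
                nn.Conv3d(2, 16, 4, stride=2, padding=1), nn.ReLU(),   # 32 -> 16
                nn.Conv3d(16, 32, 4, stride=2, padding=1), nn.ReLU(),  # 16 -> 8
                nn.Conv3d(32, 64, 4, stride=2, padding=1), nn.ReLU(),  # 8 -> 4
            )
            # Predictor/decoder: upsample back to a complete low-res volume.
            self.dec = nn.Sequential(
                nn.ConvTranspose3d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
                nn.ConvTranspose3d(32, 16, 4, stride=2, padding=1), nn.ReLU(),
                nn.ConvTranspose3d(16, 1, 4, stride=2, padding=1),    # 4 -> 32
            )

        def forward(self, x):           # x: (batch, 2, 32, 32, 32)
            return self.dec(self.enc(x))

    net = EncoderPredictor3D()
    partial = torch.randn(1, 2, 32, 32, 32)   # TSDF + known/unknown mask
    completed = net(partial)                  # (1, 1, 32, 32, 32)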

    Automatic Reconstruction of Parametric, Volumetric Building Models from 3D Point Clouds

    Get PDF
    Planning, construction, modification, and analysis of buildings require means of representing a building's physical structure and related semantics in a meaningful way. With the rise of novel technologies and increasing requirements in the architecture, engineering and construction (AEC) domain, two general concepts for representing buildings have gained particular attention in recent years. First, the concept of Building Information Modeling (BIM) is increasingly used as a modern means of digitally representing and managing a building's as-planned state, including not only a geometric model but also various additional semantic properties. Second, point cloud measurements are now widely used for capturing a building's as-built condition by means of laser scanning techniques. A particular challenge and topic of current research is how to combine the strengths of point cloud measurements and Building Information Modeling concepts to quickly obtain accurate building models from measured data. In this thesis, we present our recent approaches to tackling the intermeshed challenges of automated indoor point cloud interpretation using targeted segmentation methods, and the automatic reconstruction of high-level, parametric, and volumetric building models as the basis for further usage in BIM scenarios. In contrast to most reconstruction methods available at the time, we fundamentally base our approaches on BIM principles and standards, and overcome critical limitations of previous approaches in order to reconstruct globally plausible, volumetric, and parametric models.
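    To illustrate what "parametric, volumetric" means in contrast to surface meshes, here is a hypothetical minimal sketch in Python: a wall is stored by its defining parameters, from which the solid geometry can be regenerated at any time. A real BIM exchange would use IFC entities (e.g. IfcWallStandardCase); this is not the thesis's actual data model.

    from dataclasses import dataclass

    @dataclass
    class ParametricWall:
        x0: tuple[float, float]   # wall axis start in plan view (metres)
        x1: tuple[float, float]   # wall axis end
        height: float
        thickness: float

        def volume(self) -> float:
            # The solid volume follows directly from the parameters.
            dx, dy = self.x1[0] - self.x0[0], self.x1[1] - self.x0[1]
            return (dx * dx + dy * dy) ** 0.5 * self.height * self.thickness

    wall = ParametricWall((0.0, 0.0), (5.0, 0.0), height=2.6, thickness=0.24)
    print(wall.volume())   # 3.12 cubic metres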

    Point Cloud Registration for LiDAR and Photogrammetric Data: a Critical Synthesis and Performance Analysis on Classic and Deep Learning Algorithms

    Full text link
    Recent advances in computer vision and deep learning have shown promising performance in estimating rigid/similarity transformations between unregistered point clouds of complex objects and scenes. However, these methods are mostly evaluated on a limited number of datasets from a single sensor (e.g. Kinect or RealSense cameras), lacking a comprehensive overview of their applicability in photogrammetric 3D mapping scenarios. In this work, we provide a comprehensive review of the state-of-the-art (SOTA) point cloud registration methods, which we analyze and evaluate using a diverse set of point cloud data from indoor to satellite sources. The quantitative analysis allows for exploring the strengths, applicability, challenges, and future trends of these methods. In contrast to existing analysis works that treat point cloud registration as a holistic process, our experimental analysis is based on its inherent two-step nature to better comprehend these approaches: feature/keypoint-based initial coarse registration, followed by dense fine registration through cloud-to-cloud (C2C) optimization. More than ten methods, including classic hand-crafted, deep-learning-based feature correspondence, and robust C2C methods, were tested. We observed that the success rate of most of the algorithms is below 40% on the datasets we tested, and that there is still a large margin for improvement upon existing algorithms concerning 3D sparse correspondence search and the ability to register point clouds with complex geometry and occlusions. With the evaluated statistics on three datasets, we identify the best-performing methods for each step, provide our recommendations, and outline future efforts. Comment: 7 figures
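    The two-step process around which the analysis is organized can be sketched in Python with NumPy/SciPy: a coarse rigid estimate from keypoint correspondences (assumed already matched) via the Kabsch algorithm, followed by fine cloud-to-cloud refinement with plain point-to-point ICP. Feature extraction and correspondence search, the parts the survey actually evaluates, are out of scope here.

    import numpy as np
    from scipy.spatial import cKDTree

    def kabsch(src, dst):
        """Best rigid (R, t) mapping rows of src onto dst, both (n, 3)."""
        mu_s, mu_d = src.mean(0), dst.mean(0)
        u, _, vt = np.linalg.svd((src - mu_s).T @ (dst - mu_d))
        d = np.sign(np.linalg.det(vt.T @ u.T))     # guard against reflections
        r = vt.T @ np.diag([1.0, 1.0, d]) @ u.T
        return r, mu_d - r @ mu_s

    def icp_refine(src, dst, r, t, iters=20):
        """Fine C2C registration: alternate nearest neighbours and Kabsch."""
        tree = cKDTree(dst)
        for _ in range(iters):
            _, idx = tree.query(src @ r.T + t)     # closest-point matches
            r, t = kabsch(src, dst[idx])
        return r, t

    # Usage: coarse from sparse matched keypoints, fine on the dense clouds.
    # r0, t0 = kabsch(src_keypoints, dst_keypoints)
    # r, t = icp_refine(src_points, dst_points, r0, t0)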

    ADNet: computer-aided diagnosis for Alzheimer's disease using a whole-brain 3D convolutional neural network

    Get PDF
    Advisors: Anderson de Rezende Rocha, Marina Weiler. Master's thesis, Universidade Estadual de Campinas, Instituto de Computação.
    Abstract: Dementia by Alzheimer's disease (AD) is a clinical syndrome characterized by multiple cognitive problems, including difficulties in memory, executive functions, language, and visuospatial skills. Being the most common form of dementia, this disease kills more people than breast cancer and prostate cancer combined, and it is the sixth leading cause of death in the United States. Neuroimaging is one of the most promising areas of research for early detection of AD structural biomarkers, where a non-invasive technique is used to capture a digital image of the brain, from which specialists extract patterns and features of the disease. In this context, computer-aided diagnosis (CAD) systems are approaches that aim at assisting doctors and specialists in the interpretation of medical data to provide diagnoses for patients. In particular, convolutional neural networks (CNNs) are a special kind of artificial neural network (ANN), inspired by how the visual system works, and have been increasingly used in computer vision tasks, achieving impressive results. In our research, one of the main goals was bringing to bear what is most advanced in deep learning research (e.g., CNNs) to solve the difficult problem of identifying AD structural biomarkers in magnetic resonance imaging (MRI), considering three different groups, namely, cognitively normal (CN), mild cognitive impairment (MCI), and AD. We tailored convolutional networks with data primarily provided by ADNI and evaluated them on the CADDementia challenge, resulting in a scenario very close to real-world conditions, in which a CAD system is used on a dataset different from the one used for training. The main challenges and contributions of our research include devising a deep learning system that is both completely automatic and comparatively fast, while also presenting competitive results, without using any domain-specific knowledge. We named our best architecture ADNet (Alzheimer's Disease Network), and our best method ADNet-DA (ADNet with domain adaptation), which outperformed most of the CADDementia submissions, all of which used prior knowledge of the disease, such as specific regions of interest of the brain. The main reason for not using any information about the disease in our system is to make it learn and extract relevant patterns from important regions of the brain automatically, which can be used to support current diagnosis standards and may even assist in new discoveries for different or new diseases. After exploring a number of visualization techniques for model interpretability, associated with explainable artificial intelligence (XAI), we believe that our method can actually be employed in medical practice. While diagnosing patients, specialists can use ADNet to generate a diversity of explanatory visualizations for a given image, as illustrated in our research, while ADNet-DA can assist with the diagnosis. This way, specialists can come to a more informed decision in less time.
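    For orientation, a whole-brain 3D CNN for the three-class CN/MCI/AD problem can be sketched in PyTorch as below; the input size and layer widths are illustrative placeholders, not the actual ADNet architecture.

    import torch
    import torch.nn as nn

    class TinyADNet(nn.Module):
        def __init__(self, n_classes=3):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv3d(1, 8, 3, padding=1), nn.ReLU(), nn.MaxPool3d(2),
                nn.Conv3d(8, 16, 3, padding=1), nn.ReLU(), nn.MaxPool3d(2),
                nn.Conv3d(16, 32, 3, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool3d(1),          # global pooling over the brain
            )
            self.classifier = nn.Linear(32, n_classes)

        def forward(self, x):                     # x: (batch, 1, D, H, W) MRI
            return self.classifier(self.features(x).flatten(1))

    model = TinyADNet()
    mri = torch.randn(2, 1, 64, 64, 64)           # e.g. downsampled T1 volumes
    logits = model(mri)                           # (2, 3): CN, MCI, AD scores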

    Graph-based segmentation of range data with applications to 3D urban mapping

    Get PDF
    This paper presents an efficient graph-based algorithm for the segmentation of planar regions out of 3D range maps of urban areas. Segmentation of planar surfaces in urban scenarios is challenging because the acquired data is typically sparsely sampled, incomplete, and noisy. The algorithm is motivated by Felzenszwalb's algorithm for 2D image segmentation [8], and is extended to deal with non-uniformly sampled 3D range data using an approximate nearest neighbor search. Interpoint distances are sorted in increasing order, and this list of distances is traversed, growing planar regions that satisfy both local and global criteria on distance variation and curvature. The algorithm runs in O(n log n) and compares favorably with other region-growing mechanisms based on Expectation Maximization. Experiments carried out with real data acquired in an outdoor urban environment demonstrate that our approach is well-suited to segmenting planar surfaces from noisy 3D range data. Two applications of the segmentation results are shown: (a) deriving traversability maps, and (b) calibrating a camera network.
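    A simplified sketch of the Felzenszwalb-style pipeline in Python: build a k-NN graph over the points, sort the interpoint distances in increasing order, and grow regions with a union-find, merging when an edge is short relative to each region's internal variation. The paper's planarity and curvature criteria are omitted for brevity.

    import numpy as np
    from scipy.spatial import cKDTree

    def segment(points, k=8, tau=0.1):
        n = len(points)
        dist, idx = cKDTree(points).query(points, k + 1)   # column 0 is self
        edges = sorted((dist[i, j], i, idx[i, j])
                       for i in range(n) for j in range(1, k + 1))

        parent = list(range(n))
        internal = [0.0] * n          # largest edge inside each region so far
        size = [1] * n

        def find(a):
            while parent[a] != a:
                parent[a] = parent[parent[a]]    # path halving
                a = parent[a]
            return a

        for w, a, b in edges:         # traverse distances in increasing order
            ra, rb = find(a), find(b)
            if ra == rb:
                continue
            # Merge if the connecting edge is consistent with both regions'
            # internal variation plus a size-dependent tolerance tau / |C|.
            if w <= min(internal[ra] + tau / size[ra],
                        internal[rb] + tau / size[rb]):
                parent[rb] = ra
                size[ra] += size[rb]
                internal[ra] = max(internal[ra], internal[rb], w)
        return [find(i) for i in range(n)]   # region label per point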

    A Survey on Deep Learning in Medical Image Analysis

    Full text link
    Deep learning algorithms, in particular convolutional networks, have rapidly become a methodology of choice for analyzing medical images. This paper reviews the major deep learning concepts pertinent to medical image analysis and summarizes over 300 contributions to the field, most of which appeared in the last year. We survey the use of deep learning for image classification, object detection, segmentation, registration, and other tasks, and provide concise overviews of studies per application area. Open challenges and directions for future research are discussed. Comment: Revised survey includes expanded discussion section and reworked introductory section on common deep architectures. Added missed papers from before Feb 1st, 2017.