
    Improving 3D Scan Matching Time of the Coarse Binary Cubes Method with Fast Spatial Subsampling

    Morales, J.; Martinez, J.L.; Mandow, A.; Reina, A.J.; Seron, J.; Garcia-Cerezo, A., "Improving 3D scan matching time of the coarse binary cubes method with fast spatial subsampling," 39th Annual Conference of the IEEE Industrial Electronics Society, pp. 4168-4173, 2013. doi:10.1109/IECON.2013.6699804
    Exploiting the huge amount of real-time range data provided by new multi-beam three-dimensional (3D) laser scanners is challenging for vehicle and mobile robot applications. The Coarse Binary Cube (CBC) method was proposed to achieve fast and accurate scene registration by maximizing the number of coincident cubes between a pair of scans. The aim of this paper is to speed up CBC with a fast spatial subsampling strategy for raw point clouds that employs the same type of efficient data structures as CBC. Experimental results have been obtained with a Velodyne HDL-32E sensor mounted on the Quadriga mobile robot on irregular terrain, and the influence of the subsampling rate has been analyzed. Preliminary results show a relevant gain in computation time without loss of matching accuracy.
    Universidad de Málaga. Campus de Excelencia Internacional Andalucía Tech
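    The subsampling idea lends itself to a compact illustration. The sketch below (plain C++, illustrative names, not the authors' code) keeps one representative point per fixed-size cube using a single hash insertion per raw point, the same kind of sparse cube indexing that binary-cube matching relies on.

        #include <cmath>
        #include <cstdint>
        #include <unordered_map>
        #include <vector>

        struct Point { float x, y, z; };

        // Pack the integer cube coordinates of a point into one 64-bit key
        // (21 bits per axis, offset so negative coordinates stay positive).
        static uint64_t cubeKey(const Point& p, float cubeSize) {
            auto q = [cubeSize](float v) {
                int64_t c = static_cast<int64_t>(std::floor(v / cubeSize));
                return static_cast<uint64_t>(c + (1LL << 20)) & 0x1FFFFFULL;
            };
            return (q(p.x) << 42) | (q(p.y) << 21) | q(p.z);
        }

        // Spatial subsampling: keep only the first point that falls inside
        // each cube, so the cost is one hash lookup per raw point.
        std::vector<Point> subsampleByCubes(const std::vector<Point>& cloud,
                                            float cubeSize) {
            std::unordered_map<uint64_t, Point> cubes;
            cubes.reserve(cloud.size());
            for (const Point& p : cloud)
                cubes.emplace(cubeKey(p, cubeSize), p);  // no-op if occupied
            std::vector<Point> out;
            out.reserve(cubes.size());
            for (const auto& kv : cubes) out.push_back(kv.second);
            return out;
        }

    The cube edge length plays the role of the subsampling rate studied in the paper: larger cubes discard more points before matching.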

    Intelligent surveillance of indoor environments based on computer vision and 3D point cloud fusion

    A real-time detection algorithm for intelligent surveillance is presented. The system, based on 3D change detection with respect to a complex scene model, allows intruder monitoring and detection of added and missing objects under different illumination conditions. The proposed system has two independent stages. First, a mapping application provides an accurate 3D wide-area model of the scene using a view-registration approach based on computer vision and 3D point clouds. Fusion of visual features with 3D descriptors is used to identify corresponding points in two consecutive views. The matching of these two views is first estimated by a pre-alignment stage based on the tilt movement of the sensor; the views are then accurately aligned by an Iterative Closest Point variant (Levenberg-Marquardt ICP), whose performance has been improved by a preceding filter based on geometrical assumptions. The second stage provides accurate intruder and object detection by means of a 3D change-detection approach based on an octree volumetric representation, followed by a cluster analysis. The whole scene is continuously scanned, and every captured view is compared with the corresponding part of the wide-area model thanks to the prior analysis of the sensor movement parameters; for this purpose, a tilt-axis calibration method has been developed. The tests performed show the reliable performance of the system under real conditions and the improvement provided by each stage independently. Moreover, the main goal of the application, reliable intruder detection, has been enhanced by tilting the sensors with their built-in motors to increase the size of the monitored area.
    This work was supported by the Spanish Government through the CICYT projects (TRA2013-48314-C3-1-R) and (TRA2011-29454-C03-02).
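    The change-detection stage can be illustrated with a minimal occupancy test. The paper uses an octree; the hedged C++ sketch below substitutes a hashed voxel set, which supports the same "is this cell occupied in the model?" query, and flags scan points falling in cells the model leaves empty (the subsequent cluster analysis is omitted). All names are illustrative.

        #include <cmath>
        #include <cstdint>
        #include <unordered_set>
        #include <vector>

        struct Point { float x, y, z; };

        // Quantize a point to its voxel and pack the cell into a 64-bit key.
        static uint64_t voxelKey(const Point& p, float s) {
            auto q = [s](float v) {
                int64_t c = static_cast<int64_t>(std::floor(v / s));
                return static_cast<uint64_t>(c + (1LL << 20)) & 0x1FFFFFULL;
            };
            return (q(p.x) << 42) | (q(p.y) << 21) | q(p.z);
        }

        // Voxelize the reference model once; the set answers occupancy queries.
        std::unordered_set<uint64_t> buildOccupancy(
                const std::vector<Point>& model, float voxel) {
            std::unordered_set<uint64_t> occ;
            occ.reserve(model.size());
            for (const Point& p : model) occ.insert(voxelKey(p, voxel));
            return occ;
        }

        // Changed points: scan points whose voxel is unoccupied in the model
        // (added objects / intruders). Missing objects would be found the
        // opposite way, testing model voxels against a scan occupancy set.
        std::vector<Point> detectChanges(const std::vector<Point>& scan,
                                         const std::unordered_set<uint64_t>& occ,
                                         float voxel) {
            std::vector<Point> changed;
            for (const Point& p : scan)
                if (occ.find(voxelKey(p, voxel)) == occ.end())
                    changed.push_back(p);
            return changed;
        }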

    Adaptive Methods for Point Cloud and Mesh Processing

    Point clouds and 3D meshes are widely used in numerous applications ranging from games to virtual reality to autonomous vehicles. This dissertation proposes several approaches for noise removal and calibration of noisy point cloud data, as well as 3D mesh sharpening methods. Order statistic filters have proven very successful in image processing and other domains. In this dissertation, different variations of order statistic filters originally proposed for image processing are extended to point cloud filtering, and a new adaptive vector median filter is proposed for removing noise and outliers from noisy point cloud data. The major contributions of this research lie in four aspects: 1) four order statistic algorithms are extended, and one adaptive filtering method is proposed, for noisy point clouds, with improved results such as the preservation of significant features; these methods are applied to standard models as well as synthetic models and real scenes; 2) a hardware acceleration of the proposed point cloud filtering method is implemented on multicore processors using the Microsoft Parallel Patterns Library; 3) a new method for aerial LIDAR data filtering is proposed, with the objective of enabling automatic extraction of ground points from aerial LIDAR data with minimal human intervention; and 4) a novel method for mesh color sharpening using the discrete Laplace-Beltrami operator is proposed. Median and order statistics-based filters are widely used in signal processing and image processing because they easily remove outlier noise while preserving important features. This dissertation demonstrates a wide range of results with the median filter, vector median filter, fuzzy vector median filter, adaptive mean filter, adaptive median filter, and adaptive vector median filter on point cloud data. The experiments show that large-scale noise is removed while preserving important features of the point cloud, with reasonable computation time. Quantitative criteria (e.g., complexity, Hausdorff distance, and root mean squared error (RMSE)), as well as qualitative criteria (e.g., the perceived visual quality of the processed point cloud), are employed to assess the performance of the filters in various cases corrupted by different noise models. The adaptive vector median is further optimized for denoising and ground filtering of aerial LIDAR point clouds, and is accelerated on multi-core CPUs using the Microsoft Parallel Patterns Library. In addition, this dissertation presents a new method for mesh color sharpening using the discrete Laplace-Beltrami operator, an approximation of second-order derivatives on irregular 3D meshes. The one-ring neighborhood is used to compute the Laplace-Beltrami operator, and the color of each vertex is updated by adding the Laplace-Beltrami operator of the vertex color, weighted by a factor, to its original value. Different discretizations of the Laplace-Beltrami operator have been proposed for geometric processing of 3D meshes; this work applies several of them to sharpening 3D mesh colors and compares their performance. Experimental results demonstrate the effectiveness of the proposed algorithms.
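    The core vector-median operation admits a short sketch: each point is replaced by the member of its k-nearest neighborhood that minimizes the summed distance to the other neighbors. This is a hedged, brute-force C++ illustration of the classical filter, not the dissertation's adaptive variant.

        #include <algorithm>
        #include <cmath>
        #include <cstddef>
        #include <limits>
        #include <vector>

        struct Point { float x, y, z; };

        static float dist(const Point& a, const Point& b) {
            float dx = a.x - b.x, dy = a.y - b.y, dz = a.z - b.z;
            return std::sqrt(dx * dx + dy * dy + dz * dz);
        }

        // Replace each point by the vector median of its k nearest neighbors:
        // the neighbor minimizing the sum of distances to all other neighbors.
        // Brute-force neighbor search keeps the sketch short; a k-d tree
        // would be used in practice.
        std::vector<Point> vectorMedianFilter(const std::vector<Point>& cloud,
                                              std::size_t k) {
            std::vector<Point> out(cloud.size());
            for (std::size_t i = 0; i < cloud.size(); ++i) {
                // Collect the k nearest neighbors of point i (itself included).
                std::vector<std::size_t> idx(cloud.size());
                for (std::size_t j = 0; j < cloud.size(); ++j) idx[j] = j;
                std::size_t m = std::min(k, idx.size());
                std::partial_sort(idx.begin(), idx.begin() + m, idx.end(),
                                  [&](std::size_t a, std::size_t b) {
                                      return dist(cloud[i], cloud[a]) <
                                             dist(cloud[i], cloud[b]);
                                  });
                // Vector median: neighbor with minimal summed distance.
                std::size_t best = idx[0];
                float bestSum = std::numeric_limits<float>::max();
                for (std::size_t a = 0; a < m; ++a) {
                    float sum = 0.f;
                    for (std::size_t b = 0; b < m; ++b)
                        sum += dist(cloud[idx[a]], cloud[idx[b]]);
                    if (sum < bestSum) { bestSum = sum; best = idx[a]; }
                }
                out[i] = cloud[best];
            }
            return out;
        }

    Because the output is always one of the input neighbors, the filter rejects outliers without smearing sharp features, which is why order-statistic filters transfer well from images to point clouds.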

    Proceedings, MSVSCC 2015

    The Virginia Modeling, Analysis and Simulation Center (VMASC) of Old Dominion University hosted the 2015 Modeling, Simulation & Visualization Student Capstone Conference on April 16th. The Capstone Conference features students from undergraduate and graduate degree programs in Modeling and Simulation and related fields, from many colleges and universities. Students present their research to an audience of fellow students, faculty, judges, and other distinguished guests. For the students, these presentations afford the opportunity to impart their innovative research to members of the M&S community from academic, industry, and government backgrounds. Also participating in the conference are faculty and judges who have volunteered their time to directly support their students' research, facilitate the various conference tracks, serve as judges for each of the tracks, and provide overall assistance to the conference. 2015 marks the ninth year of the VMASC Capstone Conference for Modeling, Simulation and Visualization. This year the conference attracted a number of fine student-written papers and presentations, resulting in a total of 51 research works being presented. This year's conference had record attendance thanks to support from the various departments at Old Dominion University, other local universities, and the United States Military Academy at West Point. We greatly appreciate all of the work and energy that went into this year's conference; it was truly a highly collaborative effort that resulted in a very successful symposium for the M&S community and all of those involved. Below you will find a brief summary of the best papers and best presentations, with some simple statistics on the overall conference contributions, followed by a table of contents broken down by conference track category, with a copy of each included body of work. Thank you again for your time and your contribution, as this conference is designed to continuously evolve and adapt to better suit the authors and M&S supporters. Dr. Yuzhong Shen, Graduate Program Director, MSVE, Capstone Conference Chair; John Shull, Graduate Student, MSVE, Capstone Conference Student Chair

    Building models from multiple point sets with kernel density estimation

    One of the fundamental problems in computer vision is point set registration. Point set registration finds use in many important applications and, in particular, can be considered one of the crucial stages in the reconstruction of models of physical objects and environments from depth sensor data. Globally aligning multiple point sets, representing spatial shape measurements from varying sensor viewpoints, into a common frame of reference is a complex task, and an important one given the large number of critical functions that depend on accurate and reliable model reconstructions. In this thesis we focus on improving the quality and feasibility of model and environment reconstruction through the enhancement of multi-view point set registration techniques. The thesis makes the following contributions. First, we demonstrate that employing kernel density estimation to reason about the unknown generating surfaces that range sensors measure allows us to express measurement variability and uncertainty, and also to separate the problems of model design and viewpoint alignment optimisation. Our surface estimates define novel view-alignment objective functions that inform the registration process, and can be estimated from point clouds in a data-driven fashion. Through experiments on a variety of datasets we demonstrate that we have developed a novel and effective solution to the simultaneous multi-view registration problem. We then focus on constructing a distributed computation framework capable of solving generic high-throughput computational problems. We present a novel task-farming model that we call Semi-Synchronised Task Farming (SSTF), capable of modelling and subsequently solving computationally distributable problems that benefit from both independent and dependent distributed components and a level of communication between process elements. We demonstrate that this framework is a novel schema for parallel computer vision algorithms and evaluate its performance to establish the computational gains over serial implementations. We couple this framework with an accurate computation-time prediction model to contribute a novel structure appropriate for addressing expensive real-world algorithms with substantial parallel performance and predictable time savings. Finally, we focus on a timely instance of the multi-view registration problem: modern range sensors provide large numbers of viewpoint samples that result in an abundance of depth data information. The ability to utilise this abundance of depth data in a feasible and principled fashion is of importance to many emerging application areas making use of spatial information. We develop novel methodology for the registration of depth measurements acquired from many viewpoints capturing physical object surfaces. By defining registration and alignment quality metrics based on our density estimation framework, we construct an optimisation methodology that implicitly considers all viewpoints simultaneously. We use a non-parametric, data-driven approach to handle varying object complexity and to guide large view-set spatial transform optimisations. By aligning large numbers of partial, arbitrary-pose views, we evaluate this strategy quantitatively on large view-set range sensor data, where we find that we can improve registration accuracy over existing methods and contribute increased registration robustness to the magnitude of the coarse seed alignment. This allows large-scale registration on problem instances exhibiting varying object complexity, with the added advantage of massive parallel efficiency.
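    The density-based objective at the heart of the registration chapters can be sketched compactly: a fixed reference view induces a Gaussian kernel density estimate, and a candidate rigid transform of another view is scored by the average density its transformed points land in. This is a minimal, assumption-laden C++ sketch (isotropic Gaussian kernel, brute-force evaluation, hypothetical names), not the thesis's objective.

        #include <array>
        #include <cmath>
        #include <vector>

        struct Point { float x, y, z; };

        // Row-major 3x3 rotation plus translation: a candidate rigid transform.
        struct Rigid {
            std::array<float, 9> R;
            Point t;
            Point apply(const Point& p) const {
                return { R[0]*p.x + R[1]*p.y + R[2]*p.z + t.x,
                         R[3]*p.x + R[4]*p.y + R[5]*p.z + t.y,
                         R[6]*p.x + R[7]*p.y + R[8]*p.z + t.z };
            }
        };

        // KDE of the reference view evaluated at y:
        //   f(y) = (1/N) * sum_i exp(-||y - x_i||^2 / (2 sigma^2))
        static float density(const std::vector<Point>& ref, const Point& y,
                             float sigma) {
            float acc = 0.f, inv2s2 = 1.f / (2.f * sigma * sigma);
            for (const Point& x : ref) {
                float dx = y.x - x.x, dy = y.y - x.y, dz = y.z - x.z;
                acc += std::exp(-(dx*dx + dy*dy + dz*dz) * inv2s2);
            }
            return acc / static_cast<float>(ref.size());
        }

        // Alignment score of a view under transform T: higher means the moved
        // points fall into denser regions of the reference estimate. An
        // optimizer would search over T to maximize this score.
        float alignmentScore(const std::vector<Point>& ref,
                             const std::vector<Point>& view,
                             const Rigid& T, float sigma) {
            float acc = 0.f;
            for (const Point& p : view) acc += density(ref, T.apply(p), sigma);
            return acc / static_cast<float>(view.size());
        }

    Evaluating one view against a density built from all other views is what lets such an objective consider every viewpoint simultaneously, and each point's score is independent, which is where the parallel efficiency comes from.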

    Análisis y reconstrucción 3D de entornos complejos mediante múltiples sensores [3D analysis and reconstruction of complex environments using multiple sensors]

    We live in a society in constant advance, committed to the continuous development of security. The progress of security systems is tied to the development of the human race as a species, providing people with a more comfortable and safer situation. Traditional security systems, in which an operator is responsible for monitoring several screens, present errors and limitations due to the human factor, which research has managed to eradicate with new intelligent surveillance systems. These new systems are more effective, since they automate the surveillance tasks and do not depend on the performance of the monitoring personnel. They have also incorporated new types of sensors beyond the traditional color cameras, which depend strongly on illumination. To eliminate this problem, sensors such as 2.5D devices, which capture both color and depth data, have come into use, making surveillance systems based on them more robust than intelligent surveillance systems based on cameras alone. A further advantage of these sensors is their low cost, since they are 2.5D sensors rather than 3D sensors, which can provide more precise information; still, the results of many systems based on the former show that adopting them is a sound choice. A problem shared by both 2.5D sensors and surveillance cameras is their limited field of view, which in some situations creates the need to enlarge the monitored area. This report proposes a surveillance system based on three-dimensional data captured from several Microsoft Kinect for Windows sensors. Its advance over traditional security systems is its lack of dependence on elements such as color data, and consequently on illumination, besides the aforementioned human operator. The project consists of acquiring three-dimensional data from the different sensors, having first placed a geometric target that serves as a reference when building a complete map from new three-dimensional data. The map is created from a transformation matrix computed from the centers of the geometric target placed at different positions in the scene. This matrix multiplies the point cloud taken from one of the sensors, transforming it, and the result is then added to the other cloud, thereby producing the complete map. This map is used for comparison with new three-dimensional data captured in real time, so that the differences between how the scene should look and how it actually looks can be seen, thereby detecting possible intruders. In the various tests to which the prototype was subjected, the results were successful, even when the system was forced to work in adverse conditions, proving its robustness; and, with user simplicity in mind from the outset, the result is a versatile application.
    Ingeniería Electrónica Industrial y Automática
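    The core map-building step described above, multiplying one sensor's cloud by the estimated transformation matrix and adding it to the other cloud, is mechanically simple. The hedged C++ sketch below shows just that step, taking the 4x4 homogeneous matrix (estimated elsewhere from the target centers) as given; names are illustrative.

        #include <array>
        #include <vector>

        struct Point { float x, y, z; };

        // 4x4 homogeneous transform, row-major, estimated beforehand from
        // the geometric target centers seen by both sensors.
        using Mat4 = std::array<float, 16>;

        static Point transform(const Mat4& M, const Point& p) {
            return { M[0]*p.x + M[1]*p.y + M[2]*p.z  + M[3],
                     M[4]*p.x + M[5]*p.y + M[6]*p.z  + M[7],
                     M[8]*p.x + M[9]*p.y + M[10]*p.z + M[11] };
        }

        // Bring cloudB into cloudA's frame and concatenate, producing the
        // combined map that the intruder-detection stage compares against.
        std::vector<Point> mergeClouds(const std::vector<Point>& cloudA,
                                       const std::vector<Point>& cloudB,
                                       const Mat4& BtoA) {
            std::vector<Point> map = cloudA;
            map.reserve(cloudA.size() + cloudB.size());
            for (const Point& p : cloudB) map.push_back(transform(BtoA, p));
            return map;
        }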

    Intelligent instrumentation techniques to improve the traces information-volume ratio

    With ever more powerful machines being constantly deployed, it is crucial to manage computational resources efficiently. This is important both from the point of view of the individual user, who expects fast results, and of the supercomputing center hosting the whole infrastructure, which is interested in maximizing its overall productivity. Nevertheless, the real sustained performance achieved by applications can be significantly lower than the theoretical peak performance of the machines. A key factor in bridging this performance gap is understanding how parallel computers behave. Performance analysis tools are essential not only to understand the behavior of parallel applications, but also to identify why performance expectations might not have been met, serving as guidelines for fixing the inefficiencies that caused poor performance and driving both software and hardware optimizations. However, detailed analysis of the behavior of a parallel application requires processing a large amount of data that also grows extremely fast. Current large-scale systems already comprise hundreds of thousands of cores, and upcoming exascale systems are expected to assemble more than a million processing elements. With such a number of hardware components, the traditional analysis methodologies, which consist of blindly collecting as much data as possible and then performing exhaustive lookups, are no longer applicable, because the volume of performance data generated becomes absolutely unmanageable to store, process, and analyze. The evolution of the tools suggests that more complex approaches are needed, incorporating intelligence to perform the challenging and important task of detailed analysis competently. In this thesis, we address the problem of scalability of performance analysis tools in large-scale systems. In such scenarios, in-depth understanding of the interactions between all the system components is more compelling than ever for an effective use of the parallel resources. To this end, our work includes a thorough review of techniques that have been successfully applied to aid in the task of Big Data analytics in fields like machine learning, data mining, signal processing, and computer vision. We have leveraged these techniques to improve the analysis of large-scale parallel applications by automatically uncovering repetitive patterns, finding data correlations, detecting performance trends, and extracting further useful analysis information. Combining their use, we have minimized the volume of performance data captured from an execution while maximizing the benefit and insight gained from this data, and have proposed new and more effective methodologies for single- and multi-experiment performance analysis.
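    One of the techniques alluded to, clustering to uncover repetitive structure in trace data, can be sketched in a few lines. The hedged C++ example below runs a plain k-means over per-burst feature vectors (say, duration and instructions completed); bursts falling in the same cluster are candidates for sharing a single representative in the stored trace. It illustrates the general idea only, not the thesis's tooling.

        #include <cstddef>
        #include <vector>

        struct Burst { float duration, instructions; };  // one compute burst

        static float dist2(const Burst& a, const Burst& b) {
            float dd = a.duration - b.duration;
            float di = a.instructions - b.instructions;
            return dd * dd + di * di;
        }

        // Plain k-means over burst features; returns each burst's cluster
        // index. Assumes k <= bursts.size(); features should be normalized
        // beforehand, and seeding is naive for brevity.
        std::vector<std::size_t> clusterBursts(const std::vector<Burst>& bursts,
                                               std::size_t k, int iters) {
            std::vector<Burst> centers(bursts.begin(), bursts.begin() + k);
            std::vector<std::size_t> label(bursts.size(), 0);
            for (int it = 0; it < iters; ++it) {
                // Assignment step: attach each burst to its nearest center.
                for (std::size_t i = 0; i < bursts.size(); ++i) {
                    std::size_t best = 0;
                    for (std::size_t c = 1; c < k; ++c)
                        if (dist2(bursts[i], centers[c]) <
                            dist2(bursts[i], centers[best])) best = c;
                    label[i] = best;
                }
                // Update step: recompute centers as cluster means.
                std::vector<Burst> sum(k, Burst{0.f, 0.f});
                std::vector<std::size_t> cnt(k, 0);
                for (std::size_t i = 0; i < bursts.size(); ++i) {
                    sum[label[i]].duration += bursts[i].duration;
                    sum[label[i]].instructions += bursts[i].instructions;
                    ++cnt[label[i]];
                }
                for (std::size_t c = 0; c < k; ++c)
                    if (cnt[c]) centers[c] = { sum[c].duration / cnt[c],
                                               sum[c].instructions / cnt[c] };
            }
            return label;
        }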

    Case Studies on Optimizing Algorithms for GPU Architectures

    Modern GPUs are complex, massively multi-threaded, and high-performance. Programmers naturally gravitate towards taking advantage of this high performance to achieve faster results. However, in order to do so successfully, programmers must first understand and then master a new set of skills: writing parallel code, using different types of parallelism, adapting to GPU architectural features, and understanding the issues that limit performance. To ease this learning process and help GPU programmers become productive more quickly, this dissertation introduces three data access skeletons (DASks) – Block, Column, and Row – and two block access skeletons (BASks) – Block-by-Block and Warp-by-Warp. Each "skeleton" provides a high-performance implementation framework that partitions data arrays into data blocks and then iterates over those blocks. The programmer must still write "body" methods on individual data blocks to solve their specific problem. These skeletons provide efficient, machine-dependent data access patterns for use on GPUs. DASks group n data elements into m fixed-size data blocks. These m data blocks are then partitioned across p thread blocks using a 1D or 2D layout pattern. The fixed-size data blocks are parameterized using three C++ template parameters – nWork, WarpSize, and nWarps. Generic programming techniques use these three parameters to enable performance experiments on three different types of parallelism: instruction-level parallelism (ILP), data-level parallelism (DLP), and thread-level parallelism (TLP). These different DASks and BASks are introduced using a simple memory I/O (Copy) case study. A nearest-neighbor search case study resulted in the development of DASks and BASks but does not use these skeletons itself. Three additional case studies – Reduce/Scan, Histogram, and Radix Sort – demonstrate DASks and BASks in action on parallel primitives and provide further valuable performance lessons.
    Doctor of Philosophy
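    A hedged sketch can make the skeleton idea concrete. The CUDA C++ below shows one possible block-by-block access skeleton parameterized by the three template parameters named above (nWork, WarpSize, nWarps); the user-supplied "body" is a functor applied per element. It illustrates the partitioning scheme only and is not the dissertation's DASk code.

        #include <cstddef>

        // Each thread block walks over fixed-size data blocks of
        // nWork * WarpSize * nWarps elements; within a data block, each of
        // the WarpSize * nWarps threads handles nWork strided elements, so
        // nWork controls ILP, WarpSize/nWarps control DLP/TLP.
        template <int nWork, int WarpSize, int nWarps, typename Body>
        __global__ void blockSkeleton(const float* in, float* out,
                                      std::size_t n, Body body) {
            constexpr int BlockElems = nWork * WarpSize * nWarps;
            const int tid = threadIdx.x;  // launch with WarpSize*nWarps threads
            // Grid-stride loop over data blocks (block-by-block iteration).
            for (std::size_t base =
                     static_cast<std::size_t>(blockIdx.x) * BlockElems;
                 base < n;
                 base += static_cast<std::size_t>(gridDim.x) * BlockElems) {
                #pragma unroll
                for (int w = 0; w < nWork; ++w) {
                    std::size_t i = base +
                        static_cast<std::size_t>(w) * WarpSize * nWarps + tid;
                    if (i < n) out[i] = body(in[i]);  // coalesced accesses
                }
            }
        }

        // Example "body": the simple memory I/O (Copy) case study.
        struct Copy {
            __device__ float operator()(float v) const { return v; }
        };

        // Illustrative launch: 4 elements of work per thread, 8 warps of 32.
        // blockSkeleton<4, 32, 8><<<numBlocks, 32 * 8>>>(in, out, n, Copy{});

    Varying nWork against nWarps at a fixed thread-block size is exactly the kind of ILP-versus-TLP experiment such template parameterization enables.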