
    3D Reconstruction of Indoor Corridor Models Using Single Imagery and Video Sequences

    In recent years, 3D indoor modeling has gained attention due to its role in decision-making processes for maintaining the status and managing the security of building indoor spaces. In this thesis, the problem of continuous indoor corridor space modeling is tackled through two approaches. The first develops a modeling method based on middle-level perceptual organization. The second develops a visual Simultaneous Localisation and Mapping (SLAM) system with model-based loop closure. In the first approach, the image space is searched for a corridor layout that can be converted into a geometrically accurate 3D model. The Manhattan-world assumption is adopted, and indoor corridor layout hypotheses are generated through rule-based random intersections of physical image line segments with virtual rays from orthogonal vanishing points. Volumetric reasoning, correspondences to physical edges, the orientation map and the geometric context of an image are all considered when scoring layout hypotheses. This approach provides physically plausible solutions even in the presence of objects or occlusions in a corridor scene. In the second approach, Layout SLAM is introduced. Layout SLAM performs camera localization while mapping layout corners and normal point features in 3D space. A new feature-matching cost function is proposed that considers both local and global context information. In addition, a rotation compensation variable makes Layout SLAM robust against the accumulation of camera orientation errors. Moreover, layout model matching of keyframes ensures accurate loop closures that prevent mis-association of newly visited landmarks with previously visited scene parts. Comparison of the generated single-image 3D models with ground-truth models showed average ratio differences in width, height and length of 1.8%, 3.7% and 19.2%, respectively. Layout SLAM achieved a maximum absolute trajectory error of 2.4 m in position and 8.2 degrees in orientation over an approximately 318 m path on the RAWSEEDS data set. Loop closing performed strongly and provided 3D indoor corridor layouts with displacement errors of less than 1.05 m in length and less than 20 cm in width and height over an approximately 315 m path on the York University data set. The proposed methods successfully generate 3D indoor corridor models compared with their major counterparts.
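    As a toy illustration of the vanishing-point geometry this approach builds on (a sketch under my own assumptions, not code from the thesis), the snippet below estimates a vanishing point as the least-squares intersection of a set of roughly parallel image line segments in homogeneous coordinates; all function and variable names are invented for the example.

```python
import numpy as np

def line_through(p, q):
    """Homogeneous line through two image points (cross product of homogeneous points)."""
    return np.cross([p[0], p[1], 1.0], [q[0], q[1], 1.0])

def vanishing_point(segments):
    """Least-squares vanishing point of roughly parallel segments.

    segments: list of ((x1, y1), (x2, y2)) endpoints.
    Each line l should satisfy l . v = 0, so v is the null vector of the stacked lines.
    Returns homogeneous coordinates (the point may lie at infinity).
    """
    L = np.array([line_through(p, q) for p, q in segments], dtype=float)
    L /= np.linalg.norm(L, axis=1, keepdims=True)   # normalize rows for conditioning
    _, _, vt = np.linalg.svd(L)
    return vt[-1]                                    # right singular vector of smallest value

# Example: two near-vertical segments converging above the image.
segs = [((100, 400), (110, 100)), ((300, 400), (290, 100))]
v = vanishing_point(segs)
print(v / v[2] if abs(v[2]) > 1e-9 else v)           # inhomogeneous VP if finite
```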

    3D VISUAL TRACKING USING A SINGLE CAMERA

    3D visual tracking has many applications, including automated surveillance and motion-based recognition. 3D tracking addresses the localization of a moving target in 3D space and therefore requires 3D measurements of the moving object, which cannot be obtained directly from 2D cameras. Existing 3D tracking systems use multiple cameras to compute the depth of field and are used only in research laboratories. Millions of surveillance cameras are installed worldwide, and all of them capture 2D images; 3D tracking cannot be performed with these cameras unless multiple cameras are installed at each location to compute depth, which would mean installing millions of new cameras and is not a feasible solution. This work introduces a novel depth estimation method from a single 2D image using triangulation. The method computes the absolute depth of field for any object in the scene with high accuracy and short computation time. It is used to perform 3D visual tracking with a single camera by providing the depth of field and ground coordinates of the moving object for each frame accurately and efficiently. This technique can therefore help transform existing 2D tracking and 2D video analytics into 3D without incurring additional costs, making video surveillance more efficient and increasing its usefulness. The proposed methodology uses background subtraction to detect a moving object in the image. Then, the newly developed depth estimation method computes the 3D measurement of the moving target. Finally, an unscented Kalman filter tracks the moving object given the 3D measurement obtained by the triangulation method. The system has been tested and validated on several video sequences and shows good performance in terms of accuracy and computational complexity.
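    The abstract does not spell out the triangulation itself; as an illustrative stand-in, the sketch below shows one common way to recover absolute depth from a single calibrated camera by intersecting a pixel's viewing ray with the ground plane, assuming the camera height and tilt are known and the pixel is the target's ground-contact point. The function names, coordinate conventions and pinhole model are my own assumptions, and the background-subtraction and unscented-Kalman-filter stages of the pipeline are omitted.

```python
import numpy as np

def cam_to_world_rotation(pitch_down_deg):
    """Rotation taking camera coordinates (x right, y down, z forward) to a
    world frame with Z up, for a camera pitched down by the given angle."""
    a = np.deg2rad(pitch_down_deg)
    # Level camera: cam x -> world x, cam y -> world -Z, cam z -> world y.
    R0 = np.array([[1.0, 0.0, 0.0],
                   [0.0, 0.0, 1.0],
                   [0.0, -1.0, 0.0]])
    # Tip the viewing direction down by rotating about the world x axis.
    Rx = np.array([[1.0, 0.0, 0.0],
                   [0.0, np.cos(a), np.sin(a)],
                   [0.0, -np.sin(a), np.cos(a)]])
    return Rx @ R0

def ground_point_from_pixel(u, v, K, R, cam_height):
    """Depth and ground coordinates of the pixel's ground-contact point.

    The viewing ray of pixel (u, v) is intersected with the plane Z = 0,
    with the camera placed at (0, 0, cam_height) in the world frame.
    """
    ray_cam = np.linalg.inv(K) @ np.array([u, v, 1.0])  # viewing ray in camera frame
    ray_world = R @ ray_cam
    if ray_world[2] >= -1e-9:
        raise ValueError("pixel does not look down toward the ground")
    t = cam_height / -ray_world[2]
    ground = np.array([0.0, 0.0, cam_height]) + t * ray_world
    depth = t * np.linalg.norm(ray_cam)                 # metric distance from the camera
    return ground[:2], depth

# Toy example: camera 3 m above the ground, pitched 20 degrees down.
K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])
R = cam_to_world_rotation(20.0)
xy, depth = ground_point_from_pixel(320, 400, K, R, cam_height=3.0)
print(xy, depth)   # a point roughly 5 m in front of the camera, about 5.8 m away
```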

    Advanced Integration of GNSS and External Sensors for Autonomous Mobility Applications

    The abstract is provided in the attachment.

    EXPEDITIONARY LOGISTICS: A LOW-COST, DEPLOYABLE, UNMANNED AERIAL SYSTEM FOR AIRFIELD DAMAGE ASSESSMENT

    Airfield Damage Repair (ADR) is among the most important expeditionary activities for our military. The goal of ADR is to restore a damaged airfield to operational status as quickly as possible. Before the process of ADR can begin, however, the damage to the airfield needs to be assessed. As a result, Airfield Damage Assessment (ADA) has received considerable attention. Often in a damaged airfield, there is an expectation of unexploded ordnance, which makes ADA a slow, difficult, and dangerous process. For this reason, it is best to make ADA completely unmanned and automated. Additionally, ADA needs to be executed as quickly as possible so that ADR can begin and the airfield can be restored to a usable condition. Among other modalities, tower-based monitoring and remote sensor systems are often used for ADA. There is now an opportunity to investigate the use of commercial-off-the-shelf, low-cost, automated sensor systems for automatic damage detection. By developing a combination of ground-based and Unmanned Aerial Vehicle sensor systems, we demonstrate the completion of ADA in a safe, efficient, and cost-effective manner.
    http://archive.org/details/expeditionarylog1094561346. Outstanding Thesis. Lieutenant, United States Navy. Approved for public release; distribution is unlimited.

    Automated 3D model generation for urban environments [online]

    In this thesis, we present a fast approach to the automated generation of textured 3D city models with both high detail at ground level and complete coverage for bird's-eye views. A ground-based facade model is acquired by driving a vehicle equipped with two 2D laser scanners and a digital camera under normal traffic conditions on public roads. One scanner is mounted horizontally and is used to determine the approximate component of relative motion along the movement of the acquisition vehicle via scan matching; the obtained relative motion estimates are concatenated to form an initial path. Assuming that features such as buildings are visible from both ground-based and airborne views, this initial path is globally corrected with Monte Carlo Localization techniques using an aerial photograph or a Digital Surface Model as a global map. The second scanner is mounted vertically and is used to capture the 3D shape of the building facades. Applying a series of automated processing steps, a texture-mapped 3D facade model is reconstructed from the vertical laser scans and the camera images. In order to obtain an airborne model containing the roof and terrain shape complementary to the facade model, a Digital Surface Model is created from airborne laser scans, then triangulated, and finally texture-mapped with aerial imagery. Finally, the facade model and the airborne model are fused into one single model usable for both walk- and fly-throughs. The developed algorithms are evaluated on a large data set acquired in downtown Berkeley, and the results are shown and discussed.
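    A minimal sketch (my own notation, not the thesis code) of the dead-reckoning step described above: successive relative-motion estimates from horizontal scan matching are composed onto the current SE(2) pose to form the initial path, which the thesis then corrects globally with Monte Carlo Localization against an aerial map (omitted here).

```python
import math

def compose(pose, delta):
    """Compose a relative motion (dx, dy, dtheta), expressed in the current
    vehicle frame, onto an absolute SE(2) pose (x, y, theta)."""
    x, y, th = pose
    dx, dy, dth = delta
    return (x + dx * math.cos(th) - dy * math.sin(th),
            y + dx * math.sin(th) + dy * math.cos(th),
            th + dth)

def dead_reckon(increments, start=(0.0, 0.0, 0.0)):
    """Concatenate scan-matching increments into an initial (uncorrected) path."""
    path = [start]
    for delta in increments:
        path.append(compose(path[-1], delta))
    return path

# Toy example: drive 1 m forward three times while turning 10 degrees left each step.
increments = [(1.0, 0.0, math.radians(10))] * 3
for pose in dead_reckon(increments):
    print(f"x={pose[0]:.2f}  y={pose[1]:.2f}  theta={math.degrees(pose[2]):.1f} deg")
```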

    Robust navigation for industrial service robots

    Pla de Doctorats Industrials de la Generalitat de Catalunya (Industrial Doctorates Plan of the Government of Catalonia).
    Robust, reliable and safe navigation is one of the fundamental problems of robotics. Throughout the present thesis, we tackle the problem of navigation for robotic industrial mobile-bases. We identify its components and analyze their respective challenges in order to address them. The research work presented here ultimately aims at improving the overall quality of the navigation stack of a commercially available industrial mobile-base. To introduce and survey the overall problem, we first break down the navigation framework into clearly identified smaller problems. We examine the Simultaneous Localization and Mapping (SLAM) problem, recalling its mathematical grounding and exploring the state of the art. We then review the problem of planning the trajectory of a mobile-base toward a desired goal in the generated environment representation. Finally, we investigate and clarify the use of the subset of Lie theory that is useful in robotics. The first problem tackled is the recognition of places for closing loops in SLAM. Loop closure refers to the ability of a robot to recognize a previously visited location and infer geometrical information between its current and past locations. Using only a 2D laser range finder, a challenging setting given the sparse and limited perception such a sensor provides, we address the problem with a technique borrowed from the field of Natural Language Processing (NLP) that has been successfully applied to image-based place recognition, namely the Bag-of-Words. We further improve the method with two proposals inspired by NLP. Firstly, the comparison of places is strengthened by considering the natural relative order of features in each individual sensor reading. Secondly, topological correspondences between places in a corpus of visited places are established in order to promote together instances that are 'close' to one another. The proposals are evaluated separately and jointly on several data sets, with and without noise, demonstrating improved loop-closure detection for 2D laser sensors with respect to the state of the art. We then tackle the problem of motion model calibration for odometry estimation. Given a mobile-base embedding an exteroceptive sensor able to observe ego-motion, we propose a novel formulation for estimating the intrinsic parameters of an odometry motion model. Resorting to an adaptation of the pre-integration theory initially developed for inertial motion sensors, we employ iterative nonlinear on-manifold optimization to estimate the wheel radii and wheel separation. The method is further extended to jointly estimate both the intrinsic parameters of the odometry model and the extrinsic parameters of the embedded sensor. The method is validated in simulation and in a real environment, is shown to converge to the true parameter values, and accommodates variations in model parameters quickly when the vehicle is subject to physical changes during operation, such as the effect of load changes on wheel diameter. Following the generation of a map in which the robot is localized, we address the problem of estimating trajectories for motion planning. We devise a new method for estimating a sequence of robot poses forming a smooth trajectory. Regardless of the Lie group considered, the trajectory is seen as a collection of states lying on a spline with non-vanishing n-th derivatives at each point. Formulated as a multi-objective nonlinear optimization problem, it allows for the addition of cost functions such as velocity and acceleration limits, collision avoidance and more. The proposed method is evaluated on two different motion planning tasks: the planning of trajectories for a mobile-base evolving in the SE(2) manifold, and the planning of the motion of a multi-link robotic arm whose end-effector evolves in the SE(3) manifold. Each task is evaluated in scenarios of increasing complexity, showing performance comparable to or better than the state of the art while producing more consistent results.
    From our study of Lie theory, we developed a new, ready-to-use programming library called manif. The library is open source, publicly available, and developed following good software programming practices. It is designed to be easy to integrate and manipulate, and allows for flexible use while facilitating its extension beyond the already implemented Lie groups. The library is shown to be efficient compared with other existing solutions. Finally, we conclude the doctoral study, review the research work, outline directions for future research, and share a personal view of and experience with carrying out an industrial doctorate.
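    A toy sketch of the Bag-of-Words place-recognition idea used for loop closure (my own simplification; it reflects neither the thesis code nor the manif library, and it omits the feature-order and topological refinements described above): each laser scan is quantized into a histogram of visual "words" and candidate loop closures are ranked by TF-IDF-weighted cosine similarity.

```python
import numpy as np

def scan_to_bow(descriptors, codebook):
    """Quantize a 2D laser scan into a bag-of-words histogram.

    descriptors : (N, d) array of local descriptors extracted from the scan
                  (the descriptor choice is arbitrary in this sketch).
    codebook    : (K, d) array of word centroids (e.g. from k-means on training scans).
    """
    dists = np.linalg.norm(descriptors[:, None, :] - codebook[None, :, :], axis=2)
    words = dists.argmin(axis=1)                          # nearest word per descriptor
    return np.bincount(words, minlength=len(codebook)).astype(float)

def rank_loop_candidates(query_hist, database):
    """Rank previously visited places by TF-IDF weighted cosine similarity."""
    db = np.array(database, dtype=float)                  # (M, K) histograms
    idf = np.log((1.0 + len(db)) / (1.0 + (db > 0).sum(axis=0)))
    def tfidf(h):
        tf = h / max(h.sum(), 1.0)
        v = tf * idf
        return v / (np.linalg.norm(v) + 1e-12)
    q = tfidf(query_hist)
    scores = np.array([tfidf(h) @ q for h in db])
    return np.argsort(-scores), scores                    # best candidates first

# Toy usage with random descriptors and a random codebook.
rng = np.random.default_rng(0)
codebook = rng.normal(size=(32, 8))
database = [scan_to_bow(rng.normal(size=(60, 8)), codebook) for _ in range(5)]
query = scan_to_bow(rng.normal(size=(60, 8)), codebook)
order, scores = rank_loop_candidates(query, database)
print(order, scores.round(3))
```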

    An original framework for understanding human actions and body language by using deep neural networks

    The evolution of the fields of Computer Vision (CV) and Artificial Neural Networks (ANNs) has allowed the development of efficient automatic systems for the analysis of people's behaviour. By studying hand movements it is possible to recognize gestures, often used by people to communicate information in a non-verbal way. These gestures can also be used to control or interact with devices without physically touching them. In particular, sign language and semaphoric hand gestures are the two foremost areas of interest due to their importance in Human-Human Communication (HHC) and Human-Computer Interaction (HCI), respectively. The processing of body movements, in turn, plays a key role in the action recognition and affective computing fields. The former is essential to understand how people act in an environment, while the latter tries to interpret people's emotions based on their poses and movements; both are essential tasks in many computer vision applications, including event recognition and video surveillance. In this Ph.D. thesis, an original framework for understanding actions and body language is presented. The framework is composed of three main modules: in the first, a method based on Long Short-Term Memory Recurrent Neural Networks (LSTM-RNNs) is proposed for the recognition of sign language and semaphoric hand gestures; the second module presents a solution based on 2D skeletons and two-branch stacked LSTM-RNNs for action recognition in video sequences; finally, the last module provides a solution for basic non-acted emotion recognition using 3D skeletons and Deep Neural Networks (DNNs). The performance of LSTM-RNNs is explored in depth, due to their ability to model the long-term contextual information of temporal sequences, which makes them suitable for analysing body movements. All the modules were tested on challenging datasets that are well known in the state of the art, showing remarkable results compared to current literature methods.
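    For illustration, here is a PyTorch sketch of an LSTM classifier over sequences of 2D skeleton keypoints, the kind of model the second module builds on; the layer sizes, joint count, class count and single-branch design are placeholders of my own, not the thesis architecture.

```python
import torch
import torch.nn as nn

class SkeletonLSTM(nn.Module):
    """Classify a sequence of 2D skeleton poses with a stacked LSTM."""
    def __init__(self, num_joints=18, num_classes=10, hidden=128, layers=2):
        super().__init__()
        self.lstm = nn.LSTM(input_size=num_joints * 2,   # (x, y) per joint
                            hidden_size=hidden,
                            num_layers=layers,
                            batch_first=True)
        self.head = nn.Linear(hidden, num_classes)

    def forward(self, poses):
        # poses: (batch, time, num_joints, 2) -> flatten joints per frame
        b, t, j, c = poses.shape
        x = poses.reshape(b, t, j * c)
        out, _ = self.lstm(x)
        return self.head(out[:, -1])                     # logits from the last time step

# Toy forward pass on random skeleton sequences.
model = SkeletonLSTM()
clip = torch.randn(4, 30, 18, 2)                         # 4 clips, 30 frames, 18 joints
logits = model(clip)
print(logits.shape)                                      # torch.Size([4, 10])
```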

    Room layout estimation on mobile devices

    Room layout generation is the problem of generating a drawing or a digital model of an existing room from a set of measurements such as laser data or images. The generation of floor plans can find application in the building industry, to assess the quality and correctness of an ongoing construction with respect to the initial model or to quickly sketch the renovation of an apartment. The real estate industry can rely on the automatic generation of floor plans to ease the process of checking the livable surface and to propose virtual visits to prospective customers. As for the general public, the room layout can be integrated into mixed reality games to provide a more immersive experience, or used in other augmented reality applications such as room redecoration. The goal of this industrial thesis (CIFRE) is to investigate and take advantage of state-of-the-art mobile devices in order to automate the process of generating room layouts. Modern mobile devices usually come with a wide range of sensors, such as an inertial measurement unit (IMU), RGB cameras and, more recently, depth cameras. Moreover, tactile touchscreens offer a natural and simple way to interact with the user, thus favoring the development of interactive applications in which the user can be part of the processing loop. This work aims at exploiting the richness of such devices to address the room layout generation problem. The thesis has three major contributions. We first show how the classic problem of detecting vanishing points in an image can benefit from a prior given by the IMU sensor. We propose a simple and effective algorithm for detecting vanishing points relying on the gravity vector estimated by the IMU. A new public dataset containing images and the relevant IMU data is introduced to help assess vanishing point algorithms and foster further studies in the field. As a second contribution, we explore the state of the art of real-time localization and map optimization algorithms for RGB-D sensors. Real-time localization is a fundamental task for enabling augmented reality applications, and thus a critical component when designing interactive applications. We evaluate existing algorithms, designed for the common desktop set-up, with a view to employing them on a mobile device; for each considered method, we assess the accuracy of the localization as well as the computational performance when ported to a mobile device. Finally, we present a proof-of-concept application able to generate the room layout relying on a Project Tango tablet equipped with an RGB-D sensor. In particular, we propose an algorithm that incrementally processes and fuses the 3D data provided by the sensor in order to obtain the layout of the room. We show how our algorithm can rely on user interactions to correct the generated 3D model during the acquisition process.
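    A small sketch of the geometric idea behind the first contribution (my own simplification, not the thesis algorithm): with a calibrated camera, the vanishing point of gravity-aligned lines is the image of the IMU gravity direction expressed in the camera frame, and line segments can be scored by how well they point toward it. All names and tolerances below are illustrative.

```python
import numpy as np

def vertical_vanishing_point(K, gravity_cam):
    """Vanishing point of gravity-aligned lines: the image of the gravity direction.

    K           : 3x3 camera intrinsics.
    gravity_cam : gravity direction from the IMU, rotated into the camera frame.
    Returns homogeneous image coordinates (at infinity if gravity is parallel
    to the image plane).
    """
    return K @ np.asarray(gravity_cam, dtype=float)

def is_vertical_segment(p, q, vp, angle_tol_deg=3.0):
    """Check whether segment (p, q) points toward a finite vertical vanishing point."""
    p_h, q_h = np.array([*p, 1.0]), np.array([*q, 1.0])
    line = np.cross(p_h, q_h)                     # homogeneous line through the segment
    line = line / np.linalg.norm(line[:2])        # unit-normalize the line normal
    vp = vp / vp[2]                               # assumes a finite vanishing point
    mid = 0.5 * (p_h + q_h)
    # Perpendicular distance from the VP to the segment's supporting line,
    # converted to an angular deviation as seen from the segment midpoint.
    offset = abs(line @ vp)
    angle = np.degrees(np.arctan2(offset, np.linalg.norm(vp[:2] - mid[:2])))
    return angle < angle_tol_deg

# Toy example: camera looking roughly horizontally, gravity mostly along +y.
K = np.array([[600.0, 0.0, 320.0], [0.0, 600.0, 240.0], [0.0, 0.0, 1.0]])
vp = vertical_vanishing_point(K, [0.0, 1.0, 0.05])     # slightly tilted camera
print(is_vertical_segment((100, 50), (102, 300), vp))  # near-vertical segment -> True
```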