    Real-Time Obstacle Detection System in Indoor Environment for the Visually Impaired Using Microsoft Kinect Sensor

    Any mobility aid for the visually impaired should be able to accurately detect and warn about nearby obstacles. In this paper, we present a method for a support system that detects obstacles in indoor environments using a Kinect sensor and 3D image processing. Color-depth data of the scene in front of the user is collected by the Kinect with the support of OpenNI, the standard framework for 3D sensing, and processed with the PCL library to extract accurate 3D information about the obstacles. Experiments were performed on a dataset covering multiple indoor scenarios and different lighting conditions. Results show that our system accurately detects the four types of obstacle: walls, doors, stairs, and a residual class that covers loose obstacles on the floor. Specifically, walls and loose obstacles on the floor are detected in practically all cases, whereas doors are detected in 90.69% of 43 positive image samples. For stair detection, upstairs cases are correctly detected in 97.33% of 75 positive images, while the downstairs detection rate is lower, at 89.47% of 38 positive images. Our method further allows the computation of the distance between the user and the obstacles.
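
    The paper's pipeline relies on OpenNI and PCL; as a rough illustration of the underlying idea rather than the authors' implementation, the following Python sketch fits the dominant floor plane with RANSAC on a synthetic point cloud, treats off-plane points as loose obstacles, and reports the distance to the nearest one. The scene, thresholds, and iteration count are all assumptions.

        # Illustrative sketch (not the paper's PCL pipeline): fit the floor
        # plane with RANSAC, call everything off the plane an obstacle, and
        # report the distance from the sensor origin to the nearest point.
        import numpy as np

        def ransac_plane(points, iters=200, threshold=0.03, seed=0):
            """Fit a dominant plane n.x + d = 0; return the inlier mask."""
            rng = np.random.default_rng(seed)
            best_mask = None
            for _ in range(iters):
                sample = points[rng.choice(len(points), 3, replace=False)]
                normal = np.cross(sample[1] - sample[0], sample[2] - sample[0])
                if np.linalg.norm(normal) < 1e-9:
                    continue                      # degenerate (collinear) sample
                normal = normal / np.linalg.norm(normal)
                d = -normal.dot(sample[0])
                mask = np.abs(points @ normal + d) < threshold
                if best_mask is None or mask.sum() > best_mask.sum():
                    best_mask = mask
            return best_mask

        # Synthetic scene: a flat floor plus a box-shaped obstacle ~1.5 m away.
        rng = np.random.default_rng(1)
        floor = np.column_stack([rng.uniform(0, 4, 2000),
                                 rng.uniform(-2, 2, 2000),
                                 rng.normal(0, 0.01, 2000)])
        box = np.column_stack([rng.uniform(1.5, 1.8, 300),
                               rng.uniform(-0.2, 0.2, 300),
                               rng.uniform(0.05, 0.4, 300)])
        cloud = np.vstack([floor, box])

        floor_mask = ransac_plane(cloud)
        obstacles = cloud[~floor_mask]            # everything off the floor plane
        print("nearest obstacle at %.2f m" % np.linalg.norm(obstacles, axis=1).min())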

    Visual-Inertial Sensor Fusion Models and Algorithms for Context-Aware Indoor Navigation

    Positioning in navigation systems is predominantly performed by Global Navigation Satellite Systems (GNSSs). However, while GNSS-enabled devices have become commonplace for outdoor navigation, their use indoors is hindered by GNSS signal degradation or blockage. Development of alternative positioning approaches and techniques for navigation systems is therefore an ongoing research topic. In this dissertation, I present a new approach that addresses three major navigational problems: indoor positioning, obstacle detection, and keyframe detection. The proposed approach utilizes the inertial and visual sensors available on smartphones and focuses on developing: a framework for monocular visual-inertial odometry (VIO) that positions a human or object using sensor fusion and deep learning in tandem; an unsupervised algorithm that detects obstacles from a sequence of visual data; and a supervised, context-aware keyframe detector. The underlying technique for monocular VIO is a recurrent convolutional neural network that computes six-degree-of-freedom (6DoF) pose in an end-to-end fashion, combined with an extended Kalman filter module that fine-tunes the scale parameter based on inertial observations and manages errors. I compare the results of my featureless technique with those of conventional feature-based VIO techniques and with manually scaled results. The comparison shows that while the framework is effective and improves accuracy, feature-based methods still outperform the proposed approach. The approach for obstacle detection processes two consecutive images to detect obstacles. Experiments comparing my approach with two other widely used algorithms show that my algorithm performs better, with 82% precision compared to 69%. To determine a suitable frame-extraction rate from the video stream, I analyzed the camera's movement patterns and inferred the user's context to build a model that associates movement anomalies with an appropriate frame-extraction rate. The output of this model determines the rate of keyframe extraction in visual odometry (VO). I defined and computed the effective frames for VO and used this approach for context-aware keyframe detection. The results show that the number of frames required is decreased when inertial data is used to infer the relevant frames.
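
    The dissertation's EKF details are not given here; the following Python sketch only illustrates the scale fine-tuning idea under assumed noise parameters: a scalar Kalman filter tracks the metric scale, with each measurement taken as the ratio of inertially integrated displacement to the up-to-scale displacement reported by monocular VO.

        # Hedged sketch of scale correction for monocular VIO (a minimal
        # scalar Kalman filter; the dissertation's EKF is not reproduced).
        import numpy as np

        class ScaleFilter:
            def __init__(self, s0=1.0, p0=1.0, q=1e-4, r=0.05):
                self.s, self.p = s0, p0   # scale estimate and its variance
                self.q, self.r = q, r     # process and measurement noise

            def update(self, inertial_disp, visual_disp):
                self.p += self.q                            # predict: scale ~ constant
                z = inertial_disp / max(visual_disp, 1e-9)  # observed scale ratio
                k = self.p / (self.p + self.r)              # Kalman gain
                self.s += k * (z - self.s)
                self.p *= (1.0 - k)
                return self.s

        f = ScaleFilter()
        rng = np.random.default_rng(0)
        true_scale = 2.5
        for _ in range(50):
            visual = rng.uniform(0.1, 0.5)                      # up-to-scale VO step
            inertial = true_scale * visual + rng.normal(0, 0.02)
            s = f.update(inertial, visual)
        print("estimated scale: %.2f (true %.1f)" % (s, true_scale))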

    Ambient awareness on a sidewalk for visually impaired

    Safe navigation by avoiding obstacles is vital for the visually impaired while walking on a sidewalk. There are both static and dynamic obstacles to avoid, and detecting, monitoring, and estimating the threat they pose remain challenging. It is also imperative that the system be energy efficient and low cost. An additional challenge in designing an interactive system capable of providing useful feedback is minimizing the user's cognitive load. We began development of the prototype system by classifying obstacles and providing feedback. To overcome the limitations of the classification-based system, we adopted an image-annotation framework for describing the scene, which may or may not include obstacles. Both solutions partially solved safe navigation but proved ineffective at providing meaningful feedback and suffered from issues with the diurnal cycle. To address these limitations, we introduce the notions of free-path and of the threat level imposed by static or dynamic obstacles. This solution reduced the overhead of obstacle detection and helped in designing meaningful feedback. Affording users a natural conversation through an interactive, dialog-enabled interface was found to promote safer navigation. In this dissertation, we model the free-path and threat level using a reinforcement learning (RL) framework. We built the RL model in the Gazebo robot simulation environment and deployed it on a handheld device. A natural conversation model was created using data collected through a Wizard of Oz approach. Together, the RL model and the conversational agent resulted in a handheld assistive device called the Augmented Guiding Torch (AGT). The AGT improves mobility over the white cane by providing ambient awareness through natural conversation. It can inform the visually impaired about obstacles that are helpful to be warned about ahead of time, e.g., a construction site, scooter, crowd, car, bike, or large hole. Using the RL framework, the robot avoided over 95% of obstacles. Visually impaired users avoided over 85% of obstacles with the help of the AGT on a 500-foot U-shaped sidewalk. The findings of this dissertation support the effectiveness of augmented guiding through RL for navigation and obstacle avoidance by visually impaired users.
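
    The AGT's RL model was trained in Gazebo; the toy Python sketch below only illustrates how a free-path policy can be learned against per-cell threat levels with tabular Q-learning. The grid, rewards, and hyperparameters are invented for the example.

        # Toy Q-learning sketch of the free-path / threat-level idea (not the
        # AGT model): learn which sidewalk lane to take at each step so that
        # high-threat cells are avoided while still making forward progress.
        import numpy as np

        WIDTH, LENGTH = 3, 20                 # 3-lane sidewalk, 20 steps long
        rng = np.random.default_rng(2)
        threat = rng.choice([0.0, 1.0], size=(LENGTH, WIDTH), p=[0.8, 0.2])
        ACTIONS = (-1, 0, 1)                  # shift left, keep lane, shift right
        Q = np.zeros((LENGTH, WIDTH, len(ACTIONS)))

        for episode in range(2000):
            lane = 1
            for row in range(LENGTH - 1):
                # epsilon-greedy action selection
                a = rng.integers(3) if rng.random() < 0.1 else int(Q[row, lane].argmax())
                nxt = min(max(lane + ACTIONS[a], 0), WIDTH - 1)
                reward = -10.0 * threat[row + 1, nxt] + 1.0   # penalize threats
                Q[row, lane, a] += 0.1 * (reward + 0.9 * Q[row + 1, nxt].max()
                                          - Q[row, lane, a])
                lane = nxt

        # Greedy rollout: the learned free path through the threat map.
        lane, path = 1, [1]
        for row in range(LENGTH - 1):
            lane = min(max(lane + ACTIONS[int(Q[row, lane].argmax())], 0), WIDTH - 1)
            path.append(lane)
        print("chosen lanes:", path)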

    Fusion of Data from Heterogeneous Sensors with Distributed Fields of View and Situation Evaluation for Advanced Driver Assistance Systems

    In order to develop a driver assistance system for pedestrian protection, pedestrians in the environment of a truck are detected by radars and a camera and are tracked across distributed fields of view using a Joint Integrated Probabilistic Data Association filter. A robust approach for predicting the system vehicle's trajectory is presented. It serves the computation of a probabilistic collision risk based on reachable sets, where different sources of uncertainty are taken into account.
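
    The paper's exact formulation is not reproduced here; the Python sketch below illustrates one common way to turn reachable sets into a probabilistic collision risk: sample vehicle and pedestrian trajectories under velocity uncertainty and take the risk as the fraction of samples in which the two come within a safety radius. All scenario numbers are assumptions.

        # Monte Carlo sketch of a reachable-set collision risk (illustrative,
        # not the paper's method): propagate sampled trajectories and count
        # how often vehicle and pedestrian violate a safety radius.
        import numpy as np

        rng = np.random.default_rng(3)
        N, horizon, dt, safety = 5000, 3.0, 0.1, 1.0   # samples, s, s, m
        steps = int(horizon / dt)

        veh_v = rng.normal([10.0, 0.0], [0.5, 0.1], size=(N, 2))  # m/s, along x
        ped_v = rng.normal([0.0, -1.4], [0.2, 0.3], size=(N, 2))  # m/s, crossing
        veh_p = np.zeros((N, 2))
        ped_p = np.tile([25.0, 4.0], (N, 1))                      # ahead-left

        hit = np.zeros(N, dtype=bool)
        for _ in range(steps):
            veh_p += veh_v * dt
            ped_p += ped_v * dt
            hit |= np.linalg.norm(veh_p - ped_p, axis=1) < safety

        print("collision risk over %.1f s: %.3f" % (horizon, hit.mean()))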

    Body-relative navigation guidance using uncalibrated cameras

    Thesis (Ph.D.), Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2010. Cataloged from the PDF version of the thesis. Includes bibliographical references (p. 89-97) and index. By Olivier Koch.
    The ability to navigate through the world is an essential human capability. In a variety of situations, people do not have the time, the opportunity, or the capability to learn the layout of the environment before visiting an area. Examples include soldiers entering an unknown building, firefighters responding to an emergency, or a visually impaired person walking through a city. In the absence of an external source of localization (such as GPS), the system must rely on internal sensing to provide navigation guidance to the user. To address real-world situations, the method must provide spatially extended, temporally consistent navigation guidance through cluttered and dynamic environments. While recent research has largely focused on metric methods based on calibrated cameras, the work presented in this thesis demonstrates a novel approach to navigation using uncalibrated cameras. During the first visit to the environment, the method builds a topological representation of the user's exploration path, which we refer to as the place graph. The method then provides navigation guidance from any place to any other in the explored environment. On one hand, a localization algorithm determines the location of the user in the graph; on the other, a rotation guidance algorithm provides a directional cue towards the next graph node in the user's body frame. Our method makes few assumptions about the environment except that it contains descriptive visual features. It requires no intrinsic or extrinsic camera calibration, relying instead on a method that learns the correlation between user rotation and feature correspondence across cameras. We validate our approach using several ground-truth datasets. In addition, we show that our approach is capable of guiding a robot equipped with a local obstacle-avoidance capability through real, cluttered environments. Finally, we validate our system with nine untrained users through several kilometers of indoor environments.
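
    As a minimal illustration of the place-graph idea (node names and topology are invented, and the thesis's rotation guidance is omitted), the Python sketch below stores the exploration path as an undirected graph and returns the next node on the shortest path from the user's current place to the goal.

        # Toy place graph: BFS gives the next place to head for, which a
        # rotation-guidance module would then turn into a body-frame cue.
        from collections import deque

        graph = {                     # undirected place graph from one exploration
            "entrance": ["hallway"],
            "hallway": ["entrance", "lab", "stairs"],
            "lab": ["hallway"],
            "stairs": ["hallway", "office"],
            "office": ["stairs"],
        }

        def next_place(graph, start, goal):
            """BFS over the place graph; return the node to head for next."""
            if start == goal:
                return start
            parent, frontier = {start: None}, deque([start])
            while frontier:
                node = frontier.popleft()
                if node == goal:
                    while parent[node] != start:   # walk back to the first hop
                        node = parent[node]
                    return node
                for nb in graph[node]:
                    if nb not in parent:
                        parent[nb] = node
                        frontier.append(nb)
            return None

        print(next_place(graph, "entrance", "office"))   # -> hallway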

    Mobile Robots Navigation

    Mobile robot navigation includes several interrelated activities: (i) perception, obtaining and interpreting sensory information; (ii) exploration, the strategy that guides the robot in selecting the next direction to go; (iii) mapping, constructing a spatial representation from the sensory information perceived; (iv) localization, estimating the robot's position within the spatial map; (v) path planning, finding a path towards a goal location, optimal or not; and (vi) path execution, where motor actions are determined and adapted to environmental changes. The book addresses these activities by integrating results from the research of several authors around the world. Research cases are documented in 32 chapters organized into 7 categories, described next.
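
    As a schematic illustration (not taken from the book), the self-contained Python toy below shows how these activities can compose into a single loop on a grid world: the robot senses adjacent cells, grows an occupancy map, optimistically plans over unknown space with BFS, and executes one step at a time.

        # Grid-world toy composing the activities into one navigation loop.
        from collections import deque

        WORLD = ["#########",
                 "#.......#",
                 "#.###.#.#",
                 "#...#.#.#",
                 "###.#...#",
                 "#########"]
        START, GOAL = (1, 1), (4, 7)

        def neighbors(cell):
            r, c = cell
            return [(r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)]

        def plan(known, start, goal):
            """(v) path planning: BFS over cells believed free or unknown."""
            parent, frontier = {start: None}, deque([start])
            while frontier:
                cell = frontier.popleft()
                if cell == goal:
                    path = []
                    while cell != start:
                        path.append(cell)
                        cell = parent[cell]
                    return path[::-1]
                for nb in neighbors(cell):
                    if nb not in parent and known.get(nb, ".") != "#":
                        parent[nb] = cell
                        frontier.append(nb)
            return []

        known, pose = {START: "."}, START       # (iii) map starts almost empty
        while pose != GOAL:
            for nb in neighbors(pose):          # (i) perception: sense adjacent cells
                known[nb] = WORLD[nb[0]][nb[1]]
            path = plan(known, pose, GOAL)      # (ii)+(v): unknown space is explored
            pose = path[0]                      # (vi) path execution: take one step
            # (iv) localization is trivial here: motion is assumed noise-free.
        print("reached", pose)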

    Contributions to autonomous robust navigation of mobile robots in industrial applications

    151 p. One aspect in which current mobile platforms lag behind the level already reached in industry is precision. The fourth industrial revolution brought with it the deployment of machinery in most industrial processes, and one of its strengths is repeatability. Autonomous mobile robots, which offer the greatest flexibility, lack this capability, mainly due to the noise inherent in sensor readings and the dynamism of most environments. For this reason, much of this work focuses on quantifying the error committed by the main mapping and localization methods for mobile robots, offering several alternatives for improving positioning. Moreover, the main sources of information with which mobile robots perform the functions described are exteroceptive sensors, which measure the environment rather than the state of the robot itself. For this same reason, some methods depend heavily on the scenario in which they were developed and do not achieve the same results when it varies. Most mobile platforms generate a map representing their surrounding environment and base many of their calculations on it to perform actions such as navigation. This map generation is a process that requires human intervention in most cases and has a large impact on the robot's subsequent operation. In the final part of this work, a method is proposed that aims to optimize this step in order to generate a richer model of the environment without requiring additional time.
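
    As a minimal illustration of quantifying localization error (the thesis's own metrics and datasets are not reproduced), the Python sketch below computes an absolute-trajectory-error RMSE between an estimated path and ground truth after a crude origin alignment.

        # Hedged sketch: absolute trajectory error (ATE) as the RMSE between
        # estimated and ground-truth positions sampled at the same timestamps.
        import numpy as np

        def ate_rmse(estimated, ground_truth):
            """Both arrays are (N, 2) positions; align by shifting to origin."""
            est = estimated - estimated[0]
            gt = ground_truth - ground_truth[0]
            return np.sqrt(np.mean(np.sum((est - gt) ** 2, axis=1)))

        t = np.linspace(0, 10, 200)
        gt = np.column_stack([t, np.sin(t)])                         # true path
        rng = np.random.default_rng(4)
        est = gt + np.cumsum(rng.normal(0, 0.01, gt.shape), axis=0)  # drifting estimate
        print("ATE RMSE: %.3f m" % ate_rmse(est, gt))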

    Recognition of Traffic Situations based on Conceptual Graphs

    This work investigates the suitability of conceptual graphs for situation recognition. The scene graph is created in the form of a conceptual graph according to the concept type hierarchy, relation type hierarchy, rules, and constraints, using previously obtained information about objects and lanes. The graphs are then matched, using projection, against the query conceptual graph, which represents the situation. The functionality of the model is demonstrated on real traffic situations.
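
    As a simplified illustration of projection (exact type equality stands in for the concept type hierarchy that the paper uses), the Python sketch below binds the nodes of a query conceptual graph to type-consistent scene nodes such that every query relation is present in the scene graph. The node names and relations are invented for the example.

        # Toy projection as brute-force subgraph matching: a query situation
        # is recognized if its relation triples can be bound consistently to
        # scene-graph nodes of matching types.
        from itertools import permutations

        scene_types = {"car1": "Car", "ped1": "Pedestrian", "lane1": "Lane"}
        scene_rels = {("car1", "on", "lane1"), ("ped1", "crossing", "lane1")}

        query_types = {"c": "Car", "p": "Pedestrian", "l": "Lane"}
        query_rels = {("c", "on", "l"), ("p", "crossing", "l")}

        def project(q_types, q_rels, s_types, s_rels):
            """Try every type-consistent binding of query nodes to scene nodes."""
            q_nodes, s_nodes = list(q_types), list(s_types)
            for perm in permutations(s_nodes, len(q_nodes)):
                binding = dict(zip(q_nodes, perm))
                if all(s_types[binding[q]] == q_types[q] for q in q_nodes) and \
                   all((binding[a], r, binding[b]) in s_rels for a, r, b in q_rels):
                    return binding
            return None

        print(project(query_types, query_rels, scene_types, scene_rels))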