10 research outputs found

    Door and window detection in 3D point cloud of indoor scenes.

    This paper proposes a 3D-2D-3D algorithm for door and window detection in 3D indoor point cloud environments. First, a virtual camera is set up in the middle of the 3D environment and rotated to capture a set of pictures from different angles, generating corresponding 2D images. Next, these images are used to detect and identify the positions of doors and windows in the space. The 2D detections are then mapped back to the original 3D point cloud to obtain point cloud data containing the door and window position information. Finally, the door and window features derived from this position information are refined by processing contour lines and their crossing points. The experimental results show that this "global-local" approach is efficient at detecting and identifying the locations of doors and windows in 3D point cloud environments.
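    As a rough illustration of the 3D-2D-3D idea, the sketch below projects a point cloud through a virtual pinhole camera while keeping point indices, so a 2D detection box can be mapped back to the original 3D points. The intrinsics, the random cloud, and the hard-coded box stand in for the paper's virtual camera and 2D detector; this is a minimal sketch, not the authors' implementation.

        # Project camera-frame 3D points to pixels, then recover the 3D points
        # covered by a 2D detection box via the preserved indices.
        import numpy as np

        def project_to_image(points, fx=500.0, fy=500.0, cx=320.0, cy=240.0):
            """Project Nx3 camera-frame points to pixel coords, keeping indices."""
            in_front = points[:, 2] > 0.1          # discard points behind the camera
            idx = np.nonzero(in_front)[0]
            p = points[idx]
            u = fx * p[:, 0] / p[:, 2] + cx
            v = fy * p[:, 1] / p[:, 2] + cy
            return np.stack([u, v], axis=1), idx   # pixels + original point indices

        def backproject_box(pixels, idx, box):
            """Indices of original 3D points whose projection falls inside a 2D box."""
            u0, v0, u1, v1 = box
            inside = ((pixels[:, 0] >= u0) & (pixels[:, 0] <= u1) &
                      (pixels[:, 1] >= v0) & (pixels[:, 1] <= v1))
            return idx[inside]

        cloud = np.random.rand(10000, 3) * [4, 3, 5]   # stand-in for a scanned room
        pixels, idx = project_to_image(cloud)
        door_points = cloud[backproject_box(pixels, idx, box=(200, 100, 400, 460))]

    The paper's rotating virtual camera would correspond to multiplying the points by a rotation matrix before each projection, giving one image per viewing angle.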

    Door Detection in 3D Colored Laser Scans for Autonomous Indoor Navigation

    Deep learning model for doors detection: a contribution for context awareness recognition of patients with Parkinson’s disease

    Freezing of gait (FoG) is one of the most disabling motor symptoms in Parkinson’s disease (PD). It is described as a symptom in which walking is interrupted by a brief, episodic absence, or marked reduction, of forward progression despite the intention to continue walking. Although the causes of FoG are multifaceted, episodes often occur in response to environmental triggers, such as turning or passing through narrow spaces like a doorway. The symptom appears to be overcome with external sensory cues, so recognizing such environments has become a pertinent issue for the PD-affected community. This study aimed to implement a real-time deep learning (DL) based door detection model to be integrated into a wearable biofeedback device for delivering on-demand proprioceptive cues. Transfer learning was used to train a MobileNet-SSD model in the TensorFlow environment. The model was then converted into a faster, lighter model with TensorFlow Lite and integrated into a Raspberry Pi. The model achieved a considerable precision of 97.2%, a recall of 78.9%, and a good F1-score of 0.869. In real-time testing with the wearable device, the DL model proved fast enough (~2.87 fps) to detect doors accurately in real-life scenarios. Future work will integrate sensory cueing with the developed model in the wearable biofeedback device, aiming to validate the final solution with end-users.
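    The deployment step described above (a TensorFlow model converted with TensorFlow Lite and run on a Raspberry Pi) follows a standard pattern; a minimal sketch is shown below, assuming a placeholder SavedModel directory door_detector/ rather than the authors' trained MobileNet-SSD.

        import numpy as np
        import tensorflow as tf

        # 1) Convert once, on the development machine.
        converter = tf.lite.TFLiteConverter.from_saved_model("door_detector/")
        converter.optimizations = [tf.lite.Optimize.DEFAULT]   # weight quantization
        with open("door_detector.tflite", "wb") as f:
            f.write(converter.convert())

        # 2) Run on-device with the lightweight TFLite interpreter.
        interpreter = tf.lite.Interpreter(model_path="door_detector.tflite")
        interpreter.allocate_tensors()
        inp = interpreter.get_input_details()[0]
        out = interpreter.get_output_details()[0]

        frame = np.zeros(inp["shape"], dtype=inp["dtype"])     # stand-in camera frame
        interpreter.set_tensor(inp["index"], frame)
        interpreter.invoke()
        detections = interpreter.get_tensor(out["index"])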

    Learning from demonstration for locally assistive mobility aids

    © 2019, The Author(s). Active assistive systems for mobility aids are largely restricted to environments mapped a priori, while passive assistance primarily provides collision mitigation and other hand-crafted behaviors in the platform’s immediate space. This paper presents a framework providing active short-term assistance, combining the freedom of location independence with the intelligence of active assistance. Demonstration data consisting of on-board sensor readings and driving inputs is gathered from an able-bodied expert maneuvering the mobility aid around a generic interior setting, and is used to construct a probabilistic intention model built with Radial Basis Function Networks. This allows short-term intention prediction relying only upon immediately available user input and on-board sensor data, coupled with real-time path generation from the same expert demonstration data via Dynamic Policy Programming, a stochastic optimal control method. Together, these two elements provide a combined assistive mobility system capable of operating in restrictive environments without the need for additional obstacle-avoidance protocols. Experimental results, both in simulation and on the University of Technology Sydney semi-autonomous wheelchair in settings not seen in the training data, show promise in assisting users of power mobility aids.
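    A Radial Basis Function Network of the kind mentioned above can be sketched in a few lines: Gaussian activations over prototype states drawn from demonstrations, with a linear readout fitted by least squares. The state layout and toy data below are illustrative assumptions, not the authors' actual model.

        import numpy as np

        class RBFN:
            def __init__(self, centers, width):
                self.centers = centers            # (K, d) prototype states from demos
                self.beta = 1.0 / (2 * width**2)
                self.weights = None

            def _phi(self, X):
                d2 = ((X[:, None, :] - self.centers[None, :, :]) ** 2).sum(-1)
                return np.exp(-self.beta * d2)    # (N, K) Gaussian activations

            def fit(self, X, y):
                self.weights, *_ = np.linalg.lstsq(self._phi(X), y, rcond=None)

            def predict(self, X):
                return self._phi(X) @ self.weights

        # Hypothetical state: [joystick_x, joystick_y, range_left, range_right]
        rng = np.random.default_rng(0)
        X, y = rng.random((500, 4)), rng.random((500, 2))   # y: short-term goal offsets
        net = RBFN(centers=X[rng.choice(500, 40, replace=False)], width=0.3)
        net.fit(X, y)
        print(net.predict(X[:3]))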

    A Doorway Detection and Direction (3Ds) System for Social Robots via a Monocular Camera

    In this paper, we propose a novel algorithm to detect a door and its orientation in indoor settings from the view of a social robot equipped with only a monocular camera. The challenge is to achieve this goal with only a 2D image from a monocular camera. The proposed system is designed through the integration of several modules, each of which serves a special purpose. Door detection is addressed by training a convolutional neural network (CNN) model on a new dataset for Social Robot Indoor Navigation (SRIN). The direction of the door (from the robot’s observation) is obtained by three other modules: a Depth module, a Pixel-Selection module, and a Pixel2Angle module. We include simulation results and real-time experiments to demonstrate the performance of the algorithm. The outcome of this study could be beneficial to any robotic navigation system for indoor environments.
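    The Pixel2Angle step admits a simple pinhole-model reading: a selected pixel column is converted to a bearing relative to the camera's optical axis. The sketch below assumes a 640-pixel-wide image and a placeholder horizontal field of view; the paper's actual module may differ.

        import math

        def pixel_to_angle(u, image_width=640, hfov_deg=62.0):
            """Bearing (deg) of pixel column u; positive means right of center."""
            fx = (image_width / 2) / math.tan(math.radians(hfov_deg) / 2)
            return math.degrees(math.atan((u - image_width / 2) / fx))

        print(pixel_to_angle(480))   # door centered at column 480 -> ~16.7 deg right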

    3D Perception Based Lifelong Navigation of Service Robots in Dynamic Environments

    Lifelong navigation of mobile robots is the ability to reliably operate over extended periods of time in dynamically changing environments. Historically, computational capacity and sensor capability have constrained the richness of the internal representation of the environment that a mobile robot could use for navigation tasks. With affordable contemporary sensing technology that provides rich 3D information about the environment, and with increased computational power, we can make use of more semantic environmental information in navigation-related tasks. A navigation system has many subsystems that must operate in real time while competing for computational resources, such as the perception, localization, and path planning systems. The main thesis proposed in this work is that we can utilize 3D information from the environment to increase navigational robustness without making trade-offs in any of the real-time subsystems. To support these claims, this dissertation presents robust, real-world 3D-perception-based navigation systems in the domains of indoor doorway detection and traversal, sidewalk-level outdoor navigation in urban environments, and global localization in large-scale indoor warehouse environments. The discussion of these systems includes methods of 3D point cloud based object detection that find the objects of semantic interest for the given navigation tasks, as well as the use of 3D information in the navigation systems for purposes such as localization and dynamic obstacle avoidance. Experimental results for each of these applications demonstrate the effectiveness of the techniques for robust long-term autonomous operation.
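    As a loose illustration of 3D point cloud based object detection for doorways, the sketch below removes points near a known wall plane, clusters the remainder, and filters clusters by door-like dimensions. The plane input, the use of scikit-learn's DBSCAN, the z-up axis convention, and all thresholds are assumptions for illustration, not the dissertation's method.

        import numpy as np
        from sklearn.cluster import DBSCAN

        def door_candidates(points, plane_normal, plane_d, dist_tol=0.05):
            # 1) Drop points lying on the wall plane n.x + d = 0.
            dist = np.abs(points @ plane_normal + plane_d)
            off_plane = points[dist > dist_tol]

            # 2) Euclidean clustering of what remains.
            labels = DBSCAN(eps=0.1, min_samples=20).fit_predict(off_plane)

            # 3) Keep clusters whose bounding box matches door-like dimensions.
            candidates = []
            for k in set(labels) - {-1}:                       # -1 = DBSCAN noise
                cluster = off_plane[labels == k]
                size = cluster.max(axis=0) - cluster.min(axis=0)
                if 0.6 < size[0] < 1.2 and 1.8 < size[2] < 2.2:   # width, height (m)
                    candidates.append(cluster)
            return candidates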

    Computer Vision Algorithms for Mobile Camera Applications

    Wearable and mobile sensors have found widespread use in recent years due to their ever-decreasing cost, ease of deployment and use, and ability to provide continuous monitoring, as opposed to sensors installed at fixed locations. Since many smart phones are now equipped with a variety of sensors, including accelerometer, gyroscope, magnetometer, microphone and camera, it has become more feasible to develop algorithms for activity monitoring, guidance and navigation of unmanned vehicles, autonomous driving and driver assistance, by using data from one or more of these sensors. In this thesis, we focus on multiple mobile camera applications, and present lightweight algorithms suitable for embedded mobile platforms. The mobile camera scenarios presented in the thesis are: (i) activity detection and step counting from wearable cameras, (ii) door detection for indoor navigation of unmanned vehicles, and (iii) traffic sign detection from vehicle-mounted cameras. First, we present a fall detection and activity classification system developed for the embedded smart camera platform CITRIC. In our system, the camera platform is worn by the subject, as opposed to static sensors installed at fixed locations in certain rooms; therefore, monitoring is not limited to confined areas, and extends to wherever the subject may travel, including indoors and outdoors. Next, we present a real-time smart phone-based fall detection system, wherein we implement camera and accelerometer based fall detection on a Samsung Galaxy S™ 4. We fuse these two sensor modalities to obtain a more robust fall detection system. Then, we introduce a fall detection algorithm with autonomous thresholding using relative entropy within the class of Ali-Silvey distance measures. As another wearable camera application, we present a footstep counting algorithm using a smart phone camera. This algorithm provides a more accurate step count than using only accelerometer data in smart phones and smart watches at various body locations. As a second mobile camera scenario, we study autonomous indoor navigation of unmanned vehicles. A novel approach is proposed to autonomously detect and verify doorway openings by using the Google Project Tango™ platform. The third mobile camera scenario involves vehicle-mounted cameras. More specifically, we focus on traffic sign detection from lower-resolution and noisy videos captured by vehicle-mounted cameras. We present a new method for accurate traffic sign detection, incorporating Aggregate Channel Features and Chain Code Histograms, with the goal of providing much faster training and testing, and comparable or better performance, with respect to deep neural network approaches, without requiring specialized processors. The proposed computer vision algorithms provide promising results for various useful applications despite the limited energy and processing capabilities of mobile devices.
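    Of the features named above, the Chain Code Histogram is easy to make concrete: walk a closed contour and histogram the eight step directions. The sketch below uses OpenCV only to extract the contour; the synthetic rectangle is a stand-in for a segmented sign, and this is one common CCH formulation rather than the thesis's exact feature pipeline.

        import cv2
        import numpy as np

        # Map a (dx, dy) step between neighboring contour points to a direction 0-7.
        DIRS = {(1, 0): 0, (1, -1): 1, (0, -1): 2, (-1, -1): 3,
                (-1, 0): 4, (-1, 1): 5, (0, 1): 6, (1, 1): 7}

        def chain_code_histogram(binary_image):
            contours, _ = cv2.findContours(binary_image, cv2.RETR_EXTERNAL,
                                           cv2.CHAIN_APPROX_NONE)
            contour = max(contours, key=cv2.contourArea).reshape(-1, 2)
            hist = np.zeros(8)
            for p, q in zip(contour, np.roll(contour, -1, axis=0)):
                step = (int(np.sign(q[0] - p[0])), int(np.sign(q[1] - p[1])))
                if step in DIRS:
                    hist[DIRS[step]] += 1
            return hist / max(hist.sum(), 1)      # normalized 8-bin CCH descriptor

        img = np.zeros((100, 100), np.uint8)
        cv2.rectangle(img, (20, 30), (80, 70), 255, -1)   # filled stand-in "sign"
        print(chain_code_histogram(img))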