
    Simple Monocular door detection and tracking

    When considering indoor navigation without any prior knowledge of the environment, extracting relevant landmarks remains an open issue for robot localization and navigation. In this paper, we consider indoor navigation along corridors. In such environments, when using monocular cameras, doors can be seen as important landmarks. In this context, we present a new framework for door detection and tracking that exploits the geometrical features of corridors. Since real-time performance is required for navigation, designing solutions with low computational complexity remains a relevant issue. The proposed algorithm relies on visual features such as lines and vanishing points, which are combined to discriminate the floor and wall planes and then to recognize doors within the image sequences. Detected doors are used to initialize a dedicated edge-based 2D door tracker. Experiments show that the framework detects 82% of doors on our dataset while respecting real-time constraints.
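The line-and-vanishing-point step described above can be sketched as follows. The segment representation and the median-of-intersections estimator are illustrative assumptions for this sketch, not the paper's actual algorithm:

```python
from itertools import combinations
from statistics import median

def line_params(seg):
    # Represent a segment as an infinite line a*x + b*y = c.
    (x1, y1), (x2, y2) = seg
    a, b = y2 - y1, x1 - x2
    return a, b, a * x1 + b * y1

def intersect(s1, s2):
    # Intersection point of the two infinite lines, or None if parallel.
    a1, b1, c1 = line_params(s1)
    a2, b2, c2 = line_params(s2)
    det = a1 * b2 - a2 * b1
    if abs(det) < 1e-9:
        return None
    return ((c1 * b2 - c2 * b1) / det, (a1 * c2 - a2 * c1) / det)

def vanishing_point(segments):
    # Robust estimate: median of all pairwise segment intersections.
    pts = [p for s1, s2 in combinations(segments, 2)
           if (p := intersect(s1, s2)) is not None]
    return (median(x for x, _ in pts), median(y for _, y in pts))

# Corridor-like edge segments that all converge near the image centre:
segs = [((0, 0), (320, 240)), ((640, 0), (320, 240)),
        ((0, 480), (320, 240)), ((640, 480), (320, 240))]
vp = vanishing_point(segs)  # → (320.0, 240.0)
```

In a real pipeline the segments would come from an edge/line detector run on the corridor image; the median keeps the estimate stable when some segment pairs intersect far from the true vanishing point.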

    Deep learning model for doors detection: a contribution for context awareness recognition of patients with Parkinson’s disease

    Freezing of gait (FoG) is one of the most disabling motor symptoms in Parkinson’s disease; it is described as a symptom in which walking is interrupted by a brief, episodic absence, or marked reduction, of forward progression despite the intention to continue walking. Although the causes of FoG are multifaceted, episodes often occur in response to environmental triggers, such as turning or passing through narrow spaces like a doorway. The symptom can often be overcome using external sensory cues, so recognizing such environments has become a pertinent issue for the PD-affected community. This study aimed to implement a real-time deep-learning door detection model to be integrated into a wearable biofeedback device for delivering on-demand proprioceptive cues. Transfer learning was used to train a MobileNet-SSD in the TensorFlow environment. The model was then converted to a faster and lighter model using TensorFlow Lite and deployed on a Raspberry Pi. The model achieved a precision of 97.2%, a recall of 78.9%, and an F1-score of 0.869. In real-time testing with the wearable device, the model proved fast enough (~2.87 fps) to detect doors accurately in real-life scenarios. Future work will integrate sensory cues with the developed model in the wearable biofeedback device, aiming to validate the final solution with end users.
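The three reported metrics are related by the standard F1 formula, the harmonic mean of precision and recall. This small check is illustrative only; the tiny gap to the reported 0.869 comes from the inputs themselves being rounded:

```python
def f1_score(precision, recall):
    # Harmonic mean of precision and recall.
    return 2 * precision * recall / (precision + recall)

# Reported door-detector metrics from the abstract:
print(f"F1 = {f1_score(0.972, 0.789):.3f}")  # → F1 = 0.871
```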

    Simulation of a mobile robot navigation system

    Mobile robots are used in various application areas, including manufacturing, mining, military operations, and search and rescue missions. There is therefore a need to model robot mobility in a way that tracks robot system modules such as the navigation system and vision-based object recognition. The navigation system must locate the robot's position in the surrounding environment and then plan a path towards the desired destination, identifying all potential obstacles in order to find a suitable path. The objective of this research is to develop a simulation system to identify the difficulties facing mobile robot navigation in industrial environments, and then tackle these problems effectively. The simulation makes use of information provided by various sensors, including vision, range, and force sensors. With the help of battery-operated mobile robots it is possible to move objects around any industry/manufacturing plant and thus minimize environmental impact from carbon emissions and pollution. The use of such robots in industry also makes it safe to deal with hazardous materials. In industry, a mobile robot deals with many tools and pieces of equipment, so it has to detect, recognize, and track these objects. In this paper, object detection and recognition are based on vision sensors and image processing techniques. The techniques covered include Speeded Up Robust Features (SURF), template matching, and colour segmentation. If the robot detects the target in its view, it tracks the target and then grasps it. However, if the object is not in the current view, the robot continues its search to find it. To make the mobile robot move in its environment, a number of basic path-planning strategies have been used. In the navigation system, the robot navigates to the nearest wall (or similar obstacle) and then moves along that obstacle. If an obstacle is detected by the built-in ultrasonic range sensor, the robot navigates around that obstacle and then continues moving along it. While the robot is self-navigating in its environment, it continues to look for the target. The robot used in this work is scalable for industrial applications in mining, search and rescue missions, and so on. It is environmentally friendly and produces no carbon emissions. In this paper the simulation of a path-planning algorithm for an autonomous robot is presented. Results of modelling the robot in a real-world industrial environment for testing the robot’s navigation are also discussed.
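Of the recognition techniques mentioned, template matching is the easiest to sketch. The zero-mean normalized cross-correlation below is one common formulation, shown here on a toy grayscale grid rather than the paper's actual setup:

```python
import math

def ncc(patch, templ):
    # Zero-mean normalized cross-correlation between equal-size patches.
    n = len(templ) * len(templ[0])
    mp = sum(map(sum, patch)) / n
    mt = sum(map(sum, templ)) / n
    num = den_p = den_t = 0.0
    for rp, rt in zip(patch, templ):
        for a, b in zip(rp, rt):
            num += (a - mp) * (b - mt)
            den_p += (a - mp) ** 2
            den_t += (b - mt) ** 2
    if den_p == 0 or den_t == 0:
        return -1.0  # flat patch: no correlation defined
    return num / math.sqrt(den_p * den_t)

def match_template(image, templ):
    # Slide the template over the image; return the best-scoring
    # top-left corner and its score (1.0 = perfect match).
    th, tw = len(templ), len(templ[0])
    best, best_pos = -2.0, None
    for y in range(len(image) - th + 1):
        for x in range(len(image[0]) - tw + 1):
            patch = [row[x:x + tw] for row in image[y:y + th]]
            score = ncc(patch, templ)
            if score > best:
                best, best_pos = score, (x, y)
    return best_pos, best

templ = [[9, 1], [1, 9]]
image = [[0, 0, 0, 0],
         [0, 9, 1, 0],
         [0, 1, 9, 0],
         [0, 0, 0, 0]]
pos, score = match_template(image, templ)  # → pos == (1, 1), score == 1.0
```

A production system would typically use an optimized library routine for this exhaustive scan; the sketch only shows the underlying similarity measure.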

    Probabilistic Joint State Estimation of Robot and Non-static Objects for Mobile Manipulation


    Real-time 2D–3D door detection and state classification on a low-power device

    In this paper, we propose three methods for door state classification with the goal of improving robot navigation in indoor spaces. These methods were also developed to be usable in other areas and applications, since they are not limited to door detection as other related works are. Our methods work offline, on low-powered computers such as the Jetson Nano, in real time, with the ability to differentiate between open, closed, and semi-open doors. We use the 3D object classification network PointNet; real-time semantic segmentation algorithms such as FastFCN, FC-HarDNet, SegNet, and BiSeNet; the object detection algorithm DetectNet; and the 2D object classification networks AlexNet and GoogleNet. We built a 3D and RGB door dataset with images from several indoor environments using a RealSense D435 3D camera. This dataset is freely available online. All methods are analysed in terms of their accuracy and speed on a low-powered computer. We conclude that it is possible to run a door classification algorithm in real time on a low-power device.

    Segmentation of Floors in Corridor Images for Mobile Robot Navigation

    This thesis presents a novel method of floor segmentation from a single image for mobile robot navigation. In contrast with previous approaches that rely upon homographies, our approach does not require multiple images (either stereo or optical flow). It also does not require the camera to be calibrated, even for lens distortion. The technique combines three visual cues for evaluating the likelihood of horizontal intensity edge line segments belonging to the wall-floor boundary. The combination of these cues yields a robust system that works even in the presence of severe specular reflections, which are common in indoor environments. The nearly real-time algorithm is tested on a large database of images collected in a wide variety of conditions, on which it achieves nearly 90% segmentation accuracy. Additionally, we apply the floor segmentation method to low-resolution images and propose a minimalistic corridor representation consisting of the orientation line (center) and the wall-floor boundaries (lateral limits). Our study investigates the impact of image resolution upon the accuracy of extracting such a geometry, showing that wall-floor boundaries can be detected even in texture-poor environments with images as small as 16x12. One of the advantages of working at such resolutions is that the algorithm operates at hundreds of frames per second, or equivalently requires only a small percentage of the CPU.
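The cue-combination step can be sketched as a weighted score per edge segment. The weights, the threshold, and the generic cue names here are hypothetical placeholders, not the thesis's actual cues or parameters:

```python
def combine_cues(cues, weights=(0.4, 0.3, 0.3)):
    # Weighted combination of per-segment cue scores in [0, 1];
    # higher means more likely to lie on the wall-floor boundary.
    return sum(w * c for w, c in zip(weights, cues))

def boundary_segments(segments, threshold=0.5):
    # segments: list of (segment_id, (cue1, cue2, cue3)) tuples.
    return [sid for sid, cues in segments if combine_cues(cues) >= threshold]

candidates = [("s1", (0.9, 0.8, 0.7)),   # strong boundary evidence
              ("s2", (0.2, 0.1, 0.3)),   # e.g. a specular-reflection edge
              ("s3", (0.6, 0.5, 0.4))]
print(boundary_segments(candidates))  # → ['s1', 's3']
```

Combining several weak cues this way is what lets a single misleading cue (such as a bright reflection) be outvoted by the others.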

    Human-Like room segmentation for domestic cleaning robots

    Fleer DR. Human-Like room segmentation for domestic cleaning robots. Robotics. 2017;6(4):35.
    Autonomous mobile robots have recently become a popular solution for automating cleaning tasks. In one application, the robot cleans a floor space by traversing and covering it completely. While fulfilling its task, such a robot may create a map of its surroundings. For domestic indoor environments, these maps often consist of rooms connected by passageways. Segmenting the map into these rooms has several uses, such as hierarchical planning of cleaning runs by the robot, or the definition of cleaning plans by the user. Especially in the latter application, the robot-generated room segmentation should match the human understanding of rooms. Here, we present a novel method that solves this problem for the graph of a topo-metric map: first, a classifier identifies those graph edges that cross a border between rooms. This classifier utilizes data from multiple robot sensors, such as obstacle measurements and camera images. Next, we attempt to segment the map at these room-border edges using graph clustering. By training the classifier on user-annotated data, this produces a human-like room segmentation. We optimize and test our method on numerous realistic maps generated by our cleaning-robot prototype and its simulated version. Overall, we find that our method produces more human-like room segmentations compared to mere graph clustering. However, unusual room borders that differ from the training data remain a challenge.
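The border-edge-then-segment idea can be sketched on a toy topo-metric graph. Here a plain connected-components pass stands in for the paper's graph clustering, and the edge classifier's output is simply assumed given:

```python
from collections import defaultdict

def segment_rooms(nodes, edges, border_edges):
    # Drop edges classified as room borders, then take connected
    # components of the remaining topo-metric graph as rooms.
    adj = defaultdict(list)
    for u, v in edges:
        if (u, v) not in border_edges and (v, u) not in border_edges:
            adj[u].append(v)
            adj[v].append(u)
    seen, rooms = set(), []
    for n in nodes:
        if n in seen:
            continue
        stack, comp = [n], set()
        while stack:
            cur = stack.pop()
            if cur in comp:
                continue
            comp.add(cur)
            stack.extend(adj[cur])
        seen |= comp
        rooms.append(comp)
    return rooms

# Two rooms joined by a doorway edge (2, 3) that the classifier flagged:
nodes = [1, 2, 3, 4]
edges = [(1, 2), (2, 3), (3, 4)]
print(segment_rooms(nodes, edges, {(2, 3)}))  # → [{1, 2}, {3, 4}]
```

The quality of the segmentation therefore hinges entirely on the border classifier, which is why the paper trains it on user-annotated, multi-sensor data.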