
    An Experimental Distributed Framework for Distributed Simultaneous Localization and Mapping

    Simultaneous Localization and Mapping (SLAM) is widely used in applications such as rescue, navigation, semantic mapping, augmented reality, and home entertainment. Most of these applications would benefit from multiple devices operating in a distributed setting. Distributed SLAM research would in turn benefit from a framework in which the complexities of network communication are already handled. In this paper we introduce such a framework, built on the open-source Robot Operating System (ROS) and the VirtualBox virtualization software. Furthermore, we describe a way to measure the communication statistics of the distributed SLAM system.
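    The communication statistics the abstract mentions boil down to summarizing logged message traffic per topic. A minimal sketch, assuming messages are logged as (timestamp, size) records; `MsgRecord` and `comm_stats` are hypothetical names, not part of the paper's framework:

    ```python
    from dataclasses import dataclass

    @dataclass
    class MsgRecord:
        stamp: float   # receive time in seconds
        nbytes: int    # serialized message size in bytes

    def comm_stats(records):
        """Summarize throughput and message rate for one map-sharing topic.

        `records` is a time-ordered list of MsgRecord covering one topic;
        returns (bytes per second, messages per second) over the observed
        interval. Illustrative only -- the paper's actual metrics may differ.
        """
        if len(records) < 2:
            raise ValueError("need at least two messages to span an interval")
        span = records[-1].stamp - records[0].stamp
        total = sum(r.nbytes for r in records)
        return total / span, len(records) / span
    ```

    In a live ROS system the same numbers can be read off with `rostopic bw` and `rostopic hz`; a logging approach like this is useful when statistics must be gathered across several virtualized nodes at once.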

    Visual SLAM using straight lines

    The present thesis focuses on the problem of Simultaneous Localisation and Mapping (SLAM) using only visual data (VSLAM): concurrently estimating the position of a moving camera and building a consistent map of the environment. Since implementing a whole VSLAM system is beyond the scope of a degree thesis, the main aim is to improve an existing visual SLAM system by complementing the commonly used point features with straight-line primitives. This enables more accurate localisation in environments with few feature points, such as corridors. ScaViSLAM by Strasdat et al., a state-of-the-art real-time visual SLAM framework, is used as the foundation for the project. Since it currently supports only stereo and RGB-D systems, a monocular approach will be investigated, along with its integration as a ROS package so that it can be deployed on a mobile robot. For the experimental results, the Care-O-bot service robot developed by Fraunhofer IPA will be used.
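    A straight-line primitive must first be fit to a set of image edge points. A minimal sketch of total-least-squares line fitting (direction from the centered second moments); this is a generic illustration, and ScaViSLAM's actual line parameterization may differ:

    ```python
    import math

    def fit_line_direction(points):
        """Total-least-squares direction of a 2D point set.

        Returns a unit vector (cos t, sin t) along the dominant axis of the
        centered scatter matrix; the fitted line passes through the centroid
        of `points` in this direction.
        """
        n = len(points)
        cx = sum(x for x, _ in points) / n
        cy = sum(y for _, y in points) / n
        sxx = sum((x - cx) ** 2 for x, _ in points)
        syy = sum((y - cy) ** 2 for _, y in points)
        sxy = sum((x - cx) * (y - cy) for x, y in points)
        # Closed-form principal axis of the 2x2 scatter matrix.
        theta = 0.5 * math.atan2(2 * sxy, sxx - syy)
        return math.cos(theta), math.sin(theta)
    ```

    For collinear points such as (0,0), (1,2), (2,4) this recovers a direction proportional to (1, 2), i.e. the line y = 2x.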

    DELIBOT WITH SLAM IMPLEMENTATION

    This paper describes and discusses research on "DeliBOT – A Mobile Robot with an Implementation of SLAM Using Computer Vision/Machine Learning Techniques". The principal objective is to study the use of the Kinect in mobile robotics and to assemble an integrated system capable of building a map of the environment and localizing the mobile robot with respect to that map using visual cues. The work comprised four principal stages. The first was studying and testing solutions for mapping and navigation with an RGB-D sensor, the Kinect. The second was implementing a system capable of identifying and localizing objects in the point cloud produced by the Kinect, allowing further tasks to run on the system while taking the computational load into account. The third was identifying landmarks and the improvement they can bring to the framework. Finally, the preceding modules were integrated, and the integrated system was experimentally evaluated and validated. The demand for substituting robots for humans is becoming more plausible these days because robots arguably make fewer mistakes. Over the past few years, the technology has become more accurate and reliable, and researchers have begun to incorporate more sensors. Using the available sensors, the robot perceives and identifies the environment it is in and builds a map; it can also locate itself within that environment. The robot's fundamental operations are object identification and localization for carrying out its services. The robot performs path planning and obstacle avoidance by setting a target or determining a goal [1].
Because of the outstanding research and robotics applications in almost every segment of human life, from space surveillance to health care, solutions have been created for autonomous mobile robots to perform tasks in indoor environments without human intervention [2], in applications such as cleaning and transportation. Safe, high-performing robot navigation in an environment requires a map of that environment. Since in most real-life applications a map is not given, an exploration algorithm is used.

    Localization and Mapping from Shore Contours and Depth

    This work examines the problem of solving SLAM in aquatic environments using an unmanned surface vessel under conditions that restrict global knowledge of the robot's pose: the absence of a global positioning system for estimating position, a poor vehicle motion model, and the absence of a usable magnetic field for estimating absolute heading. These conditions arise in terrestrial environments where GPS satellite reception is occluded by surrounding structures and magnetic interference affects compass measurements. Similar conditions are anticipated in extraterrestrial environments such as Titan, which lacks the infrastructure necessary for traditional positioning sensors and whose unstable magnetic core renders compasses useless. This work develops a solution to the SLAM problem that couples shore features with information about the depth of the water column. The approach is validated experimentally using an autonomous surface vehicle equipped with omnidirectional video and SONAR, and the results are compared to GPS ground truth.

    3D Perception Based Lifelong Navigation of Service Robots in Dynamic Environments

    Lifelong navigation of mobile robots is the ability to operate reliably over extended periods of time in dynamically changing environments. Historically, computational capacity and sensor capability constrained the richness of the internal representation of the environment that a mobile robot could use for navigation tasks. With affordable contemporary sensing technology that provides rich 3D information about the environment, and with increased computational power, we can make growing use of semantic environmental information in navigation-related tasks. A navigation system has many subsystems that must operate in real time while competing for computational resources, such as the perception, localization, and path-planning systems. The main thesis of this work is that 3D information from the environment can be used to increase navigational robustness without trading off any of the real-time subsystems. To support these claims, this dissertation presents robust, real-world, 3D-perception-based navigation systems in the domains of indoor doorway detection and traversal, sidewalk-level outdoor navigation in urban environments, and global localization in large-scale indoor warehouse environments. The discussion of these systems includes methods of 3D point-cloud-based object detection that find the objects of semantic interest for the given navigation tasks, as well as the use of 3D information in the navigation systems for purposes such as localization and dynamic obstacle avoidance. Experimental results for each of these applications demonstrate the effectiveness of the techniques for robust long-term autonomous operation.
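    Dynamic obstacle avoidance from 3D data typically begins by filtering the point cloud to the height band the robot body can actually collide with. A minimal sketch with illustrative thresholds, not the dissertation's actual pipeline:

    ```python
    def obstacle_points(cloud, z_floor=0.05, z_top=1.8):
        """Keep 3D points that could collide with the robot body.

        `cloud` is an iterable of (x, y, z) tuples in the robot frame.
        Points below z_floor are treated as ground returns, and points
        above z_top as overhangs the robot can pass under; everything in
        between is a potential obstacle. Thresholds are illustrative.
        """
        return [p for p in cloud if z_floor <= p[2] <= z_top]
    ```

    Real systems refine this with ground-plane estimation rather than a fixed floor threshold, since sidewalks and ramps tilt the ground relative to the sensor.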

    A collaborative monocular visual simultaneous localization and mapping solution to generate a semi-dense 3D map.

    The utilization and generation of indoor maps are critical to accurate indoor tracking. Simultaneous Localization and Mapping (SLAM) is one of the main techniques used for such map generation. In SLAM, an agent generates a map of an unknown environment while approximating its own location within it. The prevalence and affordability of cameras encourage the use of monocular visual SLAM, where a camera is the only sensing device in the SLAM process. In modern applications, multiple mobile agents may be involved in the generation of indoor maps, requiring a distributed computational framework. Each agent generates its own local map, which can then be combined with those of other agents into a map covering a larger area. In doing so, the agents cover a given environment faster than a single agent could. Furthermore, they can interact with each other in the same environment, making this framework more practical, especially for collaborative applications such as augmented reality. One of the main challenges of collaborative SLAM is identifying overlapping maps, especially when the relative starting positions of the agents are unknown. We propose a system composed of multiple monocular agents with unknown relative starting positions that generates a semi-dense global map of the environment.
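    Once overlapping maps are identified, the agents' local maps must be brought into a common frame. A sketch of the closed-form least-squares (Procrustes) solution for a 2D rigid transform from matched landmark pairs; this is a generic illustration, not the paper's method, which operates on monocular semi-dense maps and must also resolve the scale ambiguity that a rigid transform alone ignores:

    ```python
    import math

    def align_maps(pairs):
        """Least-squares 2D rigid transform (theta, tx, ty) from map A to map B.

        `pairs` is a list of ((ax, ay), (bx, by)) matched landmarks related by
        b = R(theta) @ a + t. Closed-form Procrustes solution: the rotation
        comes from the summed dot and cross products of the centered pairs.
        """
        n = len(pairs)
        acx = sum(a[0] for a, _ in pairs) / n
        acy = sum(a[1] for a, _ in pairs) / n
        bcx = sum(b[0] for _, b in pairs) / n
        bcy = sum(b[1] for _, b in pairs) / n
        c = s = 0.0
        for (ax, ay), (bx, by) in pairs:
            ax, ay, bx, by = ax - acx, ay - acy, bx - bcx, by - bcy
            c += ax * bx + ay * by    # sum of dot products
            s += ax * by - ay * bx    # sum of cross products
        theta = math.atan2(s, c)
        tx = bcx - (math.cos(theta) * acx - math.sin(theta) * acy)
        ty = bcy - (math.sin(theta) * acx + math.cos(theta) * acy)
        return theta, tx, ty
    ```

    With noisy correspondences the same estimate is usually wrapped in RANSAC so that a few bad matches between the maps cannot corrupt the alignment.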

    A review of sensor technology and sensor fusion methods for map-based localization of service robot

    Service robots are currently gaining traction, particularly in the hospitality, geriatric-care, and healthcare industries. The navigation of service robots requires high adaptability, flexibility, and reliability. Map-based navigation suits service robots because changes in the environment are easy to incorporate into the map and a new optimal path can be determined flexibly. For map-based navigation to be robust, an accurate and precise localization method is necessary. The localization problem can be defined as a robot recognizing its own position in a given environment, a crucial step in any navigation process. Major difficulties of localization include dynamic changes in the real world, uncertainty, and limited sensor information. This paper presents a comparative review of sensor technologies and sensor fusion methods suitable for map-based localization, focusing on service robot applications.
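    The simplest building block of the sensor fusion methods such a review covers is inverse-variance weighting of two noisy estimates of the same quantity, which is also the scalar form of the Kalman measurement update. A minimal sketch:

    ```python
    def fuse(x1, var1, x2, var2):
        """Minimum-variance fusion of two scalar estimates.

        Each estimate is weighted by the inverse of its variance; the fused
        variance is never larger than either input, which is why combining
        redundant sensors improves localization.
        """
        w1, w2 = 1.0 / var1, 1.0 / var2
        return (w1 * x1 + w2 * x2) / (w1 + w2), 1.0 / (w1 + w2)
    ```

    For example, fusing two equally uncertain range readings of 10 m and 12 m (variance 4 each) yields 11 m with variance 2, i.e. half the uncertainty of either sensor alone.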

    Reinforcement Learning with Frontier-Based Exploration via Autonomous Environment

    Active Simultaneous Localisation and Mapping (SLAM) is a critical problem in autonomous robotics, enabling robots to navigate to new regions while building an accurate model of their surroundings. Visual SLAM is a popular variant that uses camera imagery to build that model. However, existing frontier-based exploration strategies can yield non-optimal paths when multiple frontiers lie at similar distances. This issue can impact the efficiency and accuracy of visual SLAM, which is crucial for a wide range of robotic applications such as search and rescue, exploration, and mapping. To address this issue, this research combines an existing visual graph SLAM system, ExploreORB, with reinforcement learning. The proposed algorithm allows the robot to learn and optimize exploration routes through a reward-based system, creating an accurate map of the environment with proper frontier selection. Frontier-based exploration detects unexplored areas, while reinforcement learning optimizes the robot's movement by assigning rewards to optimal frontier points. Graph SLAM is then used to integrate the robot's sensory data and build an accurate map of the environment. The proposed algorithm aims to improve the efficiency and accuracy of ExploreORB by optimizing frontier exploration to build a more accurate map. To evaluate the effectiveness of the proposed approach, experiments will be conducted in various virtual environments using Gazebo, a robot simulation package. The results will be compared with existing methods to demonstrate the potential of the proposed approach as an optimal solution for SLAM in autonomous robotics.
    Comment: 23 pages, Journa
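    The reward-based frontier selection described above can be sketched as scoring each candidate frontier by expected information gain minus a weighted travel cost and committing to the argmax, which breaks the near-tie between frontiers at similar distances. Weights and names are illustrative, not ExploreORB's actual reward:

    ```python
    import math

    def select_frontier(robot_xy, frontiers, cost_weight=0.5):
        """Pick the frontier with the highest reward.

        `frontiers` is a list of ((x, y), unknown_cells) candidates. The
        reward is the information gain (unknown cells expected to be
        revealed) minus a weighted straight-line travel distance, so of two
        frontiers at similar range the more informative one wins.
        """
        def reward(f):
            (x, y), gain = f
            dist = math.hypot(x - robot_xy[0], y - robot_xy[1])
            return gain - cost_weight * dist
        return max(frontiers, key=reward)[0]
    ```

    A learned policy replaces the fixed `cost_weight` trade-off with one tuned from experience, which is where the reinforcement-learning component enters.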