
    An overview of robotics and autonomous systems for harsh environments

    Get PDF
    Across a wide range of industries and applications, robotics and autonomous systems can fulfil crucial and challenging tasks such as inspection, exploration, monitoring, drilling, sampling and mapping in areas including scientific discovery, disaster prevention, human rescue and infrastructure management. However, in many situations the associated environment is either too dangerous for or inaccessible to humans. Hence, a wide range of robots have been developed and deployed to replace or aid humans in these activities. A look at these harsh-environment applications of robotics demonstrates the diversity of technologies developed. This paper reviews some key application areas of robotics that involve interactions with harsh environments (such as search and rescue, space exploration, and deep-sea operations), gives an overview of the developed technologies and provides a discussion of the key trends and future directions common to many of these areas.

    Comparative Evaluation of RGB-D SLAM Methods for Humanoid Robot Localization and Mapping

    Full text link
    In this paper, we conducted a comparative evaluation of three RGB-D SLAM (Simultaneous Localization and Mapping) algorithms, RTAB-Map, ORB-SLAM3, and OpenVSLAM, for SURENA-V humanoid robot localization and mapping. In our test, the robot followed a full circular path with an Intel RealSense D435 RGB-D camera installed on its head. In assessing localization accuracy, ORB-SLAM3 outperformed the others with an ATE of 0.1073, followed by RTAB-Map at 0.1641 and OpenVSLAM at 0.1847. However, both ORB-SLAM3 and OpenVSLAM faced challenges in maintaining accurate odometry when the robot encountered a wall with limited feature points. Nevertheless, OpenVSLAM demonstrated the ability to detect loop closures and successfully relocalize itself within the map when the robot approached its initial location. The investigation also extended to mapping capabilities, where RTAB-Map excelled by offering diverse mapping outputs, including dense, OctoMap, and occupancy grid maps. In contrast, both ORB-SLAM3 and OpenVSLAM provided only sparse maps. Comment: 6 pages, 11th RSI International Conference on Robotics and Mechatronics (ICRoM 2023).
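
    For reference, the ATE (Absolute Trajectory Error) reported above is commonly computed as the root-mean-square translational error after rigidly aligning the estimated trajectory to the ground truth. The sketch below shows that standard definition, assuming time-associated (N, 3) position arrays; it illustrates the metric itself, not the evaluation script used by the authors.

```python
import numpy as np

def absolute_trajectory_error(gt, est):
    """RMSE of translational error after rigid SE(3) alignment of the estimated
    trajectory to ground truth (Kabsch/Horn closed form, no scale).

    gt, est: (N, 3) arrays of corresponding positions (already time-associated).
    """
    gt_c = gt - gt.mean(axis=0)            # center both point sets
    est_c = est - est.mean(axis=0)
    # Optimal rotation from the SVD of the cross-covariance matrix
    U, _, Vt = np.linalg.svd(est_c.T @ gt_c)
    S = np.diag([1.0, 1.0, np.sign(np.linalg.det(U @ Vt))])
    R = (U @ S @ Vt).T                     # rotation mapping est -> gt
    t = gt.mean(axis=0) - R @ est.mean(axis=0)
    aligned = (R @ est.T).T + t
    return float(np.sqrt(np.mean(np.sum((gt - aligned) ** 2, axis=1))))
```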

    Proprioceptive Invariant Robot State Estimation

    Full text link
    This paper reports on the development of DRIFT, a real-time invariant proprioceptive robot state estimation framework. A didactic introduction to invariant Kalman filtering is provided to make this cutting-edge symmetry-preserving approach accessible to a broader range of robotics applications. Furthermore, this work develops a proprioceptive state estimation framework for dead reckoning that consumes only data from an onboard inertial measurement unit and the kinematics of the robot, with two optional modules, a contact estimator and a gyro filter for low-cost robots, enabling a variety of robotic platforms to track the robot's state over long trajectories in the absence of perceptual data. Extensive real-world experiments using a legged robot, an indoor wheeled robot, a field robot, and a full-size vehicle, as well as simulation results with a marine robot, are provided to understand the limits of DRIFT.
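
    As a point of reference for proprioceptive dead reckoning of this kind, the sketch below shows the standard strapdown IMU propagation of orientation, velocity and position between kinematic or contact updates. It is a minimal illustration assuming bias-corrected gyro and accelerometer readings and simple Euler integration, not the symmetry-preserving (invariant Kalman filter) implementation used in DRIFT.

```python
import numpy as np

GRAVITY = np.array([0.0, 0.0, -9.81])  # world-frame gravity (m/s^2)

def skew(w):
    """Map a 3-vector to its skew-symmetric matrix."""
    return np.array([[0.0, -w[2], w[1]],
                     [w[2], 0.0, -w[0]],
                     [-w[1], w[0], 0.0]])

def so3_exp(phi):
    """Exponential map from so(3) to SO(3) (Rodrigues' formula)."""
    theta = np.linalg.norm(phi)
    if theta < 1e-9:
        return np.eye(3) + skew(phi)
    a = skew(phi / theta)
    return np.eye(3) + np.sin(theta) * a + (1.0 - np.cos(theta)) * (a @ a)

def imu_propagate(R, v, p, gyro, accel, dt):
    """One strapdown step: propagate orientation R, velocity v, position p from
    body-frame angular rate (rad/s) and specific force (m/s^2) over dt seconds."""
    a_world = R @ accel + GRAVITY        # rotate specific force to world frame, add gravity
    p_next = p + v * dt + 0.5 * a_world * dt**2
    v_next = v + a_world * dt
    R_next = R @ so3_exp(gyro * dt)      # integrate attitude on the manifold
    return R_next, v_next, p_next
```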

    Visual perception system and method for a humanoid robot

    Get PDF
    A robotic system includes a humanoid robot with robotic joints each movable using one or more actuators, and a distributed controller for controlling the movement of each of the robotic joints. The controller includes a visual perception module (VPM) for visually identifying and tracking an object in the field of view of the robot under threshold lighting conditions. The VPM includes optical devices for collecting an image of the object, a positional extraction device, and a host machine having an algorithm for processing the image and positional information. The algorithm visually identifies and tracks the object, and automatically adapts an exposure time of the optical devices to prevent feature data loss of the image under the threshold lighting conditions. A method of identifying and tracking the object includes collecting the image, extracting positional information of the object, and automatically adapting the exposure time to thereby prevent feature data loss of the image.
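
    The exposure-adaptation step can be pictured as a simple feedback loop on image brightness. The sketch below is a hypothetical illustration of such a rule (the target mean, gain and limits are invented values), not the control law described in the patent.

```python
import numpy as np

def adapt_exposure(image, exposure_ms, target_mean=110.0,
                   gain=0.6, min_ms=0.05, max_ms=30.0):
    """Hypothetical feedback rule: nudge the exposure time so that the mean
    intensity of an 8-bit grayscale image approaches a target value, clamped
    to the camera's limits.

    image: (H, W) uint8 array; exposure_ms: current exposure time in milliseconds.
    """
    mean = float(image.mean())
    # Multiplicative update: under-exposed frames lengthen the exposure,
    # saturated frames shorten it.
    ratio = target_mean / max(mean, 1.0)
    new_exposure = exposure_ms * (ratio ** gain)
    return float(np.clip(new_exposure, min_ms, max_ms))
```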

    Robust Legged Robot State Estimation Using Factor Graph Optimization

    Full text link
    Legged robots, specifically quadrupeds, are becoming increasingly attractive for industrial applications such as inspection. However, to leave the laboratory and become useful to an end user requires reliability in harsh conditions. From the perspective of state estimation, it is essential to be able to accurately estimate the robot's state despite challenges such as uneven or slippery terrain, textureless and reflective scenes, as well as dynamic camera occlusions. We are motivated to reduce the dependency on foot contact classifications, which fail when slipping, and to reduce position drift during dynamic motions such as trotting. To this end, we present a factor graph optimization method for state estimation which tightly fuses and smooths inertial navigation, leg odometry and visual odometry. The effectiveness of the approach is demonstrated using the ANYmal quadruped robot navigating in a realistic outdoor industrial environment. This experiment included trotting, walking, crossing obstacles and ascending a staircase. The proposed approach decreased the relative position error by up to 55% and the absolute position error by 76% compared to kinematic-inertial odometry. Comment: 8 pages, 12 figures. Accepted to RA-L + IROS 2019, July 2019.
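
    To illustrate the factor-graph formulation, the sketch below builds a toy graph in GTSAM in which leg odometry and visual odometry each contribute a relative-pose (between) factor for the same step, and the optimizer fuses them according to their noise models. The measurement values and sigmas are invented, and the actual method is tightly coupled (including preintegrated inertial factors), so this is only a loosely coupled caricature of the idea.

```python
import numpy as np
import gtsam
from gtsam.symbol_shorthand import X

graph = gtsam.NonlinearFactorGraph()
initial = gtsam.Values()

# Noise models (rotation sigmas first, then translation, per GTSAM convention).
prior_noise = gtsam.noiseModel.Diagonal.Sigmas(np.full(6, 0.01))
leg_noise = gtsam.noiseModel.Diagonal.Sigmas(np.array([0.05, 0.05, 0.05, 0.10, 0.10, 0.10]))
vo_noise = gtsam.noiseModel.Diagonal.Sigmas(np.array([0.02, 0.02, 0.02, 0.04, 0.04, 0.04]))

# Anchor the first pose at the origin.
graph.add(gtsam.PriorFactorPose3(X(0), gtsam.Pose3(), prior_noise))
initial.insert(X(0), gtsam.Pose3())

# Invented relative-pose measurements for one step from the two odometry sources.
leg_delta = gtsam.Pose3(gtsam.Rot3(), gtsam.Point3(0.30, 0.00, 0.0))
vo_delta = gtsam.Pose3(gtsam.Rot3(), gtsam.Point3(0.28, 0.01, 0.0))
graph.add(gtsam.BetweenFactorPose3(X(0), X(1), leg_delta, leg_noise))
graph.add(gtsam.BetweenFactorPose3(X(0), X(1), vo_delta, vo_noise))
initial.insert(X(1), leg_delta)

# The optimizer reconciles the two measurements according to their covariances.
result = gtsam.LevenbergMarquardtOptimizer(graph, initial).optimize()
print(result.atPose3(X(1)).translation())
```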

    Hand-worn Haptic Interface for Drone Teleoperation

    Full text link
    Drone teleoperation is usually accomplished using remote radio controllers, devices that can be hard to master for inexperienced users. Moreover, the limited amount of information fed back to the user about the robot's state, often restricted to vision, can represent a bottleneck for operation in several conditions. In this work, we present a wearable interface for drone teleoperation and its evaluation through a user study. The two main features of the proposed system are a data glove that allows the user to control the drone trajectory by hand motion and a haptic system used to augment their awareness of the environment surrounding the robot. This interface can be employed for the operation of robotic systems in line of sight (LoS) by inexperienced operators and allows them to safely perform tasks common in inspection and search-and-rescue missions, such as approaching walls and crossing narrow passages with limited visibility. In addition to the design and implementation of the wearable interface, we performed a systematic study to assess the effectiveness of the system through three user studies (n = 36) to evaluate the users' learning path and their ability to perform tasks with limited visibility. We validated our ideas in both a simulated and a real-world environment. Our results demonstrate that the proposed system can improve teleoperation performance in different cases compared to standard remote controllers, making it a viable alternative to standard human-robot interfaces. Comment: Accepted at the IEEE International Conference on Robotics and Automation (ICRA) 202
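
    A minimal way to picture the two interface components is a mapping from hand attitude to velocity setpoints plus a haptic cue that scales with obstacle proximity. The functions below are hypothetical illustrations with invented scaling constants, not the mapping used in the paper.

```python
import numpy as np

def hand_to_velocity(roll, pitch, max_speed=1.0, deadzone=0.1):
    """Hypothetical mapping from hand attitude (radians) to planar velocity
    setpoints: tilting the hand forward or sideways commands motion in that direction."""
    cmd = np.array([-pitch, roll]) / (np.pi / 4)       # normalize by a 45-degree tilt
    cmd = np.where(np.abs(cmd) < deadzone, 0.0, cmd)   # ignore small tremors
    return np.clip(cmd, -1.0, 1.0) * max_speed         # (vx, vy) in m/s

def obstacle_to_vibration(distance, d_max=2.0):
    """Hypothetical haptic cue: vibration amplitude grows as the drone nears an obstacle."""
    return float(np.clip(1.0 - distance / d_max, 0.0, 1.0))
```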

    Neural Network based Robot 3D Mapping and Navigation using Depth Image Camera

    Get PDF
    Robotics research has been developing rapidly over the past decade. However, bringing robots into household or office environments, where they must cooperate well with humans, still requires more research. One of the main problems is robot localization and navigation: to accomplish its missions, a mobile robot needs to localize itself in the environment, find the best path and navigate to the goal. Navigation methods can be categorized into map-based navigation and map-less navigation. In this research we propose a method based on neural networks, using a depth image camera, to solve the robot navigation problem. By using a depth image camera, the surrounding environment can be recognized regardless of the lighting conditions, and a neural network-based approach is fast enough for real-time robot navigation, which is important for developing fully autonomous robots. In our method, mapping and annotating of the surrounding environment is done by the robot using a feed-forward neural network and a CNN. The 3D map contains not only the geometric information of the environment but also its semantic content, which is important for robots to accomplish their tasks. For instance, consider the task "Go to the cabinet to take a medicine": the robot needs to know the positions of the cabinet and the medicine, which are not supplied by the geometric map alone. A feed-forward neural network is trained to convert the depth information from depth images into 3D points in real-world coordinates, and a CNN is trained to segment the image into classes. By combining the two neural networks, the objects in the environment are segmented and their positions are determined. We implemented the proposed method on a mobile humanoid robot. Initially, the robot moves through the environment and builds the 3D map with objects placed at their positions; it then utilizes the developed 3D map for goal-directed navigation. The experimental results show good performance in terms of 3D map accuracy and robot navigation. Most of the objects in the working environments are classified by the trained CNN, and unrecognized objects are classified by the feed-forward neural network. As a result, the generated maps accurately reflect the working environments and can be used by robots to navigate safely within them. The 3D geometric maps can be generated regardless of the lighting conditions, and the proposed localization method is robust even in texture-less environments, which are the toughest environments in the field of vision-based localization. Doctoral dissertation (Doctor of Engineering), Hosei University.
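
    The geometric relationship that the feed-forward network approximates, converting a depth image into 3D points, is the pinhole back-projection shown below. The intrinsics fx, fy, cx, cy are placeholders for the actual camera calibration; this closed-form version is given only for reference alongside the learned mapping described above.

```python
import numpy as np

def depth_to_points(depth, fx, fy, cx, cy):
    """Back-project a depth image (meters) into camera-frame 3D points using the
    pinhole model.

    depth: (H, W) array; fx, fy, cx, cy: camera intrinsics in pixels.
    Returns an (H, W, 3) array of points in the camera frame.
    """
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))  # pixel coordinates
    z = depth
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    return np.stack([x, y, z], axis=-1)
```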

    A probabilistic framework for stereo-vision based 3D object search with 6D pose estimation

    Get PDF
    This paper presents a method whereby an autonomous mobile robot can search for a 3-dimensional (3D) object using an on-board stereo camera sensor mounted on a pan-tilt head. Search efficiency is achieved by combining a coarse-scale global search with a fine-scale local search. A grid-based probability map is initially generated using the coarse search, which is based on the color histogram of the desired object. Peaks in the probability map are visited in sequence, where a local (refined) search method based on 3D SIFT features is applied to establish or reject the existence of the desired object, and to update the probability map using Bayesian recursion. Once found, the 6D object pose is also estimated. Obstacle avoidance during search can be naturally integrated into the method. Experimental results obtained from the use of this method on a mobile robot are presented to illustrate and validate the approach, confirming that the search strategy can be carried out with modest computation.
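
    The Bayesian recursion over the grid-based probability map can be sketched as a per-cell update from each refined (SIFT-based) observation. The detector true-positive and false-positive rates below are illustrative placeholders, not values from the paper.

```python
def bayes_update(prior, detected, p_det=0.8, p_false=0.1):
    """Recursive Bayes update for the probability that the object occupies a grid
    cell, given one local-search observation. p_det and p_false are illustrative
    detector true-positive and false-positive rates."""
    if detected:
        likelihood_obj, likelihood_bg = p_det, p_false
    else:
        likelihood_obj, likelihood_bg = 1.0 - p_det, 1.0 - p_false
    return likelihood_obj * prior / (
        likelihood_obj * prior + likelihood_bg * (1.0 - prior))

# Example: a cell flagged by the coarse colour-histogram search but rejected twice
# by the refined SIFT-based check drops back toward the background probability.
p = 0.6
for observed in (False, False):
    p = bayes_update(p, observed)
print(round(p, 3))
```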

    A Comprehensive Review on Autonomous Navigation

    Full text link
    The field of autonomous mobile robots has undergone dramatic advancements over the past decades. Despite achieving important milestones, several challenges are yet to be addressed. Aggregating the achievements of the robotics community in survey papers is vital to keep track of the current state of the art and the challenges that must be tackled in the future. This paper provides a comprehensive review of autonomous mobile robots covering topics such as sensor types, mobile robot platforms, simulation tools, path planning and following, sensor fusion methods, obstacle avoidance, and SLAM. The motivation for presenting a survey paper is twofold. First, the field of autonomous navigation evolves quickly, so writing survey papers regularly is crucial to keep the research community well aware of its current status. Second, deep learning methods have revolutionized many fields, including autonomous navigation; it is therefore necessary to give an appropriate treatment of the role of deep learning in autonomous navigation, which is also covered in this paper. Future work and research gaps are also discussed.

    Adaptive and intelligent navigation of autonomous planetary rovers - A survey

    Get PDF
    The application of robotics and autonomous systems in space has increased dramatically. The ongoing Mars rover mission involving the Curiosity rover, along with the success of its predecessors, is a key milestone that showcases the existing capabilities of robotic technology. Nevertheless, there has still been a heavy reliance on human tele-operators to drive these systems. Reducing the reliance on human experts for navigational tasks on Mars remains a major challenge due to the harsh and complex nature of the Martian terrain. The development of a truly autonomous rover capable of navigating effectively in such environments requires intelligent and adaptive methods that fit a system with limited resources. This paper surveys a representative selection of work applicable to autonomous planetary rover navigation, discussing some ongoing challenges and promising future research directions from the perspectives of the authors.