
    Design and control of SLIDER: an ultra-lightweight, knee-less, low-cost bipedal walking robot

    Most state-of-the-art bipedal robots are designed to be highly anthropomorphic and therefore possess legs with knees. Whilst this facilitates more human-like locomotion, there are implementation issues that make walking with straight or near-straight legs difficult. Most bipedal robots have to move with a constant bend in the legs to avoid singularities at the knee joints and to keep the centre of mass at a constant height for control purposes. Furthermore, a knee increases both the design complexity and the weight of the leg, hindering the robot’s performance in agile behaviours such as running and jumping. We present SLIDER, an ultra-lightweight, low-cost bipedal walking robot with a novel knee-less leg design. This non-anthropomorphic straight-legged design reduces the weight of the legs significantly whilst retaining the functionality of anthropomorphic legs. Simulation results show that SLIDER’s low-inertia legs produce less vertical motion of the centre of mass (CoM) during walking than anthropomorphic robots, indicating that SLIDER is closer to the widely used Inverted Pendulum (IP) model. Finally, stable walking on flat terrain is demonstrated both in simulation and in the physical world, and feedback control is implemented to address challenges with the physical robot.
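    The Inverted Pendulum comparison above can be illustrated with a minimal numerical sketch of the linear inverted pendulum (LIP) CoM dynamics, x'' = (g / z0) x. This is a generic illustration, not SLIDER's controller; the CoM height and initial conditions are made-up values.

```python
import math

def simulate_lip(x0, v0, z0=0.7, g=9.81, dt=0.001, t_end=0.5):
    """Integrate the linear inverted pendulum (LIP) CoM dynamics
    x'' = (g / z0) * x with semi-implicit Euler steps.
    Returns the final horizontal CoM position and velocity."""
    omega2 = g / z0           # squared natural frequency of the LIP
    x, v = x0, v0
    for _ in range(int(t_end / dt)):
        v += omega2 * x * dt  # acceleration grows linearly with CoM offset
        x += v * dt
    return x, v

def lip_closed_form(x0, v0, t, z0=0.7, g=9.81):
    """Closed-form LIP solution for comparison:
    x(t) = x0*cosh(w t) + (v0/w)*sinh(w t), with w = sqrt(g / z0)."""
    w = math.sqrt(g / z0)
    return x0 * math.cosh(w * t) + (v0 / w) * math.sinh(w * t)
```

    Because the CoM height z0 is assumed constant, the dynamics are linear, which is what makes a low vertical CoM motion desirable for control.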

    Casualty detection for mobile rescue robots via ground-projected point clouds

    In order to operate autonomously, mobile rescue robots need to be able to detect human casualties in disaster situations. In this paper, we propose a novel method for the autonomous detection of casualties lying on the ground based on point-cloud data. This data can be obtained from different sensors, such as an RGB-D camera or a 3D LIDAR sensor. The method is based on a ground-projected point-cloud (GPPC) image to achieve human body shape detection. A preliminary experiment has been conducted using the RANSAC method for floor detection, and the HOG feature with an SVM classifier to detect the human body shape. The results show that the proposed method succeeds in identifying a casualty from point-cloud data over a wide range of viewing angles.
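    The floor-detection step named above (RANSAC) can be sketched as a basic plane-fitting loop over an (N, 3) point cloud. This is a generic illustration, not the authors' implementation; the function name, threshold and iteration count are illustrative assumptions.

```python
import numpy as np

def ransac_plane(points, n_iters=200, inlier_thresh=0.02, rng=None):
    """Fit a ground plane to an (N, 3) point cloud with a basic RANSAC loop.
    Returns ((unit normal, offset d), inlier mask) for the plane n.x + d = 0."""
    rng = np.random.default_rng(rng)
    best_inliers, best_plane = None, None
    for _ in range(n_iters):
        # Sample three distinct points and form a candidate plane.
        p0, p1, p2 = points[rng.choice(len(points), 3, replace=False)]
        normal = np.cross(p1 - p0, p2 - p0)
        norm = np.linalg.norm(normal)
        if norm < 1e-9:          # degenerate (collinear) sample, skip
            continue
        normal /= norm
        d = -normal @ p0
        # Count points within the distance threshold of the candidate plane.
        inliers = np.abs(points @ normal + d) < inlier_thresh
        if best_inliers is None or inliers.sum() > best_inliers.sum():
            best_inliers, best_plane = inliers, (normal, d)
    return best_plane, best_inliers
```

    The inliers of the winning plane are taken as the floor; the remaining points are candidates for body-shape detection.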

    Implementation of 2D EKF-based simultaneous localisation and mapping for a mobile robot

    The main goal of this project is to implement a basic EKF-based SLAM operation that can sufficiently estimate the state of a UGV operating in a real environment involving dynamic objects. Several problems in the practical implementation of SLAM, such as processing measurement data, removing measurement bias, extracting landmarks from the measurement data, pre-filtering the extracted landmarks and performing data association on the observed landmarks, are examined during the operation of the EKF-based SLAM system. In addition, the EKF-based SLAM operation is compared with dead reckoning and the Global Positioning System (GPS) to determine its effectiveness and performance in the real environment.
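    The EKF-based SLAM loop described above alternates a prediction step driven by odometry with an update step driven by landmark observations. A minimal sketch of the two steps for a 2D unicycle pose and a range-bearing measurement of a known landmark, assuming generic motion and measurement models rather than the project's exact ones:

```python
import numpy as np

def ekf_predict(x, P, v, w, dt, Q):
    """EKF prediction for a unicycle pose x = [px, py, theta] driven by
    linear velocity v and angular velocity w over one time step dt."""
    px, py, th = x
    x_pred = np.array([px + v*dt*np.cos(th), py + v*dt*np.sin(th), th + w*dt])
    F = np.array([[1, 0, -v*dt*np.sin(th)],   # Jacobian of the motion model
                  [0, 1,  v*dt*np.cos(th)],
                  [0, 0, 1]])
    return x_pred, F @ P @ F.T + Q

def ekf_update(x, P, z, landmark, R):
    """Range-bearing update z = [r, phi] against a landmark at a known position."""
    dx, dy = landmark - x[:2]
    q = dx*dx + dy*dy
    r = np.sqrt(q)
    z_hat = np.array([r, np.arctan2(dy, dx) - x[2]])
    H = np.array([[-dx/r, -dy/r,  0],         # Jacobian of the measurement model
                  [ dy/q, -dx/q, -1]])
    y = z - z_hat
    y[1] = (y[1] + np.pi) % (2*np.pi) - np.pi  # wrap the bearing innovation
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    return x + K @ y, (np.eye(3) - K @ H) @ P
```

    A full SLAM filter augments the state with the landmark positions themselves; the structure of both steps stays the same.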

    ResQbot: A mobile rescue robot for casualty extraction

    Performing search and rescue missions in disaster-struck environments is challenging. Despite the advances in the robotic search phase of rescue missions, few works have focused on the physical casualty extraction phase. In this work, we propose a mobile rescue robot that is capable of performing a safe casualty extraction routine. To perform this routine, the robot adopts a loco-manipulation approach. We have designed and built a mobile rescue robot platform called ResQbot as a proof of concept of the proposed system. We have conducted preliminary experiments using a sensorised human-sized dummy as a victim to confirm that the platform is capable of performing a safe casualty extraction procedure.

    Casualty detection from 3D point cloud data for autonomous ground mobile rescue robots

    One of the most important features of mobile rescue robots is the ability to autonomously detect casualties, i.e. human bodies, which are usually lying on the ground. This paper proposes a novel method for autonomously detecting casualties lying on the ground using 3D point-cloud data obtained from an on-board sensor, such as an RGB-D camera or a 3D LIDAR, on a mobile rescue robot. In this method, the obtained 3D point-cloud data is projected onto the ground plane, i.e. the floor, detected within the point cloud. The projected point cloud is then converted into a grid-map that serves as the input for the human body shape detection algorithm. The proposed method is evaluated by detecting a human dummy, placed in different random positions and orientations, using an on-board RGB-D camera on a mobile rescue robot called ResQbot. To evaluate the robustness of the casualty detection method, the camera is set to different orientation angles. The experimental results show that, using the point-cloud data from the on-board RGB-D camera, the proposed method successfully detects the casualty in all tested body positions and orientations relative to the on-board camera, as well as at all tested camera angles.
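    The projection-and-rasterisation step described above can be sketched as follows. This is a generic reconstruction, assuming a unit plane normal from a prior ground-detection step; the cell size and grid extent are illustrative parameters, not the paper's values.

```python
import numpy as np

def gppc_gridmap(points, normal, d, cell=0.05, extent=2.0):
    """Project an (N, 3) point cloud onto the detected ground plane
    n.x + d = 0 and rasterise the result into a binary occupancy grid.
    `normal` must be a unit vector; cells are `cell` metres square."""
    normal = np.asarray(normal, float)
    # Signed distance of each point to the plane, then orthogonal projection.
    dist = points @ normal + d
    proj = points - np.outer(dist, normal)
    # Build an in-plane basis (u, v) orthogonal to the plane normal.
    a = np.array([1.0, 0.0, 0.0])
    if abs(normal @ a) > 0.9:
        a = np.array([0.0, 1.0, 0.0])
    u = np.cross(normal, a); u /= np.linalg.norm(u)
    v = np.cross(normal, u)
    uv = np.column_stack([proj @ u, proj @ v])
    # Rasterise the in-plane coordinates into a square grid.
    n_cells = int(2 * extent / cell)
    idx = ((uv + extent) / cell).astype(int)
    ok = ((idx >= 0) & (idx < n_cells)).all(axis=1)
    grid = np.zeros((n_cells, n_cells), dtype=bool)
    grid[idx[ok, 0], idx[ok, 1]] = True
    return grid
```

    The resulting binary image is the kind of top-down silhouette a body-shape detector can then scan.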

    Sim-to-real learning for casualty detection from ground projected point cloud data

    This paper addresses the problem of human body detection, particularly of a human body lying on the ground (i.e. a casualty), using point-cloud data. The ability to detect a casualty is one of the most important features of mobile rescue robots, enabling them to operate autonomously. We propose a deep-learning-based casualty detection method using a deep convolutional neural network (CNN). This network is trained to detect a casualty from a point-cloud input. In the proposed method, the point-cloud input is pre-processed to generate a depth-image-like, ground-projected heightmap. This heightmap is generated from the projected distance of each point onto the ground plane detected within the point-cloud data. The generated heightmap, in image form, is then used as the input for the CNN to detect a human body lying on the ground. To train the neural network, we propose a novel sim-to-real approach, in which the network model is trained on synthetic data obtained in simulation and then tested on real sensor data. To make the model transferable to real data, we adopt specific data augmentation strategies on the synthetic training data. The experimental results show that the data augmentation introduced during training is essential for improving the performance of the trained model on real data. More specifically, the results demonstrate that augmentations applied to the raw point-cloud data contribute to a considerable improvement in the trained model’s performance.
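    The augmentation strategies are described above only at a high level. A hedged sketch of typical raw point-cloud augmentations for sim-to-real transfer (random yaw rotation, sensor-noise jitter, point dropout), with all parameter values illustrative rather than taken from the paper:

```python
import numpy as np

def augment_cloud(points, rng=None, yaw_range=np.pi, jitter_sd=0.01, drop_frac=0.1):
    """Augment a synthetic (N, 3) point cloud for sim-to-real training:
    random yaw rotation about the vertical axis, per-point Gaussian jitter
    (mimicking sensor noise) and random point dropout (mimicking occlusion
    and sparsity in real depth data)."""
    rng = np.random.default_rng(rng)
    # Random rotation about the vertical (z) axis.
    yaw = rng.uniform(-yaw_range, yaw_range)
    c, s = np.cos(yaw), np.sin(yaw)
    R = np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])
    out = points @ R.T
    # Per-point Gaussian jitter.
    out = out + rng.normal(0.0, jitter_sd, out.shape)
    # Random dropout of a fraction of the points.
    keep = rng.random(len(out)) > drop_frac
    return out[keep]
```

    Applying such augmentations before the heightmap is rendered exposes the network to sensor-like imperfections the clean simulation lacks.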

    Improved energy efficiency via parallel elastic elements for the straight-legged vertically-compliant robot SLIDER

    Most state-of-the-art bipedal robots are designed to be anthropomorphic, and therefore possess articulated legs with knees. Whilst this facilitates smoother, human-like locomotion, there are implementation issues that make walking with straight legs difficult. Many robots have to move with a constant bend in the legs to avoid a singularity occurring at the knee joints. The actuators must work constantly to maintain this stance, which can negate any energy-saving techniques employed. Furthermore, vertical compliance disappears when the leg is straight, and the robot undergoes high-energy-loss events such as impacts from running and jumping, as the impact force travels through the fully extended joints to the hips. In this paper, we attempt to improve energy efficiency in a simple yet effective way: attaching bungee cords as elastic elements in parallel to the legs of SLIDER, a novel knee-less biped robot, and show that the robot’s prismatic hip joints preserve vertical compliance despite the legs being constantly straight. Due to the nonlinear dynamics of the bungee cords and various sources of friction, Bayesian Optimization is utilised to find the optimal configuration of bungee cords that achieves the largest reduction in energy consumption. The optimal solution found saves 15% of the energy consumption compared to the robot configuration without parallel elastic elements. Additional video: https://youtu.be/ZTaG9−Dz8
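    The paper applies Bayesian Optimization to bungee-cord configurations. As a generic illustration of the technique only (not the paper's search space, objective or implementation), a minimal 1-D Bayesian optimisation loop with a Gaussian-process surrogate and an expected-improvement acquisition function:

```python
import math
import numpy as np

def rbf(a, b, ls=0.2):
    """Squared-exponential kernel between two 1-D input arrays."""
    d = a[:, None] - b[None, :]
    return np.exp(-0.5 * (d / ls) ** 2)

def expected_improvement(mu, sigma, best):
    """Expected improvement for minimisation, elementwise over candidates."""
    sigma = np.maximum(sigma, 1e-9)
    z = (best - mu) / sigma
    cdf = 0.5 * (1.0 + np.vectorize(math.erf)(z / math.sqrt(2.0)))
    pdf = np.exp(-0.5 * z**2) / math.sqrt(2.0 * math.pi)
    return (best - mu) * cdf + sigma * pdf

def bayes_opt(f, bounds=(0.0, 1.0), n_init=4, n_iter=12, noise=1e-4, rng=0):
    """Minimise a 1-D black-box f with a GP surrogate and EI acquisition."""
    rng = np.random.default_rng(rng)
    X = rng.uniform(*bounds, n_init)          # initial random design
    y = np.array([f(x) for x in X])
    grid = np.linspace(*bounds, 200)          # candidate acquisition points
    for _ in range(n_iter):
        K = rbf(X, X) + noise * np.eye(len(X))
        Kinv = np.linalg.inv(K)
        ks = rbf(grid, X)
        mu = ks @ Kinv @ y                    # GP posterior mean on the grid
        var = 1.0 - np.einsum('ij,jk,ik->i', ks, Kinv, ks)
        ei = expected_improvement(mu, np.sqrt(np.maximum(var, 0.0)), y.min())
        x_next = grid[np.argmax(ei)]          # evaluate where EI is largest
        X = np.append(X, x_next)
        y = np.append(y, f(x_next))
    return X[np.argmin(y)], y.min()
```

    In the paper's setting, each black-box evaluation would correspond to measuring the robot's energy consumption for one candidate bungee configuration.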

    Robot DE NIRO: a human-centered, autonomous, mobile research platform for cognitively-enhanced manipulation

    We introduce Robot DE NIRO, an autonomous, collaborative, humanoid robot for mobile manipulation. We built DE NIRO to perform a wide variety of manipulation behaviors, with a focus on pick-and-place tasks. DE NIRO is designed to be used in a domestic environment, especially in support of caregivers working with the elderly. Given this design focus, DE NIRO can interact naturally, reliably, and safely with humans, autonomously navigate through environments on command, intelligently retrieve or move target objects, and avoid collisions efficiently. We describe DE NIRO’s hardware and software, including an extensive vision sensor suite of 2D and 3D LIDARs, a depth camera, and a 360-degree camera rig; two types of custom grippers; and a custom-built exoskeleton called DE VITO. We demonstrate DE NIRO’s manipulation capabilities in three illustrative challenges: First, we have DE NIRO perform a fetch-an-object challenge. Next, we add more cognition to DE NIRO’s object recognition and grasping abilities, confronting it with small objects of unknown shape. Finally, we extend DE NIRO’s capabilities into dual-arm manipulation of larger objects. We put particular emphasis on the features that enable DE NIRO to interact safely and naturally with humans. Our contribution is in sharing how a humanoid robot with complex capabilities can be designed and built quickly with off-the-shelf hardware and open-source software. Supplementary material, including our code, documentation, videos and the CAD models of several hardware parts, is openly available at https://www.imperial.ac.uk/robot-intelligence/software

    ResQbot 2.0: an improved design of a mobile rescue robot with an inflatable neck securing device for safe casualty extraction

    Despite the large number of research studies conducted in the field of search and rescue robotics, comparatively little attention has been given to the development of rescue robots capable of performing physical rescue interventions, including loading and transporting victims to a safe zone, i.e. casualty extraction tasks. The aim of this study is to develop a mobile rescue robot that can assist first responders in saving casualties from a danger area by performing a casualty extraction procedure, whilst ensuring that no additional injury is caused by the operation and no additional lives are put at risk. In this paper, we present the novel design of ResQbot 2.0, a mobile rescue robot designed for performing the casualty extraction task. This robot is a stretcher-type casualty extraction robot and a significantly improved version of the initial proof-of-concept prototype, ResQbot (retrospectively referred to as ResQbot 1.0), developed in our previous work. The proposed design and development of the mechanical system of ResQbot 2.0, as well as the method for safely loading a full-body casualty onto the robot’s ‘stretcher bed’, are described in detail based on the conducted literature review, an evaluation of our previous work and feedback provided by medical professionals. To verify the proposed design and the casualty extraction procedure, we perform simulation experiments in the Gazebo physics simulator. The simulation results demonstrate the capability of ResQbot 2.0 to successfully carry out safe casualty extraction.