
    A Depth Space Approach for Evaluating Distance to Objects -- with Application to Human-Robot Collision Avoidance

    We present a novel approach to estimate the distance between a generic point in Cartesian space and objects detected with a depth sensor. This information is crucial in many robotic applications, e.g., for collision avoidance, contact point identification, and augmented reality. The key idea is to perform all distance evaluations directly in the depth space. This also allows the frustum generated by each pixel of the depth image to be taken into account, capturing both the pixel size and the occluded points. Different techniques to aggregate distance data coming from multiple object points are proposed. We compare the depth space approach with the commonly used Cartesian space and configuration space approaches, showing that the presented method provides better results and faster execution times. An application to human-robot collision avoidance using a KUKA LWR IV robot and a Microsoft Kinect sensor illustrates the effectiveness of the approach.
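    The core idea can be sketched in a few lines of Python. This is a minimal illustration, not the paper's exact formulation: the pinhole intrinsics (FX, FY, CX, CY) are hypothetical, and the handling of the occluded region (assuming the obstacle extends along the viewing ray behind the measured surface) is a conservative simplification in the spirit of the depth-space approach.

```python
import numpy as np

# Hypothetical pinhole intrinsics for the example (not from the paper).
FX, FY, CX, CY = 525.0, 525.0, 319.5, 239.5

def pixel_to_cartesian(u, v, depth):
    """Back-project a depth pixel (u, v, depth) into the camera frame."""
    return np.array([(u - CX) * depth / FX, (v - CY) * depth / FY, depth])

def distance_to_pixel(point, u, v, depth):
    """Distance from a Cartesian point (camera frame) to the obstacle
    observed at pixel (u, v) with measured depth `depth`.

    If the point lies behind the measured surface, the obstacle may extend
    toward it along the viewing ray, so only the lateral distance at the
    point's own depth is counted (conservative treatment of occluded space).
    """
    point = np.asarray(point, dtype=float)
    if point[2] <= depth:  # point in front of (or on) the measured surface
        return float(np.linalg.norm(pixel_to_cartesian(u, v, depth) - point))
    # Point inside the region occluded by this pixel: compare against the
    # viewing ray evaluated at the point's own depth.
    ray_xy = np.array([(u - CX) * point[2] / FX, (v - CY) * point[2] / FY])
    return float(np.linalg.norm(ray_xy - point[:2]))
```

    Evaluating the distance this way avoids back-projecting the whole depth image into a Cartesian point cloud, which is where the speed advantage over Cartesian-space methods comes from.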

    A collision avoidance system for a spaceplane manipulator arm

    Part of the activity in the area of collision avoidance related to the Hermes spaceplane is reported. A collision avoidance software system that was defined, developed, and implemented in this project is presented. It computes the intersections between the solids representing the arm, the payload, and the surrounding objects. Its measured performance shows that it is feasible with respect to the computing resources available on board.

    Real-time computation of distance to dynamic obstacles with multiple depth sensors

    We present an efficient method to evaluate distances between dynamic obstacles and a number of points of interest (e.g., placed on the links of a robot) when using multiple depth cameras. A depth-space oriented discretization of the Cartesian space is introduced that best represents the workspace monitored by a depth camera, including occluded points. A depth grid map can be initialized offline from the arrangement of the multiple depth cameras, and its peculiar search characteristics allow the information given by the multiple sensors to be fused online in a very simple and fast way. The real-time performance of the proposed approach is shown by means of collision avoidance experiments in which two Kinect sensors monitor a human-robot coexistence task.

    An Experimental Study on Pitch Compensation in Pedestrian-Protection Systems for Collision Avoidance and Mitigation

    This paper describes an improved stereovision system for the anticipated detection of car-to-pedestrian accidents. An improvement over previous versions of the pedestrian-detection system is achieved by compensating for the camera's pitch angle, since doing so yields higher accuracy in locating the ground plane and more accurate depth measurements. The system has been mounted on two different prototype cars, and several real collision-avoidance and collision-mitigation experiments have been carried out on private circuits using actors and dummies, which represents one of the main contributions of this paper. Collision avoidance is carried out by means of deceleration strategies whenever the accident is avoidable. Likewise, collision mitigation is accomplished by triggering an active hood system.

    An advanced telerobotic system for shuttle payload changeout room processing applications

    To potentially alleviate the inherent difficulties in the ground processing of the Space Shuttle and its associated payloads, a teleoperated, semi-autonomous robotic processing system for the Payload Changeout Room (PCR) is now in the conceptual stages. The complete PCR robotic system as currently conceived is described, and critical design issues and the required technologies are discussed.

    Vision and Learning for Deliberative Monocular Cluttered Flight

    Cameras provide a rich source of information while being passive, cheap, and lightweight for small and medium Unmanned Aerial Vehicles (UAVs). In this work we present the first implementation of receding horizon control, which is widely used in ground vehicles, with monocular vision as the only sensing mode for autonomous UAV flight in dense clutter. We make it feasible on UAVs via a number of contributions: a novel coupling of perception and control via relevant and diverse multiple interpretations of the scene around the robot, leveraging recent advances in machine learning for anytime budgeted cost-sensitive feature selection, and fast non-linear regression for monocular depth prediction. We empirically demonstrate the efficacy of our pipeline via real-world experiments of more than 2 km through dense trees with a quadrotor built from off-the-shelf parts. Moreover, our pipeline is designed to combine information from other modalities, such as stereo and lidar, if available.

    Safety-related Tasks within the Set-Based Task-Priority Inverse Kinematics Framework

    In this paper we present a framework that allows the motion control of a robotic arm while automatically handling different kinds of safety-related tasks. The developed controller is based on a Task-Priority Inverse Kinematics algorithm that controls the manipulator's motion while respecting constraints defined either in the joint space or in the operational space, in the form of equality-based or set-based tasks. This makes it possible to define, among others, tasks such as joint limits, obstacle avoidance, or restrictions of the workspace in the operational space. Additionally, an algorithm for the real-time computation of the minimum distance between the manipulator and other objects in the environment using depth measurements has been implemented, effectively enabling obstacle avoidance tasks. Experiments with a Jaco² manipulator, operating in an environment where an RGB-D sensor is used for obstacle detection, show the effectiveness of the developed system.
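    The task-priority recursion underlying such controllers can be sketched as follows. This is a minimal Python illustration of the classical equality-task case with damped least squares; the set-based task activation/deactivation logic described in the paper is omitted, and the function name and damping parameter are ours.

```python
import numpy as np

def task_priority_step(J_list, err_list, n_joints, damping=1e-3):
    """One step of damped task-priority inverse kinematics.

    Tasks are given in priority order: each lower-priority task is projected
    into the nullspace of all higher-priority task Jacobians, so it cannot
    disturb them.
    """
    dq = np.zeros(n_joints)
    P = np.eye(n_joints)  # nullspace projector of higher-priority tasks
    for J, err in zip(J_list, err_list):
        JP = J @ P
        # Damped least-squares pseudoinverse for robustness near singularities.
        JP_pinv = JP.T @ np.linalg.inv(JP @ JP.T + damping * np.eye(J.shape[0]))
        dq = dq + JP_pinv @ (err - J @ dq)
        P = P @ (np.eye(n_joints) - JP_pinv @ JP)
    return dq
```

    A minimum-distance task computed from depth measurements would enter this loop as a high-priority row of `J_list`, constraining motion along the direction toward the nearest obstacle.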

    MPC-based humanoid pursuit-evasion in the presence of obstacles

    We consider a pursuit-evasion problem between humanoids in the presence of obstacles. In our scenario, the pursuer enters the safety area of the evader headed for collision, while the latter executes a fast evasive motion. Control schemes are designed for both the pursuer and the evader. They are structurally identical, although their objectives differ: the pursuer tries to align its direction of motion with the line-of-sight to the evader, whereas the evader tries to move in a direction orthogonal to the line-of-sight to the pursuer. At the core of the control architecture is a Model Predictive Control scheme for generating a stable gait. This allows for the inclusion of workspace obstacles, which we take into account at two levels: during the determination of the footstep orientations and as an explicit MPC constraint. We illustrate the results with simulations on NAO humanoids.
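    The two heading objectives can be written down in a few lines. This is only a geometric sketch of the reference directions; the MPC gait generation and obstacle constraints of the paper are not modeled here, and the function names are ours.

```python
import math

def pursuer_heading(p_pursuer, p_evader):
    """Heading that aligns the pursuer's motion with the line-of-sight
    to the evader."""
    return math.atan2(p_evader[1] - p_pursuer[1], p_evader[0] - p_pursuer[0])

def evader_heading(p_pursuer, p_evader, current_heading):
    """Heading orthogonal to the evader's line-of-sight to the pursuer.

    Of the two orthogonal directions, the one closer to the evader's
    current heading is chosen, to avoid abrupt turns.
    """
    los = math.atan2(p_pursuer[1] - p_evader[1], p_pursuer[0] - p_evader[0])
    def angdist(a, b):
        # shortest angular distance, wrapped to [0, pi]
        return abs(math.atan2(math.sin(a - b), math.cos(a - b)))
    return min((los + math.pi / 2, los - math.pi / 2),
               key=lambda h: angdist(h, current_heading))
```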

    Fast, Accurate Thin-Structure Obstacle Detection for Autonomous Mobile Robots

    Safety is paramount for mobile robotic platforms such as self-driving cars and unmanned aerial vehicles. This work is devoted to a task that is indispensable for safety yet was largely overlooked in the past -- detecting obstacles that have very thin structures, such as wires, cables, and tree branches. This is a challenging problem, as thin objects can be problematic for active sensors such as lidar and sonar, and even for stereo cameras. In this work, we propose to use video sequences for thin obstacle detection. We represent obstacles with edges in the video frames and reconstruct them in 3D using efficient edge-based visual odometry techniques. We provide both a monocular camera solution and a stereo camera solution. The former incorporates Inertial Measurement Unit (IMU) data to resolve scale ambiguity, while the latter enjoys a novel, purely vision-based solution. Experiments demonstrate that the proposed methods are fast and able to detect thin obstacles robustly and accurately under various conditions.
    Comment: Appeared at the IEEE CVPR 2017 Workshop on Embedded Vision.