
    A Depth Space Approach for Evaluating Distance to Objects -- with Application to Human-Robot Collision Avoidance

    We present a novel approach to estimate the distance between a generic point in Cartesian space and objects detected with a depth sensor. This information is crucial in many robotic applications, e.g., for collision avoidance, contact point identification, and augmented reality. The key idea is to perform all distance evaluations directly in the depth space. This allows distance estimation that also considers the frustum generated by the pixel on the depth image, which accounts for both the pixel size and the occluded points. Different techniques to aggregate distance data coming from multiple object points are proposed. We compare the depth space approach with the commonly used Cartesian space and configuration space approaches, showing that the presented method provides better results and faster execution times. An application to human-robot collision avoidance using a KUKA LWR IV robot and a Microsoft Kinect sensor illustrates the effectiveness of the approach.
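    To make the idea concrete, the sketch below (Python/NumPy; a minimal illustration under our own assumption of a pinhole depth camera with focal length f and principal point (cx, cy), not the authors' exact formulation) projects a Cartesian point of interest into the depth image and evaluates its distance to every obstacle pixel directly in depth space, taking the minimum as one possible aggregation.

        import numpy as np

        def depth_space_distance(point, depth, f, cx, cy):
            # Distance from a Cartesian point (x, y, z) in the camera frame to the
            # nearest obstacle pixel of a depth image, evaluated in depth space.
            # Illustrative sketch only; frustum/occlusion handling is simplified.
            xp, yp, zp = point
            up = f * xp / zp + cx                  # project the point into the image
            vp = f * yp / zp + cy

            rows, cols = depth.shape
            us, vs = np.meshgrid(np.arange(cols), np.arange(rows))
            valid = depth > 0                      # pixels with a depth measurement

            # If the measured surface lies behind the point, the region it occludes
            # may still contain obstacles, so conservatively clamp its depth.
            do = np.minimum(depth, zp)

            dx = (us - cx) * do / f - (up - cx) * zp / f
            dy = (vs - cy) * do / f - (vp - cy) * zp / f
            dz = do - zp
            dist = np.sqrt(dx**2 + dy**2 + dz**2)
            return dist[valid].min() if valid.any() else np.inf

    Working on the raw depth image this way avoids converting the whole frame into a Cartesian point cloud for every query point, which is where the speed advantage over Cartesian-space evaluation comes from.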

    A Hierarchical Planning Framework for AUV Mission Management in a Spatio-Temporal Varying Ocean

    The purpose of this paper is to provide a hierarchical dynamic mission planning framework for a single autonomous underwater vehicle (AUV) to accomplish the task-assignment process in a limited time interval while operating in an uncertain undersea environment, where spatio-temporal variability of the operating field is taken into account. To this end, a high-level reactive mission planner and a low-level motion planning system are constructed. The high-level system is responsible for task priority assignment and for guiding the vehicle toward targets of interest while ensuring on-time termination of the mission. The lower layer is in charge of generating optimal trajectories based on the sequence of tasks and the dynamics of the operating terrain. The mission planner is able to reactively re-arrange the tasks based on mission/terrain updates, while the low-level planner copes with unexpected changes of the terrain by correcting the old path and re-generating a new trajectory. As a result, the vehicle is able to undertake the maximum number of tasks with a certain degree of maneuverability while maintaining situational awareness of the operating field. The computational engine of this framework is the biogeography-based optimization (BBO) algorithm, which is capable of providing efficient solutions. To evaluate the performance of the proposed framework, a realistic model of the undersea environment is first built from real map data, and several scenarios, treated as real experiments, are then designed in a simulation study. Additionally, to show the robustness and reliability of the framework, a Monte Carlo simulation is carried out and statistical analysis is performed. The simulation results indicate the significant potential of the two-level hierarchical mission planning system for mission success and its applicability to real-time implementation.
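    The computational engine mentioned above is biogeography-based optimization (BBO). The sketch below (Python/NumPy) shows a generic BBO migration/mutation loop for minimizing an arbitrary cost function over a bounded continuous vector; the cost function, bounds, and parameter values are placeholders of our own, not the paper's mission-planning formulation.

        import numpy as np

        def bbo_minimize(cost, bounds, pop_size=50, generations=200,
                         mutation_rate=0.02, seed=0):
            # Generic biogeography-based optimization (BBO) sketch: habitats import
            # features from fitter habitats (migration) and occasionally mutate.
            rng = np.random.default_rng(seed)
            low, high = map(np.asarray, bounds)
            dim = low.size
            pop = rng.uniform(low, high, size=(pop_size, dim))

            for _ in range(generations):
                fitness = np.array([cost(h) for h in pop])
                pop = pop[np.argsort(fitness)]        # best (lowest cost) first
                mu = np.linspace(1.0, 0.0, pop_size)  # emigration rate falls with rank
                lam = 1.0 - mu                        # immigration rate rises with rank

                new_pop = pop.copy()
                for i in range(pop_size):
                    for d in range(dim):
                        if rng.random() < lam[i]:
                            # Immigrate this feature from a habitat chosen with
                            # probability proportional to its emigration rate.
                            j = rng.choice(pop_size, p=mu / mu.sum())
                            new_pop[i, d] = pop[j, d]
                        if rng.random() < mutation_rate:
                            new_pop[i, d] = rng.uniform(low[d], high[d])
                new_pop[0] = pop[0]                   # elitism: keep the best habitat
                pop = new_pop

            fitness = np.array([cost(h) for h in pop])
            return pop[np.argmin(fitness)]

    In the mission-planning setting, a habitat would presumably encode a candidate task ordering or route rather than a raw real-valued vector, with the cost reflecting mission value and time constraints.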

    Real-time on-board obstacle avoidance for UAVs based on embedded stereo vision

    In order to improve usability and safety, modern unmanned aerial vehicles (UAVs) are equipped with sensors to monitor the environment, such as laser scanners and cameras. One important aspect of this monitoring process is detecting obstacles in the flight path in order to avoid collisions. Since a large number of consumer UAVs suffer from tight weight and power constraints, our work focuses on obstacle avoidance based on a lightweight stereo camera setup. We use disparity maps, which are computed from the camera images, to locate obstacles and to automatically steer the UAV around them. For disparity map computation we optimize the well-known semi-global matching (SGM) approach for deployment on an embedded FPGA. The disparity maps are then converted into simpler representations, the so-called U-/V-maps, which are used for obstacle detection. Obstacle avoidance is based on a reactive approach that finds the shortest path around the obstacles as soon as they come within a critical distance of the UAV. One of the fundamental goals of our work was the reduction of development costs by closing the gap between application development and hardware optimization. Hence, we aimed at using high-level synthesis (HLS) for porting our algorithms, which are written in C/C++, to the embedded FPGA. We evaluated our implementation of the disparity estimation on the KITTI Stereo 2015 benchmark. The integrity of the overall real-time reactive obstacle avoidance algorithm was evaluated by using hardware-in-the-loop testing in conjunction with two flight simulators. Comment: Accepted in the International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences.
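    For readers unfamiliar with U-/V-maps: they are simply column-wise and row-wise disparity histograms of the disparity image, in which obstacles stand out as compact bright segments. A minimal NumPy sketch of their construction follows (our own illustration, not the authors' FPGA/HLS implementation).

        import numpy as np

        def u_v_maps(disparity, max_disp):
            # U-map: per-column disparity histogram, shape (max_disp + 1, width).
            # V-map: per-row disparity histogram, shape (height, max_disp + 1).
            h, w = disparity.shape
            d = np.clip(disparity.astype(int), 0, max_disp)
            valid = disparity > 0                 # ignore invalid (zero) disparities

            u_map = np.zeros((max_disp + 1, w), dtype=np.int32)
            v_map = np.zeros((h, max_disp + 1), dtype=np.int32)
            for row in range(h):
                for col in range(w):
                    if valid[row, col]:
                        u_map[d[row, col], col] += 1
                        v_map[row, d[row, col]] += 1
            return u_map, v_map

    In the V-map the ground plane typically appears as a slanted line while upright obstacles form vertical segments, and in the U-map obstacles appear as horizontal segments whose extent indicates their width in the image, which keeps the subsequent detection step simple and cheap.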

    Fast, Accurate Thin-Structure Obstacle Detection for Autonomous Mobile Robots

    Safety is paramount for mobile robotic platforms such as self-driving cars and unmanned aerial vehicles. This work is devoted to a task that is indispensable for safety yet was largely overlooked in the past -- detecting obstacles that have very thin structures, such as wires, cables and tree branches. This is a challenging problem, as thin objects can be problematic for active sensors such as lidar and sonar and even for stereo cameras. In this work, we propose to use video sequences for thin obstacle detection. We represent obstacles with edges in the video frames and reconstruct them in 3D using efficient edge-based visual odometry techniques. We provide both a monocular camera solution and a stereo camera solution. The former incorporates Inertial Measurement Unit (IMU) data to resolve scale ambiguity, while the latter enjoys a novel, purely vision-based solution. Experiments demonstrate that the proposed methods are fast and able to detect thin obstacles robustly and accurately under various conditions. Comment: Appeared at the IEEE CVPR 2017 Workshop on Embedded Vision.
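    As a toy illustration of the stereo variant of this idea (not the authors' edge-based visual odometry pipeline), the sketch below detects edge pixels in the left image, back-projects them to 3D using a disparity map, and keeps those closer than a safety range; OpenCV and NumPy are assumed, and all parameter values are placeholders.

        import cv2
        import numpy as np

        def thin_obstacle_points(left_gray, disparity, f, baseline, cx, cy,
                                 max_range=5.0):
            # Return 3D points (camera frame) of edge pixels closer than max_range.
            # Toy sketch: a real system tracks edges over frames and filters outliers.
            edges = cv2.Canny(left_gray, 50, 150) > 0   # thin structures show up as edges
            valid = edges & (disparity > 0)
            v, u = np.nonzero(valid)
            d = disparity[v, u].astype(np.float64)

            z = f * baseline / d                        # depth from stereo disparity
            x = (u - cx) * z / f
            y = (v - cy) * z / f
            pts = np.stack([x, y, z], axis=1)
            return pts[z < max_range]                   # candidate thin-obstacle points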