
    Infrastructure-Aided Localization and State Estimation for Autonomous Mobile Robots

    A slip-aware localization framework is proposed for mobile robots experiencing wheel slip in dynamic environments. The framework fuses infrastructure-aided visual tracking data (via fisheye lenses) with proprioceptive sensory data from a skid-steer mobile robot to enhance accuracy and reduce the variance of the estimated states. The framework comprises two threads: a visual thread that detects and tracks the robot in the stereo image through computationally efficient 3D point cloud generation over a region of interest, and an ego-motion thread that uses a slip-aware odometry mechanism to estimate the robot pose with a motion model that accounts for wheel slip. Covariance intersection is used to fuse the pose prediction (from proprioceptive data) with the visual-thread estimate, such that the updated estimate remains consistent. As confirmed by experiments on a skid-steer mobile robot, the designed localization framework addresses state estimation challenges for indoor/outdoor autonomous mobile robots that experience high slip, uneven torque distribution across the wheels (commanded by the motion planner), or occlusion when observed by an infrastructure-mounted camera. The proposed system is real-time capable and scalable to multiple robots and multiple environmental cameras.
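
    As a concrete illustration of the covariance intersection update described above, the sketch below fuses a slip-aware odometry pose with an infrastructure-camera pose using the standard information-form combination and a trace-minimizing weight. The pose vectors, covariances, and helper names are hypothetical stand-ins, not the paper's implementation.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def covariance_intersection(a, A, b, B):
    """Fuse two estimates (a, A) and (b, B) whose cross-correlation is unknown.

    a, b : mean vectors (e.g. a planar robot pose [x, y, yaw])
    A, B : covariance matrices of the two estimates
    The fused estimate stays consistent for any weight omega in [0, 1].
    """
    A_inv, B_inv = np.linalg.inv(A), np.linalg.inv(B)

    def fused_trace(omega):
        # Information-form combination for a candidate weight.
        return np.trace(np.linalg.inv(omega * A_inv + (1.0 - omega) * B_inv))

    # Choose the weight that minimizes the trace of the fused covariance.
    omega = minimize_scalar(fused_trace, bounds=(0.0, 1.0), method="bounded").x

    C = np.linalg.inv(omega * A_inv + (1.0 - omega) * B_inv)
    c = C @ (omega * A_inv @ a + (1.0 - omega) * B_inv @ b)
    return c, C

# Hypothetical example: slip-aware odometry prediction vs. camera observation.
odom_pose = np.array([1.02, 0.48, 0.10])          # [x m, y m, yaw rad]
odom_cov  = np.diag([0.04, 0.04, 0.01])
cam_pose  = np.array([0.95, 0.52, 0.08])
cam_cov   = np.diag([0.02, 0.06, 0.02])
fused_pose, fused_cov = covariance_intersection(odom_pose, odom_cov, cam_pose, cam_cov)
```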

    Fast, Accurate Thin-Structure Obstacle Detection for Autonomous Mobile Robots

    Safety is paramount for mobile robotic platforms such as self-driving cars and unmanned aerial vehicles. This work is devoted to a task that is indispensable for safety yet was largely overlooked in the past -- detecting obstacles of very thin structure, such as wires, cables and tree branches. This is a challenging problem, as thin objects can be problematic for active sensors such as lidar and sonar, and even for stereo cameras. In this work, we propose to use video sequences for thin obstacle detection. We represent obstacles with edges in the video frames, and reconstruct them in 3D using efficient edge-based visual odometry techniques. We provide both a monocular camera solution and a stereo camera solution. The former incorporates Inertial Measurement Unit (IMU) data to solve scale ambiguity, while the latter enjoys a novel, purely vision-based solution. Experiments demonstrate that the proposed methods are fast and able to detect thin obstacles robustly and accurately under various conditions. Comment: Appeared at the IEEE CVPR 2017 Workshop on Embedded Vision.
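
    One way to picture the stereo side of this idea is to compute depth only where an edge detector fires, so that thin structures are not lost in dense matching. The sketch below combines OpenCV Canny edges with standard block matching on a rectified pair; the function name and parameters are assumptions, and this is a simplified illustration rather than the edge-based visual odometry pipeline described in the paper.

```python
import cv2
import numpy as np

def edge_points_3d(left_gray, right_gray, focal_px, baseline_m,
                   canny_lo=50, canny_hi=150):
    """Reconstruct 3D points lying on image edges from a rectified stereo pair.

    A thin obstacle (wire, branch) still shows up as an edge even when it
    covers few pixels, so depth is only kept where the Canny detector fires.
    """
    # 1. Edge map on the left image marks candidate thin-structure pixels.
    edges = cv2.Canny(left_gray, canny_lo, canny_hi)

    # 2. Dense disparity from standard block matching (illustrative settings).
    matcher = cv2.StereoBM_create(numDisparities=64, blockSize=9)
    disparity = matcher.compute(left_gray, right_gray).astype(np.float32) / 16.0

    # 3. Keep only edge pixels with a valid disparity and back-project to 3D.
    ys, xs = np.nonzero((edges > 0) & (disparity > 0))
    d = disparity[ys, xs]
    z = focal_px * baseline_m / d                     # depth from disparity
    x = (xs - left_gray.shape[1] / 2.0) * z / focal_px
    y = (ys - left_gray.shape[0] / 2.0) * z / focal_px
    return np.stack([x, y, z], axis=1)                # N x 3 edge point cloud
```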

    On Advanced Mobility Concepts for Intelligent Planetary Surface Exploration

    Surface exploration by wheeled rovers on Earth's Moon (the two Lunokhods) and Mars (NASA's Sojourner and the two MERs) has been conducted very successfully for many years, particularly with regard to long-duration operations. Despite this success, however, the explored surface area has been very small, with total driving distances of about 8 km (Spirit) and 21 km (Opportunity) over six years of operation. Moreover, ESA will send its ExoMars rover to Mars in 2018, and NASA its MSL rover probably this year. However, all these rovers lack sufficient on-board intelligence to cover longer distances, drive much faster, and decide autonomously on the best trajectory to follow. To increase the scientific output of a rover mission, it appears necessary to explore much larger surface areas reliably in much less time. This is the main driver for a robotics institute to combine mechatronics functionalities and develop an intelligent mobile wheeled rover with four or six wheels, with kinematics and locomotion suspension tailored to the terrain in which the rover is to operate. DLR's Robotics and Mechatronics Center has a long tradition in developing advanced components in the fields of light-weight motion actuation, intelligent and soft manipulation, skilled hands and tools, and perception and cognition, and in increasing the autonomy of all kinds of mechatronic systems. The whole design is supported by and based upon detailed modeling, optimization, and simulation tasks. We have developed efficient software tools to simulate rover driveability performance on terrains with various characteristics, such as soft sandy and hard rocky ground as well as inclined planes, where wheel and grouser geometry plays a dominant role. Moreover, rover optimization is performed to support the best engineering intuitions: it optimizes structural and geometric parameters, compares various kinematic suspension concepts, and makes use of realistic cost functions such as mass and consumed-energy minimization, static stability, and more. For self-localization and safe navigation through unknown terrain we make use of fast 3D stereo algorithms that have been used successfully, e.g., in unmanned aerial vehicle applications and on terrestrial mobile systems. The advanced rover design approach is applicable to both lunar and Martian surface exploration. A first mobility concept for a lunar vehicle will be presented.
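
    For the design-optimization step mentioned above, a minimal sketch of a weighted cost is shown below, combining mass, consumed energy, and a static-stability penalty over bounded design variables. The variables, models, and weights are toy assumptions for illustration only, not DLR's simulation tools or cost functions.

```python
import numpy as np
from scipy.optimize import minimize

def rover_design_cost(params, w_mass=1.0, w_energy=1.0, w_stability=2.0):
    """Illustrative weighted cost for rover design optimization.

    params = [wheel_radius_m, track_width_m] -- hypothetical design variables.
    The mass, energy, and stability models are toy stand-ins.
    """
    wheel_radius, track_width = params
    mass = 30.0 + 80.0 * wheel_radius + 25.0 * track_width   # kg, toy model
    energy_per_m = 12.0 / wheel_radius                        # J/m, toy model
    # Static stability: penalize a narrow track (small tip-over margin).
    stability_penalty = max(0.0, 0.8 - track_width) ** 2
    return (w_mass * mass + w_energy * energy_per_m
            + w_stability * 1e3 * stability_penalty)

# Search over bounded design variables (radius 0.1-0.4 m, track 0.5-1.2 m).
result = minimize(rover_design_cost, x0=[0.2, 0.8],
                  bounds=[(0.1, 0.4), (0.5, 1.2)])
best_wheel_radius, best_track_width = result.x
```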

    MOMA: Visual Mobile Marker Odometry

    In this paper, we present a cooperative odometry scheme based on the detection of mobile markers, in line with the idea of cooperative positioning for multiple robots [1]. To this end, we introduce a simple optimization scheme that realizes visual mobile marker odometry via accurate fixed marker-based camera positioning, and analyse the characteristics of the errors inherent to the method compared to classical fixed marker-based navigation and visual odometry. In addition, we provide a specific UAV-UGV configuration that allows for continuous movement of the UAV without stopping, and a minimal caterpillar-like configuration that works with one UGV alone. Finally, we present a real-world implementation and evaluation of the proposed UAV-UGV configuration.
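
    The core of mobile marker odometry can be sketched as chaining rigid-body transforms: while a marker is held fixed, camera poses are anchored to it; when the marker is moved ahead (leapfrogged), its new world pose is anchored from the current camera estimate, which is where drift accumulates. The planar SE(2) formulation and the numbers below are simplified assumptions, not the paper's optimization scheme.

```python
import numpy as np

def se2(x, y, theta):
    """Homogeneous 2D transform for a planar pose (x, y, heading)."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, x],
                     [s,  c, y],
                     [0.0, 0.0, 1.0]])

def update_camera_pose(T_world_marker, T_marker_camera):
    """Camera pose in the world: fixed-marker pose composed with the
    marker-relative detection; error grows only when the marker moves."""
    return T_world_marker @ T_marker_camera

def leapfrog_marker(T_world_camera, T_camera_newmarker):
    """When the marker is re-placed, anchor its new world pose from the
    current camera estimate; this is where odometry drift accumulates."""
    return T_world_camera @ T_camera_newmarker

# Hypothetical detections (e.g. from a fiducial library such as ArUco).
T_world_marker = se2(0.0, 0.0, 0.0)          # marker on the stationary UGV
T_marker_camera = se2(1.5, 0.2, 0.05)        # camera seen 1.5 m from marker
T_world_camera = update_camera_pose(T_world_marker, T_marker_camera)
T_camera_newmarker = se2(2.0, -0.1, 0.0)     # UGV has driven ahead with marker
T_world_marker = leapfrog_marker(T_world_camera, T_camera_newmarker)
```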

    Pushbroom Stereo for High-Speed Navigation in Cluttered Environments

    We present a novel stereo vision algorithm that is capable of obstacle detection on a mobile CPU at 120 frames per second. Our system performs a subset of standard block-matching stereo processing, searching for obstacles at only a single depth. By using an onboard IMU and state estimator, we can recover the position of obstacles at all other depths, building and updating a full depth map at framerate. Here, we describe both the algorithm and our implementation on a high-speed, small UAV flying at over 20 MPH (9 m/s) close to obstacles. The system requires no external sensing or computation and is, to the best of our knowledge, the first high-framerate stereo detection system running onboard a small UAV.
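
    The single-depth idea can be illustrated with a deliberately unoptimized sketch: block matching is evaluated at one fixed disparity only, and pixels with a good match are flagged as obstacles at that depth. The function name, threshold, and block size are assumptions; the actual system runs at 120 fps and additionally propagates past detections through the IMU/state estimator to fill in other depths, which this sketch omits.

```python
import numpy as np

def single_disparity_obstacles(left, right, disparity_px, block=5, sad_thresh=300):
    """Flag pixels whose stereo match at ONE fixed disparity is good,
    i.e. obstacles lying at the single depth the matcher searches.

    left, right : rectified grayscale images (uint8, same shape)
    disparity_px: the only disparity tested (sets the detection depth)
    Returns a boolean mask of likely obstacle pixels at that depth.
    """
    h, w = left.shape
    half = block // 2
    mask = np.zeros((h, w), dtype=bool)
    for y in range(half, h - half):
        for x in range(half + disparity_px, w - half):
            patch_l = left[y - half:y + half + 1, x - half:x + half + 1]
            patch_r = right[y - half:y + half + 1,
                            x - disparity_px - half:x - disparity_px + half + 1]
            # Sum of absolute differences; a low score means the surface
            # really lies at the tested depth in front of the camera.
            sad = np.abs(patch_l.astype(np.int32) - patch_r.astype(np.int32)).sum()
            if sad < sad_thresh:
                mask[y, x] = True
    return mask
```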