    Cellular neural networks for motion estimation and obstacle detection

    Obstacle detection is an important part of video processing because it is indispensable for collision prevention for autonomously navigating moving objects. For example, vehicles driving without human guidance need robust prediction of potential obstacles, like other vehicles or pedestrians. Most common approaches to obstacle detection so far use analytical and statistical methods like motion estimation or generation of maps. In the first part of this contribution, a statistical algorithm for obstacle detection in monocular video sequences is presented. The proposed procedure is based on motion estimation and a planar world model appropriate to traffic scenes. The processing steps of the statistical procedure are feature extraction, a subsequent displacement vector estimation, and a robust estimation of the motion parameters. Since the proposed procedure is composed of several processing steps, the error propagation of the successive steps often leads to inaccurate results. In the second part of this contribution, it is demonstrated that the above-mentioned problems can be efficiently overcome by using Cellular Neural Networks (CNN). It is shown that a direct obstacle detection algorithm can be easily performed based only on CNN processing of the input images. Besides the enormous computing power of programmable CNN-based devices, the proposed method is also very robust in comparison to the statistical method, because it shows much less sensitivity to noisy inputs. Using the proposed approach to obstacle detection in planar worlds, real-time processing of large input images has been made possible.
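    The statistical pipeline described above can be illustrated with a minimal sketch (not the paper's implementation): given displacement vectors from feature matching between two frames, robustly estimate the dominant ego-motion and flag vectors that deviate from it as obstacle candidates. Function names and the threshold are illustrative assumptions.

```python
# Minimal sketch: robust estimate of dominant image motion from
# displacement vectors, flagging inconsistent vectors as potential
# obstacles. Threshold and names are illustrative, not from the paper.
from statistics import median

def detect_outlier_vectors(displacements, threshold=2.0):
    """displacements: list of (dx, dy) vectors from feature matching.
    Returns indices of vectors inconsistent with the dominant motion."""
    # Per-component median: a robust estimator, less sensitive to
    # outliers than the mean (the kind of robustness the abstract cites).
    mx = median(dx for dx, _ in displacements)
    my = median(dy for _, dy in displacements)
    outliers = []
    for i, (dx, dy) in enumerate(displacements):
        # Vectors far from the dominant motion are obstacle candidates.
        if ((dx - mx) ** 2 + (dy - my) ** 2) ** 0.5 > threshold:
            outliers.append(i)
    return outliers
```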

    FieldSAFE: Dataset for Obstacle Detection in Agriculture

    In this paper, we present a novel multi-modal dataset for obstacle detection in agriculture. The dataset comprises approximately 2 hours of raw sensor data from a tractor-mounted sensor system in a grass mowing scenario in Denmark, October 2016. Sensing modalities include stereo camera, thermal camera, web camera, 360-degree camera, lidar, and radar, while precise localization is available from fused IMU and GNSS. Both static and moving obstacles are present, including humans, mannequin dolls, rocks, barrels, buildings, vehicles, and vegetation. All obstacles have ground truth object labels and geographic coordinates. Comment: Submitted to special issue of MDPI Sensors: Sensors in Agriculture.

    Dynamic Obstacle Detection

    The Smart Cane was designed as an enhancement for the traditional white cane used by the visually impaired for navigation. While the traditional white cane is effective in navigating ground-level obstacles such as pits and stairs, it is significantly inefficient in detecting obstacles above knee height, such as trucks and cars. To address this shortcoming, a group of students under their post-doctoral guide at the Indian Institute of Technology, Delhi created the Smart Cane, an add-on for the existing white cane that uses ultrasonic ranging to determine the nearest threats to the user and provide an advance warning system for them. It uses a tactile feedback system to warn the user of approaching obstacles, with an effective range of 0.5–1.8 m in short-range mode and 0.5–3 m in long-range mode. It is also significantly cheaper than the alternative walking-aid enhancements offered by a variety of companies and boasts a long battery life (preliminary tests claim that the aid can last for up to a week on a single charge of three to four hours). However, the Smart Cane has a drawback: it is unable to warn its user of moving vehicles, such as cars and bikes, which pose a significant threat to the visually impaired, who cannot detect them and are hence in constant danger while navigating crowded roads. The goal of this project is to supplement the existing Smart Cane with an additional IR sensor that makes it capable of detecting moving vehicles coming towards the user and warning the user if they pose a threat. It functions in the "toward" mode, i.e. it only detects objects coming towards the user, and has an effective range of 250 m in optimum conditions. It uses the existing tactile feedback system to warn the user of approaching dangers. It relies on the battery slightly more intensively than the ultrasonic sensor, but the usage can be optimized to minimize battery drain.
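    The mode-dependent warning logic described above can be sketched as follows. The mapping from range reading to vibration intensity is an assumption for illustration; only the 0.5–1.8 m and 0.5–3 m mode ranges come from the description.

```python
# Illustrative sketch of a mode-dependent tactile warning: the vibration
# scale (0..3) and the linear mapping are assumptions, not the device spec.
def warning_level(distance_m, mode="short"):
    """Map an ultrasonic range reading to a vibration intensity.
    mode selects the maximum detection range: 1.8 m (short) or 3 m (long)."""
    max_range = 1.8 if mode == "short" else 3.0
    if distance_m < 0.5 or distance_m > max_range:
        return 0  # outside the effective range: no warning
    # Closer obstacles produce stronger feedback.
    fraction = (max_range - distance_m) / (max_range - 0.5)
    return 1 + round(fraction * 2)
```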

    Fast, Accurate Thin-Structure Obstacle Detection for Autonomous Mobile Robots

    Safety is paramount for mobile robotic platforms such as self-driving cars and unmanned aerial vehicles. This work is devoted to a task that is indispensable for safety yet was largely overlooked in the past -- detecting obstacles that are of very thin structures, such as wires, cables and tree branches. This is a challenging problem, as thin objects can be problematic for active sensors such as lidar and sonar and even for stereo cameras. In this work, we propose to use video sequences for thin obstacle detection. We represent obstacles with edges in the video frames, and reconstruct them in 3D using efficient edge-based visual odometry techniques. We provide both a monocular camera solution and a stereo camera solution. The former incorporates Inertial Measurement Unit (IMU) data to solve scale ambiguity, while the latter enjoys a novel, purely vision-based solution. Experiments demonstrated that the proposed methods are fast and able to detect thin obstacles robustly and accurately under various conditions. Comment: Appeared at IEEE CVPR 2017 Workshop on Embedded Vision.
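    The stereo side of this idea can be sketched with the standard triangulation step: once an edge pixel is matched across a rectified stereo pair, its depth follows from the disparity relation Z = f·B/d. The numbers and function name here are illustrative, not from the paper.

```python
# Hedged sketch of stereo depth for a matched edge point in a rectified
# pair, using the standard relation Z = focal * baseline / disparity.
def edge_depth(focal_px, baseline_m, x_left, x_right):
    """Depth (in metres) of an edge point from its horizontal pixel
    positions in the left and right rectified images."""
    disparity = x_left - x_right
    if disparity <= 0:
        return None  # point at infinity or a bad match
    return focal_px * baseline_m / disparity
```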

    J-MOD²: Joint Monocular Obstacle Detection and Depth Estimation

    In this work, we propose an end-to-end deep architecture that jointly learns to detect obstacles and estimate their depth for MAV flight applications. Most existing approaches rely either on Visual SLAM systems or on depth estimation models to build 3D maps and detect obstacles. However, for the task of avoiding obstacles this level of complexity is not required. Recent works have proposed multi-task architectures to perform both scene understanding and depth estimation. We follow their track and propose a specific architecture to jointly estimate depth and obstacles, without the need to compute a global map, while maintaining compatibility with a global SLAM system if needed. The network architecture is devised to exploit the joint information of the obstacle detection task, which produces more reliable bounding boxes, with the depth estimation one, increasing the robustness of both to scenario changes. We call this architecture J-MOD². We test the effectiveness of our approach with experiments on sequences with different appearance and focal lengths and compare it to state-of-the-art (SotA) multi-task methods that jointly perform semantic segmentation and depth estimation. In addition, we show the integration in a full system using a set of simulated navigation experiments where a MAV explores an unknown scenario and plans safe trajectories by using our detection model.
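    The multi-task training idea can be sketched in a framework-free way: two heads share an encoder, and the training objective is a weighted sum of the detection and depth terms. The weights and function name below are placeholders, not values from the paper.

```python
# Schematic sketch of a joint multi-task objective: the total loss is a
# weighted combination of the two heads' losses. Weights are assumptions.
def joint_loss(det_loss, depth_loss, w_det=1.0, w_depth=0.5):
    """Combine the obstacle-detection and depth-estimation losses so
    gradients from both heads shape the shared encoder."""
    return w_det * det_loss + w_depth * depth_loss
```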

    Event-Based Obstacle Detection with Commercial Lidar

    Computerized obstacle detection for moving vehicles is becoming more important as vehicle manufacturers make their systems more autonomous and safer. However, obstacle detection must operate quickly in dynamic environments such as driving at highway speeds. A unique obstacle detection system using 3D changes in the environment is proposed. Furthermore, these 3D changes are shown to contain sufficient information for avoiding obstacles. To make the system easy to integrate onto a vehicle, additional processing is implemented to remove unnecessary dependencies. This system provides a method for obstacle detection that breaks away from typical systems to be more efficient.
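    One simple way to realize "3D changes in the environment" is to voxelize successive lidar scans and report voxels occupied in one scan but not the other. This is a hedged sketch of the general technique, not the paper's system; the voxel size is an arbitrary assumption.

```python
# Minimal sketch: detect 3D changes between two lidar scans by comparing
# occupied voxel sets. Voxel size (0.5 m) is illustrative.
def voxelize(points, size=0.5):
    """Map (x, y, z) points to a set of occupied integer voxel indices."""
    return {(int(x // size), int(y // size), int(z // size))
            for x, y, z in points}

def changed_voxels(scan_prev, scan_curr, size=0.5):
    """Voxels newly occupied (appeared) or vacated (disappeared)."""
    prev, curr = voxelize(scan_prev, size), voxelize(scan_curr, size)
    return {"appeared": curr - prev, "disappeared": prev - curr}
```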

    Analysis and design of a capsule landing system and surface vehicle control system for Mars exploration

    Problems related to the design and control of a mobile planetary vehicle to implement a systematic plan for the exploration of Mars are reported. Problem areas include: vehicle configuration, control, dynamics, systems and propulsion; systems analysis, terrain modeling and path selection; and chemical analysis of specimens. These tasks are summarized: vehicle model design, mathematical model of vehicle dynamics, experimental vehicle dynamics, obstacle negotiation, electrochemical controls, remote control, collapsibility and deployment, construction of a wheel tester, wheel analysis, payload design, system design optimization, effect of design assumptions, accessory optimal design, on-board computer subsystem, laser range measurement, discrete obstacle detection, obstacle detection systems, terrain modeling, path selection system simulation and evaluation, gas chromatograph/mass spectrometer system concepts, and chromatograph model evaluation and improvement.

    A parallel implementation of a multisensor feature-based range-estimation method

    There are many proposed vision-based methods to perform obstacle detection and avoidance for autonomous or semi-autonomous vehicles. All methods, however, require very high processing rates to achieve real-time performance. A system capable of supporting autonomous helicopter navigation will need to extract obstacle information from imagery at rates varying from ten to thirty or more frames per second, depending on the vehicle speed. Such a system will need to sustain billions of operations per second. To reach such high processing rates using current technology, a parallel implementation of the obstacle detection/ranging method is required. This paper describes an efficient and flexible parallel implementation of a multisensor feature-based range-estimation algorithm, targeted for helicopter flight, realized on both a distributed-memory and a shared-memory parallel computer.
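    The data-parallel decomposition such an implementation relies on can be sketched as follows: split each frame's rows into interleaved strips, process strips concurrently, and reassemble the results in row order. The per-strip computation is a stand-in, not the paper's range-estimation kernel.

```python
# Illustrative data-parallel decomposition (not the paper's code): each
# worker processes an interleaved strip of frame rows independently.
from concurrent.futures import ThreadPoolExecutor

def estimate_strip(rows):
    # Stand-in for per-feature range estimation on one strip of rows.
    return [sum(r) for r in rows]

def parallel_range(frame, workers=4):
    """Process a frame (list of rows) across workers and reassemble."""
    strips = [frame[i::workers] for i in range(workers)]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        results = list(pool.map(estimate_strip, strips))
    # Row j of strip w corresponds to original row w + j * workers.
    out = [None] * len(frame)
    for w, strip_result in enumerate(results):
        for j, val in enumerate(strip_result):
            out[w + j * workers] = val
    return out
```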