
    Neural networks application to divergence-based passive ranging

    The purpose of this report is to summarize the state of knowledge and outline the planned work on a divergence-based/neural-network approach to passive ranging derived from optical flow. Work in this and closely related areas is reviewed to provide the necessary background for further developments. New ideas for devising a monocular passive-ranging system are then introduced. It is shown that image-plane divergence is independent of image-plane location with respect to the focus of expansion and of camera maneuvers, because it directly measures the object's expansion, which in turn is related to the time-to-collision. Thus, a divergence-based method has the potential to provide a reliable range estimate, complementing other monocular passive-ranging methods, which encounter difficulties in image areas close to the focus of expansion. Image-plane divergence can be thought of as a spatial/temporal pattern. A neural-network realization was chosen for this task because neural networks have generally performed well in various other pattern-recognition applications. The main goal of this work is to teach a neural network to derive the divergence from the imagery.
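    The divergence/time-to-collision relation the abstract relies on can be illustrated numerically. The sketch below is generic (not the report's neural-network estimator) and uses the standard result that, for a fronto-parallel approaching surface, the divergence of the image flow equals 2/τ, where τ is the time-to-collision; the synthetic flow field is an assumption for demonstration:

    ```python
    import numpy as np

    def flow_divergence(u, v, spacing=1.0):
        """Divergence of a 2-D optical-flow field: du/dx + dv/dy."""
        du_dx = np.gradient(u, spacing, axis=1)
        dv_dy = np.gradient(v, spacing, axis=0)
        return du_dx + dv_dy

    def time_to_collision(u, v, spacing=1.0):
        """For a fronto-parallel approaching surface, div(flow) = 2/tau."""
        div = flow_divergence(u, v, spacing)
        return 2.0 / np.mean(div)

    # Synthetic expanding flow for a surface 5 time units from collision:
    tau = 5.0
    y, x = np.mgrid[-10:11, -10:11].astype(float)
    u, v = x / tau, y / tau            # radial expansion about the FOE
    print(time_to_collision(u, v))     # ~5.0, independent of FOE location
    ```

    Note that shifting the focus of expansion only translates the flow field; it leaves the divergence, and hence the recovered τ, unchanged, which is the independence property the abstract emphasizes.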

    Vision and Learning for Deliberative Monocular Cluttered Flight

    Cameras provide a rich source of information while being passive, cheap, and lightweight for small and medium Unmanned Aerial Vehicles (UAVs). In this work we present the first implementation of receding horizon control, which is widely used in ground vehicles, with monocular vision as the only sensing mode for autonomous UAV flight in dense clutter. We make it feasible on UAVs via a number of contributions: a novel coupling of perception and control via relevant and diverse multiple interpretations of the scene around the robot; leveraging recent advances in machine learning for anytime budgeted cost-sensitive feature selection; and fast non-linear regression for monocular depth prediction. We empirically demonstrate the efficacy of our novel pipeline via real-world experiments of more than 2 km through dense trees with a quadrotor built from off-the-shelf parts. Moreover, our pipeline is designed to also combine information from other modalities, such as stereo and lidar, when available.
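    The receding-horizon loop can be sketched as scoring a library of candidate trajectories against the predicted depth map and executing only the first action of the cheapest one before replanning. This is a minimal illustration under assumed conventions (trajectories as lists of image cells with travel distances, a simple clearance cost), not the paper's budgeted feature-selection pipeline:

    ```python
    import numpy as np

    def plan_step(depth_map, trajectories, collision_radius=1.0):
        """One receding-horizon step: score each candidate trajectory
        against the predicted depth map and return the cheapest one.
        The caller executes only its first action, then replans."""
        best, best_cost = None, np.inf
        for traj in trajectories:
            cost = 0.0
            for row, col, dist in traj:     # cells the path sweeps, with travel distance
                clearance = depth_map[row, col] - dist
                if clearance < collision_radius:
                    cost += collision_radius - clearance   # penalize shallow clearance
            if cost < best_cost:
                best, best_cost = traj, cost
        return best, best_cost

    # A trajectory heading into close depth loses to one through free space:
    depth = np.full((10, 10), 100.0)
    depth[:, :5] = 0.5                       # obstacle on the left half
    left = [(5, 2, 1.0)]
    right = [(5, 8, 1.0)]
    chosen, _ = plan_step(depth, [left, right])
    ```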

    Depth Image Processing for Obstacle Avoidance of an Autonomous VTOL UAV

    We describe a new approach for stereo-based obstacle avoidance. This method analyzes the images of a stereo camera in real time and searches for a safe target point that can be reached without collision. The obstacle avoidance system is used by our unmanned helicopter ARTIS (Autonomous Rotorcraft Testbed for Intelligent Systems) and its simulation environment. It is optimized for this UAV, but not limited to aircraft systems.
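    The safe-target search can be illustrated with a minimal sketch (not the ARTIS implementation): slide a window over the stereo depth image and return the center of the first region whose every pixel is farther than a clearance threshold. The function name, window size, and threshold are illustrative assumptions:

    ```python
    import numpy as np

    def find_safe_target(depth, min_range=5.0, window=11):
        """Return the (row, col) centre of the first window-sized patch of
        the depth image whose every pixel is farther than min_range, or
        None if no such patch exists."""
        h, w = depth.shape
        r = window // 2
        for row in range(r, h - r):
            for col in range(r, w - r):
                patch = depth[row - r:row + r + 1, col - r:col + r + 1]
                if patch.min() > min_range:
                    return (row, col)       # reachable without collision
        return None
    ```

    A real-time variant would search coarse-to-fine rather than exhaustively, but the acceptance test, every depth in the window exceeding the clearance, is the same.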

    Omega and Biasing from Optical Galaxies versus POTENT Mass

    The mass density field in the local universe, recovered by the POTENT method from peculiar velocities of ~3000 galaxies, is compared with the density field of optically selected galaxies. Both density fields are smoothed with a Gaussian filter of radius 12 h^{-1} Mpc. Under the assumptions of gravitational instability and a linear biasing parameter b_O between optical galaxies and mass, we obtain β_O ≡ Ω^{0.6}/b_O = 0.74 ± 0.13. This result is obtained from a regression of POTENT mass density on optical density after correcting the mass density field for systematic biases in the velocity data and the POTENT method. The error quoted is just the 1σ formal error estimated from the observed scatter in the density-density scatterplot; it does not include the uncertainty due to cosmic scatter in the mean density or in the biasing relation. We do not attempt a formal analysis of the goodness of fit, but the scatter about the fit is consistent with our estimates of the uncertainties. Comment: Final revised version (minor typos corrected). 13 pages, gzipped tar file containing LaTeX and figures. The PostScript file is available at ftp://dust0.dur.ac.uk/pub/mjh/potopt/potopt.ps.Z or (gzipped) at ftp://xxx.lanl.gov/astro-ph/ps/9501/9501074.ps.gz or via WWW at http://xxx.lanl.gov/ps/astro-ph/9501074 or as separate LaTeX text and encapsulated PostScript figures in a compressed tar'd file at ftp://dust0.dur.ac.uk/pub/mjh/potopt/latex/potopt.tar.
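    Given the measured β_O, the implied mass density follows directly from the definition β_O ≡ Ω^{0.6}/b_O, i.e. Ω = (β_O · b_O)^{1/0.6}. The bias values below are illustrative assumptions, not values from the paper:

    ```python
    # beta = Omega^0.6 / b  =>  Omega = (beta * b) ** (1 / 0.6)
    beta = 0.74
    for b in (0.5, 1.0, 1.5):
        omega = (beta * b) ** (1.0 / 0.6)
        print(f"b = {b:3.1f}  ->  Omega = {omega:.2f}")
    ```

    With unit bias (b_O = 1), β_O = 0.74 corresponds to Ω ≈ 0.61, which is why the degeneracy with the biasing parameter dominates the cosmological interpretation.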

    Correlation Flow: Robust Optical Flow Using Kernel Cross-Correlators

    Robust velocity and position estimation is crucial for autonomous robot navigation. Optical-flow-based methods for autonomous navigation have been receiving increasing attention in tandem with the development of micro unmanned aerial vehicles. This paper proposes a kernel cross-correlator (KCC) based algorithm to determine optical flow using a monocular camera, named correlation flow (CF). Correlation flow is able to provide reliable and accurate velocity estimation and is robust to motion blur. In addition, it can also estimate the altitude velocity and yaw rate, which are not available from traditional methods. Autonomous flight tests on a quadcopter show that correlation flow can provide robust trajectory estimation with very low processing power. The source code is released based on the ROS framework. Comment: 2018 International Conference on Robotics and Automation (ICRA 2018)
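    With a linear kernel, a kernel cross-correlator reduces to FFT-based cross-correlation, which is enough to sketch the core idea of estimating an integer pixel shift between consecutive frames. This is a minimal linear-kernel sketch, not the full CF pipeline (which also recovers altitude velocity and yaw rate via log-polar and scale transforms):

    ```python
    import numpy as np

    def correlation_shift(frame1, frame2):
        """Estimate the integer pixel translation (dy, dx) such that
        frame2 ~= np.roll(frame1, (dy, dx), axis=(0, 1)), via FFT
        cross-correlation (the linear-kernel case of a KCC)."""
        f1 = np.fft.fft2(frame1)
        f2 = np.fft.fft2(frame2)
        cross = np.fft.ifft2(np.conj(f1) * f2)
        peak = np.unravel_index(np.argmax(np.abs(cross)), cross.shape)
        shifts = np.array(peak, dtype=float)
        # wrap shifts larger than half the image into negative offsets
        for i, n in enumerate(frame1.shape):
            if shifts[i] > n // 2:
                shifts[i] -= n
        return shifts

    rng = np.random.default_rng(0)
    img = rng.random((32, 32))
    img2 = np.roll(img, (3, -5), axis=(0, 1))
    print(correlation_shift(img, img2))   # recovers the (3, -5) shift
    ```

    The correlation peak stays sharp under moderate blur because the FFT product averages evidence over the whole frame, which is the robustness property the abstract claims.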

    Fast, Accurate Thin-Structure Obstacle Detection for Autonomous Mobile Robots

    Safety is paramount for mobile robotic platforms such as self-driving cars and unmanned aerial vehicles. This work is devoted to a task that is indispensable for safety yet was largely overlooked in the past -- detecting obstacles that are of very thin structure, such as wires, cables and tree branches. This is a challenging problem, as thin objects can be problematic for active sensors such as lidar and sonar and even for stereo cameras. In this work, we propose to use video sequences for thin obstacle detection. We represent obstacles with edges in the video frames, and reconstruct them in 3D using efficient edge-based visual odometry techniques. We provide both a monocular camera solution and a stereo camera solution. The former incorporates Inertial Measurement Unit (IMU) data to solve scale ambiguity, while the latter enjoys a novel, purely vision-based solution. Experiments demonstrate that the proposed methods are fast and able to detect thin obstacles robustly and accurately under various conditions. Comment: Appeared at IEEE CVPR 2017 Workshop on Embedded Vision
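    Once edge correspondences and relative camera poses are available from the visual odometry, the 3D reconstruction step amounts to triangulating each edge point from two views. Below is a generic linear (DLT) triangulation sketch, not the paper's edge-based pipeline; the projection matrices in the example assume normalized (calibrated) image coordinates:

    ```python
    import numpy as np

    def triangulate(P1, P2, x1, x2):
        """Linear (DLT) triangulation of one point seen in two views.
        P1, P2: 3x4 projection matrices; x1, x2: image coords (u, v)."""
        A = np.stack([
            x1[0] * P1[2] - P1[0],
            x1[1] * P1[2] - P1[1],
            x2[0] * P2[2] - P2[0],
            x2[1] * P2[2] - P2[1],
        ])
        _, _, Vt = np.linalg.svd(A)
        X = Vt[-1]
        return X[:3] / X[3]            # inhomogeneous 3-D point

    # Two calibrated views separated by a unit baseline along x:
    P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
    P2 = np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])
    X_true = np.array([0.5, 0.2, 4.0])
    x1 = X_true[:2] / X_true[2]        # projection in view 1
    x2 = np.array([-0.125, 0.05])      # projection in view 2
    print(triangulate(P1, P2, x1, x2)) # recovers X_true
    ```

    For the monocular solution, the baseline between the two poses is only known up to scale, which is exactly the ambiguity the IMU data resolves.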

    Machine vision and the OMV

    The Orbital Maneuvering Vehicle (OMV) is intended to close with orbiting targets for relocation or servicing. It will be controlled via video signals and thruster activation based upon Earth or space station directives. A human operator is squarely in the middle of the control loop for close work. Without directly addressing future, more autonomous versions of a remote servicer, several techniques that will doubtless be important in a future increase of autonomy also have some direct application to the current situation, particularly in the area of image enhancement and predictive analysis. Several techniques are presented, and a few have been implemented, which support a machine vision capability proposed to be adequate for detection, recognition, and tracking. Once feasibly implemented, they must then be further modified to operate together in real time. This may be achieved by two courses: the use of an array processor and some initial steps toward data reduction. The methodology for adapting to a vector architecture is discussed in preliminary form, and a highly tentative rationale for data reduction at the front end is also discussed. As a by-product, a working implementation of the most advanced graphic display technique, ray-casting, is described.
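    The ray-casting technique mentioned as a by-product is easy to sketch in miniature: for each display pixel, a ray is cast from the camera and intersected with the scene, here against a single sphere. The scene primitive, function name, and parameters are illustrative, not the report's implementation:

    ```python
    import numpy as np

    def cast_ray(origin, direction, center, radius):
        """Distance along a unit-length ray to the nearest sphere hit,
        or None on a miss (standard quadratic ray-sphere test)."""
        oc = origin - center
        b = 2.0 * np.dot(direction, oc)
        c = np.dot(oc, oc) - radius * radius
        disc = b * b - 4.0 * c            # direction assumed unit length
        if disc < 0:
            return None                    # ray misses the sphere
        t = (-b - np.sqrt(disc)) / 2.0     # nearer of the two roots
        return t if t > 0 else None

    origin = np.array([0.0, 0.0, 0.0])
    forward = np.array([0.0, 0.0, 1.0])
    print(cast_ray(origin, forward, np.array([0.0, 0.0, 5.0]), 1.0))  # hit at t = 4
    ```

    A full renderer repeats this per pixel with a ray direction derived from the camera model, which is what makes the technique a natural fit for the vector (array-processor) architecture the report discusses.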