    Testing a biologically-based system for extracting depth from brief monocular 2-D video sequences

    Knowledge of the 3-D layout in front of a moving robot or vehicle is essential for obstacle avoidance and navigation. Currently the most common methods for acquiring that information rely on ‘active’ technologies which project light into the world (e.g., LIDAR). Some passive (non-emitting) systems use stereo cameras, but only a relatively small number of techniques attempt to solve the 3-D layout problem using the information from a single video camera. A single camera offers many advantages, such as lighter weight and fewer video streams to process. The visual motion occurring in brief monocular video sequences contains information regarding the movement of the camera and the structure of the scene. Extracting that information is difficult, however, because it relies on accurate estimates of the image motion velocities (optical flow) and knowledge of the camera motion, especially the heading direction. We have solved these two problems and can now obtain image flow and heading direction using mechanisms based on the properties of motion-sensitive neurones in the brain. This allows us to recover depth information from monocular video sequences, and here we report on a series of tests that assess the accuracy of this novel approach to 3-D depth recovery.
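    The abstract does not spell out the recovery step, but the underlying geometry is standard: for a camera translating forward, optical flow radiates from the focus of expansion (FOE, the heading point), and the flow magnitude at an image point scales with its distance from the FOE divided by scene depth. A minimal sketch of this idea (a hypothetical illustration, not the paper's implementation; `relative_depth_from_flow`, the synthetic points, and the FOE are all assumptions) is:

    ```python
    import numpy as np

    # For pure translation with speed Tz along the optical axis, the flow at
    # image point p has magnitude |p - FOE| * Tz / Z, so depth is recoverable
    # from flow and heading only up to the unknown scale Tz.

    def relative_depth_from_flow(points, flow, foe):
        """Relative depth Z/Tz at each image point from flow and a known FOE."""
        radial = points - foe                  # offset of each point from the FOE
        dist = np.linalg.norm(radial, axis=1)  # distance from the FOE
        speed = np.linalg.norm(flow, axis=1)   # optical-flow magnitude
        return dist / speed                    # Z / Tz per point

    # Synthetic check: two points at the same image location over two frames,
    # the nearer surface producing the faster image motion.
    foe = np.array([0.0, 0.0])
    points = np.array([[10.0, 0.0], [10.0, 0.0]])
    true_depth = np.array([2.0, 4.0])          # arbitrary units
    tz = 1.0
    flow = (points - foe) * (tz / true_depth)[:, None]  # forward model

    rel = relative_depth_from_flow(points, flow, foe)
    print(rel)  # → [2. 4.], i.e. true depth up to the Tz scale
    ```

    Camera rotation adds a depth-independent flow component, which is why the paper stresses that accurate heading (and flow) estimation must come first: the rotational part has to be removed before this radial relationship holds.
    
    
    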