140 research outputs found

    Bioinspired engineering of exploration systems for NASA and DoD

    A new approach called bioinspired engineering of exploration systems (BEES) and its value for solving pressing NASA and DoD needs are described. Insects (for example, honeybees and dragonflies) cope remarkably well with their world, despite possessing brains containing fewer than 0.01% as many neurons as the human brain. Although most insects have immobile eyes with fixed-focus optics and lack stereo vision, they use a number of ingenious, computationally simple strategies for perceiving their world in three dimensions and navigating successfully within it. We are distilling selected insect-inspired strategies to obtain novel solutions for navigation, hazard avoidance, altitude hold, stable flight, terrain following, and gentle deployment of payload. Such functionality offers potential solutions for future autonomous robotic space and planetary explorers. A BEES approach to developing lightweight, low-power autonomous flight systems should be useful for the flight control of biomorphic flyers serving both NASA and DoD needs. Recent biological studies of mammalian retinas confirm that representations of multiple features of the visual world are systematically parsed and processed in parallel, with each feature mapped to a stack of cellular strata within the retina. Each of these representations can be efficiently modeled in semiconductor cellular nonlinear network (CNN) chips. We describe recent breakthroughs in exploring the feasibility of blending insect navigation strategies with mammalian visual search, pattern recognition, and image understanding into hybrid biomorphic flyers for future planetary and terrestrial applications, and we outline a few future Mars exploration mission scenarios uniquely enabled by these newly developed biomorphic flyers.
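
    One of the insect strategies named above, gentle deployment of payload, is often realized in the biology literature as honeybee-style landing: hold the ventral optic flow (image velocity of the ground) constant while descending, so that forward speed decays with altitude and touchdown is soft. The sketch below is a toy illustration of that control law, not the BEES flight code; the gains, time step, and `speed / altitude` flow model are our assumptions.

        # Toy honeybee-style landing: hold ventral optic flow constant so
        # forward speed decays in proportion to altitude (all values assumed).
        TARGET_FLOW = 2.0   # desired ventral flow (rad/s)
        K_P = 0.5           # proportional gain on the flow error
        DT = 0.05           # control period (s)

        altitude, speed = 10.0, 5.0
        while altitude > 0.05:
            flow = speed / altitude                   # flat-ground flow model
            speed = max(0.0, speed + K_P * (TARGET_FLOW - flow) * DT)
            altitude -= 0.2 * speed * DT              # descend at a fixed fraction of speed
        print(f"touchdown speed ~ {speed:.2f} m/s")   # small, as intended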

    3D object reconstruction using stereo and motion

    The extraction of reliable range data from images is investigated, considering, as a possible solution, the integration of different sensor modalities. Two different algorithms are used to obtain independent estimates of depth from a sequence of stereo images, and the results are integrated on the basis of the uncertainty of each measurement. The stereo algorithm uses a coarse-to-fine control strategy to compute disparity. An algorithm for depth-from-motion is used, exploiting the constraint imposed by the active motion of the cameras. To obtain a 3D description of the objects, the motion of the cameras is purposefully controlled so as to move around the objects in view while the gaze is directed toward a fixed point in space. This egomotion strategy, which is similar to that adopted by the human visuomotor system, allows a better exploration of partially occluded objects and simplifies the motion equations. When tested on real scenes, the algorithm demonstrated low sensitivity to image noise, mainly due to the integration of independent measurements. An experiment performed on a real scene containing several objects is presented.
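
    The abstract does not spell out the integration rule, but uncertainty-based fusion of two independent depth estimates is classically done by inverse-variance weighting. A minimal sketch under that assumption (the function and array names are ours, not the paper's):

        import numpy as np

        def fuse_depth(z_stereo, var_stereo, z_motion, var_motion):
            """Fuse two independent per-pixel depth maps by inverse-variance
            weighting; the fused variance never exceeds either input's."""
            w_s, w_m = 1.0 / var_stereo, 1.0 / var_motion
            z_fused = (w_s * z_stereo + w_m * z_motion) / (w_s + w_m)
            return z_fused, 1.0 / (w_s + w_m)

        # A pixel where stereo is confident (var 0.01 m^2) and motion is not:
        z, v = fuse_depth(np.array([2.0]), np.array([0.01]),
                          np.array([2.6]), np.array([0.09]))
        print(z, v)   # ~[2.06] [0.009]: the fused estimate leans toward stereo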

    Learning, Moving, And Predicting With Global Motion Representations

    In order to effectively respond to and influence the world they inhabit, animals and other intelligent agents must understand and predict the state of the world and its dynamics. An agent that can characterize how the world moves is better equipped to engage it. Current methods of motion computation rely on local representations of motion (such as optical flow) or simple, rigid global representations (such as camera motion). These methods are useful, but they are difficult to estimate reliably and limited in their applicability to real-world settings, where agents frequently must reason about complex, highly nonrigid motion over long time horizons. In this dissertation, I present methods developed with the goal of building more flexible and powerful notions of motion needed by agents facing the challenges of a dynamic, nonrigid world. This work is organized around a view of motion as a global phenomenon that is not adequately addressed by local or low-level descriptions, but that is best understood when analyzed at the level of whole images and scenes. I develop methods to: (i) robustly estimate camera motion from noisy optical flow estimates by exploiting the global, statistical relationship between the optical flow field and camera motion under projective geometry; (ii) learn representations of visual motion directly from unlabeled image sequences using learning rules derived from a formulation of image transformation in terms of its group properties; (iii) predict future frames of a video by learning a joint representation of the instantaneous state of the visual world and its motion, using a view of motion as transformations of world state. I situate this work in the broader context of ongoing computational and biological investigations into the problem of estimating motion for intelligent perception and action.
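
    Item (i) turns on the global, statistical relationship between a noisy flow field and a low-dimensional motion model. The dissertation's actual estimator is not reproduced here; as a stand-in, the sketch below fits a six-parameter affine flow model by iteratively reweighted least squares, which conveys the basic idea of letting the whole field vote while down-weighting outliers.

        import numpy as np

        def fit_affine_flow(x, y, u, v, iters=5, eps=1e-3):
            """Robustly fit u = a0 + a1*x + a2*y, v = b0 + b1*x + b2*y to a
            sparse optical-flow field, down-weighting outliers by IRLS."""
            A = np.stack([np.ones_like(x), x, y], axis=1)   # (N, 3) design matrix
            w = np.ones_like(x)
            for _ in range(iters):
                sw = np.sqrt(w)[:, None]
                a, *_ = np.linalg.lstsq(A * sw, u * sw[:, 0], rcond=None)
                b, *_ = np.linalg.lstsq(A * sw, v * sw[:, 0], rcond=None)
                r = np.hypot(A @ a - u, A @ b - v)          # residual magnitude
                w = 1.0 / (1.0 + (r / (np.median(r) + eps)) ** 2)   # Cauchy-style weights
            return a, b

        # Synthetic check: pure horizontal translation plus 20% gross outliers.
        rng = np.random.default_rng(0)
        x, y = rng.uniform(-1, 1, 200), rng.uniform(-1, 1, 200)
        u, v = np.full(200, 0.5), np.zeros(200)
        bad = rng.random(200) < 0.2
        u[bad] += rng.normal(0.0, 2.0, bad.sum())
        a, _ = fit_affine_flow(x, y, u, v)
        print(a)   # close to [0.5, 0, 0] despite the outliers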

    Insect inspired behaviours for the autonomous control of mobile robots

    Animals navigate through various uncontrolled environments with seemingly little effort. Flying insects, especially, are quite adept at manoeuvring in complex, unpredictable and possibly hostile environments. Through both simulation and real-world experiments, we demonstrate the feasibility of equipping a mobile robot with the ability to navigate a corridor environment, in real time, using principles of insect visual guidance. In particular, we have used the bees' navigational strategy of measuring object range in terms of image velocity. We have also shown the viability and usefulness of various other insect behaviours: (i) keeping walls equidistant, (ii) slowing down when approaching an object, (iii) regulating speed according to tunnel width, and (iv) using visual motion as a measure of distance travelled.
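
    Behaviours (i) and (iii) reduce to two rules on the lateral image velocities seen by the two sides of the robot. The control step below is a schematic reading of those rules, not the paper's implementation; the sensor interface and gains are assumptions.

        def corridor_control(flow_left, flow_right,
                             k_turn=0.8, target_flow_sum=4.0, k_speed=0.3):
            """One control step of bee-style corridor navigation.

            flow_left, flow_right: lateral image velocities (rad/s) on the
            two sides; translation makes the nearer wall move faster.
            Returns (turn_rate, speed_correction); positive turn_rate
            steers away from the side with the larger flow.
            """
            # (i) Keep walls equidistant: steer away from the nearer wall.
            turn_rate = k_turn * (flow_left - flow_right)
            # (iii) Regulate speed with tunnel width: hold the summed flow
            # constant, so narrow tunnels (large flow) slow the robot down.
            speed_correction = k_speed * (target_flow_sum - (flow_left + flow_right))
            return turn_rate, speed_correction

        # Robot nearer the left wall of a fairly narrow corridor:
        print(corridor_control(3.0, 1.5))   # turns right and slows slightly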

    Depth map from the combination of matched points with active contours

    IEEE Intelligent Vehicles Symposium (IVS), 2000, Dearborn (USA)

    This paper describes the analysis of an active contour fitted to a target in a sequence of images recorded by a freely moving uncalibrated camera. The motivating application is the visual guidance of a robot towards a target. Contour deformations are analysed to extract the scaled depth of the target and to explore the feasibility of 3D egomotion recovery. The scaled depth is used to compute the time to contact, which provides a measure of distance to the target, and also to improve the depth maps obtained from point matches, which are a valuable input for the robot to avoid obstacles.

    This work was supported by the project "Navegación basada en visión de robots autónomos en entornos no estructurados" (070-724).

    Peer Reviewed
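
    For a roughly fronto-parallel target, the time to contact mentioned above can be read off the tracked contour alone: with linear image scale s, TTC = s / (ds/dt), or equivalently 2A / (dA/dt) for the enclosed area A. A minimal sketch under that assumption (the interface is ours, not the paper's code):

        def time_to_contact(area_prev, area_curr, dt):
            """Estimate time to contact from the enclosed area of a tracked
            contour in two consecutive frames. The linear image scale s grows
            as the target approaches; area ~ s^2, so TTC = s/(ds/dt) = 2A/(dA/dt)."""
            dA = (area_curr - area_prev) / dt
            if dA <= 0:
                return float("inf")          # receding or stationary target
            return 2.0 * area_curr / dA

        # Contour area grows from 900 to 1000 px^2 in 40 ms:
        print(time_to_contact(900.0, 1000.0, 0.04))   # ~0.8 s to contact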

    Optic-flow-based Navigation for Ultralight Indoor Aircraft

    The goal of this project is to develop an autonomous microflyer capable of navigating within houses or small built environments using vision as its main source of information. Flying indoors poses a number of challenges that are not found in outdoor autonomous flight: small size and slow speed for maneuverability, light weight to stay airborne, low-power electronics, and smart sensing and control to fly in textured environments.

    Perceived Surface Slant Is Systematically Biased in the Actively-Generated Optic Flow

    Humans make systematic errors in the 3D interpretation of optic flow in both passive and active vision. These systematic distortions can be predicted by a biologically inspired model which disregards self-motion information resulting from head movements (Caudek, Fantoni, & Domini, 2011). Here, we tested two predictions of this model: (1) a plane that is stationary in an earth-fixed reference frame will be perceived as changing its slant if the movement of the observer's head causes a variation of the optic flow; (2) a surface that rotates in an earth-fixed reference frame will be perceived as stationary if the surface rotation is appropriately yoked to the head movement so as to generate a variation of the surface slant but not of the optic flow. Both predictions were corroborated by two experiments in which observers judged the perceived slant of a random-dot planar surface during egomotion. We found qualitatively similar biases for monocular and binocular viewing of the simulated surfaces, although, in principle, the simultaneous presence of disparity and motion cues allows for a veridical recovery of surface slant.
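
    A toy computation makes the logic of prediction (1) concrete. For a plane slanted by sigma about the horizontal axis at distance Z0, viewed under pure lateral head translation Tx, the standard projective flow equations give a horizontal flow that is linear in image position y with gradient g = (Tx/Z0)·tan(sigma). A model that maps this flow gradient to slant while disregarding Tx must therefore report a changing slant whenever head speed changes, even though the surface is static. This derivation and script are ours, not the paper's:

        import numpy as np

        # Horizontal flow of a static slanted plane under lateral translation Tx:
        #   u(y) = -(f*Tx/Z0) * (1 - (y/f)*tan(sigma)),
        # so the flow gradient is du/dy = (Tx/Z0)*tan(sigma).
        f, Z0, sigma = 1.0, 1.0, np.deg2rad(30)   # focal length, distance, true slant
        for Tx in (0.05, 0.10, 0.20):             # head translation speeds (m/s)
            g = (Tx / Z0) * np.tan(sigma)         # flow gradient of the STATIC plane
            # A gradient-only reading of slant varies with Tx -> predicted bias.
            print(f"Tx={Tx:.2f} m/s  flow gradient g={g:.3f}")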