181 research outputs found

    Cortical Dynamics of Navigation and Steering in Natural Scenes: Motion-Based Object Segmentation, Heading, and Obstacle Avoidance

    Visually guided navigation through a cluttered natural scene is a challenging problem that animals and humans accomplish with ease. The ViSTARS neural model proposes how primates use motion information to segment objects and determine heading for purposes of goal approach and obstacle avoidance in response to video inputs from real and virtual environments. The model produces trajectories similar to those of human navigators. It does so by predicting how computationally complementary processes in cortical areas MT-/MSTv and MT+/MSTd compute object motion for tracking and self-motion for navigation, respectively. The model retina responds to transients in the input stream. Model V1 generates a local speed and direction estimate. This local motion estimate is ambiguous due to the neural aperture problem. Model MT+ interacts with MSTd via an attentive feedback loop to compute accurate heading estimates in MSTd that quantitatively simulate properties of human heading estimation data. Model MT- interacts with MSTv via an attentive feedback loop to compute accurate estimates of speed, direction and position of moving objects. This object information is combined with heading information to produce steering decisions wherein goals behave like attractors and obstacles behave like repellers. These steering decisions lead to navigational trajectories that closely match human performance. National Science Foundation (SBE-0354378, BCS-0235398); Office of Naval Research (N00014-01-1-0624); National Geospatial-Intelligence Agency (NMA201-01-1-2016)
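
The abstract's steering rule, with goals behaving as attractors and obstacles as repellers, can be sketched as a simple potential-field controller. This is a toy illustration of the attractor/repeller idea, not the ViSTARS circuit itself; the function name, gains `k_goal`, `k_obs`, and the falloff width `sigma` are all illustrative assumptions:

```python
import numpy as np

def steering_rate(heading, goal_angle, obstacle_angles,
                  k_goal=1.0, k_obs=0.8, sigma=0.3):
    """Turn-rate command: the goal attracts heading, obstacles repel it.

    A minimal potential-field sketch of the attractor/repeller steering
    idea (all angles in radians; gains are illustrative).
    """
    # Attractor: turn toward the goal in proportion to angular error.
    turn = -k_goal * np.sin(heading - goal_angle)
    # Repellers: each obstacle pushes the heading away from itself,
    # with influence falling off as a Gaussian of angular distance.
    for obs in obstacle_angles:
        err = heading - obs
        turn += k_obs * np.sign(err) * np.exp(-err**2 / (2.0 * sigma**2))
    return turn
```

Integrating this turn rate over time yields trajectories that curve toward the goal while bending around nearby obstacles.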

    Robust Models for Optic Flow Coding in Natural Scenes Inspired by Insect Biology

    The extraction of accurate self-motion information from the visual world is a difficult problem that has been solved very efficiently by biological organisms utilizing non-linear processing. Previous bio-inspired models for motion detection based on a correlation mechanism have been dogged by issues that arise from their sensitivity to undesired properties of the image, such as contrast, which vary widely between images. Here we present a model with multiple levels of non-linear dynamic adaptive components based directly on the known or suspected responses of neurons within the visual motion pathway of the fly brain. By testing the model under realistic high-dynamic range conditions we show that the addition of these elements makes the motion detection model robust across a large variety of images, velocities and accelerations. Furthermore, the performance of the entire system exceeds the sum of the incremental improvements offered by the individual components, indicating beneficial non-linear interactions between processing stages. The algorithms underlying the model can be implemented in either digital or analog hardware, including neuromorphic analog VLSI, but defy an analytical solution due to their dynamic non-linear operation. This algorithm has applications in the development of miniature autonomous systems in defense and civilian roles, including robotics, miniature unmanned aerial vehicles and collision avoidance sensors.
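
The correlation mechanism that the abstract's adaptive stages build on is the classic Hassenstein-Reichardt elementary motion detector, which can be sketched minimally as follows. The filter constant is illustrative, and none of the paper's adaptive non-linear stages are included:

```python
import numpy as np

def reichardt_correlator(left, right, tau=0.9):
    """Elementary Hassenstein-Reichardt motion detector.

    `left` and `right` are luminance time series from two neighbouring
    photoreceptors; `tau` sets the first-order low-pass 'delay' filter.
    A bare correlator sketch, without the adaptive stages the paper adds.
    """
    def lowpass(x):
        y = np.zeros_like(x, dtype=float)
        for t in range(1, len(x)):
            y[t] = tau * y[t - 1] + (1.0 - tau) * x[t]
        return y
    # Correlate each input with the delayed copy of its neighbour,
    # then subtract the mirror-symmetric arm to get a signed output
    # whose mean is positive for left-to-right motion.
    return lowpass(left) * right - lowpass(right) * left
```

The output's dependence on contrast, rather than velocity alone, is exactly the sensitivity to image statistics that motivates the adaptive components described in the abstract.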

    A Motion Detection Algorithm Using Local Phase Information

    Previous research demonstrated that global phase alone can be used to faithfully represent visual scenes. Here we provide a reconstruction algorithm that uses only local phase information. We also demonstrate that local phase alone can be effectively used to detect local motion. The local phase-based motion detector is akin to models employed to detect motion in biological vision, for example, the Reichardt detector. The local phase-based motion detection algorithm introduced here consists of two building blocks. The first building block measures the temporal change of the local phase. The temporal derivative of the local phase is shown to exhibit the structure of a second order Volterra kernel with two normalized inputs. We provide an efficient, FFT-based algorithm for computing the change of the local phase. The second building block implements the detector; it compares the maximum of the Radon transform of the local phase derivative with a chosen threshold. We demonstrate examples of applying the local phase-based motion detection algorithm on several video sequences. We also show how the locally detected motion can be used for segmenting moving objects in video scenes and compare our local phase-based algorithm to segmentation achieved with a widely used optic flow algorithm.
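
The first building block, the temporal change of local phase, can be illustrated with a 1-D toy: a single Gabor-like filter rather than the paper's FFT-based algorithm. The function names, window shape, and frequency are our own illustrative choices:

```python
import numpy as np

def gabor_phase(signal, freq=0.1):
    """Local phase of a 1-D signal at one spatial frequency, taken
    from the complex response of a Gaussian-windowed exponential."""
    n = np.arange(len(signal))
    carrier = np.exp(-1j * 2.0 * np.pi * freq * n)
    window = np.exp(-0.5 * ((n - len(signal) / 2) / (len(signal) / 6)) ** 2)
    return np.angle(np.sum(signal * window * carrier))

def phase_motion(frame_t0, frame_t1, freq=0.1):
    """Temporal change of local phase between two frames: a nonzero
    phase derivative signals local motion (a 1-D caricature of the
    abstract's first building block)."""
    dphi = gabor_phase(frame_t1, freq) - gabor_phase(frame_t0, freq)
    # Wrap the difference to (-pi, pi].
    return (dphi + np.pi) % (2.0 * np.pi) - np.pi
```

For a sinusoid translated by one sample, the phase change is -2*pi*freq, so the sign and magnitude of the phase derivative encode local velocity, which is what the Radon-transform detector stage then thresholds.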

    Bio-Inspired Motion Vision for Aerial Course Control

    No full text

    A neurobiological and computational analysis of target discrimination in visual clutter by the insect visual system.

    Some insects have the capability to detect and track small moving objects, often against cluttered moving backgrounds. Determining how this task is performed is an intriguing challenge, both from a physiological and computational perspective. Previous research has characterized higher-order neurons within the fly brain known as 'small target motion detectors' (STMD) that respond selectively to targets, even within complex moving surrounds. Interestingly, these cells still respond robustly when the velocity of the target is matched to the velocity of the background (i.e. with no relative motion cues). We performed intracellular recordings from intermediate-order neurons in the fly visual system (the medulla). These full-wave rectifying, transient cells (RTC) reveal independent adaptation to luminance changes of opposite signs (suggesting separate 'on' and 'off' channels) and fast adaptive temporal mechanisms (as seen in some previously described cell types). We show, via electrophysiological experiments, that the RTC is temporally responsive to rapidly changing stimuli and is well suited to serving an important function in a proposed target-detecting pathway. To model this target discrimination, we use high dynamic range (HDR) natural images to represent 'real-world' luminance values that serve as inputs to a biomimetic representation of photoreceptor processing. Adaptive spatiotemporal high-pass filtering (1st-order interneurons) shapes the transient 'edge-like' responses, useful for feature discrimination. Following this, a model for the RTC implements a nonlinear facilitation between the rapidly adapting, and independent polarity contrast channels, each with centre-surround antagonism. The recombination of the channels results in increased discrimination of small targets, of approximately the size of a single pixel, without the need for relative motion cues. This method of feature discrimination contrasts with traditional target and background motion-field computations. We show that our RTC-based target detection model is well matched to properties described for the higher-order STMD neurons, such as contrast sensitivity, height tuning and velocity tuning. The model output shows that the spatiotemporal profile of small targets is sufficiently rare within natural scene imagery to allow our highly nonlinear 'matched filter' to successfully detect many targets from the background. The model produces robust target discrimination across a biologically plausible range of target sizes and a range of velocities. We show that the model for small target motion detection is highly correlated to the velocity of the stimulus but not other background statistics, such as local brightness or local contrast, which normally influence target detection tasks. From an engineering perspective, we examine model elaborations for improved target discrimination via inhibitory interactions from correlation-type motion detectors, using a form of antagonism between our feature correlator and the more typical motion correlator. We also observe that a changing optimal threshold is highly correlated to the value of observer ego-motion. We present an elaborated target detection model that allows for implementation of a static optimal threshold, by scaling the target discrimination mechanism with a model-derived velocity estimation of ego-motion. Finally, we investigate the physiological relevance of this target discrimination model. We show that via very subtle image manipulation of the visual stimulus, our model accurately predicts dramatic changes in observed electrophysiological responses from STMD neurons. Thesis (Ph.D.) - University of Adelaide, School of Molecular and Biomedical Science, 200
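
The ON/OFF channel separation and nonlinear facilitation described above can be caricatured in a few lines. This is a toy sketch loosely after the RTC/target-detection idea, not the thesis's fitted model; every stage and constant here is illustrative, and the centre-surround antagonism and adaptive filtering are omitted:

```python
import numpy as np

def estmd_sketch(frames):
    """Toy ON/OFF-channel small-target cue.

    Temporal high-pass, split into independent ON and OFF (rectified)
    channels, then nonlinearly recombine a delayed OFF channel with the
    ON channel so that a brief dark feature passing through a pixel
    (OFF event followed by ON event) facilitates strongly.
    """
    frames = np.asarray(frames, dtype=float)   # shape: (time, pixels)
    hp = np.diff(frames, axis=0)               # temporal high-pass
    on = np.maximum(hp, 0.0)                   # brightness increments
    off = np.maximum(-hp, 0.0)                 # brightness decrements
    # Facilitation: OFF at time t followed by ON at time t+1 at the
    # same location, the signature of a small dark target passing by.
    return on[1:] * off[:-1]
```

Because the multiplicative recombination needs both polarities in quick succession at one location, extended edges and wide-field background motion produce little output, which is the intuition behind the 'matched filter' behaviour described in the abstract.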

    Measurement of spatial orientation using a biologically plausible gradient model

    A Thesis submitted for the degree of Doctor of Philosophy

    Learning, Moving, And Predicting With Global Motion Representations

    In order to effectively respond to and influence the world they inhabit, animals and other intelligent agents must understand and predict the state of the world and its dynamics. An agent that can characterize how the world moves is better equipped to engage it. Current methods of motion computation rely on local representations of motion (such as optical flow) or simple, rigid global representations (such as camera motion). These methods are useful, but they are difficult to estimate reliably and limited in their applicability to real-world settings, where agents frequently must reason about complex, highly nonrigid motion over long time horizons. In this dissertation, I present methods developed with the goal of building more flexible and powerful notions of motion needed by agents facing the challenges of a dynamic, nonrigid world. This work is organized around a view of motion as a global phenomenon that is not adequately addressed by local or low-level descriptions, but that is best understood when analyzed at the level of whole images and scenes. I develop methods to: (i) robustly estimate camera motion from noisy optical flow estimates by exploiting the global, statistical relationship between the optical flow field and camera motion under projective geometry; (ii) learn representations of visual motion directly from unlabeled image sequences using learning rules derived from a formulation of image transformation in terms of its group properties; (iii) predict future frames of a video by learning a joint representation of the instantaneous state of the visual world and its motion, using a view of motion as transformations of world state. I situate this work in the broader context of ongoing computational and biological investigations into the problem of estimating motion for intelligent perception and action.
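
Contribution (i), fitting a global parametric motion model to a noisy flow field, might be sketched as a least-squares problem. This uses a simplified translation-rotation-zoom flow model rather than the dissertation's full projective-geometry estimator; the function name, parameterization, and model are illustrative assumptions:

```python
import numpy as np

def fit_global_flow(points, flows):
    """Least-squares fit of a simple global motion model to a flow field:

        flow(x, y) ~ (tx - w*y + z*x,  ty + w*x + z*y)

    i.e. image translation (tx, ty), in-plane rotation w, and zoom z.
    A toy stand-in for estimating camera motion from optical flow.
    """
    x, y = points[:, 0], points[:, 1]
    u, v = flows[:, 0], flows[:, 1]
    ones, zeros = np.ones_like(x), np.zeros_like(x)
    # Stack both flow components into one linear system A @ [tx,ty,w,z] = b.
    A = np.vstack([
        np.column_stack([ones, zeros, -y, x]),
        np.column_stack([zeros, ones, x, y]),
    ])
    b = np.concatenate([u, v])
    params, *_ = np.linalg.lstsq(A, b, rcond=None)
    return params  # (tx, ty, w, z)
```

In practice the "robust" part of the dissertation's contribution matters because raw flow estimates contain outliers; a plain least-squares fit like this one would typically be wrapped in an iterative reweighting or RANSAC loop.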