
    A 'bright zone' in male hoverfly (Eristalis tenax) eyes and associated faster motion detection and increased contrast sensitivity

    Eyes of the hoverfly Eristalis tenax are sexually dimorphic such that males have a fronto-dorsal region of large facets. In contrast to other large flies in which large facets are associated with a decreased interommatidial angle to form a dorsal 'acute zone' of increased spatial resolution, we show that a dorsal region of large facets in males appears to form a 'bright zone' of increased light capture without substantially increased spatial resolution. Theoretically, more light allows for increased performance in tasks such as motion detection. To determine the effect of the bright zone on motion detection, local properties of wide field motion detecting neurons were investigated using localized sinusoidal gratings. The pattern of local preferred directions of one class of these cells, the HS cells, in Eristalis is similar to that reported for the blowfly Calliphora. The bright zone seems to contribute to local contrast sensitivity; high contrast sensitivity exists in portions of the receptive field served by large diameter facet lenses of males and is not observed in females. Finally, temporal frequency tuning is also significantly faster in this frontal portion of the visual world, particularly in males, where it overcompensates for the higher spatial-frequency tuning and shifts the predicted local velocity optimum to higher speeds. These results indicate that increased retinal illuminance due to the bright zone of males is used to enhance contrast sensitivity and speed motion detector responses. Additionally, local neural properties vary across the visual world in a way not expected if HS cells serve purely as matched filters to measure yaw-induced visual motion.
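    As a rough illustration of the last point, correlator-type models predict a local velocity optimum equal to the optimal temporal frequency divided by the optimal spatial frequency, so faster temporal tuning shifts the optimum to higher speeds even when spatial tuning also increases. The sketch below only demonstrates that ratio; the function name and the numbers are illustrative assumptions, not measurements from the paper.

```python
# Illustrative sketch (hypothetical values, not data from the paper):
# for correlator-type motion detectors, the predicted velocity optimum is
# the temporal-frequency optimum divided by the spatial-frequency optimum.

def velocity_optimum(ft_opt_hz, fs_opt_cpd):
    """Predicted velocity optimum (deg/s) from temporal (Hz) and
    spatial (cycles/deg) frequency optima."""
    return ft_opt_hz / fs_opt_cpd

# Made-up example: a higher temporal optimum more than offsets a higher
# spatial optimum, moving the predicted velocity optimum to faster speeds.
frontal = velocity_optimum(ft_opt_hz=8.0, fs_opt_cpd=0.10)   # 80 deg/s
lateral = velocity_optimum(ft_opt_hz=5.0, fs_opt_cpd=0.08)   # 62.5 deg/s
print(frontal, lateral)
```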

    The influence of visual landscape on the free flight behavior of the fruit fly Drosophila melanogaster

    To study the visual cues that control steering behavior in the fruit fly Drosophila melanogaster, we reconstructed three-dimensional trajectories from images taken by stereo infrared video cameras during free flight within structured visual landscapes. Flies move through their environment using a series of straight flight segments separated by rapid turns, termed saccades, during which the fly alters course by approximately 90° in less than 100 ms. Altering the amount of background visual contrast caused significant changes in the fly’s translational velocity and saccade frequency. Between saccades, asymmetries in the estimates of optic flow induce gradual turns away from the side experiencing a greater motion stimulus, a behavior opposite to that predicted by a flight control model based upon optomotor equilibrium. To determine which features of visual motion trigger saccades, we reconstructed the visual environment from the fly’s perspective for each position in the flight trajectory. From these reconstructions, we modeled the fly’s estimation of optic flow on the basis of a two-dimensional array of Hassenstein–Reichardt elementary motion detectors and, through spatial summation, the large-field motion stimuli experienced by the fly during the course of its flight. Event-triggered averages of the large-field motion preceding each saccade suggest that image expansion is the signal that triggers each saccade. The asymmetry in output of the local motion detector array prior to each saccade influences the direction (left versus right) but not the magnitude of the rapid turn. Once a saccade is initiated, visual feedback does not appear to influence its kinematics further. The total expansion experienced before a saccade was similar for flight within both uniform and visually textured backgrounds. In summary, our data suggest that complex behavioral patterns seen during free flight emerge from interactions between the flight control system and the visual environment.
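    A minimal sketch of the kind of Hassenstein–Reichardt elementary motion detector array and spatial summation mentioned above. The low-pass "delay" time constant, the sampling, the grating parameters, and the helper names (`lowpass`, `hr_emd_array`) are illustrative assumptions, not the model parameters used in the study.

```python
import numpy as np

def lowpass(x, dt, tau):
    """First-order low-pass filter used as the 'delay' arm of each detector."""
    y = np.zeros_like(x)
    a = dt / (tau + dt)
    for t in range(1, len(x)):
        y[t] = y[t - 1] + a * (x[t] - y[t - 1])
    return y

def hr_emd_array(luminance, dt=1e-3, tau=0.035):
    """Hassenstein-Reichardt EMDs over a 1-D row of photoreceptor signals.

    luminance: array of shape (time, n_receptors).
    Returns local motion signals of shape (time, n_receptors - 1);
    positive values indicate rightward motion under this sign convention.
    """
    delayed = np.apply_along_axis(lowpass, 0, luminance, dt, tau)
    left, right = luminance[:, :-1], luminance[:, 1:]
    dleft, dright = delayed[:, :-1], delayed[:, 1:]
    return dleft * right - dright * left

# Wide-field signal by simple spatial summation, standing in for the
# summation stage described in the abstract.
t = np.arange(0, 1, 1e-3)
x = np.arange(60)
grating = np.sin(2 * np.pi * (0.1 * x[None, :] - 3.0 * t[:, None]))  # drifts rightward
local = hr_emd_array(grating)
wide_field = local.sum(axis=1)
print(wide_field[-1] > 0)  # True: net rightward motion signal
```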

    Neural Dynamics of Motion Perception: Direction Fields, Apertures, and Resonant Grouping

    A neural network model of global motion segmentation by visual cortex is described. Called the Motion Boundary Contour System (BCS), the model clarifies how ambiguous local movements on a complex moving shape are actively reorganized into a coherent global motion signal. Unlike many previous researchers, we analyse how a coherent motion signal is imparted to all regions of a moving figure, not only to regions at which unambiguous motion signals exist. The model hereby suggests a solution to the global aperture problem. The Motion BCS describes how preprocessing of motion signals by a Motion Oriented Contrast Filter (MOC Filter) is joined to long-range cooperative grouping mechanisms in a Motion Cooperative-Competitive Loop (MOCC Loop) to control phenomena such as motion capture. The Motion BCS is computed in parallel with the Static BCS of Grossberg and Mingolla (1985a, 1985b, 1987). Homologous properties of the Motion BCS and the Static BCS, specialized to process movement directions and static orientations, respectively, support a unified explanation of many data about static form perception and motion form perception that have heretofore been unexplained or treated separately. Predictions about microscopic computational differences of the parallel cortical streams V1 → MT and V1 → V2 → MT, notably the magnocellular thick-stripe and parvocellular interstripe streams, are made. It is shown how the Motion BCS can compute motion directions that may be synthesized from multiple orientations with opposite directions-of-contrast. Interactions of model simple cells, complex cells, hypercomplex cells, and bipole cells are described, with special emphasis given to new functional roles in direction disambiguation for endstopping at multiple processing stages and to the dynamic interplay of spatially short-range and long-range interactions. Air Force Office of Scientific Research (90-0175); Defense Advanced Research Projects Agency (90-0083); Office of Naval Research (N00014-91-J-4100).

    Bioinspired engineering of exploration systems for NASA and DoD

    A new approach called bioinspired engineering of exploration systems (BEES) and its value for solving pressing NASA and DoD needs are described. Insects (for example honeybees and dragonflies) cope remarkably well with their world, despite possessing a brain containing less than 0.01% as many neurons as the human brain. Although most insects have immobile eyes with fixed focus optics and lack stereo vision, they use a number of ingenious, computationally simple strategies for perceiving their world in three dimensions and navigating successfully within it. We are distilling selected insect-inspired strategies to obtain novel solutions for navigation, hazard avoidance, altitude hold, stable flight, terrain following, and gentle deployment of payload. Such functionality provides potential solutions for future autonomous robotic space and planetary explorers. A BEES approach to developing lightweight, low-power autonomous flight systems should be useful for flight control of such biomorphic flyers for both NASA and DoD needs. Recent biological studies of mammalian retinas confirm that representations of multiple features of the visual world are systematically parsed and processed in parallel. Features are mapped to a stack of cellular strata within the retina. Each of these representations can be efficiently modeled in semiconductor cellular nonlinear network (CNN) chips. We describe recent breakthroughs in exploring the feasibility of the unique blending of insect strategies of navigation with mammalian visual search, pattern recognition, and image understanding into hybrid biomorphic flyers for future planetary and terrestrial applications. We describe a few future mission scenarios for Mars exploration, uniquely enabled by these newly developed biomorphic flyers.

    Action Recognition in Videos: from Motion Capture Labs to the Web

    This paper presents a survey of human action recognition approaches based on visual data recorded from a single video camera. We propose an organizing framework which highlights the evolution of the area, with techniques moving from heavily constrained motion-capture scenarios towards more challenging, realistic, "in the wild" videos. The proposed organization is based on the representation used as input for the recognition task, emphasizing the hypotheses assumed and, consequently, the constraints imposed on the type of video that each technique is able to address. Making these hypotheses and constraints explicit makes the framework particularly useful for selecting a method for a given application. Another advantage of the proposed organization is that it allows the newest approaches to be categorized seamlessly alongside traditional ones, while providing an insightful perspective on the evolution of the action recognition task up to now. That perspective is the basis for the discussion at the end of the paper, where we also present the main open issues in the area. Comment: Preprint submitted to CVIU, survey paper, 46 pages, 2 figures, 4 tables.

    Contrast sensitivity of insect motion detectors to natural images

    How do animals regulate self-movement despite large variation in the luminance contrast of the environment? Insects are capable of regulating flight speed based on the velocity of image motion, but the mechanisms for this are unclear. The Hassenstein–Reichardt correlator model and elaborations can accurately predict responses of motion detecting neurons under many conditions but fail to explain the apparent lack of spatial pattern and contrast dependence observed in freely flying bees and flies. To investigate this apparent discrepancy, we recorded intracellularly from horizontal-sensitive (HS) motion detecting neurons in the hoverfly while displaying moving images of natural environments. Contrary to results obtained with grating patterns, we show these neurons encode the velocity of natural images largely independently of the particular image used, despite a threefold range of contrast. This invariance in response to natural images is observed in both strongly and minimally motion-adapted neurons but is sensitive to artificial manipulations in contrast. Current models of these cells account for some, but not all, of the observed insensitivity to image contrast. We conclude that fly visual processing may be matched to commonalities between natural scenes, enabling accurate estimates of velocity largely independent of the particular scene.
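    For reference, a textbook Hassenstein–Reichardt correlator predicts a mean response to a drifting grating that grows with the square of pattern contrast, which is why the reported contrast invariance for natural images is notable. The snippet below is a generic illustration of that scaling under the assumption of a first-order low-pass delay filter; the function name and parameter values are hypothetical, not the paper's recordings or analysis.

```python
import numpy as np

# Generic illustration (not the paper's analysis): the steady-state opponent
# correlator response to a drifting sinusoid is proportional to contrast
# squared, so halving contrast quarters the predicted response.

def correlator_mean_response(contrast, ft_hz, tau_s=0.035, phase_rad=0.6):
    """Mean opponent correlator response to a drifting sinusoidal grating,
    assuming a first-order low-pass 'delay' filter with time constant tau_s
    and an inter-receptor phase offset phase_rad."""
    w = 2 * np.pi * ft_hz
    gain = 1.0 / np.sqrt(1.0 + (w * tau_s) ** 2)   # filter gain at ft
    theta = np.arctan(w * tau_s)                   # filter phase lag at ft
    return contrast ** 2 * gain * np.sin(theta) * np.sin(phase_rad)

print(correlator_mean_response(1.0, 5.0) / correlator_mean_response(0.5, 5.0))  # 4.0
```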

    Deception Detection in Videos

    We present a system for covert automated deception detection in real-life courtroom trial videos. We study the importance of different modalities like vision, audio and text for this task. On the vision side, our system uses classifiers trained on low level video features which predict human micro-expressions. We show that predictions of high-level micro-expressions can be used as features for deception prediction. Surprisingly, IDT (Improved Dense Trajectory) features, which have been widely used for action recognition, are also very good at predicting deception in videos. We fuse the scores of classifiers trained on IDT features and high-level micro-expressions to improve performance. MFCC (Mel-frequency Cepstral Coefficients) features from the audio domain also provide a significant boost in performance, while information from transcripts is not very beneficial for our system. Using various classifiers, our automated system obtains an AUC of 0.877 (10-fold cross-validation) when evaluated on subjects which were not part of the training set. Even though state-of-the-art methods use human annotations of micro-expressions for deception detection, our fully automated approach outperforms them by 5%. When combined with human annotations of micro-expressions, our AUC improves to 0.922. We also present results of a user study analyzing how well average humans perform on this task, which modalities they use for deception detection, and how they perform if only one modality is accessible. Our project page can be found at https://doubaibai.github.io/DARE/. Comment: AAAI 2018, project page: https://doubaibai.github.io/DARE
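    A minimal sketch of the kind of late, score-level fusion and AUC evaluation described above, using scikit-learn. The feature matrices, classifier choices, fusion weight, and the plain 10-fold split (rather than the subject-independent evaluation reported in the abstract) are placeholder assumptions, not the authors' pipeline.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import StratifiedKFold

def fused_auc(X_idt, X_micro, y, weight=0.5, n_splits=10, seed=0):
    """Late fusion: average per-modality classifier scores, then compute AUC.

    X_idt, X_micro: per-video feature matrices (stand-ins for encoded IDT
    features and predicted micro-expression scores); y: 0/1 deception labels.
    """
    scores = np.zeros(len(y))
    cv = StratifiedKFold(n_splits=n_splits, shuffle=True, random_state=seed)
    for train, test in cv.split(X_idt, y):
        clf_a = LogisticRegression(max_iter=1000).fit(X_idt[train], y[train])
        clf_b = LogisticRegression(max_iter=1000).fit(X_micro[train], y[train])
        scores[test] = (weight * clf_a.predict_proba(X_idt[test])[:, 1]
                        + (1 - weight) * clf_b.predict_proba(X_micro[test])[:, 1])
    return roc_auc_score(y, scores)

# Synthetic stand-in data, just to show the call pattern.
rng = np.random.default_rng(0)
y = rng.integers(0, 2, size=120)
X_idt = rng.normal(size=(120, 32)) + y[:, None] * 0.3
X_micro = rng.normal(size=(120, 8)) + y[:, None] * 0.2
print(fused_auc(X_idt, X_micro, y))
```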

    A biologically inspired spiking model of visual processing for image feature detection

    To enable fast, reliable feature matching or tracking in scenes, features need to be discrete and meaningful; hence edge or corner features, commonly called interest points, are often used for this purpose. Experimental research has illustrated that biological vision systems use neuronal circuits to extract particular features such as edges or corners from visual scenes. Inspired by this biological behaviour, this paper proposes a biologically inspired spiking neural network for the purpose of image feature extraction. Standard digital images are processed and converted to spikes in a manner similar to the processing that transforms light into spikes in the retina. Using a hierarchical spiking network, various types of biologically inspired receptive fields are used to extract progressively complex image features. The performance of the network is assessed by examining the repeatability of extracted features with visual results presented using both synthetic and real images.
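    A minimal sketch of the kind of front end such a network might use: image intensity is converted to spike latencies (brighter pixels fire earlier), and a difference-of-Gaussians centre-surround filter, a common model of retinal receptive fields, emphasizes edges. The encoding scheme, parameters, and helper names here are illustrative assumptions, not the network described in the paper.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def intensity_to_latency(image, t_max=50.0):
    """Latency coding: brighter pixels spike earlier (times in ms)."""
    norm = (image - image.min()) / (image.max() - image.min() + 1e-9)
    return t_max * (1.0 - norm)

def dog_response(image, sigma_center=1.0, sigma_surround=2.0):
    """Difference-of-Gaussians centre-surround response (edge emphasis)."""
    return gaussian_filter(image, sigma_center) - gaussian_filter(image, sigma_surround)

# Toy image with a vertical edge: the DoG output is strongest at the edge,
# so those pixels produce the earliest spikes in the latency code.
img = np.zeros((32, 32))
img[:, 16:] = 1.0
edges = np.abs(dog_response(img))
spike_times = intensity_to_latency(edges)
print(spike_times.min(), spike_times.max())
```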