
    A Neural Model of Motion Processing and Visual Navigation by Cortical Area MST

    Full text link
    Cells in the dorsal medial superior temporal cortex (MSTd) process optic flow generated by self-motion during visually guided navigation. A neural model shows how interactions between well-known neural mechanisms (log-polar cortical magnification, Gaussian motion-sensitive receptive fields, spatial pooling of motion-sensitive signals, and subtractive extraretinal eye-movement signals) lead to emergent properties that quantitatively simulate neurophysiological data on MSTd cell properties and psychophysical data on human navigation. Model cells match MSTd neuron responses to optic flow stimuli placed in different parts of the visual field, including position invariance, tuning curves, preferred spiral directions, direction reversals, average response curves, and preferred locations for stimulus motion centers. The model shows how the preferred motion direction of the most active MSTd cells can explain human judgments of self-motion direction (heading) without using complex heading templates. The model explains when extraretinal eye-movement signals are needed for accurate heading perception, when retinal input alone is sufficient, and how heading judgments depend on scene layout and rotation rate.
    Defense Research Projects Agency (N00014-92-J-4015); Office of Naval Research (N00014-92-J-1309, N00014-95-1-0409, N00014-95-1-0657, N00014-91-J-4100, N0014-94-I-0597); Air Force Office of Scientific Research (F49620-92-J-0334)
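    The heading readout described above can be illustrated with a minimal sketch. This is not the paper's actual model; the function names, the 16-cell direction bank, and the simple dot-product response rule are all illustrative assumptions. It only shows the two ideas the abstract names: a subtractive extraretinal signal removes the eye-rotation component of the flow, and heading is read out as the preferred direction of the most active cell.

```python
import numpy as np

def heading_estimate(retinal_flow, eye_rotation_flow, preferred_dirs):
    """Illustrative readout (assumed, not the paper's model).
    retinal_flow: (N, 2) flow vectors sampled across the visual field.
    eye_rotation_flow: (N, 2) flow predicted from an efference copy of
    the eye movement. preferred_dirs: (M, 2) unit vectors, one per cell."""
    translational = retinal_flow - eye_rotation_flow  # subtractive compensation
    pooled = translational.mean(axis=0)               # spatial pooling of motion signals
    # Each cell's response grows with alignment between the pooled flow
    # and the cell's preferred direction; heading = winner's preference.
    responses = preferred_dirs @ pooled
    return preferred_dirs[np.argmax(responses)]

# Example: rightward translation plus an upward rotational component
# that the extraretinal signal cancels out.
angles = np.linspace(0, 2 * np.pi, 16, endpoint=False)
dirs = np.stack([np.cos(angles), np.sin(angles)], axis=1)
flow = np.tile([1.0, 0.0], (50, 1)) + np.tile([0.0, 0.3], (50, 1))
eye = np.tile([0.0, 0.3], (50, 1))
print(heading_estimate(flow, eye, dirs))  # → [1. 0.]
```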

    Biologically inspired composite image sensor for deep field target tracking

    Get PDF
    The use of nonuniform image sensors in mobile-based computer vision applications can be an effective solution when computational burden is problematic. Nonuniform image sensors are still in their infancy: their unique qualities have not been fully investigated, nor have they been extensively applied in practice. In this dissertation a system has been developed that can perform vision tasks in both the far field and the near field. To accomplish this, a novel image sensor system was developed. Inspired by biological aspects of the visual systems found in both falcons and primates, a composite multi-camera sensor was constructed. The sensor provides an expandable visual range and excellent depth of field, and produces a single compact output image based on the log-polar retinal-cortical mapping that occurs in primates. This mapping provides scale- and rotation-tolerant processing, which in turn helps mitigate the perspective distortion found in strictly Cartesian-based sensor systems. Furthermore, the scale-tolerant representation of objects moving on trajectories parallel to the sensor's optical axis allows for fast acquisition and tracking of objects moving at high speed. To investigate how effective this combination would be for object detection and tracking at both near and far field, the system was tuned for the application of vehicle detection and tracking from a moving platform. Finally, it was shown that license plate information could easily be captured autonomously by extracting information contained in the mapped log-polar representation space. The novel composite log-polar deep-field image sensor opens new horizons for computer vision. This work demonstrates features that can benefit applications beyond high-speed vehicle tracking for driver assistance and license plate capture. Future applications envisioned include obstacle detection for high-speed trains, computer-assisted aircraft landing, and computer-assisted spacecraft docking.
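    The scale and rotation tolerance attributed to the log-polar retinal-cortical mapping above follows from a simple property of the mapping (x, y) → (log r, θ): uniform scaling of the scene becomes a pure shift along the log-r axis, and rotation a pure shift along θ. A minimal numerical sketch:

```python
import numpy as np

def to_log_polar(x, y):
    """Map a Cartesian point to log-polar coordinates (log r, theta)."""
    r = np.hypot(x, y)
    return np.log(r), np.arctan2(y, x)

x, y = 3.0, 4.0                      # r = 5
u1, v1 = to_log_polar(x, y)
u2, v2 = to_log_polar(2 * x, 2 * y)  # scale the point by 2

print(u2 - u1)  # → log(2): scaling is a translation along the log-r axis
print(v2 - v1)  # → 0.0: the angular coordinate is unchanged
```

    A matcher that is shift-tolerant in (log r, θ) space is therefore scale- and rotation-tolerant in the original image, which is what supports the fast acquisition of objects approaching along the optical axis.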

    Study of Convolutional Neural Networks for Global Parametric Motion Estimation on Log-Polar Imagery

    Get PDF
    [EN] The problem of motion estimation from images has been widely studied in the past. Although many mature solutions exist, there are still open issues and challenges to be addressed. For instance, in spite of the well-known performance of convolutional neural networks (CNNs) in many computer vision problems, only very recent work has started to explore CNNs that learn to estimate motion, as an alternative to manually designed algorithms. These few initial efforts, however, have focused on conventional Cartesian images, while other imaging models have not been studied. This work explores the as yet unknown role of CNNs in estimating global parametric motion in log-polar images. Despite its favourable properties, estimating some motion components in this model has proven particularly challenging with past approaches. It is therefore highly important to understand how CNNs behave when their input consists of log-polar images, since these involve a complex mapping in the motion model, a polar image geometry, and space-variant resolution. To this end, a CNN is considered in this work for regressing the motion parameters. Experiments on existing image datasets using synthetic image deformations reveal that, interestingly, standard CNNs can successfully learn to estimate global parametric motion on log-polar images with accuracies comparable to or better than with Cartesian images.
    This work was supported in part by the Universitat Jaume I, Castellón, Spain, through the Pla de promoció de la investigació, under Project UJI-B2018-44; and in part by the Spanish Ministerio de Ciencia, Innovación y Universidades through the Research Network under Grant RED2018-102511-T.
    Traver, VJ.; Paredes Palacios, R. (2020). Study of Convolutional Neural Networks for Global Parametric Motion Estimation on Log-Polar Imagery. IEEE Access, 8:149122-149132. https://doi.org/10.1109/ACCESS.2020.3016030
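    The "synthetic image deformations" setup described above can be sketched as follows. This is an assumed, minimal version of such a pipeline, not the paper's code: sample global similarity-motion parameters (scale, rotation, translation), warp an image with them, and use the (reference, warped) pair with the parameter vector as a CNN regression target. The parameter ranges and the nearest-neighbour warp are illustrative simplifications.

```python
import numpy as np

rng = np.random.default_rng(0)

def warp_similarity(img, s, a, tx, ty):
    """Nearest-neighbour inverse warp of a square image by a global
    similarity transform (scale s, rotation a, translation tx, ty)
    about the image centre."""
    h, w = img.shape
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    ys, xs = np.mgrid[0:h, 0:w]
    dx, dy = xs - cx, ys - cy
    # Inverse mapping: each output pixel pulls from its source location.
    cos_a, sin_a = np.cos(-a), np.sin(-a)
    sx = (cos_a * (dx - tx) - sin_a * (dy - ty)) / s + cx
    sy = (sin_a * (dx - tx) + cos_a * (dy - ty)) / s + cy
    sx = np.clip(np.round(sx).astype(int), 0, w - 1)
    sy = np.clip(np.round(sy).astype(int), 0, h - 1)
    return img[sy, sx]

def make_training_pair(img):
    """One (input pair, regression target) example; ranges are assumed."""
    params = np.array([rng.uniform(0.8, 1.2),    # scale
                       rng.uniform(-0.2, 0.2),   # rotation (rad)
                       rng.uniform(-3.0, 3.0),   # tx
                       rng.uniform(-3.0, 3.0)])  # ty
    return (img, warp_similarity(img, *params)), params

img = rng.random((64, 64))
(ref, moved), target = make_training_pair(img)  # feed to a CNN regressor
```

    The same generator applies unchanged whether `img` is a Cartesian frame or its log-polar resampling, which is what allows a like-for-like comparison of the two imaging models.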

    Hardware-software integration for particle light scatter imaging

    Get PDF
    The main purpose of this research is the implementation of a software interface. This interface shall allow particle size in a medium to be interpreted from its diffraction patterns. The literature shows extensive work on the theory of light scattering, but the experiments are cumbersome to implement. Some initial work has required the levitation of particles to isolate the difficulties associated with a flow environment. The purpose of this work, however, is the software requirements to synchronize, collect and analyze light scattering patterns. Although there are many other ways of sizing particles, it may be useful to prove the feasibility of the well-defined theory in a flow environment. The light scattering signatures from an illuminated particle are widely used in the flow cytometry area, but are obtained by other means of capturing light information. The present study could determine the specific angles of interest, allowing discrimination by size of various types of particles (e.g., blood cells).
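    The link between diffraction angle and particle size that underlies the "specific angles of interest" can be illustrated with a standard back-of-envelope relation (not taken from the thesis): in the Fraunhofer regime, the first diffraction minimum of a circular particle of diameter d falls at sin θ ≈ 1.22 λ/d, so larger particles scatter into smaller angles. The HeNe wavelength below is an assumption for illustration.

```python
import numpy as np

def first_minimum_angle(diameter_m, wavelength_m=632.8e-9):
    """Angle (degrees) of the first Fraunhofer diffraction minimum for a
    circular particle; 632.8 nm (HeNe laser) is an assumed wavelength."""
    return np.degrees(np.arcsin(1.22 * wavelength_m / diameter_m))

# Larger particle -> smaller angle, which is what enables size discrimination.
for d in (2e-6, 8e-6):  # a small and a large particle, in metres
    print(f"{d * 1e6:.0f} um particle: first minimum at "
          f"{first_minimum_angle(d):.1f} deg")
```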

    The Research Unit VolImpact: Revisiting the volcanic impact on atmosphere and climate – preparations for the next big volcanic eruption

    Get PDF
    This paper provides an overview of the scientific background and the research objectives of the Research Unit “VolImpact” (Revisiting the volcanic impact on atmosphere and climate – preparations for the next big volcanic eruption, FOR 2820). VolImpact was recently funded by the Deutsche Forschungsgemeinschaft (DFG) and started in spring 2019. The main goal of the research unit is to improve our understanding of how the climate system responds to volcanic eruptions. Such an ambitious programme is well beyond the capabilities of a single research group, as it requires expertise from complementary disciplines including aerosol microphysical modelling, cloud physics, climate modelling, and global observations of trace gas species, clouds and stratospheric aerosols. The research goals will be achieved by building on important recent advances in modelling and measurement capabilities. Examples of the advances in observations include the now daily near-global observations of multi-spectral aerosol extinction from the limb-scatter instruments OSIRIS, SCIAMACHY and OMPS-LP. In addition, the recently launched SAGE III/ISS and the upcoming satellite missions EarthCARE and ALTIUS will provide high-resolution observations of aerosols and clouds. Recent improvements in modelling capabilities within the framework of the ICON model family now enable simulations at spatial resolutions fine enough to investigate details of the evolution and dynamics of the volcanic eruptive plume using the large-eddy-resolving version, up to volcanic impacts on larger-scale circulation systems in the general circulation model version. When combined with state-of-the-art aerosol and cloud microphysical models, these approaches offer the opportunity to link eruptions directly to their climate forcing.
These advances will be exploited in VolImpact to study the effects of volcanic eruptions consistently over the full range of spatial and temporal scales involved, addressing the initial development of explosive eruption plumes (project VolPlume), the variation of stratospheric aerosol particle size and radiative forcing caused by volcanic eruptions (VolARC), the response of clouds (VolCloud), the effects of volcanic eruptions on atmospheric dynamics (VolDyn), as well as their climate impact (VolClim)

    Object tracking using log-polar transformation

    Get PDF
    In this thesis, we use the log-polar transform to solve object tracking. Object tracking in video sequences is a fundamental problem in computer vision. Even though object tracking has been studied extensively, some challenges still need to be addressed, such as appearance variations, large scale and rotation variations, and occlusion. We implemented a novel tracking algorithm which works robustly in the presence of large scale changes, rotation, occlusion, illumination changes, perspective transformations and some appearance changes. The log-polar transformation is used to achieve robustness to scale and rotation. Our object tracking approach is based on a template-matching technique. Template matching consists of extracting an example image (the template) of an object in the first frame, and then finding the region that best matches this template in subsequent frames. Within template matching, we implemented a fixed-template algorithm and a template-update algorithm. In the fixed-template algorithm we use the same template for the entire image sequence, whereas in the template-update algorithm the template is updated according to changes in the object image. The fixed-template algorithm is faster; the template-update algorithm is more robust to appearance changes in the object being tracked. The proposed object tracker is highly robust to scale, rotation, illumination changes and occlusion, with good implementation speed.
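    The fixed-template versus template-update distinction above can be sketched in a few lines. This is an illustrative toy (not the thesis implementation): normalized cross-correlation locates the template in each frame by brute-force search, and the update variant replaces the template with the newly matched patch so it can follow appearance changes, at the cost of one extra copy per frame and the risk of drift.

```python
import numpy as np

def ncc(a, b):
    """Normalized cross-correlation of two equally sized patches."""
    a = a - a.mean()
    b = b - b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return (a * b).sum() / denom if denom else 0.0

def match(frame, template):
    """Exhaustive search for the best-matching template position."""
    th, tw = template.shape
    best, best_pos = -2.0, (0, 0)
    for y in range(frame.shape[0] - th + 1):
        for x in range(frame.shape[1] - tw + 1):
            score = ncc(frame[y:y + th, x:x + tw], template)
            if score > best:
                best, best_pos = score, (y, x)
    return best_pos

def track(frames, template, update=False):
    """Fixed-template tracking by default; update=True re-extracts the
    template from each matched location (the template-update variant)."""
    positions = []
    for frame in frames:
        y, x = match(frame, template)
        positions.append((y, x))
        if update:
            template = frame[y:y + template.shape[0], x:x + template.shape[1]]
    return positions

# Toy sequence: a textured 3x3 patch moving one pixel right per frame.
pattern = np.arange(9.0).reshape(3, 3)
frames = []
for t in range(3):
    f = np.zeros((10, 10))
    f[4:7, 2 + t:5 + t] = pattern
    frames.append(f)
print(track(frames, frames[0][4:7, 2:5]))  # → [(4, 2), (4, 3), (4, 4)]
```

    Running the same loop on log-polar-resampled frames is what makes the matcher tolerant to scale and rotation, since those become shifts in the resampled image.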

    Learning Actions and Control of Focus of Attention with a Log-Polar-like Sensor

    Full text link
    With the long-term goal of reducing image processing time on an autonomous mobile robot in mind, we explore in this paper the use of log-polar-like image data with gaze control. The gaze control is performed not on the Cartesian image but on the log-polar-like image data. For this, we start from the classic deep reinforcement learning approach for Atari games. We extend an A3C deep RL approach with an LSTM network, and we learn the policy for playing three Atari games together with a policy for gaze control. While the Atari games already use low-resolution images of 80 by 80 pixels, we are able to further reduce the number of image pixels by a factor of 5 without losing any gaming performance.
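    The factor-of-5 pixel reduction mentioned above is easy to account for with a log-polar-like resampling: an R-ring by W-wedge grid needs only R × W samples regardless of the Cartesian frame size. The grid dimensions below are assumptions chosen purely to illustrate the arithmetic, not the paper's actual sensor layout.

```python
# Back-of-envelope pixel budget for a log-polar-like resampling of the
# 80x80 Atari frame. The 32x40 grid is an illustrative assumption.
cartesian = 80 * 80       # 6400 pixels in the Cartesian frame
rings, wedges = 32, 40    # assumed log-polar-like sampling grid
log_polar = rings * wedges  # 1280 samples

print(cartesian / log_polar)  # → 5.0
```

    Because the angular wedges cover ever-larger areas toward the periphery, the retained samples stay dense at the gaze point, which is why a gaze-control policy pairs naturally with this representation.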