Event-based Vision: A Survey
Event cameras are bio-inspired sensors that differ from conventional frame
cameras: Instead of capturing images at a fixed rate, they asynchronously
measure per-pixel brightness changes, and output a stream of events that encode
the time, location and sign of the brightness changes. Event cameras offer
attractive properties compared to traditional cameras: high temporal resolution
(in the order of microseconds), very high dynamic range (140 dB vs. 60 dB), low
power consumption, and high pixel bandwidth (on the order of kHz) resulting in
reduced motion blur. Hence, event cameras have a large potential for robotics
and computer vision in scenarios that are challenging for traditional cameras,
such as those requiring low latency, high speed, and high dynamic range. However, novel methods are
required to process the unconventional output of these sensors in order to
unlock their potential. This paper provides a comprehensive overview of the
emerging field of event-based vision, with a focus on the applications and the
algorithms developed to unlock the outstanding properties of event cameras. We
present event cameras from their working principle, the actual sensors that are
available and the tasks that they have been used for, from low-level vision
(feature detection and tracking, optic flow, etc.) to high-level vision
(reconstruction, segmentation, recognition). We also discuss the techniques
developed to process events, including learning-based techniques, as well as
specialized processors for these novel sensors, such as spiking neural
networks. Additionally, we highlight the challenges that remain to be tackled
and the opportunities that lie ahead in the search for a more efficient,
bio-inspired way for machines to perceive and interact with the world.
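The per-event encoding described above (timestamp, pixel location, and polarity sign) can be sketched with a minimal data structure. The names and the naive polarity-accumulation step below are illustrative assumptions, not code from the survey:

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Event:
    t: float       # timestamp in seconds (real sensors: microsecond resolution)
    x: int         # pixel column
    y: int         # pixel row
    polarity: int  # +1 for a brightness increase, -1 for a decrease

def accumulate(events: List[Event], width: int, height: int):
    """Sum event polarities per pixel to form a simple 2D event frame,
    one common (but lossy) way to feed events to frame-based algorithms."""
    frame = [[0] * width for _ in range(height)]
    for e in events:
        frame[e.y][e.x] += e.polarity
    return frame

events = [Event(1e-6, 2, 1, +1), Event(3e-6, 2, 1, +1), Event(5e-6, 0, 0, -1)]
frame = accumulate(events, width=4, height=3)
# frame[1][2] == 2 (two positive events), frame[0][0] == -1 (one negative event)
```

Accumulation discards the fine-grained timing that gives event cameras their microsecond temporal resolution, which is exactly why the survey stresses that novel, event-native processing methods are needed.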
EyeRIS: A General-Purpose System for Eye Movement Contingent Display Control
In experimental studies of visual performance, the need often emerges to modify the stimulus according to the eye movements performed by the subject. The methodology of Eye Movement-Contingent Display (EMCD) enables accurate control of the position and motion of the stimulus on the retina. EMCD procedures have been used successfully in many areas of vision science, including studies of visual attention, eye movements, and physiological characterization of neuronal response properties. Unfortunately, the difficulty of real-time programming and the unavailability of flexible and economical systems that can be easily adapted to the diversity of experimental needs and laboratory setups have prevented the widespread use of EMCD control. This paper describes EyeRIS, a general-purpose system for performing EMCD experiments on a Windows computer. Based on a digital signal processor with analog and digital interfaces, this integrated hardware and software system is responsible for sampling and processing oculomotor signals and subject responses, and for modifying the stimulus displayed on a CRT according to the gaze-contingent procedure specified by the experimenter. EyeRIS is designed to update the stimulus within a delay of 10 ms. To thoroughly evaluate EyeRIS' performance, this study (a) examines the response of the system in a number of EMCD procedures and computational benchmarking tests, (b) compares the accuracy of implementation of one particular EMCD procedure, retinal stabilization, to that produced by a standard tool used for this task, and (c) examines EyeRIS' performance in one of the many EMCD procedures that cannot be executed by means of any other currently available device. National Institutes of Health (EY15732-01).
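The gaze-contingent update cycle described above (sample the oculomotor signal, then redraw the stimulus within a 10 ms deadline) can be sketched in outline. Here `sample_gaze` and `render_stimulus` are hypothetical stand-ins for EyeRIS's DSP-based sampling and CRT update stages, not its actual API:

```python
import time

def gaze_contingent_loop(sample_gaze, render_stimulus, n_updates=100):
    """Run n_updates sample/redraw cycles and return the worst per-cycle
    latency in milliseconds. The callables are illustrative placeholders
    for the oculomotor sampling and stimulus update stages of an EMCD system."""
    worst_ms = 0.0
    for _ in range(n_updates):
        start = time.monotonic()
        gx, gy = sample_gaze()       # read the current gaze position
        render_stimulus(gx, gy)      # redraw the stimulus contingent on gaze
        worst_ms = max(worst_ms, (time.monotonic() - start) * 1000.0)
    return worst_ms

worst = gaze_contingent_loop(lambda: (0.0, 0.0), lambda x, y: None)
# A real system would verify worst <= 10.0 against the stated 10 ms deadline.
```

The key design constraint is worst-case, not average, latency: a single cycle exceeding the deadline lets the stimulus slip on the retina, which is why EyeRIS offloads the loop to a dedicated signal processor rather than the host OS.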
Bio-Inspired Computer Vision: Towards a Synergistic Approach of Artificial and Biological Vision
To appear in CVIU. Studies in biological vision have always been a great source of inspiration for the design of computer vision algorithms. In the past, several successful methods were designed with varying degrees of correspondence with biological vision studies, ranging from purely functional inspiration to methods that utilise models primarily developed to explain biological observations. Even though it seems well recognised that computational models of biological vision can help in the design of computer vision algorithms, it is a non-trivial exercise for a computer vision researcher to mine relevant information from the biological vision literature, as very few studies in biology are organised at a task level. In this paper we aim to bridge this gap by providing a computer vision task-centric presentation of models primarily originating in biological vision studies. Not only do we revisit some of the main features of biological vision and discuss the foundations of existing computational studies modelling biological vision, but we also consider three classical computer vision tasks from a biological perspective: image sensing, segmentation and optical flow. Using this task-centric approach, we discuss well-known biological functional principles and compare them with approaches taken by computer vision. Based on this comparative analysis of computer and biological vision, we present some recent models in biological vision and highlight a few models that we think are promising for future investigations in computer vision. To this end, this paper provides new insights and a starting point for investigators interested in the design of biology-based computer vision algorithms, and paves the way for much-needed interaction between the two communities, leading to the development of synergistic models of artificial and biological vision.
A Smartphone-Based Tool for Rapid, Portable, and Automated Wide-Field Retinal Imaging.
Purpose: High-quality, wide-field retinal imaging is a valuable method for screening preventable, vision-threatening diseases of the retina. Smartphone-based retinal cameras hold promise for increasing access to retinal imaging, but variable image quality and a restricted field of view can limit their utility. We developed and clinically tested a smartphone-based system that addresses these challenges with automation-assisted imaging. Methods: The system was designed to improve smartphone retinal imaging by combining automated fixation guidance, photomontage, and multicolored illumination with optimized optics, user-tested ergonomics, and a touch-screen interface. System performance was evaluated from images of ophthalmic patients taken by nonophthalmic personnel. Two masked ophthalmologists evaluated images for abnormalities and disease severity. Results: The system automatically generated 100° retinal photomontages from five overlapping images in under 1 minute at full resolution (52.3 pixels per retinal degree), fully on-phone, revealing numerous retinal abnormalities. Feasibility of the system for diabetic retinopathy (DR) screening using the retinal photomontages was assessed in 71 diabetic patients by masked graders. The DR grade matched the dilated clinical examination exactly in 55.1% of eyes and within 1 severity level in 85.2% of eyes. For referral-warranted DR, average sensitivity was 93.3% and specificity was 56.8%. Conclusions: Automation-assisted imaging produced high-quality, wide-field retinal images that demonstrate the potential of smartphone-based retinal cameras for retinal disease screening. Translational Relevance: Enhancement of smartphone-based retinal imaging through automation and software intelligence holds great promise for increasing the accessibility of retinal screening.
Combined Learned and Classical Methods for Real-Time Visual Perception in Autonomous Driving
Autonomy, robotics, and Artificial Intelligence (AI) are among the main defining themes of next-generation societies. Among the most important applications of these technologies is driving automation, which spans from different Advanced Driver Assistance Systems (ADAS) to full self-driving vehicles. Driving automation promises to reduce accidents, increase safety, and increase access to mobility for more people, such as the elderly and the handicapped. However, one of the main challenges facing autonomous vehicles is robust perception, which can enable safe interaction and decision making. Among the many sensors available to perceive the environment, each with its own capabilities and limitations, vision is by far one of the main sensing modalities. Cameras are cheap and can provide rich information about the observed scene. Therefore, this dissertation develops a set of visual perception algorithms with a focus on autonomous driving as the target application area. This dissertation starts by addressing the problem of real-time motion estimation of an agent using only the visual input from a camera attached to it, a problem known as visual odometry. The visual odometry algorithm can achieve low drift rates over long-traveled distances. This is made possible through the innovative local mapping approach used. This visual odometry algorithm was then combined with my multi-object detection and tracking system. The tracking system operates in a tracking-by-detection paradigm where an object detector based on convolutional neural networks (CNNs) is used. Therefore, the combined system can detect and track other traffic participants both in the image domain and in the 3D world frame while simultaneously estimating vehicle motion. This is a necessary requirement for obstacle avoidance and safe navigation. Finally, the operational range of traditional monocular cameras was expanded with the capability to infer depth and thus replace stereo and RGB-D cameras.
This is accomplished through a single-stream convolutional neural network which can output both depth prediction and semantic segmentation. Semantic segmentation is the process of classifying each pixel in an image and is an important step toward scene understanding. A literature survey, algorithm descriptions, and comprehensive evaluations on real-world datasets are presented. Ph.D. dissertation, College of Engineering & Computer Science, University of Michigan. https://deepblue.lib.umich.edu/bitstream/2027.42/153989/1/Mohamed Aladem Final Dissertation.pdf
Change blindness: eradication of gestalt strategies
Arrays of eight texture-defined rectangles were used as stimuli in a one-shot change blindness (CB) task in which there was a 50% chance that one rectangle would change orientation between two successive presentations separated by an interval. CB was eliminated by cueing the target rectangle in the first stimulus, reduced by cueing in the interval, and unaffected by cueing in the second presentation. This supports the idea that a representation was formed that persisted through the interval before being 'overwritten' by the second presentation (Landman et al., 2003, Vision Research, 43, 149–164). Another possibility is that participants used some kind of grouping or Gestalt strategy. To test this we changed the spatial position of the rectangles in the second presentation by shifting them along imaginary spokes (by ±1 degree) emanating from the central fixation point. There was no significant difference in performance between this and the standard task [F(1,4)=2.565, p=0.185]. This may suggest two things: (i) Gestalt grouping is not used as a strategy in these tasks, and (ii) it gives further weight to the argument that objects may be stored in and retrieved from a pre-attentional store during this task.
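The radial displacement used in the modified task (shifting each rectangle by ±1 degree along an imaginary spoke from fixation) amounts to rescaling the point's distance from the fixation point. A minimal sketch, with the function name and the degrees-of-visual-angle coordinate convention assumed for illustration:

```python
import math

def radial_shift(x, y, delta_deg, fixation=(0.0, 0.0)):
    """Shift a point along the spoke from fixation by delta_deg visual degrees.
    Positive delta_deg moves the point outward, negative moves it inward."""
    fx, fy = fixation
    dx, dy = x - fx, y - fy
    r = math.hypot(dx, dy)         # current eccentricity from fixation
    if r == 0:
        return (x, y)              # a point at fixation has no defined spoke
    scale = (r + delta_deg) / r    # rescale eccentricity, keep direction
    return (fx + dx * scale, fy + dy * scale)

# A rectangle 5 deg to the right of fixation, shifted outward by +1 deg,
# lands approximately at (6.0, 0.0): 1 deg farther along the same spoke.
shifted = radial_shift(5.0, 0.0, 1.0)
```

Because only eccentricity changes while the angular arrangement of the eight rectangles is preserved, the manipulation disrupts absolute positions without destroying the overall configuration, which is what lets the experiment probe whether Gestalt grouping drives performance.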