A biologically inspired spiking model of visual processing for image feature detection
To enable fast, reliable feature matching or tracking in scenes, features need to be discrete and meaningful; hence edge or corner features, commonly called interest points, are often used for this purpose. Experimental research has shown that biological vision systems use neuronal circuits to extract particular features, such as edges or corners, from visual scenes. Inspired by this behaviour, this paper proposes a biologically inspired spiking neural network for image feature extraction. Standard digital images are processed and converted to spikes in a manner similar to the processing that transforms light into spikes in the retina. Using a hierarchical spiking network, various types of biologically inspired receptive fields extract progressively more complex image features. The performance of the network is assessed by examining the repeatability of the extracted features, with visual results presented for both synthetic and real images.
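The retina-like conversion described above is often realised as intensity-to-latency coding, where brighter pixels spike earlier. The sketch below is illustrative only and not taken from the paper; the function name and `t_max` parameter are assumptions.

```python
import numpy as np

def intensity_to_latency(image, t_max=100.0):
    """Encode pixel intensities as spike latencies: brighter pixels fire
    earlier, mimicking the retina's light-to-spike conversion.
    `t_max` (an illustrative choice) is the latest allowed spike time."""
    img = image.astype(float)
    rng = np.ptp(img)  # peak-to-peak range of intensities
    norm = (img - img.min()) / rng if rng > 0 else np.zeros_like(img)
    # High intensity -> short latency (early spike).
    return t_max * (1.0 - norm)

# Toy 2x2 image: the brightest pixel (255) spikes at t=0,
# the darkest (0) at t=t_max.
lat = intensity_to_latency(np.array([[0, 255], [128, 64]]))
```

Downstream layers of a hierarchical spiking network can then treat early spikes as strong feature evidence.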
Symmetry sensitivities of Derivative-of-Gaussian filters
We consider the measurement of image structure using linear filters, in particular derivative-of-Gaussian (DtG) filters, which are an important model of V1 simple cells and widely used in computer vision, and ask whether such measurements can determine local image symmetry. We show that even a single linear filter can be sensitive to a symmetry, in the sense that specific responses of the filter can rule it out. We state and prove a necessary and sufficient, readily computable criterion for filter symmetry-sensitivity. We use it to show that the six filters in a second-order DtG family have patterns of joint sensitivity that are distinct for 12 different classes of symmetry. This rich symmetry-sensitivity adds to the properties that make DtG filters well suited for probing local image structure, and provides a set of landmark responses suitable to serve as the foundation of a non-arbitrary system of feature categories.
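The sense in which a single filter response can rule out a symmetry can be demonstrated with a first-order DtG filter: the x-derivative kernel is odd in x, so its inner product with any patch that is mirror-symmetric about the vertical axis is exactly zero; a nonzero response therefore rules that symmetry out. This is a minimal sketch under assumed `sigma` and `radius` values, not the paper's construction.

```python
import numpy as np

def dtg_kernels(sigma=2.0, radius=6):
    """Sample a 2D Gaussian and its first x-derivative on a grid.
    `sigma` and `radius` are illustrative choices."""
    ax = np.arange(-radius, radius + 1)
    xx, yy = np.meshgrid(ax, ax)
    g = np.exp(-(xx**2 + yy**2) / (2 * sigma**2))
    g /= g.sum()                      # normalise the Gaussian
    gx = -(xx / sigma**2) * g         # first x-derivative: odd in x
    return g, gx

g, gx = dtg_kernels()
# Build a patch that is left-right mirror-symmetric in x.
patch = np.random.rand(13, 13)
patch = patch + patch[:, ::-1]
# Filter response = inner product of kernel and patch. Because gx is
# odd in x and the patch is even in x, the terms cancel pairwise and
# the response is zero (up to floating-point error).
response = float((gx * patch).sum())
```

A measurably nonzero `response` on some patch would thus rule out left-right mirror symmetry for that patch.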
Event-based Vision: A Survey
Event cameras are bio-inspired sensors that differ from conventional frame
cameras: Instead of capturing images at a fixed rate, they asynchronously
measure per-pixel brightness changes, and output a stream of events that encode
the time, location and sign of the brightness changes. Event cameras offer
attractive properties compared to traditional cameras: high temporal resolution
(in the order of microseconds), very high dynamic range (140 dB vs. 60 dB), low
power consumption, and high pixel bandwidth (on the order of kHz) resulting in
reduced motion blur. Hence, event cameras hold great potential for robotics
and computer vision in scenarios that are challenging for traditional cameras,
such as those requiring low latency, high speed, or high dynamic range. However, novel methods are
required to process the unconventional output of these sensors in order to
unlock their potential. This paper provides a comprehensive overview of the
emerging field of event-based vision, with a focus on the applications and the
algorithms developed to unlock the outstanding properties of event cameras. We
present event cameras from their working principle, through the actual sensors
that are available, to the tasks they have been used for, from low-level vision
(feature detection and tracking, optic flow, etc.) to high-level vision
(reconstruction, segmentation, recognition). We also discuss the techniques
developed to process events, including learning-based techniques, as well as
specialized processors for these novel sensors, such as spiking neural
networks. Additionally, we highlight the challenges that remain to be tackled
and the opportunities that lie ahead in the search for a more efficient,
bio-inspired way for machines to perceive and interact with the world.
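As the abstract notes, each event encodes the time, location, and sign of a brightness change. A common (illustrative, not from the survey) first processing step is to accumulate the events in a time window into a signed 2D "event frame" that conventional vision code can consume:

```python
import numpy as np

def events_to_frame(events, width, height, t0, t1):
    """Accumulate events (t, x, y, polarity) falling in [t0, t1)
    into a signed 2D histogram of brightness changes."""
    frame = np.zeros((height, width), dtype=np.int32)
    for t, x, y, p in events:
        if t0 <= t < t1:
            frame[y, x] += 1 if p > 0 else -1  # signed change count
    return frame

# Four events; the last one falls outside the accumulation window.
events = [(0.001, 2, 1, +1), (0.002, 2, 1, +1),
          (0.003, 0, 0, -1), (0.050, 4, 4, +1)]
frame = events_to_frame(events, width=5, height=5, t0=0.0, t1=0.010)
```

Frame-based representations like this trade away the microsecond temporal resolution of the raw stream, which is why the survey also covers asynchronous, event-by-event methods such as spiking neural networks.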
A survey of visual preprocessing and shape representation techniques
Many recent theories and methods proposed for visual preprocessing and shape representation are summarized. The survey brings together research from the fields of biology, psychology, computer science, electrical engineering, and, most recently, neural networks. It was motivated by the need to preprocess images for a sparse distributed memory (SDM), but the techniques presented may also prove useful for applying other associative memories to visual pattern recognition. The material of this survey is divided into three sections: an overview of biological visual processing; methods of preprocessing (extracting parts of shape, texture, motion, and depth); and shape representation and recognition (form invariance, primitives and structural descriptions, and theories of attention).
The secret world of shrimps: polarisation vision at its best
Animal vision spans a great range of complexity, with systems evolving to
detect variations in optical intensity, distribution, colour, and polarisation.
Polarisation vision systems studied to date detect one to four channels of
linear polarisation, combining them in opponent pairs to provide
intensity-independent operation. Circular polarisation vision has never been
seen, and is widely believed to play no part in animal vision. Polarisation is
fully measured via Stokes' parameters--obtained by combined linear and circular
polarisation measurements. Optimal polarisation vision is the ability to see
Stokes' parameters: here we show that the crustacean \emph{Gonodactylus
smithii} measures the exact components required. This vision provides optimal
contrast-enhancement, and precise determination of polarisation with no
confusion-states or neutral-points--significant advantages. We emphasise that
linear and circular polarisation vision are not different modalities--both are
necessary for optimal polarisation vision, regardless of the presence of
strongly linear or circularly polarised features in the animal's environment.
Comment: 10 pages, 6 figures, 2 tables
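For reference, the Stokes parameters mentioned above can be written in their standard form (these are the textbook definitions, not notation taken from the paper), which makes clear why both linear and circular measurements are required:

```latex
\begin{aligned}
S_0 &= I_{0^\circ} + I_{90^\circ}               && \text{(total intensity)}\\
S_1 &= I_{0^\circ} - I_{90^\circ}               && \text{(horizontal vs.\ vertical linear)}\\
S_2 &= I_{45^\circ} - I_{135^\circ}             && \text{(diagonal linear)}\\
S_3 &= I_{\mathrm{RCP}} - I_{\mathrm{LCP}}      && \text{(right vs.\ left circular)}
\end{aligned}
```

Here $I_\theta$ is the intensity behind a linear polariser at angle $\theta$, and $I_{\mathrm{RCP}}$, $I_{\mathrm{LCP}}$ are the intensities behind right- and left-circular analysers; $S_3$ is inaccessible to purely linear polarisation vision, which is the point the abstract emphasises.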