    A micropower centroiding vision processor

    Event-based Vision: A Survey

    Event cameras are bio-inspired sensors that differ from conventional frame cameras: instead of capturing images at a fixed rate, they asynchronously measure per-pixel brightness changes and output a stream of events that encode the time, location, and sign of the brightness changes. Event cameras offer attractive properties compared to traditional cameras: high temporal resolution (on the order of microseconds), very high dynamic range (140 dB vs. 60 dB), low power consumption, and high pixel bandwidth (on the order of kHz), resulting in reduced motion blur. Hence, event cameras have large potential for robotics and computer vision in scenarios that are challenging for traditional cameras, such as low-latency, high-speed, and high-dynamic-range applications. However, novel methods are required to process the unconventional output of these sensors in order to unlock their potential. This paper provides a comprehensive overview of the emerging field of event-based vision, with a focus on the applications and the algorithms developed to unlock the outstanding properties of event cameras. We present event cameras from their working principle, the actual sensors that are available, and the tasks that they have been used for, from low-level vision (feature detection and tracking, optic flow, etc.) to high-level vision (reconstruction, segmentation, recognition). We also discuss the techniques developed to process events, including learning-based techniques, as well as specialized processors for these novel sensors, such as spiking neural networks. Additionally, we highlight the challenges that remain to be tackled and the opportunities that lie ahead in the search for a more efficient, bio-inspired way for machines to perceive and interact with the world.
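
    As a minimal sketch of the data model described above (field and function names here are illustrative, not a specific camera API), each event can be represented as a (timestamp, x, y, polarity) tuple, and a stream of events can be accumulated into a conventional 2D image for frame-based processing:

        from collections import namedtuple

        import numpy as np

        # One event: timestamp in microseconds, pixel location, and the sign
        # of the brightness change (+1 increase, -1 decrease).
        Event = namedtuple("Event", ["t_us", "x", "y", "polarity"])

        def accumulate_events(events, width, height):
            """Sum event polarities per pixel to build a simple event image."""
            img = np.zeros((height, width), dtype=np.int32)
            for e in events:
                img[e.y, e.x] += e.polarity
            return img

        # Three events arriving within microseconds of each other.
        stream = [Event(10, 5, 3, +1), Event(12, 5, 3, +1), Event(15, 7, 2, -1)]
        frame = accumulate_events(stream, width=16, height=16)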

    Object oriented image segmentation on the CNNUC3 chip

    We show how a complex object-oriented image analysis algorithm can be implemented on a CNNUM chip for video coding. Besides the applied linear operations, several gray-scale nonlinear template operations are also emulated using algorithmic solutions.

    Office of Naval Research (USA) NICOP N68171-98-C-9004; European Commission DICTAM IST-1999-19007, TIC 99082
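
    The abstract does not reproduce the templates themselves; as a rough, hypothetical illustration of what a linear CNN template operation does (the values below are made up, not the CNNUC3 templates), each cell combines its 3x3 neighborhood through a template B and a bias z, followed by the standard saturating output nonlinearity:

        import numpy as np
        from scipy.signal import convolve2d

        # Hypothetical 3x3 control template and bias; a Laplacian-like
        # edge-extracting template is used purely for illustration.
        B = np.array([[0.0,  1.0, 0.0],
                      [1.0, -4.0, 1.0],
                      [0.0,  1.0, 0.0]])
        z = 0.0

        def linear_cnn_template(u, B, z):
            """Feed-forward linear CNN template: each cell weights its 3x3
            neighborhood by B and adds the bias z."""
            return convolve2d(u, B, mode="same", boundary="symm") + z

        def saturate(x):
            """Standard CNN output nonlinearity: piecewise-linear clip to [-1, 1]."""
            return np.clip(x, -1.0, 1.0)

        u = np.random.rand(64, 64) * 2.0 - 1.0   # input image scaled to [-1, 1]
        y = saturate(linear_cnn_template(u, B, z))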

    Toward color image segmentation in analog VLSI: Algorithm and hardware

    Standard techniques for segmenting color images are based on finding normalized RGB discontinuities, color histogramming, or clustering techniques in RGB or CIE color spaces. The use of the psychophysical variable hue in HSI space has not been popular due to its numerical instability at low saturations. In this article, we propose the use of a simplified hue description suitable for implementation in analog VLSI. We demonstrate that if the integrated white condition holds, hue is invariant to certain types of highlights, shading, and shadows. This is due to the additive/shift invariance property, a property that other color variables lack. The more restrictive uniformly varying lighting model associated with the multiplicative/scale invariance property shared by both hue and normalized RGB allows invariance to transparencies and to simple models of shading and shadows. Using binary hue discontinuities in conjunction with a first-order type of surface interpolation, we demonstrate these invariant properties and compare them against the performance of RGB, normalized RGB, and CIE color spaces. We argue that working in HSI space offers an effective method for segmenting scenes in the presence of confounding cues due to shading, transparency, highlights, and shadows. Based on this work, we designed and fabricated for the first time an analog CMOS VLSI circuit with on-board phototransistor input that computes normalized color and hue.
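
    The abstract does not spell out the simplified hue descriptor; one standard form, built purely from channel differences, illustrates why hue enjoys both the additive/shift and the multiplicative/scale invariance properties (a sketch of the idea, not the fabricated circuit's computation):

        import numpy as np

        def hue(r, g, b):
            """Hue from RGB using only channel differences, so adding a constant
            to all channels (shift) or scaling all channels by a positive
            constant leaves it unchanged."""
            return np.arctan2(np.sqrt(3.0) * (g - b), 2.0 * r - g - b)

        r, g, b = 0.6, 0.3, 0.1
        h0 = hue(r, g, b)
        h_shift = hue(r + 0.2, g + 0.2, b + 0.2)   # additive highlight/shift term
        h_scale = hue(0.5 * r, 0.5 * g, 0.5 * b)   # uniform shading/scale term
        assert np.isclose(h0, h_shift) and np.isclose(h0, h_scale)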

    Analog hardware for detecting discontinuities in early vision

    The detection of discontinuities in motion, intensity, color, and depth is a well-studied but difficult problem in computer vision [6]. We discuss the first hardware circuit that explicitly implements either analog or binary line processes in a deterministic fashion. Specifically, we show that the processes of smoothing (using a first-order or membrane type of stabilizer) and of segmentation can be implemented by a single, two-terminal nonlinear voltage-controlled resistor, the “resistive fuse”; and we derive its current-voltage relationship from a number of deterministic approximations to the underlying stochastic Markov random field algorithms. The concept that the quadratic variational functionals of early vision can be solved via linear resistive networks minimizing power dissipation [37] can be extended to non-convex variational functionals, with analog or binary line processes, solved by nonlinear resistive networks minimizing the electrical co-content. We have successfully designed, tested, and demonstrated an analog CMOS VLSI circuit that contains a 1D resistive network of fuses implementing piecewise-smooth surface interpolation. We furthermore demonstrate the segmenting abilities of these analog and deterministic “line processes” by numerically simulating the nonlinear resistive network computing optical flow in the presence of motion discontinuities. Finally, we discuss various circuit implementations of the optical flow computation using these circuits.
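
    The derived current-voltage relationship is not reproduced in the abstract; the sketch below uses a generic smooth fuse characteristic (ohmic for small voltage differences, shutting off above a threshold) and relaxes a 1D network of such fuses toward a piecewise-smooth fit, with all parameter values chosen as assumptions rather than taken from the circuit:

        import numpy as np

        def fuse_current(v, g=1.0, v_t=0.5, beta=20.0):
            """Illustrative resistive-fuse I-V curve: ohmic (I = g*v) for small
            voltage differences, smoothly shutting off for |v| above v_t so
            that large discontinuities are not smoothed away."""
            return g * v / (1.0 + np.exp(beta * (v * v - v_t * v_t)))

        def relax(d, g_data=1.0, steps=2000, dt=0.05):
            """Relax the 1D fuse network toward a piecewise-smooth fit of the
            noisy data d (node capacitances normalized to 1)."""
            u = d.copy()
            for _ in range(steps):
                i_fuse = fuse_current(np.diff(u))   # current through each fuse
                lateral = np.zeros_like(u)
                lateral[:-1] += i_fuse              # inflow from the right neighbor
                lateral[1:] -= i_fuse               # matching outflow at that neighbor
                u += dt * (g_data * (d - u) + lateral)
            return u

        # Noisy step edge: noise is smoothed within each segment while the
        # fuse across the large step conducts almost nothing, preserving it.
        d = np.concatenate([np.zeros(32), np.ones(32)]) + 0.1 * np.random.randn(64)
        u = relax(d)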

    Continuous-time segmentation networks

    Segmentation is a basic problem in computer vision. The tiny-tanh network, a continuous-time network that segments scenes based upon intensity, motion, or depth, is introduced. The tiny-tanh algorithm maps naturally to analog circuitry, since it was inspired by previous experiments with analog VLSI segmentation hardware. A convex Lyapunov energy is utilized so that the system does not get stuck in local minima. No annealing algorithms of any kind are necessary, a sharp contrast to previous software/hardware solutions to this problem.
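
    Assuming the nonlinearity is a saturating tanh element between neighboring nodes (suggested by the name, not stated in the abstract), its co-content per edge is (lam/beta)*log(cosh(beta*dv)), which is convex; the network's total Lyapunov energy is then convex, so continuous-time relaxation needs no annealing. A hedged sketch with guessed parameter values:

        import numpy as np

        def relax_tiny_tanh(d, lam=0.5, beta=20.0, g_data=1.0, steps=3000, dt=0.02):
            """Continuous-time relaxation with a saturating tanh element between
            neighbors. The element's co-content is convex, so the total energy
            is a convex Lyapunov function and the gradient flow cannot get
            trapped in local minima. Values are illustrative guesses."""
            u = d.copy()
            for _ in range(steps):
                i_lat = lam * np.tanh(beta * np.diff(u))   # saturating lateral current
                lateral = np.zeros_like(u)
                lateral[:-1] += i_lat
                lateral[1:] -= i_lat
                u += dt * (g_data * (d - u) + lateral)
            return u

        d = np.concatenate([np.zeros(32), np.ones(32)]) + 0.1 * np.random.randn(64)
        u = relax_tiny_tanh(d)   # noise smoothed; saturation keeps the step from blurring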