142 research outputs found
Arbitrated address event representation digital image sensor
An 80×60 (1/8 VGA) address-event imager in 0.6 μm CMOS converts light intensity into a one-bit code (a spike). The read-out of each spike is initiated by the pixel. The dynamic range is 200 dB for a pixel and 120 dB for the array. The imager consumes 3.4 mW at a spike rate of 200 kHz and is capable of 8.3 k effective frames/s.
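The pixel-initiated spike readout described above can be sketched in software: at the receiver, intensity is recovered from each pixel's spike rate. The function below is an illustrative reconstruction, not the chip's actual protocol; all names are hypothetical.

```python
def intensity_from_events(timestamps):
    """Estimate relative light intensity from one pixel's spike times.

    In an address-event imager a pixel fires when its integrated
    photocurrent crosses a threshold, so intensity is proportional to
    the spike rate, i.e. the inverse of the mean inter-spike interval.
    Illustrative sketch only, not the sensor's real readout protocol.
    """
    if len(timestamps) < 2:
        return 0.0
    intervals = [t1 - t0 for t0, t1 in zip(timestamps, timestamps[1:])]
    return 1.0 / (sum(intervals) / len(intervals))

# A pixel firing twice as fast reads out as twice the intensity.
bright = intensity_from_events([0.0, 0.5, 1.0, 1.5])  # 2 spikes/s
dim = intensity_from_events([0.0, 1.0, 2.0, 3.0])     # 1 spike/s
```

Because each pixel initiates its own readout, bright pixels are refreshed often while dark pixels stay quiet, which is what gives the event stream its wide dynamic range.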
Hardware Implementation of a Visual-Motion Pixel Using Oriented Spatiotemporal Neural Filters
A pixel for measuring two-dimensional (2-D) visual motion with two one-dimensional (1-D) detectors has been implemented in very large scale integration. Based on the spatiotemporal feature-extraction model of Adelson and Bergen, the pixel is realized using a general-purpose analog neural computer and a silicon retina. Because the neural computer offers only sum-and-threshold neurons, the Adelson-Bergen model is modified: the quadratic nonlinearity is replaced with full-wave rectification, while the contrast normalization is replaced with edge detection and thresholding. Motion is extracted in two dimensions by using two 1-D detectors with spatial smoothing orthogonal to the direction of motion. Analysis shows that our pixel, although it has some limitations, has much lower hardware complexity than the full 2-D model. It also produces more accurate results and has a reduced aperture problem compared with the two-1-D model without smoothing. Real-time velocity is represented as a distribution of activity over the 18 X and 18 Y velocity-tuned neural filters.
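The key hardware simplification, replacing the Adelson-Bergen quadratic nonlinearity with full-wave rectification, can be illustrated with a toy comparison; the function and values below are hypothetical, not the pixel's actual circuit.

```python
def motion_energy(filter_outputs, rectify="fullwave"):
    """Combine oriented spatiotemporal filter outputs into a motion energy.

    The original Adelson-Bergen model squares each quadrature filter
    output; the hardware-friendly variant described above replaces that
    quadratic nonlinearity with full-wave rectification (absolute value),
    which sum-and-threshold neurons can realize. Illustrative only.
    """
    if rectify == "fullwave":
        return sum(abs(v) for v in filter_outputs)
    return sum(v * v for v in filter_outputs)

outs = [0.8, -0.6]                    # a quadrature pair's responses
e_hw = motion_energy(outs)            # rectified: |0.8| + |-0.6| = 1.4
e_ab = motion_energy(outs, "square")  # original:  0.64 + 0.36  = 1.0
```

Both nonlinearities grow monotonically with filter response magnitude, which is why the rectified version still discriminates direction of motion while being far cheaper in analog hardware.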
Communication channel analysis and real time compressed sensing for high density neural recording devices
Next-generation neural recording and Brain-Machine Interface (BMI) devices call for high-density or distributed systems with more than 1000 recording sites. As the recording site density grows, the device generates data on the scale of several hundred megabits per second (Mbps). Transmitting such large amounts of data induces significant power consumption and heat dissipation in the implanted electronics. Facing these constraints, efficient on-chip compression techniques become essential to reducing the power consumption of implanted systems. This paper analyzes the communication channel constraints for high-density neural recording devices, then quantifies the improvement in channel utilization achieved by efficient on-chip compression methods. Finally, it describes a Compressed Sensing (CS) based system that can reduce the data rate by more than 10× while using power on the order of a few hundred nW per recording channel.
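As a rough sketch of the CS idea named above, a random ±1 (Bernoulli) projection compresses an N-sample window into M < N measurements; multiply-free ±1 weights are the kind of encoder that is cheap on-chip, while sparse recovery runs off-implant and is omitted. Everything below is illustrative, not the paper's design.

```python
import random

def cs_encode(x, m, seed=0):
    """Compress an N-sample neural signal window to M < N measurements.

    Computes y = Phi @ x where Phi is an M x N random +/-1 (Bernoulli)
    sensing matrix, regenerated from a shared seed so the receiver can
    rebuild it. Hypothetical sketch, not the paper's actual encoder.
    """
    rng = random.Random(seed)
    n = len(x)
    phi = [[rng.choice((-1, 1)) for _ in range(n)] for _ in range(m)]
    return [sum(p * v for p, v in zip(row, x)) for row in phi]

window = [0.0] * 120
window[7] = 1.0              # a sparse spike within a 120-sample window
y = cs_encode(window, m=10)  # 120 samples -> 10 measurements: 12x fewer
```

The >10× rate reduction quoted in the abstract corresponds to choosing M about a tenth of N, which is feasible when the signal is sparse in some basis.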
Pix2HDR -- A pixel-wise acquisition and deep learning-based synthesis approach for high-speed HDR videos
Accurately capturing dynamic scenes with wide-ranging motion and light intensity is crucial for many vision applications. However, acquiring high-speed high dynamic range (HDR) video is challenging because the camera's frame rate restricts its dynamic range. Existing methods sacrifice speed to acquire multi-exposure frames; yet misaligned motion in these frames can still pose complications for HDR fusion algorithms, resulting in artifacts. Instead of frame-based exposures, we sample the videos using individual pixels at varying exposures and phase offsets. Implemented on a pixel-wise programmable image sensor, our sampling pattern simultaneously captures fast motion at a high dynamic range. We then transform pixel-wise outputs into an HDR video using end-to-end learned weights from deep neural networks, achieving high spatiotemporal resolution with minimized motion blurring. We demonstrate aliasing-free HDR video acquisition at 1000 FPS, resolving fast motion under low-light conditions and against bright backgrounds, both challenging conditions for conventional cameras. By combining the versatility of pixel-wise sampling patterns with the strength of deep neural networks at decoding complex scenes, our method greatly enhances the vision system's adaptability and performance in dynamic conditions.
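One way to picture a pixel-wise sampling pattern is a tiling of exposure durations and phase offsets across the array. The rule below is a guess at the general idea, not the paper's actual pattern; all names and parameters are hypothetical.

```python
def pixel_exposure_schedule(x, y, n_levels=4, base_frames=1):
    """Assign each pixel an exposure length and a phase offset.

    Tiling a few exposure durations with staggered start phases lets
    neighboring pixels capture fast motion (short exposures) and dark
    regions (long exposures) at the same time. Illustrative tiling rule,
    not the sensor's programmed pattern.
    """
    level = (x + y) % n_levels             # which exposure tier
    exposure = base_frames * (2 ** level)  # 1, 2, 4, 8 frame-times
    phase = x % exposure                   # stagger starts within a tier
    return exposure, phase

# Neighboring pixels cycle through short and long exposures:
# pixel (0, 0) -> (1, 0), pixel (3, 0) -> (8, 3), and so on.
short = pixel_exposure_schedule(0, 0)
long = pixel_exposure_schedule(3, 0)
```

Because every local neighborhood contains all exposure tiers, the deep network can fuse them into HDR output without the frame-level misalignment that multi-exposure methods suffer.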
CPU-less robotics: distributed control of biomorphs
Traditional robotics revolves around the microprocessor. All well-known demonstrations of sensory-guided motor control, such as jugglers and mobile robots, require at least one CPU. Recently, the availability of fast CPUs has made real-time sensory-motor control possible; however, problems with high power consumption and lack of autonomy remain. In fact, the best examples of real-time robotics are usually tethered or require large batteries. We present a new paradigm for robotics control that uses no explicit CPU. We use computational sensors that are directly interfaced with adaptive actuation units. The units perform motor control and have learning capabilities. This architecture distributes computation over the entire body of the robot, in every sensor and actuator. Clearly, this is similar to biological sensory-motor systems, which some researchers have tried to model in software, again using CPUs. We demonstrate this idea with an adaptive locomotion controller chip. The locomotory controller for walking, running, swimming and flying animals is based on a Central Pattern Generator (CPG). CPGs are modeled as systems of coupled nonlinear oscillators that control the muscles responsible for movement. Here we describe an adaptive CPG model, implemented in a custom VLSI chip, which is used to control an under-actuated and asymmetric robotic leg.
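The coupled-oscillator view of a CPG can be sketched with a minimal flexor/extensor pair driven toward anti-phase locking; this is a generic textbook abstraction of a CPG, not the chip's specific adaptive model.

```python
import math

def cpg_pair_step(p1, p2, omega=2 * math.pi, k=2.0, dt=0.01):
    """One Euler step of two mutually coupled phase oscillators.

    Each oscillator advances at intrinsic frequency omega and is pulled
    toward a pi phase lag behind the other, so the pair locks in
    anti-phase, the alternation underlying a stepping rhythm.
    Generic abstraction, not the VLSI chip's adaptive CPG.
    """
    d1 = omega + k * math.sin(p2 - p1 - math.pi)
    d2 = omega + k * math.sin(p1 - p2 - math.pi)
    return (p1 + d1 * dt) % (2 * math.pi), (p2 + d2 * dt) % (2 * math.pi)

p1, p2 = 0.0, 1.0            # start with an arbitrary phase difference
for _ in range(2000):
    p1, p2 = cpg_pair_step(p1, p2)
# after settling, (p2 - p1) mod 2*pi approaches pi (anti-phase)
```

Chaining such pairs with fixed phase lags yields the traveling-wave coordination seen in multi-limb gaits, which is the role the CPG chip plays for the robotic leg.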
Two Transistor Current Mode Active Pixel Sensor
A novel current-mode active pixel sensor for high-resolution imaging is presented. The pixel is composed of a photodiode and two transistors: a reset transistor and a transconductance-amplifier transistor. The switch transistor is moved outside the pixel, allowing for a lower pixel pitch and increased linearity of the output photocurrent. The increased linearity of the image sensor greatly reduces spatial variations across the image after correlated double sampling, and the column fixed-pattern noise is 0.35% of the saturated current. A discussion of the theoretical temporal noise limitations of this design is also presented.
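The correlated double sampling mentioned above can be stated in one line: the pixel's reset-level sample is subtracted from its signal sample, so fixed per-pixel and per-column offsets cancel. A minimal sketch of the technique, not the chip's circuit:

```python
def cds(reset_sample, signal_sample):
    """Correlated double sampling: subtract the reset-level sample from
    the signal sample taken through the same readout path, cancelling
    the fixed offset common to both. Illustrative values below.
    """
    return signal_sample - reset_sample

# A column offset of +0.02 appears in both samples and cancels:
offset = 0.02
net = cds(0.10 + offset, 0.75 + offset)  # photo-signal of 0.65 survives
```

What CDS cannot remove is gain mismatch, which is why the pixel's improved output linearity is what drives the low 0.35% column fixed-pattern noise figure.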
Image Sensor with General Spatial Processing in a 3D Integrated Circuit Technology
An architectural overview of an image sensor with general spatial processing capabilities on the focal plane is presented. The system has been fabricated as two separate tiers, implemented in a silicon-on-insulator technology with vertical interconnect capabilities. One tier is dedicated to imaging, where photosensitivity and pixel fill factor have been optimized. The subsequent layers contain noise suppression and digitally controlled analog processing elements, where general spatial filtering is computed. The digitally controlled processing unit allows generic receptive fields to be computed on readout. The image is convolved with four receptive fields in parallel. The chip provides parallel readout of the filtered results and the intensity image.
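The four-receptive-field readout amounts to weighting each pixel neighborhood by four programmable kernels at once; a plain software analogue (valid-mode correlation, with illustrative kernels) is:

```python
def convolve4(image, kernels):
    """Filter one image with four receptive-field kernels 'in parallel'.

    A valid-mode correlation: each output is the kernel-weighted sum of
    the pixel neighborhood under it. Software stand-in for the focal-plane
    processing described above, purely illustrative.
    """
    kh, kw = len(kernels[0]), len(kernels[0][0])
    h, w = len(image), len(image[0])
    outs = []
    for k in kernels:
        out = [[sum(k[i][j] * image[y + i][x + j]
                    for i in range(kh) for j in range(kw))
                for x in range(w - kw + 1)]
               for y in range(h - kh + 1)]
        outs.append(out)
    return outs

img = [[1, 2, 3], [4, 5, 6], [7, 8, 9]]
# four 2x2 receptive fields: blur, horizontal edge, vertical edge, identity
ks = [[[0.25, 0.25], [0.25, 0.25]],
      [[1, 1], [-1, -1]],
      [[1, -1], [1, -1]],
      [[1, 0], [0, 0]]]
maps = convolve4(img, ks)  # four filtered maps plus, on-chip, the raw image
```

Because the kernel weights are set digitally, the same readout hardware can realize any small receptive field, which is the "general spatial processing" the architecture claims.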