Block-Matching Optical Flow for Dynamic Vision Sensor: Algorithm and FPGA Implementation
Rapid and low power computation of optical flow (OF) is potentially useful in
robotics. The dynamic vision sensor (DVS) event camera produces quick and
sparse output, and has high dynamic range, but conventional OF algorithms are
frame-based and cannot be directly used with event-based cameras. Previous DVS
OF methods do not work well with dense textured input and are designed for
implementation in logic circuits. This paper proposes a new block-matching
based DVS OF algorithm which is inspired by motion estimation methods used for
MPEG video compression. The algorithm was implemented both in software and on
FPGA. For each event, it computes the motion direction as one of 9 directions.
The speed of the motion is set by the sample interval. Results show that the
Average Angular Error can be improved by 30% compared with previous methods. The OF can be calculated on the FPGA with a 50 MHz clock in 0.2 µs per event (11 clock cycles), 20 times faster than a Java software implementation running on a desktop PC. Sample data show that the method works on scenes dominated by edges, sparse features, and dense texture.
Comment: Published in ISCAS 201
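To illustrate the idea, the following minimal Java sketch matches a small block around each incoming event against the previous event slice at the 9 candidate shifts. The slice handling, block size, and Hamming-distance matching criterion are illustrative assumptions, not the paper's exact implementation.

// Minimal sketch of event-driven block matching between two binary event slices.
// Resolution, block radius, and matching cost are assumed values for illustration.
public class BlockMatchingFlowSketch {
    static final int W = 240, H = 180;   // assumed sensor resolution
    static final int BLOCK_R = 4;        // block radius -> 9x9 pixel blocks
    boolean[][] pastSlice = new boolean[W][H];    // events accumulated in the previous interval
    boolean[][] currentSlice = new boolean[W][H]; // events accumulated in the current interval

    /** Returns (dx, dy) in {-1,0,1}^2: one of 9 motion directions for the event at (x, y). */
    int[] flowForEvent(int x, int y) {
        currentSlice[x][y] = true;
        int bestDx = 0, bestDy = 0, bestDist = Integer.MAX_VALUE;
        for (int dy = -1; dy <= 1; dy++) {
            for (int dx = -1; dx <= 1; dx++) {
                int d = blockDistance(x, y, dx, dy);
                if (d < bestDist) { bestDist = d; bestDx = dx; bestDy = dy; }
            }
        }
        // The best-matching shift points from the current block back to its past location;
        // its negation is the motion direction. The speed is fixed by the sample interval.
        return new int[]{-bestDx, -bestDy};
    }

    /** Hamming distance between the block around (x, y) now and the shifted block in the past slice. */
    private int blockDistance(int x, int y, int dx, int dy) {
        int dist = 0;
        for (int j = -BLOCK_R; j <= BLOCK_R; j++) {
            for (int i = -BLOCK_R; i <= BLOCK_R; i++) {
                int cx = x + i, cy = y + j, px = x + i + dx, py = y + j + dy;
                if (cx < 0 || cy < 0 || cx >= W || cy >= H || px < 0 || py < 0 || px >= W || py >= H) continue;
                if (currentSlice[cx][cy] != pastSlice[px][py]) dist++;
            }
        }
        return dist;
    }

    /** Called once per sample interval: the current slice becomes the reference for matching. */
    void rotateSlices() {
        pastSlice = currentSlice;
        currentSlice = new boolean[W][H];
    }
}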
Neuromorphic Approach Sensitivity Cell Modeling and FPGA Implementation
Neuromorphic engineering takes inspiration from biology to
solve engineering problems using the organizing principles of biological
neural computation. This field has demonstrated success in sensor-based applications (vision and audition) as well as in cognition and actuators. This paper focuses on mimicking an interesting functionality of the retina that is computed by one type of Retinal Ganglion Cell (RGC): the early detection of approaching (expanding) dark objects. This paper presents the software and FPGA hardware-logic implementations of this approach sensitivity cell. It can be used in later cognition layers as
an attention mechanism. The input of this hardware-modeled cell comes from an asynchronous spiking Dynamic Vision Sensor, which leads to an end-to-end event-based processing system. The software model was developed in Java and achieves an average processing time of 370 ns per event on a NUC embedded computer. The output firing rate
for an approaching object depends on the cell parameters that represent the number of input events needed to reach the firing threshold. For the hardware implementation on a Spartan-6 FPGA, the processing time is reduced to 160 ns/event with the clock running at 50 MHz.
Ministerio de Economía y Competitividad TEC2016-77785-P; Unión Europea FP7-ICT-60095
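As a rough illustration of the event-driven firing mechanism described above, the sketch below integrates DVS events toward a firing threshold. The class name and the simple integrate-and-fire dynamics are assumptions; only the idea of a firing threshold parameterized by a required number of input events comes from the abstract.

// Minimal sketch (not the authors' code) of an approach-sensitivity-cell model driven by DVS events.
public class ApproachCellModel {
    private final float firingThreshold;  // net excitatory events needed to emit an output spike
    private float potential = 0f;          // integrated evidence for an expanding dark object

    public ApproachCellModel(float firingThreshold) {
        this.firingThreshold = firingThreshold;
    }

    /** Feed one DVS event; returns true when the cell emits an output spike.
     *  In the full retinal model, (x, y) would drive per-subunit accumulation;
     *  this sketch integrates a single lumped potential. */
    public boolean onEvent(int x, int y, boolean offPolarity) {
        // An approaching (expanding) dark object produces mostly OFF events along its
        // growing edge, so OFF events excite the cell and ON events weakly inhibit it.
        potential += offPolarity ? 1.0f : -0.5f;
        if (potential < 0f) potential = 0f;
        if (potential >= firingThreshold) {
            potential = 0f;   // reset after the spike
            return true;      // faster approach -> more events per interval -> higher firing rate
        }
        return false;
    }
}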
Real-time on-board obstacle avoidance for UAVs based on embedded stereo vision
In order to improve usability and safety, modern unmanned aerial vehicles
(UAVs) are equipped with sensors to monitor the environment, such as
laser-scanners and cameras. One important aspect in this monitoring process is
to detect obstacles in the flight path in order to avoid collisions. Since a
large number of consumer UAVs suffer from tight weight and power constraints,
our work focuses on obstacle avoidance based on a lightweight stereo camera
setup. We use disparity maps, which are computed from the camera images, to
locate obstacles and to automatically steer the UAV around them. For disparity
map computation we optimize the well-known semi-global matching (SGM) approach
for the deployment on an embedded FPGA. The disparity maps are then converted
into simpler representations, the so called U-/V-Maps, which are used for
obstacle detection. Obstacle avoidance is based on a reactive approach which
finds the shortest path around the obstacles as soon as they come within a critical distance of the UAV. One of the fundamental goals of our work was the reduction
of development costs by closing the gap between application development and
hardware optimization. Hence, we aimed at using high-level synthesis (HLS) for
porting our algorithms, which are written in C/C++, to the embedded FPGA. We
evaluated our implementation of the disparity estimation on the KITTI Stereo
2015 benchmark. The integrity of the overall real-time reactive obstacle avoidance algorithm has been evaluated using Hardware-in-the-Loop testing in conjunction with two flight simulators.
Comment: Accepted in the International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences
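The following brief Java sketch shows the U-map construction step mentioned above: a column-wise histogram of disparity values in which close, vertically extended obstacles appear as strong high-disparity bins. The thresholds and the obstacle test are illustrative assumptions, not the paper's tuned parameters.

// Sketch of U-map construction and a simple obstacle-column test from a disparity map.
public final class UMapSketch {
    /** uMap[d][u] = number of pixels in image column u having disparity d. */
    static int[][] buildUMap(int[][] disparity, int maxDisparity) {
        int height = disparity.length, width = disparity[0].length;
        int[][] uMap = new int[maxDisparity + 1][width];
        for (int v = 0; v < height; v++) {
            for (int u = 0; u < width; u++) {
                int d = disparity[v][u];
                if (d > 0 && d <= maxDisparity) uMap[d][u]++;  // ignore invalid (0) disparities
            }
        }
        return uMap;
    }

    /** Flags columns containing a close obstacle: many pixels sharing a large disparity. */
    static boolean[] obstacleColumns(int[][] uMap, int minDisparity, int minCount) {
        boolean[] blocked = new boolean[uMap[0].length];
        for (int d = minDisparity; d < uMap.length; d++) {
            for (int u = 0; u < uMap[d].length; u++) {
                if (uMap[d][u] >= minCount) blocked[u] = true;
            }
        }
        return blocked;
    }
}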
Bio-Inspired Stereo Vision Calibration for Dynamic Vision Sensors
Many advances have been made in the field of computer vision. Several recent research trends have focused on mimicking human vision by using a stereo vision system. In multi-camera systems, a calibration process is usually implemented to improve the accuracy of the results. However, these systems generate a large amount of data to be processed; a powerful computer is therefore required and, in many cases, the processing cannot be done in real time. Neuromorphic engineering attempts to create bio-inspired systems that mimic the information processing that takes place in the human brain. This information is encoded using pulses (or spikes), and the resulting systems are much simpler (in computational operations and resources), which allows them to perform similar tasks with much lower power consumption; these processes can therefore be implemented on specialized hardware with real-time processing. In this work, a bio-inspired stereo vision system is presented, and a calibration mechanism for this system is implemented and evaluated using several tests. The result is a novel calibration technique for a neuromorphic stereo vision system, implemented on specialized hardware (an FPGA, Field-Programmable Gate Array), which achieves reduced latencies in the hardware implementation for stand-alone systems and works in real time.
Ministerio de Economía y Competitividad TEC2016-77785-P; Ministerio de Economía y Competitividad TIN2016-80644-
A sub-mW IoT-endnode for always-on visual monitoring and smart triggering
This work presents a fully-programmable Internet of Things (IoT) visual
sensing node that targets sub-mW power consumption in always-on monitoring
scenarios. The system features a spatial-contrast binary
pixel imager with focal-plane processing. The sensor, when working at its
lowest power mode ( at 10 fps), provides as output the number of
changed pixels. Based on this information, a dedicated camera interface,
implemented on a low-power FPGA, wakes up an ultra-low-power parallel
processing unit to extract context-aware visual information. We evaluate the
smart sensor on three always-on visual triggering application scenarios.
Triggering accuracy comparable to RGB image sensors is achieved at nominal
lighting conditions, while consuming an average power between and
, depending on context activity. The digital sub-system is extremely
flexible, thanks to a fully-programmable digital signal processing engine, but
still achieves 19x lower power consumption compared to MCU-based cameras with
significantly lower on-board computing capabilities.
Comment: 11 pages, 9 figures, submitted to IEEE IoT Journal
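A simplified sketch of the wake-up logic described above is given below: the imager reports only the number of changed pixels per low-power frame, and the camera interface wakes the processing unit when that count indicates context activity. The threshold and hysteresis values are assumptions for illustration, not the paper's configuration.

// Sketch of the camera-interface trigger: wake the parallel processing unit on activity,
// return to low-power monitoring after a quiet period.
public class SmartTriggerSketch {
    private final int wakeThreshold;   // changed-pixel count that indicates context activity
    private final int idleFrames;      // consecutive quiet frames before going back to sleep
    private int quietCount = 0;
    private boolean processorAwake = false;

    public SmartTriggerSketch(int wakeThreshold, int idleFrames) {
        this.wakeThreshold = wakeThreshold;
        this.idleFrames = idleFrames;
    }

    /** Called once per low-power frame (e.g. at 10 fps) with the imager's changed-pixel count. */
    public boolean onFrame(int changedPixels) {
        if (changedPixels >= wakeThreshold) {
            processorAwake = true;     // wake the parallel processing unit to extract visual information
            quietCount = 0;
        } else if (processorAwake && ++quietCount >= idleFrames) {
            processorAwake = false;    // return to always-on sub-mW monitoring
            quietCount = 0;
        }
        return processorAwake;
    }
}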
Efficient DMA transfers management on embedded Linux PSoC for Deep-Learning gestures recognition: Using Dynamic Vision Sensor and NullHop one-layer CNN accelerator to play RoShamBo
This demonstration shows a Dynamic Vision Sensor able to capture visual motion at a speed equivalent to a high-speed camera (20k fps). The collected visual information is presented as a normalized histogram to a CNN hardware accelerator, called NullHop, that is able to process a pre-trained CNN to play RoShamBo against a human. The CNN designed for this purpose consists of 5 convolutional layers and a fully connected layer. The latency for processing one histogram is 8 ms. NullHop is deployed
on the FPGA fabric of a PSoC from Xilinx, the Zynq 7100, which
is based on a dual-core ARM computer and a Kintex-7 with 444K
logic cells, integrated in the same chip. The ARM computer runs Linux, and a dedicated C++ controller runs the whole demo. This controller runs in user space in order to extract the maximum throughput thanks to an efficient use of the AXI-Stream interface based on DMA transfers. The short delay needed to process one visual histogram allows us to average several consecutive classification outputs, which provides the best estimation of the symbol that the user presents to the visual sensor. This output is then mapped to present the winning symbol within the 60 ms latency that the brain considers acceptable before suspecting that there is a trick.
Ministerio de Economía y Competitividad TEC2016-77785-P
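As a rough sketch of the output-averaging step described above: with about 8 ms per classification, several consecutive CNN outputs fit inside the roughly 60 ms response budget, so the demo can report the most frequent symbol rather than a single noisy prediction. The window size and symbol encoding below are illustrative assumptions.

// Sketch of majority voting over a short window of consecutive CNN classifications.
import java.util.ArrayDeque;

public class SymbolVoter {
    private final int windowSize;                       // e.g. 7 results ~= 56 ms at 8 ms per result
    private final ArrayDeque<Integer> recent = new ArrayDeque<>();

    public SymbolVoter(int windowSize) { this.windowSize = windowSize; }

    /** Add one classification (0 = rock, 1 = paper, 2 = scissors) and return the current majority. */
    public int addAndVote(int symbol) {
        recent.addLast(symbol);
        if (recent.size() > windowSize) recent.removeFirst();
        int[] counts = new int[3];
        for (int s : recent) counts[s]++;
        int best = 0;
        for (int s = 1; s < counts.length; s++) if (counts[s] > counts[best]) best = s;
        return best;   // the demo then maps this estimate to the counter symbol it displays
    }
}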