A sub-mW IoT-endnode for always-on visual monitoring and smart triggering
This work presents a fully-programmable Internet of Things (IoT) visual
sensing node that targets sub-mW power consumption in always-on monitoring
scenarios. The system features a spatial-contrast binary
pixel imager with focal-plane processing. When operating in its lowest power
mode at 10 fps, the sensor outputs only the number of changed pixels. Based on
this information, a dedicated camera interface,
implemented on a low-power FPGA, wakes up an ultra-low-power parallel
processing unit to extract context-aware visual information. We evaluate the
smart sensor on three always-on visual triggering application scenarios.
Triggering accuracy comparable to RGB image sensors is achieved under nominal
lighting conditions, at an average power consumption that depends on context
activity. The digital sub-system is extremely
flexible, thanks to a fully-programmable digital signal processing engine, but
still achieves 19x lower power consumption compared to MCU-based cameras with
significantly lower on-board computing capabilities. Comment: 11 pages, 9 figures, submitted to IEEE IoT Journal
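The wake-up scheme described in this abstract can be illustrated with a minimal software sketch: the imager reports only a per-frame count of changed pixels, and the camera interface activates the processing unit when that count crosses a threshold. The function names, the threshold ratio, and the imager resolution below are illustrative assumptions, not details from the paper.

```python
# Hypothetical sketch of a changed-pixel wake-up trigger: the sensor
# reports only how many binary-contrast pixels changed per frame, and
# the processor is woken when the count exceeds a threshold fraction
# of the focal plane. Threshold and resolution are assumed values.

def should_wake(changed_pixels: int, total_pixels: int,
                threshold_ratio: float = 0.01) -> bool:
    """Wake the parallel processing unit when enough pixels changed."""
    return changed_pixels >= threshold_ratio * total_pixels

def monitor(changed_counts, total_pixels: int):
    """Yield the indices of frames on which the processor would wake."""
    for i, changed in enumerate(changed_counts):
        if should_wake(changed, total_pixels):
            yield i

# Example: an assumed 128x64 imager with two bursts of activity.
activity = [3, 5, 120, 4, 400, 2]
wake_frames = list(monitor(activity, total_pixels=128 * 64))
# -> [2, 4]: only the two high-activity frames trigger processing
```

This mirrors the context-aware power management idea: the digital sub-system stays asleep during the low-activity frames and pays processing energy only for the two events.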
Ultra-Low Power IoT Smart Visual Sensing Devices for Always-ON Applications
This work presents the design of a Smart Ultra-Low Power visual sensor architecture that couples together an ultra-low power event-based image sensor with a parallel and power-optimized digital architecture for data processing. By means of mixed-signal circuits, the imager generates a stream of address events after the extraction and binarization of spatial gradients.
When targeting monitoring applications, the sensing and processing energy costs can be reduced by two orders of magnitude thanks to the combination of mixed-signal imaging technology, event-based data compression, and event-driven computing approaches.
From a system-level point of view, a context-aware power management scheme is enabled by means of a power-optimized sensor peripheral block that requests the processor activation only when relevant information is detected within the focal plane of the imager. When targeting a smart visual node for triggering purposes, the event-driven approach brings a 10x power reduction with respect to other presented visual systems, while leading to comparable results in terms of detection accuracy. To further enhance the recognition capabilities of the smart camera system, this work introduces the concept of event-based binarized neural networks. By coupling together the theory of binarized neural networks and focal-plane processing, a 17.8% energy reduction is demonstrated on a real-world data classification task with a performance drop of 3% with respect to a baseline system featuring commercial visual sensors and a Binary Neural Network engine. Moreover, when coupling the BNN engine with the event-driven triggering detection flow, the average power consumption can be as low as the sleep power of 0.3 mW in case of infrequent events, which is 8x lower than a smart camera system featuring a commercial RGB imager.
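The binarized-neural-network arithmetic this abstract builds on can be sketched compactly: with weights and activations constrained to {-1, +1}, a dot product reduces to a bitwise XNOR followed by a population count. The packing scheme and helper names below are assumptions for illustration, not the paper's BNN engine.

```python
# Minimal sketch of XNOR-popcount arithmetic used by binarized neural
# networks (BNNs): for vectors over {-1, +1}, sum(a[i] * w[i]) equals
# 2 * (number of matching bits) - n. Bit packing here is an assumption.

def binarize(values):
    """Map real values to {-1, +1} by sign."""
    return [1 if v >= 0 else -1 for v in values]

def pack_bits(signs):
    """Pack a {-1, +1} vector into an integer word, +1 -> bit 1."""
    word = 0
    for s in signs:
        word = (word << 1) | (1 if s == 1 else 0)
    return word

def xnor_popcount_dot(a_word: int, w_word: int, n: int) -> int:
    """Dot product of two packed {-1, +1} vectors of length n."""
    matches = bin(~(a_word ^ w_word) & ((1 << n) - 1)).count("1")
    return 2 * matches - n

a = binarize([0.5, -1.2, 0.3, -0.7])   # -> [+1, -1, +1, -1]
w = binarize([0.9, 0.1, -0.4, -2.0])   # -> [+1, +1, -1, -1]
dot = xnor_popcount_dot(pack_bits(a), pack_bits(w), 4)
# Matches the naive product: (+1)(+1) + (-1)(+1) + (+1)(-1) + (-1)(-1) = 0
```

Replacing multiply-accumulate with XNOR-popcount is what makes BNN inference cheap enough to pair with an event-driven trigger on a sub-milliwatt budget.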
ColibriUAV: An Ultra-Fast, Energy-Efficient Neuromorphic Edge Processing UAV-Platform with Event-Based and Frame-Based Cameras
Interest in dynamic vision sensor (DVS)-powered unmanned aerial vehicles
(UAVs) is rising, especially due to the microsecond-level reaction time of the
bio-inspired event sensor, which increases robustness and reduces latency of
perception tasks compared to an RGB camera. This work presents ColibriUAV, a
UAV platform with both frame-based and event-based camera interfaces for
efficient perception and near-sensor processing. The proposed platform is
designed around Kraken, a novel low-power RISC-V System on Chip with two
hardware accelerators targeting spiking neural networks and deep ternary neural
networks. Kraken is capable of efficiently processing both event data from a DVS
camera and frame data from an RGB camera. A key feature of Kraken is its
integrated, dedicated interface with a DVS camera. This paper benchmarks the
end-to-end latency and power efficiency of the neuromorphic and event-based UAV
subsystem, demonstrating state-of-the-art event-data throughput of 7200 event
frames per second at a power consumption of 10.7 mW, which is over 6.6 times
faster and a hundred times less power-consuming than the widely used approach
of reading data through the USB interface. The overall
sensing and processing power consumption is below 50 mW, achieving latency in
the milliseconds range, making the platform suitable for low-latency autonomous
nano-drones as well.
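The "frames of events" throughput figure can be made concrete with a small software model: address events arriving from a DVS are aggregated into fixed-length time windows, each window forming one event frame. The event tuple layout (timestamp_us, x, y, polarity) and the window length are assumptions for illustration, not Kraken's actual interface format.

```python
# Software model of aggregating DVS address events into fixed-interval
# "frames of events". A ~139 us window corresponds to roughly 7200
# frames per second. Event format and window size are assumed, not
# taken from the Kraken interface specification.

def events_to_frames(events, window_us: int):
    """Group (timestamp_us, x, y, polarity) events into time windows."""
    frames = []
    current, window_end = [], None
    for t, x, y, p in sorted(events):
        if window_end is None:
            window_end = t + window_us
        while t >= window_end:          # emit (possibly empty) windows
            frames.append(current)
            current, window_end = [], window_end + window_us
        current.append((x, y, p))
    if current:
        frames.append(current)
    return frames

events = [(10, 1, 2, 1), (50, 3, 4, 0), (200, 5, 6, 1)]
frames = events_to_frames(events, window_us=139)
# -> two frames: the first two events share a window, the third starts a new one
```

A dedicated hardware interface performs this aggregation at line rate, which is why it avoids the latency and power overhead of shuttling raw events over USB.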
Memory and information processing in neuromorphic systems
A striking difference between brain-inspired neuromorphic processors and
current von Neumann processor architectures is the way in which memory and
processing are organized. As Information and Communication Technologies continue
to address the need for increased computational power through the increase of
cores within a digital processor, neuromorphic engineers and scientists can
complement this need by building processor architectures where memory is
distributed with the processing. In this paper we present a survey of
brain-inspired processor architectures that support models of cortical networks
and deep neural networks. These architectures range from serial clocked
implementations of multi-neuron systems to massively parallel asynchronous ones
and from purely digital systems to mixed analog/digital systems which implement
more biological-like models of neurons and synapses together with a suite of
adaptation and learning mechanisms analogous to the ones found in biological
nervous systems. We describe the advantages of the different approaches being
pursued and present the challenges that need to be addressed for building
artificial neural processing systems that can display the richness of behaviors
seen in biological systems. Comment: Submitted to Proceedings of the IEEE; review of recently
proposed neuromorphic computing platforms and systems
A Construction Kit for Efficient Low Power Neural Network Accelerator Designs
Implementing embedded neural network processing at the edge requires
efficient hardware acceleration that couples high computational performance
with low power consumption. Driven by the rapid evolution of network
architectures and their algorithmic features, accelerator designs are
constantly updated and improved. To evaluate and compare hardware design
choices, designers can refer to a myriad of accelerator implementations in the
literature. Surveys provide an overview of these works but are often limited to
system-level and benchmark-specific performance metrics, making it difficult to
quantitatively compare the individual effect of each utilized optimization
technique. This complicates the evaluation of optimizations for new accelerator
designs, slowing down research progress. This work provides a survey of
neural network accelerator optimization approaches that have been used in
recent works and reports their individual effects on edge processing
performance. It presents the list of optimizations and their quantitative
effects as a construction kit, allowing designers to assess the design choices
for each building block separately. Reported optimizations range from up to
10'000x memory savings to 33x energy reductions, providing chip designers an
overview of design choices for implementing efficient low power neural network
accelerators.
Neural Network Methods for Radiation Detectors and Imaging
Recent advances in image data processing through machine learning and
especially deep neural networks (DNNs) allow for new optimization and
performance-enhancement schemes for radiation detectors and imaging hardware
through data-endowed artificial intelligence. We give an overview of data
generation at photon sources, deep learning-based methods for image processing
tasks, and hardware solutions for deep learning acceleration. Most existing
deep learning approaches are trained offline, typically using large amounts of
computational resources. However, once trained, DNNs can achieve fast inference
speeds and can be deployed to edge devices. A new trend is edge computing with
less energy consumption (hundreds of watts or less) and real-time analysis
potential. While popularly used for edge computing, electronic-based hardware
accelerators ranging from general purpose processors such as central processing
units (CPUs) to application-specific integrated circuits (ASICs) are constantly
reaching performance limits in latency, energy consumption, and other physical
constraints. These limits give rise to next-generation analog neuromorphic
hardware platforms, such as optical neural networks (ONNs), for highly parallel,
low-latency, and low-energy computing to boost deep learning acceleration.
Biological Vision Inspired Systems in Biomedical Applications
This Master of Philosophy thesis presents two potential biomedical applications of an event-based camera, also known as a neuromorphic vision system (camera) or silicon retina vision sensor. Event-based cameras have drawn significant interest due to their advantages over traditional cameras, including low latency, high data throughput, high dynamic range, and low power consumption. Hence, research is actively seeking potential applications of event-based cameras.
Flow cytometry, a highly effective technology renowned for its rapid analysis of cells or particles suspended in a solution, has been extensively utilized across diverse disciplines. These include immunology, virology, molecular biology, cancer biology, and infectious disease monitoring. Conventional imaging flow cytometers generally suffer from motion blur, low dynamic range, and trade-offs between the frame rate (speed) and image resolution. In this thesis, we conducted a feasibility study with algorithmic results to propose an event-based high-throughput flow cytometer.
Navigation devices capable of guiding blind or vision-impaired people have remained a challenge over the past decade. Possible reasons include limited data throughput, undesirable user feedback, and power consumption requirements. Hence, we propose a proof-of-concept blind navigation system based on an event-based camera.