Evaluating Spiking Neural Network On Neuromorphic Platform For Human Activity Recognition
Energy efficiency and low latency are crucial requirements for designing
wearable AI-empowered human activity recognition systems, due to the hard
constraints of battery operations and closed-loop feedback. While neural
network models have been extensively compressed to match the stringent edge
requirements, spiking neural networks and event-based sensing are recently
emerging as promising solutions to further improve performance due to their
inherent energy efficiency and capacity to process spatiotemporal data in very
low latency. This work aims to evaluate the effectiveness of spiking neural
networks on neuromorphic processors in human activity recognition for wearable
applications. Workout recognition with wrist-worn wearable motion sensors is
used as a case study. A multi-threshold delta modulation approach is
utilized for encoding the input sensor data into spike trains to move the
pipeline into the event-based approach. The spike trains are then fed to a
spiking neural network with direct-event training, and the trained model is
deployed on the research neuromorphic platform from Intel, Loihi, to evaluate
energy and latency efficiency. Test results show that the spike-based workout
recognition system achieves accuracy (87.5\%) comparable to that of the
popular milliwatt RISC-V based multi-core processor GAP8 running a traditional
neural network (88.1\%), while achieving a two times better energy-delay
product (0.66 \si{\micro\joule\second} vs. 1.32 \si{\micro\joule\second}).
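The multi-threshold delta modulation encoding described above can be sketched as follows. This is an illustrative assumption of how such an encoder might look (threshold values, the UP/DOWN channel layout, and the function name are all hypothetical), not the paper's exact scheme:

```python
import numpy as np

def delta_modulation_encode(signal, thresholds):
    """Encode a 1-D sensor signal into spike trains via multi-threshold
    delta modulation: channel k fires whenever the signal has moved by
    more than thresholds[k] since the last spike on that channel, with
    separate UP/DOWN channels capturing the sign of the change.
    (Hypothetical sketch; not the paper's exact encoder.)"""
    n_thr = len(thresholds)
    # spikes[t, 2*k] = UP spike for threshold k, spikes[t, 2*k+1] = DOWN
    spikes = np.zeros((len(signal), 2 * n_thr), dtype=np.uint8)
    last = np.full(n_thr, signal[0], dtype=float)  # last trigger value per channel
    for t, x in enumerate(signal):
        for k, thr in enumerate(thresholds):
            delta = x - last[k]
            if delta >= thr:
                spikes[t, 2 * k] = 1      # signal rose past the threshold
                last[k] = x
            elif delta <= -thr:
                spikes[t, 2 * k + 1] = 1  # signal fell past the threshold
                last[k] = x
    return spikes

# Example: encode one period of a sine wave with two thresholds,
# yielding a (T, 4) binary spike array (2 thresholds x UP/DOWN).
spike_trains = delta_modulation_encode(
    np.sin(np.linspace(0, 2 * np.pi, 100)), thresholds=[0.1, 0.5])
```

Using several thresholds gives the network a coarse amplitude code on top of the timing code, which is what makes the event stream informative at very low rates.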
FastDeepIoT: Towards Understanding and Optimizing Neural Network Execution Time on Mobile and Embedded Devices
Deep neural networks show great potential as solutions to many sensing
application problems, but their excessive resource demand slows down execution
time, posing a serious impediment to deployment on low-end devices. To address
this challenge, recent literature focused on compressing neural network size to
improve performance. We show that changing neural network size does not
proportionally affect performance attributes of interest, such as execution
time. Rather, extreme run-time nonlinearities exist over the network
configuration space. Hence, we propose a novel framework, called FastDeepIoT,
that uncovers the non-linear relation between neural network structure and
execution time, then exploits that understanding to find network configurations
that significantly improve the trade-off between execution time and accuracy on
mobile and embedded devices. FastDeepIoT makes two key contributions. First,
FastDeepIoT automatically learns an accurate and highly interpretable execution
time model for deep neural networks on the target device. This is done without
prior knowledge of either the hardware specifications or the detailed
implementation of the used deep learning library. Second, FastDeepIoT informs a
compression algorithm how to minimize execution time on the profiled device
without impacting accuracy. We evaluate FastDeepIoT using three different
sensing-related tasks on two mobile devices: Nexus 5 and Galaxy Nexus.
FastDeepIoT further reduces neural network execution time and energy
consumption compared with the state-of-the-art compression algorithms.
Comment: Accepted by SenSys '1
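The first contribution, learning an interpretable execution-time model from on-device measurements, can be illustrated with a toy regression sketch. All names and the single MAC-count feature here are assumptions for illustration; FastDeepIoT's actual model is learned from profiling and is richer than a one-feature linear fit:

```python
import numpy as np

def fit_time_model(configs, measured_ms):
    """Fit a simple interpretable execution-time model: time ~ w . features.
    Each config is (in_channels, out_channels, kernel, spatial) for a conv
    layer; the feature is its multiply-accumulate (MAC) count plus a
    constant per-layer overhead. A toy stand-in for learning time from
    profiled measurements, not FastDeepIoT's actual model."""
    feats = np.array([[c_in * c_out * k * k * s * s, 1.0]
                      for (c_in, c_out, k, s) in configs])
    # Least-squares fit: per-MAC cost plus fixed per-layer overhead.
    w, *_ = np.linalg.lstsq(feats, np.asarray(measured_ms, dtype=float),
                            rcond=None)
    return w  # w[0] = ms per MAC, w[1] = constant overhead (ms)

def predict_time(w, config):
    c_in, c_out, k, s = config
    return w[0] * (c_in * c_out * k * k * s * s) + w[1]
```

The value of an interpretable model like this is that a compression algorithm can read off which layers dominate execution time and shrink those first, rather than shrinking parameters that barely affect latency.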
Rate-Accuracy Trade-Off In Video Classification With Deep Convolutional Neural Networks
Advanced video classification systems decode video frames to derive the
necessary texture and motion representations for ingestion and analysis by
spatio-temporal deep convolutional neural networks (CNNs). However, when
considering visual Internet-of-Things applications, surveillance systems and
semantic crawlers of large video repositories, the video capture and the
CNN-based semantic analysis parts do not tend to be co-located. This
necessitates the transport of compressed video over networks and incurs
significant overhead in bandwidth and energy consumption, thereby significantly
undermining the deployment potential of such systems. In this paper, we
investigate the trade-off between the encoding bitrate and the achievable
accuracy of CNN-based video classification models that directly ingest
AVC/H.264 and HEVC encoded videos. Instead of retaining entire compressed video
bitstreams and applying complex optical flow calculations prior to CNN
processing, we only retain motion vector and select texture information at
significantly-reduced bitrates and apply no additional processing prior to CNN
ingestion. Based on three CNN architectures and two action recognition
datasets, we achieve 11%-94% savings in bitrate with marginal effect on
classification accuracy. A model-based selection between multiple CNNs
increases these savings further, to the point where, if up to 7% loss of
accuracy can be tolerated, video classification can take place with as little
as 3 kbps for the transport of the required compressed video information to the
system implementing the CNN models.
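The model-based selection between multiple CNNs amounts to choosing the cheapest operating point whose accuracy stays within a tolerated loss of the best available model. A minimal sketch, assuming hypothetical candidate (bitrate, accuracy) points rather than the paper's measurements:

```python
def select_model(candidates, best_accuracy, max_loss):
    """Pick the lowest-bitrate model whose accuracy is within max_loss of
    the best available accuracy. Candidates are dicts with 'kbps' and
    'acc' keys; the numbers used below are illustrative assumptions."""
    ok = [c for c in candidates if best_accuracy - c["acc"] <= max_loss]
    return min(ok, key=lambda c: c["kbps"]) if ok else None

# Hypothetical rate-accuracy operating points for three CNN variants:
candidates = [
    {"name": "full-texture CNN",   "kbps": 200, "acc": 0.92},
    {"name": "reduced-texture CNN", "kbps": 30,  "acc": 0.90},
    {"name": "motion-vector CNN",   "kbps": 3,   "acc": 0.86},
]
# Tolerating up to ~7% accuracy loss selects the 3 kbps operating point.
choice = select_model(candidates, best_accuracy=0.92, max_loss=0.07)
```

Tightening `max_loss` moves the selection back up the rate-accuracy curve, which is exactly the trade-off the abstract quantifies.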