Compact recurrent neural networks for acoustic event detection on low-energy low-complexity platforms
Outdoor acoustic event detection is an exciting research field, but it is
challenged by the need for complex algorithms and deep learning techniques,
which typically require substantial computational, memory, and energy
resources. This challenge discourages IoT implementations, where efficient use
of resources is required. However, current embedded technologies and
microcontrollers have increased their capabilities without sacrificing energy
efficiency. This paper
addresses the application of sound event detection at the edge, by optimizing
deep learning techniques on resource-constrained embedded platforms for the
IoT. The contribution is two-fold: firstly, a two-stage student-teacher
approach is presented to make state-of-the-art neural networks for sound event
detection fit on current microcontrollers; secondly, we test our approach on an
ARM Cortex-M4, focusing in particular on issues related to 8-bit quantization.
Our embedded implementation achieves 68% recognition accuracy on
UrbanSound8K, not far from state-of-the-art performance, with an inference time
of 125 ms for each second of the audio stream and a power consumption of
5.5 mW, in just 34.3 kB of RAM.
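As a rough illustration of the student-teacher idea mentioned in this abstract, the sketch below shows a generic knowledge-distillation loss in PyTorch; the temperature, weighting, and function interface are illustrative assumptions, not the authors' actual two-stage setup or their 8-bit quantization pipeline.

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels,
                      temperature=4.0, alpha=0.5):
    """Generic knowledge-distillation loss (values are illustrative, not the paper's)."""
    # Soften both distributions so the student can mimic the teacher's output distribution.
    soft_teacher = F.softmax(teacher_logits / temperature, dim=-1)
    log_student = F.log_softmax(student_logits / temperature, dim=-1)
    kd_term = F.kl_div(log_student, soft_teacher,
                       reduction="batchmean") * temperature ** 2
    # Hard-label term keeps the student anchored to the ground truth.
    ce_term = F.cross_entropy(student_logits, labels)
    return alpha * kd_term + (1.0 - alpha) * ce_term
```

In a setup like the one described, a compact student trained with a loss of this kind would then be quantized to 8 bits with the target deployment toolchain before running on the Cortex-M4.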
Sound Event Detection with Binary Neural Networks on Tightly Power-Constrained IoT Devices
Sound event detection (SED) is a hot topic in consumer and smart city
applications. Existing approaches based on Deep Neural Networks are very
effective, but highly demanding in terms of memory, power, and throughput when
targeting ultra-low power always-on devices.
Latency, availability, cost, and privacy requirements are pushing recent IoT
systems to process data on the node, close to the sensor, with a very limited
energy supply and tight constraints on memory size and processing capability
that preclude running state-of-the-art DNNs.
In this paper, we explore the combination of extreme quantization to a
small-footprint binary neural network (BNN) with the highly energy-efficient,
RISC-V-based (8+1)-core GAP8 microcontroller. Starting from an existing CNN for
SED whose footprint (815 kB) exceeds the 512 kB of memory available on our
platform, we retrain the network using binary filters and activations to match
these memory constraints. (Fully) binary neural networks come with a natural
accuracy drop of 12-18% on the challenging ImageNet object recognition task
compared to their equivalent full-precision baselines. This BNN
reaches 77.9% accuracy, just 7% lower than the full-precision version, with
58 kB (7.2 times less) for the weights and 262 kB (2.4 times less) of memory in
total. With our BNN implementation, we reach a peak throughput of 4.6 GMAC/s
and 1.5 GMAC/s over the full network, including preprocessing with Mel bins,
which corresponds to an efficiency of 67.1 GMAC/s/W and 31.3 GMAC/s/W,
respectively. Compared to an ARM Cortex-M4 implementation, our system achieves
a 10.3 times faster execution time and 51.1 times higher energy efficiency.
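To make the extreme-quantization step more concrete, the following is a minimal sketch of a binarized convolution with a straight-through estimator, a generic BNN building block rather than the authors' GAP8 kernel; the class names and the STE clipping rule are standard choices assumed for illustration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class BinarizeSTE(torch.autograd.Function):
    """sign() in the forward pass, straight-through gradient in the backward pass."""
    @staticmethod
    def forward(ctx, x):
        ctx.save_for_backward(x)
        return torch.sign(x)

    @staticmethod
    def backward(ctx, grad_output):
        (x,) = ctx.saved_tensors
        # Pass gradients only where |x| <= 1 (standard STE clipping).
        return grad_output * (x.abs() <= 1).float()

class BinaryConv2d(nn.Conv2d):
    """Convolution whose weights and inputs are binarized to {-1, +1} at forward time."""
    def forward(self, x):
        w_bin = BinarizeSTE.apply(self.weight)
        x_bin = BinarizeSTE.apply(x)
        return F.conv2d(x_bin, w_bin, self.bias, self.stride,
                        self.padding, self.dilation, self.groups)
```

Replacing full-precision layers with blocks of this kind during retraining is what allows a 32-bit CNN to shrink toward the memory budget quoted above; the actual deployment kernels on GAP8 would pack the binary values into words and replace multiplications with XNOR/popcount operations.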
The model of an anomaly detector for HiLumi LHC magnets based on Recurrent Neural Networks and adaptive quantization
This paper examines the applicability of Recurrent Neural Network models for
detecting anomalous behavior of the CERN superconducting magnets. To conduct
the experiments, the authors designed and implemented an adaptive signal
quantization algorithm and a custom GRU-based detector, and developed a method
for selecting the detector parameters. Three
different datasets were used for testing the detector. Two artificially
generated datasets were used to assess the raw performance of the system
whereas a 231 MB dataset composed of signals acquired from HiLumi magnets
was intended for real-life experiments and model training. Several different
setups of the developed anomaly detection system were evaluated and compared
with a state-of-the-art OC-SVM reference model operating on the same data. The
OC-SVM model was equipped with a rich set of feature extractors accounting for
a range of input signal properties. The experiments determined that the
detector, along with its supporting design methodology, reaches an F1 score
equal or very close to 1 for almost all test sets. Due to the
profile of the data, the best_length setup of the detector turned out to
perform best among all five tested configuration schemes of the detection
system. The quantization parameters have the biggest impact on the overall
performance of the detector, with the best input/output grid values being 16
and 8, respectively. The proposed detection solution significantly
outperformed the OC-SVM-based detector in most cases, with much more stable
performance across all the datasets.
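As a loose sketch of how a GRU-based detector over quantized signals might be structured, the snippet below predicts the next quantized sample and scores anomalies by prediction error; the 16-level input grid and 8-level output grid mirror the values quoted above, while the hidden size, embedding, and scoring rule are assumptions rather than the detector described in the paper.

```python
import torch
import torch.nn as nn

class GRUAnomalyDetector(nn.Module):
    """Predicts the next quantized signal value; large errors suggest anomalies."""
    def __init__(self, n_in_levels=16, n_out_levels=8, hidden_size=64):
        super().__init__()
        self.embed = nn.Embedding(n_in_levels, hidden_size)  # quantized input grid
        self.gru = nn.GRU(hidden_size, hidden_size, batch_first=True)
        self.head = nn.Linear(hidden_size, n_out_levels)      # quantized output grid

    def forward(self, tokens):
        # tokens: (batch, time) integer indices on the input grid
        h, _ = self.gru(self.embed(tokens))
        return self.head(h)  # logits over the output grid at every time step

def anomaly_scores(model, tokens, targets):
    # Per-step cross-entropy: high values mark poorly predicted (anomalous) regions.
    logits = model(tokens)
    return nn.functional.cross_entropy(
        logits.transpose(1, 2), targets, reduction="none")
```

Thresholding these per-step scores (or aggregating them over a window) is one common way to turn a next-value predictor into an anomaly flag.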
Internet of Underwater Things and Big Marine Data Analytics -- A Comprehensive Survey
The Internet of Underwater Things (IoUT) is an emerging communication
ecosystem developed for connecting underwater objects in maritime and
underwater environments. The IoUT technology is intricately linked with
intelligent boats and ships, smart shores and oceans, automatic marine
transportations, positioning and navigation, underwater exploration, disaster
prediction and prevention, as well as with intelligent monitoring and security.
The IoUT has an influence at various scales, ranging from a small scientific
observatory to a mid-sized harbor, and onward to global oceanic trade. The
network architecture of IoUT is intrinsically heterogeneous and should be
sufficiently resilient to operate in harsh environments. This creates major
challenges in terms of underwater communications, whilst relying on limited
energy resources. Additionally, the volume, velocity, and variety of data
produced by sensors, hydrophones, and cameras in IoUT is enormous, giving rise
to the concept of Big Marine Data (BMD), which has its own processing
challenges. Hence, conventional data processing techniques will falter, and
bespoke Machine Learning (ML) solutions have to be employed for automatically
learning the specific BMD behavior and features facilitating knowledge
extraction and decision support. The motivation of this paper is to
comprehensively survey the IoUT, BMD, and their synthesis. It also aims to
explore the nexus of BMD with ML. We set out from underwater data collection
and then discuss the family of IoUT data communication techniques with an
emphasis on the state-of-the-art research challenges. We then review the suite
of ML solutions suitable for BMD handling and analytics. We treat the subject
deductively from an educational perspective, critically appraising the material
surveyed.
Deep Learning for Mobile Multimedia: A Survey
Deep Learning (DL) has become a crucial technology for multimedia computing. It offers a powerful instrument to automatically produce high-level abstractions of complex multimedia data, which can be exploited in a number of applications, including object detection and recognition, speech-to-text, media retrieval, multimodal data analysis, and so on. The availability of affordable large-scale parallel processing architectures, and the sharing of effective open-source code implementing the basic learning algorithms, have caused a rapid diffusion of DL methodologies, bringing a number of new technologies and applications that outperform, in most cases, traditional machine learning technologies. In recent years, the possibility of implementing DL technologies on mobile devices has attracted significant attention. Thanks to this technology, portable devices may become smart objects capable of learning and acting. The path toward these exciting future scenarios, however, entails a number of important research challenges. DL architectures and algorithms are poorly suited to the storage and computation resources of a mobile device. Therefore, there is a need for new generations of mobile processors and chipsets, small-footprint learning and inference algorithms, new models of collaborative and distributed processing, and a number of other fundamental building blocks. This survey reports the state of the art in this exciting research area, looking back at the evolution of neural networks and arriving at the most recent results in terms of methodologies, technologies, and applications for mobile environments.
Computational Imaging and Artificial Intelligence: The Next Revolution of Mobile Vision
Signal capture stands at the forefront of perceiving and understanding the
environment, and thus imaging plays a pivotal role in mobile vision. Recent
explosive progress in Artificial Intelligence (AI) has shown great potential
to develop advanced mobile platforms with new imaging devices. Traditional
imaging systems based on the "capturing images first and processing afterwards"
mechanism cannot meet this unprecedented demand. Differently, Computational
Imaging (CI) systems are designed to capture high-dimensional data in an
encoded manner to provide more information for mobile vision systems. Thanks
to AI, CI can now be used in real systems by integrating deep learning
algorithms into the mobile vision platform to achieve a closed loop of
intelligent acquisition, processing, and decision making, thus leading to the
next revolution of mobile vision. Starting from the history of mobile vision using
digital cameras, this work first introduces the advances of CI in diverse
applications and then conducts a comprehensive review of current research
topics combining CI and AI. Motivated by the fact that most existing studies
only loosely connect CI and AI (usually using AI to improve the performance of
CI, with only limited works connecting them deeply), in this work we propose
a framework to deeply integrate CI and AI, using the example of self-driving
vehicles with high-speed communication, edge computing, and traffic planning.
Finally, we provide an outlook on the future of CI plus AI by investigating new
materials, brain science, and new computing techniques to shed light on new
directions for mobile vision systems.
Tiny Deep Learning Architectures Enabling Sensor-Near Acoustic Data Processing and Defect Localization
The timely diagnosis of defects at their incipient stage of formation is crucial to extending the life-cycle of technical appliances. This is the case for mechanically induced stress, due either to long aging degradation processes (e.g., corrosion) or to in-operation forces (e.g., impact events), which may cause detrimental damage such as cracks, disbonding, or delaminations, most commonly followed by the release of acoustic energy. The localization of these sources can be successfully accomplished via acoustic emission (AE)-based inspection techniques through computation of the time of arrival (ToA), namely the time at which the mechanical wave released at the occurrence of the acoustic event arrives at the acquisition unit. However, accurate estimation of the ToA may be hampered by poor signal-to-noise ratios (SNRs). In these conditions, standard statistical methods typically fail. In this work, two alternative deep learning methods are proposed for ToA retrieval in AE signal processing, namely a dilated convolutional neural network (DilCNN) and a capsule neural network for ToA (CapsToA). These methods have the additional benefit of being portable to resource-constrained microprocessors. Their performance has been extensively studied on both synthetic and experimental data, focusing on the problem of ToA identification for the case of a metallic plate. Results show that the two methods can achieve localization results that are up to 70% more precise than those yielded by conventional strategies, even when the SNR is severely compromised (i.e., down to 2 dB). Moreover, DilCNN and CapsNet have been implemented in a tiny machine learning environment and then deployed on microcontroller units, showing a negligible loss of performance with respect to offline realizations.
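For readers who want a feel for the dilated-CNN approach, here is a minimal 1-D sketch that produces a per-sample ToA score over an AE waveform; the class name, channel count, depth, and scoring head are illustrative assumptions, not the DilCNN architecture described in the paper.

```python
import torch
import torch.nn as nn

class TinyDilCNN(nn.Module):
    """1-D CNN with exponentially growing dilation; outputs a per-sample ToA score."""
    def __init__(self, channels=16, n_layers=5):
        super().__init__()
        layers, in_ch = [], 1
        for i in range(n_layers):
            d = 2 ** i  # dilation 1, 2, 4, 8, 16 widens the receptive field cheaply
            layers += [nn.Conv1d(in_ch, channels, kernel_size=3,
                                 padding=d, dilation=d),
                       nn.ReLU()]
            in_ch = channels
        self.backbone = nn.Sequential(*layers)
        self.head = nn.Conv1d(channels, 1, kernel_size=1)

    def forward(self, wave):
        # wave: (batch, 1, n_samples) -> per-sample ToA scores (batch, n_samples)
        return self.head(self.backbone(wave)).squeeze(1)

# At inference time, one simple choice is to take the highest-scoring sample
# as the predicted ToA index:
# toa_index = TinyDilCNN()(waveform).argmax(dim=-1)
```

Dilation is what keeps such a model small enough for microcontroller deployment: the receptive field grows exponentially with depth while the parameter count grows only linearly.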