Deep Cellular Recurrent Neural Architecture for Efficient Multidimensional Time-Series Data Processing
Efficient processing of time series data is a fundamental yet challenging problem in pattern recognition. Although recent developments in machine learning and deep learning have enabled remarkable improvements in processing large scale datasets in many application domains, most models are designed to handle inputs that are static in time. Many real-world data, such as those in biomedical, surveillance and security, financial, manufacturing, and engineering applications, are rarely static in time and demand models able to recognize patterns in both space and time. Current machine learning (ML) and deep learning (DL) models adapted for time series processing tend to grow in complexity and size to accommodate the additional dimension of time. In particular, the biologically inspired learning models known as artificial neural networks, which have shown extraordinary success in pattern recognition, tend to grow prohibitively large and cumbersome in the presence of large scale multi-dimensional time series biomedical data such as EEG.
Consequently, this work aims to develop representative ML and DL models for robust and efficient large scale time series processing. First, we design a novel ML pipeline with efficient feature engineering to process a large scale multi-channel scalp EEG dataset for automated detection of epileptic seizures. Using a sophisticated yet computationally efficient time-frequency analysis technique known as the harmonic wavelet packet transform, together with an efficient self-similarity computation based on fractal dimension, we achieve state-of-the-art performance for automated seizure detection in EEG data. Subsequently, we investigate the development of a novel efficient deep recurrent learning model for large scale time series processing. For this, we first study the functionality and training of a biologically inspired neural network architecture known as the cellular simultaneous recurrent neural network (CSRN). We obtain a generalization of this network for multiple topological image processing tasks and investigate the learning efficacy of the complex cellular architecture using several state-of-the-art training methods. Finally, we develop a novel deep cellular recurrent neural network (DCRNN) architecture based on the biologically inspired distributed processing used in the CSRN for processing time series data. The proposed DCRNN leverages the cellular recurrent architecture to promote extensive weight sharing and efficient, individualized, synchronous processing of multi-source time series data. Experiments on a large scale multi-channel scalp EEG dataset and a machine fault detection dataset show that the proposed DCRNN offers state-of-the-art recognition performance while using substantially fewer trainable recurrent units.
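The efficiency argument above, that a single small recurrent cell with shared weights can process each channel of a multi-source time series individually, can be sketched briefly. The sketch below is a hedged illustration only: the class name, the use of a PyTorch GRU as the shared cell, and all hyperparameters are assumptions for demonstration, not the actual DCRNN design.

```python
import torch
import torch.nn as nn

class DCRNNSketch(nn.Module):
    """Illustrative sketch: one small recurrent cell is shared across all
    input channels (cells), so parameter count stays fixed as channels grow."""

    def __init__(self, n_channels: int, hidden_size: int = 16, n_classes: int = 2):
        super().__init__()
        # A single GRU processes every channel: weights are shared, so the
        # number of trainable recurrent units does not scale with n_channels.
        self.shared_cell = nn.GRU(input_size=1, hidden_size=hidden_size, batch_first=True)
        self.classifier = nn.Linear(n_channels * hidden_size, n_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, n_channels, time_steps), e.g. multi-channel scalp EEG
        summaries = []
        for c in range(x.shape[1]):
            channel = x[:, c, :].unsqueeze(-1)   # (batch, time, 1)
            _, h_n = self.shared_cell(channel)   # same weights for every channel
            summaries.append(h_n.squeeze(0))     # (batch, hidden)
        return self.classifier(torch.cat(summaries, dim=1))

# Example: 22-channel EEG windows, 256 samples each
model = DCRNNSketch(n_channels=22)
logits = model(torch.randn(8, 22, 256))          # (8, 2)
```

Because the recurrent weights are reused for every channel, adding channels enlarges only the final classifier, which is one plausible reading of how the DCRNN keeps its trainable recurrent unit count low.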
Simulation and implementation of novel deep learning hardware architectures for resource constrained devices
Corey Lammie designed mixed-signal memristive-complementary metal–oxide–semiconductor (CMOS) and field programmable gate array (FPGA) hardware architectures, which were used to reduce the power and resource requirements of Deep Learning (DL) systems, both during inference and training. Disruptive design methodologies, such as those explored in this thesis, can be used to facilitate the design of next-generation DL systems.
Hardware Implementation of Deep Network Accelerators Towards Healthcare and Biomedical Applications
With the advent of dedicated Deep Learning (DL) accelerators and neuromorphic
processors, new opportunities are emerging for applying deep and Spiking Neural
Network (SNN) algorithms to healthcare and biomedical applications at the edge.
This can facilitate the advancement of the medical Internet of Things (IoT)
systems and Point of Care (PoC) devices. In this paper, we provide a tutorial
describing how various technologies ranging from emerging memristive devices,
to established Field Programmable Gate Arrays (FPGAs), and mature Complementary
Metal Oxide Semiconductor (CMOS) technology can be used to develop efficient DL
accelerators to solve a wide variety of diagnostic, pattern recognition, and
signal processing problems in healthcare. Furthermore, we explore how spiking
neuromorphic processors can complement their DL counterparts for processing
biomedical signals. After providing the required background, we unify the
sparsely distributed research on neural network and neuromorphic hardware
implementations as applied to the healthcare domain. In addition, we benchmark
various hardware platforms by performing a biomedical electromyography (EMG)
signal processing task and drawing comparisons among them in terms of inference
delay and energy. Finally, we provide our analysis of the field and share a
perspective on the advantages, disadvantages, challenges, and opportunities
that different accelerators and neuromorphic processors introduce to healthcare
and biomedical domains. This paper can serve a broad audience, from
nanoelectronics researchers to biomedical and healthcare practitioners, by
clarifying the fundamental interplay between hardware, algorithms, and the
clinical adoption of these tools, as we shed light on the future of deep
networks and spiking neuromorphic processing systems as driving forces for
biomedical circuits and systems.
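The EMG benchmark described above reduces, at its core, to timing repeated forward passes of a trained model on each platform. A minimal sketch of such a timing harness follows, assuming a PyTorch model on a general-purpose processor; the placeholder model, window shape, and function name are illustrative, and real cross-platform comparisons would also need power instrumentation for the energy measurements.

```python
import time
import torch
import torch.nn as nn

def measure_inference_delay(model: nn.Module, x: torch.Tensor, n_runs: int = 100) -> float:
    """Return mean per-inference latency in milliseconds."""
    model.eval()
    with torch.no_grad():
        for _ in range(10):                      # warm-up runs so caches settle
            model(x)
        start = time.perf_counter()
        for _ in range(n_runs):
            model(x)
        elapsed = time.perf_counter() - start
    return 1000.0 * elapsed / n_runs

# Placeholder stand-in for an EMG classifier: 8 electrode channels,
# 200-sample windows, 5 gesture classes.
emg_model = nn.Sequential(nn.Flatten(), nn.Linear(8 * 200, 64), nn.ReLU(), nn.Linear(64, 5))
window = torch.randn(1, 8, 200)
print(f"mean inference delay: {measure_inference_delay(emg_model, window):.3f} ms")
```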
Internet of Underwater Things and Big Marine Data Analytics -- A Comprehensive Survey
The Internet of Underwater Things (IoUT) is an emerging communication
ecosystem developed for connecting underwater objects in maritime and
underwater environments. The IoUT technology is intricately linked with
intelligent boats and ships, smart shores and oceans, automatic marine
transportations, positioning and navigation, underwater exploration, disaster
prediction and prevention, as well as with intelligent monitoring and security.
The IoUT has an influence at various scales, ranging from a small scientific
observatory, to a mid-sized harbor, to global oceanic trade. The
network architecture of IoUT is intrinsically heterogeneous and should be
sufficiently resilient to operate in harsh environments. This creates major
challenges in terms of underwater communications, whilst relying on limited
energy resources. Additionally, the volume, velocity, and variety of data
produced by sensors, hydrophones, and cameras in IoUT is enormous, giving rise
to the concept of Big Marine Data (BMD), which has its own processing
challenges. Hence, conventional data processing techniques will falter, and
bespoke Machine Learning (ML) solutions have to be employed for automatically
learning the specific BMD behavior and features facilitating knowledge
extraction and decision support. The motivation of this paper is to
comprehensively survey the IoUT, BMD, and their synthesis, and to explore the
nexus of BMD with ML. We set out from underwater data collection
and then discuss the family of IoUT data communication techniques with an
emphasis on the state-of-the-art research challenges. We then review the suite
of ML solutions suitable for BMD handling and analytics. We treat the subject
deductively from an educational perspective, critically appraising the material
surveyed.
FastDeepIoT: Towards Understanding and Optimizing Neural Network Execution Time on Mobile and Embedded Devices
Deep neural networks show great potential as solutions to many sensing
application problems, but their excessive resource demand slows down execution
time, posing a serious impediment to deployment on low-end devices. To address
this challenge, recent literature has focused on compressing neural network size to
improve performance. We show that changing neural network size does not
proportionally affect performance attributes of interest, such as execution
time. Rather, extreme run-time nonlinearities exist over the network
configuration space. Hence, we propose a novel framework, called FastDeepIoT,
that uncovers the non-linear relation between neural network structure and
execution time, then exploits that understanding to find network configurations
that significantly improve the trade-off between execution time and accuracy on
mobile and embedded devices. FastDeepIoT makes two key contributions. First,
FastDeepIoT automatically learns an accurate and highly interpretable execution
time model for deep neural networks on the target device. This is done without
prior knowledge of either the hardware specifications or the detailed
implementation of the used deep learning library. Second, FastDeepIoT informs a
compression algorithm how to minimize execution time on the profiled device
without impacting accuracy. We evaluate FastDeepIoT using three different
sensing-related tasks on two mobile devices: Nexus 5 and Galaxy Nexus.
FastDeepIoT further reduces neural network execution time and energy
consumption compared with the state-of-the-art compression algorithms.
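As a rough illustration of the first contribution, an interpretable execution-time model can be fit by regressing measured latencies against simple layer-level features. The sketch below uses plain least squares over synthetic placeholder data; the feature choices and numbers are assumptions, and the actual FastDeepIoT model is learned from real on-device profiling.

```python
import numpy as np

# Each row: features of one profiled network configuration on the target
# device, e.g. [total FLOPs (1e9), total output activations (1e6), depth].
# Values below are synthetic placeholders, not real measurements.
X = np.array([
    [0.5, 1.2,  8],
    [1.1, 2.0, 12],
    [2.3, 3.5, 16],
    [4.0, 5.1, 20],
    [6.2, 7.8, 24],
])
y = np.array([12.0, 21.5, 40.3, 66.1, 98.7])   # measured latency (ms)

# Interpretable linear model: latency ~ w . features + b.
A = np.hstack([X, np.ones((X.shape[0], 1))])   # append a bias column
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
w, b = coef[:-1], coef[-1]
print("per-feature costs:", w, "fixed overhead (ms):", b)

# Predict the latency of a candidate compressed configuration before deploying it.
candidate = np.array([1.8, 2.9, 14])
print("predicted latency (ms):", candidate @ w + b)
```

A compression algorithm guided by such a model can then prefer configurations whose predicted latency actually drops, rather than assuming latency shrinks proportionally with network size.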
Towards Efficient Communications in Federated Learning: A Contemporary Survey
In the traditional distributed machine learning scenario, the user's private
data is transmitted between nodes and a central server, which creates
significant privacy risks. To balance data privacy against the joint training
of models, federated learning (FL) has been proposed as a special form of
distributed machine learning with a privacy protection mechanism, one that
enables multi-party collaborative computing without revealing the original
data. However, in practice, FL faces many challenging communication problems.
This review aims to clarify the relationship between these communication
problems and to systematically analyze the research progress of FL
communication work from three perspectives: communication efficiency,
communication environment, and communication resource allocation. First, we
summarize the current challenges in FL communications. Second, we compile
articles related to FL communications and describe the development trend of
the whole field, guided by the logical relationships among them. Finally, we
point out future research directions for communications in FL.
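The privacy mechanism at the heart of FL, exchanging model updates rather than raw data, and the per-round communication cost that this survey's techniques target are both visible in a minimal federated averaging sketch. Everything below (the local update rule, synthetic client data, round count) is an illustrative assumption, not a method from the survey.

```python
import numpy as np

def local_update(weights: np.ndarray, data: np.ndarray, labels: np.ndarray,
                 lr: float = 0.1) -> np.ndarray:
    """One step of least-squares gradient descent on a client's private data.
    Only the resulting weights leave the device, never `data` itself."""
    grad = data.T @ (data @ weights - labels) / len(labels)
    return weights - lr * grad

rng = np.random.default_rng(0)
n_clients, n_features = 5, 3
global_w = np.zeros(n_features)
clients = [(rng.normal(size=(20, n_features)), rng.normal(size=20))
           for _ in range(n_clients)]

for round_ in range(10):
    # Each round costs one model download and one upload per client; this is
    # the communication that efficiency techniques in the survey try to shrink.
    local_ws = [local_update(global_w.copy(), X, y) for X, y in clients]
    global_w = np.mean(local_ws, axis=0)       # server aggregates (FedAvg)

print("global model after 10 rounds:", global_w)
```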