Deep Learning-Based Dynamic Watermarking for Secure Signal Authentication in the Internet of Things
Securing the Internet of Things (IoT) is a necessary milestone toward
expediting the deployment of its applications and services. In particular, the
functionality of the IoT devices is extremely dependent on the reliability of
their message transmission. Cyber attacks such as data injection,
eavesdropping, and man-in-the-middle threats can lead to security challenges.
Securing IoT devices against such attacks requires accounting for their
stringent computational power and need for low-latency operations. In this
paper, a novel deep learning method is proposed for dynamic watermarking of IoT
signals to detect cyber attacks. The proposed learning framework, based on a
long short-term memory (LSTM) structure, enables the IoT devices to extract a
set of stochastic features from their generated signal and dynamically
watermark these features into the signal. This method enables the IoT's cloud
center, which collects signals from the IoT devices, to effectively
authenticate the reliability of the signals. Furthermore, the proposed method
prevents complicated attack scenarios such as eavesdropping in which the cyber
attacker collects the data from the IoT devices and aims to break the
watermarking algorithm. Simulation results show that, with an attack detection
delay of under 1 second, the messages can be transmitted from IoT devices with
almost 100% reliability.
Comment: 6 pages, 9 figures
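The device-side embedding and cloud-side authentication loop described above can be sketched end to end. This is a minimal illustration, not the paper's method: a few signal statistics stand in for the LSTM-extracted stochastic features, quantization-index modulation stands in for the learned embedding, and all function names and parameters are hypothetical.

```python
import numpy as np

def extract_features(signal, n_bits=8):
    # Stand-in for the paper's LSTM feature extractor (assumption): derive a
    # short bit string from stochastic statistics of the payload samples
    # (the samples that the embedding step below leaves untouched).
    payload = signal[n_bits:]
    stats = np.array([payload.mean(), payload.std(),
                      np.abs(payload).max(), np.median(payload)])
    q = np.floor(stats * 1000).astype(np.int64)
    return np.concatenate([(q >> k) & 1 for k in range(2)]).astype(int)

def embed_watermark(signal, bits, step=0.01):
    # Hide one bit per leading sample via quantization-index modulation:
    # even multiples of `step` encode 0, odd multiples encode 1.
    s = signal.astype(float).copy()
    idx = np.arange(len(bits))
    s[idx] = 2 * step * np.round(s[idx] / (2 * step)) + step * bits
    return s

def authenticate(received, n_bits=8, step=0.01):
    # Cloud center: recover the embedded bits and check them against the
    # features recomputed from the unmodified payload samples.
    recovered = np.round(received[:n_bits] / step).astype(int) % 2
    return np.array_equal(recovered, extract_features(received, n_bits))
```

A tampered message fails authentication because either the recovered bits or the recomputed features no longer match; the paper's learned, dynamic version additionally resists an eavesdropper who tries to learn the embedding rule.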
Neonatal Seizure Detection using Convolutional Neural Networks
This study presents a novel end-to-end architecture that learns hierarchical
representations from raw EEG data using fully convolutional deep neural
networks for the task of neonatal seizure detection. The deep neural network
acts as both feature extractor and classifier, allowing for end-to-end
optimization of the seizure detector. The designed system is evaluated on a
large dataset of continuous unedited multi-channel neonatal EEG totaling 835
hours and comprising 1389 seizures. The proposed deep architecture, with
sample-level filters, achieves an accuracy that is comparable to the
state-of-the-art SVM-based neonatal seizure detector, which operates on a set
of carefully designed hand-crafted features. The fully convolutional
architecture allows for the localization of EEG waveforms and patterns that
result in high seizure probabilities for further clinical examination.
Comment: IEEE International Workshop on Machine Learning for Signal Processing
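As a rough illustration of the fully convolutional design, the sketch below runs sample-level 1-D convolutions over a raw multi-channel EEG window and pools to a single seizure probability. The weights are random placeholders (in the paper they are learned end to end), and the filter counts and kernel sizes are arbitrary assumptions, not the published architecture.

```python
import numpy as np

def conv1d(x, w, stride=1):
    # x: (channels_in, length), w: (channels_out, channels_in, kernel)
    c_out, c_in, k = w.shape
    n_out = (x.shape[1] - k) // stride + 1
    out = np.empty((c_out, n_out))
    for i in range(n_out):
        seg = x[:, i * stride:i * stride + k]
        out[:, i] = np.tensordot(w, seg, axes=([1, 2], [0, 1]))
    return out

def relu(x):
    return np.maximum(x, 0.0)

def seizure_probability(eeg, weights):
    # Fully convolutional: sample-level filters act as both feature
    # extractor and classifier; global average pooling plus a sigmoid
    # yields one probability per EEG window.
    h = relu(conv1d(eeg, weights[0], stride=2))
    h = relu(conv1d(h, weights[1], stride=2))
    logit = conv1d(h, weights[2]).mean()  # global average pooling
    return 1.0 / (1.0 + np.exp(-logit))
```

Because every layer is convolutional, the same network can slide over long recordings, which is what enables localizing the waveforms that drive high seizure probabilities.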
Deep Cytometry: Deep learning with Real-time Inference in Cell Sorting and Flow Cytometry
Deep learning has achieved spectacular performance in image and speech
recognition and synthesis. It outperforms other machine learning algorithms in
problems where large amounts of data are available. In the area of measurement
technology, instruments based on the photonic time stretch have established
record real-time measurement throughput in spectroscopy, optical coherence
tomography, and imaging flow cytometry. These extreme-throughput instruments
generate approximately 1 Tbit/s of continuous measurement data and have led to
the discovery of rare phenomena in nonlinear and complex systems as well as new
types of biomedical instruments. Owing to the abundance of data they generate,
time-stretch instruments are a natural fit for deep learning classification.
Previously, we showed that high-throughput label-free cell classification
with high accuracy can be achieved through a combination of time-stretch
microscopy, image processing, and feature extraction, followed by deep learning
for finding cancer cells in the blood. Such a technology holds promise for
early detection of primary cancer or metastasis. Here we describe a new deep
learning pipeline, which entirely avoids the slow and computationally costly
signal processing and feature extraction steps by a convolutional neural
network that directly operates on the measured signals. The improvement in
computational efficiency enables low-latency inference and makes this pipeline
suitable for cell sorting via deep learning. Our neural network takes only
a few milliseconds to classify the cells, fast enough to provide a decision to
a cell sorter for real-time separation of individual target cells. We
demonstrate the applicability of our new method in the classification of OT-II
white blood cells and SW-480 epithelial cancer cells with more than 95%
accuracy in a label-free fashion.
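The real-time constraint described above can be sketched as a deadline-checked decision loop: a classifier operating directly on the raw measured waveform must return its label before the cell reaches the sorting junction. The linear classifier below is a placeholder for the paper's CNN, and the 3 ms deadline is an illustrative assumption, not a measured number.

```python
import time
import numpy as np

def classify_waveform(signal, w, b):
    # Placeholder for the paper's CNN (assumption): a linear classifier
    # operating directly on the raw measured waveform, with no separate
    # image-processing or feature-extraction stage.
    return int(signal @ w + b > 0)

def sorting_decision(signal, w, b, deadline_ms=3.0):
    # The sorter can only act on the label if it arrives before the cell
    # reaches the sorting junction; the deadline value is illustrative.
    t0 = time.perf_counter()
    label = classify_waveform(signal, w, b)
    elapsed_ms = (time.perf_counter() - t0) * 1e3
    return label, elapsed_ms <= deadline_ms
```

Skipping the signal-processing and feature-extraction stages is precisely what moves the pipeline from offline classification into the low-latency regime a sorter needs.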
Investigation of Machine Learning Approaches for Traumatic Brain Injury Classification via EEG Assessment in Mice.
Due to the difficulty of quantitative assessment of traumatic brain injury (TBI) and its increasing relevance today, robust detection of TBI has become more significant than ever. In this work, we investigate several machine learning approaches and assess their performance in classifying electroencephalogram (EEG) data of TBI in a mouse model. Algorithms such as decision trees (DT), random forest (RF), neural network (NN), support vector machine (SVM), K-nearest neighbors (KNN), and convolutional neural network (CNN) were analyzed for their ability to classify mild TBI (mTBI) data against the control group in wake stages for different epoch lengths. Average power in different frequency sub-bands and the alpha:theta power ratio in EEG were used as input features for the machine learning approaches. Results in this mouse model were promising, suggesting that similar approaches may be applicable to detecting TBI in humans in practical scenarios.
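The feature set named above (average power per frequency sub-band plus the alpha:theta power ratio) is straightforward to compute from an EEG epoch. The sketch below uses a plain FFT periodogram; the study may well use a different PSD estimate, and the band edges here are common EEG conventions rather than values taken from the paper.

```python
import numpy as np

# Conventional EEG sub-bands in Hz (assumption: not taken from the paper).
BANDS = {"delta": (1, 4), "theta": (4, 8), "alpha": (8, 13), "beta": (13, 30)}

def band_powers(epoch, fs):
    # Average power per sub-band from a periodogram PSD estimate, plus the
    # alpha:theta power ratio, forming the classifier's input features.
    psd = np.abs(np.fft.rfft(epoch)) ** 2 / len(epoch)
    freqs = np.fft.rfftfreq(len(epoch), 1.0 / fs)
    feats = {name: psd[(freqs >= lo) & (freqs < hi)].mean()
             for name, (lo, hi) in BANDS.items()}
    feats["alpha_theta_ratio"] = feats["alpha"] / feats["theta"]
    return feats
```

These per-epoch feature vectors would then be fed to any of the classifiers listed in the abstract (DT, RF, NN, SVM, KNN); the CNN variant instead consumes the signal more directly.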
Ambient Sound Helps: Audiovisual Crowd Counting in Extreme Conditions
Visual crowd counting has been recently studied as a way to enable people
counting in crowd scenes from images. Albeit successful, vision-based crowd
counting approaches could fail to capture informative features in extreme
conditions, e.g., imaging at night and occlusion. In this work, we introduce a
novel task of audiovisual crowd counting, in which visual and auditory
information are integrated for counting purposes. We collect a large-scale
benchmark, named auDiovISual Crowd cOunting (DISCO) dataset, consisting of
1,935 images and the corresponding audio clips, and 170,270 annotated
instances. In order to fuse the two modalities, we make use of a linear
feature-wise fusion module that carries out an affine transformation on visual
and auditory features. Finally, we conduct extensive experiments using the
proposed dataset and approach. Experimental results show that introducing
auditory information can benefit crowd counting under different illumination,
noise, and occlusion conditions. The dataset and code have been made available.
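The linear feature-wise fusion module can be sketched as an audio-conditioned affine (FiLM-style) transformation of the visual feature map. The single-linear-layer parameterization of the scale and shift below is an assumption for illustration; the paper's exact module may differ.

```python
import numpy as np

def feature_wise_affine(visual, gamma, beta):
    # Per-channel affine transformation of a (C, H, W) visual feature map.
    return gamma[:, None, None] * visual + beta[:, None, None]

def fuse(visual, audio, w_gamma, w_beta):
    # The scale and shift are computed from the audio embedding
    # (assumption: one linear layer each, shapes (C, D) @ (D,) -> (C,)).
    return feature_wise_affine(visual, w_gamma @ audio, w_beta @ audio)
```

Because the audio only rescales and shifts visual channels, the module degrades gracefully: with gamma near one and beta near zero it reduces to the purely visual counter, while in dark or occluded scenes the audio-derived modulation can carry more of the signal.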