4,729 research outputs found
An Experimental Study of Network Coded REST HTTP in Dynamic IoT Systems
REST HTTP is the communication protocol of choice for software developers
today. In IoT systems with unreliable connectivity, however, a stateless
protocol like REST HTTP needs to send a request message multiple times, and it
only stops the retransmissions when an acknowledgement arrives at the sender.
In our previous work, we studied the use of random linear network coding
(RLNC) with the REST HTTP protocol to reduce the number of unnecessary
retransmissions. In this paper, we experimentally validate that study and
analyze REST HTTP with and without RLNC in a simple testbed in dynamic IoT
systems. The measurements show notable improvements in bandwidth utilization,
reducing both retransmissions and delay, when using network-coded REST HTTP.
Comment: 7 pages, 5 figures, accepted at IEEE International Conference on
Communications (ICC), Dublin, Ireland, 202
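As a rough illustration of the coding idea (not the authors' implementation), random linear network coding sends random combinations of a batch of packets, so the receiver can recover the whole batch from any sufficiently many linearly independent coded packets, without asking for specific retransmissions. A minimal sketch over GF(2), i.e., random XOR combinations of equal-length packets:

```python
import random

def rlnc_encode(packets, num_coded, seed=0):
    """Produce coded packets as random XOR combinations (RLNC over GF(2)).
    Assumes all packets have equal length."""
    rng = random.Random(seed)
    n = len(packets)
    coded = []
    for _ in range(num_coded):
        coeffs = [rng.randint(0, 1) for _ in range(n)]
        if not any(coeffs):                      # skip the useless all-zero combination
            coeffs[rng.randrange(n)] = 1
        payload = bytes(len(packets[0]))
        for c, p in zip(coeffs, packets):
            if c:
                payload = bytes(a ^ b for a, b in zip(payload, p))
        coded.append((coeffs, payload))
    return coded

def rlnc_decode(coded, n):
    """Gaussian elimination over GF(2); returns the n original packets,
    or None if the received combinations do not have full rank."""
    rows = [(list(c), bytearray(p)) for c, p in coded]
    pivots = []
    for col in range(n):
        pivot = next((r for r in range(len(pivots), len(rows))
                      if rows[r][0][col]), None)
        if pivot is None:
            return None
        rows[len(pivots)], rows[pivot] = rows[pivot], rows[len(pivots)]
        pr = rows[len(pivots)]
        for r in range(len(rows)):               # reduce to identity (RREF)
            if r != len(pivots) and rows[r][0][col]:
                rows[r] = ([a ^ b for a, b in zip(rows[r][0], pr[0])],
                           bytearray(a ^ b for a, b in zip(rows[r][1], pr[1])))
        pivots.append(col)
    return [bytes(rows[i][1]) for i in range(n)]
```

Because any full-rank set of combinations suffices, a lost coded packet is simply replaced by the next one, which is the property that cuts down per-request retransmissions.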
Detection for 5G-NOMA: An Online Adaptive Machine Learning Approach
Non-orthogonal multiple access (NOMA) has emerged as a promising radio access
technique for enabling the performance enhancements promised by the
fifth-generation (5G) networks in terms of connectivity, low latency, and high
spectrum efficiency. In the NOMA uplink, successive interference cancellation
(SIC) based detection with device clustering has been suggested. In the case of
multiple receive antennas, SIC can be combined with the minimum mean-squared
error (MMSE) beamforming. However, there exists a tradeoff between the NOMA
cluster size and the incurred SIC error. Larger clusters lead to larger errors
but they are desirable from the spectrum efficiency and connectivity point of
view. We propose a novel online learning-based detection scheme for the NOMA uplink.
In particular, we design an online adaptive filter in the sum space of linear
and Gaussian reproducing kernel Hilbert spaces (RKHSs). Such a sum space design
is robust against variations of a dynamic wireless network that can deteriorate
the performance of a purely nonlinear adaptive filter. We demonstrate by
simulations that the proposed method outperforms the MMSE-SIC based detection
for large cluster sizes.
Comment: Accepted at ICC 201
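A toy sketch of the sum-space idea (illustrative only, not the paper's detector): the filter's prediction adds a linear part and a Gaussian-kernel part, and both are updated by LMS-style stochastic gradient steps on the instantaneous squared error. Practical kernel adaptive filters also sparsify the growing dictionary of stored centers, which this sketch omits.

```python
import math

class SumSpaceFilter:
    """Online LMS in the sum of a linear and a Gaussian RKHS:
    f(x) = w.x + sum_i alpha_i * exp(-gamma * ||x - x_i||^2)."""

    def __init__(self, dim, step=0.1, gamma=1.0):
        self.w = [0.0] * dim     # linear-part weights
        self.step = step         # LMS step size
        self.gamma = gamma       # Gaussian kernel width parameter
        self.centers = []        # stored inputs (kernel-part dictionary)
        self.alphas = []         # kernel-part coefficients

    def _kernel(self, a, b):
        d2 = sum((x - y) ** 2 for x, y in zip(a, b))
        return math.exp(-self.gamma * d2)

    def predict(self, x):
        lin = sum(wi * xi for wi, xi in zip(self.w, x))
        ker = sum(a * self._kernel(x, c)
                  for a, c in zip(self.alphas, self.centers))
        return lin + ker

    def update(self, x, d):
        """One LMS step toward desired output d; returns the prior error."""
        e = d - self.predict(x)
        self.w = [wi + self.step * e * xi for wi, xi in zip(self.w, x)]
        self.centers.append(list(x))
        self.alphas.append(self.step * e)
        return e
```

The linear part tracks slowly varying channel-like components, while the kernel part absorbs nonlinear residuals; this division of labor is what makes the sum-space design robust when the nonlinear structure drifts.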
Detecting Irregular Patterns in IoT Streaming Data for Fall Detection
Detecting patterns in real time streaming data has been an interesting and
challenging data analytics problem. With the proliferation of a variety of
sensor devices, real-time analytics of data from the Internet of Things (IoT)
to learn regular and irregular patterns has become an important machine
learning problem to enable predictive analytics for automated notification and
decision support. In this work, we address the problem of learning an irregular
human activity pattern, fall, from streaming IoT data from wearable sensors. We
present a deep neural network model for detecting falls from accelerometer
data, achieving 98.75 percent accuracy on an online physical activity
monitoring dataset called "MobiAct", published by Vavoulas et al. The initial
model was developed using IBM Watson Studio and then transferred and deployed
on IBM Cloud with the streaming analytics service supported by IBM Streams for
monitoring real-time IoT data. We also present the systems architecture of the
real-time fall detection framework that we intend to use with mbientlabs
wearable health monitoring sensors for real-time patient monitoring at
retirement homes or rehabilitation clinics.
Comment: 7 pages
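The paper's detector is a deep neural network; as a hypothetical baseline for the same task, a classic threshold heuristic flags a near-free-fall dip in acceleration magnitude followed shortly by an impact spike. The thresholds below are illustrative (in units of g), not taken from the paper:

```python
import math

def detect_fall(samples, free_fall_g=0.5, impact_g=2.5, window=10):
    """Flag a fall when a near-free-fall dip (< free_fall_g) is followed,
    within `window` samples, by an impact spike (> impact_g).
    `samples` is a list of (ax, ay, az) accelerometer tuples in units of g."""
    mags = [math.sqrt(ax * ax + ay * ay + az * az) for ax, ay, az in samples]
    for i, m in enumerate(mags):
        if m < free_fall_g:
            # look for the impact shortly after the free-fall phase
            if any(mm > impact_g for mm in mags[i + 1:i + 1 + window]):
                return True
    return False
```

A learned model such as the paper's DNN replaces these hand-set thresholds with features inferred from labeled data, which is what lifts accuracy on varied activities.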
A 64mW DNN-based Visual Navigation Engine for Autonomous Nano-Drones
Fully autonomous miniaturized robots (e.g., drones) with artificial
intelligence (AI)-based visual navigation capabilities are extremely
challenging drivers of Internet-of-Things edge intelligence. Visual navigation
based on AI approaches, such as deep neural networks (DNNs), is becoming
pervasive for standard-size drones, but is considered out of reach for
nano-drones a few centimeters in size. In this work, we
present the first (to the best of our knowledge) demonstration of a navigation
engine for autonomous nano-drones capable of closed-loop end-to-end DNN-based
visual navigation. To achieve this goal we developed a complete methodology for
parallel execution of complex DNNs directly on board resource-constrained
milliwatt-scale nodes. Our system is based on GAP8, a novel parallel
ultra-low-power computing platform, and a 27 g commercial, open-source
CrazyFlie 2.0 nano-quadrotor. As part of our general methodology we discuss the
software mapping techniques that enable the state-of-the-art deep convolutional
neural network presented in [1] to be fully executed on-board within a strict 6
fps real-time constraint with no compromise in terms of flight results, while
all processing is done with only 64 mW on average. Our navigation engine is
flexible and can be used to span a wide performance range: at its peak
performance corner it achieves 18 fps while still consuming on average just
3.5% of the power envelope of the deployed nano-aircraft.
Comment: 15 pages, 13 figures, 5 tables, 2 listings, accepted for publication
in the IEEE Internet of Things Journal (IEEE IoTJ)
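The reported figures imply an average energy budget per inference of P/f; for instance, 64 mW at 6 fps corresponds to roughly 10.7 mJ per processed frame:

```python
def energy_per_frame_mj(power_mw, fps):
    """Average energy per processed frame in millijoules:
    P [mW] divided by frame rate [1/s] gives mJ per frame."""
    return power_mw / fps

# Figures quoted in the abstract: 64 mW average power at 6 fps
e = energy_per_frame_mj(64, 6)   # ≈ 10.7 mJ per inference
```

This per-frame energy is the quantity that must fit, together with flight power, inside a nano-drone's total power envelope.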
Fog Computing in Medical Internet-of-Things: Architecture, Implementation, and Applications
In an era when the Internet of Things (IoT) market segment tops the chart in
various business reports, it is widely envisioned that the field of medicine
stands to gain a large benefit from the explosion of wearables and
internet-connected sensors that surround us to acquire and communicate
unprecedented data on symptoms, medication, food intake, and daily-life
activities impacting one's health and wellness. However, IoT-driven healthcare
would have to overcome many barriers, such as: 1) There is an increasing demand
for data storage on cloud servers where the analysis of the medical big data
becomes increasingly complex, 2) The data, when communicated, are vulnerable to
security and privacy issues, 3) The communication of the continuously collected
data is not only costly but also energy hungry, 4) Operating and maintaining
the sensors directly from the cloud servers are non-trivial tasks. This book
chapter defines Fog Computing in the context of medical IoT. Conceptually, Fog
Computing is a service-oriented intermediate layer in IoT, providing the
interfaces between the sensors and cloud servers for facilitating connectivity,
data transfer, and queryable local database. The centerpiece of Fog computing
is a low-power, intelligent, wireless, embedded computing node that carries out
signal conditioning and data analytics on raw data collected from wearables or
other medical sensors and offers efficient means to serve telehealth
interventions. We implemented and tested a fog computing system using the
Intel Edison and Raspberry Pi that allows acquisition, computing, storage, and
communication of various medical data, such as pathological speech data of
individuals with speech disorders, phonocardiogram (PCG) signals for heart
rate estimation, and electrocardiogram (ECG)-based Q, R, and S detection.
Comment: 29 pages, 30 figures, 5 tables. Keywords: Big Data, Body Area
Network, Body Sensor Network, Edge Computing, Fog Computing, Medical
Cyberphysical Systems, Medical Internet-of-Things, Telecare, Tele-treatment,
Wearable Devices, Chapter in Handbook of Large-Scale Distributed Computing in
Smart Healthcare (2017), Springer
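As a minimal stand-in for the kind of on-node signal analytics the chapter describes (not its actual pipeline), heart rate can be estimated from a PCG or ECG stream by counting threshold crossings with a refractory period that suppresses re-triggers on the same beat. Threshold and refractory values here are illustrative:

```python
def heart_rate_bpm(signal, fs, threshold, refractory_s=0.25):
    """Estimate heart rate by detecting rising threshold crossings,
    ignoring re-triggers within a refractory period.
    `signal` is a list of samples, `fs` the sampling rate in Hz."""
    refractory = int(refractory_s * fs)
    beats, last = [], -refractory
    for i in range(1, len(signal)):
        # rising edge through the threshold, outside the refractory window
        if signal[i] >= threshold > signal[i - 1] and i - last >= refractory:
            beats.append(i)
            last = i
    if len(beats) < 2:
        return None
    avg_interval_s = (beats[-1] - beats[0]) / (len(beats) - 1) / fs
    return 60.0 / avg_interval_s
```

Running such lightweight detection on the fog node keeps the raw waveform local and sends only the derived rate to the cloud, which is the bandwidth- and energy-saving pattern the chapter advocates.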
On the Fundamental Limits of Random Non-orthogonal Multiple Access in Cellular Massive IoT
Machine-to-machine (M2M) communication constitutes the paradigm at the basis
of the Internet of Things (IoT) vision. M2M solutions allow billions of
multi-role
devices to communicate with each other or with the underlying data transport
infrastructure without, or with minimal, human intervention. Current solutions
for wireless transmissions originally designed for human-based applications
thus require a substantial shift to cope with the capacity issues in managing a
huge number of M2M devices. In this paper, we consider multiple access
techniques as promising solutions to support a large number of devices in
cellular systems with limited radio resources. We focus on non-orthogonal
multiple access (NOMA) where, with the aim of increasing channel efficiency,
the devices share the same radio resources for their data transmission. This
has been shown to provide optimal throughput from an information-theoretic
point of view. We consider a realistic system model and characterise the system
performance in terms of throughput and energy efficiency in a NOMA scenario
with a random packet arrival model, where we also derive the stability
condition for the system to guarantee the performance.
Comment: To appear in IEEE JSAC Special Issue on Non-Orthogonal Multiple
Access for 5G System
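A minimal sketch of the successive interference cancellation (SIC) step at the heart of power-domain NOMA, for two users with BPSK symbols on a shared resource (illustrative only; no fading or noise is modeled):

```python
import math

def superimpose(b_strong, b_weak, p_strong, p_weak):
    """Power-domain superposition of two BPSK symbols (+1/-1):
    each user's symbol is scaled by the square root of its power."""
    return math.sqrt(p_strong) * b_strong + math.sqrt(p_weak) * b_weak

def sic_detect(y, p_strong):
    """SIC receiver: decode the high-power user first, treating the weak
    user as interference; reconstruct and subtract its contribution; then
    decode the weak user from the residual."""
    b_strong = 1 if y >= 0 else -1
    residual = y - math.sqrt(p_strong) * b_strong
    b_weak = 1 if residual >= 0 else -1
    return b_strong, b_weak
```

Correct cancellation requires a sufficient power gap between the users; with larger clusters the subtraction errors compound, which is exactly the cluster-size/SIC-error tradeoff the abstracts above discuss.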