Big Data and the Internet of Things
Advances in sensing and computing capabilities are making it possible to
embed increasing computing power in small devices. This has enabled the sensing
devices not just to passively capture data at very high resolution but also to
take sophisticated actions in response. Combined with advances in
communication, this is resulting in an ecosystem of highly interconnected
devices referred to as the Internet of Things (IoT). In parallel, advances in
machine learning have made it possible to build models on these ever-increasing
amounts of data. Consequently, devices ranging from heavy assets such as
aircraft engines to wearables such as health monitors can now not only generate
massive amounts of data but also draw on aggregate analytics to "improve" their
performance over time. Big data analytics has been
identified as a key enabler for the IoT. In this chapter, we discuss various
avenues of the IoT where big data analytics either is already making a
significant impact or is on the cusp of doing so. We also discuss social
implications and areas of concern.
Comment: 33 pages. Draft of upcoming book chapter in Japkowicz and Stefanowski
(eds.) Big Data Analysis: New algorithms for a new society, Springer Series
on Studies in Big Data, to appear.
Robust Learning Enabled Intelligence for the Internet-of-Things: A Survey From the Perspectives of Noisy Data and Adversarial Examples
This is the author accepted manuscript. The final version is available from IEEE via the DOI in this record.
The Internet-of-Things (IoT) has been widely adopted in a range of verticals, e.g., automation, health, energy and manufacturing. Many of the applications in these sectors, such as self-driving cars and remote surgery, are critical, high-stakes applications that call for advanced machine learning (ML) models for data analytics. However, the training and testing data collected by massive numbers of IoT devices may contain noise (e.g., abnormal data, incorrect labels and incomplete information) as well as adversarial examples, which requires ML models to be highly robust in order to make reliable decisions for IoT applications. Research on robust ML has received tremendous attention from both academia and industry in recent years. This paper investigates the state-of-the-art and representative works on robust ML models that can enable highly resilient and reliable IoT intelligence. We focus on two aspects of robustness: training data that contains noise, and training data that contains adversarial examples, both of which commonly arise in real-world IoT scenarios. In addition, we investigate the reliability of both neural networks and the reinforcement learning framework, as both paradigms are widely used to handle data in IoT scenarios. Finally, we discuss potential research challenges and open issues to provide future research directions.
Engineering and Physical Sciences Research Council (EPSRC)
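Among the noise-robust losses that surveys of this kind typically cover, the generalized cross-entropy (GCE) loss is a compact illustration of label-noise tolerance. The sketch below is a generic NumPy illustration, not code from this paper; the probabilities and labels are made up:

```python
import numpy as np

def generalized_cross_entropy(probs, labels, q=0.7):
    """GCE loss L_q = (1 - p_y^q) / q, averaged over a batch.

    As q -> 0 the loss approaches standard cross-entropy; at q = 1 it
    equals mean absolute error, which is more tolerant of flipped labels.
    """
    p_y = probs[np.arange(len(labels)), labels]  # model prob. of given label
    return float(np.mean((1.0 - p_y ** q) / q))

def cross_entropy(probs, labels):
    p_y = probs[np.arange(len(labels)), labels]
    return float(np.mean(-np.log(p_y)))

# Two samples; the second label is flipped (i.e., noisy).
probs = np.array([[0.9, 0.1], [0.1, 0.9]])
labels = np.array([0, 0])
gce = generalized_cross_entropy(probs, labels)
ce = cross_entropy(probs, labels)
# The mislabeled sample dominates CE far more than it dominates GCE,
# so a model trained with GCE is penalized less for ignoring it.
```

The bounded-loss behavior, rather than any single formula, is the common thread across such robust-training methods.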
Trustworthy Edge Machine Learning: A Survey
The convergence of Edge Computing (EC) and Machine Learning (ML), known as
Edge Machine Learning (EML), has become a highly regarded research area by
utilizing distributed network resources to perform joint training and inference
in a cooperative manner. However, EML faces various challenges due to resource
constraints, heterogeneous network environments, and diverse service
requirements of different applications, which together affect the
trustworthiness of EML in the eyes of its stakeholders. This survey provides a
comprehensive summary of definitions, attributes, frameworks, techniques, and
solutions for trustworthy EML. Specifically, we first emphasize the importance
of trustworthy EML within the context of Sixth-Generation (6G) networks. We
then discuss the necessity of trustworthiness from the perspective of
challenges encountered during deployment and real-world application scenarios.
Subsequently, we provide a preliminary definition of trustworthy EML and
explore its key attributes. Following this, we introduce fundamental frameworks
and enabling technologies for trustworthy EML systems, and provide an in-depth
literature review of the latest solutions to enhance trustworthiness of EML.
Finally, we discuss corresponding research challenges and open issues.
Comment: 27 pages, 7 figures, 10 tables.
Paralinguistic Privacy Protection at the Edge
Voice user interfaces and digital assistants are rapidly entering our lives
and becoming singular touch points spanning our devices. These always-on
services capture and transmit our audio data to powerful cloud services for
further processing and subsequent actions. Our voices and raw audio signals
collected through these devices contain a host of sensitive paralinguistic
information that is transmitted to service providers regardless of deliberate
or false triggers. As our emotional patterns and sensitive attributes such as
identity, gender, and mental well-being are easily inferred using deep acoustic
models, using these services exposes us to a new generation of privacy risks.
One approach to mitigate the risk of paralinguistic-based privacy breaches is
to exploit a combination of cloud-based processing with privacy-preserving,
on-device paralinguistic information learning and filtering before transmitting
voice data. In this paper we introduce EDGY, a configurable, lightweight,
disentangled representation learning framework that transforms and filters
high-dimensional voice data to identify and contain sensitive attributes at the
edge prior to offloading to the cloud. We evaluate EDGY's on-device performance
and explore optimization techniques, including model quantization and knowledge
distillation, to enable private, accurate and efficient representation learning
on resource-constrained devices. Our results show that EDGY runs in tens of
milliseconds with 0.2% relative improvement in ABX score or minimal performance
penalties in learning linguistic representations from raw voice signals, using
a CPU and a single-core ARM processor without specialized hardware.
Comment: 14 pages, 7 figures. arXiv admin note: text overlap with
arXiv:2007.1506
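On the model quantization technique mentioned in the abstract above, post-training 8-bit quantization can be sketched in a few lines. This is a generic NumPy illustration of affine uint8 quantization, not EDGY's actual implementation, and the tensor shape is arbitrary:

```python
import numpy as np

def quantize_uint8(w):
    """Affine (asymmetric) quantization of a float tensor to uint8."""
    scale = (w.max() - w.min()) / 255.0
    zero_point = np.round(-w.min() / scale)
    q = np.clip(np.round(w / scale + zero_point), 0, 255).astype(np.uint8)
    return q, scale, zero_point

def dequantize(q, scale, zero_point):
    return (q.astype(np.float32) - zero_point) * scale

rng = np.random.default_rng(0)
w = rng.normal(size=(64, 64)).astype(np.float32)  # stand-in weight matrix
q, scale, zp = quantize_uint8(w)
max_err = float(np.abs(dequantize(q, scale, zp) - w).max())
# uint8 storage is 4x smaller than float32, and the reconstruction
# error stays within about one quantization step (scale).
```

Storing weights this way cuts memory and bandwidth fourfold, which is one reason quantized models often run on resource-constrained edge devices with only minor accuracy penalties.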
An Overview of Human Activity Recognition Using Wearable Sensors: Healthcare and Artificial Intelligence
With the rapid development of the internet of things (IoT) and artificial
intelligence (AI) technologies, human activity recognition (HAR) has been
applied in a variety of domains such as security and surveillance, human-robot
interaction, and entertainment. Even though a number of surveys and review
papers have been published, there is a lack of HAR overview papers focusing on
healthcare applications that use wearable sensors. Therefore, we fill in the
gap by presenting this overview paper. In particular, we present our projects
to illustrate the system design of HAR applications for healthcare. Our
projects include early mobility identification of human activities for
intensive care unit (ICU) patients and gait analysis of Duchenne muscular
dystrophy (DMD) patients. We cover essential components of designing HAR
systems including sensor factors (e.g., type, number, and placement location),
AI model selection (e.g., classical machine learning models versus deep
learning models), and feature engineering. In addition, we highlight the
challenges of such healthcare-oriented HAR systems and propose several research
opportunities for both the medical and computer science communities.
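To give a concrete flavor of the feature-engineering step discussed above, a common HAR baseline extracts simple statistics over sliding windows of a sensor stream before feeding them to a classical classifier. The sketch below uses synthetic accelerometer-like data and is not code from the projects described here:

```python
import numpy as np

def window_features(signal, win=50, step=25):
    """Slide a fixed-size window over one sensor axis and extract
    simple time-domain features: mean, std, min, max, signal energy."""
    feats = []
    for start in range(0, len(signal) - win + 1, step):
        w = signal[start:start + win]
        feats.append([w.mean(), w.std(), w.min(), w.max(), np.mean(w ** 2)])
    return np.array(feats)

rng = np.random.default_rng(1)
rest = rng.normal(0.0, 0.05, size=500)  # low-variance: patient at rest
walk = rng.normal(0.0, 0.80, size=500)  # high-variance: patient walking
X = np.vstack([window_features(rest), window_features(walk)])
y = np.array([0] * 19 + [1] * 19)  # 19 overlapping windows per recording
# Even the std feature alone separates these two activities cleanly,
# which is why classical models remain competitive for simple HAR tasks.
```

In a real system the same windowing is applied per axis and per sensor, and the choice of window length and overlap is itself one of the sensor factors the overview discusses.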
A framework to detect cyber-attacks against networked medical devices (Internet of Medical Things): an attack-surface-reduction by design approach
Most medical devices in the healthcare system are not built with security concepts in mind. Hence, these devices' built-in vulnerabilities make them prone to various cyber-attacks when connected to a hospital network or cloud. Attackers can penetrate devices, tamper with them, and disrupt services in hospitals and clinics, threatening patients' health and lives. A specialist can manage cyber-attack risks by reducing the system's attack surface. Attack surface analysis, whether viewed as a potential source of vulnerabilities for attackers to exploit or as a means to reduce cyber-attacks, plays a significant role in mitigating risks, and it is necessary to perform it in the design phase. This research proposes a framework that integrates attack surface concepts into the design and development of medical devices. Devices are classified as high-risk, medium-risk, or low-risk. After risk assessment, the employed classification algorithm detects and analyzes the attack surfaces, and the relevant adapted security controls are then prompted to hinder the attack. The simulation and evaluation of the framework is the subject of further research.
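The paper's classification algorithm is not given in the abstract. Purely to illustrate the idea of mapping attack surface to risk classes, one could score a device by counting its externally reachable entry points; the fields, weights, and thresholds below are hypothetical and are not the authors' method:

```python
# Hypothetical attack-surface score: each open network port counts once;
# wireless interfaces count double (reachable remotely, without a cable).
def attack_surface_score(device):
    return len(device["open_ports"]) + 2 * len(device["wireless_ifaces"])

def risk_class(device):
    """Map a device to high/medium/low risk from its surface score,
    escalating any exposed device that directly affects patient safety."""
    score = attack_surface_score(device)
    if device["patient_critical"] and score > 0:
        return "high"
    if score >= 3:
        return "medium"
    return "low"

infusion_pump = {"open_ports": [80], "wireless_ifaces": ["wifi"],
                 "patient_critical": True}
badge_reader = {"open_ports": [22, 443], "wireless_ifaces": [],
                "patient_critical": False}
```

The point of such a scheme, as in the framework above, is that reducing the score (closing ports, disabling unused radios) at design time directly shrinks what an attacker can reach.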