Thirty Years of Machine Learning: The Road to Pareto-Optimal Wireless Networks
Future wireless networks hold substantial potential for supporting a broad
range of complex, compelling applications in both military and civilian
fields, where users can enjoy high-rate, low-latency, low-cost and
reliable information services. Achieving this ambitious goal requires new radio
techniques for adaptive learning and intelligent decision making because of the
complex heterogeneous nature of the network structures and wireless services.
Machine learning (ML) algorithms have achieved great success in supporting big data
analytics, efficient parameter estimation and interactive decision making.
Hence, in this article, we review the thirty-year history of ML by elaborating
on supervised learning, unsupervised learning, reinforcement learning and deep
learning. Furthermore, we investigate their employment in the compelling
applications of wireless networks, including heterogeneous networks (HetNets),
cognitive radios (CR), Internet of things (IoT), machine to machine networks
(M2M), and so on. This article aims to assist readers in clarifying the
motivation and methodology of the various ML algorithms, so as to invoke them
for hitherto unexplored services and scenarios of future wireless networks.
Comment: 46 pages, 22 figures
Securing Real-Time Internet-of-Things
Modern embedded and cyber-physical systems are ubiquitous. A large number of
critical cyber-physical systems have real-time requirements (e.g., avionics,
automobiles, power grids, manufacturing systems, industrial control systems,
etc.). Recent developments and new functionality require real-time embedded
devices to be connected to the Internet. This gives rise to the real-time
Internet-of-things (RT-IoT) that promises a better user experience through
stronger connectivity and efficient use of next-generation embedded devices.
However, RT-IoT systems are also increasingly becoming targets for
cyber-attacks, a risk exacerbated by this increased connectivity. This paper
gives an introduction to RT-IoT systems, an outlook on current approaches, and
possible research challenges towards secure RT-IoT frameworks.
IoT Anomaly Detection Methods and Applications: A Survey
Ongoing research on anomaly detection for the Internet of Things (IoT) is a
rapidly expanding field. This growth necessitates an examination of application
trends and current gaps. The vast majority of those publications are in areas
such as network and infrastructure security, sensor monitoring, smart home, and
smart city applications and are extending into even more sectors. Recent
advancements in the field have increased the necessity to study the many IoT
anomaly detection applications. This paper begins with a summary of the
detection methods and applications, accompanied by a discussion of the
categorization of IoT anomaly detection algorithms. We then discuss the current
publications to identify distinct application domains, examining papers chosen
based on our search criteria. The survey considers 64 recent papers published
between January 2019 and July 2021. In these publications, we observed a
shortage of IoT anomaly detection methodologies, for example, when dealing
with the integration of systems with various sensors, with data and concept
drift, and with data augmentation where ground-truth data are scarce. Finally,
we discuss these open challenges and offer new perspectives where further
research is required.
Comment: 22 pages
Real-time big data processing for anomaly detection : a survey
The advent of connected devices and the omnipresence of the Internet have paved the way for intruders to attack networks, leading to cyber-attacks, financial loss, information theft in healthcare, and cyber war. Hence, network security analytics has become an important area of concern and has, of late, gained intensive attention among researchers, specifically in the domain of network anomaly detection, which is considered crucial for network security. However, preliminary investigations have revealed that existing approaches to detecting anomalies in networks are not effective enough, particularly for detecting them in real time. The inefficacy of current approaches is mainly due to the amassment of massive volumes of data through connected devices. Therefore, it is crucial to propose a framework that effectively handles real-time big data processing and detects anomalies in networks. In this regard, this paper attempts to address the issue of detecting anomalies in real time. Accordingly, this paper surveys the state-of-the-art real-time big data processing technologies related to anomaly detection and the vital characteristics of the associated machine learning algorithms. The paper begins with an explanation of the essential contexts and a taxonomy of real-time big data processing, anomaly detection, and machine learning algorithms, followed by a review of big data processing technologies. Finally, the identified research challenges of real-time big data processing in anomaly detection are discussed. © 2018 Elsevier Ltd.
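As a concrete illustration of the kind of lightweight, real-time detector such frameworks build on (a generic sketch, not a method from the survey itself), a rolling z-score check over a sliding window flags points that deviate sharply from recent history:

```python
from collections import deque
import math

class RollingZScoreDetector:
    """Flags a value as anomalous when it deviates from the recent
    window mean by more than `threshold` standard deviations."""

    def __init__(self, window=50, threshold=3.0):
        self.window = deque(maxlen=window)  # recent values only
        self.threshold = threshold

    def update(self, x):
        is_anomaly = False
        if len(self.window) >= 10:  # warm-up period before scoring
            mean = sum(self.window) / len(self.window)
            var = sum((v - mean) ** 2 for v in self.window) / len(self.window)
            std = math.sqrt(var)
            if std > 0 and abs(x - mean) / std > self.threshold:
                is_anomaly = True
        self.window.append(x)  # update window after scoring
        return is_anomaly

detector = RollingZScoreDetector()
stream = [10.0] * 40 + [10.5, 9.8, 60.0, 10.1]
flags = [detector.update(x) for x in stream]
# only the spike at 60.0 is flagged
```

Streaming detectors like this run in constant memory per sensor, which is why windowed statistics are a common baseline before heavier ML models are brought in.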
Machine Learning-Enabled IoT Security: Open Issues and Challenges Under Advanced Persistent Threats
Despite its technological benefits, Internet of Things (IoT) has cyber
weaknesses due to the vulnerabilities in the wireless medium. Machine learning
(ML)-based methods are widely used against cyber threats in IoT networks with
promising performance. The advanced persistent threat (APT) is a prominent
means for cybercriminals to compromise networks, and it is notable for its
long-term and harmful characteristics. However, it is difficult for ML-based
approaches to achieve promising detection performance in identifying APT
attacks, because such attacks constitute an extremely small percentage of
overall traffic. Few surveys fully investigate APT attacks in IoT networks,
owing to the lack of public datasets covering all types of APT attacks. It is
therefore worthwhile to bridge the state-of-the-art in network attack
detection with APT attack detection in a comprehensive review article. This
survey article reviews the security
challenges in IoT networks and presents the well-known attacks, APT attacks,
and threat models in IoT systems. Meanwhile, signature-based, anomaly-based,
and hybrid intrusion detection systems are summarized for IoT networks. The
article highlights statistical insights regarding frequently applied ML-based
methods against network intrusion, alongside the number of attack types
detected. Finally, open issues and challenges for common network intrusion and
APT attacks are presented for future research.
Comment: ACM Computing Surveys, 2022, 35 pages, 10 Figures, 8 Tables
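The base-rate problem this abstract alludes to can be made concrete with simple arithmetic (the numbers below are illustrative assumptions, not figures from the survey): even a strong classifier produces mostly false alarms when the positive class is vanishingly rare.

```python
# Illustrative base-rate calculation for rare-attack detection.
total_flows = 1_000_000
apt_rate = 0.001           # assume only 0.1% of flows are APT-related
tpr, fpr = 0.99, 0.01      # assume a 99% true-positive, 1% false-positive detector

apt_flows = total_flows * apt_rate        # 1,000 malicious flows
benign_flows = total_flows - apt_flows    # 999,000 benign flows

true_positives = tpr * apt_flows          # 990 correct alerts
false_positives = fpr * benign_flows      # 9,990 false alarms

precision = true_positives / (true_positives + false_positives)
print(f"precision = {precision:.3f}")     # ~0.090: only ~9% of alerts are real
```

This is one reason the extreme class imbalance of APT traffic makes off-the-shelf ML detectors hard to deploy without imbalance-aware training or post-filtering.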
Deep Anomaly Detection for Time-series Data in Industrial IoT: A Communication-Efficient On-device Federated Learning Approach
Since edge device failures (i.e., anomalies) seriously affect the production
of industrial products in Industrial IoT (IIoT), accurately and timely
detecting anomalies is becoming increasingly important. Furthermore, data
collected by edge devices may contain users' private data, which challenges
current detection approaches, as user privacy has become a growing public
concern in recent years. With this focus, this paper proposes a new
communication-efficient on-device federated learning (FL)-based deep anomaly
detection framework for sensing time-series data in IIoT. Specifically, we
first introduce an FL framework to enable decentralized edge devices to
collaboratively train an anomaly detection model, which can improve its
generalization ability. Second, we propose an Attention Mechanism-based
Convolutional Neural Network-Long Short Term Memory (AMCNN-LSTM) model to
accurately detect anomalies. The AMCNN-LSTM model uses attention
mechanism-based CNN units to capture important fine-grained features, thereby
preventing memory loss and gradient dispersion problems. Furthermore, this
model retains the advantages of LSTM unit in predicting time series data.
Third, to adapt the proposed framework to the timeliness of industrial anomaly
detection, we propose a gradient compression mechanism based on Top-\textit{k}
selection to improve communication efficiency. Extensive experiment studies on
four real-world datasets demonstrate that the proposed framework can accurately
and timely detect anomalies and also reduce the communication overhead by 50\%
compared to the federated learning framework that does not use a gradient
compression scheme.
Comment: IEEE Internet of Things Journal
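The generic Top-k idea behind such a gradient compression mechanism can be sketched as follows (a minimal illustration of Top-k sparsification in general; the paper's actual scheme, including any error feedback or encoding, may differ):

```python
import heapq

def topk_sparsify(gradient, k):
    """Keep only the k largest-magnitude gradient entries and zero out
    the rest; a real system would transmit sparse (index, value) pairs."""
    top = heapq.nlargest(k, range(len(gradient)), key=lambda i: abs(gradient[i]))
    kept = set(top)
    return [g if i in kept else 0.0 for i, g in enumerate(gradient)]

grad = [0.02, -0.9, 0.05, 0.4, -0.01, 0.7]
sparse = topk_sparsify(grad, k=3)
# keeps the three largest-magnitude entries:
# [0.0, -0.9, 0.0, 0.4, 0.0, 0.7]
```

Transmitting only k entries per round cuts uplink traffic roughly in proportion to k/n, which is how Top-k selection trades a small amount of gradient fidelity for large communication savings in federated training.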