An Overview on Application of Machine Learning Techniques in Optical Networks
Today's telecommunication networks have become sources of enormous amounts of
widely heterogeneous data. This information can be retrieved from network
traffic traces, network alarms, signal quality indicators, users' behavioral
data, etc. Advanced mathematical tools are required to extract meaningful
information from these data and to make decisions pertaining to the proper
functioning of the networks. Among these
mathematical tools, Machine Learning (ML) is regarded as one of the most
promising methodological approaches to perform network-data analysis and enable
automated network self-configuration and fault management. The adoption of ML
techniques in the field of optical communication networks is motivated by the
unprecedented growth of network complexity faced by optical networks in the
last few years. This increase in complexity is due to the introduction of a huge
number of adjustable and interdependent system parameters (e.g., routing
configurations, modulation format, symbol rate, coding schemes, etc.) that are
enabled by the usage of coherent transmission/reception technologies, advanced
digital signal processing and compensation of nonlinear effects in optical
fiber propagation. In this paper we provide an overview of the application of
ML to optical communications and networking. We classify and survey relevant
literature dealing with the topic, and we also provide an introductory tutorial
on ML for researchers and practitioners interested in this field. Although a
good number of research papers have recently appeared, the application of ML to
optical networks is still in its infancy: to stimulate further work in this
area, we conclude the paper by proposing possible new research directions.
Machine Learning in IoT Security: Current Solutions and Future Challenges
The future Internet of Things (IoT) will have a deep economical, commercial
and social impact on our lives. The participating nodes in IoT networks are
usually resource-constrained, which makes them attractive targets for cyber
attacks. In this regard, extensive efforts have been made to address the
security and privacy issues in IoT networks primarily through traditional
cryptographic approaches. However, the unique characteristics of IoT nodes
render the existing solutions insufficient to encompass the entire security
spectrum of the IoT networks. This is, at least in part, because of the
resource constraints, heterogeneity, massive real-time data generated by the
IoT devices, and the extensively dynamic behavior of the networks. Therefore,
Machine Learning (ML) and Deep Learning (DL) techniques, which are able to
provide embedded intelligence in the IoT devices and networks, are leveraged to
cope with different security problems. In this paper, we systematically review
the security requirements, attack vectors, and the current security solutions
for the IoT networks. We then shed light on the gaps in these security
solutions that call for ML and DL approaches. We also discuss in detail the
existing ML and DL solutions for addressing different security problems in IoT
networks. Finally, based on the detailed investigation of the existing
solutions in the literature, we discuss future research directions for ML-
and DL-based IoT security.
Artificial Intelligence based Anomaly Detection of Energy Consumption in Buildings: A Review, Current Trends and New Perspectives
Enormous amounts of data are being produced everyday by sub-meters and smart
sensors installed in residential buildings. If leveraged properly, that data
could assist end-users, energy producers and utility companies in detecting
anomalous power consumption and understanding the causes of each anomaly.
Therefore, anomaly detection could prevent a minor problem from becoming an
overwhelming one.
Moreover, it will aid in better decision-making to reduce wasted energy and
promote sustainable and energy efficient behavior. In this regard, this paper
is an in-depth review of existing anomaly detection frameworks for building
energy consumption based on artificial intelligence. Specifically, an extensive
survey is presented, in which a comprehensive taxonomy is introduced to
classify existing algorithms based on different modules and parameters adopted,
such as machine learning algorithms, feature extraction approaches, anomaly
detection levels, computing platforms and application scenarios. To the best of
the authors' knowledge, this is the first review article that discusses anomaly
detection in building energy consumption. Moving forward, important findings
along with domain-specific problems, difficulties and challenges that remain
unresolved are thoroughly discussed, including the absence of: (i) precise
definitions of anomalous power consumption, (ii) annotated datasets, (iii)
unified metrics to assess the performance of existing solutions, (iv) platforms
for reproducibility and (v) privacy preservation. Insights into current
research trends are then discussed to widen the applications and
effectiveness of anomaly detection technology before deriving future
directions that merit significant attention. This article serves as a
comprehensive reference to understand the current technological progress in
anomaly detection of energy consumption based on artificial intelligence. Comment: 11 figures, 3 tables
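As a toy illustration of the simplest kind of detector such frameworks build on, a threshold rule can flag sub-meter readings that deviate strongly from typical consumption. This is a minimal sketch only; the readings and the 2-sigma rule are illustrative assumptions, not a method taken from any surveyed work.

```python
# Hypothetical hourly sub-meter readings (kWh); one injected anomaly.
import statistics

readings_kwh = [1.1, 0.9, 1.0, 1.2, 0.8, 1.1, 5.3, 1.0, 0.9, 1.1]

mu = statistics.mean(readings_kwh)      # sample mean
sigma = statistics.stdev(readings_kwh)  # sample standard deviation

# Flag any reading more than 2 standard deviations from the mean.
anomalies = [(i, r) for i, r in enumerate(readings_kwh)
             if abs(r - mu) > 2 * sigma]
print(anomalies)  # → [(6, 5.3)]
```

Real frameworks in the surveyed literature replace this rule with learned models, but the pipeline shape (fit a notion of "normal", then score deviations) is the same.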
Non-iterative and Fast Deep Learning: Multilayer Extreme Learning Machines
In the past decade, deep learning techniques have powered many aspects of our daily life and drawn ever-increasing research interest. However, conventional deep learning approaches, such as the deep belief network (DBN), restricted Boltzmann machine (RBM), and convolutional neural network (CNN), suffer from a time-consuming training process due to the fine-tuning of a large number of parameters and their complicated hierarchical structure. Furthermore, this complexity makes it difficult to theoretically analyze and prove the universal approximation capability of these conventional deep learning approaches. To tackle these issues, multilayer extreme learning machines (ML-ELMs) were proposed, which accelerate the development of deep learning. Compared with conventional deep learning, ML-ELMs are non-iterative and fast due to their random feature mapping mechanism. In this paper, we perform a thorough review of the development of ML-ELMs, including the stacked ELM autoencoder (ELM-AE), residual ELM, and local receptive field based ELM (ELM-LRF), and address their applications. In addition, we discuss the connection between random neural networks and conventional deep learning.
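The non-iterative training referred to above can be sketched for a single-hidden-layer ELM: the input weights and biases are drawn at random and never trained, and only the output weights are obtained in closed form via a pseudoinverse. This is a minimal NumPy sketch on a hypothetical toy regression task, not code from the surveyed papers.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy regression data (assumption: learn y = x1 + x2 from random inputs).
X = rng.uniform(-1, 1, size=(200, 2))
y = X[:, 0] + X[:, 1]

n_hidden = 50
W = rng.normal(size=(2, n_hidden))   # random input weights (fixed, never trained)
b = rng.normal(size=n_hidden)        # random biases (fixed, never trained)

H = np.tanh(X @ W + b)               # random feature mapping
beta = np.linalg.pinv(H) @ y         # closed-form least-squares output weights

y_hat = H @ beta
print(float(np.mean((y - y_hat) ** 2)))  # training MSE; small for this toy task
```

Stacking such layers (e.g., via ELM autoencoders) yields the multilayer variants the paper reviews; the key speed advantage is that no gradient-based fine-tuning pass is needed.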
Monitoring the waste to energy plant using the latest AI methods and tools
Solid wastes, for instance municipal and industrial wastes, present great environmental concerns and challenges all over the world. This has led to the development of innovative waste-to-energy process technologies capable of handling different waste materials in a more sustainable and energy-efficient manner. However, as in many other complex industrial process operations, waste-to-energy plants require sophisticated process monitoring systems in order to realize very high overall plant efficiencies. Conventional data-driven statistical methods, which include principal component analysis, partial least squares, multivariable linear regression and so forth, are normally applied in process monitoring. Recently, however, the latest artificial intelligence (AI) methods, in particular deep learning algorithms, have demonstrated remarkable performance in several important areas such as machine vision, natural language processing and pattern recognition. The new AI algorithms have gained increasing attention in industrial process applications, for instance in areas such as predictive product quality control and machine health monitoring. Moreover, the availability of big-data processing tools and cloud computing technologies further supports the use of deep learning based algorithms for process monitoring.
In this work, a process monitoring scheme based on state-of-the-art artificial intelligence methods and cloud computing platforms is proposed for a waste-to-energy industrial use case. The monitoring scheme supports the use of the latest AI methods, leveraging big-data processing tools and taking advantage of available cloud computing platforms. Deep learning algorithms are able to describe non-linear, dynamic and high-dimensionality systems better than most conventional data-based process monitoring methods. Moreover, deep learning based methods are well suited to big-data analytics, unlike traditional statistical machine learning methods, which are less efficient.
Furthermore, the proposed monitoring scheme emphasizes real-time process monitoring in addition to offline data analysis. To achieve this, the monitoring scheme proposes the use of big-data analytics software frameworks and tools such as Microsoft Azure Stream Analytics, Apache Storm, Apache Spark, Hadoop and many others. The availability of open-source in addition to proprietary cloud computing platforms, AI and big-data software tools all supports the realization of the proposed monitoring scheme.
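As a point of reference for the conventional statistical methods mentioned above, a PCA-based monitoring step can be sketched as follows: fit principal components on normal-operation data, then flag new samples whose squared prediction error (the Q statistic) is large. The plant variables and data below are synthetic, for illustration only.

```python
import numpy as np

rng = np.random.default_rng(1)

# "Normal operation" training data: 3 correlated process variables.
t = rng.normal(size=(500, 1))
X_train = np.hstack([t, 2 * t, -t]) + 0.05 * rng.normal(size=(500, 3))

mu = X_train.mean(axis=0)
Xc = X_train - mu
_, _, Vt = np.linalg.svd(Xc, full_matrices=False)
P = Vt[:1].T                         # retain 1 principal component

def spe(x):
    """Squared prediction error (Q statistic) of one new sample."""
    xc = x - mu
    residual = xc - P @ (P.T @ xc)   # part not explained by the PCA model
    return float(residual @ residual)

normal_sample = np.array([0.5, 1.0, -0.5])
faulty_sample = np.array([0.5, 1.0, 2.0])   # breaks the learned correlation
print(spe(normal_sample), spe(faulty_sample))
```

Deep learning based monitoring replaces the linear PCA model with, e.g., an autoencoder, but the fault indicator is analogous: a large reconstruction residual signals abnormal operation.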
Explainable, Domain-Adaptive, and Federated Artificial Intelligence in Medicine
Artificial intelligence (AI) continues to transform data analysis in many
domains. Progress in each domain is driven by a growing body of annotated data,
increased computational resources, and technological innovations. In medicine,
the sensitivity of the data, the complexity of the tasks, the potentially high
stakes, and a requirement of accountability give rise to a particular set of
challenges. In this review, we focus on three key methodological approaches
that address some of the particular challenges in AI-driven medical decision
making. (1) Explainable AI aims to produce a human-interpretable justification
for each output. Such models increase confidence if the results appear
plausible and match the clinicians' expectations. However, the absence of a
plausible explanation does not imply an inaccurate model. Especially in highly
non-linear, complex models that are tuned to maximize accuracy, such
interpretable representations only reflect a small portion of the
justification. (2) Domain adaptation and transfer learning enable AI models to
be trained and applied across multiple domains, for example, a classification
task based on images acquired on different acquisition hardware. (3) Federated
learning enables learning large-scale models without exposing sensitive
personal health information. Unlike centralized AI learning, where the
centralized learning machine has access to the entire training data, the
federated learning process iteratively updates models across multiple sites by
exchanging only parameter updates, not personal health data. This narrative
review covers the basic concepts, highlights relevant corner-stone and
state-of-the-art research in the field, and discusses perspectives. Comment: This paper is accepted in the IEEE/CAA Journal of Automatica Sinica, Nov. 10 202
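The parameter-exchange step described for federated learning can be sketched as a FedAvg-style weighted average: each site sends only its model parameters and local sample count, never raw patient data. The site updates below are hypothetical, for illustration only.

```python
# Minimal federated averaging sketch (assumption: the "model" is a
# plain parameter vector; real systems average neural network weights).
def fed_avg(site_updates):
    """Average parameter vectors from several sites, weighted by the
    number of local training samples at each site."""
    total = sum(n for _, n in site_updates)
    dim = len(site_updates[0][0])
    return [sum(w[i] * n for w, n in site_updates) / total
            for i in range(dim)]

# Three hospitals send (parameters, local sample count) pairs.
updates = [([0.2, 1.0], 100), ([0.4, 0.8], 300), ([0.1, 1.2], 100)]
print(fed_avg(updates))  # weighted average of the three parameter vectors
```

Iterating this round (broadcast the averaged model, retrain locally, re-average) is what lets the federated process converge without any site exposing personal health information.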