4,059 research outputs found
Thirty Years of Machine Learning: The Road to Pareto-Optimal Wireless Networks
Future wireless networks have substantial potential to support a broad range of
complex, compelling applications in both military and civilian fields, where
users can enjoy high-rate, low-latency, low-cost, and reliable information
services. Achieving this ambitious goal requires new radio techniques for
adaptive learning and intelligent decision making, owing to the complex,
heterogeneous nature of network structures and wireless services. Machine
learning (ML) algorithms have achieved great success in supporting big data
analytics, efficient parameter estimation, and interactive decision making.
Hence, in this article, we review the thirty-year history of ML, elaborating on
supervised learning, unsupervised learning, reinforcement learning, and deep
learning. Furthermore, we investigate their employment in compelling
applications of wireless networks, including heterogeneous networks (HetNets),
cognitive radio (CR), the Internet of Things (IoT), machine-to-machine (M2M)
networks, and so on. This article aims to assist readers in clarifying the
motivation and methodology of the various ML algorithms, so as to invoke them
for hitherto unexplored services and scenarios of future wireless networks.
Comment: 46 pages, 22 figures
Privacy-Aware Load Balancing in Fog Networks: A Reinforcement Learning Approach
Fog Computing has emerged as a solution to support the growing demands of
real-time Internet of Things (IoT) applications, which require high
availability of such distributed services. Intelligent workload distribution
algorithms are needed to maximize the utilization of such Fog resources while
minimizing the time required to process these workloads. These load balancing
algorithms are critical in dynamic environments with heterogeneous resources
and workload requirements along with unpredictable traffic demands. In this
paper, load balancing is provided using a Reinforcement Learning (RL)
algorithm, which optimizes the system performance by minimizing the waiting
delay of IoT workloads. Unlike previous studies, the proposed solution does not
require load and resource information from Fog nodes, which makes the algorithm
dynamically adaptable to possible environment changes over time. This also
makes the algorithm aware of the privacy requirements of Fog service providers,
who may wish to withhold such information to prevent competing providers from
devising better pricing strategies. The proposed algorithm is interactively
evaluated on a Discrete-event Simulator (DES) to mimic a practical deployment
of the solution in real environments. In addition, we evaluate the algorithm's
generalization ability on simulations longer than those it was trained on,
which, to the best of our knowledge, has not been explored before. The results
presented in this paper show that our proposed approach outperforms baseline
load-balancing methods under different workload generation rates.
Comment: 9 pages, 9 figures, 1 table
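The delay-driven dispatching described in this abstract can be sketched as a minimal Q-learning load balancer that learns only from the waiting delay observed after each dispatch, never querying node load or resource information. This is an illustrative assumption-laden sketch, not the paper's actual algorithm: the class name, the stateless (bandit-style) action space, and all hyperparameters are hypothetical.

```python
import random

class QLearningBalancer:
    """Toy Q-learning dispatcher: picks a fog node per workload and
    learns from observed waiting delays alone (hypothetical sketch)."""

    def __init__(self, n_nodes, alpha=0.1, gamma=0.9, epsilon=0.1):
        self.q = [0.0] * n_nodes              # one Q-value per fog node
        self.alpha, self.gamma, self.epsilon = alpha, gamma, epsilon

    def choose_node(self):
        # epsilon-greedy exploration over fog nodes
        if random.random() < self.epsilon:
            return random.randrange(len(self.q))
        return max(range(len(self.q)), key=lambda a: self.q[a])

    def update(self, node, waiting_delay):
        # reward is the negative waiting delay observed after dispatch
        reward = -waiting_delay
        best_next = max(self.q)
        self.q[node] += self.alpha * (reward + self.gamma * best_next - self.q[node])
```

Because the agent only observes delays, a consistently slow node ends up with a lower Q-value and is dispatched to less often, without any node ever disclosing its load.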
Artificial Intelligence for Resilience in Smart Grid Operations
Today, the electric power grid is transforming into a highly interconnected network of advanced technologies, equipment, and controls that enable a smarter grid. The growing complexity of the smart grid requires resilient operation and control. Power system resilience is defined as the ability to harden the system against, and quickly recover from, high-impact, low-frequency events. The introduction of two-way flows of information and electricity in the smart grid raises concerns about cyber-physical attacks. The proliferating penetration of renewable energy sources such as solar photovoltaic (PV) and wind power introduces challenges due to the high variability and uncertainty in generation. Unintentional disruptions and power system component outages have become a threat to real-time power system operations. Recent extreme weather events and natural disasters such as hurricanes, storms, and wildfires demonstrate the importance of resilience in the power system. It is essential to find solutions to these challenges and maintain resilience in the smart grid.
In this dissertation, artificial intelligence (AI) based approaches have been developed to enhance resilience in the smart grid. Methods for optimal automatic generation control (AGC) have been developed for multi-area, multi-machine power systems. Reliable AI models have been developed for predicting solar irradiance, PV power generation, and power system frequencies. The proposed short-horizon AI prediction models, with horizons ranging from a few seconds to over a minute, outperform state-of-the-art persistence models. The AI prediction models have been applied to provide situational intelligence for power system operations. An enhanced tie-line bias control for multi-area power systems under variable and uncertain conditions has been developed using predicted PV power and bus frequencies. A distributed and parallel security-constrained optimal power flow (SCOPF) algorithm has been developed to overcome the challenges of solving the SCOPF problem for large power networks. The methods have been developed and tested on an experimental laboratory platform consisting of real-time digital simulators, hardware/software phasor measurement units, and a real-time weather station.
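The persistence baseline mentioned in this abstract simply repeats the last observation, whereas a learned predictor can exploit the trend. The following is a minimal sketch contrasting persistence with a least-squares autoregressive (AR) forecaster; the function names and AR order are illustrative assumptions, not the dissertation's actual models.

```python
import numpy as np

def persistence_forecast(history):
    # persistence baseline: next value equals the last observed value
    return history[-1]

def ar_forecast(history, order=3):
    # fit AR coefficients by least squares on sliding lagged windows,
    # then predict one step ahead from the most recent `order` samples
    x = np.array(history, dtype=float)
    rows = np.array([x[i:i + order] for i in range(len(x) - order)])
    targets = x[order:]
    coef, *_ = np.linalg.lstsq(rows, targets, rcond=None)
    return float(np.dot(x[-order:], coef))
```

On a steadily ramping signal (e.g. morning irradiance), persistence lags by one step while the AR fit extrapolates the trend, which is the kind of gap short-horizon models exploit.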
Approaches for Future Internet architecture design and Quality of Experience (QoE) Control
Researching a Future Internet capable of overcoming the current Internet's limitations is a strategic
investment. In this respect, this paper presents some concepts that can help provide guidelines for
overcoming the above-mentioned limitations. In the authors' vision, a key Future Internet target is to allow
applications to transparently, efficiently, and flexibly exploit the available network resources with the aim
of matching users' expectations. Such expectations could be expressed in terms of a properly defined Quality
of Experience (QoE). In this respect, this paper provides some approaches for coping with the QoE
provisioning problem.
A scalable multi-core architecture with heterogeneous memory structures for Dynamic Neuromorphic Asynchronous Processors (DYNAPs)
Neuromorphic computing systems comprise networks of neurons that use
asynchronous events for both computation and communication. This type of
representation offers several advantages in terms of bandwidth and power
consumption in neuromorphic electronic systems. However, managing the traffic
of asynchronous events in large scale systems is a daunting task, both in terms
of circuit complexity and memory requirements. Here we present a novel routing
methodology that employs both hierarchical and mesh routing strategies and
combines heterogeneous memory structures for minimizing both memory
requirements and latency, while maximizing programming flexibility to support a
wide range of event-based neural network architectures, through parameter
configuration. We validated the proposed scheme in a prototype multi-core
neuromorphic processor chip that employs hybrid analog/digital circuits for
emulating synapse and neuron dynamics together with asynchronous digital
circuits for managing the address-event traffic. We present a theoretical
analysis of the proposed connectivity scheme, describe the methods and circuits
used to implement the scheme, and characterize the prototype chip. Finally, we
demonstrate the use of the neuromorphic processor with a convolutional neural
network for the real-time classification of visual symbols flashed to a dynamic
vision sensor (DVS) at high speed.
Comment: 17 pages, 14 figures
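The combination of hierarchical and mesh routing described in this abstract can be illustrated with a toy software model: events targeting the same core are delivered locally (the hierarchical level), while inter-core events take dimension-ordered hops across a 2D mesh. The data layout and function below are hypothetical; the actual chip uses asynchronous circuits and heterogeneous memory structures, not this Python model.

```python
def route_event(src_core, dst_core, neuron_id, grid_width=4):
    """Return the hop sequence for one address-event (toy model).
    Intra-core events are delivered locally; inter-core events take
    dimension-ordered mesh hops (x first, then y)."""
    if src_core == dst_core:
        return [("local", neuron_id)]          # hierarchical short-circuit
    sx, sy = src_core % grid_width, src_core // grid_width
    dx, dy = dst_core % grid_width, dst_core // grid_width
    hops = []
    step = 1 if dx > sx else -1
    for x in range(sx, dx, step):              # horizontal mesh hops
        hops.append(("east" if step > 0 else "west", (x + step, sy)))
    step = 1 if dy > sy else -1
    for y in range(sy, dy, step):              # vertical mesh hops
        hops.append(("north" if step > 0 else "south", (dx, y + step)))
    hops.append(("deliver", neuron_id))
    return hops
```

Keeping local traffic off the mesh is what lets such schemes cut both routing-memory requirements and latency, since only the (comparatively rare) inter-core events pay for mesh traversal.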