Thirty Years of Machine Learning: The Road to Pareto-Optimal Wireless Networks
Future wireless networks have a substantial potential in terms of supporting
a broad range of complex compelling applications both in military and civilian
fields, where the users are able to enjoy high-rate, low-latency, low-cost and
reliable information services. Achieving this ambitious goal requires new radio
techniques for adaptive learning and intelligent decision making because of the
complex heterogeneous nature of the network structures and wireless services.
Machine learning (ML) algorithms have great success in supporting big data
analytics, efficient parameter estimation and interactive decision making.
Hence, in this article, we review the thirty-year history of ML by elaborating
on supervised learning, unsupervised learning, reinforcement learning and deep
learning. Furthermore, we investigate their employment in the compelling
applications of wireless networks, including heterogeneous networks (HetNets),
cognitive radios (CR), Internet of things (IoT), machine to machine networks
(M2M), and so on. This article aims to assist readers in clarifying the
motivation and methodology of the various ML algorithms, so as to invoke them
for hitherto unexplored services and scenarios of future wireless networks.
A comprehensive survey on reinforcement-learning-based computation offloading techniques in Edge Computing Systems
In recent years, the number of embedded computing devices connected to the Internet has increased exponentially. At the same time, new applications are becoming more complex and computationally demanding, which can be a problem for devices, especially when they are battery powered. In this context, the concepts of computation offloading and edge computing, which allow applications to be fully or partially offloaded and executed on servers close to the devices in the network, have arisen and received increasing attention. Hence, the design of algorithms that decide which applications or tasks should be offloaded, and where to execute them, is crucial. One of the options that has been gaining momentum lately is the use of Reinforcement Learning (RL) and, in particular, Deep Reinforcement Learning (DRL), which enables learning optimal or near-optimal offloading policies adapted to each particular scenario. Although the use of RL techniques to solve the computation offloading problem in edge systems has been covered by some surveys, it has been done in a limited way. For example, some surveys have analysed the use of RL to solve various networking problems, with computation offloading being one of them, but not the primary focus. Other surveys, on the other hand, have reviewed techniques to solve the computation offloading problem, with RL being just one of the approaches considered. To the best of our knowledge, this is the first survey that specifically focuses on the use of RL and DRL techniques for computation offloading in edge computing systems. We present a comprehensive and detailed survey, where we analyse and classify the research papers in terms of use cases, network and edge computing architectures, objectives, RL algorithms, decision-making approaches, and time-varying characteristics considered in the analysed scenarios.
In particular, we include a series of tables to help researchers identify relevant papers based on specific features, and we analyse which scenarios and techniques are most frequently considered in the literature. Finally, this survey identifies a number of research challenges, future directions, and areas for further study.
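To make the offloading decision problem concrete, the following is a minimal sketch (ours, not taken from the survey) of tabular Q-learning applied to a binary offload-or-not decision. The two-level channel model, the cost values, and all names are illustrative assumptions; real systems would use far richer state spaces, which is precisely why DRL is attractive.

```python
import random

# Toy sketch: tabular Q-learning for a binary computation-offloading decision.
# States are coarse channel-quality levels (0 = bad, 1 = good); actions are
# 0 = execute locally, 1 = offload to the edge server. Costs are assumptions.

random.seed(0)

N_STATES, N_ACTIONS = 2, 2
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1  # learning rate, discount, exploration

def step(state, action):
    """Return (reward, next_state) for the toy model; rewards are negated costs."""
    if action == 0:                      # local execution: fixed compute cost
        reward = -1.0
    else:                                # offloading: cheap only on a good channel
        reward = -0.2 if state == 1 else -2.0
    next_state = random.randint(0, N_STATES - 1)  # channel varies at random
    return reward, next_state

Q = [[0.0] * N_ACTIONS for _ in range(N_STATES)]
state = 0
for _ in range(5000):
    if random.random() < EPSILON:        # epsilon-greedy exploration
        action = random.randint(0, N_ACTIONS - 1)
    else:
        action = max(range(N_ACTIONS), key=lambda a: Q[state][a])
    reward, next_state = step(state, action)
    # Standard Q-learning temporal-difference update
    Q[state][action] += ALPHA * (reward + GAMMA * max(Q[next_state]) - Q[state][action])
    state = next_state

# Greedy policy per channel state: expect local on a bad channel, offload on a good one.
policy = [max(range(N_ACTIONS), key=lambda a: Q[s][a]) for s in range(N_STATES)]
print(policy)  # → [0, 1]
```

DRL approaches replace the Q table with a neural network so the same update rule can scale to continuous or high-dimensional state descriptions (queue lengths, task sizes, server loads).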
Goal-Oriented Scheduling in Sensor Networks with Application Timing Awareness
Taking inspiration from linguistics, the communications theory community
has recently shown significant interest in pragmatic, or goal-oriented,
communication. In this paper, we tackle the problem of pragmatic
communication with multiple clients with different, and potentially
conflicting, objectives. We capture the goal-oriented aspect through the metric
of Value of Information (VoI), which considers the estimation of the remote
process as well as the timing constraints. However, the most common definition
of VoI is simply the Mean Square Error (MSE) of the whole system state,
regardless of the relevance for a specific client. Our work aims to overcome
this limitation by including different summary statistics, i.e., value
functions of the state, for separate clients, and a diversified query process
on the client side, expressed through the fact that different applications may
request different functions of the process state at different times. A
query-aware Deep Reinforcement Learning (DRL) solution based on statically
defined VoI can outperform naive approaches by 15-20%.
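The distinction the abstract draws can be illustrated with a small sketch (ours, with made-up numbers and function names): whole-state MSE penalizes every component equally, whereas a client-specific VoI measures only the error of the summary statistic that client actually queries.

```python
# Illustrative sketch: whole-state MSE vs. client-specific VoI defined on a
# value function (summary statistic) of the state. Values are assumptions.

true_state = [10.0, 2.0, 7.5, 4.0]   # hypothetical remote process state
estimate   = [9.0, 2.5, 7.5, 5.0]    # receiver-side estimate

def mse(x, y):
    """Mean Square Error over all state components."""
    return sum((a - b) ** 2 for a, b in zip(x, y)) / len(x)

# Client A only queries the maximum component (e.g., the peak load).
def value_fn_max(state):
    return max(state)

# Client B only queries the mean (e.g., the average sensor reading).
def value_fn_mean(state):
    return sum(state) / len(state)

whole_state_voi = mse(true_state, estimate)
client_a_voi = (value_fn_max(true_state) - value_fn_max(estimate)) ** 2
client_b_voi = (value_fn_mean(true_state) - value_fn_mean(estimate)) ** 2

print(whole_state_voi, client_a_voi, client_b_voi)  # → 0.5625 1.0 0.015625
```

Here the same estimate is nearly worthless to client A but almost perfect for client B, which is invisible to the whole-state MSE; a query-aware scheduler can exploit exactly this asymmetry.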
Reinforcement and deep reinforcement learning for wireless Internet of Things: A survey
Nowadays, many research studies and industrial investigations have allowed the integration of the Internet of Things (IoT) in current and future networking applications by deploying a diversity of wireless-enabled devices ranging from smartphones and wearables to sensors, drones, and connected vehicles. The growing number of IoT devices, the increasing complexity of IoT systems, and the large volume of generated data have made the monitoring and management of these networks extremely difficult. Numerous research papers have applied Reinforcement Learning (RL) and Deep Reinforcement Learning (DRL) techniques to overcome these difficulties by building IoT systems with effective and dynamic decision-making mechanisms that deal with incomplete information about their environments. The paper first reviews pre-existing surveys covering the application of RL and DRL techniques in IoT communication technologies and networking. The paper then analyzes the research papers that apply these techniques in wireless IoT to resolve issues related to routing, scheduling, resource allocation, dynamic spectrum access, energy, mobility, and caching. Finally, a discussion of the proposed approaches and their limits is followed by the identification of open issues, to establish grounds for proposing future research directions.
Machine Learning-Aided Operations and Communications of Unmanned Aerial Vehicles: A Contemporary Survey
The ongoing amalgamation of UAV and ML techniques is creating a significant
synergy and empowering UAVs with unprecedented intelligence and autonomy. This
survey aims to provide a timely and comprehensive overview of ML techniques
used in UAV operations and communications and identify the potential growth
areas and research gaps. We emphasise the four key components of UAV operations
and communications to which ML can significantly contribute, namely, perception
and feature extraction, feature interpretation and regeneration, trajectory and
mission planning, and aerodynamic control and operation. We classify the latest
popular ML tools based on their applications to the four components and conduct
gap analyses. This survey also takes a step forward by pointing out significant
challenges in the upcoming realm of ML-aided automated UAV operations and
communications. It is revealed that different ML techniques dominate the
applications to the four key modules of UAV operations and communications.
While there is an increasing trend of cross-module designs, little effort has
been devoted to an end-to-end ML framework, from perception and feature
extraction to aerodynamic control and operation. It is also unveiled that the
reliability and trust of ML in UAV operations and applications require
significant attention before full automation of UAVs and potential cooperation
between UAVs and humans come to fruition.