
    Five Facets of 6G: Research Challenges and Opportunities

    Whilst fifth-generation (5G) systems are being rolled out across the globe, researchers have turned their attention to the exploration of radical next-generation solutions. At this early evolutionary stage we survey five main research facets of this field, namely Facet 1: next-generation architectures, spectrum and services; Facet 2: next-generation networking; Facet 3: Internet of Things (IoT); Facet 4: wireless positioning and sensing; and Facet 5: applications of deep learning in 6G networks. In this paper, we provide a critical appraisal of the literature on promising techniques, ranging from the associated architectures and networking to applications and designs. We portray a plethora of heterogeneous architectures relying on cooperative hybrid networks supported by diverse access and transmission mechanisms. The vulnerabilities of these techniques are also carefully considered in order to highlight the most promising future research directions. Additionally, we list a rich suite of learning-driven optimization techniques. We conclude by observing the evolutionary paradigm shift that has taken place from pure single-component bandwidth-efficiency, power-efficiency or delay optimization towards multi-component designs, as exemplified by the twin-component ultra-reliable low-latency mode of the 5G system. We advocate a further evolutionary step towards multi-component Pareto optimization, which requires the exploration of the entire Pareto front of all optimal solutions, where none of the components of the objective function may be improved without degrading at least one of the other components.
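    To make the closing point concrete, the sketch below illustrates Pareto dominance and Pareto-front filtering for a multi-component design objective. The three cost components and the candidate values are invented for illustration and are not taken from the paper.

```python
# Minimal sketch (illustrative only, not the paper's method): Pareto dominance
# and Pareto-front filtering for a multi-component objective, e.g.
# (power, delay, bandwidth penalty), where every component is to be minimised.

def dominates(a, b):
    """a dominates b if it is no worse in every component and strictly
    better in at least one (all components minimised)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(solutions):
    """Keep the non-dominated solutions: for these, no component can be
    improved without degrading at least one other component."""
    return [s for s in solutions
            if not any(dominates(t, s) for t in solutions if t is not s)]

# Hypothetical candidate designs as (power, delay, bandwidth penalty) tuples.
candidates = [(1.0, 5.0, 0.2), (2.0, 3.0, 0.3), (1.5, 4.0, 0.1), (2.5, 6.0, 0.4)]
print(pareto_front(candidates))  # the last design is dominated and dropped
```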

    A comprehensive survey on reinforcement-learning-based computation offloading techniques in Edge Computing Systems

    In recent years, the number of embedded computing devices connected to the Internet has increased exponentially. At the same time, new applications are becoming more complex and computationally demanding, which can be a problem for devices, especially when they are battery powered. In this context, the concepts of computation offloading and edge computing, which allow applications to be fully or partially offloaded and executed on servers close to the devices in the network, have arisen and received increasing attention. The design of algorithms that decide which applications or tasks should be offloaded, and where to execute them, is therefore crucial. One option that has been gaining momentum lately is the use of Reinforcement Learning (RL) and, in particular, Deep Reinforcement Learning (DRL), which enables learning optimal or near-optimal offloading policies adapted to each particular scenario. Although the use of RL techniques to solve the computation offloading problem in edge systems has been covered by some surveys, it has been done in a limited way. For example, some surveys have analysed the use of RL to solve various networking problems, with computation offloading being one of them, but not the primary focus. Other surveys, on the other hand, have reviewed techniques to solve the computation offloading problem, RL being just one of the approaches considered. To the best of our knowledge, this is the first survey that specifically focuses on the use of RL and DRL techniques for computation offloading in edge computing systems. We present a comprehensive and detailed survey, in which we analyse and classify the research papers in terms of use cases, network and edge computing architectures, objectives, RL algorithms, decision-making approaches, and the time-varying characteristics considered in the analysed scenarios. In particular, we include a series of tables to help researchers identify relevant papers based on specific features, and we analyse which scenarios and techniques are most frequently considered in the literature. Finally, this survey identifies a number of research challenges, future directions and areas for further study.
    Funding: Consejería de Educación de la Junta de Castilla y León and FEDER (VA231P20); Ministerio de Ciencia e Innovación and Agencia Estatal de Investigación (projects PID2020-112675RB-C42, PID2021-124463OBI00 and RED2018-102585-T, funded by MCIN/AEI/10.13039/501100011033).
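    As an illustration of the kind of RL-based offloading decision the surveyed papers study, here is a minimal tabular Q-learning sketch for a binary local-vs-offload choice. The state space, cost model and all constants are assumptions made for this toy example; they are not taken from any surveyed work.

```python
# Hypothetical toy model (not from any surveyed paper): tabular Q-learning
# for a binary offloading decision. State = (task-size class, channel-quality
# class); action 0 = execute locally, action 1 = offload to an edge server.
import random
from collections import defaultdict

ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1   # learning rate, discount, exploration
Q = defaultdict(lambda: [0.0, 0.0])     # Q[state] -> [cost(local), cost(offload)]

def cost(state, action):
    """Assumed latency cost: big tasks are cheap to offload on a good
    channel; small tasks are cheap to run locally."""
    size, channel = state
    return float(size) if action == 0 else size / (channel + 1) + 0.5

def step(state):
    # Epsilon-greedy over expected discounted cost (lower is better).
    if random.random() < EPSILON:
        action = random.randrange(2)
    else:
        action = min((0, 1), key=lambda a: Q[state][a])
    next_state = (random.randrange(1, 4), random.randrange(3))  # random arrivals
    # Q-learning update, written in cost-minimisation form.
    target = cost(state, action) + GAMMA * min(Q[next_state])
    Q[state][action] += ALPHA * (target - Q[state][action])
    return next_state

state = (1, 0)
for _ in range(20000):
    state = step(state)
policy = {s: ("local" if q[0] <= q[1] else "offload") for s, q in sorted(Q.items())}
print(policy)  # e.g. small task on a bad channel -> local; big task on a good channel -> offload
```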

    Internet of Vehicles and Real-Time Optimization Algorithms: Concepts for Vehicle Networking in Smart Cities

    Achieving sustainable freight transport and citizens’ mobility operations in modern cities is becoming a critical issue for many governments. By analyzing big data streams generated through IoT devices, city planners now have the possibility to optimize traffic and mobility patterns. IoT combined with innovative transport concepts as well as emerging mobility modes (e.g., ridesharing and carsharing) constitutes a new paradigm in sustainable and optimized traffic operations in smart cities. Still, these are highly dynamic scenarios, which are also subject to a high degree of uncertainty. Hence, factors such as real-time optimization and re-optimization of routes, stochastic travel times, and evolving customer requirements and traffic status also have to be considered. This paper discusses the main challenges associated with Internet of Vehicles (IoV) and vehicle networking scenarios, identifies the underlying optimization problems that need to be solved in real time, and proposes an approach that combines the use of IoV with parallelization approaches. To this end, agile optimization and distributed machine learning are envisaged as the best candidate algorithms to develop efficient transport and mobility systems.
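    To illustrate the real-time re-optimization idea, the sketch below re-runs a fast constructive routing heuristic each time fresh travel-time readings arrive. The network, travel times and perturbation model are invented placeholders; this is only in the spirit of agile optimization (fast heuristics re-run as data changes), not the authors' actual method.

```python
# Minimal sketch (assumed setup): a re-optimisation loop where a cheap
# heuristic is re-run on every batch of updated IoV travel-time readings,
# so the planned route always reflects the latest traffic status.
import random

def nearest_neighbour_route(depot, stops, travel_time):
    """Fast constructive heuristic: always drive to the closest pending stop."""
    route, current, pending = [depot], depot, set(stops)
    while pending:
        nxt = min(pending, key=lambda s: travel_time[(current, s)])
        route.append(nxt)
        pending.discard(nxt)
        current = nxt
    return route

stops = ["A", "B", "C", "D"]
nodes = ["depot"] + stops
travel_time = {(u, v): random.uniform(5, 20) for u in nodes for v in nodes if u != v}

for epoch in range(3):  # each epoch = a new batch of sensor readings
    # Stochastic travel times: perturb every edge to mimic evolving traffic.
    travel_time = {e: t * random.uniform(0.7, 1.3) for e, t in travel_time.items()}
    print(epoch, nearest_neighbour_route("depot", stops, travel_time))
```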