574 research outputs found
Stacked Auto Encoder Based Deep Reinforcement Learning for Online Resource Scheduling in Large-Scale MEC Networks
An online resource scheduling framework is proposed for minimizing the sum of weighted task latency for all Internet-of-Things (IoT) users, by optimizing the offloading decision, transmission power, and resource allocation in a large-scale mobile-edge computing (MEC) system. Toward this end, a deep reinforcement learning (DRL)-based solution is proposed, which includes the following components. First, a related and regularized stacked autoencoder (2r-SAE) with unsupervised learning is applied to perform data compression and representation for high-dimensional channel quality information (CQI) data, which reduces the state space for DRL. Second, we present an adaptive simulated annealing approach (ASA) as the action search method of DRL, in which an adaptive h-mutation is used to guide the search direction and an adaptive iteration scheme is proposed to enhance search efficiency during the DRL process. Third, a preserved and prioritized experience replay (2p-ER) is introduced to assist the DRL in training the policy network and finding the optimal offloading policy. Numerical results demonstrate that the proposed algorithm can achieve near-optimal performance while significantly decreasing the computational time compared with existing benchmarks.
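The simulated-annealing action search described in this abstract can be illustrated with a heavily simplified sketch over binary offloading decisions. The scoring function, the 30% mutation-rate cap, and all constants below are illustrative assumptions, not the paper's actual 2r-SAE/ASA pipeline:

```python
import math
import random

def sa_action_search(score, n_bits, iters=200, t0=1.0, alpha=0.98, seed=0):
    """Simulated-annealing search over binary offloading decisions.

    `score` maps an action vector (list of 0/1 flags) to a scalar
    reward; higher is better. The number of flipped bits shrinks as
    the temperature cools, loosely mimicking the adaptive mutation
    described in the abstract.
    """
    rng = random.Random(seed)
    current = [rng.randint(0, 1) for _ in range(n_bits)]
    cur_val = score(current)
    best, best_val = current[:], cur_val
    t = t0
    for _ in range(iters):
        # adaptive mutation: flip more bits while the temperature is high
        k = max(1, round(n_bits * 0.3 * t / t0))
        cand = current[:]
        for i in rng.sample(range(n_bits), k):
            cand[i] ^= 1
        val = score(cand)
        # accept improvements always, worse moves with Boltzmann probability
        if val > cur_val or rng.random() < math.exp((val - cur_val) / t):
            current, cur_val = cand, val
            if val > best_val:
                best, best_val = cand[:], val
        t *= alpha
    return best, best_val

# toy score: offloading user i is worthwhile iff weights[i] > 0
weights = [0.9, -0.2, 0.5, -0.7, 0.3]
action, value = sa_action_search(lambda a: sum(w * x for w, x in zip(weights, a)), 5)
```

On this toy score the best achievable value is 1.7 (offload exactly the positive-weight users); the annealer approaches it as the temperature cools.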
Joint Communication and Computation Design in Transmissive RMS Transceiver Enabled Multi-Tier Computing Networks
In this paper, a novel transmissive reconfigurable meta-surface (RMS)
transceiver enabled multi-tier computing network architecture is proposed for
improving computing capability, decreasing computing delay and reducing base
station (BS) deployment cost, in which transmissive RMS equipped with a feed
antenna can be regarded as a new type of multi-antenna system. We formulate a
total energy consumption minimization problem by a joint optimization of
subcarrier allocation, task input bits, time slot allocation, transmit power
allocation and RMS transmissive coefficient while taking into account the
constraints of communication and computing resources. The formulated
problem is non-convex due to the strong coupling of the
optimization variables, and obtaining its optimal solution is NP-hard. To
address this challenge, the block coordinate descent (BCD)
technique is employed to decouple the optimization variables and solve the
problem. Specifically, the joint optimization problem of subcarrier allocation,
task input bits, time slot allocation, transmit power allocation and RMS
transmissive coefficient is divided into three subproblems by applying
BCD. The three decoupled subproblems are then optimized alternately using
successive convex approximation (SCA) and difference-of-convex (DC) programming
until convergence is achieved. Numerical results verify that our proposed
algorithm is superior in reducing total energy consumption compared to other
benchmarks.
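The alternating BCD procedure the abstract describes can be illustrated on a toy two-variable problem; the objective below is an invented stand-in, not the paper's energy-minimization formulation:

```python
def bcd_minimize(iters=50):
    """Block coordinate descent on f(x, y) = (x-1)**2 + (y-2)**2 + x*y.

    Each block update is the exact minimizer of f with the other block
    held fixed (partial derivative set to zero), mirroring how the
    paper alternates over decoupled subproblems until convergence.
    """
    x, y = 0.0, 0.0
    for _ in range(iters):
        x = (2 - y) / 2  # argmin over x: 2*(x-1) + y = 0
        y = (4 - x) / 2  # argmin over y: 2*(y-2) + x = 0
    return x, y
```

Fifty sweeps drive the iterate to (0, 2), the unique solution of the joint first-order conditions 2x + y = 2 and x + 2y = 4.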
Pushing AI to Wireless Network Edge: An Overview on Integrated Sensing, Communication, and Computation towards 6G
Pushing artificial intelligence (AI) from central cloud to network edge has
reached broad consensus in both industry and academia for materializing the
vision of artificial intelligence of things (AIoT) in the sixth-generation (6G)
era. This gives rise to an emerging research area known as edge intelligence,
which concerns the distillation of human-like intelligence from the huge amount
of data scattered at wireless network edge. In general, realizing edge
intelligence corresponds to the process of sensing, communication, and
computation, which are coupled ingredients for data generation, exchange, and
processing, respectively. However, conventional wireless networks design the
sensing, communication, and computation separately in a task-agnostic manner,
which encounters difficulties in accommodating the stringent demands of
ultra-low latency, ultra-high reliability, and high capacity in emerging AI
applications such as autonomous driving. This prompts a new design paradigm of
seamless integrated sensing, communication, and computation (ISCC) in a
task-oriented manner, which comprehensively accounts for the use of the data in
the downstream AI applications. In view of its growing interest, this article
provides a timely overview of ISCC for edge intelligence by introducing its
basic concept, design challenges, and enabling techniques, surveying the
state-of-the-art developments, and shedding light on the road ahead.
Optimal Computational Power Allocation in Multi-Access Mobile Edge Computing for Blockchain
Blockchain has emerged as a decentralized and trustable ledger for recording and storing digital transactions. The mining process of Blockchain, however, incurs a heavy computational workload for miners to solve the proof-of-work puzzle (i.e., a series of hashing computations), which is prohibitive from the perspective of the mobile terminals (MTs). Advanced multi-access mobile edge computing (MEC), which enables the MTs to offload part of the computational workloads (for solving the proof-of-work) to nearby edge servers (ESs), provides a promising approach to address this issue. By offloading the computational workloads via multi-access MEC, the MTs can effectively increase their success probabilities when participating in the mining game and gain the consequent reward (i.e., winning the bitcoin). However, as compensation to the ESs which provide the computational resources, the MTs need to pay the ESs the corresponding resource-acquisition costs. Thus, to investigate the trade-off between obtaining computational resources from the ESs (for solving the proof-of-work) and paying the consequent cost, we formulate an optimization problem in which the MTs determine the computational resources acquired from different ESs, with the objective of maximizing the MTs’ social net-reward in the mining process while keeping fairness among the MTs. In spite of the non-convexity of the formulated problem, we exploit its layered structure and propose efficient distributed algorithms for the MTs to individually determine their optimal computational resources acquired from different ESs. Numerical results are provided to validate the effectiveness of our proposed algorithms and the performance of our proposed multi-access MEC for Blockchain.
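The trade-off the abstract describes, buying hash rate from edge servers versus paying for it, can be sketched with a first-order model in which a terminal's winning probability is its share of total hash rate. The linear cost model and all parameter names are illustrative assumptions, not the paper's exact formulation:

```python
def mining_net_reward(local_hash, acquired, prices, rival_hash, reward):
    """Expected net reward of one mobile terminal in the mining game.

    In proof-of-work mining the probability of winning a block is
    (to first order) the terminal's share of the total hash rate.
    `acquired[i]` is the hash rate bought from edge server i at unit
    price `prices[i]`.
    """
    total_own = local_hash + sum(acquired)
    p_win = total_own / (total_own + rival_hash)
    cost = sum(a * p for a, p in zip(acquired, prices))
    return reward * p_win - cost
```

For example, a terminal with 1 unit of local hash rate buying 1 and 2 units at prices 0.1 and 0.2, against 6 rival units and a block reward of 10, wins with probability 0.4 and nets 4 - 0.5 = 3.5.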
Latency and Reliability Aware Edge Computation Offloading in 5G Networks
Empowered by recent technological advances and driven by the ever-growing population density and needs, the conception of 5G has raised the expectations of what mobile networks are capable of to heights never seen before, promising to unleash a myriad of new business practices and paving the way for a surging number of user equipments to carry out novel service operations. The advent of 5G and networks beyond will hence enable the vision of the Internet of Things (IoT) and the smart city, with ubiquitous and heterogeneous use cases belonging to various verticals operating on a common underlying infrastructure, such as smart healthcare, autonomous driving, and smart manufacturing, while imposing unprecedented Quality of Service (QoS) requirements in terms of latency and reliability, among others. Because modern services such as traffic coordination, industrial processes, and mission-critical applications must perform heavy workload computations on the collected input, IoT devices such as cameras, sensors, and Cyber-Physical Systems (CPSs), which have limited energy and processing capabilities, are put under an unusual strain to seamlessly carry out the required service computations. While offloading the devices' workload to cloud data centers with Mobile Cloud Computing (MCC) remains a possible alternative, one which also brings high computation reliability, the latency incurred by this approach would prevent the services' QoS requirements from being satisfied, in addition to elevating the load in the network core and backhaul, rendering MCC an inadequate solution for handling the computations required by 5G services. In light of this development, Multi-access Edge Computing (MEC) has been proposed as a cutting-edge technology for realizing low-latency computation offloading by bringing the cloud to the vicinity of end-user devices, as processing units co-located within base stations leveraging virtualization.
Although it promises to satisfy the stringent latency requirements of these services, realizing the edge-cloud solution is coupled with various challenges, such as the edge servers' restricted capacity, their reduced processing reliability, the IoT devices' limited offloading energy, the often weak quality of the wireless offloading channels, the difficulty of adapting to dynamic environment changes and to under-served networks, and the Network Operators' (NOs') cost-efficiency concerns. In light of those conditions, the NOs are looking to devise efficient, innovative computation offloading schemes by leveraging novel technologies and architectures, to guarantee the seamless provisioning of modern services with their stringent latency and reliability QoS requirements while ensuring the effective utilization of the network's and devices' available resources. Leveraging a hierarchical arrangement of MEC, with second-tier edge servers co-located within aggregation nodes and macro-cells, can expand the edge network's capability, while utilizing Unmanned Aerial Vehicles (UAVs) to provision the MEC service via UAV-mounted cloudlets can increase the availability, flexibility, and scalability of the computation offloading solution. Moreover, aiding the MEC system with UAVs and Intelligent Reflecting Surfaces (IRSs) can improve the computation offloading performance by enhancing the conditions of the wireless communication channels. By effectively leveraging those novel technologies while tackling their challenges, the edge-cloud paradigm will bring a tremendous advancement to 5G networks and beyond, opening the door to all sorts of modern and futuristic services.
In this dissertation, we attempt to address key challenges linked to realizing the vision of low-latency, high-reliability edge computation offloading in modern networks while exploring the aid of multiple 5G network technologies. Toward that end, we provide novel contributions related to the allocation of network and device resources as well as the optimization of other offloading parameters, thereby efficiently utilizing the underlying infrastructure so as to enable energy- and cost-efficient computation offloading schemes, by leveraging several customized solutions and optimization techniques. In particular, we first tackle the computation offloading problem considering a multi-tier MEC with a deployed second-tier edge-cloud, where we optimize its use through proposed low-complexity algorithms, so as to achieve an energy- and cost-efficient solution that guarantees the services' latency requirements. Due to the significant advantage of operating MEC in heterogeneous networks, we extend the scenario to a network of small-cells with the second-tier edge server co-located within the macro-cell, reachable through a wireless backhaul, where we optimize the use of the macro-cell server along with the other offloading parameters through a proposed customized algorithm based on the Successive Convex Approximation (SCA) technique. Then, given the UAVs' considerable ability to expand the capabilities of cellular networks and MEC systems, we study the latency- and reliability-aware optimized positioning and use of UAV-mounted cloudlets for computation offloading through two planning and operational problems while considering task redundancy, and propose customized solutions for solving those problems.
Finally, given the IRSs' ability to also enhance channel conditions through the tuning of their passive reflecting elements, we extend the latency- and reliability-aware study to a scenario of an IRS-aided MEC system considering both single-user and multi-user OFDMA cases, where we explore the optimized use of the IRSs in order to reveal their role in reducing the UEs' offloading energy consumption and saving network resources, through proposed customized solutions based on the SCA approach and the SDR technique.
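The SCA technique this dissertation applies repeatedly can be illustrated on a one-dimensional difference-of-convex toy problem; the objective below is an invented stand-in, not any of the dissertation's actual formulations:

```python
def sca_minimize(x0=1.5, iters=60):
    """Successive convex approximation for the DC objective
    f(x) = x**4 - x**2 (convex x**4 minus convex x**2).

    At each step the concave part -x**2 is upper-bounded by its
    tangent at the current iterate, and the resulting convex
    surrogate x**4 - 2*x_k*x + x_k**2 is minimized in closed form.
    """
    x = x0
    for _ in range(iters):
        # argmin of the surrogate: 4*x**3 - 2*x_k = 0  ->  x = (x_k/2)**(1/3)
        x = (x / 2) ** (1.0 / 3.0)
    return x
```

The iterates converge to the positive stationary point of f, x = 1/sqrt(2), where 4x^3 - 2x = 0, illustrating SCA's hallmark of reaching a stationary point of the original non-convex problem.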
Machine Learning Algorithms for Provisioning Cloud/Edge Applications
Reinforcement Learning (RL), in which an agent is trained to make the most
favourable decisions in the long run, is an established technique in artificial intelligence. Its
popularity has increased in the recent past, largely due to the development of deep neural
networks spawning deep reinforcement learning algorithms such as Deep Q-Learning. The
latter have been used to solve previously insurmountable problems, such as playing the
famed game of “Go” that previous algorithms could not. Many such problems suffer from the
curse of dimensionality, in which the sheer number of possible states is so overwhelming
that it is impractical to explore every possible option.
While these recent techniques have been successful, they may not be strictly necessary
or practical for some applications such as cloud provisioning. In these situations, the
action space is not as vast, and the workload data required to train such systems is not
as widely shared, as it is considered commercially sensitive by the Application Service
Provider (ASP). Given that provisioning decisions evolve over time in response to
incident workloads, they fit the sequential decision-process problem that legacy RL
was designed to solve. However, because of the high correlation of time-series data, states
are not independent of each other and the legacy Markov Decision Processes (MDPs)
have to be cleverly adapted to create robust provisioning algorithms.
As the first contribution of this thesis, we exploit the knowledge of both the application
and configuration to create an adaptive provisioning system leveraging stationary Markov
distributions. We then develop algorithms that, with neither application nor configuration
knowledge, solve the underlying Markov Decision Process (MDP) to create provisioning
systems. Our Q-Learning algorithms factor in the correlation between states and the
consequent transitions between them to create provisioning systems that do not only
adapt to workloads, but can also exploit similarities between them, thereby reducing
the retraining overhead. Our algorithms also exhibit convergence in fewer learning steps
given that we restructure the state and action spaces to avoid the curse of dimensionality
without the need for the function approximation approach taken by deep Q-Learning
systems.
A crucial use-case of future networks will be the support of low-latency applications
involving highly mobile users. With these in mind, the European Telecommunications Standards Institute (ETSI) has proposed the Multi-access Edge Computing (MEC)
architecture, in which computing capabilities can be located close to the network edge,
where the data is generated. Provisioning for such applications therefore entails migrating
them to the most suitable location on the network edge as the users move. In this thesis,
we also tackle this type of provisioning by considering vehicle platooning or Cooperative
Adaptive Cruise Control (CACC) on the edge. We show that our Q-Learning algorithm
can be adapted to minimize the number of migrations required to effectively run such
an application on MEC hosts, which may also be subject to traffic from other competing
applications.This work has been supported by IMDEA Networks InstitutePrograma de Doctorado en IngenierĂa Telemática por la Universidad Carlos III de MadridPresidente: Antonio Fernández Anta.- Secretario: Diego Perino.- Vocal: Ilenia Tinnirell
Computing on the Edge of the Network
To enable fifth generation cellular communication network (5G) systems, energy efficient architectures are required that can provide a reliable service platform for the delivery of 5G services and beyond. Device Enhanced Edge Computing is a derivative of Multi-Access Edge Computing (MEC), which provides computing and storage resources directly on the end devices. The importance of this concept is evidenced by the increasing demands of ultra-low-latency, computationally intensive applications that overwhelm the MEC server alone and the wireless channel. This dissertation presents a computational offloading framework considering energy, mobility and incentives in a multi-user, multi-task device-enhanced MEC system that takes into account task interdependence and application latency requirements.
A Bilevel Optimization Approach for Joint Offloading Decision and Resource Allocation in Cooperative Mobile Edge Computing
This paper studies a multi-user cooperative mobile edge computing offloading (CoMECO) system in a multi-user interference environment, in which delay-sensitive tasks may be executed on local devices, cooperative devices, or the primary MEC server. In this system, we jointly optimize the offloading decision and computation resource allocation to minimize the total energy consumption of all mobile users under the delay constraint. If this problem is solved directly, the offloading decision and computation resource allocation are generated separately, even though they are closely coupled; their dependency is therefore not well considered, leading to poor performance. We transform this problem into a bilevel optimization problem, in which the offloading decision is generated in the upper level, and the optimal allocation of computation resources is then obtained in the lower level based on the given offloading decision. In this way, the dependency between the offloading decision and the computation resource allocation can be fully taken into account. Subsequently, a bilevel optimization approach, called BiJOR, is proposed. In BiJOR, candidate modes are first pruned to reduce the number of infeasible offloading decisions. Afterward, the upper-level optimization problem is solved by ant colony system (ACS). Furthermore, a sorting strategy is incorporated into ACS to construct feasible offloading decisions with higher probability, and a local search operator is designed in ACS to accelerate convergence. The lower-level optimization problem is solved by the monotonic optimization method. In addition, BiJOR is extended to deal with a complex scenario with channel selection. Extensive experiments are carried out to investigate the performance of BiJOR on two sets of instances with up to 400 mobile users.
The experimental results demonstrate the effectiveness of BiJOR and the superiority of the CoMECO system.
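The bilevel structure of BiJOR can be sketched on a toy instance in which the upper level enumerates binary offloading decisions (a brute-force stand-in for the ant colony search) and the lower level allocates server cycles in closed form. The energy and delay models and all constants are illustrative only, not the paper's formulation:

```python
from itertools import product

def bilevel_offload_toy(work, C, f_local, D, k=1.0, e_tx=0.3):
    """Toy bilevel offloading: pick the binary offloading decision that
    minimizes total energy subject to a per-task delay bound D.

    Lower level: offloaded tasks share server capacity C in proportion
    to their workloads (f_i = C * w_i / sum(w)), so they all finish at
    the common delay sum(w)/C. Local tasks run at speed f_local with a
    cubic-in-frequency energy model k * w * f_local**2; offloading
    costs a transmission energy of e_tx per unit of work.
    """
    best = None
    for decision in product((0, 1), repeat=len(work)):
        offloaded = [w for w, d in zip(work, decision) if d]
        delay_off = sum(offloaded) / C if offloaded else 0.0
        feasible = delay_off <= D and all(
            w / f_local <= D for w, d in zip(work, decision) if not d)
        if not feasible:
            continue
        energy = sum(e_tx * w if d else k * w * f_local ** 2
                     for w, d in zip(work, decision))
        if best is None or energy < best[1]:
            best = (decision, energy)
    return best
```

With workloads [1.0, 2.0], server capacity 10, local speed 1.0, and delay bound 1.5, keeping the heavy task local violates the bound, so the minimum-energy feasible decision offloads both tasks at energy 0.9.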