
    In-Network View Synthesis for Interactive Multiview Video Systems

    To enable interactive multiview video systems with minimal view-switching delay, multiple camera views are sent to users and serve as reference images for synthesizing additional virtual views via depth-image-based rendering. In practice, however, bandwidth constraints may restrict the number of reference views sent to clients per time unit, which in turn may limit the quality of the synthesized viewpoints. We argue that reference view selection should ideally be performed close to the users, and we study the problem of in-network reference view synthesis such that navigation quality is maximized at the clients. We consider a distributed cloud network architecture in which data stored in a main cloud is delivered to end users with the help of cloudlets, i.e., resource-rich proxies close to the users. To satisfy last-hop bandwidth constraints from the cloudlet to the users, a cloudlet re-samples the viewpoints of the 3D scene into a discrete set of views (a combination of received camera views and synthesized virtual views) that serve as references for the synthesis of additional virtual views at the client. This in-network synthesis leads to better viewpoint sampling under a bandwidth constraint than simple selection of camera views, but it may carry a distortion penalty in the cloudlet-synthesized reference views. We therefore cast a new reference view selection problem in which the best subset of views is the one that minimizes distortion over a user-defined view navigation window under transmission bandwidth constraints. We show that the view selection problem is NP-hard and propose an effective polynomial-time algorithm using dynamic programming to solve it. Simulation results confirm the performance gain offered by virtual view synthesis in the network.
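    A minimal sketch of how such a selection could be organized as a polynomial-time dynamic program (an illustration of the general approach, not the paper's exact algorithm): candidate reference views are ordered along the camera line, and dist(j, i) is assumed to return the aggregate synthesis distortion of the viewpoints lying between references j and i when those two are used as the left/right references.

def select_references(n_views, budget, dist):
    """Choose at most `budget` of the `n_views` ordered candidates so the
    total synthesis distortion over the navigation window is minimized.
    The first and last views are forced to be references so that every
    intermediate viewpoint has references on both sides."""
    INF = float("inf")
    # dp[k][i]: minimum distortion covering viewpoints 0..i with k
    # references, the k-th (last) one being view i.
    dp = [[INF] * n_views for _ in range(budget + 1)]
    prev = [[-1] * n_views for _ in range(budget + 1)]
    dp[1][0] = 0.0  # the leftmost view is always selected
    for k in range(2, budget + 1):
        for i in range(1, n_views):
            for j in range(i):  # candidate previous reference
                if dp[k - 1][j] == INF:
                    continue
                cost = dp[k - 1][j] + dist(j, i)
                if cost < dp[k][i]:
                    dp[k][i] = cost
                    prev[k][i] = j
    # The rightmost view must be the final reference; take the best k.
    best_k = min(range(2, budget + 1), key=lambda k: dp[k][n_views - 1])
    selected, i, k = [], n_views - 1, best_k
    while i != -1:  # backtrack the selected set
        selected.append(i)
        i, k = prev[k][i], k - 1
    return sorted(selected), dp[best_k][n_views - 1]

# Toy usage: distortion grows linearly with the gap between references.
print(select_references(8, budget=4, dist=lambda j, i: float(i - j - 1)))

    The sketch runs in O(budget x n_views^2), i.e., polynomial time, consistent with the abstract's claim.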

    Edge Offloading in Smart Grid

    The energy transition supports the shift towards more sustainable energy alternatives, paving the way towards decentralized smart grids in which energy is generated closer to the point of use. Decentralized smart grids foresee novel data-driven, low-latency applications for improving resilience and responsiveness, such as peer-to-peer energy trading, microgrid control, fault detection, and demand response. However, traditional cloud-based smart grid architectures cannot meet the requirements of these emerging applications, such as low latency and high reliability; thus, alternative architectures such as edge, fog, or hybrid models need to be adopted. Moreover, edge offloading can play a pivotal role in next-generation smart grid AI applications because it enables efficient utilization of computing resources and addresses the challenges of the increasing data generated by IoT devices, optimizing response time, energy consumption, and network performance. However, a comprehensive overview of the current state of research is needed to support sound decisions regarding the offloading of energy-related applications from the cloud to the fog or edge, with a focus on open challenges and potential impacts in the smart grid. In this paper, we delve into smart grid and computational distribution architectures, including edge-fog-cloud models, orchestration architecture, and serverless computing, and analyze the decision-making variables and optimization algorithms used to assess the efficiency of edge offloading. Finally, the work contributes to a comprehensive understanding of edge offloading in the smart grid, providing a SWOT analysis to support decision making.
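    As a concrete illustration of the kind of decision-making variables such surveys analyze, the sketch below compares local execution against edge offloading for a single smart-grid task under a simple linear latency/energy model; all names and parameter values here are hypothetical, not taken from the paper.

from dataclasses import dataclass

@dataclass
class Task:
    cycles: float     # CPU cycles the task needs
    data_bits: float  # input data to upload if offloaded

def offload_decision(task, f_local_hz, f_edge_hz, uplink_bps,
                     p_compute_w, p_tx_w, latency_budget_s):
    """Return ('local'|'edge', latency_s, device_energy_J)."""
    t_local = task.cycles / f_local_hz
    e_local = p_compute_w * t_local
    t_edge = task.data_bits / uplink_bps + task.cycles / f_edge_hz
    e_edge = p_tx_w * (task.data_bits / uplink_bps)  # device pays only for transmission
    options = [("local", t_local, e_local), ("edge", t_edge, e_edge)]
    # Prefer options meeting the latency budget; among those, save energy.
    feasible = [o for o in options if o[1] <= latency_budget_s] or options
    return min(feasible, key=lambda o: o[2])

# Toy fault-detection task: 2e9 cycles over 4 Mb of sensor data.
print(offload_decision(Task(cycles=2e9, data_bits=4e6),
                       f_local_hz=1e9, f_edge_hz=8e9, uplink_bps=20e6,
                       p_compute_w=2.0, p_tx_w=0.5, latency_budget_s=1.0))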

    Resource allocation in mobile edge cloud computing for data-intensive applications

    Rapid advancement in the mobile telecommunications industry has motivated the development of mobile applications in a wide range of social and scientific domains. However, mobile computing (MC) platforms still have several constraints, such as limited computation resources, short battery life, and high sensitivity to network capabilities. To overcome the limitations of mobile computing and benefit from the huge advancement in mobile telecommunications and the rapid evolution of distributed resources, mobile-aware computing models such as mobile cloud computing (MCC) and mobile edge computing (MEC) have been proposed. The main problem is to decide on an application execution plan while satisfying quality of service (QoS) requirements and the current status of the network and device energy. However, the role of application data in offloading optimisation has not been studied thoroughly, particularly with respect to how data size and distribution impact application offloading. This problem can be referred to as data-intensive mobile application offloading optimisation. To address it, this thesis presents novel optimisation frameworks, techniques, and algorithms for mobile application resource allocation in mobile-aware computing environments, designed to provide optimised solutions for scheduling data-intensive mobile applications. Experimental results show the ability of the proposed tools to optimise the scheduling and execution of data-intensive applications in various computing environments to meet application QoS requirements. Furthermore, the results clearly show the significant influence of the data size parameter on the scheduling of mobile application execution. In addition, the thesis provides an analytical investigation of mobile-aware computing environments for a certain mobile application type; the investigation provides performance analysis to help users decide on target computation resources based on application structure, input data, and mobile network status.
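    The role of data size can be illustrated with a back-of-the-envelope model (hypothetical parameters, not the thesis's actual framework): offloading pays off only while the transfer time of the input data stays below the computation time saved by the faster remote resource.

def offloading_speedup(data_mb, cycles, f_mobile_hz, f_remote_hz, bw_mbps):
    """Ratio of local execution time to offloaded execution time
    (upload plus remote compute); result download and queuing ignored."""
    t_mobile = cycles / f_mobile_hz
    t_offload = (data_mb * 8) / bw_mbps + cycles / f_remote_hz
    return t_mobile / t_offload

# The same computation stops being worth offloading once the input
# data grows past the break-even point.
for data_mb in (1, 10, 100, 1000):
    s = offloading_speedup(data_mb, cycles=5e9,
                           f_mobile_hz=1e9, f_remote_hz=20e9, bw_mbps=50)
    print(f"{data_mb:>5} MB -> speedup {s:.2f}x")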

    Computation Offloading and Task Scheduling on Network Edge

    The Fifth-Generation (5G) networks facilitate the evolution of communication systems and accelerate a revolution in the Information Technology (IT) field. In the 5G era, wireless networks are anticipated to provide connectivity for billions of Mobile User Devices (MUDs) around the world and to support a variety of innovative use cases, such as autonomous driving, ubiquitous Internet of Things (IoT), and the Internet of Vehicles (IoV). These novel use cases, however, usually incorporate compute-intensive applications, which generate enormous computing service demands with diverse and stringent service requirements. In particular, autonomous driving calls for prompt data processing for safety-related applications, IoT nodes deployed in remote areas need energy-efficient computing given limited on-board energy, and vehicles require low-latency computing for IoV applications in a highly dynamic network.

    To support these emerging computing service demands, Mobile Edge Computing (MEC), as a cutting-edge technology in 5G, utilizes computing resources on the network edge to provide computing services for MUDs within a Radio Access Network (RAN). The primary benefits of MEC can be described from two perspectives. From the perspective of MUDs, MEC enables low-latency and energy-efficient computing by allowing MUDs to offload their computation tasks to proximal edge servers, which are installed in access points such as cellular base stations, Road-Side Units (RSUs), and Unmanned Aerial Vehicles (UAVs). From the perspective of network operators, MEC allows a large amount of computing data to be processed on the network edge, thereby alleviating backhaul congestion.

    MEC is thus a promising technology to support the computing demands of novel 5G applications within the RAN. The key issue is to maximize the computation capability of the network edge to meet the diverse service requirements of these applications in dynamic network environments. The main technical challenges are: 1) how an edge server schedules its limited computing resources to optimize the Quality-of-Experience (QoE) in autonomous driving; 2) how computation loads are balanced between the edge server and IoT nodes to enable energy-efficient computing service provisioning; and 3) how multiple edge servers coordinate their computing resources to enable seamless and reliable computing services for high-mobility vehicles in IoV. In this thesis, we develop efficient computing resource management strategies for MEC, including computation offloading and task scheduling, to address these three technical challenges.

    First, we study computation task scheduling to support real-time applications, such as localization and obstacle avoidance, for autonomous driving. In the considered scenario, autonomous vehicles periodically sense the environment, offload sensor data to an edge server for processing, and receive computing results from the edge server. Due to mobility and computing latency, a vehicle travels a certain distance between the instant of offloading its sensor data and the instant of receiving the computing result. Our objective is to design a scheduling scheme for the edge server that minimizes this traveled distance. The idea is to determine the processing order according to individual vehicle mobility and the computation capability of the edge server. We formulate a Restless Multi-Armed Bandit (RMAB) problem, design a Whittle index-based stochastic scheduling scheme, and determine the index using a Deep Reinforcement Learning (DRL) method. The proposed scheduling scheme avoids the time-consuming policy exploration common in DRL scheduling approaches and makes effective decisions with low complexity. Extensive simulation results demonstrate that, with the proposed index-based scheme, the edge server can deliver computing results to vehicles promptly while adapting to time-variant vehicle mobility.
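    A minimal sketch of the index-based scheduling loop (illustrative only: in the thesis the Whittle index is learned with DRL, whereas here whittle_index is a hypothetical stand-in function): in each slot, the edge server serves the waiting vehicle with the largest index.

def schedule(vehicles, whittle_index, n_slots):
    """vehicles: dict id -> state (e.g. speed, waiting time).
    Serves one vehicle per slot, greedily by index; in a full RMAB the
    remaining states would evolve between slots and the indices would
    be recomputed, which is omitted here for brevity."""
    order = []
    for slot in range(n_slots):
        if not vehicles:
            break
        # Serve the vehicle whose traveled-distance penalty grows fastest
        # if its sensor data keeps waiting.
        vid = max(vehicles, key=lambda v: whittle_index(vehicles[v]))
        order.append((slot, vid))
        vehicles.pop(vid)
    return order

# Toy index: faster vehicles that have waited longer get priority.
index = lambda s: s["speed_mps"] * s["wait_s"]
fleet = {"car1": {"speed_mps": 30, "wait_s": 0.2},
         "car2": {"speed_mps": 10, "wait_s": 0.5},
         "car3": {"speed_mps": 25, "wait_s": 0.4}}
print(schedule(fleet, index, n_slots=3))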
    Second, we study energy-efficient computation offloading and task scheduling for an edge server that provisions computing services for IoT nodes in remote areas. In the considered scenario, a UAV is equipped with computing resources and plays the role of an aerial edge server that collects and processes the computation tasks offloaded by ground MUDs. Given the service requirements of MUDs, we aim to maximize UAV energy efficiency by jointly optimizing the UAV trajectory, the user transmit power, and computation task scheduling. The resulting optimization problem is a nonconvex fractional program, and the Dinkelbach algorithm and the Successive Convex Approximation (SCA) technique are adopted to solve it. Furthermore, we decompose the problem into multiple subproblems for distributed and parallel solving. To cope with the case in which knowledge of user mobility is limited, we apply a spatial distribution estimation technique to predict the locations of ground users so that the proposed approach remains valid. Simulation results demonstrate the effectiveness of the proposed approach in maximizing the energy efficiency of the UAV.

    Third, we study collaboration among multiple edge servers in computation offloading and task scheduling to support computing services in IoV. In the considered scenario, vehicles traverse the coverage of edge servers and offload their tasks to their proximal edge servers. We develop a collaborative edge computing framework to reduce computing service latency and alleviate computing service interruption due to the high mobility of vehicles: 1) a Task Partition and Scheduling Algorithm (TPSA) is proposed to schedule the execution order of the tasks offloaded to the edge servers given a computation offloading strategy; and 2) an artificial intelligence-based collaborative computing approach is developed to determine the task offloading, computing, and result delivery policy for vehicles. Specifically, the offloading and computing problem is formulated as a Markov decision process, and a DRL technique, i.e., deep deterministic policy gradient, is adopted to find the optimal solution in a complex urban transportation network. With the developed framework, the service cost, which includes computing service latency and a service failure penalty, is minimized via optimal computation task scheduling and edge server selection. Simulation results show that the proposed AI-based collaborative computing approach can adapt to a highly dynamic environment with outstanding performance.

    In summary, we investigate computing resource management to optimize the QoE of MUDs in the coverage of an edge server, to improve energy efficiency for an aerial edge server while provisioning computing services, and to coordinate computing resources among edge servers to support MUDs with high mobility. The proposed approaches and theoretical results contribute to computing resource management for MEC in 5G and beyond.
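    For the second study, the fractional energy-efficiency objective is handled with the Dinkelbach algorithm. The sketch below shows the generic Dinkelbach iteration on a toy one-dimensional problem; in the thesis the inner subproblem is itself nonconvex and solved with SCA, for which solve_sub is a stand-in here.

def dinkelbach(N, D, solve_sub, lam0=0.0, tol=1e-6, max_iter=50):
    """Maximize N(x)/D(x) (with D > 0) by iterating on the parameterized
    subproblem max_x N(x) - lam * D(x) until its optimum reaches ~0."""
    lam = lam0
    for _ in range(max_iter):
        x = solve_sub(lam)            # argmax_x N(x) - lam * D(x)
        gap = N(x) - lam * D(x)
        lam = N(x) / D(x)             # updated efficiency estimate
        if abs(gap) < tol:
            break
    return x, lam

# Toy 1-D example solved by grid search standing in for the inner solver.
grid = [i / 1000 for i in range(2001)]  # x in [0, 2]
N = lambda x: 1 + 2 * x - x ** 2        # "bits computed"
D = lambda x: 1 + x                     # "energy spent"
solve_sub = lambda lam: max(grid, key=lambda x: N(x) - lam * D(x))
print(dinkelbach(N, D, solve_sub))      # converges near x = sqrt(2) - 1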
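    For the third study, a simplified version of the service cost (computing latency plus a failure penalty) can illustrate edge server selection. This is a hand-written heuristic for illustration, not the thesis's DDPG policy: a server is penalized when its expected service time exceeds the vehicle's dwell time in its coverage.

def service_cost(server, task_cycles, dwell_s, penalty_s=5.0):
    """Computing latency plus a failure penalty charged when the expected
    service time exceeds the vehicle's dwell time in the server's coverage
    (i.e. the result could not be delivered before the vehicle leaves)."""
    t_service = task_cycles / server["cpu_hz"] + server["queue_s"]
    return t_service + (penalty_s if t_service > dwell_s else 0.0)

def pick_server(servers, task_cycles, dwell_by_id):
    return min(servers, key=lambda s: service_cost(s, task_cycles,
                                                   dwell_by_id[s["id"]]))

servers = [{"id": "rsu1", "cpu_hz": 10e9, "queue_s": 0.8},
           {"id": "rsu2", "cpu_hz": 4e9, "queue_s": 0.1}]
dwell = {"rsu1": 1.0, "rsu2": 2.0}  # seconds the vehicle stays in coverage
print(pick_server(servers, task_cycles=6e9, dwell_by_id=dwell))  # -> rsu2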