
    Enabling 5G Edge Native Applications


    Simple Energy Aware Scheduler: An Empirical Evaluation

    Mobile devices have evolved from single-purpose devices, such as mobile phones, into general-purpose multi-core computers with considerable unused capabilities. Therefore, several researchers have considered harnessing the power of these battery-powered devices for distributed computing. Despite their ever-growing capabilities, relying on batteries as the power source represents a major challenge for applying traditional distributed computing techniques. In particular, researchers have aimed at using mobile devices as resources for executing computationally intensive tasks. Different job scheduling algorithms have been proposed with this aim, but many of them require information that is unavailable or difficult to obtain in real-life environments, such as how much energy a job would require to finish. In this context, Simple Energy Aware Scheduler (SEAS) is a scheduling technique for computationally intensive Mobile Grids that only requires easily accessible information. It was proposed in 2010 and has been the basis for a range of research work. Despite being described as easily implementable in real-life scenarios, SEAS and the works that improve upon it have always been evaluated using simulations. In this work, we present a distributed computing platform for mobile devices that supports SEAS, together with an empirical evaluation of the SEAS scheduler. This evaluation followed the methodology of the original SEAS evaluation, in which Random and Round Robin schedulers were used as baselines. Although the original evaluation was performed by simulation using notebook profiles instead of smartphones and tablets, the results confirm that SEAS outperforms the baseline schedulers.
    Fil: Pérez Campos, Ana Bella. Universidad Nacional del Centro de la Provincia de Buenos Aires. Facultad de Ciencias Exactas; Argentina
    Fil: Rodriguez, Juan Manuel. Consejo Nacional de Investigaciones Científicas y Técnicas. Centro Científico Tecnológico Conicet - Tandil. Instituto Superior de Ingeniería del Software. Universidad Nacional del Centro de la Provincia de Buenos Aires. Instituto Superior de Ingeniería del Software; Argentina
    Fil: Zunino Suarez, Alejandro Octavio. Consejo Nacional de Investigaciones Científicas y Técnicas. Centro Científico Tecnológico Conicet - Tandil. Instituto Superior de Ingeniería del Software. Universidad Nacional del Centro de la Provincia de Buenos Aires. Instituto Superior de Ingeniería del Software; Argentina
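A minimal sketch of the kind of node ranking an energy-aware scheduler like SEAS relies on, assuming each device reports only easily accessible figures (remaining battery, a one-off benchmark score, and the number of queued jobs). The ranking formula and all field names below are illustrative assumptions, not the published SEAS criterion.

```python
from dataclasses import dataclass

@dataclass
class Node:
    name: str
    battery_pct: float      # remaining battery, 0-100
    benchmark_flops: float  # one-off benchmark score of the device
    queued_jobs: int        # jobs already assigned and not yet finished

def rank(node: Node) -> float:
    """Illustrative ranking: prefer nodes with more battery and compute,
    penalize nodes that already have work queued. Not the exact SEAS formula."""
    return (node.battery_pct * node.benchmark_flops) / (node.queued_jobs + 1)

def assign(job_id: str, nodes: list[Node]) -> Node:
    """Pick the highest-ranked node and account for the new job."""
    best = max(nodes, key=rank)
    best.queued_jobs += 1
    return best

if __name__ == "__main__":
    cluster = [Node("phone-a", 80, 1.2e9, 2),
               Node("tablet-b", 45, 2.5e9, 0),
               Node("phone-c", 95, 0.9e9, 1)]
    print(assign("job-1", cluster).name)
```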

    A Markov Decision Process Solution for Energy-Saving Network Selection and Computation Offloading in Vehicular Networks

    Vehicular Edge Computing (VEC) enables the integration of edge computing facilities in vehicular networks (VNs), bringing data-intensive and latency-critical applications and services to end-users. Though VEC brings several benefits in terms of reduced task computation time, energy consumption, backhaul link congestion, and data security risks, VEC servers are often resource-constrained. Therefore, selecting proper edge nodes and the amount of data to be offloaded becomes important for reaping the benefits of VEC processing. However, with highly mobile vehicles and dynamically changing vehicular environments, proper VEC node selection and data offloading can be challenging. In this work, we consider a joint network selection and computation offloading problem over a VEC environment for minimizing the overall latency and energy consumption during vehicular task processing, considering both user-side and infrastructure-side energy-saving mechanisms. We have modeled the problem as a sequential decision-making problem and formulated it as a Markov Decision Process (MDP). Numerous vehicular scenarios are considered, based on the users' positions, the states of the surrounding environment, and the available resources, to build a better environment model for the MDP analysis. We use a value iteration algorithm to find an optimal policy of the MDP over an uncertain vehicular environment. Simulation results show that the proposed approaches improve network performance in terms of latency and consumed energy.
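The abstract names value iteration as the solver for the MDP. Below is a minimal, self-contained value iteration sketch over a tiny synthetic MDP; the states, actions, transition probabilities and rewards are placeholders standing in for the paper's vehicular model (e.g. states combining vehicle position and server load, actions choosing a network and offload target).

```python
import numpy as np

# Tiny illustrative MDP: states could stand for (vehicle position, server load),
# actions for (process locally, offload to edge node 1, offload to edge node 2).
# Transition probabilities P[s, a, s'] and rewards R[s, a] are made up here.
n_states, n_actions = 4, 3
rng = np.random.default_rng(0)
P = rng.dirichlet(np.ones(n_states), size=(n_states, n_actions))  # each P[s, a] sums to 1
R = -rng.uniform(1.0, 5.0, size=(n_states, n_actions))            # negative cost: latency + energy

def value_iteration(P, R, gamma=0.95, tol=1e-6):
    """Standard value iteration: iterate the Bellman optimality update to convergence."""
    V = np.zeros(P.shape[0])
    while True:
        Q = R + gamma * P @ V            # Q[s, a] = R[s, a] + gamma * sum_s' P[s, a, s'] * V[s']
        V_new = Q.max(axis=1)
        if np.max(np.abs(V_new - V)) < tol:
            return V_new, Q.argmax(axis=1)  # optimal values and greedy policy
        V = V_new

V, policy = value_iteration(P, R)
print("policy:", policy)
```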

    An implementation of task processing on 4G-based mobile-edge computing systems

    Mobile Edge Computing (MEC) is a new technology that facilitates low-latency cloud services to mobile devices (MDs) by pushing mobile computing, storage and network control to the network edge (closer to MDs), thereby prolonging the battery lifetime of MDs. One of the main objectives of MEC is to reduce latency and enable delay-sensitive applications in 4G and, in the future, 5G communications. To achieve this, MEC builds up a computing platform by deploying edge servers (ESs) on the network edge. There is, therefore, a push to test MEC performance on existing cellular systems. With SINET, the mobile platform recently made available to academia, NII can now connect MDs to ESs through 4G. This project focuses on the implementation of a physical 4G-based MEC system for task offloading, in which, with the goal of performing face detection, MDs partially offload tasks to the ES under the instructions dictated by the offloading algorithms. Accordingly, the objective of this thesis is to demonstrate the efficiency of LTE-based MEC systems in the real world, focusing on their performance in terms of latency and battery consumption.
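A hedged sketch of the kind of per-task comparison a partial-offloading algorithm might make when deciding whether to keep a face-detection frame on the MD or send it to the ES. The linear time/energy model, the weighting, and every parameter name are assumptions for illustration, not the thesis' measured LTE model.

```python
def should_offload(task_cycles, input_bytes,
                   local_cps, local_power_w,
                   uplink_bps, tx_power_w,
                   edge_cps) -> bool:
    """Compare an estimated local cost against an estimated offload cost for one task.

    Local cost: compute time on the device and the energy it burns.
    Offload cost: time to ship the input over the cellular uplink plus edge compute
    time, and the radio energy spent transmitting. All models are illustrative.
    """
    t_local = task_cycles / local_cps
    e_local = t_local * local_power_w

    t_tx = input_bytes * 8 / uplink_bps
    t_edge = task_cycles / edge_cps
    t_offload = t_tx + t_edge
    e_offload = t_tx * tx_power_w

    # Simple weighted cost mixing latency and battery drain (weight is an assumption).
    alpha = 0.5
    cost_local = alpha * t_local + (1 - alpha) * e_local
    cost_offload = alpha * t_offload + (1 - alpha) * e_offload
    return cost_offload < cost_local

# e.g. one face-detection frame: 2e9 CPU cycles, 200 kB of input data
print(should_offload(2e9, 200_000, 1.5e9, 2.0, 20e6, 1.2, 10e9))
```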

    Edge/Fog Computing Technologies for IoT Infrastructure

    The prevalence of smart devices and cloud computing has led to an explosion in the amount of data generated by IoT devices. Moreover, emerging IoT applications, such as augmented and virtual reality (AR/VR), intelligent transportation systems, and smart factories, require ultra-low latency for data communication and processing. Fog/edge computing is a new computing paradigm where fully distributed fog/edge nodes located near end devices provide computing resources. By analyzing, filtering, and processing data at local fog/edge resources instead of transferring tremendous amounts of data to centralized cloud servers, fog/edge computing can reduce processing delay and network traffic significantly. With these advantages, fog/edge computing is expected to be one of the key enabling technologies for building the IoT infrastructure. Aiming to explore recent research and development on fog/edge computing technologies for building an IoT infrastructure, this book collects 10 articles. The selected articles cover diverse topics such as resource management, service provisioning, task offloading and scheduling, container orchestration, and security on edge/fog computing infrastructure, which can help readers grasp recent trends as well as state-of-the-art algorithms in fog/edge computing technologies.

    Energy and performance-optimized scheduling of tasks in distributed cloud and edge computing systems

    Infrastructure resources in distributed cloud data centers (CDCs) are shared by heterogeneous applications in a high-performance and cost-effective way. Edge computing has emerged as a new paradigm to provide access to computing capacities in end devices. Yet it suffers from such problems as load imbalance, long scheduling time, and the limited power of its edge nodes. Therefore, intelligent task scheduling in CDCs and edge nodes is critically important to construct energy-efficient cloud and edge computing systems. Current approaches cannot smartly minimize the total cost of CDCs, maximize their profit and improve the quality of service (QoS) of tasks because of the aperiodic arrival and heterogeneity of tasks. This dissertation proposes a class of energy and performance-optimized scheduling algorithms built on top of several intelligent optimization algorithms. The dissertation consists of two parts: background work (Chapters 3–6) and new contributions (Chapters 7–11).
    1) Background work. Chapter 3 proposes a spatial task scheduling and resource optimization method to minimize the total cost of CDCs where bandwidth prices of Internet service providers, power grid prices, and renewable energy all vary with location. Chapter 4 presents a geography-aware task scheduling approach that considers spatial variations in CDCs to maximize the profit of their providers by intelligently scheduling tasks. Chapter 5 presents a spatio-temporal task scheduling algorithm to minimize energy cost by scheduling heterogeneous tasks among CDCs while meeting their delay constraints. Chapter 6 gives a temporal scheduling algorithm considering temporal variations of revenue, electricity prices, green energy and prices of public clouds.
    2) Contributions. Chapter 7 proposes a multi-objective optimization method for CDCs to maximize their profit and minimize the average loss possibility of tasks by determining task allocation among Internet service providers and the task service rates of each CDC. A simulated annealing-based bi-objective differential evolution algorithm is proposed to obtain an approximate Pareto optimal set. A knee solution is selected to schedule tasks in a high-profit and high-quality-of-service way. Chapter 8 formulates a bi-objective constrained optimization problem and designs a novel optimization method to cope with energy cost reduction and QoS improvement. It jointly minimizes both the energy cost of CDCs and the average response time of all tasks by intelligently allocating tasks among CDCs and changing the task service rate of each CDC. Chapter 9 formulates a constrained bi-objective optimization problem for the joint optimization of revenue and energy cost of CDCs. It is solved with an improved multi-objective evolutionary algorithm based on decomposition. It determines a high-quality trade-off between revenue maximization and energy cost minimization by considering CDCs' spatial differences in energy cost while meeting tasks' delay constraints. Chapter 10 proposes a simulated annealing-based bees algorithm to find a close-to-optimal solution; a fine-grained spatial task scheduling algorithm is then designed to minimize the energy cost of CDCs by allocating tasks among multiple green clouds and specifying the running speeds of their servers. Chapter 11 proposes a profit-maximized collaborative computation offloading and resource allocation algorithm to maximize the profit of systems and guarantee that response time limits of tasks are met in cloud-edge computing systems. A single-objective constrained optimization problem is solved by a proposed simulated annealing-based migrating birds optimization. This dissertation evaluates these algorithms, models and software with real-life data and proves that they improve the scheduling precision and cost-effectiveness of distributed cloud and edge computing systems.
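Several of the chapters above build on simulated annealing-based metaheuristics for task allocation among data centers. The sketch below shows a generic simulated annealing loop applied to an illustrative problem of splitting a workload across three CDCs to trade off energy cost against queueing delay; the prices, service rates, and the weighted-sum cost are made-up stand-ins, not the dissertation's formulations.

```python
import math
import random

def simulated_annealing(cost, init, neighbor, t0=1.0, t_min=1e-3, alpha=0.95, iters=50):
    """Generic simulated annealing loop; cost() returns the scalar to minimize."""
    x, best = init, init
    t = t0
    while t > t_min:
        for _ in range(iters):
            y = neighbor(x)
            d = cost(y) - cost(x)
            if d < 0 or random.random() < math.exp(-d / t):
                x = y
                if cost(x) < cost(best):
                    best = x
        t *= alpha            # geometric cooling schedule
    return best

# Illustrative problem: split a task stream across three data centers with
# assumed per-unit energy prices and service rates; minimize a weighted sum
# of energy cost and average delay (M/M/1-style 1/(mu - lambda) term).
prices = [0.10, 0.07, 0.12]   # $/unit of work (assumed)
mus    = [120., 90., 150.]    # service rates (assumed)
total  = 200.                 # arriving workload

def cost(shares):
    lam = [total * s for s in shares]
    if any(l >= m for l, m in zip(lam, mus)):
        return float("inf")   # unstable queue -> infeasible
    energy = sum(p * l for p, l in zip(prices, lam))
    delay = sum(l / (m - l) for l, m in zip(lam, mus)) / total
    return 0.5 * energy + 0.5 * delay

def neighbor(shares):
    # Move a small fraction of load from one data center to another.
    i, j = random.sample(range(len(shares)), 2)
    step = min(shares[i], random.uniform(0, 0.05))
    out = list(shares)
    out[i] -= step
    out[j] += step
    return out

print(simulated_annealing(cost, [1/3, 1/3, 1/3], neighbor))
```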

    Extending the battery life of mobile device by computation offloading

    Doctor of Philosophy, Computing and Information Sciences, Daniel A. Andresen
    The need for increased performance of mobile devices directly conflicts with the desire for longer battery life. Offloading computation to resourceful servers is an effective method to reduce energy consumption and enhance performance for mobile applications. Today, most mobile devices have fast wireless links such as 4G and Wi-Fi, making computation offloading a reasonable solution to extend the battery life of mobile devices. Android provides mechanisms for creating mobile applications but lacks a native scheduling system for determining where code should be executed. We present Jade, a system that adds sophisticated energy-aware computation offloading capabilities to Android applications. Jade monitors device and application status and automatically decides where code should be executed. Jade dynamically adjusts its offloading strategy by adapting to workload variation, communication costs, and device status. Jade minimizes the burden on developers to build applications with computation offloading ability by providing an easy-to-use Jade API. Evaluation shows that Jade can reduce the average power consumption of mobile devices by up to 37% while improving application performance.
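A hedged sketch of how a runtime like Jade could expose offloading to developers: a decorator marks a method as remotable, and a small engine decides per call whether to run it locally or remotely based on current device status. The decorator name, the decision inputs (battery level, RTT), and the threshold are assumptions, not the actual Jade API or policy.

```python
import functools
import time

class OffloadEngine:
    """Illustrative runtime that decides per call whether to run a function
    locally or ship it to a server; the inputs and thresholds are placeholders."""

    def __init__(self, battery_pct: float, rtt_ms: float):
        self.battery_pct = battery_pct
        self.rtt_ms = rtt_ms

    def prefer_remote(self, est_local_ms: float) -> bool:
        # Offload when battery is low or the network round trip is cheap
        # relative to the estimated local compute time.
        return self.battery_pct < 30 or self.rtt_ms * 2 < est_local_ms

    def remotable(self, est_local_ms: float):
        def decorate(fn):
            @functools.wraps(fn)
            def wrapper(*args, **kwargs):
                if self.prefer_remote(est_local_ms):
                    return self._run_on_server(fn, *args, **kwargs)
                return fn(*args, **kwargs)
            return wrapper
        return decorate

    def _run_on_server(self, fn, *args, **kwargs):
        # Stand-in for serializing the call and executing it remotely.
        time.sleep(self.rtt_ms / 1000)
        return fn(*args, **kwargs)

engine = OffloadEngine(battery_pct=25, rtt_ms=40)

@engine.remotable(est_local_ms=500)
def detect_edges(pixels):
    return sum(pixels) % 255   # placeholder for real image processing

print(detect_edges(list(range(10_000))))
```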

    Resource Management in Multi-Access Edge Computing (MEC)

    This PhD thesis investigates effective ways of managing the resources of a Multi-Access Edge Computing (MEC) platform in 5th Generation Mobile Communication (5G) networks. The main characteristics of MEC include its distributed nature, proximity to users, and high availability. Based on these key features, solutions have been proposed for effective resource management. In this research, two aspects of resource management in MEC have been addressed: the computational resources and the caching resources, which correspond to the services provided by the MEC. MEC is a new 5G enabling technology proposed to reduce latency by bringing cloud computing capability closer to end-user Internet of Things (IoT) and mobile devices. MEC would support latency-critical user applications such as driverless cars and e-health. These applications will depend on resources and services provided by the MEC. However, MEC has limited computational and storage resources compared to the cloud. Therefore, it is important to ensure reliable MEC network communication during resource provisioning by eliminating the chance of deadlock. Deadlock may occur when a huge number of devices contend for a limited amount of resources if adequate measures are not put in place. It is crucial to eliminate deadlock while scheduling and provisioning resources on MEC to achieve a highly reliable and readily available system that supports latency-critical applications. In this research, a deadlock avoidance resource provisioning algorithm has been proposed for industrial IoT devices using MEC platforms to ensure higher reliability of network interactions. The proposed scheme incorporates Banker's resource-request algorithm and uses Software Defined Networking (SDN) to reduce communication overhead. Simulation and experimental results have shown that system deadlock can be prevented by applying the proposed algorithm, which ultimately leads to more reliable network interaction between mobile stations and MEC platforms. Additionally, this research explores the use of MEC as a caching platform, as it is proclaimed a key technology for reducing service processing delays in 5G networks. Caching on MEC decreases service latency and improves data content access by allowing direct content delivery through the edge without fetching data from the remote server. Caching on MEC is also deemed an effective approach that guarantees greater reachability due to proximity to end-users. In this regard, a novel hybrid content caching algorithm has been proposed for MEC platforms to increase their caching efficiency. The proposed algorithm is a unification of a modified Belady's algorithm and a distributed cooperative caching algorithm to improve data access while reducing latency. A polynomial fit algorithm with Lagrange interpolation is employed to predict future request references for Belady's algorithm. Experimental results show that the proposed algorithm obtains 4% more cache hits, due to its selective caching approach, when compared with the case study algorithms. Results also show that the use of a cooperative algorithm can improve the total cache hits by up to 80%. Furthermore, this thesis has also explored another predictive caching scheme to further improve caching efficiency. The motivation was to investigate another predictive caching approach as an improvement on the former. As a result, a Predictive Collaborative Replacement (PCR) caching framework has been proposed, which consists of three schemes.
Each of the schemes addresses a particular problem. The proactive predictive scheme addresses the problem of continuous change in cache popularity trends. The collaborative scheme addresses the problem of cache redundancy in the collaborative space. Finally, the replacement scheme is a solution to evict cold cache blocks and increase the hit ratio. Simulation experiments have shown that the replacement scheme achieves 3% more cache hits than existing replacement algorithms such as Least Recently Used, Multi Queue and Frequency-based replacement. The PCR algorithm has been tested using a real dataset (the MovieLens20M dataset) and compared with an existing contemporary predictive algorithm. Results show that PCR performs better, with a 25% increase in hit ratio and a 10% CPU utilization overhead.
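The hybrid caching scheme above applies Belady's rule to predicted future references. The sketch below shows that idea in isolation: evict the cached item whose next predicted use is farthest away, given some predictor of the upcoming request sequence. The oracle predictor and the trace are placeholders; a real system would plug in its forecasting model here (the thesis uses a polynomial fit with Lagrange interpolation).

```python
def belady_evict(cache: set, predicted_future: list):
    """Pick the cached item whose next predicted use is farthest away
    (or never predicted again). Classic Belady rule applied to a
    *predicted* future sequence instead of a known one."""
    def next_use(item):
        try:
            return predicted_future.index(item)
        except ValueError:
            return float("inf")   # never needed again -> best victim
    return max(cache, key=next_use)

def simulate(requests, predictor, capacity=3):
    """Replay a request trace through a small cache and report the hit ratio."""
    cache, hits = set(), 0
    for i, item in enumerate(requests):
        if item in cache:
            hits += 1
            continue
        if len(cache) >= capacity:
            cache.discard(belady_evict(cache, predictor(requests, i)))
        cache.add(item)
    return hits / len(requests)

# Perfect predictor (oracle) simply reveals the real remainder of the trace;
# a real deployment would substitute its own forecasting model.
oracle = lambda reqs, i: reqs[i + 1:]
trace = [1, 2, 3, 1, 4, 1, 2, 5, 1, 2, 3, 4]
print(f"hit ratio: {simulate(trace, oracle):.2f}")
```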