20,961 research outputs found

    Mobile Edge Computing: From Task Load Balancing to Real-World Mobile Sensing Applications

    Full text link
    University of Technology Sydney. Faculty of Engineering and Information Technology. With the rapid development of mobile computing technologies and the Internet of Things, there has been a steady rise of capable and affordable edge devices that can provide in-proximity computing services for mobile users. Moreover, a large number of mobile edge computing (MEC) systems have been developed to enhance various aspects of people's daily life, including big mobile data, healthcare, intelligent transportation, connected vehicles, smart building control, indoor localization, and many others. Although MEC systems can provide mobile users with swift computing services and conserve devices' energy by processing their tasks, significant research challenges remain from several perspectives, including resource management, task scheduling, service placement, and application development. For instance, computation offloading in MEC would significantly benefit mobile users but brings new challenges for service providers. Imbalance and inefficiency are two challenging issues when making computation offloading decisions among MEC servers. On the other hand, designing and implementing novel, practical applications for edge-assisted mobile computing and mobile sensing remains largely unexplored. The power of mobile edge computing has not yet been fully unleashed from either theoretical or practical perspectives. In this thesis, to address the above challenges from both theoretical and practical perspectives, we present four research studies within the scope of MEC: load balancing of computation offloading, fairness in workload scheduling, edge-assisted wireless sensing, and cross-domain learning for real-world edge sensing. The thesis consists of two major parts. In the first part, we investigate load balancing issues of computation offloading in MEC. First, we present a novel collaborative computation offloading mechanism for balanced mobile cloudlet networks. Then, a fairness-oriented task offloading scheme for IoT applications in MEC is further devised. The proposed computation offloading mechanisms incorporate algorithmic theories with the random mobility and opportunistic encounters of edge servers, thereby performing computation offloading for load balancing in a distributed manner. Through rigorous theoretical analyses and extensive simulations with real-world trace datasets, the proposed methods demonstrate significantly better-balanced computation offloading, showing great potential for practical deployment. In the second part, beyond theoretical perspectives, we further investigate two novel implementations of mobile edge computing: edge-assisted wireless crowdsensing for outdoor RSS maps, and urban traffic prediction with cross-domain learning. We implement our ideas in the iMap and BuildSenSys systems and demonstrate them with real-world datasets to show the effectiveness of the proposed applications. We believe that the above algorithms and applications hold great promise for future technological advancement in mobile edge computing.
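
    The thesis' collaborative and fairness-oriented mechanisms are distributed and mobility-aware; as a much simpler illustration of the underlying load-balancing intuition only, the sketch below greedily offloads each task to the encountered server that would end up least utilized. The EdgeServer class, server names, capacities, and task sizes are hypothetical and not taken from the thesis.

```python
# Minimal sketch of a load-aware offloading decision: send each task to the
# server whose utilization stays lowest. Illustrative only; not the thesis'
# collaborative mechanism. All names and numbers are made up.
from dataclasses import dataclass

@dataclass
class EdgeServer:
    name: str
    capacity: float           # tasks-seconds per second the server can absorb
    queued_load: float = 0.0  # currently queued work (task-seconds)

    def utilization(self) -> float:
        return self.queued_load / self.capacity

def offload(task_size: float, servers: list[EdgeServer]) -> EdgeServer:
    """Pick the server that remains least utilized after taking the task,
    which keeps load across the encountered servers balanced."""
    target = min(servers, key=lambda s: (s.queued_load + task_size) / s.capacity)
    target.queued_load += task_size
    return target

if __name__ == "__main__":
    servers = [EdgeServer("cloudlet-a", 10.0), EdgeServer("cloudlet-b", 6.0)]
    for size in [2.0, 3.0, 1.5, 4.0]:
        chosen = offload(size, servers)
        print(f"task({size}) -> {chosen.name}, util={chosen.utilization():.2f}")
```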

    Split and Migrate: Resource-Driven Placement and Discovery of Microservices at the Edge

    Get PDF
    Microservices architectures combine the use of fine-grained and independently scalable services with lightweight communication protocols, such as REST calls over HTTP. Microservices bring flexibility to the development and deployment of application back-ends in the cloud. Applications such as collaborative editing tools require frequent interactions between the front-end running on users' machines and a back-end formed of multiple microservices. User-perceived latencies depend on their connection to microservices, but also on the interaction patterns between these services and their databases. Placing services at the edge of the network, closer to the users, is necessary to reduce user-perceived latencies. It is, however, difficult to decide on the placement of complete stateful microservices at one specific core or edge location without trading a latency reduction for some users against a latency increase for others. We present how to dynamically deploy microservices on a combination of core and edge resources to systematically reduce user-perceived latencies. Our approach enables the splitting of stateful microservices and the placement of the resulting splits on appropriate core and edge sites. Koala, a decentralized and resource-driven service discovery middleware, enables REST calls to reach and use the appropriate split, with only minimal changes to a legacy microservices application. Locality awareness using network coordinates further enables service splits to be automatically migrated to follow the location of the users. We confirm the effectiveness of our approach with a full prototype and an application to ShareLatex, a microservices-based collaborative editing application.
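
    As a rough illustration of how locality awareness via network coordinates can steer a request to the nearest split, the sketch below estimates latency as distance in a synthetic coordinate space and routes to the closest site. This is not Koala's actual API; the site names, coordinates, and distance model are assumptions.

```python
# Illustrative locality-aware split selection using synthetic network
# coordinates. Euclidean distance in coordinate space is used as a stand-in
# for estimated round-trip latency.
import math
from dataclasses import dataclass

@dataclass
class ServiceSplit:
    site: str                    # e.g. "core-frankfurt" or "edge-lyon" (hypothetical)
    coord: tuple[float, float]   # synthetic network coordinate of the site

def latency_estimate(a: tuple[float, float], b: tuple[float, float]) -> float:
    return math.dist(a, b)

def pick_split(user_coord: tuple[float, float], splits: list[ServiceSplit]) -> ServiceSplit:
    """Route the user's REST call to the split with the lowest estimated latency."""
    return min(splits, key=lambda s: latency_estimate(user_coord, s.coord))

splits = [ServiceSplit("core-frankfurt", (0.0, 0.0)),
          ServiceSplit("edge-lyon", (3.0, 1.0)),
          ServiceSplit("edge-madrid", (5.0, 4.0))]
print(pick_split((2.5, 1.2), splits).site)   # the nearest split serves the request
```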

    A Taxonomy for Management and Optimization of Multiple Resources in Edge Computing

    Full text link
    Edge computing is promoted to meet the increasing performance needs of data-driven services, using computational and storage resources close to the end devices at the edge of the current network. To achieve higher performance in this new paradigm, one has to consider how to combine the efficiency of resource usage at all three layers of the architecture: end devices, edge devices, and the cloud. While cloud capacity is elastically extendable, end devices and edge devices are to various degrees resource-constrained. Hence, efficient resource management is essential to make edge computing a reality. In this work, we first present terminology and architectures to characterize current works within the field of edge computing. Then, we review a wide range of recent articles and categorize relevant aspects in terms of four perspectives: resource type, resource management objective, resource location, and resource use. This taxonomy and the ensuing analysis are used to identify gaps in the existing research. Among several research gaps, we found that research on data, storage, and energy as resources is less prevalent, and that work on the estimation, discovery, and sharing objectives is less extensive. As for resource types, the most well-studied resources are computation and communication. Our analysis shows that resource management at the edge requires a deeper understanding of how methods applied at different levels and geared towards different resource types interact. Specifically, the impact of mobility and of collaboration schemes requiring incentives is expected to be different in edge architectures compared to classic cloud solutions. Finally, we find that fewer works are dedicated to the study of non-functional properties or to quantifying the footprint of resource management techniques, including edge-specific means of migrating data and services. Comment: Accepted in the Special Issue on Mobile Edge Computing of the Wireless Communications and Mobile Computing journal
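
    One way to picture the four classification perspectives is as a record type that tags a surveyed work along each dimension. The sketch below is only a schematic encoding: it enumerates the category values that the abstract itself names and leaves the "resource use" perspective free-form, since its categories are not listed here.

```python
# Schematic encoding of the four taxonomy perspectives named in the abstract.
# Category values are limited to those the abstract mentions; the full
# taxonomy in the paper is richer than this.
from dataclasses import dataclass
from enum import Enum

class ResourceType(Enum):
    COMPUTATION = "computation"
    COMMUNICATION = "communication"
    DATA = "data"
    STORAGE = "storage"
    ENERGY = "energy"

class Objective(Enum):
    ESTIMATION = "estimation"
    DISCOVERY = "discovery"
    SHARING = "sharing"
    ALLOCATION = "allocation"   # placeholder for the remaining objectives

class Location(Enum):
    END_DEVICE = "end device"
    EDGE_DEVICE = "edge device"
    CLOUD = "cloud"

@dataclass
class SurveyedWork:
    title: str
    resource_types: list[ResourceType]
    objective: Objective
    location: Location
    resource_use: str            # fourth perspective, kept free-form here

work = SurveyedWork(
    title="hypothetical offloading scheme",
    resource_types=[ResourceType.COMPUTATION, ResourceType.COMMUNICATION],
    objective=Objective.ALLOCATION,
    location=Location.EDGE_DEVICE,
    resource_use="latency reduction",
)
print(work)
```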

    Spatio-temporal Edge Service Placement: A Bandit Learning Approach

    Full text link
    Shared edge computing platforms deployed at the radio access network are expected to significantly improve the quality of service delivered by Application Service Providers (ASPs) in a flexible and economical way. However, placing edge services in every possible edge site is practically infeasible for an ASP due to the prohibitive budget it would require. In this paper, we investigate the edge service placement problem of an ASP under a limited budget, where the ASP dynamically rents computing/storage resources in edge sites to host its applications in close proximity to end users. Since the benefit of placing edge service in a specific site is usually unknown to the ASP a priori, optimal placement decisions must be made while learning this benefit. We pose this problem as a novel combinatorial contextual bandit learning problem. It is "combinatorial" because only a limited number of edge sites can be rented to provide the edge service given the ASP's budget. It is "contextual" because we utilize user context information to enable finer-grained learning and decision making. To solve this problem and optimize the edge computing performance, we propose SEEN, a Spatial-temporal Edge sErvice placemeNt algorithm. Furthermore, SEEN is extended to scenarios with overlapping service coverage by incorporating a disjunctively constrained knapsack problem. In both cases, we prove that our algorithm achieves a sublinear regret bound when compared to an oracle algorithm that knows the exact benefit information. Simulations carried out on a real-world dataset show that SEEN significantly outperforms benchmark solutions.
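
    The sketch below conveys the bandit intuition with a plain combinatorial UCB policy that rents K of N sites each round and learns per-site benefits from observed rewards. SEEN additionally exploits user context and handles overlapping coverage, both omitted here; the reward model and all parameters are invented for illustration.

```python
# Simplified combinatorial UCB for renting K of N edge sites under a budget.
# Not the SEEN algorithm itself: context and overlapping coverage are omitted.
import math, random

N, K, ROUNDS = 8, 3, 2000
true_benefit = [random.uniform(0.2, 0.9) for _ in range(N)]  # unknown to the ASP
counts = [0] * N        # how often each site has been rented
means = [0.0] * N       # empirical mean benefit per site

def ucb_index(i: int, t: int) -> float:
    if counts[i] == 0:
        return float("inf")            # force initial exploration of every site
    return means[i] + math.sqrt(2 * math.log(t) / counts[i])

for t in range(1, ROUNDS + 1):
    # "Combinatorial" action: rent the K sites with the highest optimistic index.
    chosen = sorted(range(N), key=lambda i: ucb_index(i, t), reverse=True)[:K]
    for i in chosen:
        reward = 1.0 if random.random() < true_benefit[i] else 0.0  # toy feedback
        counts[i] += 1
        means[i] += (reward - means[i]) / counts[i]   # incremental mean update

oracle = sorted(range(N), key=lambda i: true_benefit[i], reverse=True)[:K]
print("learned top sites:", sorted(chosen), "oracle top sites:", sorted(oracle))
```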

    A Time-driven Data Placement Strategy for a Scientific Workflow Combining Edge Computing and Cloud Computing

    Full text link
    Compared to traditional distributed computing environments such as grids, cloud computing provides a more cost-effective way to deploy scientific workflows. Each task of a scientific workflow requires several large datasets that are located in different datacenters of the cloud computing environment, resulting in serious data transmission delays. Edge computing reduces these data transmission delays and supports fixed placement of a scientific workflow's private datasets, but its storage capacity is a bottleneck. It is a challenge to combine the advantages of edge computing and cloud computing to rationalize the data placement of scientific workflows and optimize the data transmission time across different datacenters. Traditional data placement strategies maintain load balancing over a given number of datacenters, which results in a large data transmission time. In this study, a self-adaptive discrete particle swarm optimization algorithm with genetic algorithm operators (GA-DPSO) was proposed to optimize the data transmission time when placing data for a scientific workflow. This approach considered the characteristics of data placement across combined edge and cloud computing environments, as well as the factors affecting transmission delay, such as the bandwidth between datacenters, the number of edge datacenters, and the storage capacity of edge datacenters. The crossover and mutation operators of the genetic algorithm were adopted to avoid the premature convergence of traditional particle swarm optimization, which enhanced the diversity of population evolution and effectively reduced the data transmission time. The experimental results show that the data placement strategy based on GA-DPSO can effectively reduce the data transmission time during workflow execution across combined edge and cloud environments.
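
    As a rough sketch of the GA-DPSO idea, the code below encodes a placement as a vector assigning each dataset to a datacenter and replaces the PSO velocity update with crossover toward the personal and global bests plus mutation. The cost model is simplified (no storage-capacity constraints or fixed private datasets), and the sizes, bandwidths, and consumer assignments are randomly generated for illustration only.

```python
# Simplified discrete PSO with GA-style crossover/mutation for data placement.
# Illustrative only; the paper's GA-DPSO also handles storage capacities and
# fixed private datasets, which are omitted here.
import random

DATASETS, CENTERS = 6, 3
size = [random.uniform(1, 10) for _ in range(DATASETS)]                 # dataset sizes (GB)
bw = [[random.uniform(0.5, 2.0) for _ in range(CENTERS)] for _ in range(CENTERS)]  # GB/s links
consumers = [random.sample(range(CENTERS), k=random.randint(1, 2)) for _ in range(DATASETS)]

def transmission_time(placement):
    """Total time to ship each dataset from its assigned datacenter to every consumer."""
    total = 0.0
    for d, home in enumerate(placement):
        for c in consumers[d]:
            if c != home:
                total += size[d] / bw[home][c]
    return total

def crossover(a, b):
    """One-point crossover pulling candidate a toward a guide (pbest or gbest)."""
    cut = random.randrange(1, DATASETS)
    return a[:cut] + b[cut:]

def mutate(p, rate=0.1):
    return [random.randrange(CENTERS) if random.random() < rate else g for g in p]

swarm = [[random.randrange(CENTERS) for _ in range(DATASETS)] for _ in range(20)]
pbest = [list(p) for p in swarm]
gbest = min(swarm, key=transmission_time)

for _ in range(100):
    for i, p in enumerate(swarm):
        cand = mutate(crossover(crossover(p, pbest[i]), gbest))  # GA ops replace velocity
        swarm[i] = cand
        if transmission_time(cand) < transmission_time(pbest[i]):
            pbest[i] = cand
    gbest = min(pbest, key=transmission_time)

print("best placement:", gbest, "transmission time:", round(transmission_time(gbest), 2))
```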