
    A Taxonomy for Management and Optimization of Multiple Resources in Edge Computing

    Edge computing is promoted to meet the increasing performance needs of data-driven services by using computational and storage resources close to the end devices, at the edge of the current network. To achieve higher performance in this new paradigm, one has to consider how to combine efficient resource usage at all three layers of the architecture: end devices, edge devices, and the cloud. While cloud capacity is elastically extendable, end devices and edge devices are to various degrees resource-constrained. Hence, efficient resource management is essential to make edge computing a reality. In this work, we first present terminology and architectures to characterize current works within the field of edge computing. Then, we review a wide range of recent articles and categorize relevant aspects in terms of four perspectives: resource type, resource management objective, resource location, and resource use. This taxonomy and the ensuing analysis are used to identify gaps in the existing research. Among several research gaps, we found that research is less prevalent on data, storage, and energy as resources, and less extensive towards the estimation, discovery, and sharing objectives. As for resource types, the most well-studied resources are computation and communication resources. Our analysis shows that resource management at the edge requires a deeper understanding of how methods applied at different levels and geared towards different resource types interact. Specifically, the impact of mobility and of collaboration schemes requiring incentives is expected to differ in edge architectures compared to classic cloud solutions. Finally, we find that fewer works are dedicated to the study of non-functional properties or to quantifying the footprint of resource management techniques, including edge-specific means of migrating data and services. Comment: Accepted in the Special Issue Mobile Edge Computing of the Wireless Communications and Mobile Computing journal.
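    The four survey perspectives can be pictured as fields of a simple classification record. The sketch below is illustrative only: the enum values echo terms from the abstract (computation, communication, storage, data, energy; estimation, discovery, sharing; end device, edge device, cloud), while the `resource_use` field, the `SurveyedWork` record, and the `count_by_resource` helper are hypothetical additions, not the paper's actual classification tool.

```python
from dataclasses import dataclass
from enum import Enum

class ResourceType(Enum):
    COMPUTATION = "computation"
    COMMUNICATION = "communication"
    STORAGE = "storage"
    DATA = "data"
    ENERGY = "energy"

class Objective(Enum):
    ESTIMATION = "estimation"
    DISCOVERY = "discovery"
    SHARING = "sharing"
    ALLOCATION = "allocation"   # assumed extra objective for illustration

class Location(Enum):
    END_DEVICE = "end device"
    EDGE_DEVICE = "edge device"
    CLOUD = "cloud"

@dataclass
class SurveyedWork:
    """One reviewed article classified along the taxonomy's four perspectives."""
    title: str
    resource_type: ResourceType
    objective: Objective
    location: Location
    resource_use: str  # free-text description of how the resource is used

def count_by_resource(works: list[SurveyedWork]) -> dict[ResourceType, int]:
    """Tally surveyed works per resource type, e.g. to expose under-studied resources."""
    counts: dict[ResourceType, int] = {}
    for w in works:
        counts[w.resource_type] = counts.get(w.resource_type, 0) + 1
    return counts
```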

    SAMI: Service-Based Arbitrated Multi-Tier Infrastructure for Mobile Cloud Computing

    Mobile Cloud Computing (MCC) is the state-of-the-art mobile computing technology that aims to alleviate the resource poverty of mobile devices. Recently, several approaches and techniques have been proposed to augment mobile devices by leveraging cloud computing. However, long WAN latency and trust are still two major issues in MCC that hinder its vision. In this paper, we analyze MCC and discuss its issues. We leverage Service Oriented Architecture (SOA) to propose an arbitrated multi-tier infrastructure model named SAMI for MCC. Our architecture consists of three major layers, namely SOA, arbitrator, and infrastructure. The main strength of this architecture is its multi-tier infrastructure layer, which leverages infrastructures from three main sources: Clouds, Mobile Network Operators (MNOs), and MNOs' authorized dealers. On top of the infrastructure layer, an arbitrator layer is designed to classify services and allocate suitable resources to them based on several metrics such as resource requirement, latency, and security. Utilizing SAMI facilitates the development and deployment of service-based, platform-neutral mobile applications. Comment: 6 full pages, accepted for publication in the IEEE MobiCC'12 conference, MobiCC 2012: IEEE Workshop on Mobile Cloud Computing, Beijing, China.
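    As a rough illustration of the arbitrator layer's role, the sketch below assigns a service to one of the three infrastructure tiers named in the abstract (cloud, MNO, MNO authorized dealer) based on latency, security, and capacity. It is a minimal sketch under invented assumptions: the selection rule, thresholds, and all field names are hypothetical, not SAMI's actual arbitration algorithm.

```python
from dataclasses import dataclass

@dataclass
class Service:
    name: str
    cpu_demand: float       # normalized resource requirement (0..1), assumed metric
    max_latency_ms: float   # latency tolerance of the service
    needs_high_security: bool

@dataclass
class Tier:
    name: str               # e.g. "cloud", "mno", "mno_dealer"
    latency_ms: float       # typical round-trip latency to this tier
    capacity: float         # remaining normalized capacity
    trusted: bool           # e.g. MNO-operated tiers may be treated as more trusted

def arbitrate(service: Service, tiers: list[Tier]) -> Tier | None:
    """Pick the lowest-latency tier that satisfies latency, security, and capacity.

    Candidates are tried in order of increasing latency, mimicking an arbitrator
    that prefers nearby resources for delay-sensitive services.
    """
    for tier in sorted(tiers, key=lambda t: t.latency_ms):
        if tier.latency_ms > service.max_latency_ms:
            continue
        if service.needs_high_security and not tier.trusted:
            continue
        if tier.capacity < service.cpu_demand:
            continue
        tier.capacity -= service.cpu_demand
        return tier
    return None  # no feasible tier; service stays on the device or is rejected
```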

    InterCloud: Utility-Oriented Federation of Cloud Computing Environments for Scaling of Application Services

    Cloud computing providers have set up several data centers at different geographical locations over the Internet in order to optimally serve the needs of their customers around the world. However, existing systems do not support mechanisms and policies for dynamically coordinating load distribution among different Cloud-based data centers in order to determine the optimal location for hosting application services to achieve reasonable QoS levels. Further, Cloud computing providers are unable to predict the geographic distribution of users consuming their services; hence, load coordination must happen automatically, and the distribution of services must change in response to changes in the load. To counter this problem, we advocate the creation of a federated Cloud computing environment (InterCloud) that facilitates just-in-time, opportunistic, and scalable provisioning of application services, consistently achieving QoS targets under variable workload, resource, and network conditions. The overall goal is to create a computing environment that supports dynamic expansion or contraction of capabilities (VMs, services, storage, and database) for handling sudden variations in service demands. This paper presents the vision, challenges, and architectural elements of InterCloud for utility-oriented federation of Cloud computing environments. The proposed InterCloud environment supports scaling of applications across multiple vendor clouds. We have validated our approach by conducting a set of rigorous performance evaluation studies using the CloudSim toolkit. The results demonstrate that the federated Cloud computing model has immense potential, as it offers significant performance gains with regard to response time and cost saving under dynamic workload scenarios. Comment: 20 pages, 4 figures, 3 tables, conference paper.
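    The dynamic expansion/contraction idea can be illustrated with a toy auto-scaling policy over a set of federated providers. This is not InterCloud's mechanism or the CloudSim API; the provider fields, thresholds, and cheapest-first rule are assumptions made purely for illustration.

```python
from dataclasses import dataclass

@dataclass
class CloudProvider:
    name: str
    cost_per_vm_hour: float
    vms_running: int

def rebalance(providers: list[CloudProvider],
              avg_utilization: float,
              scale_out_at: float = 0.8,
              scale_in_at: float = 0.3) -> None:
    """Add a VM at the cheapest provider when load is high; remove one when low.

    Thresholds and the cost-ordered choice are invented for this sketch.
    """
    by_cost = sorted(providers, key=lambda p: p.cost_per_vm_hour)
    if avg_utilization > scale_out_at:
        by_cost[0].vms_running += 1          # expand at the cheapest federated cloud
    elif avg_utilization < scale_in_at:
        for p in reversed(by_cost):          # contract at the priciest active provider
            if p.vms_running > 0:
                p.vms_running -= 1
                break

# Example: scale out when average utilization across the federation hits 90%.
fleet = [CloudProvider("provider-a", 0.10, 4), CloudProvider("provider-b", 0.25, 2)]
rebalance(fleet, avg_utilization=0.9)
```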

    Cloud computing resource scheduling and a survey of its evolutionary approaches

    A disruptive technology fundamentally transforming the way that computing services are delivered, cloud computing offers information and communication technology users a new level of convenience by delivering resources as services via the Internet. Because the cloud provides a finite pool of virtualized on-demand resources, scheduling them optimally has become an essential and rewarding topic, where a trend of using Evolutionary Computation (EC) algorithms is emerging rapidly. Through analyzing the cloud computing architecture, this survey first presents a two-level taxonomy of cloud resource scheduling. It then paints a landscape of the scheduling problem and its solutions. According to the taxonomy, a comprehensive survey of state-of-the-art approaches is presented systematically. Looking forward, challenges and potential future research directions are investigated, including real-time scheduling, adaptive dynamic scheduling, large-scale scheduling, multiobjective scheduling, and distributed and parallel scheduling. At the dawn of Industry 4.0, cloud computing scheduling for cyber-physical integration in the presence of big data is also discussed. Research in this area is only in its infancy, but with the rapid fusion of information and data technology, more exciting and agenda-setting topics are likely to emerge on the horizon.
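    To make the EC-based scheduling idea concrete, here is a toy genetic algorithm that maps tasks to virtual machines while minimizing the makespan. It is a minimal sketch, not taken from the survey: the task lengths, VM speeds, population size, and the crossover/mutation scheme are all invented for illustration.

```python
import random

task_lengths = [4, 7, 2, 9, 5, 3, 8]   # abstract "work units" per task (illustrative)
vm_speeds = [1.0, 2.0, 1.5]            # work units processed per time unit (illustrative)

def makespan(assignment: list[int]) -> float:
    """Finish time of the most loaded VM for a task-to-VM assignment."""
    loads = [0.0] * len(vm_speeds)
    for task, vm in enumerate(assignment):
        loads[vm] += task_lengths[task] / vm_speeds[vm]
    return max(loads)

def evolve(pop_size: int = 30, generations: int = 100, mutation_rate: float = 0.1) -> list[int]:
    """Evolve task-to-VM assignments; lower makespan means higher fitness."""
    n_tasks, n_vms = len(task_lengths), len(vm_speeds)
    population = [[random.randrange(n_vms) for _ in range(n_tasks)]
                  for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=makespan)                 # keep the fitter half
        survivors = population[: pop_size // 2]
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = random.sample(survivors, 2)
            cut = random.randrange(1, n_tasks)        # one-point crossover
            child = a[:cut] + b[cut:]
            if random.random() < mutation_rate:       # mutate one gene
                child[random.randrange(n_tasks)] = random.randrange(n_vms)
            children.append(child)
        population = survivors + children
    return min(population, key=makespan)

best = evolve()
print(best, makespan(best))
```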

    Management and Service-aware Networking Architectures (MANA) for Future Internet Position Paper: System Functions, Capabilities and Requirements

    Future Internet (FI) research and development threads have recently been gaining momentum all over the world, and the international race to create a new-generation Internet is in full swing: GENI, Asia Future Internet, Future Internet Forum Korea, European Union Future Internet Assembly (FIA). This is a position paper identifying the research orientation, with a time horizon of 10 years, together with the key challenges for the capabilities in the Management and Service-aware Networking Architectures (MANA) part of the Future Internet (FI), allowing for parallel and federated Internet(s).

    Efficient Traffic Management Algorithms for the Core Network using Device-to-Device Communication and Edge Caching

    The exponentially growing number of communicating devices and the need for faster, more reliable, and more secure communication are becoming major challenges for the current mobile communication architecture. More connected devices mean higher bandwidth demands and stricter Quality of Service (QoS) requirements, which bring new challenges in terms of resource and traffic management. Traffic offloading to the edge has been introduced to tackle this demand explosion: it lets the core network offload some of the content to the edge to reduce traffic congestion. Device-to-Device (D2D) communication and edge caching have been proposed as promising solutions for offloading data. D2D communication refers to a communication infrastructure where users in proximity communicate with each other directly. D2D communication improves overall spectral efficiency; however, it introduces additional interference into the system. To enable D2D communication, efficient resource allocation must be introduced in order to minimize interference in the system, which benefits the system in terms of bandwidth efficiency. In the first part of this thesis, a low-complexity resource allocation algorithm using stable matching is proposed to optimally assign appropriate uplink resources to devices in order to minimize interference among D2D and cellular users. Edge caching has recently been introduced as a modification of the caching scheme in the core network, which enables a cellular Base Station (BS) to keep copies of contents in order to better serve users and enhance Quality of Experience (QoE). However, enabling BSs to cache data at the edge of the network brings new challenges, especially in deciding which contents should be cached and how. Since users in the same cell may share similar content needs, we can exploit this temporal-spatial correlation in favor of the caching system; this is referred to as local content popularity. Content popularity is the most important factor in the caching scheme, as it helps the BSs cache appropriate data in order to serve users more efficiently. In the edge caching scheme, the BS does not know the users' request pattern in advance. To overcome this bottleneck, a content popularity prediction scheme using a Markov Decision Process (MDP) is proposed in the second part of this thesis to let the BS know which data should be cached in each time slot. By using the proposed scheme, core network access requests can be significantly reduced, and the scheme performs better than caching based on historical data under both stable and unstable content popularity.
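    The stable-matching allocator in the first part can be pictured as a Gale-Shapley-style routine in which D2D pairs propose to uplink resource blocks (RBs) and each RB keeps its most-preferred proposer. The sketch below is a generic deferred-acceptance matcher, not the thesis's algorithm; the preference lists, which in the thesis would be derived from interference conditions, are invented here for illustration.

```python
def stable_match(d2d_prefs: list[list[int]],
                 rb_prefs: list[dict[int, int]]) -> dict[int, int]:
    """Deferred-acceptance matching of D2D pairs to uplink resource blocks.

    d2d_prefs[i] : RB indices ordered most-preferred first for D2D pair i.
    rb_prefs[j]  : dict mapping D2D index -> rank (lower = preferred) for RB j.
    Returns a dict {rb_index: d2d_index}.
    """
    free = list(range(len(d2d_prefs)))        # unmatched D2D pairs
    next_proposal = [0] * len(d2d_prefs)      # next RB each pair will propose to
    engaged: dict[int, int] = {}              # rb -> d2d
    while free:
        d = free.pop(0)
        rb = d2d_prefs[d][next_proposal[d]]
        next_proposal[d] += 1
        if rb not in engaged:
            engaged[rb] = d                   # RB was free, accept proposal
        elif rb_prefs[rb][d] < rb_prefs[rb][engaged[rb]]:
            free.append(engaged[rb])          # RB prefers the new proposer
            engaged[rb] = d
        else:
            free.append(d)                    # proposal rejected, try next RB
    return engaged

# Example with 3 D2D pairs and 3 RBs (preference lists invented for illustration):
d2d_prefs = [[0, 1, 2], [1, 0, 2], [0, 2, 1]]
rb_prefs = [{0: 1, 1: 0, 2: 2}, {0: 0, 1: 1, 2: 2}, {0: 2, 1: 1, 2: 0}]
print(stable_match(d2d_prefs, rb_prefs))
```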

    Millimeter-wave Evolution for 5G Cellular Networks

    Triggered by the explosion of mobile traffic, the 5G (5th Generation) cellular network requires evolution to increase the system rate to 1000 times that of current systems within 10 years. Motivated by this common problem, there are several studies on integrating mm-wave access into current cellular networks as multi-band heterogeneous networks to exploit the ultra-wideband nature of the mm-wave band. The authors of this paper have proposed a comprehensive architecture for cellular networks with mm-wave access, where mm-wave small-cell basestations and a conventional macro basestation are connected to a Centralized-RAN (C-RAN) to operate the system effectively. This enables power-efficient seamless handover as well as centralized resource control, including dynamic cell structuring to match the limited coverage of mm-wave access with high-traffic user locations via user-plane/control-plane splitting. In this paper, to prove the effectiveness of the proposed 5G cellular networks with mm-wave access, a system-level simulation is conducted by introducing an expected future traffic model, a measurement-based mm-wave propagation model, and a centralized cell association algorithm that exploits the C-RAN architecture. The numerical results show the effectiveness of the proposed network in realizing a system rate 1000 times higher than the current network in 10 years, which is not achieved by small cells using the commonly considered 3.5 GHz band. Furthermore, the paper also gives the latest status of mm-wave devices and regulations to show the feasibility of using mm-wave in 5G systems. Comment: 17 pages, 12 figures, accepted to be published in IEICE Transactions on Communications. (Mar. 2015)
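    A centralized cell association of the kind mentioned in the abstract can be sketched as the C-RAN picking, for every user, the station with the best estimated rate, where a mm-wave small cell only qualifies inside its short coverage radius. The code below is a toy illustration under invented assumptions (station coordinates, radii, peak rates, and the crude rate proxy), not the paper's association algorithm or propagation model.

```python
import math

def estimated_rate(user_xy: tuple[float, float], station: dict) -> float:
    """Crude rate proxy: zero outside coverage, otherwise peak rate shrinking with distance."""
    dist = math.hypot(user_xy[0] - station["xy"][0], user_xy[1] - station["xy"][1])
    if dist > station["radius"]:
        return 0.0                       # out of coverage (tight for mm-wave small cells)
    return station["peak_rate"] / (1.0 + dist)

def associate(users: list[tuple[float, float]], stations: list[dict]) -> list[int]:
    """For each user, return the index of the station with the best estimated rate."""
    return [max(range(len(stations)),
                key=lambda s: estimated_rate(u, stations[s]))
            for u in users]

# Illustrative deployment: one wide-but-slow macro cell, one fast mm-wave small cell.
stations = [
    {"xy": (0.0, 0.0),   "radius": 500.0, "peak_rate": 0.1},
    {"xy": (50.0, 20.0), "radius": 80.0,  "peak_rate": 10.0},
]
users = [(10.0, 10.0), (200.0, 200.0), (60.0, 25.0)]
print(associate(users, stations))
```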