972 research outputs found

    On Uncoordinated Service Placement in Edge-Clouds

    Edge computing has emerged as a new paradigm to bring cloud applications closer to users for increased performance. ISPs have the opportunity to deploy private edge-clouds in their infrastructure to generate additional revenue by providing ultra-low latency applications to local users. We envision a rapid increase in the number of such applications for "edge" networks in the near future, with virtual/augmented reality (VR/AR), networked gaming, wearable cognitive assistance, autonomous driving and IoT analytics having already been proposed for edge-clouds instead of the central clouds to improve performance. This raises new challenges, as the complexity of the resource allocation problem for multiple services with latency deadlines (i.e., which service to place at which node of the edge-cloud in order to satisfy the latency constraints) becomes significant. In this paper, we propose a set of practical, uncoordinated strategies for service placement in edge-clouds. Through extensive simulations using both synthetic and real-world trace data, we demonstrate that uncoordinated strategies can perform comparably to the optimal placement solution, which satisfies the maximum number of user requests.
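The paper's strategies are not spelled out in the abstract, but one plausible uncoordinated heuristic of the kind it describes can be sketched as follows: each edge node independently fills its capacity with the most-requested services whose latency deadline it can still meet, with no coordination across nodes. All names and data structures below are illustrative assumptions, not the paper's actual algorithm.

```python
# Hypothetical sketch of an uncoordinated placement heuristic: each edge node
# independently packs the most popular services whose deadlines it can meet.

def place_services(node_capacity, node_latency, services):
    """services: list of (name, request_rate, deadline_ms) tuples."""
    placed, used = [], 0
    # Rank services by local popularity, most requested first.
    for name, rate, deadline in sorted(services, key=lambda s: -s[1]):
        if node_latency <= deadline and used < node_capacity:
            placed.append(name)
            used += 1
    return placed

# Example: a node with room for 2 services and 10 ms latency to its users.
services = [("vr", 120, 15), ("gaming", 90, 8), ("iot", 300, 50)]
print(place_services(2, 10, services))  # ['iot', 'vr']: "gaming" misses its 8 ms deadline
```

Because each node decides alone, no signaling between computation spots is needed, which is what makes such strategies practical at the edge.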

    ClouDiA: a deployment advisor for public clouds

    An increasing number of distributed data-driven applications are moving into shared public clouds. By sharing resources and operating at scale, public clouds promise higher utilization and lower costs than private clusters. To achieve high utilization, however, cloud providers inevitably allocate virtual machine instances non-contiguously, i.e., instances of a given application may end up in physically distant machines in the cloud. This allocation strategy can lead to large differences in average latency between instances. For a large class of applications, this difference can result in significant performance degradation, unless care is taken in how application components are mapped to instances. In this paper, we propose ClouDiA, a general deployment advisor that selects application node deployments minimizing either (i) the largest latency between application nodes, or (ii) the longest critical path among all application nodes. ClouDiA employs mixed-integer programming and constraint programming techniques to efficiently search the space of possible mappings of application nodes to instances. Through experiments with synthetic and real applications in Amazon EC2, we show that our techniques yield a 15% to 55% reduction in time-to-solution or service response time, without any need for modifying application code.
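Objective (i) above can be conveyed with a toy version of the search: among all ways to map application nodes onto available instances, pick the one minimizing the largest pairwise latency. ClouDiA uses mixed-integer and constraint programming for this; the exhaustive enumeration below is only a sketch of the objective on small inputs, and all names are assumptions.

```python
# Illustrative brute-force version of ClouDiA's objective (i): choose a
# mapping of application nodes to instances minimizing the largest pairwise
# latency. Exhaustive search is only feasible for toy inputs.
from itertools import permutations

def best_mapping(latency, n_nodes):
    """latency[i][j]: measured latency between instances i and j."""
    instances = range(len(latency))
    best, best_cost = None, float("inf")
    for mapping in permutations(instances, n_nodes):
        cost = max(latency[a][b] for a in mapping for b in mapping if a != b)
        if cost < best_cost:
            best, best_cost = mapping, cost
    return best, best_cost

lat = [[0, 1, 9], [1, 0, 9], [9, 9, 0]]
print(best_mapping(lat, 2))  # picks instances 0 and 1 (max latency 1)
```

The search space grows factorially, which is exactly why the paper turns to mixed-integer and constraint programming rather than enumeration.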

    Named Functions at the Edge

    As end-user and edge-network devices are becoming ever more powerful, they are producing ever increasing amounts of data. Pulling all this data into the cloud for processing is impossible, not only due to its enormous volume, but also due to the stringent latency requirements of many applications. Instead, we argue that end-user and edge-network devices should collectively form edge computing swarms and complement the cloud with their storage and processing resources. This shift from centralized to edge clouds has the potential to open new horizons for application development, supporting new low-latency services and, ultimately, creating new markets for storage and processing resources. To realize this vision, we propose Named Functions at the Edge (NFE), a platform where functions can i) be identified through a routable name, ii) be requested and moved (as data objects) to process data on demand at edge nodes, iii) pull raw or anonymized data from sensors and devices, iv) securely and privately return their results to the invoker and v) compensate each party for use of their data, storage, communication or computing resources via tracking and accountability mechanisms. We use an emergency evacuation application to motivate the need for NFE and demonstrate its potential.

    V-Edge: Virtual Edge Computing as an Enabler for Novel Microservices and Cooperative Computing

    As we move from 5G to 6G, edge computing is one of the concepts that needs revisiting. Its core idea is still intriguing: instead of sending all data and tasks from an end user's device to the cloud, possibly covering thousands of kilometers and introducing delays that are owed simply to limited propagation speed, edge servers deployed in close proximity to the user, e.g., at some 5G gNB, serve as proxy for the cloud. Yet this promising idea is hampered by the limited availability of such edge servers. In this paper, we discuss a way forward, namely the virtual edge computing (V-Edge) concept. V-Edge bridges the gap between cloud, edge, and fog by virtualizing all available resources including the end users' devices and making these resources widely available using well-defined interfaces. V-Edge also acts as an enabler for novel microservices as well as cooperative computing solutions. We introduce the general V-Edge architecture and we characterize some of the key research challenges to overcome, in order to enable widespread and even more powerful edge services.


    Multitenancy - Security Risks and Countermeasures

    Security within the cloud is of paramount importance as the interest and indeed utilization of cloud computing increase. Multitenancy in particular introduces unique security risks to cloud computing as a result of more than one tenant utilizing the same physical computer hardware and sharing the same software and data. The purpose of this paper is to explore the specific risks in cloud computing due to multitenancy and the measures that can be taken to mitigate those risks.

    Store Edge Networked Data (SEND): A Data and Performance Driven Edge Storage Framework

    The number of devices that the edge of the Internet accommodates and the volume of the data these devices generate are expected to grow dramatically in the years to come. As a result, managing and processing such massive amounts of data at the edge becomes a vital issue. This paper proposes "Store Edge Networked Data" (SEND), a novel framework for in-network storage management realized through data repositories deployed at the network edge. SEND considers different criteria (e.g., data popularity, data proximity from processing functions at the edge) to intelligently place different categories of raw and processed data at the edge based on system-wide identifiers of the data context, called labels. We implement a data repository prototype on top of the Google file system, which we evaluate based on real-world datasets of images and Internet of Things device measurements. To scale up our experiments, we perform a network simulation study based on synthetic and real-world datasets evaluating the performance and trade-offs of the SEND design as a whole. Our results demonstrate that SEND achieves data insertion times of 0.06ms-0.9ms, data lookup times of 0.5ms-5.3ms, and on-time completion of up to 92% of user requests for the retrieval of raw and processed data.
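The label-driven placement idea above can be sketched minimally: data items carry context labels, and each labeled category is stored at the repository closest to the processing functions that consume it. This is an assumption-laden illustration of the stated proximity criterion, not SEND's actual API; all names below are hypothetical.

```python
# Minimal sketch of label-driven placement: store each labeled data category
# at the repository with the fewest hops to its consuming function.

def assign_repository(item_labels, repo_distance):
    """repo_distance: {label: {repository: hops to the consuming function}}."""
    placement = {}
    for label in item_labels:
        # Pick the repository with the fewest hops for this label's consumers.
        placement[label] = min(repo_distance[label], key=repo_distance[label].get)
    return placement

dist = {"camera/raw": {"edge-1": 1, "edge-2": 3},
        "camera/processed": {"edge-1": 4, "edge-2": 2}}
print(assign_repository(["camera/raw", "camera/processed"], dist))
# {'camera/raw': 'edge-1', 'camera/processed': 'edge-2'}
```

A fuller model would also weigh data popularity, the other criterion the abstract names, e.g., by replicating hot labels to several repositories.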

    Resource Provisioning and Allocation in Function-as-a-Service Edge-Clouds

    Edge computing has emerged as a new paradigm to bring cloud applications closer to users for increased performance. Unlike back-end cloud systems, which consolidate their resources in a centralized data center location with virtually unlimited capacity, edge-clouds comprise distributed resources at various computation spots, each with very limited capacity. In this paper, we consider Function-as-a-Service (FaaS) edge-clouds where application providers deploy their latency-critical functions that process user requests with strict response time deadlines. In this setting, we investigate the problem of resource provisioning and allocation. After formulating the optimal solution, we propose resource allocation and provisioning algorithms across the spectrum from fully-centralized to fully-decentralized. We evaluate the performance of these algorithms in terms of their ability to utilize CPU resources and meet request deadlines under various system parameters. Our results indicate that practical decentralized strategies, which require no coordination among computation spots, achieve performance close to that of the optimal fully-centralized strategy, which incurs coordination overheads.
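A fully decentralized rule of the kind the abstract describes can be sketched as a local admission test: each computation spot admits a function request only if it has spare CPU and can still meet the request's response deadline, with no coordination across spots. The structure and thresholds below are assumed for illustration and do not come from the paper.

```python
# Hedged sketch of a decentralized FaaS admission rule: a computation spot
# admits a request only if spare CPU exists and the deadline is still met.

def admit(request, spot):
    """request: (cpu_needed, deadline_ms); spot: capacity/latency state dict."""
    cpu, deadline = request
    free = spot["cpu_capacity"] - spot["cpu_used"]
    if cpu <= free and spot["service_time_ms"] <= deadline:
        spot["cpu_used"] += cpu  # reserve CPU for the admitted function
        return True
    return False

spot = {"cpu_capacity": 4, "cpu_used": 3, "service_time_ms": 20}
print(admit((1, 50), spot))  # True: fits in the last CPU and meets the deadline
print(admit((1, 50), spot))  # False: the spot is now full
```

Because the test uses only spot-local state, it sits at the fully-decentralized end of the spectrum the paper evaluates.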