
    Urban Edge Computing

    The new paradigm of Edge Computing aims to bring resources for storage and computation closer to end devices, alleviating stress on core networks and enabling low-latency mobile applications. While Cloud Computing carries out processing in large centralized data centers, Edge Computing leverages smaller-scale resources, often termed cloudlets, in the vicinity of users. Edge Computing is expected to support novel applications (e.g., mobile augmented reality) and the growing number of connected devices (e.g., from the domain of the Internet of Things). Today, however, we lack essential building blocks for the widespread public availability of Edge Computing, especially in urban environments. This thesis makes several contributions to the understanding, planning, deployment, and operation of Urban Edge Computing infrastructures. We start from a broad perspective by conducting a thorough analysis of the field of Edge Computing, systematizing use cases, discussing potential benefits, and analyzing the potential of Edge Computing for different types of applications. We propose re-using existing physical infrastructures (cellular base stations, WiFi routers, and augmented street lamps) in an urban environment to provide computing resources by upgrading those infrastructures with cloudlets. On the basis of a real-world dataset containing the locations of those infrastructures and mobility traces from two mobile applications, we conduct the first large-scale measurement study of urban cloudlet coverage, using four different coverage metrics. After having shown the viability of using those existing infrastructures in an urban environment, we make an algorithmic contribution to the problem of which locations to upgrade with cloudlets, given the heterogeneous nature (with regard to communication range, computing resources, and costs) of the underlying infrastructure.
Our proposed solution operates locally on grid cells and can adapt to the desired tradeoff between quality of service and deployment costs. Using a simulation experiment on the same mobility traces, we show the effectiveness of our strategy. Existing mechanisms for computation offloading typically achieve loose coupling between the client device and the computing resources by requiring prior transfers of heavyweight execution environments. In light of this deficiency, we propose the concept of store-based microservice onloading, embedded in a flexible runtime environment for Edge Computing. Our runtime environment operates at a microservice-level granularity; services are made available in a repository, the microservice store, and, upon request from a client, transferred from the store to execution agents at the edge. Furthermore, our Edge Computing runtime is able to share running instances among multiple users and supports the seamless definition and execution of service chains through distributed message queues. Empirical measurements of the implemented approach showed up to a 13-fold reduction in end-to-end latency and energy savings of up to 94% for the mobile device. We provide three contributions regarding strategies and adaptations of an Edge Computing system at runtime. Existing strategies for the placement of data and computation components are not adapted to the requirements of a heterogeneous (e.g., with regard to varying resources) edge environment. The placement of functional parts of an application is a core component of runtime decisions. This problem is computationally hard and has been insufficiently explored for service chains whose topologies are typical for Edge Computing environments (e.g., with regard to the location of data sources and sinks). To this end, we present two classes of heuristics that make the problem more tractable.
We implement representatives of each class and show that they substantially reduce the time it takes to find a solution to the placement problem while introducing only a small optimality gap. The placement of data (e.g., data captured by mobile devices) in Edge Computing should take into account the user's context and the possible intent of sharing this data. Especially in the case of overloaded networks, e.g., during large-scale events, edge infrastructure can be beneficial for data storage and local dissemination. To address this challenge, we propose vStore, a middleware that, based on a set of rules, decouples applications from pre-defined storage locations in the cloud. We report results from a field study with a demonstration application, showing that we were able to reduce cloud storage in favor of proximate micro-storage at the edge. As a final contribution, we explore the adaptation possibilities of microservices themselves. We suggest making microservices adaptable in three dimensions: (i) in the algorithms they use to perform a certain task, (ii) in their parameters, and (iii) in the auxiliary data they require. These adaptations can be leveraged to trade a decreased quality of the computation (e.g., less accurate or partly incorrect results) for a faster execution time. We argue that this is an important building block of an Edge Computing system in view of both constrained resources and strict requirements on computation latencies. We conceptualize an adaptable microservice execution framework and define the problem of choosing the service variant, building upon the design of our previously introduced Edge Computing runtime environment. For a case study, we implement representative examples (e.g., from the fields of computer vision and image processing) and outline the practical influence of the abovementioned tradeoff.
In conclusion, this dissertation systematically analyzes the field of Urban Edge Computing, thereby contributing to its general understanding. Our contributions provide several important building blocks for the realization of a public Edge Computing infrastructure in an urban environment.
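The variant-selection problem for adaptable microservices described above can be sketched as a small optimization: pick the highest-quality variant that still meets a latency budget. The variant names, latencies, and quality scores below are purely illustrative assumptions, not values from the thesis.

```python
# Illustrative sketch of choosing a microservice variant under a latency
# budget; all variants and numbers are hypothetical examples.
from dataclasses import dataclass

@dataclass
class Variant:
    name: str          # e.g., an algorithm choice or parameter setting
    latency_ms: float  # expected execution time
    quality: float     # result quality in [0, 1]

def choose_variant(variants, latency_budget_ms):
    """Pick the highest-quality variant that fits the latency budget;
    fall back to the fastest variant if none does."""
    feasible = [v for v in variants if v.latency_ms <= latency_budget_ms]
    if not feasible:
        return min(variants, key=lambda v: v.latency_ms)
    return max(feasible, key=lambda v: v.quality)

variants = [
    Variant("full_resolution", 120.0, 0.98),
    Variant("downsampled", 45.0, 0.85),
    Variant("coarse_model", 15.0, 0.60),
]
print(choose_variant(variants, 50.0).name)  # downsampled
```

With a generous budget the sketch returns the highest-quality variant; under a tight budget it degrades gracefully to the fastest one, mirroring the quality-versus-latency tradeoff the thesis describes.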

    Edge Computing via Dynamic In-network Processing


    BigMEC: Scalable Service Migration for Mobile Edge Computing

    The proximity of Mobile Edge Computing offers the potential for offloading low-latency closed-loop applications from mobile devices. However, to repair decreases in quality of service (QoS), e.g., those resulting from user mobility, the placement of service instances must be continually updated. This is essential for mission-critical applications that cannot tolerate decreased QoS, such as virtual reality or networked control systems. This paper presents BigMEC, a decentralized service placement algorithm that achieves scalable, fast, and high-quality placements by making local service migration decisions immediately when a drop in QoS is detected. The algorithm relies on reinforcement learning to adapt to unknown scenarios and to approximate long-term optimal placement updates by taking future transition costs into account. BigMEC limits each decentralized migration decision to nearby edge sites. Thus, decision computation times are independent of the number of nodes in the network and well below 10 ms in our experimental setup. Our ablation study validates that, using its scalable approach to decentralized resource conflict resolution, BigMEC quickly approaches optimal placement with increasing local view size, and that it can reliably learn to approximate long-term optimal migration decisions given only a black-box optimization objective.
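The core idea of a local migration decision can be sketched as a greedy comparison among nearby edge sites, migrating only if a candidate's latency plus its migration cost beats staying put. This is a simplified illustration under assumed inputs, not the actual BigMEC algorithm, which additionally uses reinforcement learning to account for future transition costs.

```python
# Simplified sketch (not the actual BigMEC algorithm): a purely local,
# greedy migration decision restricted to nearby edge sites.
def pick_site(current, candidates, latency_ms, migration_cost_ms):
    """Stay at `current` unless a nearby site's latency plus its one-off
    migration cost is lower than the current latency."""
    best, best_cost = current, latency_ms[current]
    for site in candidates:
        cost = latency_ms[site] + migration_cost_ms.get(site, 0.0)
        if cost < best_cost:
            best, best_cost = site, cost
    return best

# Hypothetical example: user mobility raised latency at site "A".
latency = {"A": 30.0, "B": 12.0, "C": 25.0}
migration = {"B": 5.0, "C": 2.0}
print(pick_site("A", ["B", "C"], latency, migration))  # B
```

Because the decision only inspects a fixed-size neighborhood, its cost is independent of the total network size, which is the scalability property the abstract highlights.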

    Fog Computing: Current Research and Future Challenges

    Acknowledging shortcomings of cloud computing, recent research efforts have been devoted to fog computing. Motivated by a rapidly increasing number of devices at the extreme edge of the network (e.g., mobile phones, connected cars, IoT sensors) that imply the need for timely and local processing, fog computing offers a promising solution to move computational capabilities closer to the data generated by those devices. In this vision paper, we summarize these current research efforts, describe applications where fog computing is beneficial, and identify future challenges that remain open on the way to a breakthrough for fog computing.

    On Scalable In-Network Operator Placement for Edge Computing


    VirtualStack: Green High Performance Network Protocol Processing Leveraging FPGAs

    In times of cloud services and IoT, network protocol processing accounts for a large share of today's CPU utilization. Foong et al. proposed the rule of thumb for TCP that a single-core CPU needs about 1 Hz of clock frequency to produce 1 bit/s worth of TCP data packets. Unfortunately, CPU speed has stagnated at around 5 GHz in recent years, resulting in an upper limit of 5 Gbit/s of throughput with single-threaded network processing. Further, CPUs featuring such high clock rates (e.g., the Intel Core i7-8086K) have rated TDPs of around 95 W, resulting in very high power consumption in high-throughput situations. Meanwhile, industry offers some hardware acceleration for TCP as part of server network cards to relieve the server CPUs and increase energy efficiency. However, this provides only limited support, as state and connection management still require the CPU of the host system. In this paper, we present an approach based on field-programmable gate arrays (FPGAs) that not only frees up CPU cycles but provides a scalable and energy-efficient concept to fully utilize high-speed network interfaces while maintaining the flexibility of software solutions. For our evaluation, we used the NetFPGA SUME, proving that it achieves the line rate of the connected SFP+ ports while power consumption stays below 6 W. By leveraging network protocol virtualization, the hardware acceleration approach is not only deployable but stays flexible enough to adapt to new networking paradigms quickly.
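The Foong et al. rule of thumb cited above yields the abstract's 5 Gbit/s ceiling with simple arithmetic, sketched here as a back-of-the-envelope check:

```python
# Back-of-the-envelope check of the Foong et al. rule of thumb:
# ~1 Hz of CPU clock per 1 bit/s of single-threaded TCP throughput.
cpu_hz = 5e9                 # ~5 GHz, where single-core clock rates have stagnated
bits_per_hz = 1.0            # the rule of thumb's ratio
max_tcp_bps = cpu_hz * bits_per_hz
print(max_tcp_bps / 1e9)     # 5.0 -> ~5 Gbit/s single-threaded ceiling
```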

    Mastering Music Instruments through Technology in Solo Learning Sessions

    Mastering a musical instrument requires time-consuming practice, even if students are guided by an expert. For the overwhelming majority of the time, students practice by themselves, and traditional teaching materials, such as videos or textbooks, lack possibilities for interaction and guidance. Adequate feedback, however, is highly important to prevent the acquisition of incorrect motions and to avoid potential health problems. In this paper, we envision musical instruments as smart objects that enhance solo learning sessions. We give an overview of existing approaches and setups and discuss them. Finally, we conclude with recommendations for designing smart and augmented musical instruments for learning purposes.