
    Real-Time Containers: A Survey

    Container-based virtualization has gained significant importance in the deployment of software applications in cloud-based environments. The technology relies entirely on operating system features and does not require a virtualization layer (hypervisor) that would introduce performance degradation. Container-based virtualization makes it possible to co-locate multiple isolated containers on a single computation node as well as to decompose an application into multiple containers distributed among several hosts (e.g., in the fog computing layer). The technology also seems very promising in other domains, e.g., in industrial automation, the automotive industry, and aviation, where mixed-criticality containerized applications from various vendors can be co-located on shared resources. However, such industrial domains often require real-time behavior (i.e., the capability to meet predefined deadlines), which container-based virtualization does not yet fully support. In this work, we provide a systematic literature survey that summarizes the effort of the research community on bringing real-time properties to container-based virtualization. We categorize the existing work into main research areas and identify points where the technology is still immature.
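
    A minimal sketch of the host-level mechanisms this line of work builds on (not taken from the survey itself): pinning a containerized process to a dedicated CPU and switching it to the Linux SCHED_FIFO real-time policy. The priority value and CPU index are illustrative assumptions, and the calls require root or CAP_SYS_NICE.

        # Illustrative sketch: CPU pinning plus a fixed-priority real-time policy
        # for a (containerized) process on Linux. Values below are assumptions.
        import os

        def make_realtime(pid: int, priority: int = 80, cpu: int = 2) -> None:
            # Restrict the process to one CPU to limit scheduling interference.
            os.sched_setaffinity(pid, {cpu})
            # Switch to the fixed-priority real-time scheduler SCHED_FIFO.
            os.sched_setscheduler(pid, os.SCHED_FIFO, os.sched_param(priority))

        if __name__ == "__main__":
            make_realtime(os.getpid())
            print(os.sched_getscheduler(os.getpid()) == os.SCHED_FIFO)  # expect True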

    A Taxonomy for Management and Optimization of Multiple Resources in Edge Computing

    Edge computing is promoted to meet the increasing performance needs of data-driven services by using computational and storage resources close to the end devices, at the edge of the current network. To achieve higher performance in this new paradigm, one has to consider how to combine efficient resource usage at all three layers of the architecture: end devices, edge devices, and the cloud. While cloud capacity is elastically extendable, end devices and edge devices are resource-constrained to varying degrees. Hence, efficient resource management is essential to make edge computing a reality. In this work, we first present terminology and architectures to characterize current work within the field of edge computing. Then, we review a wide range of recent articles and categorize relevant aspects in terms of four perspectives: resource type, resource management objective, resource location, and resource use. This taxonomy and the ensuing analysis are used to identify gaps in the existing research. Among several research gaps, we found that research is less prevalent on data, storage, and energy as resources, and less extensive towards the estimation, discovery, and sharing objectives. As for resource types, the most well-studied resources are computation and communication. Our analysis shows that resource management at the edge requires a deeper understanding of how methods applied at different levels and geared towards different resource types interact. Specifically, the impact of mobility and of collaboration schemes requiring incentives is expected to differ in edge architectures compared to classic cloud solutions. Finally, we find that fewer works are dedicated to the study of non-functional properties or to quantifying the footprint of resource management techniques, including edge-specific means of migrating data and services.
    Comment: Accepted in the Special Issue Mobile Edge Computing of the Wireless Communications and Mobile Computing journal
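
    As an illustration of the taxonomy's perspectives (resource type, objective, location, use), the sketch below models a toy placement decision across the three architectural layers; the Node fields, numbers, and latency-driven objective are assumptions made for illustration, not a method from the survey.

        # Toy sketch: placing a task on the end device, an edge device, or the cloud.
        # Fields and numbers are illustrative assumptions only.
        from dataclasses import dataclass

        @dataclass
        class Node:
            name: str        # resource location: end device, edge device, or cloud
            cpu_free: float  # available computation resource (normalized 0..1)
            rtt_ms: float    # communication cost to reach the node

        def place(task_cpu: float, deadline_ms: float, nodes: list) -> str:
            # Objective: among nodes that can host the task within the deadline,
            # pick the one with the lowest round-trip time.
            feasible = [n for n in nodes
                        if n.cpu_free >= task_cpu and n.rtt_ms <= deadline_ms]
            return min(feasible, key=lambda n: n.rtt_ms).name if feasible else "reject"

        nodes = [Node("end-device", 0.1, 0.0), Node("edge", 0.6, 5.0), Node("cloud", 1.0, 60.0)]
        print(place(task_cpu=0.4, deadline_ms=20.0, nodes=nodes))  # -> edge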

    Understanding the Computational Requirements of Virtualized Baseband Units using a Programmable Cloud Radio Access Network Testbed

    Cloud Radio Access Network (C-RAN) is emerging as a transformative architecture for the next generation of mobile cellular networks. In C-RAN, the Baseband Unit (BBU) is decoupled from the Base Station (BS) and consolidated in a centralized processing center. While the potential benefits of C-RAN have been studied extensively from the theoretical perspective, only a few works address the system implementation issues and characterize the computational requirements of the virtualized BBU. In this paper, a programmable C-RAN testbed is presented in which the BBU is virtualized using the OpenAirInterface (OAI) software platform, and the eNodeB and User Equipment (UEs) are implemented using USRP boards. Extensive experiments have been performed in an FDD downlink LTE emulation system to characterize the performance and computing resource consumption of the BBU under various conditions. It is shown that the processing time and CPU utilization of the BBU increase with the channel resources and with the Modulation and Coding Scheme (MCS) index, and that the CPU utilization percentage can be well approximated as a linearly increasing function of the maximum downlink data rate. These results provide real-world insights into the characteristics of the BBU in terms of computing resource and power consumption, which may serve as inputs for the design of efficient resource-provisioning and allocation strategies in C-RAN systems.
    Comment: In Proceedings of the IEEE International Conference on Autonomic Computing (ICAC), July 201
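
    Since the abstract reports that BBU CPU utilization is well approximated by a linear function of the maximum downlink data rate, the sketch below fits such a model with numpy; the sample points are placeholders, not measurements from the paper.

        # Fit CPU% ~ slope * rate + intercept (placeholder data, not the paper's).
        import numpy as np

        rate_mbps = np.array([10.0, 20.0, 40.0, 60.0])  # hypothetical peak DL rates
        cpu_util = np.array([15.0, 25.0, 45.0, 65.0])   # hypothetical CPU utilization %

        slope, intercept = np.polyfit(rate_mbps, cpu_util, deg=1)

        def predict_cpu(rate: float) -> float:
            # Predicted BBU CPU utilization (%) for a given downlink rate (Mbit/s).
            return slope * rate + intercept

        print(f"CPU% ~ {slope:.2f} * rate + {intercept:.2f}")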

    Edge Computing for Extreme Reliability and Scalability

    The massive number of Internet of Things (IoT) devices and their continuous data collection will lead to a rapid increase in the scale of collected data. Processing all of these data at the central cloud server is inefficient, and may even be infeasible or unnecessary. Hence, the task of processing the data is pushed to the network edges, introducing the concept of Edge Computing. Processing the information closer to the source of the data (e.g., on gateways and on edge micro-servers) not only reduces the heavy workload of the central cloud but also decreases the latency for real-time applications by avoiding the unreliable and unpredictable network latency of communicating with the central cloud.
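
    A back-of-the-envelope sketch of the latency argument above, with assumed round-trip and processing times (not figures from this work): even if an edge node processes more slowly, avoiding the long round trip to the central cloud can reduce end-to-end latency.

        # Assumed numbers only: end-to-end latency at an edge node vs. the cloud.
        def end_to_end_ms(network_rtt_ms: float, processing_ms: float) -> float:
            # One round trip to ship the data and return the result, plus processing.
            return network_rtt_ms + processing_ms

        edge = end_to_end_ms(network_rtt_ms=5.0, processing_ms=20.0)    # nearby gateway
        cloud = end_to_end_ms(network_rtt_ms=80.0, processing_ms=10.0)  # remote data center
        print(f"edge: {edge} ms, cloud: {cloud} ms")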