
    Scheduling periodic messages on a shared link

    Cloud-RAN is a recent architecture for mobile networks in which the processing units are located in distant data centers, whereas until now they were attached to the antennas. The main challenge, in order to fulfill protocol constraints, is to guarantee low latency for the periodic messages sent from each antenna to its processing unit and back. The problem we address is to find a periodic sending scheme for these messages without contention or buffering, when all messages are of the same size and the period is fixed. We study the periodic message assignment problem modeling this situation on a common topology, where contention arises from a single link shared by all antennas. The problem is reminiscent of coupled-task scheduling, but the periodicity introduces a new twist. We study how the problem behaves with regard to the load of the shared link. The main contributions are polynomial-time algorithms which always find a solution for messages of arbitrary size and load at most 2/5, or for messages of size one and load at most ϕ − 1, the golden ratio conjugate. We also prove that a randomized greedy algorithm finds a solution on almost all instances with high probability, explaining why most greedy algorithms work so well in practice. Comment: 23 pages, 18 figures.
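
    The randomized greedy result lends itself to a small illustration. The sketch below is not the paper's algorithm, only a minimal model of the idea under assumptions of our own: time is slotted, the period and message size are integers, and each message crosses the shared link once in each direction, the two crossings separated by a fixed per-message delay.

        import random

        def assign_offsets(period, size, delays, seed=0, max_tries=10_000):
            """Randomized greedy assignment of send offsets on a shared link.

            Message i occupies the link during [o_i, o_i + size) on the way
            out and [o_i + delays[i], o_i + delays[i] + size) on the way
            back, both modulo `period`.  Returns {message: offset}, or None
            if this greedy run fails (success is not guaranteed).
            """
            rng = random.Random(seed)
            busy = [False] * period                  # slot-level occupancy

            def slots(start):
                return [(start + t) % period for t in range(size)]

            offsets = {}
            order = list(range(len(delays)))
            rng.shuffle(order)                       # random message order
            for i in order:
                for _ in range(max_tries):
                    off = rng.randrange(period)      # random candidate offset
                    taken = slots(off) + slots(off + delays[i])
                    if all(not busy[s] for s in taken):
                        for s in taken:
                            busy[s] = True
                        offsets[i] = off
                        break
                else:
                    return None                      # no free offset found
            return offsets

        # Six size-1 messages in a period of 20, i.e. load 6/20 = 0.3 < 2/5.
        print(assign_offsets(period=20, size=1, delays=[3, 5, 7, 2, 9, 4]))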

    Real-Time Containers: A Survey

    Container-based virtualization has gained significant importance in the deployment of software applications in cloud-based environments. The technology relies entirely on operating system features and does not require a virtualization layer (hypervisor) that introduces performance degradation. Container-based virtualization makes it possible to co-locate multiple isolated containers on a single computation node as well as to decompose an application into multiple containers distributed among several hosts (e.g., in a fog computing layer). The technology also seems very promising in other domains, e.g., in industrial automation and the automotive and aviation industries, where mixed-criticality containerized applications from various vendors can be co-located on shared resources. However, such industrial domains often require real-time behavior (i.e., the capability to meet predefined deadlines). These capabilities are not yet fully supported by container-based virtualization. In this work, we provide a systematic literature survey that summarizes the effort of the research community on bringing real-time properties to container-based virtualization. We categorize existing work into main research areas and identify points where the technology is still immature.
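
    To make the missing capability tangible, here is a hedged sketch of the kind of OS feature a real-time container must be allowed to reach: a real-time scheduling policy. Whether the call succeeds depends entirely on how the container runtime was configured; the priority value is illustrative.

        import os

        def enter_realtime(priority=50):
            """Request the Linux SCHED_FIFO real-time policy for this process.

            Inside a container this succeeds only if the runtime granted the
            necessary privileges (e.g. CAP_SYS_NICE, an rtprio limit and, on
            cgroup v1, a real-time CPU budget); otherwise the kernel refuses.
            """
            try:
                os.sched_setscheduler(0, os.SCHED_FIFO, os.sched_param(priority))
                return True
            except PermissionError:
                return False             # container lacks real-time rights

        if __name__ == "__main__":
            print("real-time policy:", "enabled" if enter_realtime() else "denied")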

    Latency-aware Radio Resource Allocation over Cloud RAN for Industry 4.0

    The notion of Cloud RAN is taking a prominent role in the narrative around next-generation wireless infrastructure. It is also seen as a means to support industrial communication systems. To provide reliable wireless connectivity for industrial deployments by conventional means, the cloud infrastructure needs to be reliable and incur little latency, which, however, contradicts the stochastic nature of cloud infrastructures. In this paper, we investigate the impact of stochastic delay on a radio resource allocation process deployed in Cloud RAN. We propose a strategy for realizing timely cloud responses and then adapt that strategy to a radio resource allocation problem. Further, we evaluate the strategies in an industrial IoT scenario using a simulated environment. Experimentation shows that, with our proposed strategy, a significant improvement in timely responses can be achieved even in a noisy cloud environment. Improvements in resource utilization can also be attained for a resource allocation process deployed over Cloud RAN with this strategy.
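
    The abstract does not spell out the strategy, so the sketch below shows only one generic pattern for bounding cloud response time: wait for the cloud allocator up to a deadline, then fall back to a conservative local allocation. All names here (cloud_allocate, local_fallback, the PRB strings) are hypothetical.

        import random
        import time
        from concurrent.futures import ThreadPoolExecutor, TimeoutError

        # A long-lived pool, so a late cloud reply never blocks the caller.
        _pool = ThreadPoolExecutor(max_workers=4)

        def allocate_with_deadline(cloud_allocate, local_fallback, request,
                                   deadline_s):
            """Ask the cloud allocator, but never wait past the deadline.

            cloud_allocate models a Cloud RAN resource allocator with
            stochastic delay; local_fallback computes a conservative
            allocation locally when the cloud reply would arrive too late.
            """
            future = _pool.submit(cloud_allocate, request)
            try:
                return future.result(timeout=deadline_s), "cloud"
            except TimeoutError:
                return local_fallback(request), "fallback"

        # Toy demo: a cloud call whose delay sometimes exceeds a 10 ms budget.
        def slow_cloud(request):
            time.sleep(random.uniform(0.0, 0.02))    # stochastic cloud delay
            return ["PRB-3", "PRB-7"]

        print(allocate_with_deadline(slow_cloud, lambda r: ["PRB-0"], {}, 0.010))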

    Uncovering Bugs in Distributed Storage Systems during Testing (not in Production!)

    Testing distributed systems is challenging due to multiple sources of nondeterminism. Conventional testing techniques, such as unit, integration and stress testing, are ineffective in preventing serious but subtle bugs from reaching production. Formal techniques, such as TLA+, can only verify high-level specifications of systems at the level of logic-based models, and fall short of checking the actual executable code. In this paper, we present a new methodology for testing distributed systems. Our approach applies advanced systematic testing techniques to thoroughly check that the executable code adheres to its high-level specifications, which significantly improves coverage of important system behaviors. Our methodology has been applied to three distributed storage systems in the Microsoft Azure cloud computing platform. In the process, numerous bugs were identified, reproduced, confirmed and fixed. These bugs required a subtle combination of concurrency and failures, making them extremely difficult to find with conventional testing techniques. An important advantage of our approach is that a bug is uncovered in a small setting and witnessed by a full system trace, which dramatically increases the productivity of debugging.
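
    The methodology in the paper is far richer than can be shown here; this toy sketch only illustrates its core mechanism, controlled and replayable nondeterminism: every scheduling and failure decision is drawn from one seeded RNG, so a failing seed reproduces the full buggy trace. The test scenario is invented for illustration.

        import random

        def explore(test, iterations=1000):
            """Run `test` under many seeded schedules; return a failing seed."""
            for seed in range(iterations):
                try:
                    test(random.Random(seed))
                except AssertionError as exc:
                    return seed, exc        # small, replayable witness
            return None

        def test_replication(rng):
            """Toy storage test: two nodes each add 1 to a replicated counter."""
            ops = [("A", 1), ("B", 1)]
            rng.shuffle(ops)                # controlled interleaving
            crash_after = rng.randrange(3)  # controlled failure injection
            log = [op for step, op in enumerate(ops) if step < crash_after]
            assert sum(d for _, d in log) == 2, f"lost update, log={log}"

        print(explore(test_replication))    # e.g. (0, AssertionError(...))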

    Integration of Clouds to Industrial Communication Networks

    Cloud computing, owing to its ubiquitousness, scalability and on-demand access, has transformed many traditional sectors, such as telecommunications and manufacturing. As the Fifth Generation Wireless Specifications (5G) emerge, a demand for ubiquitous and re-configurable computing resources to handle the tremendous traffic from omnipresent mobile devices has been put forward. Therein lies the adoption of the cloud-native model in the service delivery of telecommunication networks. However, it takes a phased approach to successfully transform the traditional Telco infrastructure into a softwarized model, especially for Radio Access Networks (RANs), which, as of now, mostly rely on purpose-built Digital Signal Processors (DSPs) for computing and processing tasks. On the other hand, Industry 4.0 is leading the digital transformation of the manufacturing sectors, wherein industrial networks are evolving towards wireless connectivity and automation process management is shifting to clouds. However, such integration may introduce unwanted disturbances to critical industrial automation processes. This makes it challenging to guarantee the performance of critical applications when different systems are integrated. In the work presented in this thesis, we explore the feasibility of integrating wireless communication, industrial networks and cloud computing. We investigate the delay-induced challenges and the performance impacts of using cloud-native models for critical applications, and we design a solution aimed at diminishing the performance degradation caused by the integration of cloud computing.

    Private 5G and its Suitability for Industrial Networking

    5G was and still is surrounded by many promises and buzzwords, such as the famous 1 ms, real-time, and Ultra-Reliable and Low-Latency Communications (URLLC). This was partly intended to attract vertical industries as new customers for mobile networks to be deployed in their factories. With the permission of federal agencies, companies have deployed their own private 5G networks to test new use cases enabled by 5G. But what has been missing, apart from all the marketing, is knowledge of what 5G can really do. Private 5G networks are envisioned to enable new use cases with strict latency requirements, such as robot control. This work examines in great detail the capabilities of the current 5G Release 15 as a private network, and in particular its suitability for time-critical communications. For that, a testbed was designed to measure One-Way Delays (OWDs) and Round-Trip Times (RTTs) with high accuracy. The measurements were conducted in 5G Non-Standalone (NSA) and Standalone (SA) networks and are the first published results. The evaluation revealed findings that were neither obvious nor identified by previous work. For example, a strong impact of the packet rate on the resulting OWD and RTT was found. It was also found that typically 95% of the SA downlink end-to-end packet delays are in the range of 4 ms to 10 ms, indicating a fairly wide spread of packet delays, with the Inter-Packet Delay Variation (IPDV) between consecutive packets distributed in the millisecond range. Surprisingly, the RTT also seems to depend on the direction, i.e., Downlink (DL) or Uplink (UL), from which a round-trip communication was initiated. The Inter-Arrival Time (IAT) of packets likewise has an important effect on the RTT distribution. These findings demonstrate the need to critically examine 5G and any successors in terms of their real-time capabilities. In addition to the end-to-end OWD and RTT, the delays caused by 4G and 5G Core processing have been investigated as well. Current state-of-the-art 4G and 5G Core implementations exhibit long-tailed delay distributions. To overcome such limitations, modern packet processing solutions have been evaluated in terms of their respective tail latency. The hardware-based solution was able to process packets with deterministic delay, while the software-based solutions still achieved soft real-time results. These results allow the selection of the right technology for a use case depending on its tail-latency requirements. In summary, the study of the current 5G Release 15 yielded many insights into the suitability of 5G for time-critical communications. The measurement framework, analysis methods, and results will inform the further development and refinement of private 5G campus networks for industrial use cases.
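
    As an illustration of this kind of measurement (not the thesis's testbed, which relies on far more accurate timestamping), here is a minimal software-timestamped RTT probe. It assumes a UDP echo service is reachable behind the 5G link; the address and port are placeholders, and true OWD measurement would additionally require synchronized clocks or hardware timestamps.

        import socket
        import statistics
        import time

        def measure_rtt(host, port, count=1000, interval_s=0.01,
                        payload=b"x" * 32):
            """Probe a UDP echo service and summarize round-trip times.

            interval_s sets the packet rate; since the packet rate itself was
            found to shift the delay distribution, it is worth sweeping.
            Assumes most probes are answered within the 1 s timeout.
            """
            sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
            sock.settimeout(1.0)
            rtts_ms = []
            for _ in range(count):
                t0 = time.monotonic()
                sock.sendto(payload, (host, port))
                try:
                    sock.recv(2048)
                    rtts_ms.append((time.monotonic() - t0) * 1e3)
                except socket.timeout:
                    pass                            # count as a loss
                time.sleep(interval_s)
            deltas = [b - a for a, b in zip(rtts_ms, rtts_ms[1:])]
            return {
                "p50_ms": statistics.median(rtts_ms),
                "p95_ms": statistics.quantiles(rtts_ms, n=20)[18],  # 95th pct
                "delay_variation_ms": statistics.pstdev(deltas),
            }

        # Hypothetical endpoint: an echo server behind the 5G user plane.
        # print(measure_rtt("192.0.2.10", 7777, count=200))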