
    FIFO anomaly is unbounded

    Virtual memory in computers is usually implemented by demand paging. For some page replacement algorithms the number of page faults may increase as the number of page frames increases. Belady, Nelson and Shedler constructed reference strings for which the FIFO page replacement algorithm produces nearly twice as many page faults in a larger memory than in a smaller one. They conjectured that 2 is a general bound. We prove that this ratio can be arbitrarily large.
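
    To make the anomaly concrete, the short Python sketch below (an illustration, not the paper's construction) simulates FIFO replacement on the classic reference string from this line of work and shows the fault count rising when a fourth frame is added.

```python
from collections import deque

def fifo_faults(refs, frames):
    """Simulate FIFO page replacement and count page faults."""
    memory = deque()                 # oldest page sits at the left end
    faults = 0
    for page in refs:
        if page not in memory:
            faults += 1
            if len(memory) == frames:
                memory.popleft()     # evict the page that entered first
            memory.append(page)
    return faults

# Classic reference string exhibiting Belady's anomaly under FIFO.
refs = [1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5]
print(fifo_faults(refs, 3))  # 9 page faults with 3 frames
print(fifo_faults(refs, 4))  # 10 page faults with 4 frames: more memory, more faults
```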

    PLRU Cache Domino Effects

    Domino effects have been shown to hinder a tight prediction of worst-case execution times (WCET) on real-time hardware. First investigated by Lundqvist and Stenström, domino effects caused by pipeline stalls were shown to exist in the PowerPC by Schneider. This paper extends the list of causes of domino effects by showing that the pseudo-LRU (PLRU) cache replacement policy can cause unbounded effects on the WCET. PLRU is used in the PowerPC PPC755, which is widely used in embedded systems, and in some x86 models.
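
    For readers unfamiliar with the policy, the sketch below models a single 4-way tree-PLRU set in Python as the policy is commonly described (an illustrative simplification, not the PPC755 hardware): three tree bits steer victim selection, and each access points the bits on its path away from the accessed way.

```python
class PLRUSet:
    """Illustrative model of one 4-way cache set with tree-PLRU replacement."""

    def __init__(self):
        self.lines = [None] * 4   # cached tags, one per way
        # bits[0] = root, bits[1] covers ways 0/1, bits[2] covers ways 2/3;
        # a bit value of 0 means "the victim search goes left".
        self.bits = [0, 0, 0]

    def _touch(self, way):
        # Point every tree bit on the path away from the accessed way.
        self.bits[0] = 1 if way < 2 else 0
        if way < 2:
            self.bits[1] = 1 if way == 0 else 0
        else:
            self.bits[2] = 1 if way == 2 else 0

    def _victim(self):
        # Follow the tree bits to the pseudo-least-recently-used way.
        if self.bits[0] == 0:
            return 0 if self.bits[1] == 0 else 1
        return 2 if self.bits[2] == 0 else 3

    def access(self, tag):
        """Return True on a hit, False on a miss (filling or evicting a way)."""
        hit = tag in self.lines
        if hit:
            way = self.lines.index(tag)
        else:
            # Fill an invalid way first; otherwise evict the PLRU victim.
            way = self.lines.index(None) if None in self.lines else self._victim()
            self.lines[way] = tag
        self._touch(way)
        return hit

s = PLRUSet()
for tag in ["a", "b", "c", "d", "a", "e"]:
    print(tag, "hit" if s.access(tag) else "miss")  # 'a' hits on its second access; 'e' evicts the PLRU victim
```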

    Timing Anomalies Reloaded

    Computing tight WCET bounds in the presence of timing anomalies - found in almost any modern hardware architecture - is a major challenge of timing analysis. In this paper, we renew the discussion about timing anomalies, demonstrating that even simple hardware architectures are prone to timing anomalies. We furthermore complete the list of timing-anomalous cache replacement policies, proving that the most-recently-used replacement policy (MRU) also exhibits a domino effect.
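
    The MRU policy discussed here is typically the bit-based variant of pseudo-LRU; the Python sketch below shows one common formulation of it (the exact variant is an assumption on our part, not taken from the paper): every line carries an MRU bit, an access sets it, and once all bits would be set the other bits are cleared.

```python
class MRUSet:
    """Sketch of an MRU-bit cache set; one common formulation, assumed here."""

    def __init__(self, ways=4):
        self.lines = [None] * ways
        self.mru = [0] * ways        # 1 = accessed since the last bit reset

    def _mark(self, way):
        self.mru[way] = 1
        if all(self.mru):            # the last zero bit was just consumed:
            self.mru = [0] * len(self.lines)
            self.mru[way] = 1        # start a new round, keeping only the latest access

    def access(self, tag):
        """Return True on a hit, False on a miss (with replacement)."""
        if tag in self.lines:
            self._mark(self.lines.index(tag))
            return True
        # Fill an invalid way first; otherwise evict the first line whose bit is 0.
        way = self.lines.index(None) if None in self.lines else self.mru.index(0)
        self.lines[way] = tag
        self._mark(way)
        return False
```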

    P4CEP: Towards In-Network Complex Event Processing

    In-network computing using programmable networking hardware is a strong trend in networking that promises to reduce latency and consumption of server resources through offloading to network elements (programmable switches and smart NICs). In particular, the data plane programming language P4 together with powerful P4 networking hardware has spawned projects offloading services into the network, e.g., consensus services or caching services. In this paper, we present a novel case for in-network computing, namely, Complex Event Processing (CEP). CEP processes streams of basic events, e.g., stemming from networked sensors, into meaningful complex events. Traditionally, CEP processing has been performed on servers or overlay networks. However, we argue in this paper that CEP is a good candidate for in-network computing along the communication path: it avoids detouring streams to distant servers, minimizing communication latency, while also exploiting the processing capabilities of novel networking hardware. We show that it is feasible to express CEP operations in P4 and also present a tool to compile CEP operations, formulated in our P4CEP rule specification language, to P4 code. Moreover, we identify challenges and problems that we have encountered to show future research directions for implementing full-fledged in-network CEP systems. Comment: 6 pages. Author's version.
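
    P4 itself is beyond the scope of this listing, but the small Python sketch below illustrates conceptually what a CEP operator does: it turns a stream of basic events into a complex event when a pattern is matched within a time window. The pattern, event fields, and window size are invented for illustration and are not taken from P4CEP.

```python
from collections import deque

# Hypothetical basic events: (timestamp, kind, value). The pattern below
# (a temperature spike followed by a pressure drop within 5 time units)
# is invented for illustration and is not part of P4CEP.
WINDOW = 5

def detect_complex_events(stream):
    """Yield a complex event whenever the spike-then-drop pattern matches."""
    recent_spikes = deque()                  # timestamps of recent temperature spikes
    for ts, kind, value in stream:
        while recent_spikes and ts - recent_spikes[0] > WINDOW:
            recent_spikes.popleft()          # expire spikes outside the window
        if kind == "temp" and value > 80:
            recent_spikes.append(ts)
        elif kind == "pressure" and value < 1.0 and recent_spikes:
            yield ("overheat_alarm", recent_spikes[0], ts)
            recent_spikes.clear()

events = [(1, "temp", 85), (2, "pressure", 2.0), (4, "pressure", 0.8)]
print(list(detect_complex_events(events)))   # [('overheat_alarm', 1, 4)]
```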

    On the Completeness of Verifying Message Passing Programs under Bounded Asynchrony

    We address the problem of verifying message passing programs, defined as a set of parallel processes communicating through unbounded FIFO buffers. We introduce a bounded analysis that explores a special type of computations, called k-synchronous. These computations can be viewed as (unbounded) sequences of interaction phases, each phase allowing at most k send actions (by different processes), followed by a sequence of receives corresponding to sends in the same phase. We give a procedure for deciding k-synchronizability of a program, i.e., whether every computation is equivalent (has the same happens-before relation) to one of its k-synchronous computations. We also show that reachability over k-synchronous computations and checking k-synchronizability are both PSPACE-complete. Furthermore, we introduce a class of programs called flow-bounded for which it is decidable whether there exists a k>0 such that the program is k-synchronizable.
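
    As a toy illustration of the phase shape described above (and only of the shape: the paper's actual procedure reasons up to happens-before equivalence over all computations), the Python snippet below checks whether a given split of a trace into phases has at most k sends by distinct processes per phase, followed only by receives of messages sent in that phase.

```python
def phase_ok(phase, k):
    """Check one phase: at most k sends by distinct processes, then matching receives."""
    sends = [a for a in phase if a[0] == "send"]
    recvs = [a for a in phase if a[0] == "recv"]
    return (
        len(sends) <= k
        and len({proc for _, _, proc in sends}) == len(sends)     # sends by different processes
        and phase == sends + recvs                                # all sends precede all receives
        and {m for _, m, _ in recvs} <= {m for _, m, _ in sends}  # receives match this phase's sends
    )

def k_synchronous(phases, k):
    return all(phase_ok(p, k) for p in phases)

# Two processes exchange a message each, then both receive.
phases = [[("send", "m1", "p1"), ("send", "m2", "p2"),
           ("recv", "m1", "p2"), ("recv", "m2", "p1")]]
print(k_synchronous(phases, 2))   # True
print(k_synchronous(phases, 1))   # False: the phase contains two sends
```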

    Kahn Process Networks and a Reactive Extension

    Kahn and MacQueen introduced a generic class of determinate asynchronous data-flow applications, called Kahn Process Networks (KPNs), with an elegant mathematical model and semantics in terms of Scott-continuous functions on data streams, together with an implementation model of independent asynchronous sequential programs communicating through FIFO buffers with blocking read and non-blocking write operations. The two are related by the Kahn Principle, which states that a realization according to the implementation model behaves as predicted by the mathematical function. Additional steps are required to arrive at an actual implementation of a KPN, to take care of scheduling independent processes on a single processor and to manage communication buffers. Because of the expressiveness of the KPN model, buffer sizes and schedules cannot in general be determined at design time and require dynamic run-time system support. Constraints are discussed that need to be placed on such system support so as to maintain the Kahn Principle. We then discuss a possible extension of the KPN model to include the possibility of sporadic, reactive behavior, which is not possible in the standard model. The extended model is called Reactive Process Networks. We introduce its semantics, and look at analyzability and at more constrained data-flow models combined with reactive behavior.
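
    As a concrete illustration of the implementation model (independent sequential processes, FIFO channels, blocking read, non-blocking write), here is a minimal two-process KPN in Python; it is a didactic toy with a bounded run, not a KPN runtime with scheduling and buffer management.

```python
import threading
import queue

def producer(out_fifo):
    """Independent sequential process: emits a stream of tokens (non-blocking writes)."""
    for i in range(5):
        out_fifo.put(i)              # unbounded queue, so put() never blocks
    out_fifo.put(None)               # end-of-stream marker for this toy example

def doubler(in_fifo, out_fifo):
    """Blocking read of each token, transform it, forward it downstream."""
    while True:
        x = in_fifo.get()            # blocking read, as in the KPN implementation model
        if x is None:
            out_fifo.put(None)
            return
        out_fifo.put(2 * x)

a, b = queue.Queue(), queue.Queue()  # FIFO channels
threading.Thread(target=producer, args=(a,)).start()
threading.Thread(target=doubler, args=(a, b)).start()

while (v := b.get()) is not None:
    print(v)                         # prints 0 2 4 6 8 regardless of thread scheduling
```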

    ASiMOV: Microservices-based verifiable control logic with estimable detection delay against cyber-attacks to cyber-physical systems

    Automatic control in Cyber-Physical Systems brings advantages but also increased risks due to cyber-attacks. This Ph.D. thesis proposes a novel reference architecture for distributed control applications that increases security against cyber-attacks on the control logic. The core idea is to replicate each instance of a control application and to detect attacks by verifying their outputs. The verification logic has at its disposal an exact model of the control logic, although the two logics are decoupled onto two different devices. The verification is asynchronous to the feedback control loop, to avoid introducing a delay between the controller(s) and system(s). The time required to detect a successful attack is analytically estimable, which enables control-theoretic techniques to prevent damage through appropriate planning decisions. The proposed architecture for a controller and an Intrusion Detection System is composed of event-driven autonomous components (microservices), which can be deployed as separate virtual machines (e.g., containers) on cloud platforms. Under the proposed architecture, orchestration techniques enable dynamic re-deployment, acting as a mitigation or prevention mechanism defined at the level of the computer architecture. The proposal, which we call ASiMOV (Asynchronous Modular Verification), is based on a model that separates the state of a controller from the state of its execution environment. We provide details of the model and of a microservices implementation. Through an analysis of the delay introduced in both the control loop and the detection of attacks, we provide guidelines to determine which control systems are suitable for adopting ASiMOV. Simulations show the behavior of ASiMOV both in the absence and in the presence of cyber-attacks.
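
    The core idea (replicated control logic whose outputs are checked asynchronously, outside the feedback loop) can be sketched in a few lines of Python. The controller law, queue-based decoupling, and detection threshold below are illustrative assumptions, not the ASiMOV implementation.

```python
import queue

class PIController:
    """Deterministic control logic; the same class serves as the verifier's exact model."""
    def __init__(self, kp=0.5, ki=0.1):
        self.kp, self.ki, self.integral = kp, ki, 0.0

    def step(self, error):
        self.integral += error
        return self.kp * error + self.ki * self.integral

audit_log = queue.Queue()            # decouples controller and verifier (asynchronous)
controller = PIController()          # runs inside the feedback loop (possibly compromised)
replica = PIController()             # exact model, running on a separate device

def control_loop(error, tamper=0.0):
    u = controller.step(error) + tamper      # tamper models an attack on the output
    audit_log.put((error, u))                # logging only: no delay added to the loop
    return u

def verify_pending():
    """Asynchronously replay logged inputs on the exact model and flag mismatches."""
    while not audit_log.empty():
        error, u = audit_log.get()
        if abs(replica.step(error) - u) > 1e-9:
            print("attack detected for input", error)

control_loop(1.0)
control_loop(0.5, tamper=0.2)        # attacker perturbs the second output
verify_pending()                     # detects the mismatch after the fact
```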