
    Weak vs. Self vs. Probabilistic Stabilization

    Self-stabilization is a strong property that guarantees that a network always resumes correct behavior starting from an arbitrary initial state. Weaker guarantees were later introduced to cope with impossibility results: probabilistic stabilization only gives probabilistic convergence to a correct behavior, while weak stabilization only gives the possibility of convergence. In this paper, we investigate the relative power of weak, self, and probabilistic stabilization with respect to the set of problems that can be solved. We formally prove that, in that sense, weak stabilization is strictly stronger than self-stabilization. We also refine previous results on weak stabilization to prove that, for practical schedule instances, a deterministic weak-stabilizing protocol can be turned into a probabilistic self-stabilizing one. This latter result hints at a more practical use of weak stabilization, as such algorithms are easier to design and prove than their (probabilistic) self-stabilizing counterparts.
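    The transformation result invites a small illustration. Below is a minimal sketch (not from the paper, with illustrative names) of a classic weakly-stabilizing system: tokens circulating clockwise on a ring, merging when they collide. An adversarial deterministic scheduler can keep two tokens apart forever, so the system is not self-stabilizing; under a uniformly random scheduler, the gap between tokens performs a random walk, and the system converges to a single token with probability 1.

```python
import random

def converge(n=16, n_tokens=4, seed=0, max_steps=1_000_000):
    """Weakly-stabilizing token ring under a uniformly random scheduler.

    Each activated token moves clockwise; two tokens landing on the same
    node merge. Returns (steps taken, tokens left). With random scheduling
    the token count reaches 1 with probability 1."""
    rng = random.Random(seed)
    tokens = set(rng.sample(range(n), n_tokens))  # arbitrary initial state
    steps = 0
    while len(tokens) > 1 and steps < max_steps:
        i = rng.choice(sorted(tokens))  # random scheduler picks an enabled node
        tokens.discard(i)
        tokens.add((i + 1) % n)         # clockwise move; set-add merges collisions
        steps += 1
    return steps, len(tokens)

if __name__ == "__main__":
    for s in range(3):
        print(converge(seed=s))  # always ends with a single token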

    Multicore Acceleration for Priority Based Schedulers for Concurrency Bug Detection

    Testing multithreaded programs is difficult as threads can interleave in a nondeterministic fashion. Untested interleavings can cause failures, but testing all interleavings is infeasible. Many interleaving exploration strategies for bug detection have been proposed, but their relative effectiveness and performance remain unclear as they often lack publicly available implementations and have not been evaluated using common benchmarks. We describe NeedlePoint, an open-source framework that allows selection and comparison of a wide range of interleaving exploration policies for bug detection proposed by prior work. Our experience with NeedlePoint indicates that priority-based probabilistic concurrency testing (the PCT algorithm) finds bugs quickly, but it runs only one thread at a time, which destroys parallelism by serializing executions. To address this problem we propose a parallel version of the PCT algorithm (PPCT). We show that the new algorithm outperforms the original by a factor of 5x when testing parallel programs on an eight-core machine. We formally prove that parallel PCT provides the same probabilistic coverage guarantees as PCT. Moreover, PPCT is the first algorithm that runs multiple threads while providing coverage guarantees.
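    To make the PCT scheduling discipline concrete, here is a minimal serialized simulation (a sketch under a simplified model with illustrative names, not the NeedlePoint implementation): threads get random distinct high priorities, d-1 random change points are pre-labelled with the low priority values d-1 down to 1, and at every step the highest-priority runnable thread executes one operation.

```python
import random

def pct_schedule(threads, k, d, seed=0):
    """Sketch of PCT serialization: 'threads' is a list of per-thread step
    lists, k an upper bound on total steps, d the target bug depth."""
    rng = random.Random(seed)
    n = len(threads)
    prios = rng.sample(range(d, d + n), n)        # distinct initial priorities
    # d-1 change points, pre-labelled with the low priorities d-1 .. 1
    change = dict(zip(rng.sample(range(k), d - 1), range(d - 1, 0, -1)))
    pcs, trace = [0] * n, []
    for step in range(k):
        runnable = [t for t in range(n) if pcs[t] < len(threads[t])]
        if not runnable:
            break
        i = max(runnable, key=lambda t: prios[t])  # highest priority runs
        trace.append(threads[i][pcs[i]])
        pcs[i] += 1
        if step in change:                         # priority change point
            prios[i] = change[step]
    return trace

# e.g. two threads, bug depth 2:
print(pct_schedule([["a1", "a2"], ["b1", "b2", "b3"]], k=5, d=2, seed=1))
```

    PPCT's contribution is to run multiple threads at once while preserving the same probabilistic coverage guarantees; this serial sketch only shows the baseline discipline that PPCT parallelizes.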

    Improved CRPD analysis and a secure scheduler against information leakage in real-time systems

    Real-time systems are widely applied in time-critical fields. To guarantee that all tasks complete on time, predictability is a necessary factor when designing a real-time system. As performance requirements on real-time embedded systems grow, cache memory has been introduced into them. However, cache behavior is difficult to predict, since any given access may hit the cache or go to main memory. To absorb this unpredictable overhead, execution-time estimates are often inflated by a certain (huge) factor, which wastes computation resources. Hence, in this thesis, we first integrate the cache-related preemption delay into the previous global earliest deadline first (G-EDF) schedulability analysis for direct-mapped caches. Moreover, several analyses for tighter G-EDF schedulability tests are conducted based on a refined estimation of the maximal number of preemptions. An experimental study demonstrates the performance of the proposed methods. Furthermore, under classic scheduling mechanisms, the execution patterns of tasks on such a system can be easily derived, which opens the door to information-leakage attacks. Therefore, in the second part of the thesis, a novel scheduler, the roulette wheel scheduler (RWS), is proposed to randomize the task execution pattern. Unlike traditional schedulers, RWS assigns probabilities to each task at predefined scheduling points, and the choice for execution is randomized, so that the execution pattern is no longer fixed. We apply the concept of schedule entropy to measure the amount of uncertainty introduced by any randomized scheduler, which reflects how unlikely such attacks are to succeed. Compared to existing randomized schedulers that give all eligible tasks equal likelihood at a given time point, the proposed method adjusts these probabilities so that the entropy can be greatly increased --Abstract, page iii
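    As a rough illustration of roulette-wheel selection and of measuring schedule entropy (a toy sketch with made-up task names, ignoring deadlines and preemption), the snippet below draws the next task with probability proportional to a weight and estimates the Shannon entropy of the resulting distribution over complete schedules empirically:

```python
import math
import random
from collections import Counter

def roulette_pick(eligible, weights, rng):
    """Roulette-wheel selection: pick with probability proportional to weight."""
    return rng.choices(eligible, weights=weights, k=1)[0]

def schedule_entropy(weight_fn, jobs, runs=20000, seed=0):
    """Empirical Shannon entropy (bits) of the distribution over complete
    schedules produced by repeatedly running the randomized scheduler."""
    rng = random.Random(seed)
    counts = Counter()
    for _ in range(runs):
        pending, order = list(jobs), []
        while pending:
            nxt = roulette_pick(pending, [weight_fn(j) for j in pending], rng)
            pending.remove(nxt)
            order.append(nxt)
        counts[tuple(order)] += 1
    return -sum(c / runs * math.log2(c / runs) for c in counts.values())

jobs = ["t1", "t2", "t3", "t4"]
print("equal weights:", schedule_entropy(lambda j: 1.0, jobs))
print("skewed weights:", schedule_entropy(lambda j: 3.0 if j == "t1" else 1.0, jobs))
```

    In this unconstrained toy, equal weights already maximize the entropy; the thesis's setting differs in that real-time constraints restrict which tasks are eligible at each scheduling point, which is why adjusting the probabilities there can raise the entropy above the equal-likelihood baseline.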

    Computing Probabilistic Bisimilarity Distances for Probabilistic Automata

    The probabilistic bisimilarity distance of Deng et al. has been proposed as a robust quantitative generalization of Segala and Lynch's probabilistic bisimilarity for probabilistic automata. In this paper, we present a characterization of the bisimilarity distance as the solution of a simple stochastic game. The characterization gives us an algorithm to compute the distances by applying Condon's simple policy iteration on these games. The correctness of Condon's approach, however, relies on the assumption that the games are stopping. Our games may be non-stopping in general, yet we are able to prove termination for this extended class of games. Other algorithms have already been proposed in the literature to compute these distances, with complexity in UP ∩ coUP and PPAD. Despite their theoretical relevance, these algorithms are inefficient in practice. To the best of our knowledge, our algorithm is the first practical solution. The characterization of the probabilistic bisimilarity distance mentioned above crucially uses a dual presentation of the Hausdorff distance due to Mémoli. As an additional contribution, in this paper we show that Mémoli's result can also be used to prove that the bisimilarity distance bounds the difference in the maximal (or minimal) probability of two states satisfying arbitrary ω-regular properties, expressed, e.g., as LTL formulas.
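    To make the game-based characterization concrete, here is a toy solver for simple stochastic games (illustrative names, our own encoding): states are owned by a max player, a min player, or are probabilistic, and the value is the max player's optimal probability of reaching the 1-sink. The paper's algorithm is Condon's simple policy iteration adapted to non-stopping games; for brevity this sketch uses plain value iteration on the same game structure.

```python
def ssg_values(max_nodes, min_nodes, avg_nodes, sinks, iters=100_000, tol=1e-12):
    """Value iteration for a simple stochastic game.

    max_nodes / min_nodes: {state: [successor states]} owned by each player;
    avg_nodes: {state: {successor: probability}}; sinks: {state: payoff 0/1}.
    Returns the value of each state."""
    states = set(max_nodes) | set(min_nodes) | set(avg_nodes) | set(sinks)
    v = {s: 0.0 for s in states}
    for _ in range(iters):
        nv = {}
        for s in states:
            if s in sinks:
                nv[s] = float(sinks[s])
            elif s in max_nodes:
                nv[s] = max(v[t] for t in max_nodes[s])
            elif s in min_nodes:
                nv[s] = min(v[t] for t in min_nodes[s])
            else:
                nv[s] = sum(p * v[t] for t, p in avg_nodes[s].items())
        if max(abs(nv[s] - v[s]) for s in states) < tol:
            return nv
        v = nv
    return v

# max state 'm' picks between a fair coin flip and the 0-sink; its value is 0.5
print(ssg_values(max_nodes={"m": ["c", "zero"]}, min_nodes={},
                 avg_nodes={"c": {"one": 0.5, "zero": 0.5}},
                 sinks={"one": 1, "zero": 0}))
```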

    Relaxed Schedulers Can Efficiently Parallelize Iterative Algorithms

    There has been significant progress in understanding the parallelism inherent to iterative sequential algorithms: for many classic algorithms, the depth of the dependence structure is now well understood, and scheduling techniques have been developed to exploit this shallow dependence structure for efficient parallel implementations. A related, applied research strand has studied methods by which certain iterative task-based algorithms can be efficiently parallelized via relaxed concurrent priority schedulers. These allow for high concurrency when inserting and removing tasks, at the cost of executing superfluous work due to the relaxed semantics of the scheduler. In this work, we take a step towards unifying these two research directions, by showing that there exists a family of relaxed priority schedulers that can efficiently and deterministically execute classic iterative algorithms such as greedy maximal independent set (MIS) and matching. Our primary result shows that, given a randomized scheduler with an expected relaxation factor of k in terms of the maximum allowed priority inversions on a task, and any graph on n vertices, the scheduler is able to execute greedy MIS with only an additive factor of poly(k) expected additional iterations compared to an exact (but not scalable) scheduler. This counter-intuitive result demonstrates that the overhead of relaxation when computing MIS is not dependent on the input size or structure of the input graph. Experimental results show that this overhead can be clearly offset by the gain in performance due to the highly scalable scheduler. In sum, we present an efficient method to deterministically parallelize iterative sequential algorithms, with provable runtime guarantees in terms of the number of executed tasks to completion. Comment: PODC 2018, pages 377-386 in proceedings
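    The flavor of the result can be reproduced in a few lines. Below is a toy sequential stand-in for a k-relaxed priority queue driving greedy MIS (our own bookkeeping and names; the paper's schedulers are concurrent data structures): each pop returns one of the k smallest pending vertices, a vertex is decided only once all lower-priority neighbours are decided, and premature pops are counted as the superfluous work that relaxation causes.

```python
import random

def relaxed_greedy_mis(adj, k=4, seed=0):
    """Greedy MIS driven by a toy k-relaxed priority queue.

    adj: {vertex: set of neighbours}. Vertices get random priorities; each
    pop returns a uniform pick among the k smallest pending vertices.
    A vertex can only be decided once all lower-priority neighbours are
    decided; otherwise the pop counts as wasted (superfluous) work."""
    rng = random.Random(seed)
    prio = {v: rng.random() for v in adj}
    pending, in_mis, wasted = set(adj), set(), 0
    while pending:
        v = rng.choice(sorted(pending, key=prio.get)[:k])   # relaxed pop
        if any(u in pending and prio[u] < prio[v] for u in adj[v]):
            wasted += 1          # priority inversion: v must be retried later
            continue
        pending.discard(v)
        if not (adj[v] & in_mis):
            in_mis.add(v)        # same MIS as the exact greedy priority order
    return in_mis, wasted

# e.g. a 6-cycle
adj = {i: {(i - 1) % 6, (i + 1) % 6} for i in range(6)}
print(relaxed_greedy_mis(adj, k=3))
```

    Here `wasted` plays the role of the superfluous work; the paper's result bounds the analogous overhead of a k-relaxed scheduler by poly(k) expected extra iterations, independent of the size and structure of the input graph.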

    The Spectrum of Strong Behavioral Equivalences for Nondeterministic and Probabilistic Processes

    We present a spectrum of trace-based, testing, and bisimulation equivalences for nondeterministic and probabilistic processes whose activities are all observable. For every equivalence under study, we examine the discriminating power of three variants stemming from three approaches that differ in the way probabilities of events are compared when nondeterministic choices are resolved via deterministic schedulers. We show that the first approach - which compares two resolutions relative to the probability distributions of all considered events - results in a fragment of the spectrum compatible with the spectrum of behavioral equivalences for fully probabilistic processes. In contrast, the second approach - which compares the probabilities of the events of a resolution with the probabilities of the same events in possibly different resolutions - gives rise to another fragment composed of coarser equivalences that exhibits several analogies with the spectrum of behavioral equivalences for fully nondeterministic processes. Finally, the third approach - which only compares the extremal probabilities of each event stemming from the different resolutions - yields even coarser equivalences that, however, give rise to a hierarchy similar to that stemming from the second approach. Comment: In Proceedings QAPL 2013, arXiv:1306.241
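    As a tiny illustration of the third approach (our own toy encoding, not the paper's formalism), the snippet below enumerates the memoryless deterministic schedulers of a small probabilistic process and extracts the extremal probabilities of a single event, here reaching a designated state:

```python
from itertools import product

# toy probabilistic process: state -> action label -> successor distribution
PA = {
    "s": {"a": {"t": 0.5, "u": 0.5}, "b": {"t": 0.9, "u": 0.1}},
    "t": {},  # the observed event: ending in state t
    "u": {},
}

def reach(sched, target, steps=100):
    """Probability of reaching 'target' from each state under the memoryless
    deterministic scheduler 'sched' (state -> action), by value iteration."""
    v = {s: 1.0 if s == target else 0.0 for s in PA}
    for _ in range(steps):
        v = {s: v[s] if not PA[s]
             else sum(p * v[n] for n, p in PA[s][sched[s]].items())
             for s in PA}
    return v

choosers = [s for s in PA if PA[s]]  # states where the scheduler resolves choice
probs = [reach(dict(zip(choosers, picks)), "t")["s"]
         for picks in product(*(PA[s] for s in choosers))]
print("extremal probabilities of the event:", min(probs), max(probs))
```

    Two states would be equated by the third approach on this event if their minimal and maximal probabilities over all resolutions coincide, even when the per-resolution distributions compared by the first two approaches differ.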