
    Queueing models for token and slotted ring networks

    Currently the end-to-end delay characteristics of very high speed local area networks are not well understood. The transmission speed of computer networks is increasing, and local area networks especially are finding increasing use in real-time systems. Ring network operation is generally well understood for both token rings and slotted rings; there is, however, a severe lack of queueing models for higher-layer operation. Several factors contribute to the processing delay of a packet, as opposed to its transmission delay, e.g., packet priority, packet length, the user load, the processor load, the use of priority preemption, the use of preemption at packet reception, the number of processors, the number of protocol processing layers, the speed of each processor, and queue length limitations. Existing medium-access queueing models are extended by adding modeling techniques that handle exhaustive limited service both with and without priority traffic, and modeling capabilities are extended into the upper layers of the OSI model. Some of the models are parameterized solution methods, since it is shown that certain models do not exist as parameterized solutions but rather as solution methods.
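As a rough illustration of the kind of baseline such extended models build on, here is a minimal sketch that treats each protocol layer as an independent M/M/1 queue and sums the per-layer delays. This is a crude approximation only; the paper's models (priorities, preemption, limited service) are not reproduced, and every rate below is invented for the example.

```python
# Illustrative only, not the paper's extended models: per-packet processing
# delay across protocol layers, each layer approximated as an M/M/1 queue.
# All rates are made up; priorities, preemption, and limited service are
# deliberately left out.

def layer_delay(arrival_rate: float, service_rate: float) -> float:
    """Mean sojourn time (waiting + processing) in one M/M/1 protocol layer."""
    if arrival_rate >= service_rate:
        raise ValueError("layer overloaded: arrival rate must be < service rate")
    return 1.0 / (service_rate - arrival_rate)

def end_to_end_processing_delay(arrival_rate: float, layer_service_rates) -> float:
    """Sum of per-layer delays for a packet flowing through all layers."""
    return sum(layer_delay(arrival_rate, mu) for mu in layer_service_rates)

if __name__ == "__main__":
    # e.g. 800 packets/s through four protocol layers of decreasing speed
    delay = end_to_end_processing_delay(800.0, [2000.0, 1500.0, 1200.0, 1000.0])
    print(f"mean processing delay: {delay * 1e3:.2f} ms")
```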

    Parameterized Synthesis

    We study the synthesis problem for distributed architectures with a parametric number of finite-state components. Parameterized specifications arise naturally in a synthesis setting, but thus far it has been unclear how to detect realizability and how to perform synthesis in a parameterized setting. Using a classical result from verification, we show that for a class of specifications in indexed LTL\X, parameterized synthesis in token ring networks is equivalent to distributed synthesis in a network consisting of a few copies of a single process. Adapting a well-known result from distributed synthesis, we show that the latter problem is undecidable. We describe a semi-decision procedure for the parameterized synthesis problem in token rings, based on bounded synthesis. We extend the approach to parameterized synthesis in token-passing networks with arbitrary topologies, and show applicability on a simple case study. Finally, we sketch a general framework for parameterized synthesis based on cutoffs and other parameterized verification techniques. Comment: Extended version of TACAS 2012 paper, 29 pages.
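A schematic sketch of the bounded-synthesis semi-decision loop the abstract describes, not the authors' tool: synthesis for the parameterized family is reduced to a fixed cutoff ring, and implementations of growing size are tried until one is found. The helper `synthesize_bounded` is a hypothetical placeholder for a SAT/SMT-based bounded-synthesis backend.

```python
# Sketch only: semi-decision procedure for parameterized synthesis in token
# rings via bounded synthesis on a small cutoff ring. `synthesize_bounded`
# is a hypothetical backend, not a real library call.

from typing import Optional

def synthesize_bounded(spec: str, ring_size: int, bound: int) -> Optional[object]:
    """Hypothetical placeholder: return a process implementation with at most
    `bound` states realising `spec` on a ring of `ring_size` stations, or None."""
    raise NotImplementedError

def parameterized_synthesis(spec: str, cutoff: int = 4, max_bound: int = 32):
    # Semi-decision procedure: succeeds if some bounded-size implementation
    # exists; otherwise it gives up after max_bound (in general it may diverge).
    for bound in range(1, max_bound + 1):
        impl = synthesize_bounded(spec, cutoff, bound)
        if impl is not None:
            return impl          # correct for rings of every size >= cutoff
    return None                  # unknown: nothing found up to max_bound
```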

    Liveness of Randomised Parameterised Systems under Arbitrary Schedulers (Technical Report)

    We consider the problem of verifying liveness for systems with a finite, but unbounded, number of processes, commonly known as parameterised systems. Typical examples of such systems include distributed protocols (e.g. for the dining philosophers problem). Unlike the case of verifying safety, proving liveness is still considered extremely challenging, especially in the presence of randomness in the system. In this paper we consider liveness under arbitrary (including unfair) schedulers, which is often considered a desirable property in the literature on self-stabilising systems. We introduce an automatic method of proving liveness for randomised parameterised systems under arbitrary schedulers. Viewing liveness as a two-player reachability game (between Scheduler and Process), our method is a CEGAR approach that synthesises a progress relation for Process that can be symbolically represented as a finite-state automaton. The method is incremental and exploits both Angluin-style L*-learning and SAT solvers. Our experiments show that our algorithm is able to prove liveness automatically for well-known randomised distributed protocols, including the Lehmann-Rabin Randomised Dining Philosophers Protocol and randomised self-stabilising protocols (such as the Israeli-Jalfon Protocol). To the best of our knowledge, this is the first fully automatic method that can prove liveness for randomised protocols. Comment: Full version of CAV'16 paper.
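A high-level sketch of the CEGAR loop the abstract outlines, not the paper's implementation: a candidate progress relation is proposed, checked against the system, and refined from counterexamples until it is valid. The learner and checker are hypothetical stubs standing in for the Angluin-style L* learner and the SAT-based checks.

```python
# Schematic CEGAR loop only; the two helpers are hypothetical stubs, not the
# authors' code or any real library API.

from typing import List, Optional

def propose_progress_relation(system, counterexamples: List) -> object:
    """Stub for the L*-style learner: guess a finite-state progress relation
    consistent with the counterexamples seen so far."""
    raise NotImplementedError

def find_counterexample(system, relation) -> Optional[object]:
    """Stub for the SAT-based check: return a scheduler behaviour that defeats
    the candidate progress relation, or None if the relation is valid."""
    raise NotImplementedError

def prove_liveness(system):
    counterexamples: List = []
    while True:
        candidate = propose_progress_relation(system, counterexamples)
        cex = find_counterexample(system, candidate)
        if cex is None:
            return candidate          # valid progress relation: liveness holds
        counterexamples.append(cex)   # refine the hypothesis and try again
```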

    Self-Stabilizing Repeated Balls-into-Bins

    We study the following synchronous process that we call "repeated balls-into-bins". The process is started by assigning $n$ balls to $n$ bins in an arbitrary way. In every subsequent round, from each non-empty bin one ball is chosen according to some fixed strategy (random, FIFO, etc.) and re-assigned to one of the $n$ bins uniformly at random. We call a configuration "legitimate" if its maximum load is $\mathcal{O}(\log n)$. We prove that, starting from any configuration, the process converges to a legitimate configuration in linear time and then takes on only legitimate configurations over a period of length bounded by any polynomial in $n$, with high probability (w.h.p.). This implies that the process is self-stabilizing and that every ball traverses all bins in $\mathcal{O}(n \log^2 n)$ rounds, w.h.p.
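The process itself is easy to simulate. The sketch below starts from an adversarial configuration (all balls in one bin) and tracks the maximum load per round; it only illustrates the process from the abstract, and the choice of which ball leaves a bin is irrelevant for the load counts tracked here.

```python
# Minimal simulation of the repeated balls-into-bins process: each round,
# every non-empty bin releases one ball, and each released ball lands in a
# uniformly random bin. The total number of balls stays n.

import random

def repeated_balls_into_bins(n: int, rounds: int, seed: int = 0):
    rng = random.Random(seed)
    loads = [0] * n
    loads[0] = n                                  # adversarial start: one full bin
    max_loads = []
    for _ in range(rounds):
        movers = sum(1 for load in loads if load > 0)
        for i in range(n):                        # one ball leaves each non-empty bin
            if loads[i] > 0:
                loads[i] -= 1
        for _ in range(movers):                   # re-assign uniformly at random
            loads[rng.randrange(n)] += 1
        max_loads.append(max(loads))
    return max_loads

if __name__ == "__main__":
    # maximum load per round; it quickly drops to a small (logarithmic) value
    print(repeated_balls_into_bins(n=1000, rounds=20))
```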

    Optimal Gossip Algorithms for Exact and Approximate Quantile Computations

    This paper gives drastically faster gossip algorithms to compute exact and approximate quantiles. Gossip algorithms, which allow each node to contact a uniformly random other node in each round, have been intensely studied and adopted in many applications due to their fast convergence and their robustness to failures. Kempe et al. [FOCS'03] gave gossip algorithms to compute important aggregate statistics if every node is given a value. In particular, they gave a beautiful $O(\log n + \log \frac{1}{\epsilon})$-round algorithm to $\epsilon$-approximate the sum of all values and an $O(\log^2 n)$-round algorithm to compute the exact $\phi$-quantile, i.e., the $\lceil \phi n \rceil$-th smallest value. We give a quadratically faster and in fact optimal gossip algorithm for the exact $\phi$-quantile problem which runs in $O(\log n)$ rounds. We furthermore show that one can achieve an exponential speedup if one allows for an $\epsilon$-approximation. We give an $O(\log \log n + \log \frac{1}{\epsilon})$-round gossip algorithm which computes a value of rank between $\phi n$ and $(\phi + \epsilon) n$ at every node, for any $0 \leq \phi \leq 1$ and $0 < \epsilon < 1$. Our algorithms are extremely simple and very robust: they can be operated with the same running times even if every transmission fails with a, potentially different, constant probability. We also give a matching $\Omega(\log \log n + \log \frac{1}{\epsilon})$ lower bound, which shows that our algorithm is optimal for all values of $\epsilon$.
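For readers unfamiliar with the gossip model, here is a small simulation of the uniform gossip communication pattern the paper builds on, shown via the classic push-sum protocol of Kempe et al. for approximating the average of the nodes' values. This is explicitly not the paper's quantile algorithm, only an illustration of the round structure where each node contacts one uniformly random node per round.

```python
# Push-sum gossip (Kempe et al. [FOCS'03]): each node keeps a (sum, weight)
# pair, and in every round sends half of its pair to a uniformly random node
# and keeps the other half. The ratio sum/weight converges to the average.

import random

def push_sum(values, rounds: int, seed: int = 0):
    rng = random.Random(seed)
    n = len(values)
    s = list(map(float, values))   # running sums
    w = [1.0] * n                  # running weights; s[i]/w[i] estimates the mean
    for _ in range(rounds):
        next_s = [0.0] * n
        next_w = [0.0] * n
        for i in range(n):
            target = rng.randrange(n)                     # uniformly random contact
            next_s[i] += s[i] / 2;      next_w[i] += w[i] / 2       # keep half
            next_s[target] += s[i] / 2; next_w[target] += w[i] / 2  # send half
        s, w = next_s, next_w
    return [si / wi for si, wi in zip(s, w)]

if __name__ == "__main__":
    estimates = push_sum(list(range(1, 101)), rounds=30)
    print(min(estimates), max(estimates))   # both close to the true mean 50.5
```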

    NetMod: A Design Tool for Large-Scale Heterogeneous Campus Networks

    The Network Modeling Tool (NetMod) uses simple analytical models to provide the designers of large interconnected local area networks with an in-depth analysis of the potential performance of these systems. The tool can be used in a university, industrial, or governmental campus networking environment consisting of thousands of computer sites. NetMod is implemented with a combination of the easy-to-use Macintosh software packages HyperCard and Excel. The objectives of NetMod, the analytical models, and the user interface are described in detail, along with its application to an actual campus-wide network. http://deepblue.lib.umich.edu/bitstream/2027.42/107971/1/citi-tr-90-1.pd
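NetMod's actual models are not reproduced here; as a hedged illustration of the kind of quick analytical capacity check such a design tool performs, the sketch below treats each shared segment as an M/M/1 queue and reports utilisation and mean frame delay. All segment names and traffic figures are invented.

```python
# Illustrative only: a per-segment capacity check in the spirit of a simple
# analytical design tool. Each segment is approximated as an M/M/1 queue.

def segment_report(name: str, frames_per_sec: float, mean_frame_bits: float,
                   link_bps: float) -> str:
    service_rate = link_bps / mean_frame_bits          # frames/s the link can carry
    utilisation = frames_per_sec / service_rate
    if utilisation >= 1.0:
        return f"{name}: OVERLOADED (utilisation {utilisation:.2f})"
    delay_ms = 1e3 / (service_rate - frames_per_sec)   # M/M/1 mean frame delay
    return f"{name}: utilisation {utilisation:.2f}, mean delay {delay_ms:.3f} ms"

if __name__ == "__main__":
    print(segment_report("CS building Ethernet", 500.0, 8000.0, 10e6))
    print(segment_report("Campus backbone", 4000.0, 8000.0, 100e6))
```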

    Fact, Fiction and Virtual Worlds

    This paper considers the medium of videogames from a Goodmanian standpoint. After some preliminary clarifications and definitions, I examine the ontological status of videogames. Against several existing accounts, I hold that what grounds their identity qua work types is code. The rest of the paper is dedicated to the epistemology of videogaming. Drawing on Nelson Goodman's and Catherine Elgin's work, I suggest that the best model to defend videogame cognitivism appeals to the notion of understanding.