
    Bounds for self-stabilization in unidirectional networks

    A distributed algorithm is self-stabilizing if, after faults and attacks hit the system and place it in some arbitrary global state, the system recovers from this catastrophic situation without external intervention in finite time. Unidirectional networks preclude many common techniques in self-stabilization from being used, such as preserving local predicates. In this paper, we investigate the intrinsic complexity of achieving self-stabilization in unidirectional networks, focusing on the classical vertex coloring problem. When deterministic solutions are considered, we prove a lower bound of n states per process (where n is the network size) and a recovery time of at least n(n-1)/2 actions in total. We present a deterministic algorithm with matching upper bounds that performs in arbitrary graphs. When probabilistic solutions are considered, we observe that at least Δ+1 states per process and a recovery time of Ω(n) actions in total are required (where Δ denotes the maximal degree of the underlying simple undirected graph). We present a probabilistically self-stabilizing algorithm that uses k states per process, where k is a parameter of the algorithm. When k = Δ+1, the algorithm recovers in expected O(Δn) actions. When k may grow arbitrarily, the algorithm recovers in expected O(n) actions in total. Thus, our algorithm can be made optimal with respect to either space or time complexity.
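    The abstract states complexity bounds but does not give the algorithm's rules, so the following Python sketch only illustrates the generic randomized recoloring idea behind such probabilistic schemes: any process whose color clashes with a neighbor's redraws uniformly from k colors until no conflict remains. All names (stabilize_coloring, adj) are illustrative, not taken from the paper.

        import random

        def stabilize_coloring(adj, k, rng=random.Random(0)):
            """adj: dict mapping each node to its set of neighbors (simple
            undirected graph); k: number of states (colors) per process.
            Returns the stabilized coloring and the number of actions taken."""
            color = {v: rng.randrange(k) for v in adj}  # arbitrary initial global state
            actions = 0
            while True:
                # processes enabled because their color clashes with a neighbor's
                conflicted = [v for v in adj if any(color[u] == color[v] for u in adj[v])]
                if not conflicted:
                    return color, actions
                v = rng.choice(conflicted)   # the scheduler activates one enabled process
                color[v] = rng.randrange(k)  # it redraws its color uniformly at random
                actions += 1

        # Example: a 4-cycle with k = Delta + 1 = 3 colors
        ring = {0: {1, 3}, 1: {0, 2}, 2: {1, 3}, 3: {0, 2}}
        coloring, steps = stabilize_coloring(ring, k=3)

    With k = Δ+1, each redraw avoids every neighbor's color with probability at least 1/(Δ+1), which is the intuition consistent with the expected O(Δn) recovery quoted above.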

    Time-Varying Graphs and Dynamic Networks

    The past few years have seen intensive research efforts carried out in some apparently unrelated areas of dynamic systems -- delay-tolerant networks, opportunistic-mobility networks, social networks -- obtaining closely related insights. Indeed, the concepts discovered in these investigations can be viewed as parts of the same conceptual universe, and the formal models proposed so far to express some specific concepts are components of a larger formal description of this universe. The main contribution of this paper is to integrate the vast collection of concepts, formalisms, and results found in the literature into a unified framework, which we call TVG (for time-varying graphs). Using this framework, it is possible to express directly in the same formalism not only the concepts common to all those different areas, but also those specific to each. Based on this definitional work, employing both existing results and original observations, we present a hierarchical classification of TVGs; each class corresponds to a significant property examined in the distributed computing literature. We then examine how TVGs can be used to study the evolution of network properties, and propose different techniques, depending on whether the indicators for these properties are a-temporal (as in the majority of existing studies) or temporal. Finally, we briefly discuss the introduction of randomness in TVGs. (Comment: a short version appeared in ADHOC-NOW'11; this version is to be published in the International Journal of Parallel, Emergent and Distributed Systems.)
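    In the TVG literature the framework sketched above is usually written as a quintuple (V, E, T, ρ, ζ), where the presence function ρ says whether an edge is available at a given time and the latency function ζ gives its traversal time. Below is a minimal Python sketch of such a structure; the field names follow that convention, but the class itself is an illustration, not code from the paper.

        from dataclasses import dataclass
        from typing import Callable, Hashable, Set, Tuple

        Edge = Tuple[Hashable, Hashable]

        @dataclass
        class TVG:
            vertices: Set[Hashable]
            edges: Set[Edge]
            rho: Callable[[Edge, float], bool]    # presence: is e available at time t?
            zeta: Callable[[Edge, float], float]  # latency: time to traverse e starting at t

            def footprint_at(self, t: float) -> Set[Edge]:
                # the static (a-temporal) snapshot of the network at time t
                return {e for e in self.edges if self.rho(e, t)}

        # Example: a link that is only up during even-numbered seconds
        g = TVG(vertices={"a", "b"}, edges={("a", "b")},
                rho=lambda e, t: int(t) % 2 == 0,
                zeta=lambda e, t: 1.0)
        assert g.footprint_at(0.5) == {("a", "b")}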

    Exact two-terminal reliability of some directed networks

    The calculation of network reliability in a probabilistic context has long been an issue of practical and academic importance. Conventional approaches (determination of bounds, sums of disjoint products algorithms, Monte Carlo evaluations, studies of the reliability polynomials, etc.) only provide approximations when the network's size increases, even when nodes do not fail and all edges have the same reliability p. We consider here a directed, generic graph of arbitrary size mimicking real-life long-haul communication networks, and give the exact, analytical solution for the two-terminal reliability. This solution involves a product of transfer matrices, in which the individual reliabilities of edges and nodes are taken into account. The special case of identical edge and node reliabilities (p and ρ, respectively) is addressed. We consider a case study based on a commonly-used configuration, and assess the influence of the edges being directed (or not) on various measures of network performance. While the two-terminal reliability, the failure frequency, and the failure rate of the connection are quite similar, the locations of the complex zeros of the two-terminal reliability polynomials exhibit strong differences, along with various structure transitions at specific values of ρ. The present work could be extended to provide a catalog of exactly solvable networks in terms of reliability, which could be useful as building blocks for new and improved bounds, as well as benchmarks, in the general case.
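    The transfer-matrix solution exploits the recursive structure of the particular graph family and is not reproduced in the abstract; as a point of comparison, the Python sketch below computes exact two-terminal reliability by brute force (perfect nodes, independent edge failures with common probability p). This is feasible only for small directed graphs, e.g. to validate a closed form.

        from itertools import product

        def two_terminal_reliability(edges, s, t, p):
            """edges: list of directed (u, v) pairs, each operational
            independently with probability p. Returns Pr[t reachable from s]."""
            m, total = len(edges), 0.0
            for states in product((0, 1), repeat=m):
                up = [e for e, on in zip(edges, states) if on]
                seen, frontier = {s}, [s]      # search over operational edges only
                while frontier:
                    u = frontier.pop()
                    for a, b in up:
                        if a == u and b not in seen:
                            seen.add(b)
                            frontier.append(b)
                if t in seen:
                    k = sum(states)
                    total += p**k * (1 - p)**(m - k)
            return total

        # Two parallel directed edges from s to t: 1 - (1 - p)^2, i.e. 0.99 for p = 0.9
        r = two_terminal_reliability([("s", "t"), ("s", "t")], "s", "t", p=0.9)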

    Mechanism Design and Communication Networks

    This paper characterizes the class of communication networks for which, in any environment (utilities and beliefs), every incentive-compatible social choice function is (partially) implementable. Among others, in environments with either common and independent beliefs and private values, or a bad outcome, we show that if the communication network is 2-connected, then any incentive-compatible social choice function is implementable. A network is 2-connected if each player is either directly connected to the designer or indirectly connected to the designer through at least two disjoint paths. We couple encryption techniques with appropriate incentives to secure the transmission of each player’s private information to the designer.
    Keywords: mechanism design; incentives; Bayesian equilibrium; communication networks; encryption; secure transmission; coding.
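    The 2-connectedness condition above can be checked mechanically. The sketch below is an illustration, not the paper's construction: it uses networkx and appeals to Menger's theorem, under which "at least two disjoint paths" corresponds to a local node connectivity of at least 2 between a player and the designer.

        import networkx as nx

        def is_2_connected_to_designer(G, designer):
            """G: undirected communication network. Returns True if every player
            is a direct neighbor of the designer or is linked to the designer
            by at least two internally vertex-disjoint paths."""
            for player in G:
                if player == designer or designer in G[player]:
                    continue  # the designer herself, or a directly connected player
                if nx.node_connectivity(G, player, designer) < 2:
                    return False
            return True

        # A cycle through the designer satisfies the condition
        G = nx.cycle_graph(4)   # nodes 0..3; take node 0 as the designer
        assert is_2_connected_to_designer(G, designer=0)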

    Apperceptive patterning: Artefaction, extensional beliefs and cognitive scaffolding

    In “Psychopower and Ordinary Madness” my ambition, as it relates to Bernard Stiegler’s recent literature, was twofold: 1) critiquing Stiegler’s work on exosomatization and artefactual posthumanism—or, more specifically, nonhumanism—to problematize approaches to media archaeology that rely upon technical exteriorization; 2) challenging how Stiegler engages with Giuseppe Longo and Francis Bailly’s conception of negative entropy. These efforts were directed by a prevalent techno-cultural qualifier: the rise of Synthetic Intelligence (including neural nets, deep learning, predictive processing and Bayesian models of cognition). This paper continues this project, but first directs a critical analytic lens at the Derridean practice of the ontologization of grammatization, from which Stiegler emerges, while also distinguishing how metalanguages operate in relation to object-oriented environmental interaction by way of inferentialism. Stalking continental (Kapp, Simondon, Leroi-Gourhan, etc.) and analytic traditions (e.g., Carnap, Chalmers, Clark, Sutton, Novaes, etc.), we move from artefacts to AI and Predictive Processing so as to link theories related to technicity with philosophy of mind. Simultaneously drawing forth Robert Brandom’s conceptualization of the roles that commitments play in retrospectively reconstructing the social experiences that lead to our endorsement(s) of norms, we complement this account with Reza Negarestani’s deprivatized account of intelligence while analyzing the equipollent role between language and media (both digital and analog).