524 research outputs found

    Strong Equivalence Relations for Iterated Models

    The Iterated Immediate Snapshot model (IIS), due to its elegant geometrical representation, has become standard for applying topological reasoning to distributed computing. Its modular structure makes it easier to analyze than the more realistic (non-iterated) read-write Atomic-Snapshot memory model (AS). It is known that AS and IIS are equivalent with respect to wait-free task computability: a distributed task is solvable in AS if and only if it is solvable in IIS. We observe, however, that this equivalence is not sufficient to explore solvability of tasks in sub-models of AS (i.e., proper subsets of its runs) or computability of long-lived objects; a stronger equivalence relation is needed. In this paper, we consider adversarial sub-models of AS and IIS, specified by the sets of processes that can be correct in a model run. We show that AS and IIS are equivalent in a strong way: a (possibly long-lived) object is implementable in AS under a given adversary if and only if it is implementable in IIS under the same adversary. This holds whether the object is one-shot or long-lived. Therefore, the computability of any object in shared memory under an adversarial AS scheduler can be equivalently investigated in IIS.

    Read-Write Memory and k-Set Consensus as an Affine Task

    The wait-free read-write memory model has been characterized as an iterated Immediate Snapshot (IS) task. The IS task is affine: it can be defined as a (sub)set of simplices of the standard chromatic subdivision. It is known that the task of Weak Symmetry Breaking (WSB) cannot be represented as an affine task. In this paper, we highlight the phenomenon of a "natural" model that can be captured by an iterated affine task and, thus, by a subset of runs of the iterated immediate snapshot model. We show that the read-write memory model in which, additionally, k-set-consensus objects can be used is, unlike WSB, "natural": we present the corresponding simple affine task, captured by a subset of 2-round IS runs. Our results imply the first combinatorial characterization of models equipped with abstractions other than read-write memory that applies to generic tasks.

    A generalized asynchronous computability theorem

    We consider models of distributed computation defined as subsets of the runs of the iterated immediate snapshot model. Given a task T and a model M, we provide topological conditions for T to be solvable in M. When applied to the wait-free model, our conditions yield the celebrated Asynchronous Computability Theorem (ACT) of Herlihy and Shavit. To demonstrate the utility of our characterization, we consider a task that has been shown earlier to admit only a very complex t-resilient solution. In contrast, our generalized computability theorem confirms its t-resilient solvability in a straightforward manner.

    k-Set Agreement in Communication Networks with Omission Faults

    We consider an arbitrary communication network G in which at most f messages can be lost at each round, and study the classical k-set agreement problem in this setting. We characterize exactly for which f the k-set agreement problem can be solved on G. The case k = 1, that is, the Consensus problem, was first introduced by Santoro and Widmayer in 1989; its characterization is already known from [Coulouma/Godard/Peters, TCS, 2015]. As a first contribution, we present a detailed and complete characterization for the 2-set problem. The proof of the impossibility result uses topological methods, for which we introduce a new subdivision approach that is of independent interest. In the second part, we show how to extend the result to the general case of arbitrary k. This is the first complete characterization for this kind of synchronous message-passing model, a model that is a subclass of the family of oblivious message adversaries.
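As background to the abstract above, the two classic correctness conditions of k-set agreement can be checked on a finished execution with a few lines of code. This is an illustrative sketch only, not material from the paper; the function name and input representation are assumptions.

```python
def is_valid_k_set_agreement(proposed, decided, k):
    """Check a finished k-set agreement execution.

    k-Agreement: at most k distinct values are decided.
    Validity: every decided value was proposed by some process.
    """
    return len(set(decided)) <= k and set(decided) <= set(proposed)

# With k = 2, three processes may decide at most two distinct proposals.
assert is_valid_k_set_agreement([1, 2, 3], [1, 2, 2], k=2)
# Three distinct decided values violate 2-agreement.
assert not is_valid_k_set_agreement([1, 2, 3], [1, 2, 3], k=2)
```

With k = 1 this reduces to the usual agreement and validity conditions of Consensus.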

    Notes on Theory of Distributed Systems

    Notes for the Yale course CPSC 465/565 Theory of Distributed Systems

    Termination Detection of Local Computations

    Contrary to the sequential world, the processes involved in a distributed system do not necessarily know when a computation is globally finished. This paper investigates the problem of detecting the termination of local computations. We define four types of termination detection: no detection, detection of the local termination, detection by a distributed observer, and detection of the global termination. We give a complete characterisation for each of these types of termination detection (except in the local termination detection case, where a partial one is given) and show that they define a strict hierarchy. These results emphasise the difference between the computability of a distributed task and termination detection. Furthermore, these characterisations encompass all the standard criteria that are usually formulated: topological restrictions (trees, rings, or triangulated networks), topological knowledge (size, diameter), and local knowledge to distinguish nodes (identities, sense of direction). These results are presented as corollaries of generalising theorems. As a very special and important case, the techniques are also applied to the election problem. Though given in the model of local computations, these results can give qualitative insight into similar results in other standard models. The necessary conditions involve graph coverings and quasi-coverings; the sufficient conditions (constructive local computations) are based upon an enumeration algorithm of Mazurkiewicz and a stable-properties detection algorithm of Szymanski, Shi, and Prywes.

    Online disturbance prediction for enhanced availability in smart grids

    A gradual move in the electric power industry towards Smart Grids brings new challenges to the system's efficiency and dependability. With growing complexity and the massive introduction of renewable generation, particularly at the distribution level, the number of faults and, consequently, disturbances (errors and failures) is expected to increase significantly. This threatens to compromise the grid's availability, as traditional, reactive management approaches may soon become insufficient. On the other hand, with the grids' digitalization, real-time status data are becoming available. These data may be used to develop advanced management and control methods for a sustainable, more efficient, and more dependable grid. A proactive management approach, based on the use of real-time data for predicting near-future disturbances and acting in anticipation of them, has already been identified by the Smart Grid community as one of the main pillars of dependability of the future grid. The work presented in this dissertation focuses on predicting disturbances in Active Distribution Networks (ADNs), the part of the Smart Grid that is evolving the most. These are distribution networks with a high share of (renewable) distributed generation and with systems in place for real-time monitoring and control. Our main goal is to develop a methodology for proactive network management, in the sense of proactive mitigation of disturbances, and to design and implement a method for their prediction. We focus on predicting voltage sags, as they are identified as one of the most frequent and severe disturbances in distribution networks. We address Smart Grid dependability in a holistic manner by considering its cyber and physical aspects. As a result, we identify Smart Grid dependability properties and develop a taxonomy of faults that contributes to a better understanding of the overall dependability of the future grid.
As the process of the grid's digitization is still ongoing, there is a general lack of data on the grid's status, and especially of disturbance-related data. These data are necessary to design an accurate disturbance predictor. To overcome this obstacle, we introduce the concept of fault injection into power-system simulation. We develop a framework to simulate the behavior of distribution networks in the presence of faults, and of fluctuating generation and load, which, alone or combined, may cause disturbances. With the framework we generate a large data set that we use to develop and evaluate a voltage-sag disturbance predictor. To quantify how prediction and proactive mitigation of disturbances enhance availability, we create an availability model of proactive management. The model is generic and may be applied to evaluate the effect of proactive management on availability in other types of systems, and adapted to quantify other types of properties as well. We also design a metric and a method for optimizing failure prediction to maximize availability under the proactive approach. We conclude that the availability improvement achievable with the proactive approach is comparable to that obtained by using costly, high-reliability components. According to the results of a case study conducted on a 14-bus ADN, the grid's availability may be improved by up to an order of magnitude if disturbances are managed proactively instead of reactively.
The main results and contributions may be summarized as follows: (i) a taxonomy of faults in the Smart Grid has been developed; (ii) a methodology and methods for proactive management of disturbances have been proposed; (iii) a model to quantify availability with proactive management has been developed; (iv) a simulation and fault-injection framework has been designed and implemented to generate disturbance-related data; (v) in the scope of a case study, a voltage-sag predictor based on machine-learning classification algorithms has been designed, and the effect of proactive disturbance management on downtime and availability has been quantified.
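The idea of quantifying availability under proactive management can be illustrated with a toy steady-state model. This sketch is not the dissertation's actual model; the parameter names (mtbf, mttr_reactive, mttr_proactive, recall) and all numeric values are illustrative assumptions. It assumes a fraction of disturbances, given by the predictor's recall, is mitigated proactively with a much shorter outage.

```python
def availability(mtbf, mttr):
    """Classic steady-state availability: uptime / (uptime + downtime)."""
    return mtbf / (mtbf + mttr)

def proactive_availability(mtbf, mttr_reactive, mttr_proactive, recall):
    """A fraction `recall` of disturbances is predicted and handled with
    the shorter proactive downtime; the rest are handled reactively."""
    expected_downtime = recall * mttr_proactive + (1 - recall) * mttr_reactive
    return mtbf / (mtbf + expected_downtime)

# Toy numbers: 1000 h mean time between disturbances, 10 h reactive
# repair, 0.5 h proactive mitigation, predictor recall of 0.9.
a_reactive = availability(mtbf=1000.0, mttr=10.0)
a_proactive = proactive_availability(1000.0, 10.0, 0.5, recall=0.9)
# Under these toy numbers, unavailability drops by roughly a factor of 7.
```

With a perfect predictor (recall = 1) the model reduces to availability with the proactive downtime alone, which is how an order-of-magnitude improvement could arise.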

    Remote identification of overhead conductor materials

    A recent draft determination by the Australian Energy Regulator implies a probable large reduction, of approximately 30%, in the income Ergon Energy is able to generate. This ruling has increased the focus on cost-effective and timely solutions to problems and encouraged continual evaluation of emerging technologies that may facilitate these solutions; this focus has become a primary consideration in the organisation. The five main types of conductor present in the Ergon Energy bare overhead network are copper, galvanised steel, steel-reinforced aluminium, aluminium, and aluminium alloy, all of which age in a variety of ways when exposed to the elements. With over 1,800 circuit kilometres of unidentified conductor in the Ergon Energy network, and a further unknown amount of incorrectly identified conductor, suitably managing the risk of conductor failure in a targeted, efficient, and timely manner is problematic. Spectral analysis is rapidly maturing as a technology, with recent uses including satellite imagery to identify mineral deposits, analysis of distant planets, identifying cancerous growths in the human body, identifying scrap metals, evaluating the contamination of land by various contaminants, and helping naval vessels avoid mines by identifying these metal objects in the ocean. Initial measurements in the visible spectrum were taken with a low-cost commercial USB plug-and-play spectrometer, which identified significant differences in the spectral responses of copper, steel, and aluminium. This spectrometer gave relatively constant results for aluminium and galvanised steel under various lighting conditions, sample ages, and sample sizes. Consistent results were also obtained for various copper sample sizes and lighting conditions; however, the variable surface patinas due to weathering led to inconsistent results.
The spectrometer could not discern between all-aluminium conductor (AAC) and all-aluminium-alloy conductor (AAAC). An attempt was made to build an imaging spectrometer from a Nikon D700 consumer camera, which was partially successful: the device recorded the dominant wavelengths of a compact fluorescent light source relatively accurately and recorded two measurements of aluminium conductor with moderate accuracy. Difficulties were encountered in aligning the optical path and with artefacts being introduced into the optical path.
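The material-identification step described above amounts to matching a measured spectrum against reference signatures. A minimal sketch of such a matcher follows; the reflectance values below are made-up placeholders, not measurements from the study, and a simple nearest-neighbour comparison stands in for whatever classification the project actually used.

```python
# Hypothetical reference reflectance values at four sample wavelengths.
REFERENCE_SPECTRA = {
    "copper":           [0.30, 0.35, 0.55, 0.80],
    "aluminium":        [0.85, 0.86, 0.88, 0.90],
    "galvanised_steel": [0.55, 0.56, 0.58, 0.60],
}

def classify(measured):
    """Return the reference material with the smallest squared distance
    between its signature and the measured spectrum."""
    def dist(ref):
        return sum((m - r) ** 2 for m, r in zip(measured, ref))
    return min(REFERENCE_SPECTRA, key=lambda name: dist(REFERENCE_SPECTRA[name]))

# A measurement close to the aluminium signature is matched to aluminium.
assert classify([0.84, 0.87, 0.89, 0.91]) == "aluminium"
```

The AAC/AAAC ambiguity reported above corresponds to two reference signatures being too close for this kind of distance-based matching to separate.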

    Neuroeconomics: How Neuroscience Can Inform Economics

    Neuroeconomics uses knowledge about brain mechanisms to inform economic analysis, and roots economics in biology. It opens up the "black box" of the brain, much as organizational economics adds detail to the theory of the firm. Neuroscientists use many tools, including brain imaging, the behavior of patients with localized brain lesions, animal behavior, and recordings of single-neuron activity. The key insight for economics is that the brain is composed of multiple interacting systems. Controlled systems ("executive function") interrupt automatic ones. Emotions and cognition both guide decisions. Just as prices and allocations emerge from the interaction of two processes, supply and demand, individual decisions can be modeled as the result of two (or more) interacting processes. Indeed, "dual-process" models of this sort are better rooted in neuroscientific fact, and more empirically accurate, than single-process models (such as utility-maximization). We discuss how brain evidence complicates standard assumptions about basic preferences, to include homeostasis and other kinds of state-dependence. We also discuss applications to intertemporal choice, risk and decision making, and game theory. Intertemporal choice appears to be domain-specific and heavily influenced by emotion. The simplified beta-delta model of quasi-hyperbolic discounting is supported by activation in distinct regions of limbic and cortical systems. In risky decision making, imaging data tentatively support the idea that gains and losses are coded separately, and that ambiguity is distinct from risk because it activates fear and discomfort regions. (Ironically, lesion patients who do not receive fear signals in prefrontal cortex are "rationally" neutral toward ambiguity.) Game theory studies show the effects of brain regions implicated in "theory of mind", correlates of strategic skill, and effects of hormones and other biological variables.
Finally, economics can contribute to neuroscience, because simple rational-choice models are useful for understanding highly evolved behavior such as motor actions that earn rewards and the Bayesian integration of sensorimotor information.
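The quasi-hyperbolic (beta-delta) discounting mentioned in this abstract has a one-line definition: period 0 is undiscounted, and every later period t is weighted by beta * delta**t, so all future payoffs take an extra present-bias hit of beta. A small sketch, with arbitrary parameter values assumed for illustration:

```python
def quasi_hyperbolic_weight(t, beta=0.7, delta=0.95):
    """Quasi-hyperbolic discount weight: 1 at t = 0, beta * delta**t after."""
    return 1.0 if t == 0 else beta * delta ** t

# With beta < 1, the drop from t = 0 to t = 1 is sharper than between
# any two later periods, which is what captures present bias.
w = [quasi_hyperbolic_weight(t) for t in range(4)]
assert w[0] - w[1] > w[1] - w[2]
```

Setting beta = 1 recovers standard exponential discounting, which is why the beta-delta form is described as mapping onto distinct limbic (beta) and cortical (delta) activations.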