Consensus using Asynchronous Failure Detectors
The FLP result shows that crash-tolerant consensus is impossible to solve in
asynchronous systems, and several solutions have been proposed for
crash-tolerant consensus under alternative (stronger) models. One popular
approach is to augment the asynchronous system with appropriate failure
detectors, which provide (potentially unreliable) information about process
crashes in the system, to circumvent the FLP impossibility.
In this paper, we demonstrate the exact mechanism by which (sufficiently
powerful) asynchronous failure detectors enable solving crash-tolerant
consensus. Our approach, which borrows arguments from the FLP impossibility
proof and from the famous CHT result showing that Omega is a weakest
failure detector to solve consensus, also yields a natural proof that Omega is
a weakest asynchronous failure detector to solve consensus. The use of I/O
automata theory in our approach enables us to model executions in more
detail than CHT and also addresses the latent assumptions and
assertions in the original CHT result.
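The failure-detector abstraction can be made concrete with a toy sketch. The following is a minimal illustration (ours, not the paper's I/O-automata formalism) of an Omega-style detector: after some unknown stabilization time it permanently reports the smallest-id live process, while its earlier output may be arbitrary, capturing the "potentially unreliable" information about crashes mentioned above. All names here are hypothetical.

```python
import random

def omega_oracle(live, stabilize_at):
    """Toy Omega-style failure detector: returns a function t -> leader id.
    Before time `stabilize_at` (unknown to the processes) the output is
    arbitrary; from then on it is permanently the smallest live id."""
    procs = sorted(live)
    def leader(t):
        if t < stabilize_at:
            return random.choice(procs + [99])  # unreliable early output
        return procs[0]                          # eventually accurate, stable
    return leader

fd = omega_oracle(live={2, 5, 7}, stabilize_at=10)
```

Consensus algorithms built over such a detector rely only on the eventual guarantee; they must tolerate the arbitrary early answers.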
A Basic Compositional Model for Spiking Neural Networks
This paper is part of a project on developing an algorithmic theory of brain
networks, based on stochastic Spiking Neural Network (SNN) models. Inspired by
tasks that seem to be solved in actual brains, we are defining abstract
problems to be solved by these networks. In our work so far, we have developed
models and algorithms for the Winner-Take-All problem from computational
neuroscience [LMP17a,Mus18], and problems of similarity detection and neural
coding [LMP17b]. We plan to consider many other problems and networks,
including both static networks and networks that learn.
This paper is about basic theory for the stochastic SNN model. In particular,
we define a simple version of the model. This version assumes that the neurons'
only state is a Boolean, indicating whether the neuron is firing or not. In
later work, we plan to develop variants of the model with more elaborate state.
We also define an external behavior notion for SNNs, which can be used for
stating requirements to be satisfied by the networks.
We then define a composition operator for SNNs. We prove that our external
behavior notion is "compositional", in the sense that the external behavior of
a composed network depends only on the external behaviors of the component
networks. We also define a hiding operator that reclassifies some output
behavior of an SNN as internal. We give basic results for hiding.
Finally, we give a formal definition of a problem to be solved by an SNN, and
give basic results showing how composition and hiding of networks affect the
problems that they solve. We illustrate our definitions with three examples:
building a circuit out of gates, building an "Attention" network out of a
"Winner-Take-All" network and a "Filter" network, and a toy example involving
combining two networks in a cyclic fashion.
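The "circuit out of gates" example can be illustrated informally. The sketch below (our own toy model with hypothetical names, not the paper's formal definitions) treats a deterministic Boolean-state "neuron" as a function on port valuations and composes networks by feeding one network's outputs into the next, so the composite's external behavior depends only on the components' external behaviors:

```python
def gate(fn, inputs, output):
    """A single deterministic Boolean-state 'neuron': reads its input ports
    from the environment and writes fn(...) to its output port."""
    def run(env):                       # env: dict port name -> bool
        env = dict(env)
        env[output] = fn(*(env[p] for p in inputs))
        return env
    return run

def compose(*nets):
    """Acyclic composition: each network's outputs become available as
    inputs to the networks after it, within one synchronous round."""
    def run(env):
        for n in nets:
            env = n(env)
        return env
    return run

AND = gate(lambda x, y: x and y, ["x", "y"], "u")
OR  = gate(lambda u, z: u or z, ["u", "z"], "out")
circuit = compose(AND, OR)              # computes (x AND y) OR z
result = circuit({"x": True, "y": True, "z": False})["out"]  # True
```

Hiding would correspond to deleting the internal port "u" from the final environment, leaving only the external ports visible.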
Brief Announcement: Integrating Temporal Information to Spatial Information in a Neural Circuit
In this paper, we consider networks of deterministic spiking neurons, firing synchronously at discrete times. We consider the problem of translating temporal information into spatial information in such networks, an important task that is carried out by actual brains. Specifically, we define two problems: "First Consecutive Spikes Counting" and "Total Spikes Counting", which model temporal-coding and rate-coding aspects of temporal-to-spatial translation, respectively. Assuming an upper bound of T on the length of the temporal input signal, we design two networks that solve these two problems, each using O(log T) neurons and terminating in time T+1. We also prove that these bounds are tight.
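For intuition about the O(log T) bound: counting up to T spikes needs only a binary counter over roughly log2(T+1) Boolean state bits. The sketch below is our illustration of that counting idea only; the paper's actual spiking-network constructions differ.

```python
import math

def total_spikes_counter(input_spikes, T):
    """Count the spikes in a Boolean input train of length <= T using only
    ceil(log2(T+1)) Boolean state bits (LSB first), mirroring the O(log T)
    space bound. Illustrative sketch, not the paper's network construction."""
    n_bits = max(1, math.ceil(math.log2(T + 1)))
    bits = [0] * n_bits                  # Boolean states, LSB first
    for spike in input_spikes:
        carry = 1 if spike else 0
        for i in range(n_bits):          # increment the binary counter
            bits[i], carry = bits[i] ^ carry, bits[i] & carry
    return bits
```

A similar counter that resets whenever the input goes low captures the "First Consecutive Spikes Counting" flavor of the problem.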
Partial Reversal Acyclicity
Partial Reversal (PR) is a link reversal algorithm that ensures that the underlying graph structure is destination-oriented and acyclic. These properties of PR make it useful in routing protocols and in algorithms for solving leader election and mutual exclusion. While proofs exist to establish the acyclicity property of PR, they rely on assigning labels to either the nodes or the edges in the graph. In this work, we present a simpler, direct proof of the acyclicity property of partial reversal without using any external or dynamic labeling mechanism. First, we provide a simple variant of the PR algorithm and show that it maintains acyclicity. Next, we present a binary relation which maps the original PR algorithm to the new algorithm, and finally, we conclude that the acyclicity proof applies to the original PR algorithm as well.
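To make the PR rule concrete, here is a small simulation sketch of one common formulation of partial reversal (in the style of Gafni and Bertsekas; the representation and names are ours, not the paper's): a sink node reverses only the links to neighbors that have not reversed toward it since its own last reversal, falling back to reversing all links when no such neighbor remains.

```python
def run_pr(nbrs, succ, dest, max_steps=10_000):
    """Run Partial Reversal until no non-destination node is a sink.
    nbrs: node -> set of neighbors (fixed undirected topology).
    succ: node -> set of neighbors its links currently point to."""
    # Per-node memory: which neighbors reversed toward u since u's last reversal.
    reversed_toward = {u: set() for u in nbrs}
    for _ in range(max_steps):
        sinks = [u for u in nbrs if u != dest and not succ[u]]
        if not sinks:
            return succ                      # destination-oriented
        u = sinks[0]
        targets = nbrs[u] - reversed_toward[u]
        if not targets:                      # every neighbor already reversed
            targets = set(nbrs[u])           # ... so reverse all links
        for v in targets:                    # edge v -> u becomes u -> v
            succ[v].discard(u)
            succ[u].add(v)
            reversed_toward[v].add(u)
        reversed_toward[u] = set()
    raise RuntimeError("did not stabilize")

# Example: four nodes, destination 'd', with 'c' initially a sink.
nbrs = {'a': {'b', 'c'}, 'b': {'a', 'c'}, 'c': {'a', 'b', 'd'}, 'd': {'c'}}
succ = {'a': {'b', 'c'}, 'b': {'c'}, 'c': set(), 'd': {'c'}}
final = run_pr(nbrs, succ, 'd')
```

Note that `reversed_toward` is the algorithm's own per-node memory; no external node or edge labels are needed to execute the rule, which is the setting in which the abstract's label-free acyclicity proof operates.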