Towards Optimal Synchronous Counting
Consider a complete communication network of n nodes, where the nodes receive a common clock pulse. We study the synchronous c-counting problem: given any starting state and up to f faulty nodes with arbitrary behaviour, the task is to eventually have all correct nodes counting modulo c in agreement. Thus, we are considering algorithms that are self-stabilizing despite Byzantine failures. In this work, we give new algorithms for the synchronous counting problem that (1) are deterministic, (2) have linear stabilisation time in f, (3) use a small number of states, and (4) achieve almost-optimal resilience. Prior algorithms either resort to randomisation, use a large number of states, or have poor resilience. In particular, we achieve an exponential improvement in the space complexity of deterministic algorithms, while still achieving linear stabilisation time and almost-linear resilience.
Comment: 17 pages, 2 figures
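The c-counting specification can be made concrete with a short sketch: the checker below scans a trace of per-pulse outputs and reports the first pulse from which all correct nodes agree and count modulo c. The trace format, the function name, and the example values are illustrative, not from the paper.

```python
def stabilised_round(trace, c, faulty):
    """Return the first pulse from which all correct nodes agree and
    count modulo c until the end of the trace, or None if the trace
    never stabilises.

    trace[t][i] is node i's output at clock pulse t; `faulty` is the
    set of node indices whose (Byzantine) outputs are ignored.
    """
    n_rounds = len(trace)
    correct = [i for i in range(len(trace[0])) if i not in faulty]
    for start in range(n_rounds):
        ok = True
        for t in range(start, n_rounds):
            vals = {trace[t][i] for i in correct}
            # all correct nodes must agree on one value...
            if len(vals) != 1:
                ok = False
                break
            # ...and successive values must increment modulo c
            if t > start and (trace[t][correct[0]]
                              - trace[t - 1][correct[0]]) % c != 1:
                ok = False
                break
        if ok:
            return start
    return None
```

For instance, with node 2 Byzantine and c = 2, a trace whose correct columns read 0,1,0,1,... from pulse 1 onward yields `stabilised_round(trace, 2, {2}) == 1`.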
Interoceptive Ingredients of Body Ownership: Affective Touch and Cardiac Awareness in the Rubber Hand Illusion
This document is the Accepted Manuscript version of the following article: Laura Crucianelli, Charlotte Krahe, Paul M. Jenkinson, Aikaterini (Katerina) Fotopoulou, 'Interoceptive Ingredients of Body Ownership: Affective Touch and Cardiac Awareness in the Rubber Hand Illusion', Cortex, first published online 1 May 2017, available at doi: https://doi.org/10.1016/j.cortex.2017.04.018. © 2017 Elsevier Ltd. All rights reserved.

The sense of body ownership represents a fundamental aspect of bodily self-consciousness. Using multisensory integration paradigms, recent studies have shown that both exteroceptive and interoceptive information contribute to our sense of body ownership. Interoception refers to the physiological sense of the condition of the body, including afferent signals that originate inside the body and outside the body. However, it remains unclear whether individual sensitivity to interoceptive modalities is unitary or differs between modalities. It is also unclear whether the effect of interoceptive information on body ownership is caused by exteroceptive ‘visual capture’ of these modalities, or by bottom-up processing of interoceptive information. This study aimed to test these questions in two separate samples. In the first experiment (N = 76), we examined the relationship between two different interoceptive modalities, namely cardiac awareness based on a heartbeat counting task, and affective touch perception based on stimulation of a specialized C tactile (CT) afferent system. This is an interoceptive modality of affective and social significance. In a second experiment (N = 63), we explored whether ‘off-line’ trait interoceptive sensitivity based on a heartbeat counting task would modulate the extent to which CT affective touch influences the multisensory process during the rubber hand illusion (RHI). We found that affective touch enhanced the subjective experience of body ownership during the RHI.
Nevertheless, interoceptive sensitivity, as measured by a heartbeat counting task, did not modulate this effect, nor did it relate to the perception of ownership or of CT-optimal affective touch more generally. By contrast, this trait measure of interoceptive sensitivity appeared most relevant when the multisensory context of interoception was ambiguous, suggesting that the perception of interoceptive signals and their effects on body ownership may depend on individual abilities to regulate the balance of interoception and exteroception in given contexts. Peer reviewed. Final Accepted Version.
Synchronous Counting and Computational Algorithm Design
Consider a complete communication network on n nodes, each of which is a state machine. In synchronous 2-counting, the nodes receive a common clock pulse and they have to agree on which pulses are "odd" and which are "even". We require that the solution is self-stabilising (reaching the correct operation from any initial state) and that it tolerates f Byzantine failures (nodes that send arbitrary misinformation). Prior algorithms are expensive to implement in hardware: they require a source of random bits or a large number of states. This work consists of two parts. In the first part, we use computational techniques (often known as synthesis) to construct very compact deterministic algorithms for the first non-trivial case of f = 1. While no algorithm exists for f = 1 and n = 3, we show that as few as 3 states per node are sufficient for all values n ≥ 4. Moreover, the problem cannot be solved with only 2 states per node for n = 4, but there is a 2-state solution for all values n ≥ 6. In the second part, we develop and compare two different approaches for synthesising synchronous counting algorithms. Both approaches are based on casting the synthesis problem as a propositional satisfiability (SAT) problem and employing modern SAT solvers. The difference lies in how the SAT problem is solved: either directly, or incrementally within a counterexample-guided abstraction refinement loop. Empirical results suggest that the former technique is more efficient if we want to synthesise time-optimal algorithms, while the latter technique discovers non-optimal algorithms more quickly.
Comment: 35 pages, extended and revised version
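The synthesis idea can be illustrated at toy scale without a SAT solver: enumerate candidate transition tables and keep those that meet the specification from every initial state. Brute-force enumeration stands in here for the paper's SAT encoding and CEGAR loop, and the single-node, fault-free "blinker" specification is purely illustrative.

```python
from itertools import product

def blinks(delta, s0, horizon=8):
    """Check that, started in state s0, the machine eventually
    alternates its state modulo 2 -- self-stabilising 2-counting
    for a single fault-free node whose output is its state."""
    s, seen = s0, []
    for _ in range(horizon):
        seen.append(s)
        s = delta[s]
    tail = seen[horizon // 2:]          # only the tail must alternate
    return all((b - a) % 2 == 1 for a, b in zip(tail, tail[1:]))

def synthesise():
    """Enumerate every 2-state transition table delta (delta[s] is the
    successor of state s) and return those satisfying the spec from
    all initial states.  Exhaustive search stands in for SAT here."""
    return [delta for delta in product(range(2), repeat=2)
            if all(blinks(delta, s0) for s0 in (0, 1))]
```

As expected, the only table surviving the check is the one that flips the state on every pulse, delta = (1, 0); the real synthesis problem differs in having n communicating nodes and a Byzantine adversary, which is what makes SAT encodings worthwhile.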
Fast and compact self-stabilizing verification, computation, and fault detection of an MST
This paper demonstrates the usefulness of distributed local verification of proofs, as a tool for the design of self-stabilizing algorithms. In particular, it introduces a somewhat generalized notion of distributed local proofs, and utilizes it for improving the time complexity significantly, while maintaining space optimality. As a result, we show that optimizing the memory size carries at most a small cost in terms of time, in the context of Minimum Spanning Tree (MST). That is, we present algorithms that are both time and space efficient for both constructing an MST and for verifying it. This involves several parts that may be considered contributions in themselves. First, we generalize the notion of local proofs, trading off the time complexity for memory efficiency. This adds a dimension to the study of distributed local proofs, which has been gaining attention recently. Specifically, we design a (self-stabilizing) proof labeling scheme which is memory optimal (i.e., O(log n) bits per node), and whose time complexity is O(log^2 n) in synchronous networks, or O(Δ log^3 n) time in asynchronous ones, where Δ is the maximum degree of nodes. This answers an open problem posed by Awerbuch and Varghese (FOCS 1991). We also show that Ω(log n) time is necessary, even in synchronous networks. Another property is that if f faults occurred, then, within the required detection time above, they are detected by some node in the O(log n) locality of each of the faults. Second, we show how to enhance a known transformer that makes input/output algorithms self-stabilizing. It now takes as input an efficient construction algorithm and an efficient self-stabilizing proof labeling scheme, and produces an efficient self-stabilizing algorithm. When used for MST, the transformer produces a memory optimal self-stabilizing algorithm, whose time complexity, namely O(n), is significantly better even than that of previous algorithms. (The time complexity of previous MST algorithms that used Ω(log^2 n) memory bits per node was O(n^2), and the time for optimal space algorithms was O(n|E|).) Inherited from our proof labeling scheme, our self-stabilizing MST construction algorithm also has the following two properties: (1) if faults occur after the construction ended, then they are detected by some nodes within O(log^2 n) time in synchronous networks, or within O(Δ log^3 n) time in asynchronous ones, and (2) if f faults occurred, then, within the required detection time above, they are detected within the O(log n) locality of each of the faults. We also show how to improve the above two properties, at the expense of some increase in the memory.
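As a rough illustration of the proof-labeling idea (a minimal sketch, not the paper's MST scheme): label every node of a rooted tree with its distance to the root, and let each node compare its label only with its parent's. Verifying minimality of an MST requires considerably richer labels; only the "local check, global guarantee" flavour is kept here, and all names are illustrative.

```python
def locally_verify(parent, dist):
    """Each node inspects only its own label and its parent's: a root
    (parent[v] == v) must carry dist 0, and every other node must sit
    exactly one hop below its parent.  Returns the set of nodes that
    raise an alarm; an empty set means no local check failed."""
    alarms = set()
    for v in parent:
        if parent[v] == v:
            if dist[v] != 0:
                alarms.add(v)
        elif dist[v] != dist[parent[v]] + 1:
            alarms.add(v)
    return alarms
```

The appeal, as in the abstract, is that a corrupted label is flagged by some node near the fault: corrupting one distance makes either that node or one of its children fail its purely local test.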
The Parallelism Motifs of Genomic Data Analysis
Genomic data sets are growing dramatically as the cost of sequencing
continues to decline and small sequencing devices become available. Enormous
community databases store and share this data with the research community, but
some of these genomic data analysis problems require large scale computational
platforms to meet both the memory and computational requirements. These
applications differ from scientific simulations that dominate the workload on
high end parallel systems today and place different requirements on programming
support, software libraries, and parallel architectural design. For example,
they involve irregular communication patterns such as asynchronous updates to
shared data structures. We consider several problems in high performance
genomics analysis, including alignment, profiling, clustering, and assembly for
both single genomes and metagenomes. We identify some of the common
computational patterns or motifs that help inform parallelization strategies
and compare our motifs to some of the established lists, arguing that at least
two key patterns, sorting and hashing, are missing.
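The hashing motif appears in its simplest form in k-mer counting, sketched below; in a distributed setting these table updates become the asynchronous inserts into shared data structures that the abstract highlights. The function is an illustrative sketch, not code from the paper.

```python
from collections import Counter

def kmer_counts(reads, k):
    """Count all length-k substrings (k-mers) across a set of reads
    using a hash table -- the 'hashing' motif of genomic analysis."""
    counts = Counter()
    for read in reads:
        for i in range(len(read) - k + 1):
            counts[read[i:i + k]] += 1
    return counts
```

For example, `kmer_counts(["ACGTACG"], 3)` maps ACG to 2 and CGT, GTA, TAC to 1 each; at scale, distributing this table across nodes is exactly where irregular, asynchronous communication arises.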
Brain-Switches for Asynchronous Brain–Computer Interfaces: A Systematic Review
A brain–computer interface (BCI) has been extensively studied to develop a novel communication system for disabled people using their brain activities. An asynchronous BCI system is more realistic and practical than a synchronous BCI system in that BCI commands can be generated whenever the user wants. However, the relatively low performance of an asynchronous BCI system is problematic because redundant BCI commands are required to correct false-positive operations. To significantly reduce the number of false-positive operations of an asynchronous BCI system, a two-step approach has been proposed using a brain-switch that first determines whether the user wants to use an asynchronous BCI system before the operation of the asynchronous BCI system. This study presents a systematic review of the state-of-the-art brain-switch techniques and future research directions. To this end, we reviewed brain-switch research articles published from 2000 to 2019 in terms of their (a) neuroimaging modality, (b) paradigm, (c) operation algorithm, and (d) performance.
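The two-step approach can be sketched as a simple gate in front of a command classifier; the scores, the threshold, and the names below are illustrative, not from any reviewed system.

```python
def two_step_decide(switch_score, command_scores, gate=0.8):
    """First the brain-switch decides whether the user intends to issue
    a command at all; only then is the command classifier consulted.

    switch_score  -- the brain-switch's confidence that the user is active
    command_scores -- per-command classifier scores
    Returns the index of the chosen command, or None when the switch
    judges the user idle (suppressing a potential false positive)."""
    if switch_score < gate:
        return None                      # idle: emit no command at all
    # active: act on the strongest command score
    return max(range(len(command_scores)), key=command_scores.__getitem__)
```

The benefit described in the abstract falls out directly: while the user is idle, spurious command-classifier activity never reaches the output, so no corrective commands are needed.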
Survey-propagation decimation through distributed local computations
We discuss the implementation of two distributed solvers of the random K-SAT
problem, based on some development of the recently introduced
survey-propagation (SP) algorithm. The first solver, called the "SP diffusion
algorithm", diffuses as dynamical information the maximum bias over the system,
so that variable nodes can decide to freeze in a self-organized way, each
variable making its decision on the basis of purely local information. The
second solver, called the "SP reinforcement algorithm", makes use of
time-dependent external forcing messages on each variable, which let the
variables get completely polarized in the direction of a solution at the end of
a single convergence. Both methods allow us to find a solution of the random
3-SAT problem in a range of parameters comparable with the best previously
described serialized solvers. The simulated time of convergence towards a
solution (if these solvers were implemented on a distributed device) grows as
log(N).
Comment: 18 pages, 10 figures
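A much-simplified stand-in can illustrate the reinforcement idea: give each variable a cumulative real-valued bias (the "external forcing"), read the current assignment off the signs of the biases, and nudge the variables of unsatisfied clauses toward a satisfying polarity. Real survey propagation passes survey messages along the factor graph; only the polarisation-by-forcing idea survives in this sketch, and all names, parameters, and the instance below are illustrative.

```python
import random

def reinforce_solve(clauses, n_vars, rounds=2000, step=0.1, seed=0):
    """Toy 'reinforcement' SAT heuristic (not the SP algorithm itself).

    Clauses are lists of nonzero ints in DIMACS style: literal v means
    variable abs(v) is true, -v means it is false.  Returns a list of
    booleans satisfying all clauses, or None if no solution is found
    within the round budget."""
    rng = random.Random(seed)
    bias = [rng.uniform(-1, 1) for _ in range(n_vars)]
    for _ in range(rounds):
        assign = [b > 0 for b in bias]       # assignment = sign of forcing
        unsat = [c for c in clauses
                 if not any(assign[abs(l) - 1] == (l > 0) for l in c)]
        if not unsat:
            return assign
        for c in unsat:                      # polarise toward satisfaction
            l = rng.choice(c)
            bias[abs(l) - 1] += step if l > 0 else -step
    return None
```

Because the forcing is cumulative, variables gradually become fully polarised, mirroring (very loosely) how the SP reinforcement algorithm drives all variables toward a single solution within one convergence.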