The Relative Power of Composite Loop Agreement Tasks
Loop agreement is a family of wait-free tasks that includes set agreement and
simplex agreement, and was used to prove the undecidability of wait-free
solvability of distributed tasks by read/write memory. Herlihy and Rajsbaum
defined the algebraic signature of a loop agreement task, which consists of a
group and a distinguished element. They used the algebraic signature to
characterize the relative power of loop agreement tasks. In particular, they
showed that one task implements another exactly when there is a homomorphism
between their respective signatures sending one distinguished element to the
other. In this paper, we extend the previous result by defining the composition
of multiple loop agreement tasks to create a new one with the same combined
power. We generalize the original algebraic characterization of relative power
to compositions of tasks. In this way, we can think of loop agreement tasks in
terms of their basic building blocks. We also investigate a category-theoretic
perspective of loop agreement by defining a category of loops, showing that the
algebraic signature is a functor, and proving that our definition of task
composition is the "correct" one, in a categorical sense.
Comment: 18 pages
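As a hedged formal restatement of that characterization (the notation $(G_L, g_L)$ for the signature of a loop agreement task $L$, and the homomorphism name $\varphi$, are assumed here for illustration; the direction follows the abstract's phrasing):
\[
  L_1 \ \text{implements}\ L_2
  \iff
  \exists\ \text{a group homomorphism}\ \varphi : G_{L_1} \to G_{L_2}
  \ \text{with}\ \varphi(g_{L_1}) = g_{L_2}.
\]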
Tight Bounds for Connectivity and Set Agreement in Byzantine Synchronous Systems
In this paper, we show that the protocol complex of a Byzantine synchronous
system can remain $(k-1)$-connected for up to $\lceil t/k \rceil$ rounds,
where $t$ is the maximum number of Byzantine processes and $t \ge k \ge 1$.
This topological property implies that $\lceil t/k \rceil + 1$ rounds are
necessary to solve $k$-set agreement in Byzantine synchronous systems, compared
to $\lfloor t/k \rfloor + 1$ rounds in synchronous crash-failure systems. We
also show that our connectivity bound is tight, as we indicate solutions to
Byzantine $k$-set agreement in exactly $\lceil t/k \rceil + 1$ synchronous
rounds, at least when $n$ is suitably large compared to $t$. In conclusion, we
see how Byzantine failures can potentially require one extra round to solve
$k$-set agreement, and, for $n$ suitably large compared to $t$, at most that
one extra round.
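For a concrete instance (the numbers below are chosen for illustration, not taken from the abstract): with $t = 5$ Byzantine processes and $k = 2$, the bounds above give
\[
  \left\lceil \tfrac{5}{2} \right\rceil + 1 = 4 \ \text{rounds (Byzantine)}
  \qquad\text{versus}\qquad
  \left\lfloor \tfrac{5}{2} \right\rfloor + 1 = 3 \ \text{rounds (crash failures)},
\]
so Byzantine failures cost exactly one extra round whenever $k$ does not divide $t$, and no extra round when it does.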
Well-Structured Futures and Cache Locality
In fork-join parallelism, a sequential program is split into a directed
acyclic graph of tasks linked by directed dependency edges, and the tasks are
executed, possibly in parallel, in an order consistent with their dependencies.
A popular and effective way to extend fork-join parallelism is to allow threads
to create futures. A thread creates a future to hold the results of a
computation, which may or may not be executed in parallel. That result is
returned when some thread touches that future, blocking if necessary until the
result is ready.
Recent research has shown that while futures can, of course, enhance
parallelism in a structured way, they can have a deleterious effect on cache
locality. In the worst case, futures can incur $\Omega(PT_\infty + tT_\infty)$
deviations, which implies $\Omega(CPT_\infty + CtT_\infty)$ additional cache
misses, where $C$ is the number of cache lines, $P$ is the number of
processors, $t$ is the number of touches, and $T_\infty$ is the
\emph{computation span}. Since cache locality has a large impact on software
performance on modern multicores, this result is troubling.
In this paper, however, we show that if futures are used in a simple,
disciplined way, then the situation is much better: if each future is touched
only once, either by the thread that created it, or by a thread to which the
future has been passed from the thread that created it, then parallel
executions with work stealing can incur at most $O(CPT_\infty^2)$ additional
cache misses, a substantial improvement. This structured use of futures is
characteristic of many (but not all) parallel applications.
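A rough sketch of that single-touch discipline in everyday code (Python's concurrent.futures is used here only as a stand-in for the paper's fork-join runtime, and the function names are invented for the example):

    # Minimal sketch: the future is touched exactly once, by the thread that
    # created it (it could equally be handed to one other thread and touched there).
    from concurrent.futures import ThreadPoolExecutor

    def simulate(x):
        return x * x  # stands in for an expensive parallel computation

    with ThreadPoolExecutor() as pool:
        fut = pool.submit(simulate, 21)  # create the future
        answer = fut.result()            # the single touch: blocks until ready
        print(answer)                    # no other thread ever touches fut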
An Empirical Study of Speculative Concurrency in Ethereum Smart Contracts
We use historical data to estimate the potential benefit of speculative techniques for executing Ethereum smart contracts in parallel. We replay transaction traces of sampled blocks from the Ethereum blockchain over time, using a simple speculative execution engine. In this engine, miners attempt to execute all transactions in a block in parallel, rolling back those that cause data conflicts. Aborted transactions are then executed sequentially. Validators execute the same schedule as miners.
We find that our speculative technique yields estimated speed-ups starting at about 8-fold in 2016, declining to about 2-fold at the end of 2017, where speed-up is measured using either gas costs or instruction counts. We also observe that a small set of contracts is responsible for many data conflicts resulting from speculative concurrent execution.
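A hedged sketch of the replay strategy described above (the transaction and state representations, and the helper names run_tx and speculative_block, are invented for illustration; this is not the paper's engine):

    # Phase 1: run every transaction in parallel against the pre-block state.
    # Phase 2: commit in block order, aborting any transaction whose read or
    #          write set overlaps a location written by an earlier committed one.
    # Phase 3: rerun the aborted transactions sequentially.
    from concurrent.futures import ThreadPoolExecutor

    def run_tx(tx, snapshot):
        # Pretend interpreter: a real engine would execute EVM bytecode against
        # `snapshot`. Here a transaction just declares what it reads and writes.
        return set(tx["reads"]), dict(tx["writes"])

    def speculative_block(txs, state):
        with ThreadPoolExecutor() as pool:
            results = list(pool.map(lambda tx: run_tx(tx, dict(state)), txs))

        written, aborted = set(), []
        for tx, (reads, writes) in zip(txs, results):
            wkeys = set(writes)
            if (reads | wkeys) & written:
                aborted.append(tx)        # data conflict: roll back, retry later
            else:
                state.update(writes)      # commit the speculative result
                written |= wkeys

        for tx in aborted:                # sequential fallback pass
            _, writes = run_tx(tx, state)
            state.update(writes)
        return state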
Distributed Computability in Byzantine Asynchronous Systems
In this work, we extend the topology-based approach for characterizing
computability in asynchronous crash-failure distributed systems to asynchronous
Byzantine systems. We give the first theorem with necessary and sufficient
conditions to solve arbitrary tasks in asynchronous Byzantine systems where an
adversary chooses faulty processes. In our adversarial formulation, outputs of
non-faulty processes are constrained in terms of inputs of non-faulty processes
only. For colorless tasks, an important subclass of distributed problems, the
general result reduces to an elegant model that effectively captures the
relation between the number of processes, the number of failures, as well as
the topological structure of the task's simplicial complexes.
Comment: Will appear in the Proceedings of the 46th Annual Symposium on the
Theory of Computing, STOC 2014