Synthesizing Functional Reactive Programs
Functional Reactive Programming (FRP) is a paradigm that has simplified the
construction of reactive programs. Many libraries implement incarnations of
FRP, using abstractions such as Applicative functors, Monads, and Arrows.
However, finding a good control flow that correctly manages state and
switches behaviors at the right times still poses a major challenge to
developers. An attractive alternative is specifying the behavior instead of
programming it, as made possible by the recently developed Temporal Stream
Logic (TSL). However, it has not been explored so far how Control Flow
Models (CFMs), as synthesized from TSL specifications, can be turned into
executable code that is compatible with libraries building on FRP. We bridge
this gap by showing that CFMs are indeed a suitable formalism to be turned
into Applicative, Monadic, and Arrowized FRP. We demonstrate the effectiveness
of our translations on a real-world kitchen timer application, which we
translate to a desktop application using the Arrowized FRP library Yampa, to a
web application using the Monadic threepenny-gui library, and to hardware
using the Applicative hardware description language ClaSH.
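As a hint of what the Arrowized target looks like, here is a minimal,
hand-written Yampa-style sketch of a countdown that switches into a ringing
behavior; the signal functions and the fixed 60-second duration are
illustrative assumptions, not the CFM synthesized in the paper:

    import FRP.Yampa

    -- Remaining time of a countdown that starts at t0 seconds (illustrative).
    countdown :: Double -> SF a Double
    countdown t0 = constant (-1) >>> integral >>> arr (\d -> max 0 (t0 + d))

    -- Switch behaviors when the countdown reaches zero: output False while
    -- counting, then switch into a constant "ringing" behavior. Managing
    -- this kind of switch correctly is what a synthesized CFM has to encode.
    kitchenTimer :: SF a Bool
    kitchenTimer =
      switch (countdown 60 >>> (arr (const False) &&& (arr (<= 0) >>> edge)))
             (\() -> constant True)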
Generalized Paxos Made Byzantine (and Less Complex)
One of the most recent members of the Paxos family of protocols is
Generalized Paxos. This variant of Paxos departs from the original
specification of consensus by allowing a weaker safety condition, where
different processes can have different views on the sequence being agreed
upon. However, much like its original Paxos counterpart, Generalized Paxos
does not have a simple implementation. Furthermore, with the recent practical
adoption of Byzantine fault tolerant protocols, it is timely and important to
understand how Generalized Paxos can be implemented in the Byzantine model.
In this paper, we make two main contributions. First, we provide a
description of Generalized Paxos that is easier to understand, based on a
simpler specification and on pseudocode for a solution that can be readily
implemented. Second, we extend the protocol to the Byzantine fault model.
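To make the weaker safety condition concrete, here is a small illustrative
sketch (our own modelling, not the paper's pseudocode) for the instantiation
where learned values are command sequences and two values are compatible
when one is a prefix of the other:

    import Data.List (isPrefixOf)

    -- Learned values (c-structs) modelled as plain command sequences,
    -- ignoring command commutativity for simplicity.
    type Cmd = String
    type CStruct = [Cmd]

    -- Two learned values are compatible when one extends the other.
    compatible :: CStruct -> CStruct -> Bool
    compatible xs ys = xs `isPrefixOf` ys || ys `isPrefixOf` xs

    -- Generalized consensus safety: learners may learn different values,
    -- as long as every pair of learned values is compatible.
    safe :: [CStruct] -> Bool
    safe learned = and [ compatible a b | a <- learned, b <- learned ]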
On Thin Air Reads: Towards an Event Structures Model of Relaxed Memory
To model relaxed memory, we propose confusion-free event structures over an
alphabet with a justification relation. Executions are modeled by justified
configurations, where every read event has a justifying write event.
Justification alone is too weak a criterion, since it allows cycles of the kind
that result in so-called thin-air reads. Acyclic justification forbids such
cycles, but also invalidates event reorderings that result from compiler
optimizations and dynamic instruction scheduling. We propose the notion of
well-justification, based on a game-like model, which strikes a middle ground.
We show that well-justified configurations satisfy the DRF theorem: in any
data-race free program, all well-justified configurations are sequentially
consistent. We also show that rely-guarantee reasoning is sound for
well-justified configurations, but not for justified configurations. For
example, well-justified configurations are type-safe.
Well-justification allows many, but not all, reorderings performed by relaxed
memory. In particular, it fails to validate the commutation of independent
reads. We discuss variations that may address these shortcomings.
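The cycle that acyclic justification rules out can be made explicit on the
standard out-of-thin-air litmus test; the encoding below is our own small
illustration, not the paper's event-structure semantics:

    import Data.List (nub)

    -- Events of the classic thin-air candidate execution of
    --   Thread 1: r1 := x; y := r1     Thread 2: r2 := y; x := r2
    -- in which both threads read the value 42 "out of thin air".
    data Ev = ReadX | WriteY | ReadY | WriteX deriving (Eq, Show)

    -- Each read's only possible justifying write, plus the program-order
    -- dependence from each read to the write it feeds.
    justification, dependence :: [(Ev, Ev)]
    justification = [(WriteX, ReadX), (WriteY, ReadY)]
    dependence    = [(ReadX, WriteY), (ReadY, WriteX)]

    -- Acyclic justification rejects this execution: the union of the two
    -- relations has the cycle ReadX -> WriteY -> ReadY -> WriteX -> ReadX.
    hasCycle :: [(Ev, Ev)] -> Bool
    hasCycle es = any (\v -> reaches v v) (nub (map fst es))
      where
        reaches src tgt = go [src] []
          where
            go [] _ = False
            go (v:vs) seen
              | v `elem` seen = go vs seen
              | otherwise     =
                  let next = [ w | (u, w) <- es, u == v ]
                  in  tgt `elem` next || go (next ++ vs) (v : seen)

    -- hasCycle (justification ++ dependence) == True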
MDCC: Multi-Data Center Consistency
Replicating data across multiple data centers not only allows moving the data
closer to the user, thus reducing latency for applications, but also increases
availability in the event of a data center failure. It is therefore not
surprising that companies like Google, Yahoo, and Netflix already replicate
user data across geographically different regions.
  However, replication across data centers is expensive. Inter-data center
network delays are in the hundreds of milliseconds and vary significantly.
Synchronous wide-area replication with strong consistency is therefore
considered infeasible, and current solutions either settle for asynchronous
replication, which implies the risk of losing data in the event of failures,
restrict consistency to small partitions, or give up on consistency entirely.
With MDCC (Multi-Data Center Consistency), we describe the first optimistic
commit protocol that requires neither a master nor partitioning and is
strongly consistent at a cost similar to that of eventually consistent
protocols. MDCC can commit transactions in a single round trip across data
centers in the normal operational case. We further propose a new programming
model that empowers the application developer to handle the longer and
unpredictable latencies caused by inter-data center communication. Our
evaluation using the TPC-W benchmark, with MDCC deployed across 5
geographically diverse data centers, shows that MDCC achieves throughput and
latency similar to eventually consistent quorum protocols and can sustain a
data center outage without a significant impact on response times, while
guaranteeing strong consistency.
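As a rough illustration of the single-round-trip fast path, consider the
quorum arithmetic below; the fast-quorum size follows the usual Fast Paxos
rule and is our assumption for illustration, since the abstract itself does
not state quorum sizes:

    -- Fast-quorum size for n replicas under a Fast Paxos-style rule
    -- (assumed here; e.g. 4 out of 5 data centers).
    fastQuorumSize :: Int -> Int
    fastQuorumSize n = n - (n `div` 4)

    -- The fast path commits a transaction as soon as a fast quorum of data
    -- centers has accepted it within a single wide-area round trip.
    fastPathCommitted :: Int -> Int -> Bool
    fastPathCommitted nDataCenters acks = acks >= fastQuorumSize nDataCenters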
Polly's Polyhedral Scheduling in the Presence of Reductions
The polyhedral model provides a powerful mathematical abstraction to enable
effective optimization of loop nests with respect to a given optimization
goal, e.g., exploiting parallelism. Unexploited reduction properties are a
frequent reason for polyhedral optimizers to assume parallelism-prohibiting
dependences. To our knowledge, no polyhedral loop optimizer available in any
production compiler provides support for reductions. In this paper, we show
that leveraging the parallelism of reductions can lead to a significant
performance increase. We give a precise, dependence-based definition of
reductions and discuss ways to extend polyhedral optimization to exploit the
associativity and commutativity of reduction computations. We have
implemented a reduction-enabled scheduling approach in the Polly polyhedral
optimizer and evaluate it on the standard Polybench 3.2 benchmark suite. We
were able to detect and model all 52 arithmetic reductions and achieve
speedups of up to 2.21x on a quad-core machine by exploiting the
multidimensional reduction in the BiCG benchmark.
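Polly operates on C loop nests, but the property it exploits can be sketched
abstractly: an associative and commutative reduction may be re-associated
into independent chunks. The sketch below is illustrative only (and recall
that for floating-point data this reassociation changes rounding, which is
why the optimizer must prove or be told the reduction properties):

    import Data.List (foldl')

    -- Sequential reduction: the loop-carried dependence on the accumulator
    -- appears to serialize the computation.
    sumSeq :: [Double] -> Double
    sumSeq = foldl' (+) 0

    -- Because (+) is treated as associative and commutative, the reduction
    -- can be split into independent chunks (chunk size k >= 1) that may run
    -- in parallel and are combined afterwards; this is what a
    -- reduction-aware scheduler exploits.
    sumChunked :: Int -> [Double] -> Double
    sumChunked k xs = sumSeq (map sumSeq (chunks k xs))
      where
        chunks _ [] = []
        chunks n ys = let (a, b) = splitAt n ys in a : chunks n b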
Tropical Limits of Probability Spaces, Part I: The Intrinsic Kolmogorov-Sinai Distance and the Asymptotic Equipartition Property for Configurations
The entropy of a finite probability space $X$ measures the observable
cardinality of large independent products $X^{\otimes n}$ of the probability
space. If two probability spaces $X$ and $Y$ have the same entropy, there is
an almost measure-preserving bijection between large parts of $X^{\otimes n}$
and $Y^{\otimes n}$. In this way, $X$ and $Y$ are asymptotically equivalent.
It turns out to be challenging to generalize this notion of asymptotic
equivalence to configurations of probability spaces, which are collections of
probability spaces with measure-preserving maps between some of them.
In this article we introduce the intrinsic Kolmogorov-Sinai distance on the
space of configurations of probability spaces. Concentrating on the large-scale
geometry we pass to the asymptotic Kolmogorov-Sinai distance. It induces an
asymptotic equivalence relation on sequences of configurations of probability
spaces. We will call the equivalence classes \emph{tropical probability
spaces}.
In this context we prove an Asymptotic Equipartition Property for
configurations. It states that tropical configurations can always be
approximated by homogeneous configurations. In addition, we show that the
solutions to certain Information-Optimization problems are
Lipschitz-continuous with respect to the asymptotic Kolmogorov-Sinai
distance. It follows from these two statements that in order to solve an
Information-Optimization problem, it suffices to consider homogeneous
configurations.
Finally, we show that spaces of trajectories of length $n$ of certain
stochastic processes, in particular stationary Markov chains, have a tropical
limit.
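For orientation, the classical single-space Asymptotic Equipartition Property
(standard background, not the configuration version proved here) says that
for a finite probability space $(X, p)$ with entropy
$H(X) = -\sum_{x \in X} p(x)\log_2 p(x)$,
$$
\Pr\Big[\,\Big|-\tfrac{1}{n}\log_2 p(x_1,\dots,x_n) - H(X)\Big| >
\varepsilon\Big] \xrightarrow[n\to\infty]{} 0,
$$
so the product space concentrates on roughly $2^{nH(X)}$ nearly equiprobable
outcomes; the configuration version proved here states instead that tropical
configurations can always be approximated by homogeneous ones.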
Theoretical framework for quantum networks
We present a framework to treat quantum networks and all possible
transformations thereof, including as special cases all possible manipulations
of quantum states, measurements, and channels, such as, e.g., cloning,
discrimination, estimation, and tomography. Our framework is based on two
concepts: the quantum comb, which describes all transformations achievable by
a given quantum network, and the link product, the operation of connecting
two quantum networks. Quantum networks are treated both from a constructive
point of view, based on connections of elementary circuits, and from an
axiomatic one, based on a hierarchy of admissible quantum maps. In the
axiomatic context, a fundamental property is shown, which we call
universality of quantum memory channels: any admissible transformation of
quantum networks can be realized by a suitable sequence of memory channels.
We pose the open problem of whether this property fails for some nonquantum
theory, e.g., for no-signaling boxes.
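In the Choi-operator picture used in this framework, connecting two networks
amounts to taking their link product; schematically, for Choi operators $M$
and $N$ that share the connected space $\mathcal H$ (our notation, sketching
the standard definition),
$$
M \ast N \;=\; \operatorname{Tr}_{\mathcal H}\!\big[(M \otimes I)\,
(I \otimes N^{T_{\mathcal H}})\big],
$$
where $T_{\mathcal H}$ denotes partial transposition on the connected space
and the identities act on the remaining, unconnected spaces.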