Reliable Quantum Computers
The new field of quantum error correction has developed spectacularly since
its origin less than two years ago. Encoded quantum information can be
protected from errors that arise due to uncontrolled interactions with the
environment. Recovery from errors can work effectively even if occasional
mistakes occur during the recovery procedure. Furthermore, encoded quantum
information can be processed without serious propagation of errors. Hence, an
arbitrarily long quantum computation can be performed reliably, provided that
the average probability of error per quantum gate is less than a certain
critical value, the accuracy threshold. A quantum computer storing about 10^6
qubits, with a probability of error per quantum gate of order 10^{-6}, would be
a formidable factoring engine. Even a smaller, less accurate quantum computer
would be able to perform many useful tasks. (This paper is based on a talk
presented at the ITP Conference on Quantum Coherence and Decoherence, 15-18
December 1996.)
Comment: 24 pages, LaTeX, submitted to Proc. Roy. Soc. Lond. A, minor
correction
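The accuracy-threshold claim above can be made concrete with the standard concatenated-coding estimate p_L ≈ p_th (p/p_th)^(2^L), a minimal textbook sketch; the formula, the function name, and the threshold value 10^{-4} used here are illustrative assumptions, not taken from this paper:

```python
# Hypothetical illustration of the accuracy-threshold idea: under
# recursive (concatenated) encoding, the logical error rate is commonly
# modelled as p_L = p_th * (p / p_th) ** (2 ** L), so any physical error
# rate p below the threshold p_th can be suppressed to arbitrarily small
# values by adding concatenation levels L.

def logical_error_rate(p, p_th=1e-4, levels=1):
    """Standard concatenation estimate (model assumption, not from the paper)."""
    return p_th * (p / p_th) ** (2 ** levels)

below = [logical_error_rate(1e-5, levels=k) for k in range(1, 4)]
above = [logical_error_rate(1e-3, levels=k) for k in range(1, 4)]

# Below threshold, errors shrink doubly exponentially in the number of
# levels; above threshold, concatenation only makes things worse.
assert below[0] > below[1] > below[2]
assert above[0] < above[1] < above[2]
```

This captures why a single critical value separates "arbitrarily long reliable computation" from no benefit at all.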
Reliable scientific service compositions
Abstract. Distributed service-oriented architectures (SOAs) are increasingly used by users who are insufficiently skilled in the art of distributed system programming. A good example is computational scientists who build large-scale distributed systems using service-oriented Grid computing infrastructures. Computational scientists use these infrastructures to build scientific applications, which are composed from basic Web services into larger orchestrations using workflow languages such as the Business Process Execution Language (BPEL). For these users, reliability of the infrastructure is of significant importance, and it has to be provided in the presence of hardware or operational failures. The primitives available to achieve such reliability currently leave much to be desired by users who do not necessarily have a strong education in distributed system construction. We characterise scientific service compositions and the environment they operate in by introducing the notion of global scientific BPEL workflows. We outline the threats to the reliability of such workflows and discuss the limited support that available specifications and mechanisms provide to achieve reliability. Furthermore, we propose a line of research to address the identified issues by investigating autonomic mechanisms that assist computational scientists in building, executing and maintaining reliable workflows.
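One of the basic reliability primitives the abstract alludes to can be sketched as retrying a service invocation with exponential backoff. This is a minimal illustration only; the function names are hypothetical and not part of BPEL or any Grid middleware:

```python
import time

def invoke_with_retry(call, max_attempts=4, base_delay=0.01):
    """Retry `call` on failure, doubling the delay between attempts.
    Illustrative sketch of a retry primitive, not a real workflow API."""
    for attempt in range(max_attempts):
        try:
            return call()
        except Exception:
            if attempt == max_attempts - 1:
                raise  # give up after the final attempt
            time.sleep(base_delay * (2 ** attempt))

# A hypothetical service that fails twice, then succeeds.
attempts = {"n": 0}
def flaky_service():
    attempts["n"] += 1
    if attempts["n"] < 3:
        raise RuntimeError("transient failure")
    return "result"

assert invoke_with_retry(flaky_service) == "result"
assert attempts["n"] == 3
```

The point of the abstract is precisely that such low-level primitives are too crude for non-expert users, motivating autonomic mechanisms that apply them automatically.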
Methods for Reliable Teleportation
Recent experimental results and proposals towards the implementation of quantum
teleportation are discussed. It is proved that reliable (theoretically, 100%
probability of success) teleportation cannot be achieved using the methods
applied in recent experiments, i.e., without the quantum systems interacting
with one another. Teleportation proposals involving atoms and electromagnetic
cavities are reviewed and the most feasible methods are described. In
particular, the language of nonlocal measurements is applied, which has also
been used to present a method for teleportation of quantum states of systems
with continuous variables.
Comment: 11 pages, 5 eps figures
Scalable Byzantine Reliable Broadcast
Byzantine reliable broadcast is a powerful primitive that allows a set of processes to agree on a message from a designated sender, even if some processes (including the sender) are Byzantine. Existing broadcast protocols for this setting scale poorly, as they typically build on quorum systems with strong intersection guarantees, which results in linear per-process communication and computation complexity.
We generalize the Byzantine reliable broadcast abstraction to the probabilistic setting, allowing each of its properties to be violated with a fixed, arbitrarily small probability. We leverage these relaxed guarantees in a protocol where we replace quorums with stochastic samples. Compared to quorums, samples are significantly smaller in size, leading to a more scalable design. We obtain the first Byzantine reliable broadcast protocol with logarithmic per-process communication and computation complexity.
We conduct a complete and thorough analysis of our protocol, deriving bounds on the probability of each of its properties being compromised. During our analysis, we introduce a novel general technique that we call adversary decorators. Adversary decorators allow us to make claims about the optimal strategy of the Byzantine adversary without imposing any additional assumptions. We also introduce Threshold Contagion, a model of message propagation through a system with Byzantine processes. To the best of our knowledge, this is the first formal analysis of a probabilistic broadcast protocol in the Byzantine fault model. We show numerically that practically negligible failure probabilities can be achieved with realistic security parameters.
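The quorum-versus-sample trade-off described above can be illustrated with a hypergeometric tail bound: how likely is a small uniform sample to contain enough Byzantine processes to be dangerous? This is a minimal sketch of the general idea, not the paper's actual analysis, and the parameter values are illustrative assumptions:

```python
import math

def p_sample_compromised(n, f, s, t):
    """Probability that a uniform sample of s processes (drawn without
    replacement) from n total, of which f are Byzantine, contains at
    least t Byzantine processes. Hypergeometric tail; an illustrative
    sketch, not the protocol's exact failure bound."""
    total = math.comb(n, s)
    return sum(math.comb(f, k) * math.comb(n - f, s - k)
               for k in range(t, min(f, s) + 1)) / total

# With n = 10,000 processes and 10% Byzantine, a sample of only 64 is
# extremely unlikely to contain a Byzantine majority, which is why
# sample sizes can stay small (logarithmic) as the system grows.
p = p_sample_compromised(10_000, 1_000, 64, 33)
assert p < 1e-8
```

A quorum system would instead require linear-size sets to guarantee intersection deterministically; the probabilistic relaxation is what buys the scalability.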
Boolean networks with reliable dynamics
We investigated the properties of Boolean networks that follow a given
reliable trajectory in state space. A reliable trajectory is defined as a
sequence of states which is independent of the order in which the nodes are
updated. We explored numerically the topology, the update functions, and the
state space structure of these networks, which we constructed using a minimum
number of links and the simplest update functions. We found that the clustering
coefficient is larger than in random networks, and that the probability
distribution of three-node motifs is similar to that found in gene regulation
networks. Among the update functions, only a subset of all possible functions
occur, and they can be classified according to their probability. More
homogeneous functions occur more often, leading to a dominance of canalyzing
functions. Finally, we studied the entire state space of the networks. We
observed that with increasing system size, fixed points become more dominant,
moving the networks closer to the frozen phase.
Comment: 11 pages, 15 figures
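The defining property of a reliable trajectory, that a transition is independent of the order in which nodes are updated, can be checked directly on a toy network. The three-node example below is hypothetical and chosen only to show one reliable and one unreliable step; it is not a network from the paper:

```python
from itertools import permutations

def sync_step(state, funcs):
    """Synchronous (parallel) update: every node reads the old state."""
    return tuple(f(state) for f in funcs)

def is_reliable_step(state, funcs):
    """A step is reliable if updating nodes one at a time, in every
    possible order, always reaches the same state as the parallel update."""
    target = sync_step(state, funcs)
    for order in permutations(range(len(funcs))):
        s = list(state)
        for i in order:
            s[i] = funcs[i](tuple(s))  # asynchronous update of node i
        if tuple(s) != target:
            return False
    return True

# Toy copy chain: node 0 holds its value, node 1 copies node 0,
# node 2 copies node 1.
funcs = [lambda s: s[0],
         lambda s: s[0],
         lambda s: s[1]]

# (1,1,0): only node 2 changes, and its input does not -> reliable.
assert is_reliable_step((1, 1, 0), funcs)
# (1,0,0): node 1 changes while node 2 reads it -> order-dependent.
assert not is_reliable_step((1, 0, 0), funcs)
```

A reliable trajectory is then simply a sequence of states in which every consecutive transition passes this check, which is what constrains the topology and update functions studied in the paper.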