Normal Factor Graphs and Holographic Transformations
This paper stands at the intersection of two distinct lines of research. One
line is "holographic algorithms," a powerful approach introduced by Valiant for
solving various counting problems in computer science; the other is "normal
factor graphs," an elegant framework proposed by Forney for representing codes
defined on graphs. We introduce the notion of holographic transformations for
normal factor graphs, and establish a very general theorem, called the
generalized Holant theorem, which relates a normal factor graph to its
holographic transformation. We show that the generalized Holant theorem on the
one hand underlies the principle of holographic algorithms, and on the other
hand reduces to a general duality theorem for normal factor graphs, a special
case of which was first proved by Forney. In the course of our development, we
formalize a new semantics for normal factor graphs, which highlights various
linear algebraic properties that potentially enable the use of normal factor
graphs as a linear algebraic tool.
Comment: To appear in IEEE Trans. Inform. Theory
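The invariance underlying the generalized Holant theorem can be sketched as follows (a schematic paraphrase, not the paper's exact statement; the symbols Z, F, and T are chosen here for illustration):

```latex
% Exterior (partition) function of a normal factor graph \mathcal{G}:
% a sum over internal edge variables of the product of local functions.
Z(\mathcal{G}) \;=\; \sum_{x}\ \prod_{f \in \mathcal{F}} f\bigl(x_{\partial f}\bigr)

% Inserting T\,T^{-1} = I on every internal edge, then absorbing T into
% one endpoint function and T^{-1} into the other, yields a transformed
% normal factor graph \widehat{\mathcal{G}} with the same exterior function:
Z\bigl(\widehat{\mathcal{G}}\bigr) \;=\; Z(\mathcal{G})
```

Per the abstract, the choice of transformation governs which instance arises: suitable choices of T recover the holographic reductions of Valiant's framework, while Fourier-type choices lead to the duality theorem for normal factor graphs.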
Bipartite all-versus-nothing proofs of Bell's theorem with single-qubit measurements
If we distribute n qubits between two parties, which quantum pure states and
distributions of qubits would allow all-versus-nothing (or
Greenberger-Horne-Zeilinger-like) proofs of Bell's theorem using only
single-qubit measurements? We show a necessary and sufficient condition for the
existence of these proofs for any number of qubits, and provide all distinct
proofs up to n=7 qubits. Remarkably, there is only one distribution of a state
of n=4 qubits, and six distributions, each for a different state of n=6 qubits,
which allow these proofs.
Comment: REVTeX4, 4 pages, 2 figures
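The flavor of an all-versus-nothing argument can be checked numerically in the simplest case, the three-qubit GHZ state, using only single-qubit Pauli measurements (a standard textbook example, not one of the paper's new n = 4..7 distributions; the names below are ours):

```python
import numpy as np

# Single-qubit Pauli matrices
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)

def kron_all(ops):
    # Tensor product of a list of single-qubit operators
    out = np.array([[1.0 + 0j]])
    for op in ops:
        out = np.kron(out, op)
    return out

# 3-qubit GHZ state (|000> + |111>) / sqrt(2)
ghz = np.zeros(8, dtype=complex)
ghz[0] = ghz[7] = 1 / np.sqrt(2)

def expval(ops):
    # Expectation value of a tensor product of single-qubit observables
    return float(np.real(ghz.conj() @ kron_all(ops) @ ghz))

# GHZ is a +1 eigenstate of XXX and a -1 eigenstate of XYY, YXY, YYX
e_xxx = expval([X, X, X])
e_xyy = expval([X, Y, Y])
e_yxy = expval([Y, X, Y])
e_yyx = expval([Y, Y, X])
```

A local-hidden-variable assignment of fixed ±1 outcomes to each X_i and Y_i would force the product of these four correlations to be +1 (every outcome appears squared), whereas quantum mechanics gives (+1)(−1)(−1)(−1) = −1: agreement "all" versus "nothing".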
Partitioning the vertex set of G to make G □ H an efficient open domination graph
A graph is an efficient open domination graph if there exists a subset of
vertices whose open neighborhoods partition its vertex set. We characterize
those graphs G for which the Cartesian product G □ H is an efficient
open domination graph when H is a complete graph of order at least 3 or a
complete bipartite graph. The characterization is based on the existence of a
certain type of weak partition of V(G). For the class of trees, when H is
complete of order at least 3, the characterization is constructive. In
addition, a special type of efficient open domination graph is characterized
among Cartesian products G □ H when H is a 5-cycle or a 4-cycle.
Comment: 16 pages, 2 figures
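For small graphs, the defining condition — a set of vertices whose open neighborhoods partition the vertex set — can be checked by brute force. The sketch below (function names are ours, not from the paper) verifies that the 4-cycle admits such a set:

```python
from itertools import combinations

def is_efficient_open_dominating(adj, d):
    # The open neighborhoods N(v) for v in d must partition the vertex
    # set: every vertex of the graph is covered exactly once.
    covered = []
    for v in d:
        covered.extend(adj[v])          # N(v) = neighbors of v, v excluded
    return len(covered) == len(adj) and set(covered) == set(adj)

def find_eod_set(adj):
    # Brute-force search over all vertex subsets, smallest first.
    vertices = list(adj)
    for r in range(1, len(vertices) + 1):
        for d in combinations(vertices, r):
            if is_efficient_open_dominating(adj, d):
                return d
    return None

# C4: the 4-cycle 0-1-2-3-0
c4 = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [0, 2]}
d = find_eod_set(c4)   # (0, 1): N(0) = {1,3} and N(1) = {0,2} partition V
```

Here N(0) and N(1) are disjoint and jointly cover all four vertices, so C4 is an efficient open domination graph.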
Online Tensor Methods for Learning Latent Variable Models
We introduce an online tensor-decomposition-based approach to two latent
variable modeling problems: (1) community detection, in which we learn the
latent communities that social actors in a social network belong to, and
(2) topic modeling, in which we infer the hidden topics of text articles. We
decompose moment tensors using stochastic gradient descent (SGD), optimizing
the multilinear operations within SGD so as to avoid forming the tensors
directly, which saves both computation and storage. We present optimized
implementations for two platforms. Our GPU-based implementation exploits the
parallelism of SIMD architectures to allow for maximum speed-up through careful
optimization of storage and data transfer, whereas our CPU-based implementation
uses efficient sparse matrix computations and is suitable for large sparse
datasets. For the community detection problem, we demonstrate accuracy and
computational efficiency on Facebook, Yelp, and DBLP datasets; for the topic
modeling problem, we likewise demonstrate good performance on the New York
Times dataset. We compare our results to state-of-the-art algorithms such as
the variational method, and report a gain in accuracy as well as a speed-up of
several orders of magnitude in execution time.
Comment: JMLR 201
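The key computational idea — evaluating multilinear operations on a moment tensor without ever materializing it — can be illustrated in a few lines (a generic sketch of the trick, not the paper's implementation):

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 1000, 5
Xs = rng.normal(size=(n, d))        # n samples in R^d

a = rng.normal(size=d)
b = rng.normal(size=d)
c = rng.normal(size=d)

# The empirical third-moment tensor is T = (1/n) sum_i x_i (x) x_i (x) x_i.
# Its contraction T(a, b, c) can be computed WITHOUT forming the d^3 tensor,
# using only O(n d) work per contraction:
t_implicit = np.mean((Xs @ a) * (Xs @ b) * (Xs @ c))

# Reference computation: form T explicitly (O(d^3) memory) and contract.
T = np.einsum('ni,nj,nk->ijk', Xs, Xs, Xs) / n
t_explicit = np.einsum('ijk,i,j,k->', T, a, b, c)
```

Both routes agree because T(a, b, c) = (1/n) Σ_i (x_i·a)(x_i·b)(x_i·c); SGD updates built from such contractions never need the full tensor in memory.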
Multi-path Summation for Decoding 2D Topological Codes
Fault tolerance is a prerequisite for scalable quantum computing.
Architectures based on 2D topological codes are effective for near-term
implementations of fault tolerance. To obtain high performance with these
architectures, we require a decoder which can adapt to the wide variety of
error models present in experiments. The typical approach to decoding the
surface code is to reduce it to minimum-weight perfect matching; this yields a
suboptimal threshold error rate and is specialized to correcting a specific
error model. Recently, optimal threshold error rates for a
variety of error models have been obtained by methods which do not use
minimum-weight perfect matching, showing that such thresholds can be achieved
in polynomial time. It is an open question whether these results can also be
achieved by minimum-weight perfect matching. In this work, we use belief
propagation and a novel algorithm for producing edge weights to increase the
utility of minimum-weight perfect matching for decoding surface codes. This
allows us to correct depolarizing errors using the rotated surface code,
obtaining a threshold of . This is larger than the threshold
achieved by previous matching-based decoders (), though
still below the known upper bound of .
Comment: 19 pages, 13 figures, published in Quantum, available at https://quantum-journal.org/papers/q-2018-10-19-102
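The reduction to matching is easiest to see on a 1D toy: for a distance-d repetition code, syndrome defects mark the endpoints of error chains, and a minimum-weight pairing of defects (with each other or with a boundary) yields a minimum-weight correction. The sketch below is a brute-force stand-in for MWPM on this toy code — not the paper's belief-propagation-reweighted surface-code decoder:

```python
def mwpm_repetition(d, syndrome):
    """Toy minimum-weight matching decoder for a distance-d repetition code.

    syndrome[i] = 1 flags a defect between qubits i and i+1.
    Returns the set of qubit indices to flip.
    """
    defects = tuple(i for i, s in enumerate(syndrome) if s)

    def best(remaining):
        # Brute-force over all ways to pair the remaining defects with each
        # other or with a boundary; returns (cost, qubits to flip).
        if not remaining:
            return 0, frozenset()
        first, rest = remaining[0], remaining[1:]
        # Option A: match `first` to the nearer boundary.
        left_cost = first + 1                 # flip qubits 0 .. first
        right_cost = d - 1 - first            # flip qubits first+1 .. d-1
        if left_cost <= right_cost:
            cost, flips = left_cost, frozenset(range(first + 1))
        else:
            cost, flips = right_cost, frozenset(range(first + 1, d))
        sub_cost, sub_flips = best(rest)
        options = [(cost + sub_cost, flips | sub_flips)]
        # Option B: match `first` to a later defect j (flip qubits between).
        for k, j in enumerate(rest):
            sub_cost, sub_flips = best(rest[:k] + rest[k + 1:])
            options.append((j - first + sub_cost,
                            frozenset(range(first + 1, j + 1)) | sub_flips))
        return min(options, key=lambda t: t[0])

    return best(defects)[1]

# A bit flip on the middle qubit of a distance-5 code lights the two
# adjacent syndrome bits; pairing those defects recovers the error.
correction = mwpm_repetition(5, [0, 1, 1, 0])
```

The full surface-code problem has the same structure on a 2D lattice, where the edge weights fed to the matcher encode the error model — which is exactly the part the paper's belief-propagation preprocessing improves.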
Tema Con Variazioni: Quantum Channel Capacity
Channel capacity describes the size of the nearly ideal channels that can be
obtained from many uses of a given channel together with an optimal
error-correcting code. In this paper we collect and compare minor and major
variations in the mathematically precise statements of this idea which have
been put forward in the literature. We show that all the variations considered
lead to equivalent capacity definitions. In particular, it makes no difference
whether one requires mean or maximal errors to go to zero, and it makes no
difference whether errors are required to vanish for any sequence of block
sizes compatible with the rate, or only for one infinite sequence.
Comment: 32 pages, uses iopart.cls
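One common formalization (our paraphrase, with illustrative notation; the paper compares several variants of it) defines the capacity as the supremum of achievable rates:

```latex
% A rate R \ge 0 is achievable for a channel T if there exist encodings
% E_n and decodings D_n acting on blocks of n channel uses, carrying
% systems of dimension at least 2^{nR}, whose error vanishes:
\lim_{n \to \infty}\, \bigl\| D_n \circ T^{\otimes n} \circ E_n - \mathrm{id}_n \bigr\| \;=\; 0 .

% The capacity is then
C(T) \;=\; \sup \{\, R \;:\; R \text{ is achievable} \,\} .
```

The paper's equivalence results then say that swapping the error criterion (mean versus maximal) or the admissible block-size sequences (all versus a single infinite one) leaves C(T) unchanged.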
How to correct small quantum errors
The theory of quantum error correction is a cornerstone of quantum
information processing. It shows that quantum data can be protected against
decoherence effects, which otherwise would render many of the new quantum
applications practically impossible. In this paper we give a self-contained
introduction to this theory and to the closely related concept of quantum
channel capacities. We show, in particular, that it is possible (using
appropriate error correcting schemes) to send a non-vanishing amount of quantum
data undisturbed (in a certain asymptotic sense) through a noisy quantum
channel T, provided the errors produced by T are small enough.
Comment: LaTeX2e, 23 pages, 6 figures (3 eps, 3 pstricks)
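The simplest instance of the mechanism — quantum data surviving a small error — is the three-qubit bit-flip code, sketched here as a direct state-vector simulation (illustrative only; the paper treats general noisy channels T and asymptotic rates):

```python
import numpy as np

# 3-qubit bit-flip code: |0> -> |000>, |1> -> |111>.
# It corrects any single X (bit-flip) error.
zero = np.array([1, 0], dtype=complex)
one = np.array([0, 1], dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
I2 = np.eye(2, dtype=complex)
Z = np.diag([1, -1]).astype(complex)

def kron_all(factors):
    out = np.array([1.0 + 0j])
    for f in factors:
        out = np.kron(out, f)
    return out

def encode(alpha, beta):
    return alpha * kron_all([zero] * 3) + beta * kron_all([one] * 3)

def apply_x(state, qubit):
    ops = [I2] * 3
    ops[qubit] = X
    return kron_all(ops).reshape(8, 8) @ state

def syndrome(state):
    # Stabilizers Z0Z1 and Z1Z2: on a codeword hit by at most one X error,
    # the state is a +/-1 eigenstate, so the expectation value reads out
    # the syndrome directly.
    z0z1 = np.kron(np.kron(Z, Z), I2)
    z1z2 = np.kron(I2, np.kron(Z, Z))
    return (np.real(state.conj() @ z0z1 @ state) < 0,
            np.real(state.conj() @ z1z2 @ state) < 0)

def correct(state):
    # Syndrome table: (Z0Z1 flipped, Z1Z2 flipped) -> error location.
    table = {(True, False): 0, (True, True): 1, (False, True): 2}
    s = syndrome(state)
    return apply_x(state, table[s]) if s in table else state

logical = encode(0.6, 0.8)
recovered = correct(apply_x(logical, 1))   # flip qubit 1, then correct
```

The recovered state matches the encoded one exactly, because a single bit flip moves the codeword to an orthogonal subspace that the syndrome identifies without disturbing the logical amplitudes.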