Eigenvector Synchronization, Graph Rigidity and the Molecule Problem
The graph realization problem has received a great deal of attention in
recent years, due to its importance in applications such as wireless sensor
networks and structural biology. In this paper, we extend previous work and
propose the 3D-ASAP algorithm for the graph realization problem in R^3,
given a sparse and noisy set of distance measurements. 3D-ASAP
is a divide and conquer, non-incremental and non-iterative algorithm, which
integrates local distance information into a global structure determination.
Our approach starts with identifying, for every node, a subgraph of its 1-hop
neighborhood graph, which can be accurately embedded in its own coordinate
system. In the noise-free case, the computed coordinates of the sensors in each
patch must agree with their global positioning up to some unknown rigid motion,
that is, up to translation, rotation and possibly reflection. In other words,
to every patch there corresponds an element of the Euclidean group Euc(3) of
rigid transformations in R^3, and the goal is to estimate the group
elements that will properly align all the patches in a globally consistent way.
Furthermore, 3D-ASAP successfully incorporates information specific to the
molecule problem in structural biology, in particular information on known
substructures and their orientation. In addition, we also propose 3D-SP-ASAP, a
faster version of 3D-ASAP, which uses a spectral partitioning algorithm as a
preprocessing step for dividing the initial graph into smaller subgraphs. Our
extensive numerical simulations show that 3D-ASAP and 3D-SP-ASAP are very
robust to high levels of noise in the measured distances and to sparse
connectivity in the measurement graph, and compare favorably to similar
state-of-the-art localization algorithms.

Comment: 49 pages, 8 figures
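The synchronization idea at the heart of 3D-ASAP can be illustrated in miniature over reflections alone: if each patch carries an unknown reflection and we observe noisy relative reflections between some pairs of patches, the top eigenvector of the measurement matrix recovers the global assignment up to one overall sign. A minimal sketch (the toy instance, noise level, and all names are illustrative, not taken from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy instance: each of n patches has an unknown reflection
# z_i in {+1, -1}; for some patch pairs we observe the product z_i * z_j,
# occasionally corrupted by a sign flip.
n = 40
z_true = rng.choice([-1, 1], size=n)

H = np.zeros((n, n))
for i in range(n):
    for j in range(i + 1, n):
        if rng.random() < 0.5:                       # sparse measurement graph
            noise = 1 if rng.random() < 0.9 else -1  # ~10% of signs flipped
            H[i, j] = H[j, i] = z_true[i] * z_true[j] * noise

# Eigenvector synchronization: the top eigenvector of the symmetric
# measurement matrix approximates the true reflections up to a global sign.
eigvals, eigvecs = np.linalg.eigh(H)
z_hat = np.sign(eigvecs[:, -1])

# Resolve the global sign ambiguity and measure agreement with the truth.
agreement = max(np.mean(z_hat == z_true), np.mean(-z_hat == z_true))
```

The same recipe extends from the two-element reflection group to rotations and, in 3D-ASAP, to aligning all patches over the full group of rigid motions.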
Magic-State Functional Units: Mapping and Scheduling Multi-Level Distillation Circuits for Fault-Tolerant Quantum Architectures
Quantum computers have recently made great strides and are on a long-term
path towards useful fault-tolerant computation. A dominant overhead in
fault-tolerant quantum computation is the production of high-fidelity encoded
qubits, called magic states, which enable reliable error-corrected computation.
We present the first detailed designs of hardware functional units that
implement space-time optimized magic-state factories for surface code
error-corrected machines. Interactions among distant qubits require surface
code braids (physical pathways on chip) which must be routed. Magic-state
factories are circuits comprised of a complex set of braids that is more
difficult to route than quantum circuits considered in previous work [1]. This
paper explores the impact of scheduling techniques, such as gate reordering and
qubit renaming, and we propose two novel mapping techniques: braid repulsion
and dipole moment braid rotation. We combine these techniques with graph
partitioning and community detection algorithms, and further introduce a
stitching algorithm for mapping subgraphs onto a physical machine. Our results
show a factor of 5.64 reduction in space-time volume compared to the best-known
previous designs for magic-state factories.Comment: 13 pages, 10 figure
Approximate Computation and Implicit Regularization for Very Large-scale Data Analysis
Database theory and database practice are typically the domain of computer
scientists who adopt what may be termed an algorithmic perspective on their
data. This perspective is very different from the more statistical perspective
adopted by statisticians, scientific computers, machine learners, and others who
work on what may be broadly termed statistical data analysis. In this article,
I will address fundamental aspects of this algorithmic-statistical disconnect,
with an eye to bridging the gap between these two very different approaches. A
concept that lies at the heart of this disconnect is that of statistical
regularization, a notion that has to do with how robust the output of an
algorithm is to the noise properties of the input data. Although it is nearly
completely absent from computer science, which historically has taken the input
data as given and modeled algorithms discretely, regularization in one form or
another is central to nearly every application domain that applies algorithms
to noisy data. By using several case studies, I will illustrate, both
theoretically and empirically, the nonobvious fact that approximate
computation, in and of itself, can implicitly lead to statistical
regularization. This and other recent work suggests that, by exploiting in a
more principled way the statistical properties implicit in worst-case
algorithms, one can in many cases satisfy the bicriteria of having algorithms
that are scalable to very large-scale databases and that also have good
inferential or predictive properties.

Comment: To appear in the Proceedings of the 2012 ACM Symposium on Principles
of Database Systems (PODS 2012)
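The phenomenon the article describes can be seen in a small generic example (this is a standard illustration of implicit regularization via early stopping, not a case study from the article): truncating gradient descent on a least-squares problem after a few iterations shrinks the solution toward zero, much as an explicit ridge penalty would, and the iterates approach the exact, noisier solution only as the computation becomes more exact.

```python
import numpy as np

rng = np.random.default_rng(1)

# Noisy least-squares problem: b = A x* + noise.
n, d = 100, 20
A = rng.normal(size=(n, d))
x_star = rng.normal(size=d)
b = A @ x_star + 0.5 * rng.normal(size=n)

# Exact (unregularized) least-squares solution.
x_exact = np.linalg.lstsq(A, b, rcond=None)[0]

# Plain gradient descent started from zero; stopping early is the only
# "regularization" applied.
step = 1.0 / np.linalg.norm(A, 2) ** 2
x = np.zeros(d)
norms = []
for t in range(50):
    x = x - step * A.T @ (A @ x - b)
    norms.append(np.linalg.norm(x))

# The iterate's norm grows monotonically with the iteration count and is
# bounded by the exact solution's norm: approximate computation behaves
# like a shrinkage estimator.
```

Here the approximation parameter (the iteration count) plays the role of a regularization parameter, which is the kind of algorithmic-statistical connection the article develops.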
Optimal construction of k-nearest neighbor graphs for identifying noisy clusters
We study clustering algorithms based on neighborhood graphs on a random
sample of data points. The question we ask is how such a graph should be
constructed in order to obtain optimal clustering results. Which type of
neighborhood graph should one choose, mutual k-nearest neighbor or symmetric
k-nearest neighbor? What is the optimal parameter k? In our setting, clusters
are defined as connected components of the t-level set of the underlying
probability distribution. Clusters are said to be identified in the
neighborhood graph if connected components in the graph correspond to the true
underlying clusters. Using techniques from random geometric graph theory, we
prove bounds on the probability that clusters are identified successfully, both
in a noise-free and in a noisy setting. Those bounds lead to several
conclusions. First, k has to be chosen surprisingly high (of the order n rather
than of the order log n) to maximize the probability of cluster identification.
Secondly, the major difference between the mutual and the symmetric k-nearest
neighbor graph occurs when one attempts to detect the most significant cluster
only.

Comment: 31 pages, 2 figures
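The two constructions being compared can be sketched as follows (toy data and function names are ours): the symmetric k-NN graph joins i and j if either point lists the other among its k nearest neighbors, while the mutual k-NN graph requires that both do. Since the mutual graph's edges are a subset of the symmetric graph's, it can only have at least as many connected components.

```python
import numpy as np

rng = np.random.default_rng(2)

def knn_graphs(points, k):
    """Build the symmetric and mutual k-NN graphs as boolean adjacency matrices."""
    n = len(points)
    d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=2)
    np.fill_diagonal(d, np.inf)
    nn = np.argsort(d, axis=1)[:, :k]           # each point's k nearest neighbors
    directed = np.zeros((n, n), dtype=bool)
    directed[np.arange(n)[:, None], nn] = True
    symmetric = directed | directed.T           # edge if i->j OR j->i
    mutual = directed & directed.T              # edge if i->j AND j->i
    return symmetric, mutual

def n_components(adj):
    """Count connected components by depth-first search."""
    n = len(adj)
    seen, count = np.zeros(n, dtype=bool), 0
    for s in range(n):
        if not seen[s]:
            count += 1
            stack = [s]
            while stack:
                v = stack.pop()
                if not seen[v]:
                    seen[v] = True
                    stack.extend(np.flatnonzero(adj[v] & ~seen))
    return count

# Two well-separated Gaussian clusters; identification here means each
# cluster shows up as its own connected component.
pts = np.vstack([rng.normal(0, 0.3, (50, 2)),
                 rng.normal(5, 0.3, (50, 2))])
sym, mut = knn_graphs(pts, k=8)
```

On such well-separated data both graphs identify the clusters; the paper's analysis concerns how large k must be, and how the two constructions diverge, in harder noisy regimes.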
Spectral Thresholds in the Bipartite Stochastic Block Model
We consider a bipartite stochastic block model on a pair of vertex sets,
with planted partitions in each, and ask at what densities efficient
algorithms can recover the partition of the smaller vertex set.
When one vertex set is much larger than the other, multiple thresholds
emerge. We first locate a sharp
threshold for detection of the partition, in the sense of the results of
\cite{mossel2012stochastic,mossel2013proof} and \cite{massoulie2014community}
for the stochastic block model. We then show that at a higher edge density, the
singular vectors of the rectangular biadjacency matrix exhibit a localization /
delocalization phase transition, giving recovery above the threshold and no
recovery below. Nevertheless, we propose a simple spectral algorithm, Diagonal
Deletion SVD, which recovers the partition at a nearly optimal edge density.
The bipartite stochastic block model studied here was used by
\cite{feldman2014algorithm} to give a unified algorithm for recovering planted
partitions and assignments in random hypergraphs and random k-SAT formulae
respectively. Our results give the best known bounds for the clause density at
which solutions can be found efficiently in these models as well as showing a
barrier to further improvement via this reduction to the bipartite block model.

Comment: updated version, will appear in COLT 2016
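A schematic of the diagonal-deletion idea as we read it from the abstract (the authors' exact algorithm and its guarantees are in the paper; the sizes and edge densities below are made up): in the Gram matrix B B^T of the rectangular biadjacency matrix, the diagonal entries are noise-dominated degree terms, so deleting the diagonal before the spectral step lets an eigenvector expose the planted partition of the smaller side.

```python
import numpy as np

rng = np.random.default_rng(3)

# Planted bipartite block model: side 1 is much smaller than side 2,
# and edge probability depends on whether the endpoints' blocks agree.
n1, n2 = 60, 600
sigma = np.repeat([1, -1], n1 // 2)        # planted partition of the small side
tau = np.repeat([1, -1], n2 // 2)          # planted partition of the large side
p, q = 0.12, 0.04                          # within- / across-block densities
probs = np.where(np.outer(sigma, tau) > 0, p, q)
B = (rng.random((n1, n2)) < probs).astype(float)

# Gram matrix of the biadjacency matrix, with its diagonal deleted.
M = B @ B.T
np.fill_diagonal(M, 0.0)

# The top eigenvector tracks the all-ones direction (degrees); the second
# eigenvector's signs split the two blocks of the small side.
eigvals, eigvecs = np.linalg.eigh(M)
labels = np.sign(eigvecs[:, -2])

# Accuracy up to the inherent global sign ambiguity.
acc = max(np.mean(labels == sigma), np.mean(labels == -sigma))
```

The interesting regime in the paper is sparser than this sketch, where the vanilla SVD localizes on high-degree rows and fails while the diagonal-deleted version still succeeds.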