On the Approximability of Digraph Ordering
Given an n-vertex digraph D = (V, A), the Max-k-Ordering problem is to compute
a labeling ℓ : V → [k] maximizing the number of forward edges, i.e.
edges (u,v) such that ℓ(u) < ℓ(v). For different values of k, this
reduces to Maximum Acyclic Subgraph (k = n) and Max-Dicut (k = 2). This work
studies the approximability of Max-k-Ordering and its generalizations,
motivated by their applications to job scheduling with soft precedence
constraints. We give an LP rounding based 2-approximation algorithm for
Max-k-Ordering for any k ∈ {2, ..., n}, improving on the known
2k/(k-1)-approximation obtained via random assignment. The tightness of this
rounding is shown by proving that for any k ∈ {2, ..., n} and constant
ε > 0, Max-k-Ordering has an LP integrality gap of 2 − ε
for n^{Ω(1/log log n)} rounds of the
Sherali-Adams hierarchy.
A further generalization of Max-k-Ordering is the restricted maximum acyclic
subgraph problem or RMAS, where each vertex v has a finite set of allowable
labels S_v ⊆ Z^+. We prove an LP rounding based
4√2/(√2+1) ≈ 2.344 approximation for it, improving on the
2√2 ≈ 2.828 approximation recently given by Grandoni et al.
(Information Processing Letters, Vol. 115(2), Pages 182-185, 2015). In fact,
our approximation algorithm also works for a general version where the
objective counts the edges which go forward by at least a positive offset
specific to each edge.
The minimization formulation of digraph ordering is DAG edge deletion or
DED(k), which requires deleting the minimum number of edges from an n-vertex
directed acyclic graph (DAG) to remove all paths of length k. We show that
both the LP relaxation and a local ratio approach for DED(k) yield a
k-approximation for any k.
Comment: 21 pages, conference version to appear in ESA 2015
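The 2k/(k-1) random-assignment baseline mentioned in the abstract follows from a one-line calculation: under i.i.d. uniform labels in [k], an edge goes forward with probability (k-1)/(2k), so a random labeling keeps that fraction of the optimum in expectation. A minimal sketch checking this (helper names are illustrative, not from the paper):

```python
import random

def forward_edges(labels, edges):
    """Count edges (u, v) that go forward, i.e. labels[u] < labels[v]."""
    return sum(1 for u, v in edges if labels[u] < labels[v])

def exact_forward_probability(k):
    """P(a < b) for two independent uniform labels a, b in {1, ..., k}:
    k*(k-1)/2 strictly increasing pairs out of k^2 total."""
    return (k - 1) / (2 * k)

# Sanity check on a small digraph: a directed 4-cycle.
edges = [(0, 1), (1, 2), (2, 3), (3, 0)]
k, trials = 3, 200_000
rng = random.Random(0)
total = 0
for _ in range(trials):
    labels = [rng.randint(1, k) for _ in range(4)]
    total += forward_edges(labels, edges)
empirical = total / (trials * len(edges))
print(round(exact_forward_probability(k), 4))  # 0.3333
print(round(empirical, 2))  # should be close to 1/3
```

Note that on a directed cycle no labeling can make every edge forward, which is why the LP-based 2-approximation (rather than the trivial optimum) is the interesting bound.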
Distributed Online Big Data Classification Using Context Information
Distributed, online data mining systems have emerged as a result of
applications requiring analysis of large amounts of correlated and
high-dimensional data produced by multiple distributed data sources. We propose
a distributed online data classification framework where data is gathered by
distributed data sources and processed by a heterogeneous set of distributed
learners which learn online, at run-time, how to classify the different data
streams either by using their locally available classification functions or by
helping each other by classifying each other's data. Importantly, since the
data is gathered at different locations, sending the data to another learner to
process incurs additional costs such as delays, and hence forwarding is only
beneficial if the gains from better classification exceed
the costs. We model the problem of joint classification by the distributed and
heterogeneous learners from multiple data sources as a distributed contextual
bandit problem where each data instance is characterized by a specific context. We
develop a distributed online learning algorithm for which we can prove
sublinear regret. Compared to prior work in distributed online data mining, our
work is the first to provide analytic regret results characterizing the
performance of the proposed algorithm.
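The local-versus-forward trade-off described above can be illustrated with a toy bandit: each learner treats "classify locally" and "forward to a peer" as arms whose reward is accuracy minus communication cost, and estimates per-context arm values online. This epsilon-greedy sketch is a hypothetical illustration, not the paper's algorithm (which comes with sublinear-regret guarantees):

```python
import random

class ContextualLearner:
    """Toy epsilon-greedy learner: per context, keep a running mean of the
    net reward (accuracy minus cost) of each arm and mostly pick the best."""
    def __init__(self, arms, epsilon=0.2, seed=0):
        self.arms = arms            # e.g. ["local", "peer"]
        self.epsilon = epsilon
        self.rng = random.Random(seed)
        self.counts = {}            # (context, arm) -> number of pulls
        self.means = {}             # (context, arm) -> mean observed reward

    def choose(self, context):
        if self.rng.random() < self.epsilon:
            return self.rng.choice(self.arms)      # explore
        return max(self.arms, key=lambda a: self.means.get((context, a), 0.0))

    def update(self, context, arm, reward):
        key = (context, arm)
        n = self.counts.get(key, 0) + 1
        self.counts[key] = n
        m = self.means.get(key, 0.0)
        self.means[key] = m + (reward - m) / n     # incremental mean

# Simulation: the peer is more accurate but forwarding costs a fixed 0.2,
# so the net values are local = 0.6 versus peer = 0.75.
accuracy = {"local": 0.6, "peer": 0.95}
cost = {"local": 0.0, "peer": 0.2}
learner = ContextualLearner(["local", "peer"], seed=1)
rng = random.Random(2)
for t in range(5000):
    ctx = "stream-A"
    arm = learner.choose(ctx)
    reward = (1.0 if rng.random() < accuracy[arm] else 0.0) - cost[arm]
    learner.update(ctx, arm, reward)
best = max(learner.arms, key=lambda a: learner.means.get(("stream-A", a), 0.0))
print(best)  # the peer should have the higher estimated net value
```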
FALKON: An Optimal Large Scale Kernel Method
Kernel methods provide a principled way to perform nonlinear, nonparametric
learning. They rely on solid functional analytic foundations and enjoy optimal
statistical properties. However, at least in their basic form, they have
limited applicability in large scale scenarios because of stringent
computational requirements in terms of time and especially memory. In this
paper, we take a substantial step in scaling up kernel methods, proposing
FALKON, a novel algorithm that can efficiently process millions of
points. FALKON is derived by combining several algorithmic principles, namely
stochastic subsampling, iterative solvers and preconditioning. Our theoretical
analysis shows that optimal statistical accuracy is achieved requiring
essentially O(n) memory and O(n√n) time. An extensive experimental
analysis on large scale datasets shows that, even with a single machine, FALKON
outperforms previous state of the art solutions, which exploit
parallel/distributed architectures.
Comment: NIPS 2017
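Of the three ingredients listed in the abstract, the subsampling step is the easiest to sketch: Nyström kernel ridge regression restricts the solution to m sampled centers, cutting memory from O(n²) to O(nm). The sketch below (illustrative names, NumPy, direct solve) shows only that ingredient; FALKON itself additionally solves the resulting linear system with a preconditioned iterative solver rather than a direct factorization:

```python
import numpy as np

def gaussian_kernel(A, B, sigma=1.0):
    """k(a, b) = exp(-||a - b||^2 / (2 sigma^2)) for row vectors of A, B."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma**2))

def nystrom_krr_fit(X, y, m=50, lam=1e-3, seed=0):
    """Nystrom kernel ridge regression on m uniformly sampled centers:
    solve (Knm^T Knm + n*lam*Kmm) alpha = Knm^T y."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=m, replace=False)]
    Knm = gaussian_kernel(X, centers)          # n x m, the only big matrix
    Kmm = gaussian_kernel(centers, centers)    # m x m
    A = Knm.T @ Knm + len(X) * lam * Kmm
    alpha = np.linalg.solve(A + 1e-10 * np.eye(m), Knm.T @ y)
    return centers, alpha

def nystrom_krr_predict(X, centers, alpha):
    return gaussian_kernel(X, centers) @ alpha

# Fit a noisy sine curve with 500 points but only 50 centers.
rng = np.random.default_rng(1)
X = rng.uniform(-3, 3, size=(500, 1))
y = np.sin(X[:, 0]) + 0.1 * rng.normal(size=500)
centers, alpha = nystrom_krr_fit(X, y)
pred = nystrom_krr_predict(X, centers, alpha)
rmse = float(np.sqrt(np.mean((pred - np.sin(X[:, 0])) ** 2)))
print(round(rmse, 3))  # small: 50 centers suffice for this smooth target
```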
Codes and Protocols for Distilling T, controlled-S, and Toffoli Gates
We present several different codes and protocols to distill T,
controlled-S, and Toffoli (or CCZ) gates. One construction is based on
codes that generalize the triorthogonal codes, allowing any of these gates to
be induced at the logical level by transversal T. We present a randomized
construction of generalized triorthogonal codes obtaining an asymptotic
distillation efficiency γ → 1. We also present a Reed-Muller
based construction of these codes which obtains a worse γ but performs
well at small sizes. Additionally, we present protocols based on checking the
stabilizers of magic states at the logical level by transversal gates
applied to codes; these protocols generalize the protocols of 1703.07847.
Several examples, including a Reed-Muller code for T-to-Toffoli distillation,
punctured Reed-Muller codes for T-gate distillation, and some of the check
based protocols, require a lower ratio of input gates to output gates than
other known protocols at the given order of error correction for the given code
size. In particular, we find a T-gate to Toffoli gate code, as well as
triorthogonal codes, with very low prefactors in front of
the leading order error terms in those codes.
Comment: 28 pages. (v2) fixed a part of the proof on random triorthogonal
codes, added comments on Clifford circuits for Reed-Muller states (v3) minor
changes
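Triorthogonality (in the sense of Bravyi and Haah) is a purely binary-matrix condition, so it can be checked mechanically: every pair of rows of the generating matrix must have even overlap, and every triple of rows must have even triple overlap. A small checker (the condition is standard; the matrices below are toy examples, not codes from the paper):

```python
from itertools import combinations

def is_triorthogonal(G):
    """Check the pair and triple conditions of triorthogonality:
    sum_j G[a][j]*G[b][j]        = 0 (mod 2) for all a < b, and
    sum_j G[a][j]*G[b][j]*G[c][j] = 0 (mod 2) for all a < b < c.
    (The full definition additionally separates odd-weight rows, which
    support the logical qubits, from even-weight stabilizer rows.)"""
    rows = range(len(G))
    for a, b in combinations(rows, 2):
        if sum(x * y for x, y in zip(G[a], G[b])) % 2:
            return False
    for a, b, c in combinations(rows, 3):
        if sum(x * y * z for x, y, z in zip(G[a], G[b], G[c])) % 2:
            return False
    return True

# Rows with disjoint supports trivially satisfy both conditions ...
print(is_triorthogonal([[1, 1, 0, 0],
                        [0, 0, 1, 1]]))   # True
# ... while an odd pairwise overlap (here, one shared column) fails.
print(is_triorthogonal([[1, 1, 1, 0],
                        [0, 1, 0, 1]]))   # False
```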