An improved 1D area law for frustration-free systems
We present a new proof for the 1D area law for frustration-free systems with
a constant gap, which exponentially improves the entropy bound in Hastings' 1D
area law, and which is tight to within a polynomial factor. For particles of
dimension $d$, spectral gap $\epsilon$, and interaction strength of at most
$J$, our entropy bound is $S_{1D} \le O(1)\, X^3 \log^8 X$, where
$X := (J \log d)/\epsilon$. Our proof is completely combinatorial, combining
the detectability lemma with basic tools from approximation theory.
Incorporating locality into the proof when applied to the 2D case gives an
entanglement bound that is at the cusp of being non-trivial in the sense that
any further improvement would yield a sub-volume law.

Comment: 15 pages, 6 figures. Some small style corrections and updated references.
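For a rough sense of scale, the entropy bound above can be evaluated numerically. The sketch below assumes natural logarithms and takes the unspecified $O(1)$ constant to be 1 (a hypothetical choice); the function name is illustrative.

```python
import math

def entropy_bound(d, J, eps, c=1.0):
    """Evaluate S_{1D} <= c * X^3 * log^8(X) with X = (J * log d) / eps.

    `c` stands in for the paper's unspecified O(1) constant
    (hypothetical value, chosen here for illustration only).
    """
    X = (J * math.log(d)) / eps
    return c * X**3 * math.log(X)**8

# Example: qubits (d = 2), unit interaction strength, spectral gap 0.1.
S = entropy_bound(d=2, J=1.0, eps=0.1)
```

Note how the bound depends only on the combination $X = (J\log d)/\epsilon$, so shrinking the gap and strengthening the interactions degrade the bound in exactly the same way.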
Toric Border Bases
We extend the theory and the algorithms of Border Bases to systems of Laurent
polynomial equations, defining "toric" roots. Instead of introducing new
variables and new relations to saturate by the variable inverses, we propose a
more efficient approach which works directly with the variables and their
inverses. We show that the commutation relations and the inversion relations
characterize toric border bases. We explicitly describe the first syzygy module
associated to a toric border basis in terms of these relations. Finally, a new
border basis algorithm for Laurent polynomials is described and a proof of its
termination is given for zero-dimensional toric ideals.
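For intuition, the classical workaround that the abstract contrasts with, clearing variable inverses by multiplying through and then discarding the spurious roots at zero this introduces, can be sketched for a single univariate Laurent equation. This is an illustrative toy, not the border-basis algorithm itself; all names are made up.

```python
import math

# The Laurent equation x + x^{-1} - 5/2 = 0 has "toric" roots (x != 0).
# Multiplying through by x clears the inverse and yields the ordinary
# quadratic x^2 - (5/2) x + 1 = 0; any root at x = 0 introduced by the
# clearing step must then be discarded.

def toric_roots_of_quadratic(a, b, c):
    """Real roots of a*x^2 + b*x + c = 0, keeping only roots with x != 0."""
    disc = b * b - 4 * a * c
    if disc < 0:
        return []
    r = math.sqrt(disc)
    roots = [(-b + r) / (2 * a), (-b - r) / (2 * a)]
    return [x for x in roots if x != 0]

roots = sorted(toric_roots_of_quadratic(1.0, -2.5, 1.0))  # [0.5, 2.0]
```

The border-basis approach avoids this clearing-and-saturating detour in the multivariate case by working with the variables and their inverses directly, via the commutation and inversion relations described above.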
DeepSecure: Scalable Provably-Secure Deep Learning
This paper proposes DeepSecure, a novel framework that enables scalable
execution of the state-of-the-art Deep Learning (DL) models in a
privacy-preserving setting. DeepSecure targets scenarios in which neither of
the involved parties, the cloud servers that hold the DL model parameters nor
the delegating clients who own the data, is willing to reveal its
information. Our framework is the first to empower accurate and scalable DL
analysis of data generated by distributed clients without sacrificing
security for efficiency. The secure DL computation in DeepSecure is
performed using Yao's Garbled Circuit (GC) protocol. We devise GC-optimized
realization of various components used in DL. Our optimized implementation
achieves more than 58-fold higher throughput per sample compared with the
best-known prior solution. In addition to our optimized GC realization, we
introduce a set of novel low-overhead pre-processing techniques which further
reduce the GC overall runtime in the context of deep learning. Extensive
evaluations of various DL applications demonstrate up to two
orders-of-magnitude additional runtime improvement achieved as a result of our
pre-processing methodology. This paper also provides mechanisms to securely
delegate GC computations to a third party in constrained embedded settings.
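In Yao's protocol, each Boolean gate is garbled by encrypting its output-wire labels under the corresponding pairs of input-wire labels, so an evaluator holding one label per input wire learns exactly one output label and nothing else. A minimal single-gate sketch, using hash-based encryption with a redundancy tag in place of a real KDF; this is a toy under stated assumptions, not DeepSecure's GC-optimized implementation:

```python
import hashlib
import secrets

LBL = 16           # wire-label length in bytes (toy parameter)
TAG = b'\x00' * 8  # redundancy so the evaluator can spot the right row

def H(ka, kb):
    """Hash two wire labels into a (LBL + 8)-byte one-time pad."""
    return hashlib.sha256(ka + kb).digest()[:LBL + len(TAG)]

def xor(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

def garble_and():
    """Garble a single AND gate: random labels per wire, 4 shuffled rows."""
    wires = {w: (secrets.token_bytes(LBL), secrets.token_bytes(LBL))
             for w in ('a', 'b', 'out')}
    rows = []
    for va in (0, 1):
        for vb in (0, 1):
            out_lbl = wires['out'][va & vb]
            rows.append(xor(H(wires['a'][va], wires['b'][vb]), out_lbl + TAG))
    secrets.SystemRandom().shuffle(rows)
    return wires, rows

def evaluate(rows, ka, kb):
    """Trial-decrypt each row; the redundancy tag marks the correct one."""
    for row in rows:
        plain = xor(H(ka, kb), row)
        if plain[LBL:] == TAG:
            return plain[:LBL]
    raise ValueError('no row decrypted')

wires, rows = garble_and()
# Evaluator holds the labels for a=1, b=1 and recovers only the output label.
out = evaluate(rows, wires['a'][1], wires['b'][1])
assert out == wires['out'][1]
```

Real GC implementations replace the trial decryption with point-and-permute and apply optimizations such as free-XOR and row reduction; the per-gate encrypt/decrypt structure shown here is what DeepSecure's GC-optimized DL components execute at scale.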