Optimal Hashing-based Time-Space Trade-offs for Approximate Near Neighbors
We show tight upper and lower bounds for time-space trade-offs for the
$c$-Approximate Near Neighbor Search problem. For the $d$-dimensional Euclidean
space and $n$-point datasets, we develop a data structure with space
$n^{1+\rho_u+o(1)}$ and query time $n^{\rho_q+o(1)}$ for
every $\rho_u, \rho_q \ge 0$ such that: \begin{equation} c^2 \sqrt{\rho_q} +
(c^2 - 1) \sqrt{\rho_u} = \sqrt{2c^2 - 1}. \end{equation}
This is the first data structure that achieves sublinear query time and
near-linear space for every approximation factor $c > 1$, improving upon
[Kapralov, PODS 2015]. The data structure is a culmination of a long line of
work on the problem for all space regimes; it builds on Spherical
Locality-Sensitive Filtering [Becker, Ducas, Gama, Laarhoven, SODA 2016] and
data-dependent hashing [Andoni, Indyk, Nguyen, Razenshteyn, SODA 2014] [Andoni,
Razenshteyn, STOC 2015].
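As a quick numeric illustration (not from the paper itself), the trade-off can be solved for the query exponent $\rho_q$ given the approximation factor $c$ and the space exponent $\rho_u$; a minimal Python sketch, using the symbols as defined above:

```python
import math

def query_exponent(c, rho_u):
    """Query-time exponent rho_q for space n^(1+rho_u), from
    c^2 sqrt(rho_q) + (c^2 - 1) sqrt(rho_u) = sqrt(2c^2 - 1)."""
    lhs = math.sqrt(2 * c * c - 1) - (c * c - 1) * math.sqrt(rho_u)
    if lhs < 0:
        return 0.0  # space is already large enough for constant query exponent
    return (lhs / (c * c)) ** 2

# Near-linear space (rho_u = 0): rho_q = (2c^2 - 1) / c^4, e.g. 7/16 for c = 2.
print(query_exponent(2.0, 0.0))  # 0.4375
```

Setting $\rho_q = \rho_u$ recovers the balanced exponent $1/(2c^2-1)$ (e.g. $1/7$ for $c=2$), matching the known data-dependent hashing bound.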
Our matching lower bounds are of two types: conditional and unconditional.
First, we prove tightness of the whole above trade-off in a restricted model of
computation, which captures all known hashing-based approaches. We then show
unconditional cell-probe lower bounds for one and two probes that match the
above trade-off for $\rho_q = 0$, improving upon the best known lower bounds
from [Panigrahy, Talwar, Wieder, FOCS 2010]. In particular, this is the first
space lower bound (for any static data structure) for two probes which is not
polynomially smaller than the one-probe bound. To show the result for two
probes, we establish and exploit a connection to locally-decodable codes.
Comment: 62 pages, 5 figures; a merger of arXiv:1511.07527 [cs.DS] and
arXiv:1605.02701 [cs.DS], which subsumes both of the preprints. New version
contains more elaborated proofs and fixes some typos.
Lower Bounds on Time-Space Trade-Offs for Approximate Near Neighbors
We show tight lower bounds for the entire trade-off between space and query
time for the Approximate Near Neighbor search problem. Our lower bounds hold in
a restricted model of computation, which captures all hashing-based approaches.
In particular, our lower bound matches the upper bound recently shown in
[Laarhoven 2015] for the random instance on a Euclidean sphere (which we show
in fact extends to the entire space using the techniques from
[Andoni, Razenshteyn 2015]).
We also show tight, unconditional cell-probe lower bounds for one and two
probes, improving upon the best known bounds from [Panigrahy, Talwar, Wieder
2010]. In particular, this is the first space lower bound (for any static data
structure) for two probes which is not polynomially smaller than for one probe.
To show the result for two probes, we establish and exploit a connection to
locally-decodable codes.
Comment: 47 pages, 2 figures; v2: substantially revised introduction, lots of
small corrections; subsumed by arXiv:1608.03580 [cs.DS] (along with
arXiv:1511.07527 [cs.DS]).
Outlaw distributions and locally decodable codes
Locally decodable codes (LDCs) are error correcting codes that allow for
decoding of a single message bit using a small number of queries to a corrupted
encoding. Despite decades of study, the optimal trade-off between query
complexity and codeword length is far from understood. In this work, we give a
new characterization of LDCs using distributions over Boolean functions whose
expectation is hard to approximate (in $L_\infty$ norm) with a small number of
samples. We coin the term `outlaw distributions' for such distributions since
they `defy' the Law of Large Numbers. We show that the existence of outlaw
distributions over sufficiently `smooth' functions implies the existence of
constant query LDCs and vice versa. We give several candidates for outlaw
distributions over smooth functions coming from finite field incidence
geometry, additive combinatorics and from hypergraph (non)expanders.
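For readers unfamiliar with LDCs, the classical Hadamard code is the standard 2-query example (not specific to this paper): each codeword position stores one parity $\langle a, x\rangle$ of the message, so $x_i$ can be recovered from just two positions. A minimal Python sketch, purely illustrative:

```python
import random

def dot(a, x):
    """Inner product over F_2: parity of the bits shared by masks a and x."""
    return bin(a & x).count("1") % 2

def hadamard_encode(x, n):
    """Codeword of length 2^n: one parity <a, x> for every a in {0,1}^n."""
    return [dot(a, x) for a in range(2 ** n)]

def decode_bit(y, i, n):
    """One 2-query probe for bit i: <a,x> XOR <a^e_i,x> = x_i, so two
    codeword positions suffice when both happen to be uncorrupted."""
    a = random.randrange(2 ** n)
    return y[a] ^ y[a ^ (1 << i)]

def decode_bit_majority(y, i, n):
    """Majority over all probes tolerates a small fraction of corruptions."""
    votes = [y[a] ^ y[a ^ (1 << i)] for a in range(2 ** n)]
    return int(sum(votes) * 2 > len(votes))

n, x = 3, 0b101                  # message bits x_0=1, x_1=0, x_2=1
y = hadamard_encode(x, n)
y[2] ^= 1                        # corrupt one of the 8 codeword positions
print([decode_bit_majority(y, i, n) for i in range(n)])  # [1, 0, 1]
```

The exponential codeword length ($2^n$ positions for $n$ message bits) is exactly the kind of cost the query-complexity/length trade-off studied here concerns.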
We also prove a useful lemma showing that (smooth) LDCs which are only
required to work on average over a random message and a random message index
can be turned into true LDCs at the cost of only constant factors in the
parameters.
Comment: A preliminary version of this paper appeared in the proceedings of
ITCS 201
Continuous groups of transversal gates for quantum error correcting codes from finite clock reference frames
Following the introduction of the task of reference frame error correction,
we show how, by using reference frame alignment with clocks, one can add a
continuous Abelian group of transversal logical gates to any error-correcting
code. With this we further explore a way of circumventing the no-go theorem of
Eastin and Knill, which states that if local errors are correctable, the group
of transversal gates must be of finite order. We are able to do this by
introducing a small error on the decoding procedure that decreases with the
dimension of the frames used. Furthermore, we show that there is a direct
relationship between how small this error can be and how accurate quantum
clocks can be: the more accurate the clock, the smaller the error; and the
no-go theorem would be violated if time could be measured perfectly in quantum
mechanics. The asymptotic scaling of the error is studied under a number of
scenarios of reference frames and error models. The scheme is also extended to
errors at unknown locations, and we show how to achieve this by simple majority
voting related error correction schemes on the reference frames. In the
Outlook, we discuss our results in relation to the AdS/CFT correspondence and
the Page-Wootters mechanism.
Comment: 10+35 pages. Also see related work uploaded to the arXiv on the same
day; arXiv:1902.0771
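As a classical analogue of the majority-voting idea mentioned above, consider a simple repetition code; this sketch is illustrative only and is not the paper's scheme on quantum reference frames:

```python
from collections import Counter

def encode(bit, copies=5):
    """Repetition code: store the same bit redundantly across several copies
    (playing the role of the redundant reference frames)."""
    return [bit] * copies

def majority_decode(received):
    """Decoding succeeds as long as fewer than half the copies are flipped,
    even when the error locations are unknown."""
    return Counter(received).most_common(1)[0][0]

word = encode(1, copies=5)
word[0] ^= 1
word[3] ^= 1                   # two of five copies flipped, locations unknown
print(majority_decode(word))   # 1
```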
Inferring Energy Bounds via Static Program Analysis and Evolutionary Modeling of Basic Blocks
The ever increasing number and complexity of energy-bound devices (such as
the ones used in Internet of Things applications, smart phones, and mission
critical systems) pose an important challenge on techniques to optimize their
energy consumption and to verify that they will perform their function within
the available energy budget. In this work we address this challenge from the
software point of view and propose a novel parametric approach to estimating
tight bounds on the energy consumed by program executions that are practical
for their application to energy verification and optimization. Our approach
divides a program into basic (branchless) blocks and estimates the maximal and
minimal energy consumption for each block using an evolutionary algorithm. Then
it combines the obtained values according to the program control flow, using
static analysis, to infer functions that give both upper and lower bounds on
the energy consumption of the whole program and its procedures as functions on
input data sizes. We have tested our approach on (C-like) embedded programs
running on the XMOS hardware platform. However, our method is general enough to
be applied to other microprocessor architectures and programming languages. The
bounds obtained by our prototype implementation can be tight while remaining on
the safe side of budgets in practice, as shown by our experimental evaluation.
Comment: Pre-proceedings paper presented at the 27th International Symposium
on Logic-Based Program Synthesis and Transformation (LOPSTR 2017), Namur,
Belgium, 10-12 October 2017 (arXiv:1708.07854). Improved version of the one
presented at the HIP3ES 2016 workshop (v1): more experimental results (added
benchmark to Table 1, added figure for new benchmark, added Table 3),
improved Fig. 1, added Fig.
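The combination step described above can be sketched as interval arithmetic over a block-structured control flow: per-block bounds add in sequence, widen across branches, and scale with loop iteration counts. The combinators and block energies below are illustrative assumptions, not the paper's implementation:

```python
# Hedged sketch: (lo, hi) energy intervals per basic block, combined
# according to control flow. Real per-block values would come from the
# evolutionary estimation step; these numbers are made up.

def seq(*bounds):
    """Blocks executed one after another: interval endpoints add."""
    return (sum(lo for lo, _ in bounds), sum(hi for _, hi in bounds))

def branch(*bounds):
    """Either side may run: keep the loosest safe interval."""
    return (min(lo for lo, _ in bounds), max(hi for _, hi in bounds))

def loop(body, iterations):
    """Body repeated `iterations` times (e.g. a function of input size n)."""
    return (body[0] * iterations, body[1] * iterations)

def energy_bounds(n):
    init = (2.0, 3.0)                        # hypothetical block energy (nJ)
    body = branch((1.0, 1.5), (0.5, 2.0))    # two alternative paths in a loop
    return seq(init, loop(body, n))

print(energy_bounds(10))  # (7.0, 23.0)
```

The result is a pair of functions of the input size n: a lower and an upper bound on whole-program energy, mirroring the parametric bounds the paper infers.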
Interference Mitigation Through Limited Receiver Cooperation
Interference is a major issue limiting the performance in wireless networks.
Cooperation among receivers can help mitigate interference by forming
distributed MIMO systems. The rate at which receivers cooperate, however, is
limited in most scenarios. How much interference can one bit of receiver
cooperation mitigate? In this paper, we study the two-user Gaussian
interference channel with conferencing decoders to answer this question in a
simple setting. We identify two regions regarding the gain from receiver
cooperation: linear and saturation regions. In the linear region receiver
cooperation is efficient and provides a degrees-of-freedom gain, which is
either one cooperation bit buys one more bit or two cooperation bits buy one
more bit until saturation. In the saturation region receiver cooperation is
inefficient and provides a power gain, which is at most a constant regardless
of the rate at which receivers cooperate. The conclusion is drawn from the
characterization of capacity region to within two bits. The proposed strategy
consists of two parts: (1) the transmission scheme, where superposition
encoding with a simple power split is employed, and (2) the cooperative
protocol, where one receiver quantize-bin-and-forwards its received signal, and
the other after receiving the side information decode-bin-and-forwards its
received signal.
Comment: Submitted to IEEE Transactions on Information Theory. 69 pages, 14
figures.
Fast Deterministic Selection
The Median of Medians (also known as BFPRT) algorithm, although a landmark
theoretical achievement, is seldom used in practice because it and its variants
are slower than simple approaches based on sampling. The main contribution of
this paper is a fast linear-time deterministic selection algorithm
QuickselectAdaptive based on a refined definition of MedianOfMedians. The
algorithm's performance brings deterministic selection---along with its
desirable properties of reproducible runs, predictable run times, and immunity
to pathological inputs---in the range of practicality. We demonstrate results
on independent and identically distributed random inputs and on
normally-distributed inputs. Measurements show that QuickselectAdaptive is
faster than state-of-the-art baselines.
Comment: Pre-publication draft
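For reference, the Median of Medians (BFPRT) scheme that the paper refines can be sketched as follows; this is the textbook algorithm, not QuickselectAdaptive itself:

```python
def median_of_medians_select(a, k):
    """Return the k-th smallest element (0-based) in worst-case linear time."""
    a = list(a)
    if len(a) <= 5:
        return sorted(a)[k]
    # Pivot: the median of the medians of groups of 5, which is guaranteed
    # to land between the 30th and 70th percentiles of the input.
    medians = [sorted(a[i:i + 5])[len(a[i:i + 5]) // 2]
               for i in range(0, len(a), 5)]
    pivot = median_of_medians_select(medians, len(medians) // 2)
    lo = [x for x in a if x < pivot]
    eq = [x for x in a if x == pivot]
    hi = [x for x in a if x > pivot]
    if k < len(lo):
        return median_of_medians_select(lo, k)
    if k < len(lo) + len(eq):
        return pivot
    return median_of_medians_select(hi, k - len(lo) - len(eq))

print(median_of_medians_select([9, 1, 7, 3, 5, 8, 2, 6, 4, 0], 4))  # 4
```

The pivot guarantee bounds each recursive partition away from the worst case, giving the deterministic O(n) bound; the constant factors hidden in that recursion are exactly what make naive implementations slow in practice.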
Approximate Sum-Capacity of K-user Cognitive Interference Channels with Cumulative Message Sharing
This paper considers the K-user cognitive interference channel with one
primary and K-1 secondary/cognitive transmitters with a cumulative message
sharing structure, i.e., cognitive transmitter $i$ knows non-causally
all messages of the users with index less than $i$. We propose a computable outer
bound valid for any memoryless channel. We first evaluate the sum-rate outer
bound for the high-SNR linear deterministic approximation of the Gaussian
noise channel. This is shown to be the capacity for the 3-user channel with
arbitrary channel gains and the sum-capacity for the symmetric K-user channel.
Interestingly, for the K-user channel, having only the K-th cognitive
transmitter know all the other messages is sufficient to achieve capacity,
i.e., cognition at transmitters 2 to K-1 is not needed. Next, the sum capacity
of the symmetric
Gaussian noise channel is characterized to within a constant additive and
multiplicative gap. The proposed achievable scheme for the additive gap is
based on Dirty paper coding and can be thought of as a MIMO-broadcast scheme
where only one encoding order is possible due to the message sharing structure.
As opposed to other multiuser interference channel models, a single scheme
suffices for both the weak and strong interference regimes. With this scheme
the generalized degrees of freedom (gDoF) is shown to be a function of K, in
contrast to the non-cognitive case and the broadcast channel case.
Interestingly, it is shown that as the number of users grows to infinity the
gDoF of the K-user cognitive interference channel with cumulative message
sharing tends to the gDoF of a broadcast channel with a K-antenna transmitter
and K single-antenna receivers. The analytical additive and
multiplicative gaps are a function of the number of users. Numerical
evaluations of inner and outer bounds show that the actual gap is less than the
analytical one.
Comment: Journal