Tradeoffs for nearest neighbors on the sphere
We consider tradeoffs between the query and update complexities for the (approximate) nearest neighbor problem on the sphere, extending the recent spherical filters to sparse regimes and generalizing the scheme and analysis to account for different tradeoffs. In a nutshell, for the sparse regime the tradeoff between the query complexity and the update complexity is governed by a single equation in terms of the data set size, the approximation factor, and the query and update exponents.
For small approximation factors, minimizing the time for updates leads to a linear space complexity at the cost of a higher query time complexity. Balancing the query and update costs leads to optimal complexities, matching bounds from [Andoni-Razenshteyn, 2015] and [Dubiner, IEEE-TIT'10] and matching the asymptotic complexities of [Andoni-Razenshteyn, STOC'15] and [Andoni-Indyk-Laarhoven-Razenshteyn-Schmidt, NIPS'15]. A subpolynomial query time complexity can be achieved at the cost of a larger space complexity, matching the bound of [Andoni-Indyk-Patrascu, FOCS'06] and [Panigrahy-Talwar-Wieder, FOCS'10] and improving upon results of [Indyk-Motwani, STOC'98] and [Kushilevitz-Ostrovsky-Rabani, STOC'98].
For large approximation factors, minimizing the update complexity results in a query complexity improving upon the related exponent of [Kapralov, PODS'15], and matching the bound of [Panigrahy-Talwar-Wieder, FOCS'08]. Balancing the costs again leads to optimal complexities, while a minimum query time complexity can be achieved at a higher update complexity, improving upon the previous best exponents of Kapralov.

Comment: 16 pages, 1 table, 2 figures. Mostly subsumed by arXiv:1608.03580 [cs.DS] (along with arXiv:1605.02701 [cs.DS]).
Dynamic Traitor Tracing Schemes, Revisited
We revisit recent results from the area of collusion-resistant traitor
tracing, and show how they can be combined and improved to obtain more
efficient dynamic traitor tracing schemes. In particular, we show how the
dynamic Tardos scheme of Laarhoven et al. can be combined with the optimized
score functions of Oosterwijk et al. to trace coalitions much faster. If the
attack strategy is known, in many cases the order of the code length goes down
from quadratic to linear in the number of colluders, while if the attack is not
known, we show how the interleaving defense may be used to catch all colluders
about twice as fast as in the dynamic Tardos scheme. Some of these results also
apply to the static traitor tracing setting where the attack strategy is known
in advance, and to group testing.

Comment: 7 pages, 1 figure (6 subfigures), 1 table.
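As a rough illustration of the score-based tracing that the dynamic Tardos scheme builds on, the following is a sketch of the symmetric Tardos score function; the function name and docstring phrasing are mine, not the paper's:

```python
import math

def tardos_score(x, y, p):
    """Symmetric Tardos score for one code position (a sketch).

    x: the user's code symbol (0 or 1), y: the pirates' output symbol,
    p: the bias (probability of a 1) drawn for this position.
    Agreement with the pirate output raises the score, disagreement
    lowers it; in a dynamic scheme, a user whose accumulated score
    crosses a threshold is disconnected immediately.
    """
    if y == 1:
        return math.sqrt((1 - p) / p) if x == 1 else -math.sqrt(p / (1 - p))
    else:
        return math.sqrt(p / (1 - p)) if x == 0 else -math.sqrt((1 - p) / p)
```

By construction an innocent user's expected score per position is zero, which is what makes threshold-crossing evidence of collusion.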
Faster tuple lattice sieving using spherical locality-sensitive filters
To overcome the large memory requirement of classical lattice sieving
algorithms for solving hard lattice problems, Bai-Laarhoven-Stehl\'{e} [ANTS
2016] studied tuple lattice sieving, where tuples instead of pairs of lattice
vectors are combined to form shorter vectors. Herold-Kirshanova [PKC 2017]
recently improved upon their results for arbitrary tuple sizes, for example showing that a triple sieve can solve the shortest vector problem (SVP) in time exponential in the dimension, using a technique similar to locality-sensitive hashing for finding nearest neighbors.
In this work, we generalize the spherical locality-sensitive filters of
Becker-Ducas-Gama-Laarhoven [SODA 2016] to obtain space-time tradeoffs for near
neighbor searching on dense data sets, and we apply these techniques to tuple
lattice sieving to obtain even better time complexities. For instance, our triple sieve heuristically solves SVP with a strictly smaller exponent than before. For
practical sieves based on Micciancio-Voulgaris' GaussSieve [SODA 2010], this
shows that a triple sieve uses less space and less time than the current best
near-linear space double sieve.

Comment: 12 pages + references, 2 figures. Subsumed/merged into Cryptology ePrint Archive 2017/228, available at https://ia.cr/2017/122.
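A minimal sketch of the spherical locality-sensitive filter idea of Becker-Ducas-Gama-Laarhoven [SODA 2016] as used for candidate selection: vectors are bucketed into random spherical caps, and only vectors sharing a cap are compared. The names and the parameter alpha are illustrative assumptions, not the paper's notation:

```python
import math
import random

def random_unit(d, rng):
    """Sample a uniformly random point on the unit sphere in R^d."""
    v = [rng.gauss(0.0, 1.0) for _ in range(d)]
    n = math.sqrt(sum(x * x for x in v))
    return [x / n for x in v]

def surviving_filters(v, filters, alpha):
    """Indices of spherical filters (caps) containing v, i.e. <v, f> >= alpha.

    Nearby vectors on the sphere survive many of the same filters, so
    candidate near neighbors are exactly the pairs sharing some filter.
    """
    return {i for i, f in enumerate(filters)
            if sum(a * b for a, b in zip(v, f)) >= alpha}
```

Larger alpha means smaller caps: fewer candidates per query but more filters needed to avoid missing true neighbors; the space-time tradeoffs in the abstract come from tuning this cap size.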
Asymptotics of Fingerprinting and Group Testing: Tight Bounds from Channel Capacities
In this work we consider the large-coalition asymptotics of various
fingerprinting and group testing games, and derive explicit expressions for the
capacities for each of these models. We do this both for simple decoders (fast
but suboptimal) and for joint decoders (slow but optimal).
For fingerprinting, we show that if the pirate strategy is known, the
capacity often decreases linearly with the number of colluders, instead of
quadratically as in the uninformed fingerprinting game. For many attacks the
joint capacity is further shown to be strictly higher than the simple capacity.
For group testing, we improve upon known results about the joint capacities,
and derive new explicit asymptotics for the simple capacities. These show that
existing simple group testing algorithms are suboptimal, and that simple
decoders cannot asymptotically be as efficient as joint decoders. For the
traditional group testing model, we show that the gap between the simple and
joint capacities is a factor 1.44 for large numbers of defectives.

Comment: 14 pages, 6 figures.
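For intuition, the quoted factor 1.44 is consistent with log2(e) = 1/ln 2 ≈ 1.4427; identifying it with that exact constant is my assumption, since the abstract gives only two decimals:

```python
import math

# 1/ln(2) = log2(e) ~ 1.4427, matching the "factor 1.44" quoted above
gap = 1 / math.log(2)
print(round(gap, 2))  # 1.44
```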
Asymptotics of Fingerprinting and Group Testing: Capacity-Achieving Log-Likelihood Decoders
We study the large-coalition asymptotics of fingerprinting and group testing,
and derive explicit decoders that provably achieve capacity for many of the
considered models. We do this both for simple decoders (fast but suboptimal)
and for joint decoders (slow but optimal), and both for informed and uninformed
settings.
For fingerprinting, we show that if the pirate strategy is known, the
Neyman-Pearson-based log-likelihood decoders provably achieve capacity,
regardless of the strategy. The decoder built against the interleaving attack
is further shown to be a universal decoder, able to deal with arbitrary attacks
and achieving the uninformed capacity. This universal decoder is shown to be
closely related to the Lagrange-optimized decoder of Oosterwijk et al. and the
empirical mutual information decoder of Moulin. Joint decoders are also
proposed, and we conjecture that these also achieve the corresponding joint
capacities.
For group testing, the simple decoder for the classical model is shown to be
more efficient than the one of Chan et al. and it provably achieves the simple
group testing capacity. For generalizations of this model such as noisy group
testing, the resulting simple decoders also achieve the corresponding simple
capacities.

Comment: 14 pages, 2 figures.
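To make the simple log-likelihood decoder concrete, here is a sketch of a per-position score against the interleaving attack, under which the pirates output the symbol of a uniformly random colluder; the function name and this exact reconstruction are mine, not necessarily the paper's:

```python
import math

def llr_score(x, y, p, c):
    """Per-position Neyman-Pearson log-likelihood score (a sketch).

    x: the inspected user's symbol, y: the pirate output, p: the bias
    (probability of a 1 in this position), c: the number of colluders.
    Under the interleaving attack, P(y=1 | one colluder holds x) is
    ((c-1)*p + x)/c, while an innocent user sees only the marginal
    P(y=1) = p; the score is the log of the ratio of the two.
    """
    if y == 1:
        return math.log(((c - 1) * p + x) / (c * p))
    return math.log(((c - 1) * (1 - p) + (1 - x)) / (c * (1 - p)))
```

Summing these scores over all positions and accusing users whose total exceeds a threshold is the standard Neyman-Pearson recipe.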
Efficient Probabilistic Group Testing Based on Traitor Tracing
Inspired by recent results from collusion-resistant traitor tracing, we
provide a framework for constructing efficient probabilistic group testing
schemes. In the traditional group testing model, our scheme asymptotically
requires T ~ 2 K ln N tests to find (with high probability) the correct set of
K defectives out of N items. The framework is also applied to several noisy
group testing and threshold group testing models, often leading to improvements
over previously known results, but we emphasize that this framework can be
applied to other variants of the classical model as well, both in adaptive and
in non-adaptive settings.

Comment: 8 pages, 3 figures, 1 table.
Interactive Consistency Algorithms Based on Voting and Error-Correcting Codes
This paper presents a new class of synchronous deterministic non-authenticated algorithms for reaching interactive consistency (Byzantine agreement). The algorithms are based on voting and error-correcting codes, and require considerably less data communication than the original algorithm, while the number of rounds and the number of modules meet the minimum bounds. These voting-and-coding algorithms are defined and proved on the basis of a class of algorithms called dispersed joined communication algorithms.
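The voting primitive underlying such interactive-consistency algorithms can be sketched as a strict-majority vote over the values relayed for each module; the error-correcting-code layer that reduces communication is not shown, and the helper name is mine:

```python
from collections import Counter

def majority_vote(values):
    """Return the strict-majority value among the received values, or None.

    In interactive consistency, each non-faulty module applies such a
    vote to the copies of another module's private value it received;
    with enough modules, faulty senders cannot sway a strict majority.
    """
    value, freq = Counter(values).most_common(1)[0]
    return value if freq > len(values) // 2 else None
```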
Capacities and Capacity-Achieving Decoders for Various Fingerprinting Games
Combining an information-theoretic approach to fingerprinting with a more
constructive, statistical approach, we derive new results on the fingerprinting
capacities for various informed settings, as well as new log-likelihood
decoders with provable code lengths that asymptotically match these capacities.
The simple decoder built against the interleaving attack is further shown to
achieve the simple capacity for unknown attacks, and is argued to be an
improved version of the recently proposed decoder of Oosterwijk et al. With
this new universal decoder, cut-offs on the bias distribution function can
finally be dismissed.
Besides the application of these results to fingerprinting, a direct
consequence of our results to group testing is that (i) a simple decoder
asymptotically requires a factor 1.44 more tests to find defectives than a
joint decoder, and (ii) the simple decoder presented in this paper provably
achieves this bound.

Comment: 13 pages, 2 figures.
Tangential symmetries of Darboux integrable systems
In this paper we analyze the tangential symmetries of Darboux integrable
decomposable exterior differential systems. The decomposable systems generalize
the notion of a hyperbolic exterior differential system and include the classic
notion of Darboux integrability for first order systems and second order scalar
equations. For Darboux integrable systems the general solution can be found by
integration (solving ordinary differential equations). We show that this
property holds for our generalized systems as well.
We give a geometric construction of the Lie algebras of tangential symmetries
associated to the Darboux integrable systems. This construction has the
advantage over previous constructions that our construction does not require
the use of adapted coordinates and works for arbitrary dimension of the
underlying manifold. In particular it works for the prolongations of
decomposable exterior differential systems
How important is the intensive margin of labor adjustment? Discussion of "Aggregate hours worked in OECD countries: new measurement and implications for business cycles" by Lee Ohanian and Andrea Raffo
Using new quarterly data for hours worked in OECD countries, Ohanian and Raffo (2011) argue that in many OECD countries, particularly in Europe, hours per worker are quantitatively important as an intensive margin of labor adjustment, possibly because labor market frictions are higher than in the US. I argue that this conclusion is not supported by the data. Using the same data on hours worked, I find evidence that labor market frictions are higher in Europe than in the US, like Ohanian and Raffo, but also that these frictions seem to affect the intensive margin at least as much as the extensive margin of labor adjustment.