Dimension reduction by random hyperplane tessellations
Given a subset K of the unit Euclidean sphere, we estimate the minimal number
m = m(K) of hyperplanes that generate a uniform tessellation of K, in the sense
that the fraction of the hyperplanes separating any pair x, y in K is nearly
proportional to the Euclidean distance between x and y. Random hyperplanes
prove to be almost ideal for this problem; they achieve the almost optimal
bound m = O(w(K)^2) where w(K) is the Gaussian mean width of K. Using the map
that sends x in K to the sign vector with respect to the hyperplanes, we
conclude that every bounded subset K of R^n embeds into the Hamming cube {-1,
1}^m with a small distortion in the Gromov-Hausdorff metric. Since for many
sets K one has m = m(K) << n, this yields a new discrete mechanism of dimension
reduction for sets in Euclidean spaces.
Comment: 17 pages, 3 figures, minor update
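The sign-vector embedding described in this abstract is easy to illustrate numerically: for points on the sphere, the expected fraction of random hyperplanes through the origin separating x and y equals the angle between them divided by pi, which is comparable to their Euclidean distance. A minimal sketch (an illustration only, not the authors' code; the dimension and hyperplane count are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 20, 2000  # ambient dimension, number of random hyperplanes

# Random hyperplanes through the origin: normals drawn from a Gaussian.
A = rng.standard_normal((m, n))

def sign_embedding(x):
    """Map x to its sign vector in {-1, 1}^m w.r.t. the m hyperplanes."""
    return np.sign(A @ x)

# Two points on the unit sphere.
x = rng.standard_normal(n); x /= np.linalg.norm(x)
y = rng.standard_normal(n); y /= np.linalg.norm(y)

# Fraction of hyperplanes separating x and y ...
frac = np.mean(sign_embedding(x) != sign_embedding(y))

# ... should concentrate around the normalized angle arccos(<x, y>)/pi,
# which is comparable to the Euclidean distance between x and y.
angle = np.arccos(np.clip(x @ y, -1, 1)) / np.pi
print(frac, angle)
```

The normalized Hamming distance between the two sign vectors tracks the normalized angle, which is the "uniform tessellation" property the abstract quantifies.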
The stochastic geometry of unconstrained one-bit data compression
A stationary stochastic geometric model is proposed for analyzing the data
compression method used in one-bit compressed sensing. The data set is an
unconstrained stationary set, for instance all of R^n or a
stationary Poisson point process in R^n. It is compressed using a
stationary and isotropic Poisson hyperplane tessellation, assumed independent
of the data. That is, each data point is compressed using one bit with respect
to each hyperplane, which is the side of the hyperplane it lies on. This model
allows one to determine how the intensity of the hyperplanes must scale with
the dimension to ensure sufficient separation of different data by the
hyperplanes as well as sufficient proximity of the data compressed together.
The results have direct implications in compressive sensing and in source
coding.
Comment: 29 pages
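The one-bit compression scheme in this abstract can be sketched in a few lines: each data point records, for every hyperplane of the process, which side it lies on. The sketch below (an illustration only; the dimension, intensity, and window are arbitrary choices, and the hyperplane process is simulated only inside a bounded window) shows that nearby points share most of their bits while distant points differ on many:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 3            # dimension of the data space
lam = 200        # expected number of hyperplanes hitting the window
R = 5.0          # radius of the observation window

# A stationary, isotropic Poisson hyperplane process restricted to a ball:
# each hyperplane has a uniform random unit normal and a signed offset
# uniform in [-R, R]; their number is Poisson.
m = rng.poisson(lam)
normals = rng.standard_normal((m, n))
normals /= np.linalg.norm(normals, axis=1, keepdims=True)
offsets = rng.uniform(-R, R, size=m)

def compress(x):
    """One bit per hyperplane: which side of the hyperplane x lies on."""
    return np.sign(normals @ x - offsets)

# Nearby data points share most of their bits; distant points do not.
x = np.array([1.0, 0.0, 0.0])
near = x + 0.01 * rng.standard_normal(n)
far = np.array([-3.0, 2.0, 1.0])
near_frac = np.mean(compress(x) != compress(near))
far_frac = np.mean(compress(x) != compress(far))
print(near_frac, far_frac)
```

How the intensity lam must grow with the dimension n to keep distinct data separated is exactly the scaling question the abstract analyzes.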
Robust 1-bit compressed sensing and sparse logistic regression: A convex programming approach
This paper develops theoretical results regarding noisy 1-bit compressed
sensing and sparse binomial regression. We show that a single convex program
gives an accurate estimate of the signal, or coefficient vector, for both of
these models. We demonstrate that an s-sparse signal in R^n can be accurately
estimated from m = O(s log(n/s)) single-bit measurements using a simple convex
program. This remains true even if each measurement bit is flipped with
probability nearly 1/2. Worst-case (adversarial) noise can also be accounted
for, and uniform results that hold for all sparse inputs are derived as well.
In the terminology of sparse logistic regression, we show that O(s log(n/s))
Bernoulli trials are sufficient to estimate a coefficient vector in R^n which
is approximately s-sparse. Moreover, the same convex program works for
virtually all generalized linear models, in which the link function may be
unknown. To our knowledge, these are the first results that tie together the
theory of sparse logistic regression to 1-bit compressed sensing. Our results
apply to general signal structures aside from sparsity; one only needs to know
the size of the set K where signals reside. The size is given by the mean width
of K, a computable quantity whose square serves as a robust extension of the
dimension.
Comment: 25 pages, 1 figure, error fixed in Lemma 4
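The recovery setting can be sketched numerically. The estimator below is a simplified stand-in for the paper's convex program, not the program itself: it correlates the sign measurements with the measurement vectors and hard-thresholds to the s largest coordinates. All parameters are arbitrary choices for illustration:

```python
import numpy as np

rng = np.random.default_rng(2)
n, s, m = 500, 5, 2000   # ambient dim, sparsity, number of 1-bit measurements

# An s-sparse unit-norm signal.
x = np.zeros(n)
support = rng.choice(n, size=s, replace=False)
x[support] = rng.standard_normal(s)
x /= np.linalg.norm(x)

# One-bit measurements y_i = sign(<a_i, x>), with random bit flips as noise.
A = rng.standard_normal((m, n))
y = np.sign(A @ x)
flips = rng.random(m) < 0.05          # 5% of bits flipped at random
y[flips] *= -1

# Simplified estimator (a stand-in for the convex program): correlate the
# bits with the measurement vectors, keep the s largest coordinates in
# magnitude, and renormalize.
z = A.T @ y / m
idx = np.argsort(np.abs(z))[-s:]
x_hat = np.zeros(n)
x_hat[idx] = z[idx]
x_hat /= np.linalg.norm(x_hat)

err = np.linalg.norm(x - x_hat)
print(err)
```

Even with 5% of the bits flipped, the estimate lands close to the true signal direction, consistent with the robustness the abstract claims for the convex-programming approach.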
Golden codes: quantum LDPC codes built from regular tessellations of hyperbolic 4-manifolds
We adapt a construction of Guth and Lubotzky [arXiv:1310.5555] to obtain a
family of quantum LDPC codes with non-vanishing rate and minimum distance
scaling like a power of n, where n is the number of physical qubits. As in
[arXiv:1310.5555], our homological code family stems from hyperbolic
4-manifolds equipped with tessellations. The main novelty of this work is that
we consider a regular tessellation consisting of hypercubes. We exploit this
strong local structure to design and analyze an efficient decoding algorithm.
Comment: 30 pages, 4 figures
Extremes for the inradius in the Poisson line tessellation
A Poisson line tessellation is observed within a window. With each cell of
the tessellation, we associate the inradius, which is the radius of the largest
ball contained in the cell. Using Poisson approximation, we compute the limit
distributions of the largest and smallest order statistics for the inradii of
all cells whose nuclei are contained in the window in the limit as the window
is scaled to infinity. We additionally prove that the limit shape of the cells
minimising the inradius is a triangle.
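The inradius statistic studied here can be approximated by simulation. The sketch below (an illustration only; intensity, window, and grid resolution are arbitrary choices) simulates a Poisson line process in the plane and approximates the inradius of the cell containing the origin by a grid search over candidate incenter locations:

```python
import numpy as np

rng = np.random.default_rng(3)

# Poisson line process in the plane, restricted to lines hitting a disc of
# radius R: each line has a uniform direction and a signed distance to the
# origin, uniform in [-R, R]; their number is Poisson.
R, intensity = 10.0, 2.0
m = rng.poisson(intensity * 2 * R)
theta = rng.uniform(0, np.pi, size=m)
normals = np.column_stack([np.cos(theta), np.sin(theta)])
dists = rng.uniform(-R, R, size=m)

def inradius_of_zero_cell(grid_step=0.05):
    """Approximate the inradius of the cell containing the origin: a grid
    point belongs to the cell iff it lies on the same side of every line as
    the origin, and the inradius is roughly the largest distance from such
    a point to its nearest line. (A lower-bound approximation: the grid
    covers only a bounded window around the origin.)"""
    side0 = np.sign(-dists)                       # side of each line w.r.t. origin
    xs = np.arange(-2.0, 2.0001, grid_step)
    gx, gy = np.meshgrid(xs, xs)
    pts = np.column_stack([gx.ravel(), gy.ravel()])
    signed = pts @ normals.T - dists              # signed distances to all lines
    in_cell = np.all(np.sign(signed) == side0, axis=1)
    return np.max(np.min(np.abs(signed[in_cell]), axis=1))

r = inradius_of_zero_cell()
print(r)
```

Repeating this over many cells in a growing window would produce the empirical order statistics whose limit distributions the abstract computes via Poisson approximation.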