
    Some constructions of superimposed codes in Euclidean spaces

    We describe three new methods for obtaining superimposed codes in Euclidean spaces. Using these methods, we construct codes with parameters improving upon known constructions. We also prove that the spherical simplex code is not optimal as a superimposed code, at least for dimensions greater than 9.
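
    For reference, the spherical simplex code mentioned in the last sentence consists of d+1 unit vectors in a d-dimensional subspace whose pairwise inner products all equal -1/d. The sketch below only constructs that code and verifies its Gram matrix; it is illustrative and does not reproduce the paper's superimposed-code analysis or its three new constructions.

```python
# Illustrative sketch only: the spherical simplex code in dimension d, i.e. d+1
# unit vectors with pairwise inner product -1/d (the dimension below is arbitrary).
import numpy as np

def simplex_code(d: int) -> np.ndarray:
    """Return d+1 unit vectors (rows) forming a regular simplex on the sphere."""
    # Centre the standard basis of R^{d+1}; the centred vectors span a
    # d-dimensional subspace, so this is the simplex code in dimension d.
    centred = np.eye(d + 1) - 1.0 / (d + 1)
    return centred / np.linalg.norm(centred, axis=1, keepdims=True)

C = simplex_code(9)
gram = C @ C.T
off_diag = gram[~np.eye(C.shape[0], dtype=bool)]
print(np.allclose(off_diag, -1.0 / 9))   # all pairwise inner products equal -1/9
```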

    Two-dimensional patterns with distinct differences: constructions, bounds, and maximal anticodes

    A two-dimensional (2-D) grid with dots is called a configuration with distinct differences if any two line segments connecting pairs of dots are distinct in either their length or their slope. These configurations are known to have many applications, such as radar, sonar, physical alignment, and time-position synchronization. Rather than restricting dots to lie in a square or rectangle, as previously studied, we restrict the maximum distance between dots of the configuration; the motivation for this is a new application of such configurations to key distribution in wireless sensor networks. We consider configurations in the hexagonal grid as well as in the traditional square grid, with distances measured both in the Euclidean metric and in the Manhattan or hexagonal metrics. We note that these configurations are confined inside maximal anticodes in the corresponding grid. We classify maximal anticodes for each diameter in each grid. We present upper bounds on the number of dots in a pattern with distinct differences contained in these maximal anticodes. Our bounds settle (in the negative) a question of Golomb and Taylor on the existence of honeycomb arrays of arbitrarily large size. We present constructions and lower bounds on the number of dots in configurations with distinct differences contained in various 2-D shapes (such as anticodes) by considering periodic configurations with distinct differences in the square grid.
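
    The distinctness condition above is equivalent to requiring that all ordered difference vectors between distinct dots are pairwise distinct. Below is a minimal checker for the square grid (a hypothetical helper, not code from the paper; the hexagonal grid and the Manhattan/hexagonal metrics are not covered).

```python
# Hypothetical helper (not from the paper): test whether a set of dots in the
# square grid is a distinct-difference configuration, i.e. no two segments
# between dots agree in both length and slope. Equivalently, all ordered
# difference vectors between distinct dots are pairwise distinct.
from itertools import permutations

def has_distinct_differences(dots):
    """dots: iterable of (x, y) integer grid points."""
    dots = list(set(dots))
    diffs = [(bx - ax, by - ay) for (ax, ay), (bx, by) in permutations(dots, 2)]
    return len(diffs) == len(set(diffs))

print(has_distinct_differences([(0, 0), (1, 0), (0, 2)]))   # True
print(has_distinct_differences([(0, 0), (1, 0), (2, 0)]))   # False: difference (1, 0) repeats
```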

    On Deterministic Sketching and Streaming for Sparse Recovery and Norm Estimation

    We study classic streaming and sparse recovery problems using deterministic linear sketches, including $\ell_1/\ell_1$ and $\ell_\infty/\ell_1$ sparse recovery problems (the latter also being known as $\ell_1$-heavy hitters), norm estimation, and approximate inner product. We focus on devising a fixed matrix $A \in \mathbb{R}^{m \times n}$ and a deterministic recovery/estimation procedure which work for all possible input vectors simultaneously. Our results improve upon existing work, the following being our main contributions: • A proof that $\ell_\infty/\ell_1$ sparse recovery and inner product estimation are equivalent, and that incoherent matrices can be used to solve both problems. Our upper bound for the number of measurements is $m = O(\varepsilon^{-2}\min\{\log n, (\log n/\log(1/\varepsilon))^2\})$. We can also obtain fast sketching and recovery algorithms by making use of the Fast Johnson-Lindenstrauss transform. Both our running times and number of measurements improve upon previous work. We can also obtain better error guarantees than previous work in terms of a smaller tail of the input vector. • A new lower bound for the number of linear measurements required to solve $\ell_1/\ell_1$ sparse recovery. We show $\Omega(k/\varepsilon^2 + k\log(n/k)/\varepsilon)$ measurements are required to recover an $x'$ with $\|x - x'\|_1 \le (1+\varepsilon)\|x_{\mathrm{tail}(k)}\|_1$, where $x_{\mathrm{tail}(k)}$ is $x$ projected onto all but its largest $k$ coordinates in magnitude. • A tight bound of $m = \Theta(\varepsilon^{-2}\log(\varepsilon^2 n))$ on the number of measurements required to solve deterministic norm estimation, i.e., to recover $\|x\|_2 \pm \varepsilon\|x\|_1$. For all the problems we study, tight bounds are already known for the randomized complexity from previous work, except in the case of $\ell_1/\ell_1$ sparse recovery, where a nearly tight bound is known. Our work thus aims to study the deterministic complexities of these problems.
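
    To illustrate how incoherent matrices yield the $\ell_\infty/\ell_1$ (heavy hitters) guarantee, here is a minimal sketch. It uses a random sign matrix as a stand-in, which is incoherent only with high probability; the paper's point is that the same point-query estimator works deterministically once an incoherent matrix is fixed. Estimating each coordinate by the inner product of its column with the sketch recovers $x_i$ up to $\varepsilon\|x\|_1$.

```python
# Sketch under assumptions: a random sign matrix stands in for an incoherent
# matrix (unit-norm columns, pairwise coherence <= eps with high probability).
import numpy as np

rng = np.random.default_rng(0)
n, eps = 5000, 0.2
m = int(np.ceil(6 * np.log(n) / eps**2))               # m = O(eps^-2 log n) rows

A = rng.choice([-1.0, 1.0], size=(m, n)) / np.sqrt(m)  # columns have unit norm

x = np.zeros(n)
x[[3, 17, 42]] = [5.0, -2.0, 1.0]                       # a sparse test vector
y = A @ x                                               # the linear sketch

# Point query: estimate x_i as <a_i, y>; the error at every coordinate is at
# most (coherence) * ||x||_1, which is the l_inf/l_1 heavy-hitters guarantee.
x_hat = A.T @ y
print(np.max(np.abs(x_hat - x)) <= eps * np.sum(np.abs(x)))   # True w.h.p.
```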

    On Deterministic Sketching and Streaming for Sparse Recovery and Norm Estimation

    We study classic streaming and sparse recovery problems using deterministic linear sketches, including $\ell_1/\ell_1$ and $\ell_\infty/\ell_1$ sparse recovery problems (the latter also being known as $\ell_1$-heavy hitters), norm estimation, and approximate inner product. We focus on devising a fixed matrix $A \in \mathbb{R}^{m \times n}$ and a deterministic recovery/estimation procedure which work for all possible input vectors simultaneously. Our results improve upon existing work, the following being our main contributions: • A proof that $\ell_\infty/\ell_1$ sparse recovery and inner product estimation are equivalent, and that incoherent matrices can be used to solve both problems. Our upper bound for the number of measurements is $m = O(\varepsilon^{-2}\min\{\log n, (\log n/\log(1/\varepsilon))^2\})$. We can also obtain fast sketching and recovery algorithms by making use of the Fast Johnson-Lindenstrauss transform. Both our running times and number of measurements improve upon previous work. We can also obtain better error guarantees than previous work in terms of a smaller tail of the input vector. • A new lower bound for the number of linear measurements required to solve $\ell_1/\ell_1$ sparse recovery. We show $\Omega(k/\varepsilon^2 + k\log(n/k)/\varepsilon)$ measurements are required to recover an $x'$ with $\|x - x'\|_1 \le (1+\varepsilon)\|x_{\mathrm{tail}(k)}\|_1$, where $x_{\mathrm{tail}(k)}$ is $x$ projected onto all but its largest $k$ coordinates in magnitude. • A tight bound of $m = \Theta(\varepsilon^{-2}\log(\varepsilon^2 n))$ on the number of measurements required to solve deterministic norm estimation, i.e., to recover $\|x\|_2 \pm \varepsilon\|x\|_1$. For all the problems we study, tight bounds are already known for the randomized complexity from previous work, except in the case of $\ell_1/\ell_1$ sparse recovery, where a nearly tight bound is known. Our work thus aims to study the deterministic complexities of these problems. We remark that some of the matrices used in our algorithms, although known to exist, are currently not explicit, in the sense that deterministic polynomial-time constructions are not yet known, although in all cases polynomial-time Monte Carlo algorithms are known.
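
    The norm-estimation guarantee stated above can be illustrated with a simple coherence argument: if the columns of $A$ are unit vectors with pairwise coherence $\mu$, then $\|Ax\|_2^2 = \|x\|_2^2 \pm \mu\|x\|_1^2$, and hence $\|Ax\|_2 = \|x\|_2 \pm \sqrt{\mu}\,\|x\|_1$. The sketch below uses a random sign matrix, i.e. one of the Monte Carlo constructions alluded to in the last sentence, and checks this deterministic inequality for the realized coherence; it does not reproduce the paper's measurement-optimal construction.

```python
# Sketch under assumptions: a random sign matrix (a Monte Carlo construction) is
# used in place of the paper's measurement-optimal deterministic matrices.  With
# unit-norm columns of pairwise coherence mu, ||Ax||_2 approximates ||x||_2
# within sqrt(mu) * ||x||_1, which is the shape of the norm-estimation guarantee.
import numpy as np

rng = np.random.default_rng(1)
m, n = 400, 2000
A = rng.choice([-1.0, 1.0], size=(m, n)) / np.sqrt(m)

G = A.T @ A
mu = np.max(np.abs(G - np.eye(n)))                     # realized coherence of this draw

x = rng.standard_normal(n) * (rng.random(n) < 0.01)    # a roughly 1%-sparse test vector
err = abs(np.linalg.norm(A @ x) - np.linalg.norm(x))
print(err <= np.sqrt(mu) * np.sum(np.abs(x)))          # True: guaranteed by the coherence bound
```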

    Sparse Recovery of Positive Signals with Minimal Expansion

    We investigate the sparse recovery problem of reconstructing a high-dimensional non-negative sparse vector from lower dimensional linear measurements. While much work has focused on dense measurement matrices, sparse measurement schemes are crucial in applications, such as DNA microarrays and sensor networks, where dense measurements are not practically feasible. One possible construction uses the adjacency matrices of expander graphs, which often leads to recovery algorithms much more efficient than $\ell_1$ minimization. However, to date, constructions based on expanders have required very high expansion coefficients, which can potentially make the construction of such graphs difficult and the size of the recoverable sets small. In this paper, we construct sparse measurement matrices for the recovery of non-negative vectors, using perturbations of the adjacency matrix of an expander graph with much smaller expansion coefficient. We present a necessary and sufficient condition for $\ell_1$ optimization to successfully recover the unknown vector and obtain expressions for the recovery threshold. For certain classes of measurement matrices, this necessary and sufficient condition is further equivalent to the existence of a "unique" vector in the constraint set, which opens the door to alternative algorithms to $\ell_1$ minimization. We further show that the minimal expansion we use is necessary for any graph for which sparse recovery is possible and that therefore our construction is tight. We finally present a novel recovery algorithm that exploits expansion and is much faster than $\ell_1$ optimization. Finally, we demonstrate through theoretical bounds, as well as simulation, that our method is robust to noise and approximate sparsity.
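
    Since the unknown vector is non-negative, $\ell_1$ minimization reduces to a linear program: minimize $\mathbf{1}^T x$ subject to $Ax = y$ and $x \ge 0$. Below is a minimal sketch under stated assumptions: a random sparse 0/1 matrix stands in for the paper's perturbed expander adjacency matrix, and an off-the-shelf LP solver replaces the paper's faster expansion-based recovery algorithm.

```python
# Sketch under assumptions: a random sparse 0/1 matrix stands in for the
# perturbed expander adjacency matrix, and scipy's LP solver replaces a
# dedicated recovery algorithm.  For non-negative x, ||x||_1 = 1^T x, so
# l1 minimization over {x >= 0 : Ax = y} is a linear program.
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
n, m, d, k = 200, 80, 10, 4          # signal length, measurements, column weight, sparsity

A = np.zeros((m, n))                 # each column: d ones in random rows
for j in range(n):
    A[rng.choice(m, size=d, replace=False), j] = 1.0

x = np.zeros(n)
x[rng.choice(n, size=k, replace=False)] = rng.random(k) + 0.5   # non-negative, k-sparse
y = A @ x

res = linprog(c=np.ones(n), A_eq=A, b_eq=y)     # default bounds already enforce x >= 0
print(res.success, float(np.max(np.abs(res.x - x))))   # typically recovers x up to solver tolerance
```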