32 research outputs found

    A Generalized LDPC Framework for Robust and Sublinear Compressive Sensing

    Compressive sensing aims to recover a high-dimensional sparse signal from a relatively small number of measurements. In this paper, a novel design of the measurement matrix is proposed. The design is inspired by the construction of generalized low-density parity-check codes, where capacity-achieving point-to-point codes serve as subcodes to robustly estimate the signal support. In the case that each entry of the $n$-dimensional $k$-sparse signal lies in a known discrete alphabet, the proposed scheme requires only $O(k \log n)$ measurements and arithmetic operations. In the case of an arbitrary, possibly continuous alphabet, an error propagation graph is proposed to characterize the residual estimation error. With $O(k \log^2 n)$ measurements and computational complexity, the reconstruction error can be made arbitrarily small with high probability. Comment: accepted to ICASSP 201
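    The support-estimation idea in this line of work can be illustrated with a toy peeling decoder in the style of sparse-graph-code compressive sensing. This is a hypothetical sketch, not the paper's actual construction: signal entries are hashed into bins by `i % B`, each bin keeps two linear measurements, and a bin containing a single nonzero reveals its index through a ratio test.

    ```python
    # Toy singleton-detection + peeling sketch for sparse recovery
    # (illustrative only; the hashing and bin tests are simplified).

    def measure(x, B):
        # Each bin b stores two linear measurements of the signal:
        # s0[b] = sum of x[i] over i with i % B == b
        # s1[b] = sum of i * x[i] over the same i
        s0 = [0.0] * B
        s1 = [0.0] * B
        for i, v in enumerate(x):
            s0[i % B] += v
            s1[i % B] += i * v
        return s0, s1

    def peel(s0, s1, B, n, max_rounds=10):
        # Find bins holding exactly one nonzero ("singletons"): for such a
        # bin, s1/s0 equals the index of that nonzero.  Recover it, subtract
        # its contribution, and repeat until no progress is made.
        est = {}
        for _ in range(max_rounds):
            progress = False
            for b in range(B):
                if abs(s0[b]) < 1e-9:
                    continue
                idx = s1[b] / s0[b]
                i = round(idx)
                # Accept only a consistent integer index hashing to this bin;
                # a multiton can still fool this test in unlucky cases.
                if abs(idx - i) < 1e-6 and 0 <= i < n and i % B == b:
                    est[i] = est.get(i, 0.0) + s0[b]
                    s1[b] -= i * s0[b]
                    s0[b] = 0.0
                    progress = True
            if not progress:
                break
        return est
    ```

    With collision-free bins this recovers the support and values in one pass; the sparse-graph-code designs in the papers above use several hash stages so that peeling succeeds with high probability even when bins collide.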

    Sparse OFDM: A Compressive Sensing Approach to Asynchronous Neighbor Discovery

    A novel low-complexity wireless neighbor discovery scheme, referred to as sparse orthogonal frequency division multiplexing (sparse OFDM), is proposed. One area of application is the Internet of Things (IoT), where the number of devices is very large while every device accesses the network with a small probability, so the number of active devices in a frame is much smaller than the total local device population. Sparse OFDM is a one-shot transmission scheme with low complexity, which exploits both the parallel channel access offered by OFDM and the bursty nature of transmissions. When the transmission delay of each device is an integer number of symbol intervals, analysis and simulation show that sparse OFDM enables successful asynchronous neighbor discovery using a much smaller code length than random access schemes.

    State of the Art and Prospects of Structured Sensing Matrices in Compressed Sensing

    Compressed sensing (CS) acquires compressed measurements directly and recovers sparse or compressible signals faithfully even when the sampling rate is much lower than the Nyquist rate. However, purely random sensing matrices usually require huge memory for storage and high computational cost for signal reconstruction. Many structured sensing matrices have been proposed recently to simplify the sensing scheme and its hardware implementation in practice. Based on the restricted isometry property and coherence, a number of existing structured sensing matrices are reviewed in this paper; they have special structures, high recovery performance, and advantages such as simple construction, fast calculation, and easy hardware implementation. The number of measurements and the universality of the different structured matrices are compared.

    Deterministic Sparse Fourier Transform with an $\ell_\infty$ Guarantee

    In this paper we revisit the deterministic version of the Sparse Fourier Transform problem, which asks to read only a few entries of $x \in \mathbb{C}^n$ and design a recovery algorithm such that the output of the algorithm approximates $\hat x$, the Discrete Fourier Transform (DFT) of $x$. The randomized case has been well understood, while the main work in the deterministic case is that of Merhi et al. (J Fourier Anal Appl 2018), which obtains $O(k^2 \log^{-1}k \cdot \log^{5.5}n)$ samples and a similar runtime with the $\ell_2/\ell_1$ guarantee. We focus on the stronger $\ell_\infty/\ell_1$ guarantee and the closely related problem of incoherent matrices. We list our contributions as follows. 1. We find a deterministic collection of $O(k^2 \log n)$ samples for $\ell_\infty/\ell_1$ recovery in time $O(nk \log^2 n)$, and a deterministic collection of $O(k^2 \log^2 n)$ samples for $\ell_\infty/\ell_1$ sparse recovery in time $O(k^2 \log^3 n)$. 2. We give new deterministic constructions of incoherent matrices that are row-sampled submatrices of the DFT matrix, via a derandomization of Bernstein's inequality and bounds on exponential sums considered in analytic number theory. Our first construction matches a previous randomized construction of Nelson, Nguyen and Woodruff (RANDOM '12), where there was no constraint on the form of the incoherent matrix. Our algorithms are nearly sample-optimal, since a lower bound of $\Omega(k^2 + k \log n)$ is known, even for the case where the sensing matrix can be arbitrarily designed. A similar lower bound of $\Omega(k^2 \log n / \log k)$ is known for incoherent matrices. Comment: ICALP 2020; presentation improved according to reviewers' comments
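    The coherence objective behind contribution 2 can be checked numerically: for a chosen set of sampled DFT rows, the coherence of the column-normalized submatrix is the largest absolute inner product between distinct columns. A brute-force sketch with toy parameters (this is not the paper's derandomized construction, just a way to evaluate any candidate row set):

    ```python
    import cmath

    def coherence_of_row_sampled_dft(n, rows):
        # Columns of the row-sampled n-point DFT matrix, normalized to unit
        # norm, are a_j[t] = exp(2*pi*i * rows[t] * j / n) / sqrt(m).
        # Coherence: mu = max over i != j of |<a_i, a_j>|.
        m = len(rows)
        mu = 0.0
        for i in range(n):
            for j in range(i + 1, n):
                # |<a_i, a_j>| = |(1/m) * sum_t omega^{rows[t] * (j - i)}|
                s = sum(cmath.exp(2j * cmath.pi * r * (j - i) / n) for r in rows)
                mu = max(mu, abs(s) / m)
        return mu
    ```

    Sampling all $n$ rows gives orthogonal columns (coherence $0$); the constructions in the paper aim to keep the coherence small with only $m \ll n$ rows.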

    SPRIGHT: A Fast and Robust Framework for Sparse Walsh-Hadamard Transform

    We consider the problem of computing the Walsh-Hadamard Transform (WHT) of some $N$-length input vector in the presence of noise, where the $N$-point Walsh spectrum is $K$-sparse with $K = O(N^{\delta})$ scaling sub-linearly in the input dimension $N$ for some $0<\delta<1$. Over the past decade, there has been a resurgence in research related to the computation of the Discrete Fourier Transform (DFT) for some length-$N$ input signal that has a $K$-sparse Fourier spectrum. In particular, through a sparse-graph code design, our earlier work on the Fast Fourier Aliasing-based Sparse Transform (FFAST) algorithm computes the $K$-sparse DFT in time $O(K \log K)$ by taking $O(K)$ noiseless samples. Inspired by the coding-theoretic design framework, Scheibler et al. proposed the Sparse Fast Hadamard Transform (SparseFHT) algorithm that elegantly computes the $K$-sparse WHT in the absence of noise using $O(K \log N)$ samples in time $O(K \log^2 N)$. However, the SparseFHT algorithm explicitly exploits the noiseless nature of the problem and is not equipped to deal with scenarios where the observations are corrupted by noise. Therefore, a question of critical interest is whether this coding-theoretic framework can be made robust to noise. Further, if the answer is yes, what is the extra price that needs to be paid for being robust to noise? In this paper, we show, quite interestingly, that there is no extra price that needs to be paid for being robust to noise other than a constant factor. In other words, we can maintain the same sample complexity $O(K \log N)$ and the computational complexity $O(K \log^2 N)$ as those of the noiseless case, using our SParse Robust Iterative Graph-based Hadamard Transform (SPRIGHT) algorithm. Comment: Part of our results was reported in ISIT 2014, titled "The SPRIGHT algorithm for robust sparse Hadamard Transforms."
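    For reference, the dense transform that SPRIGHT computes sparsely is the standard fast Walsh-Hadamard transform, which runs in $O(N \log N)$ via an in-place butterfly recursion. A minimal sketch (unnormalized convention, so applying it twice multiplies by $N$):

    ```python
    def fwht(a):
        # In-place fast Walsh-Hadamard transform for len(a) a power of two.
        # At stage h, each butterfly maps (x, y) -> (x + y, x - y).
        a = list(a)
        h = 1
        while h < len(a):
            for i in range(0, len(a), 2 * h):
                for j in range(i, i + h):
                    x, y = a[j], a[j + h]
                    a[j], a[j + h] = x + y, x - y
            h *= 2
        return a
    ```

    A sparse algorithm like SPRIGHT beats this $O(N \log N)$ baseline when only $K = O(N^\delta)$ Walsh coefficients are nonzero, touching $O(K \log N)$ samples instead of all $N$.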

    Dimension-independent Sparse Fourier Transform

    The Discrete Fourier Transform (DFT) is a fundamental computational primitive, and the fastest known algorithm for computing the DFT is the FFT (Fast Fourier Transform) algorithm. One remarkable feature of the FFT is that its runtime depends only on the size $N$ of the input vector, but not on the dimensionality of the input domain: the FFT runs in time $O(N \log N)$ irrespective of whether the DFT in question is on $\mathbb{Z}_N$ or $\mathbb{Z}_n^d$ for some $d>1$, where $N=n^d$. The state of the art for Sparse FFT, i.e. the problem of computing the DFT of a signal that has at most $k$ nonzeros in the Fourier domain, is very different: all current techniques for sublinear-time computation of the Sparse FFT incur an exponential dependence on the dimension $d$ in the runtime. In this paper we give the first algorithm that computes the DFT of a $k$-sparse signal in time $\text{poly}(k, \log N)$ in any dimension $d$, avoiding the curse of dimensionality inherent in all previously known techniques. Our main tool is a new class of filters that we refer to as adaptive aliasing filters: these filters allow isolating frequencies of a $k$-Fourier-sparse signal using $O(k)$ samples in the time domain and $O(k \log N)$ runtime per frequency, in any dimension $d$. We also investigate natural average-case models of the input signal: (1) worst-case support in the Fourier domain with randomized coefficients and (2) random locations in the Fourier domain with worst-case coefficients. Our techniques lead to an $\widetilde O(k^2)$ time algorithm for the former and an $\widetilde O(k)$ time algorithm for the latter.
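    The aliasing phenomenon these filters build on is simple to demonstrate: keeping every $(N/B)$-th time sample folds the length-$N$ spectrum onto $B$ bins, mapping frequency $f$ to bin $f \bmod B$. A toy one-dimensional sketch (naive $O(N^2)$ DFT for clarity, not the paper's algorithm):

    ```python
    import cmath

    def dft(x):
        # Naive DFT: X[f] = sum_t x[t] * exp(-2*pi*i*f*t/N)
        N = len(x)
        return [sum(x[t] * cmath.exp(-2j * cmath.pi * f * t / N) for t in range(N))
                for f in range(N)]

    def subsample(x, B):
        # Keep every (N/B)-th time sample; in frequency this folds
        # ("aliases") bin f onto bin f mod B.
        N = len(x)
        return [x[s * (N // B)] for s in range(B)]
    ```

    For a single tone at frequency $f_0 = 9$ with $N = 16$ and $B = 4$, the subsampled spectrum has all its energy in bin $9 \bmod 4 = 1$; adaptive aliasing filters choose the folding so that each nonzero frequency of a $k$-sparse signal lands alone in its bin.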

    Walsh-Hadamard Transform and Cryptographic Applications in Bias Computing

    The Walsh-Hadamard transform is used in a wide variety of scientific and engineering applications, including bent functions and cryptanalytic optimization techniques in cryptography. In linear cryptanalysis, a key question is to find a good linear approximation, which holds with probability $(1+d)/2$ where the bias $d$ is large in absolute value. Lu and Desmedt (2011) took a step toward answering this question in a more generalized setting and initiated work on the generalized bias problem with linearly dependent inputs. In this paper, we give fully extended results and deep insights into the assumptions behind the problem. We take an information-theoretic approach to show that our bias problem assumes the setting of maximum input entropy subject to the input constraint. By means of the Walsh transform, the bias can be expressed in a simple form that incorporates the Piling-up lemma as a special case. Secondly, as an application, we answer a long-standing open problem in correlation attacks on combiners with memory: we give a closed-form exact solution, for the first time, for the correlation involving the multiple polynomial of any weight. We also give a Walsh analysis for numerical approximation. An interesting bias phenomenon is uncovered: the correlation behaves differently for even and odd weights of the polynomial. Thirdly, we introduce the notion of a weakly biased distribution and study bias approximation for a more general case by Walsh analysis. We show that the Piling-up lemma remains valid for weakly biased distributions. Our work shows that Walsh analysis is useful and effective for a broad class of cryptanalysis problems.
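    The Piling-up lemma that the abstract generalizes states that if independent bits each equal $0$ with probability $(1+d_i)/2$, then their XOR equals $0$ with probability $(1 + \prod_i d_i)/2$, i.e. the biases multiply. A short exhaustive-enumeration check of this special case:

    ```python
    from itertools import product

    def xor_bias(biases):
        # Exact bias of the XOR of independent bits, where bit i equals 0
        # with probability (1 + biases[i]) / 2.  Computed by enumerating
        # all outcomes, so it serves as a ground-truth check.
        total = 0.0
        for bits in product([0, 1], repeat=len(biases)):
            p = 1.0
            for b, d in zip(bits, biases):
                p *= (1 + d) / 2 if b == 0 else (1 - d) / 2
            xor = 0
            for b in bits:
                xor ^= b
            total += p if xor == 0 else -p
        return total  # Pr[xor = 0] - Pr[xor = 1] = product of the d_i
    ```

    The paper's contribution is precisely the regime this check does not cover: linearly dependent inputs and weakly biased distributions, where independence fails and the simple product formula must be replaced by a Walsh-transform expression.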

    Computing a k-sparse n-length Discrete Fourier Transform using at most 4k samples and O(k log k) complexity

    Given an $n$-length input signal $\mathbf{x}$, it is well known that its Discrete Fourier Transform (DFT), $\mathbf{X}$, can be computed in $O(n \log n)$ complexity using a Fast Fourier Transform (FFT). If the spectrum $\mathbf{X}$ is exactly $k$-sparse (where $k \ll n$), can we do better? We show that asymptotically in $k$ and $n$, when $k$ is sub-linear in $n$ (precisely, $k \propto n^{\delta}$ where $0 < \delta < 1$), and the support of the non-zero DFT coefficients is uniformly random, we can exploit this sparsity in two fundamental ways: (i) sample complexity: we need only $M = rk$ deterministically chosen samples of the input signal $\mathbf{x}$ (where $r < 4$ when $0 < \delta < 0.99$); and (ii) computational complexity: we can reliably compute the DFT $\mathbf{X}$ using $O(k \log k)$ operations, where the constants in the big-O are small and are related to the constants involved in computing a small number of DFTs of length approximately equal to the sparsity parameter $k$. Our algorithm succeeds with high probability, with the probability of failure vanishing to zero asymptotically in the number of samples acquired, $M$. Comment: 36 pages, 15 figures. To be presented at ISIT 2013, Istanbul, Turkey
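    One ingredient behind such small sample counts is a phase-based location estimator: comparing a sample with its one-step shift reveals a tone's frequency from just two measurements. A toy noiseless sketch for the $k = 1$ case (hypothetical simplification, not the full multi-stage algorithm):

    ```python
    import cmath

    def locate_tone(x):
        # For a single complex tone x[t] = a * exp(2*pi*i*f0*t/N), the ratio
        # x[1]/x[0] equals exp(2*pi*i*f0/N), so the phase of the ratio
        # reveals f0 using only two time-domain samples.
        N = len(x)
        ratio = x[1] / x[0]
        return round(cmath.phase(ratio) * N / (2 * cmath.pi)) % N
    ```

    The full algorithm applies this idea inside aliased bins after subsampling, so that each of the $k$ nonzero coefficients is located from a constant number of samples, giving the $M = rk$ total.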

    Nearly Optimal Sparse Fourier Transform

    We consider the problem of computing the $k$-sparse approximation to the discrete Fourier transform of an $n$-dimensional signal. We show: (1) an $O(k \log n)$-time randomized algorithm for the case where the input signal has at most $k$ non-zero Fourier coefficients, and (2) an $O(k \log n \log(n/k))$-time randomized algorithm for general input signals. Both algorithms achieve $o(n \log n)$ time, and thus improve over the Fast Fourier Transform, for any $k = o(n)$. They are the first known algorithms that satisfy this property. Also, if one assumes that the Fast Fourier Transform is optimal, the algorithm for the exactly $k$-sparse case is optimal for any $k = n^{\Omega(1)}$. We complement our algorithmic results by showing that any algorithm for computing the sparse Fourier transform of a general signal must use at least $\Omega(k \log(n/k)/\log\log n)$ signal samples, even if it is allowed to perform adaptive sampling. Comment: 28 pages, appearing at STOC 201

    Powerset Convolutional Neural Networks

    We present a novel class of convolutional neural networks (CNNs) for set functions, i.e., data indexed with the powerset of a finite set. The convolutions are derived as linear, shift-equivariant functions for various notions of shifts on set functions. The framework is fundamentally different from graph convolutions based on the Laplacian, as it provides not one but several basic shifts, one for each element in the ground set. Prototypical experiments with several set function classification tasks on synthetic datasets and on datasets derived from real-world hypergraphs demonstrate the potential of our new powerset CNNs. Comment: Advances in Neural Information Processing Systems 3
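    To make "shifts on set functions" concrete, here is one possible notion (a sketch chosen for illustration, not necessarily the paper's definition): the symmetric-difference shift, which translates a set function over $\mathbb{Z}_2^n$ and is diagonalized by the Walsh-Hadamard transform. Subsets are encoded as bitmasks, and the matching XOR-convolution commutes with these shifts.

    ```python
    def shift(f, x, n):
        # Symmetric-difference shift on a set function over a ground set of
        # size n: the value at subset A (a bitmask) becomes f(A xor {x}).
        return [f[A ^ (1 << x)] for A in range(1 << n)]

    def convolve(f, g, n):
        # XOR-convolution: (f * g)[A] = sum over B of f[A xor B] * g[B].
        # It is shift-equivariant: shifting an input shifts the output.
        return [sum(f[A ^ B] * g[B] for B in range(1 << n))
                for A in range(1 << n)]
    ```

    The paper derives a different convolution for each choice of shift (one per ground-set element); the equivariance property checked below is what all of these notions share.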