A Generalized LDPC Framework for Robust and Sublinear Compressive Sensing
Compressive sensing aims to recover a high-dimensional sparse signal from a
relatively small number of measurements. In this paper, a novel design of the
measurement matrix is proposed. The design is inspired by the construction of
generalized low-density parity-check codes, where the capacity-achieving
point-to-point codes serve as subcodes to robustly estimate the signal support.
In the case that each entry of the n-dimensional k-sparse signal lies in a
known discrete alphabet, the proposed scheme requires a sublinear number of
measurements and arithmetic operations. In the case of an arbitrary, possibly
continuous alphabet, an error propagation graph is proposed to characterize the
residual estimation error. With a sublinear number of measurements and sublinear
computational complexity, the reconstruction error can be made arbitrarily small
with high probability.
Comment: accepted to ICASSP 201
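The abstract above concerns recovering a k-sparse signal from compressive measurements. As a hedged illustration of the generic setup only (a random Gaussian sensing matrix with orthogonal matching pursuit, a standard baseline recovery routine, not the paper's GLDPC-based construction), one might sketch:

```python
import numpy as np

def omp(A, y, k):
    """Orthogonal matching pursuit: greedily select k columns of A to explain y."""
    residual, support = y.astype(float).copy(), []
    for _ in range(k):
        support.append(int(np.argmax(np.abs(A.T @ residual))))
        x_s, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ x_s
    x_hat = np.zeros(A.shape[1])
    x_hat[support] = x_s
    return x_hat

rng = np.random.default_rng(0)
n, m, k = 256, 64, 5                          # ambient dim, measurements, sparsity
A = rng.standard_normal((m, n)) / np.sqrt(m)  # generic random sensing matrix
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
x_hat = omp(A, A @ x_true, k)                 # noiseless recovery usually succeeds here
```

With a known discrete alphabet, the least-squares step could be replaced by rounding estimates to the alphabet, which is roughly the extra structure the paper exploits for its support estimation.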
Sparse OFDM: A Compressive Sensing Approach to Asynchronous Neighbor Discovery
A novel low-complexity wireless neighbor discovery scheme, referred to as
sparse orthogonal frequency division multiplexing (sparse OFDM), is proposed.
One area of application is the "Internet of Things" (IoT). The number of
devices is very large while every device accesses the network with a small
probability, so the number of active devices in a frame is much smaller than
the total local device population. Sparse OFDM is a one-shot transmission
scheme with low complexity, which exploits both the parallel channel access
offered by OFDM and the bursty nature of transmissions. When the transmission
delay of each device is an integer number of symbol intervals, analysis and
simulation show that sparse OFDM enables successful asynchronous neighbor
discovery using a much smaller code length than random access schemes.
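As a toy, hedged illustration of why a sparse active set is easy to find in the frequency domain (synchronous transmissions, one subcarrier per device, and simple energy thresholding are assumptions of this sketch; the actual sparse OFDM signature design and asynchrony handling are more involved):

```python
import numpy as np

rng = np.random.default_rng(1)
n_devices = 1024      # total local device population, one subcarrier per device
p_active = 0.005      # each device transmits in a given frame with small probability

active = rng.random(n_devices) < p_active        # bursty access: few devices are on
symbols = np.where(active, 1.0 + 0j, 0.0)        # unit symbol on a device's own subcarrier
tx = np.fft.ifft(symbols) * n_devices            # OFDM modulation (inverse DFT)
rx = tx + 0.05 * rng.standard_normal(n_devices)  # additive channel noise

# The receiver's DFT concentrates each device's energy in one bin, so simple
# per-subcarrier thresholding recovers the (sparse) active set.
detected = np.abs(np.fft.fft(rx) / n_devices) > 0.5
print(f"{int(active.sum())} active devices, "
      f"detection correct: {bool(np.array_equal(detected, active))}")
```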
State of the Art and Prospects of Structured Sensing Matrices in Compressed Sensing
Compressed sensing (CS) makes it possible to acquire compressed measurements
directly and to recover sparse or compressible signals faithfully even when the
sampling rate is much lower than the Nyquist rate. However, purely random
sensing matrices usually require huge memory for storage and incur high
computational cost during signal reconstruction. Many structured sensing
matrices have recently been proposed to simplify the sensing scheme and its
hardware implementation in practice. Based on the restricted isometry property
and coherence, this paper reviews a number of existing structured sensing
matrices that combine high recovery performance with practical advantages such
as simple construction, fast computation, and easy hardware implementation. The
number of measurements and the universality of the different structured
matrices are compared.
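One structured family such surveys commonly cover is the partial random circulant matrix: a single generating row determines the whole matrix, and matrix-vector products reduce to FFTs. A minimal sketch of the construction and its fast apply (illustrative only, not any specific matrix from the paper):

```python
import numpy as np

rng = np.random.default_rng(2)
n, m = 64, 16

# One +-1 generating row defines the entire matrix: O(n) storage instead of
# O(mn), and matrix-vector products cost O(n log n) via the FFT.
c = rng.choice([-1.0, 1.0], size=n) / np.sqrt(m)
rows = rng.choice(n, size=m, replace=False)
Phi = np.stack([np.roll(c, int(r)) for r in rows])  # explicit partial circulant

def fast_apply(x):
    """Phi @ x without forming Phi: circular correlation via FFT, then subsample."""
    corr = np.fft.ifft(np.fft.fft(x) * np.conj(np.fft.fft(c))).real
    return corr[rows]

x = rng.standard_normal(n)
print(np.allclose(Phi @ x, fast_apply(x)))  # both paths agree
```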
Deterministic Sparse Fourier Transform with an ell_infty Guarantee
In this paper we revisit the deterministic version of the Sparse Fourier
Transform problem, which asks to read only a few entries of an input vector and
to design a recovery algorithm whose output approximates its Discrete Fourier
Transform (DFT). The randomized case is well understood, while the main work in
the deterministic case is that of Merhi et al. (J Fourier Anal Appl 2018),
which obtains an explicit sample bound and a comparable runtime under a weaker
error guarantee. We focus on the stronger ell_infty guarantee and on the
closely related problem of incoherent matrices. Our contributions are as
follows.
1. We give deterministic collections of samples, together with efficient
recovery algorithms, for sparse recovery under the ell_infty guarantee.
2. We give new deterministic constructions of incoherent matrices that are
row-sampled submatrices of the DFT matrix, via a derandomization of Bernstein's
inequality and bounds on exponential sums considered in analytic number theory.
Our first construction matches a previous randomized construction of Nelson,
Nguyen and Woodruff (RANDOM'12), where there was no constraint on the form of
the incoherent matrix.
Our algorithms are nearly sample-optimal, since a matching lower bound is
known even when the sensing matrix can be arbitrarily designed. A similar lower
bound is known for incoherent matrices.
Comment: ICALP 2020; presentation improved according to reviewers' comments
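Item 2 concerns incoherent matrices obtained by row-sampling the DFT matrix. The mutual coherence of such a submatrix is easy to measure numerically; the sketch below uses random rows purely for illustration (the paper's point is that the row set can be chosen deterministically):

```python
import numpy as np

rng = np.random.default_rng(3)
n, m = 512, 128
F = np.fft.fft(np.eye(n)) / np.sqrt(n)                   # unitary DFT matrix
A = F[rng.choice(n, m, replace=False)] * np.sqrt(n / m)  # m sampled rows, renormalized

G = np.abs(np.conj(A.T) @ A)                  # Gram matrix of the unit-norm columns
coherence = np.max(G - np.diag(np.diag(G)))   # largest off-diagonal entry
print(f"mutual coherence of the sampled submatrix: {coherence:.3f}")
```

Because every entry of the DFT matrix has the same magnitude, the renormalized columns have exactly unit norm, so the largest off-diagonal Gram entry is precisely the mutual coherence.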
SPRIGHT: A Fast and Robust Framework for Sparse Walsh-Hadamard Transform
We consider the problem of computing the Walsh-Hadamard Transform (WHT) of an
N-length input vector in the presence of noise, where the N-point Walsh
spectrum is K-sparse, with K scaling sub-linearly in the input dimension N
(i.e., K grows like N^delta for some 0 < delta < 1). Over the past decade,
there has been a resurgence in research related to the computation of the
Discrete Fourier Transform (DFT) of length-N input signals that have a K-sparse
Fourier spectrum. In particular, through a sparse-graph code design, our
earlier work on the Fast Fourier Aliasing-based Sparse Transform (FFAST)
algorithm computes the K-sparse DFT in O(K log K) time from O(K) noiseless
samples.
Inspired by this coding-theoretic design framework, Scheibler et al. proposed
the Sparse Fast Hadamard Transform (SparseFHT) algorithm, which elegantly
computes the K-sparse WHT in the absence of noise using a sub-linear number of
samples in sub-linear time. However, the SparseFHT algorithm explicitly
exploits the noiseless nature of the problem, and is not equipped to deal with
scenarios where the observations are corrupted by noise. Therefore, a question
of critical interest is whether this coding-theoretic framework can be made
robust to noise. Further, if the answer is yes, what is the extra price that
needs to be paid for being robust to noise? In this paper, we show, quite
interestingly, that there is {\it no extra price} that needs to be paid for
being robust to noise other than a constant factor. In other words, we can
maintain the same order of sample complexity and computational complexity as
in the noiseless case, using our SParse Robust Iterative Graph-based Hadamard
Transform (SPRIGHT) algorithm.
Comment: Part of our results was reported in ISIT 2014, titled "The SPRIGHT
algorithm for robust sparse Hadamard Transforms".
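For reference, the transform SPRIGHT targets can be computed exactly in O(N log N) by the classic fast Walsh-Hadamard butterfly; this is the dense baseline that sparsity-exploiting algorithms improve on. A minimal sketch (standard textbook algorithm, not the SPRIGHT pipeline):

```python
import numpy as np

def fwht(x):
    """Unnormalized fast Walsh-Hadamard transform; len(x) must be a power of two."""
    x = np.asarray(x, dtype=float).copy()
    h = 1
    while h < len(x):
        for i in range(0, len(x), 2 * h):
            a, b = x[i:i + h].copy(), x[i + h:i + 2 * h].copy()
            x[i:i + h], x[i + h:i + 2 * h] = a + b, a - b
        h *= 2
    return x

# A K-sparse Walsh spectrum: dense in the time domain, sparse after the transform.
N, K = 256, 4
rng = np.random.default_rng(4)
spectrum = np.zeros(N)
spectrum[rng.choice(N, K, replace=False)] = rng.standard_normal(K)
signal = fwht(spectrum) / N      # the WHT is self-inverse up to a factor of N
print(np.count_nonzero(np.round(fwht(signal), 10)))  # -> 4 nonzero coefficients
```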
Dimension-independent Sparse Fourier Transform
The Discrete Fourier Transform (DFT) is a fundamental computational
primitive, and the fastest known algorithm for computing the DFT is the FFT
(Fast Fourier Transform) algorithm. One remarkable feature of FFT is the fact
that its runtime depends only on the size N of the input vector, but not on
the dimensionality of the input domain: FFT runs in O(N log N) time
irrespective of whether the DFT in question is on Z_N or on Z_n^d for some
d >= 1, where N = n^d.
The state of the art for Sparse FFT, i.e. the problem of computing the DFT of
a signal that has at most k nonzeros in the Fourier domain, is very different:
all current techniques for sublinear-time computation of the Sparse FFT incur
an exponential dependence on the dimension d in the runtime. In this paper we
give the first algorithm that computes the DFT of a k-sparse signal in time
polynomial in k and log N in any dimension d, avoiding the curse of
dimensionality inherent in all previously known techniques. Our main tool is a
new class of filters that we refer to as adaptive aliasing filters: these
filters allow isolating the frequencies of a k-Fourier-sparse signal using few
time-domain samples and low runtime per frequency, in any dimension d.
We also investigate natural average-case models of the input signal: (1)
worst-case support in the Fourier domain with randomized coefficients, and (2)
random locations in the Fourier domain with worst-case coefficients. Our
techniques lead to even faster algorithms for both models.
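The aliasing filters above rest on a folding identity that holds verbatim in any dimension: subsampling every axis of the time-domain signal folds the spectrum modulo the number of bins per axis. A 2D sketch of just that folding fact (not the adaptive filter construction itself):

```python
import numpy as np

n, B = 16, 4          # per-axis length and per-axis bin count (B divides n)
s = n // B            # subsampling factor along every axis
rng = np.random.default_rng(7)

X = np.zeros((n, n), dtype=complex)
rows, cols = np.array([1, 5, 9]), np.array([2, 2, 11])
X[rows, cols] = rng.standard_normal(3) + 1j * rng.standard_normal(3)  # sparse 2D spectrum
x = np.fft.ifft2(X)

short = np.fft.fft2(x[::s, ::s])                        # subsample along both axes
folded = X.reshape(s, B, s, B).sum(axis=(0, 2)) / s**2  # spectrum folded mod B per axis
print(np.allclose(short, folded))
```

The same reshape-and-sum identity extends to any dimension d, which is why aliasing-based filtering does not inherently pay an exponential price in d.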
Walsh-Hadamard Transform and Cryptographic Applications in Bias Computing
Walsh-Hadamard transform is used in a wide variety of scientific and engineering applications, including bent functions and cryptanalytic optimization techniques in cryptography. In linear cryptanalysis, a key question is to find a good linear approximation, one which holds with probability p such that the bias |p - 1/2| is large. Lu and Desmedt (2011) take a step toward answering this key question in a more generalized setting and initiate the work on the generalized bias problem with linearly dependent inputs. In this paper, we give fully extended results. Deep insights into the assumptions behind the problem are given. We take an information-theoretic approach to show that our bias problem assumes the setting of maximum input entropy subject to the input constraint. By means of the Walsh transform, the bias can be expressed in a simple form; it incorporates the Piling-up lemma as a special case. Second, as an application, we answer a long-standing open problem in correlation attacks on combiners with memory: we give a closed-form exact solution for the correlation involving the multiple polynomial of any weight \emph{for the first time}. We also give a Walsh analysis for numerical approximation. An interesting bias phenomenon is uncovered: for even and odd weights of the polynomial, the correlation behaves differently. Third, we introduce the notion of weakly biased distributions and study bias approximation for a more general case by Walsh analysis. We show that for weakly biased distributions, the Piling-up lemma is still valid. Our work shows that Walsh analysis is useful and effective for a broad class of cryptanalysis problems.
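The Piling-up lemma recovered above as a special case states that for independent bits, where bit i equals 0 with probability 1/2 + eps_i, the XOR of the n bits equals 0 with probability 1/2 + 2^(n-1) * prod(eps_i). A quick Monte Carlo check of this independent-input case (the paper's generalized, linearly-dependent setting is not modeled here):

```python
import numpy as np

rng = np.random.default_rng(5)
biases = [0.25, 0.15, 0.10]   # eps_i: bit i equals 0 with probability 1/2 + eps_i
n_samples = 2_000_000

# bit = 1 with probability 1/2 - eps_i, so Pr[bit = 0] = 1/2 + eps_i as required
bits = np.stack([(rng.random(n_samples) >= 0.5 + e).astype(int) for e in biases])
xor = np.bitwise_xor.reduce(bits, axis=0)

empirical = np.mean(xor == 0) - 0.5
predicted = 2 ** (len(biases) - 1) * float(np.prod(biases))  # Piling-up lemma
print(f"empirical bias {empirical:.4f} vs predicted {predicted:.4f}")
```

With these biases the lemma predicts 4 * 0.25 * 0.15 * 0.10 = 0.015, and two million samples put the Monte Carlo estimate well within sampling error of that value.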
Computing a k-sparse n-length Discrete Fourier Transform using at most 4k samples and O(k log k) complexity
Given an n-length input signal \mbf{x}, it is well known that its
Discrete Fourier Transform (DFT), \mbf{X}, can be computed with O(n log n)
complexity using a Fast Fourier Transform (FFT). If the spectrum \mbf{X} is
exactly k-sparse (where k << n), can we do better? We show that
asymptotically in k and n, when k is sub-linear in n (precisely, k grows like
n^delta for some 0 < delta < 1), and the support of the non-zero DFT
coefficients is uniformly random, we can exploit this sparsity in two
fundamental ways: (i) {\bf sample complexity}: we need only O(k)
deterministically chosen samples of the input signal \mbf{x} (at most 4k
samples); and (ii) {\bf computational complexity}: we can reliably compute the
DFT \mbf{X} using O(k log k) operations, where the constants in the big Oh are
small and are related to the constants involved in computing a small number of
DFTs of length approximately equal to the sparsity parameter k. Our algorithm
succeeds with high probability, with the probability of failure vanishing to
zero asymptotically in the number of samples acquired.
Comment: 36 pages, 15 figures. To be presented at ISIT-2013, Istanbul, Turkey.
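The aliasing at the heart of FFAST-style designs: the short DFT of a subsampled signal equals the full spectrum folded modulo the short length B, so each of the k nonzero coefficients lands in one of B bins (FFAST uses several subsampling stages with co-prime factors so that collisions, like the one in this sketch, can be resolved). A minimal check of the folding identity:

```python
import numpy as np

n, B = 24, 6                  # signal length and short-DFT size, with B dividing n
rng = np.random.default_rng(6)

X = np.zeros(n, dtype=complex)
X[[2, 7, 13]] = rng.standard_normal(3) + 1j * rng.standard_normal(3)  # 3-sparse DFT
x = np.fft.ifft(X)

# Subsampling by n//B in time folds the spectrum modulo B (up to a 1/(n//B) factor).
short = np.fft.fft(x[:: n // B])
folded = X.reshape(n // B, B).sum(axis=0) / (n // B)
print(np.allclose(short, folded))   # frequencies 7 and 13 collide in bin 1 here
```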
Nearly Optimal Sparse Fourier Transform
We consider the problem of computing the k-sparse approximation to the
discrete Fourier transform of an n-dimensional signal. We show:
* An O(k log n)-time randomized algorithm for the case where the input signal
has at most k non-zero Fourier coefficients, and
* An O(k log n log(n/k))-time randomized algorithm for general input signals.
Both algorithms achieve o(n log n) time, and thus improve over the Fast
Fourier Transform, for any k = o(n). They are the first known algorithms that
satisfy this property. Also, if one assumes that the Fast Fourier Transform is
optimal, the algorithm for the exactly k-sparse case is optimal for any k =
n^{\Omega(1)}.
We complement our algorithmic results by showing that any algorithm for
computing the sparse Fourier transform of a general signal must use at least
\Omega(k log(n/k)/ log log n) signal samples, even if it is allowed to perform
adaptive sampling.
Comment: 28 pages, appearing at STOC 2012
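A standard building block inside such sparse Fourier algorithms, once hashing has isolated a single frequency, is exact 1-sparse recovery from two samples: the phase of x[1]/x[0] encodes the frequency. A sketch of just that textbook step (not the full pipeline of the paper):

```python
import numpy as np

n, f0, amp = 1024, 377, 2.5
t = np.arange(n)
x = amp * np.exp(2j * np.pi * f0 * t / n)  # time signal whose DFT has one nonzero, at f0

ratio = x[1] / x[0]                        # equals exp(2*pi*1j*f0/n) exactly
f_hat = int(round((np.angle(ratio) / (2 * np.pi)) % 1.0 * n)) % n
print(f_hat)                               # -> 377, recovered from just two samples
```

In the noisy setting this phase estimate degrades, which is why robust variants read the phase at multiple sample spacings.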
Powerset Convolutional Neural Networks
We present a novel class of convolutional neural networks (CNNs) for set
functions, i.e., data indexed with the powerset of a finite set. The
convolutions are derived as linear, shift-equivariant functions for various
notions of shifts on set functions. The framework is fundamentally different
from graph convolutions based on the Laplacian, as it provides not one but
several basic shifts, one for each element in the ground set. Prototypical
experiments with several set function classification tasks on synthetic
datasets and on datasets derived from real-world hypergraphs demonstrate the
potential of our new powerset CNNs.
Comment: Advances in Neural Information Processing Systems 32
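One concrete convolution on set functions, the subset convolution with subsets of the ground set encoded as bitmasks, gives a feel for the objects involved; the paper derives a whole family of shift-equivariant operations, of which this is only one illustrative instance:

```python
import numpy as np

def subset_convolve(f, g, n):
    """(f * g)(A) = sum over B subset of A of f(B) * g(A \\ B), masks over n elements."""
    out = np.zeros(1 << n)
    for A in range(1 << n):
        B = A
        while True:                    # iterate over every submask B of A
            out[A] += f[B] * g[A ^ B]  # A ^ B is the set difference A \ B here
            if B == 0:
                break
            B = (B - 1) & A
    return out

n = 3                                        # ground set {0, 1, 2}, 8 subsets
delta = lambda mask: np.eye(1 << n)[mask]    # indicator set function of one subset
conv = subset_convolve(delta(0b001), delta(0b010), n)
print(int(np.argmax(conv)))                  # -> 3: convolving {0} with {1} lands on {0,1}
```

The indicator of the empty set acts as the identity for this operation, mirroring the role of the unit impulse in ordinary convolution.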