
    Deterministic Construction of Binary, Bipolar and Ternary Compressed Sensing Matrices

    In this paper we establish the connection between Orthogonal Optical Codes (OOC) and binary compressed sensing matrices. We also introduce deterministic bipolar $m\times n$ RIP-fulfilling $\pm 1$ matrices of order $k$ such that $m\leq\mathcal{O}\big(k (\log_2 n)^{\frac{\log_2 k}{\ln \log_2 k}}\big)$. The columns of these matrices are binary BCH code vectors in which the zeros are replaced by $-1$. Since the RIP is established by means of coherence, simple greedy algorithms such as Matching Pursuit are able to recover the sparse solution from the noiseless samples. Owing to the cyclic property of BCH codes, we show that the FFT algorithm can be employed in the reconstruction methods to considerably reduce the computational complexity. In addition, we combine the binary and bipolar matrices to form ternary sensing matrices (with $\{0,1,-1\}$ elements) that satisfy the RIP condition. Comment: Accepted for publication in IEEE Transactions on Information Theory
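The coherence bound invoked above can be made concrete. The sketch below builds a toy bipolar matrix by mapping binary bits to $\pm 1$ (random bits stand in for the BCH codewords, which are not implemented here) and evaluates the standard coherence-based recovery guarantee $k < (1 + 1/\mu)/2$ for greedy methods such as Matching Pursuit:

```python
import numpy as np

def mutual_coherence(A):
    """Largest absolute inner product between distinct normalized columns."""
    An = A / np.linalg.norm(A, axis=0)   # column-normalize
    G = An.T @ An                        # Gram matrix
    np.fill_diagonal(G, 0.0)            # ignore self-correlations
    return np.abs(G).max()

# Toy bipolar matrix: map {0,1} bits to {-1,+1}, as the paper does for
# binary BCH codewords (random bits here are a stand-in, not a BCH code).
rng = np.random.default_rng(0)
bits = rng.integers(0, 2, size=(16, 64))   # 16x64 binary "codewords"
B = 2.0 * bits - 1.0                       # replace zeros by -1
mu = mutual_coherence(B)

# Coherence-based guarantee: greedy pursuit recovers any k-sparse signal
# exactly from noiseless samples whenever k < (1 + 1/mu) / 2.
k_max = int((1 + 1 / mu) / 2)
```

The real construction gains over this sketch because BCH codewords have tightly controlled pairwise correlations, giving a much smaller coherence than random bits of the same size.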

    Sparse Recovery Analysis of Preconditioned Frames via Convex Optimization

    Orthogonal Matching Pursuit and Basis Pursuit are popular reconstruction algorithms for recovery of sparse signals. The exact recovery property of both methods has a relation with the coherence of the underlying redundant dictionary, i.e. a frame. A frame with low coherence provides better guarantees for exact recovery. An equivalent formulation of the associated linear system is obtained via premultiplication by a non-singular matrix. In view of bounds that guarantee sparse recovery, it is very useful to generate the preconditioner in such a way that the preconditioned frame has lower coherence than the original. In this paper, we discuss the impact of preconditioning on sparse recovery. Further, we formulate a convex optimization problem for designing the preconditioner that yields a frame with improved coherence. In addition to reducing coherence, we focus on designing well-conditioned frames and numerically study the relationship between the condition number of the preconditioner and the coherence of the new frame. Alongside theoretical justifications, we demonstrate through simulations the efficacy of the preconditioner in reducing coherence as well as recovering sparse signals. Comment: 9 pages, 5 figures
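A minimal illustration of premultiplying a frame by a preconditioner: the classical tight-frame preconditioner $P = (FF^\top)^{-1/2}$ is used below as a simple stand-in for the paper's convex program, and the coherence is compared before and after:

```python
import numpy as np

def coherence(F):
    """Mutual coherence of the columns (atoms) of a frame F."""
    Fn = F / np.linalg.norm(F, axis=0)
    G = np.abs(Fn.T @ Fn)
    np.fill_diagonal(G, 0.0)
    return G.max()

rng = np.random.default_rng(1)
F = rng.standard_normal((8, 20))     # redundant frame: 20 atoms in R^8

# One classical choice (not the paper's optimized preconditioner):
# P = (F F^T)^{-1/2} turns F into a tight frame (rows of PF orthonormal).
S = F @ F.T
w, V = np.linalg.eigh(S)
P = V @ np.diag(w ** -0.5) @ V.T

mu_before = coherence(F)
mu_after = coherence(P @ F)
```

Since $PF$ satisfies $(PF)(PF)^\top = I$, the preconditioned system has condition number 1; the paper's contribution is to optimize $P$ for coherence directly rather than relying on this fixed choice.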

    Achieving minimum-error discrimination of an arbitrary set of laser-light pulses

    Laser light is widely used for communication and sensing applications, so the optimal discrimination of coherent states--the quantum states of light emitted by a laser--has immense practical importance. However, quantum mechanics imposes a fundamental limit on how well different coherent states can be distinguished, even with perfect detectors, and limits such discrimination to a finite minimum probability of error. While conventional optical receivers lead to error rates well above this fundamental limit, Dolinar found an explicit receiver design involving optical feedback and photon counting that can achieve the minimum probability of error for discriminating any two given coherent states. The generalization of this construction to larger sets of coherent states has proven to be challenging, evidencing that there may be a limitation inherent to a linear-optics-based adaptive measurement strategy. In this Letter, we show how to achieve optimal discrimination of any set of coherent states using a resource-efficient quantum computer. Our construction leverages a recent result on discriminating multi-copy quantum hypotheses (arXiv:1201.6625) and properties of coherent states. Furthermore, our construction is reusable, composable, and applicable to designing quantum-limited processing of coherent-state signals to optimize any metric of choice. As illustrative examples, we analyze the performance of discriminating a ternary alphabet, and show how the quantum circuit of a receiver designed to discriminate a binary alphabet can be reused in discriminating multimode hypotheses. Finally, we show our result can be used to achieve the quantum limit on the rate of classical information transmission over a lossy optical channel, which is known to exceed the Shannon rate of all conventional optical receivers. Comment: 9 pages, 2 figures; v2: minor corrections

    Optimal Nested Test Plan for Combinatorial Quantitative Group Testing

    We consider the quantitative group testing problem, where the objective is to identify defective items in a given population based on the results of tests performed on subsets of the population. Under the quantitative group testing model, the result of each test reveals the number of defective items in the tested group. The minimum number of tests achievable by nested test plans was established by Aigner and Schughart in 1985 within a minimax framework. The optimal nested test plan offering this performance, however, was not obtained. In this work, we establish the optimal nested test plan in closed form. This optimal nested test plan is also order optimal among all test plans as the population size approaches infinity. Using heavy-hitter detection as a case study, we show via simulation that the group testing approach achieves orders-of-magnitude improvements in detection accuracy and counter consumption over two prevailing sampling-based approaches. Other applications include anomaly detection and wideband spectrum sensing in cognitive radio systems.
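The counting-test model is easy to simulate. The toy nested plan below tests nested prefixes of the population and binary-searches each jump of the prefix count; it is illustrative only, not the optimal closed-form plan derived in the paper:

```python
def count_defectives(group, defective_set):
    """Quantitative group test: returns HOW MANY defectives are in the group."""
    return len(set(group) & defective_set)

def find_defectives(n, defective_set):
    """Locate all defectives using counting tests on nested prefixes [0, m).
    Each defective is found by binary-searching the smallest prefix whose
    count reaches the next value (a simple nested, adaptive plan)."""
    found = []
    total = count_defectives(range(n), defective_set)
    lo = 0
    while len(found) < total:
        target = len(found) + 1          # next prefix-count value to reach
        left, hi = lo, n
        while left < hi:                 # smallest m with count([0, m]) >= target
            mid = (left + hi) // 2
            if count_defectives(range(mid + 1), defective_set) >= target:
                hi = mid
            else:
                left = mid + 1
        found.append(left)
        lo = left + 1                    # defectives are found left to right
    return found

found = find_defectives(16, {3, 7, 12})
```

This plan uses roughly $k \log_2 n$ tests for $k$ defectives; the paper's closed-form optimal nested plan improves on such naive prefix searching in the minimax sense.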

    Universal polar coding and sparse recovery

    This paper investigates universal polar coding schemes. In particular, a notion of ordering (called a convolutional path) is introduced between probability distributions to determine when a polar compression (or communication) scheme designed for one distribution can also succeed for another. The original polar decoding algorithm is also generalized to an algorithm that learns information about the source distribution using the idea of checkers. These tools are used to construct a universal compression algorithm for binary sources, operating at the lowest achievable rate (entropy), with low complexity and with guaranteed small error probability. In the second part of the paper, the problem of sketching high-dimensional discrete signals which are sparse is approached via the polarization technique. It is shown that the number of measurements required for perfect recovery is competitive with the $O(k \log (n/k))$ bound (with optimal constant for binary signals), while affording a deterministic low-complexity measurement matrix.
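The polarization phenomenon these schemes rest on can be checked in a few lines: one step of the polar transform on a pair of i.i.d. Bernoulli sources splits the source entropy into a higher and a lower synthetic entropy while conserving the total (a textbook illustration, not the paper's universal scheme):

```python
import math

def h(p):
    """Binary entropy in bits."""
    return 0.0 if p in (0.0, 1.0) else -p * math.log2(p) - (1 - p) * math.log2(1 - p)

p = 0.11                      # Bernoulli(p) source, entropy H below
H = h(p)

# One polar transform step on (X1, X2) i.i.d. Bern(p):
#   U1 = X1 xor X2  ~ Bern(2p(1-p))   -> the "worse" synthetic source
#   U2 = X2, decoded given U1          -> the "better" one (chain rule)
H_minus = h(2 * p * (1 - p))  # H(U1)
H_plus = 2 * H - H_minus      # H(U2 | U1), by the chain rule
```

Iterating this step drives the synthetic entropies toward 0 or 1, which is what lets polar compression operate at the entropy rate.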

    Insense: Incoherent Sensor Selection for Sparse Signals

    Sensor selection refers to the problem of intelligently selecting a small subset of a collection of available sensors to reduce the sensing cost while preserving signal acquisition performance. The majority of sensor selection algorithms find the subset of sensors that best recovers an arbitrary signal from a number of linear measurements that is larger than the dimension of the signal. In this paper, we develop a new sensor selection algorithm for sparse (or near-sparse) signals that finds a subset of sensors that best recovers such signals from a number of measurements that is much smaller than the dimension of the signal. Existing sensor selection algorithms cannot be applied in such situations. Our proposed Incoherent Sensor Selection (Insense) algorithm minimizes a coherence-based cost function that is adapted from recent results in sparse recovery theory. Using six datasets, including two real-world datasets on microbial diagnostics and structural health monitoring, we demonstrate the superior performance of Insense for sparse-signal sensor selection.
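A hedged sketch of coherence-driven sensor selection in the spirit of Insense: each sensor is a row of the measurement matrix, and the goal is to pick rows whose submatrix has low column coherence. The greedy search below is an illustrative assumption, not the paper's optimization algorithm:

```python
import numpy as np

def coherence(A):
    """Mutual coherence of the columns of A."""
    An = A / np.linalg.norm(A, axis=0)
    G = np.abs(An.T @ An)
    np.fill_diagonal(G, 0.0)
    return G.max()

def greedy_sensor_selection(Phi, m):
    """Greedily pick m rows (sensors) of Phi so that the selected
    submatrix has low column coherence. A simple coherence-based
    heuristic standing in for Insense's cost minimization."""
    chosen, remaining = [], list(range(Phi.shape[0]))
    while len(chosen) < m:
        best = min(remaining, key=lambda r: coherence(Phi[chosen + [r], :]))
        chosen.append(best)
        remaining.remove(best)
    return sorted(chosen)

rng = np.random.default_rng(3)
Phi = rng.standard_normal((10, 15))   # 10 candidate sensors, 15-dim signal
rows = greedy_sensor_selection(Phi, 4)
```

Note that with fewer rows than signal dimensions, arbitrary-signal recovery is impossible; low coherence is precisely what makes sparse-signal recovery feasible in this regime.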

    Matrix Methods for the Efficient Acquisition and Encryption of Signals in Compressed Form

    The idea of matching the resources spent in the acquisition and encoding of natural signals to their intrinsic information content has driven nearly a decade of research under the name of compressed sensing. In this doctoral dissertation we develop some extensions and improvements upon this technique's foundations, by modifying the random sensing matrices onto which the signals of interest are projected to achieve different objectives. Firstly, we propose two methods for the adaptation of sensing matrix ensembles to the second-order moments of natural signals. These techniques leverage the maximisation of different proxies for the quantity of information acquired by compressed sensing, and are efficiently applied in the encoding of electrocardiographic tracks with minimum-complexity digital hardware. Secondly, we focus on the possibility of using compressed sensing as a method to provide a partial, yet cryptanalysis-resistant form of encryption; in this context, we show how a random matrix generation strategy with a controlled amount of perturbations can be used to distinguish between multiple user classes with different quality of access to the encrypted information content. Finally, we explore the application of compressed sensing in the design of a multispectral imager, by implementing an optical scheme that entails a coded aperture array and Fabry-Pérot spectral filters. The signal recoveries obtained by processing real-world measurements show promising results, which leave room for an improvement of the sensing matrix calibration problem in the devised imager.
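Adapting a sensing ensemble to second-order moments can be sketched as follows: drawing sensing rows from $\mathcal{N}(0, C)$, with $C$ the signal correlation matrix, concentrates measurement energy on the signal's dominant directions. This is one simple illustrative adaptation, not necessarily the dissertation's exact method:

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy second-order statistics: a smooth (low-pass) signal correlation matrix.
n = 32
C = np.array([[0.9 ** abs(i - j) for j in range(n)] for i in range(n)])

# Plain i.i.d. Gaussian sensing rows vs. rows drawn from N(0, C):
# the latter "rake" more energy from signals correlated like C.
m = 8
L = np.linalg.cholesky(C)
Phi_iid = rng.standard_normal((m, n))
Phi_adapted = rng.standard_normal((m, n)) @ L.T   # rows ~ N(0, C)

# Captured energy per unit sensing energy, for signals x ~ N(0, C):
# E[||Phi x||^2] = tr(Phi C Phi^T), normalized by tr(Phi Phi^T).
def energy(Phi):
    return np.trace(Phi @ C @ Phi.T) / np.trace(Phi @ Phi.T)
```

The adapted ensemble captures markedly more signal energy per measurement here, which is the kind of information-proxy gain the dissertation's adaptation methods optimize.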

    Learning to compress and search visual data in large-scale systems

    The problem of high-dimensional and large-scale representation of visual data is addressed from an unsupervised learning perspective. The emphasis is put on discrete representations, where the description length can be measured in bits and hence the model capacity can be controlled. The algorithmic infrastructure is developed based on the synthesis and analysis prior models, whose rate-distortion properties, as well as capacity vs. sample complexity trade-offs, are carefully optimized. These models are then extended to multiple layers, namely the RRQ and the ML-STC frameworks, where the latter is further evolved into a powerful deep neural network architecture with fast and sample-efficient training and discrete representations. For the developed algorithms, three important applications are presented. First, the problem of large-scale similarity search in retrieval systems is addressed, where a two-stage solution is proposed, leading to faster query times and shorter database storage. Second, the problem of learned image compression is targeted, where the proposed models can capture more redundancies from the training images than conventional compression codecs. Finally, the proposed algorithms are used to solve ill-posed inverse problems. In particular, the problems of image denoising and compressive sensing are addressed with promising results. Comment: PhD thesis dissertation