
    Consistent Basis Pursuit for Signal and Matrix Estimates in Quantized Compressed Sensing

    This paper focuses on the estimation of low-complexity signals when they are observed through M uniformly quantized compressive observations. Among such signals, we consider 1-D sparse vectors, low-rank matrices, or compressible signals that are well approximated by one of these two models. In this context, we prove the estimation efficiency of a variant of Basis Pursuit Denoise, called Consistent Basis Pursuit (CoBP), which enforces consistency between the observations and the re-observed estimate while promoting its low-complexity nature. We show that the reconstruction error of CoBP decays like M^{-1/4} when all parameters but M are fixed. Our proof is connected to recent bounds on the proximity of vectors or matrices when (i) they belong to a set of small intrinsic "dimension", as measured by the Gaussian mean width, and (ii) they share the same quantized (dithered) random projections. By solving CoBP with a proximal algorithm, we provide extensive numerical observations that confirm the theoretical bound as M is increased, displaying even faster error decay than predicted. The same phenomenon is observed in the special, yet important, case of 1-bit CS. Comment: Keywords: quantized compressed sensing, quantization, consistency, error decay, low-rank, sparsity. 10 pages, 3 figures. Note about this version: title change, typo corrections, clarification of the context, addition of a comparison with BPD
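The consistency idea behind CoBP can be sketched numerically. The snippet below is an illustrative proximal-gradient loop, not the paper's exact solver: the penalty weight `lam`, the problem sizes, and the loop length are all assumptions; it promotes sparsity with the ell_1 prox while pushing the re-observed estimate back into the quantization cell |Phi x + xi - y| <= delta/2 of the dithered observations.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic problem: k-sparse x_true observed through m dithered, uniformly
# quantized random projections (bin width delta), stored as bin centres y.
n, m, k, delta = 64, 256, 4, 0.25
Phi = rng.standard_normal((m, n))
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.choice([-1.0, 1.0], k)
xi = rng.uniform(0, delta, m)                                   # random dither
y = delta * np.floor((Phi @ x_true + xi) / delta) + delta / 2   # quantized observations

step = 1.0 / np.linalg.norm(Phi, 2) ** 2   # 1/L for the smooth consistency term
lam = 0.05                                 # hypothetical sparsity weight
x = np.zeros(n)
for _ in range(800):
    r = Phi @ x + xi - y                                          # residual w.r.t. bin centres
    r = np.sign(r) * np.maximum(np.abs(r) - delta / 2, 0.0)       # zero inside the consistency cell
    x -= step * (Phi.T @ r)                                       # gradient step toward consistency
    x = np.sign(x) * np.maximum(np.abs(x) - lam * step, 0.0)      # soft-threshold (ell_1 prox)

print(np.linalg.norm(x - x_true) / np.linalg.norm(x_true))        # small relative error
```

Note that x_true is always feasible here: its re-observation lands in the same bins as y by construction, which is exactly the consistency the deadzoned residual enforces.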

    Fixed-point Factorized Networks

    In recent years, Deep Neural Network (DNN) based methods have achieved remarkable performance in a wide range of tasks and have been among the most powerful and widely used techniques in computer vision. However, DNN-based methods are both computationally intensive and resource-consuming, which hinders their application on embedded systems such as smartphones. To alleviate this problem, we introduce novel Fixed-point Factorized Networks (FFN) for pretrained models, reducing both the computational complexity and the storage requirements of networks. The resulting networks have weights of only -1, 0, and 1, which eliminates most of the resource-consuming multiply-accumulate operations (MACs). Extensive experiments on the large-scale ImageNet classification task show that the proposed FFN requires only one-thousandth of the multiply operations while achieving comparable accuracy.
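To see why ternary weights remove multiplies, consider the sketch below. FFN's actual method factorizes the full-precision weight matrix into fixed-point ternary factors; what follows is a much simpler threshold-based ternarization (the `sparsity` knob and the single scale `alpha` are illustrative choices, not the paper's), shown only to make the {-1, 0, +1} structure concrete.

```python
import numpy as np

rng = np.random.default_rng(0)

def ternarize(W, sparsity=0.5):
    """Illustrative threshold ternarization (a stand-in for FFN's
    fixed-point factorization): keep the sign of the largest-magnitude
    weights, zero the rest, and fit one full-precision scale alpha."""
    thr = np.quantile(np.abs(W), sparsity)               # hypothetical sparsity knob
    T = np.sign(W) * (np.abs(W) > thr)                   # entries in {-1, 0, +1}
    alpha = np.abs(W[T != 0]).mean() if T.any() else 0.0 # scale minimizing avg error
    return alpha, T

W = rng.standard_normal((256, 128)) * 0.1   # stand-in for a pretrained layer
alpha, T = ternarize(W)
# Products with T reduce to additions/subtractions of activations; only the
# single scale alpha still needs a real multiply per output.
print(np.linalg.norm(W - alpha * T) / np.linalg.norm(W))  # moderate relative error
```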

    Time for dithering: fast and quantized random embeddings via the restricted isometry property

    Recently, many works have focused on the characterization of non-linear dimensionality reduction methods obtained by quantizing linear embeddings, e.g., to reach fast processing time, efficient data compression procedures, or novel geometry-preserving embeddings, or to estimate the information/bits stored in this reduced data representation. In this work, we prove that many linear maps known to respect the restricted isometry property (RIP) can induce a quantized random embedding with controllable multiplicative and additive distortions with respect to the pairwise distances of the data points being considered. In other words, linear matrices having fast matrix-vector multiplication algorithms (e.g., based on partial Fourier ensembles or on the adjacency matrix of unbalanced expanders) can be readily used in the definition of fast quantized embeddings with small distortions. This implication is made possible by applying, right after the linear map, an additive random "dither" that stabilizes the impact of the uniform scalar quantization operator applied afterwards. For different categories of RIP matrices, i.e., for different linear embeddings of a metric space (K ⊂ R^n, ℓ_q) in (R^m, ℓ_p) with p, q ≥ 1, we derive upper bounds on the additive distortion induced by quantization, showing that it decays either when the embedding dimension m increases or when the distance of a pair of embedded vectors in K decreases. Finally, we develop a novel "bi-dithered" quantization scheme, which allows for a reduced distortion that decreases when the embedding dimension grows, independently of the considered pair of vectors. Comment: Keywords: random projections, non-linear embeddings, quantization, dither, restricted isometry property, dimensionality reduction, compressive sensing, low-complexity signal models, fast and structured sensing matrices, quantized rank-one projections. (31 pages)
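A minimal numerical sketch of the dithered quantized embedding in the (ℓ_2, ℓ_2) case, with a plain Gaussian map standing in for the fast structured matrices of the abstract (the notation A(u) = Q(Phi u + xi), the dimensions, and the bin width are my assumptions, not the paper's):

```python
import numpy as np

rng = np.random.default_rng(1)

n, m, delta = 128, 2048, 1.0
Phi = rng.standard_normal((m, n))       # dense Gaussian map, RIP w.h.p.
xi = rng.uniform(0, delta, m)           # dither drawn uniformly over one bin

def embed(u):
    """Dithered uniform scalar quantization of a random linear map."""
    return delta * np.floor((Phi @ u + xi) / delta)

x, y = rng.standard_normal(n), rng.standard_normal(n)
d_true = np.linalg.norm(x - y)
d_emb = np.linalg.norm(embed(x) - embed(y)) / np.sqrt(m)  # undo the map's gain
print(d_true, d_emb)  # close for large m: small multiplicative/additive distortion
```

The shared dither cancels in the pairwise difference, leaving a per-coordinate quantization error of order delta whose norm, after the 1/sqrt(m) rescaling, shrinks as m grows, which is the additive-distortion decay the abstract describes.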