
    Distributed Representation of Geometrically Correlated Images with Compressed Linear Measurements

    This paper addresses the problem of distributed coding of images whose correlation is driven by the motion of objects or the positioning of the vision sensors. It concentrates on the case where images are encoded with compressed linear measurements. We propose a geometry-based correlation model to describe the common information in pairs of images. We assume that the constitutive components of natural images can be captured by visual features that undergo local transformations (e.g., translation) across images. We first identify prominent visual features by computing a sparse approximation of a reference image with a dictionary of geometric basis functions. We then pose a regularized optimization problem to estimate the corresponding features in correlated images given by quantized linear measurements. The estimated features have to comply with the compressed information and to represent a consistent transformation between images. The correlation model is given by the relative geometric transformations between corresponding features. We then propose an efficient joint decoding algorithm that estimates the compressed images such that they stay consistent with both the quantized measurements and the correlation model. Experimental results show that the proposed algorithm effectively estimates the correlation between images in multi-view datasets. In addition, it provides decoding performance that compares favorably to independent coding solutions as well as state-of-the-art distributed coding schemes based on disparity learning.
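    As a concrete illustration of the first stage of this pipeline, the sketch below runs a plain matching pursuit that sparsely approximates a vectorized reference image with a dictionary of unit-norm atoms. The dictionary D, the atom budget k, and the greedy solver are assumptions for the sketch; the paper's parametrized geometric basis functions and its exact sparse approximation scheme may differ.

```python
# Illustrative sketch only: matching pursuit over a dictionary D whose
# columns are unit-norm atoms. D and the atom budget k are assumptions;
# the paper's geometric dictionary and exact pursuit may differ.
import numpy as np

def matching_pursuit(image_vec, D, k=10):
    """Greedily select k atoms of D to approximate image_vec."""
    residual = image_vec.astype(float).copy()
    atoms, coeffs = [], []
    for _ in range(k):
        correlations = D.T @ residual          # inner product with every atom
        j = int(np.argmax(np.abs(correlations)))
        c = float(correlations[j])
        atoms.append(j)
        coeffs.append(c)
        residual -= c * D[:, j]                # peel off the selected feature
    return atoms, coeffs, residual

# Toy usage with a random normalized dictionary standing in for geometric atoms.
rng = np.random.default_rng(0)
d, n_atoms = 256, 1024
D = rng.normal(size=(d, n_atoms))
D /= np.linalg.norm(D, axis=0)
img = D[:, 3] * 2.0 - D[:, 77] * 1.0 + 0.01 * rng.normal(size=d)
atoms, coeffs, _ = matching_pursuit(img, D, k=2)
print(atoms, np.round(coeffs, 2))  # typically recovers atoms 3 and 77
```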

    One-bit compressive sensing with norm estimation

    Consider the recovery of an unknown signal $x$ from quantized linear measurements. In the one-bit compressive sensing setting, one typically assumes that $x$ is sparse and that the measurements are of the form $\operatorname{sign}(\langle a_i, x \rangle) \in \{\pm 1\}$. Since such measurements give no information on the norm of $x$, recovery methods from such measurements typically assume that $\|x\|_2 = 1$. We show that if one allows more generally for quantized affine measurements of the form $\operatorname{sign}(\langle a_i, x \rangle + b_i)$, and if the vectors $a_i$ are random, an appropriate choice of the affine shifts $b_i$ allows norm recovery to be easily incorporated into existing methods for one-bit compressive sensing. Additionally, we show that for an arbitrary fixed $x$ in the annulus $r \leq \|x\|_2 \leq R$, one may estimate the norm $\|x\|_2$ up to additive error $\delta$ from $m \gtrsim R^4 r^{-2} \delta^{-2}$ such binary measurements through a single evaluation of the inverse Gaussian error function. Finally, all of our recovery guarantees can be made universal over sparse vectors, in the sense that, with high probability, one set of measurements and thresholds can successfully estimate all sparse vectors $x$ within a Euclidean ball of known radius.

    Comment: 20 pages, 2 figures
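    To make the norm estimate concrete, here is a minimal sketch of one way such an estimator can look, under the illustrative assumption of a constant shift $b_i = \tau$ with Gaussian $a_i$: since $\langle a_i, x \rangle \sim N(0, \|x\|_2^2)$, we get $P(y_i = +1) = \Phi(\tau / \|x\|_2)$, so the norm follows from a single inverse Gaussian CDF evaluation of the empirical sign frequency. The paper's exact choice of shifts and constants may differ.

```python
# Illustrative sketch only: norm estimation from one-bit affine measurements
# y_i = sign(<a_i, x> + tau), assuming Gaussian a_i and a constant shift
# b_i = tau (an assumption for this sketch). Since <a_i, x> ~ N(0, ||x||^2),
# P(y_i = +1) = Phi(tau / ||x||_2), so one inverse-CDF call recovers the norm.
import numpy as np
from scipy.stats import norm

def estimate_norm(y, tau):
    """Estimate ||x||_2 from binary measurements y in {-1, +1}."""
    p_hat = np.mean(y > 0)                  # empirical P(y_i = +1)
    p_hat = np.clip(p_hat, 1e-6, 1 - 1e-6)  # keep the quantile finite
    return tau / norm.ppf(p_hat)            # invert Phi(tau / ||x||_2) = p

# Toy usage: true norm 1.5, recovered from the sign frequency alone.
rng = np.random.default_rng(0)
n, m, tau = 100, 20000, 2.0
x = rng.normal(size=n)
x *= 1.5 / np.linalg.norm(x)
A = rng.normal(size=(m, n))
y = np.sign(A @ x + tau)
print(estimate_norm(y, tau))  # approximately 1.5
```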

    Robust 1-Bit Compressed Sensing via Hinge Loss Minimization

    This work theoretically studies the problem of estimating a structured high-dimensional signal $x_0 \in \mathbb{R}^n$ from noisy 1-bit Gaussian measurements. Our recovery approach is based on a simple convex program which uses the hinge loss function as data fidelity term. While such a risk minimization strategy is very natural to learn binary output models, such as in classification, its capacity to estimate a specific signal vector is largely unexplored. A major difficulty is that the hinge loss is just piecewise linear, so that its "curvature energy" is concentrated in a single point. This is substantially different from other popular loss functions considered in signal estimation, e.g., the square or logistic loss, which are at least locally strongly convex. It is therefore somewhat unexpected that we can still prove very similar types of recovery guarantees for the hinge loss estimator, even in the presence of strong noise. More specifically, our non-asymptotic error bounds show that stable and robust reconstruction of $x_0$ can be achieved with the optimal oversampling rate $O(m^{-1/2})$ in terms of the number of measurements $m$. Moreover, we permit a wide class of structural assumptions on the ground truth signal, in the sense that $x_0$ can belong to an arbitrary bounded convex set $K \subset \mathbb{R}^n$. The proofs of our main results rely on some recent advances in statistical learning theory due to Mendelson. In particular, we invoke an adapted version of Mendelson's small ball method that allows us to establish a quadratic lower bound on the error of the first-order Taylor approximation of the empirical hinge loss function.
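    For intuition, the following is a minimal sketch of the kind of convex program described above: minimize the empirical hinge loss $\frac{1}{m} \sum_i \max(0, 1 - y_i \langle a_i, x \rangle)$ over a bounded convex set $K$, here taken to be a Euclidean ball for illustration and solved by projected subgradient descent. The solver, the choice of $K$, and all parameters are assumptions for the sketch; the paper analyzes the minimizer of the program, not any particular algorithm.

```python
# Illustrative sketch only: recover a signal from noisy 1-bit Gaussian
# measurements by minimizing the empirical hinge loss
#   (1/m) * sum_i max(0, 1 - y_i * <a_i, x>)
# over a Euclidean ball standing in for the convex constraint set K.
# Projected subgradient descent is one of many possible solvers.
import numpy as np

def hinge_loss_recover(A, y, radius=1.0, steps=2000, lr=0.5):
    m, n = A.shape
    x = np.zeros(n)
    for t in range(1, steps + 1):
        margins = y * (A @ x)                  # y_i * <a_i, x>
        active = margins < 1.0                 # terms with nonzero hinge loss
        grad = -(y[active, None] * A[active]).sum(axis=0) / m
        x -= (lr / np.sqrt(t)) * grad          # diminishing step size
        nrm = np.linalg.norm(x)
        if nrm > radius:                       # project back onto the ball K
            x *= radius / nrm
    return x

# Toy usage: noisy one-bit measurements of a unit-norm sparse signal.
rng = np.random.default_rng(0)
n, m = 50, 4000
x0 = np.zeros(n)
x0[:5] = 1.0
x0 /= np.linalg.norm(x0)
A = rng.normal(size=(m, n))
y = np.sign(A @ x0 + 0.1 * rng.normal(size=m))
x_hat = hinge_loss_recover(A, y)
err = np.linalg.norm(x_hat / np.linalg.norm(x_hat) - x0)
print(err)  # direction error, typically small
```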