
    Identifiability Scaling Laws in Bilinear Inverse Problems

    A number of ill-posed inverse problems in signal processing, like blind deconvolution, matrix factorization, dictionary learning and blind source separation, share the common characteristic of being bilinear inverse problems (BIPs), i.e., the observation model is a function of two variables and, conditioned on one variable being known, the observation is a linear function of the other variable. A key issue that arises for such inverse problems is that of identifiability, i.e., whether the observation is sufficient to unambiguously determine the pair of inputs that generated the observation. Identifiability is a key concern for applications like blind equalization in wireless communications and data mining in machine learning. Herein, a unifying and flexible approach to identifiability analysis for general conic prior constrained BIPs is presented, exploiting a connection to low-rank matrix recovery via lifting. We develop deterministic identifiability conditions on the input signals and examine their satisfiability in practice for three classes of signal distributions, viz. dependent but uncorrelated, independent Gaussian, and independent Bernoulli. In each case, scaling laws are developed that trade off the probability of robust identifiability against the complexity of the rank-two null space. An added appeal of our approach is that the rank-two null space can be partly or fully characterized for many bilinear problems of interest (e.g. blind deconvolution). We present numerical experiments involving variations on the blind deconvolution problem that exploit a characterization of the rank-two null space and demonstrate that the scaling laws offer good estimates of identifiability.
    Comment: 25 pages, 5 figures
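    The lifting step mentioned above is easy to see for circular blind deconvolution: each observed sample is a linear functional of the rank-one outer product of the two inputs. A minimal numpy sketch of this connection (illustrative only, not the paper's analysis; the choice of circular convolution is an assumption):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 8
u, v = rng.standard_normal(n), rng.standard_normal(n)

# Circular convolution y = u (*) v via the FFT.
y = np.real(np.fft.ifft(np.fft.fft(u) * np.fft.fft(v)))

# Lifting: y[k] = <A_k, X> with X = u v^T and (A_k)[i, j] = 1 iff (i + j) mod n == k,
# so the bilinear observation is a linear function of the rank-one matrix X.
X = np.outer(u, v)
A = np.zeros((n, n, n))
for k in range(n):
    for i in range(n):
        A[k, i, (k - i) % n] = 1.0

y_lifted = np.tensordot(A, X, axes=([1, 2], [0, 1]))
print(np.allclose(y, y_lifted))  # True
```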

    Optimal Injectivity Conditions for Bilinear Inverse Problems with Applications to Identifiability of Deconvolution Problems

    We study identifiability for bilinear inverse problems under sparsity and subspace constraints. We show that, up to a global scaling ambiguity, almost all such maps are injective on the set of pairs of sparse vectors if the number of measurements $m$ exceeds $2(s_1+s_2)-2$, where $s_1$ and $s_2$ denote the sparsity of the two input vectors, and injective on the set of pairs of vectors lying in known subspaces of dimensions $n_1$ and $n_2$ if $m \geq 2(n_1+n_2)-4$. We also prove that both these bounds are tight in the sense that one cannot have injectivity for a smaller number of measurements. Our proof technique draws from algebraic geometry. As an application, we derive optimal identifiability conditions for the deconvolution problem, thus improving on recent work of Li et al. [1].
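    For concreteness, a worked instance of the two bounds (illustrative numbers, not taken from the paper):

```latex
% s_1 = s_2 = 5 sparse pair, or n_1 = n_2 = 10 dimensional subspaces:
\[
  m > 2(s_1+s_2) - 2 = 2(5+5) - 2 = 18,
  \qquad
  m \ge 2(n_1+n_2) - 4 = 2(10+10) - 4 = 36 .
\]
```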

    Identifiability in Blind Deconvolution with Subspace or Sparsity Constraints

    Blind deconvolution (BD), the resolution of a signal and a filter given their convolution, arises in many applications. Without further constraints, BD is ill-posed. In practice, subspace or sparsity constraints have been imposed to reduce the search space, and have shown some empirical success. However, existing theoretical analysis on uniqueness in BD is rather limited. As an effort to address this still mysterious question, we derive sufficient conditions under which two vectors can be uniquely identified from their circular convolution, subject to subspace or sparsity constraints. These sufficient conditions provide the first algebraic sample complexities for BD. We first derive a sufficient condition that applies to almost all bases or frames. For blind deconvolution of vectors in $\mathbb{C}^n$, with two subspace constraints of dimensions $m_1$ and $m_2$, the required sample complexity is $n \geq m_1 m_2$. Then we impose a sub-band structure on one basis, and derive a sufficient condition that involves a relaxed sample complexity $n \geq m_1+m_2-1$, which we show to be optimal. We present the extensions of these results to BD with sparsity constraints or mixed constraints, with the sparsity level replacing the subspace dimension. The cost for the unknown support in this case is an extra factor of 2 in the sample complexity.
    Comment: 17 pages, 3 figures. Some of these results will be presented at SPARS 201
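    The $n \geq m_1 m_2$ regime can be probed numerically by forming the lifted linear operator that maps $\mathrm{vec}(\mathbf{u}\mathbf{v}^T)$ to the circular convolution of the two subspace-constrained signals and checking that it has full column rank, which suffices for identifiability of the lifted matrix. A small sketch under generic (random) bases, illustrative rather than the paper's proof technique:

```python
import numpy as np

rng = np.random.default_rng(1)
n, m1, m2 = 16, 3, 4                 # regime n >= m1 * m2

B1 = rng.standard_normal((n, m1))    # generic basis for the signal subspace
B2 = rng.standard_normal((n, m2))    # generic basis for the filter subspace

def cconv(a, b):
    return np.real(np.fft.ifft(np.fft.fft(a) * np.fft.fft(b)))

# y = (B1 u) (*) (B2 v) is linear in the lifted matrix u v^T, so stack the
# convolutions of basis pairs into the n x (m1*m2) lifted operator Phi.
Phi = np.column_stack([cconv(B1[:, i], B2[:, j])
                       for i in range(m1) for j in range(m2)])

# Full column rank of Phi determines every lifted matrix, hence every pair
# (u, v) up to the scaling ambiguity.
print(np.linalg.matrix_rank(Phi), m1 * m2)   # typically 12 12
```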

    A Unified Framework for Identifiability Analysis in Bilinear Inverse Problems with Applications to Subspace and Sparsity Models

    Bilinear inverse problems (BIPs), the resolution of two vectors given their image under a bilinear mapping, arise in many applications. Without further constraints, BIPs are usually ill-posed. In practice, properties of natural signals are exploited to solve BIPs. For example, subspace constraints or sparsity constraints are imposed to reduce the search space. These approaches have shown some success in practice. However, there are few results on uniqueness in BIPs. For most BIPs, the fundamental question of under what conditions the problem admits a unique solution is yet to be answered. For example, blind gain and phase calibration (BGPC) is a structured bilinear inverse problem, which arises in many applications, including inverse rendering in computational relighting (albedo estimation with unknown lighting), blind phase and gain calibration in sensor array processing, and multichannel blind deconvolution (MBD). It is interesting to study the uniqueness of such problems. In this paper, we define identifiability of a BIP up to a group of transformations. We derive necessary and sufficient conditions for such identifiability, i.e., the conditions under which the solutions can be uniquely determined up to the transformation group. Applying these results to BGPC, we derive sufficient conditions for unique recovery under several scenarios, including subspace, joint sparsity, and sparsity models. For BGPC with joint sparsity or sparsity constraints, we develop a procedure to compute the relevant transformation groups. We also give necessary conditions in the form of tight lower bounds on sample complexities, and demonstrate the tightness of these bounds by numerical experiments. The results for BGPC not only demonstrate the application of the proposed general framework for identifiability analysis, but are also of interest in their own right.
    Comment: 40 pages, 3 figures
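    As a point of reference for the BGPC setting discussed above, a common formulation observes each snapshot through a known sensing matrix with unknown per-sensor complex gains; the sketch below (illustrative model and dimensions, not necessarily the paper's exact setup) shows the bilinear structure and the scaling ambiguity that motivates identifiability up to a transformation group:

```python
import numpy as np

rng = np.random.default_rng(2)
n, m, p = 32, 64, 5                  # sensors, signal dimension, snapshots

A = rng.standard_normal((n, m))                                  # known sensing matrix
gains = rng.standard_normal(n) + 1j * rng.standard_normal(n)     # unknown gains/phases
X = rng.standard_normal((m, p))                                  # unknown signals

# BGPC-style observation: each snapshot is scaled entrywise by the unknown
# gains, so Y is bilinear in (gains, X).
Y = np.diag(gains) @ (A @ X)

# Scaling ambiguity: (c * gains, X / c) explains Y equally well for any c != 0.
c = 3.0 - 2.0j
print(np.allclose(Y, np.diag(c * gains) @ (A @ (X / c))))        # True
```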

    Identifiability and Stability in Blind Deconvolution under Minimal Assumptions

    Blind deconvolution (BD) arises in many applications. Without assumptions on the signal and the filter, BD does not admit a unique solution. In practice, subspace or sparsity assumptions have shown the ability to reduce the search space and yield the unique solution. However, existing theoretical analysis on uniqueness in BD is rather limited. In an earlier paper, we provided the first algebraic sample complexities for BD that hold for almost all bases or frames. We showed that for BD of a pair of vectors in $\mathbb{C}^n$, with subspace constraints of dimensions $m_1$ and $m_2$, respectively, a sample complexity of $n \geq m_1 m_2$ is sufficient. This result is suboptimal, since the number of degrees of freedom is merely $m_1+m_2-1$. We provided analogous results, with similar suboptimality, for BD with sparsity or mixed subspace and sparsity constraints. In this paper, taking advantage of the recent progress on the information-theoretic limits of unique low-rank matrix recovery, we finally bridge this gap, and derive an optimal sample complexity result for BD with generic bases or frames. We show that for BD of an arbitrary pair (resp. all pairs) of vectors in $\mathbb{C}^n$, with sparsity constraints of sparsity levels $s_1$ and $s_2$, a sample complexity of $n > s_1+s_2$ (resp. $n > 2(s_1+s_2)$) is sufficient. We also present analogous results for BD with subspace constraints or mixed constraints, with the subspace dimension replacing the sparsity level. Last but not least, in all the above scenarios, if the bases or frames follow a probabilistic distribution specified in the paper, the recovery is not only unique, but also stable against small perturbations in the measurements, under the same sample complexities.
    Comment: 32 pages
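    To make the gap being closed concrete, a quick count for subspace dimensions $m_1 = m_2 = 10$ (illustrative numbers; the subspace case follows the abstract's statement that the subspace dimension replaces the sparsity level):

```latex
\[
  \underbrace{n \ge m_1 m_2 = 100}_{\text{earlier algebraic bound}}
  \quad\text{vs.}\quad
  \underbrace{n > m_1 + m_2 = 20}_{\text{new bound, fixed generic pair}},
  \qquad
  \text{degrees of freedom} = m_1 + m_2 - 1 = 19 .
\]
```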

    Blind Recovery of Sparse Signals from Subsampled Convolution

    Subsampled blind deconvolution is the recovery of two unknown signals from samples of their convolution. To overcome the ill-posedness of this problem, solutions based on priors tailored to specific applications have been developed in practice. In particular, sparsity models have provided promising priors. However, in spite of the empirical success of these methods in many applications, existing analyses are rather limited in two main ways: by disparity between the theoretical assumptions on the signal and/or measurement model versus practical setups; or by failure to provide a performance guarantee for parameter values within the optimal regime defined by the information-theoretic limits. In particular, it has been shown that a naive sparsity model is not a strong enough prior for identifiability in the blind deconvolution problem. Instead, in addition to sparsity, we adopt a conic constraint, which enforces spectral flatness of the signals. Under this prior, we provide an iterative algorithm that achieves guaranteed performance in blind deconvolution at near-optimal sample complexity. Numerical results show that the empirical performance of the iterative algorithm agrees with the performance guarantee.
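    The abstract does not spell out the algorithm, but the notion of spectral flatness behind the conic constraint is easy to illustrate with a classical flatness measure (geometric over arithmetic mean of the power spectrum); a minimal sketch with illustrative signals, not the paper's constraint set:

```python
import numpy as np

def spectral_flatness(v, eps=1e-12):
    """Geometric mean over arithmetic mean of the power spectrum (1 = flat)."""
    p = np.abs(np.fft.fft(v)) ** 2
    return np.exp(np.mean(np.log(p + eps))) / np.mean(p)

n = 64
impulse = np.zeros(n); impulse[0] = 1.0          # perfectly flat spectrum
constant = np.ones(n)                            # all energy in one frequency bin

print(spectral_flatness(impulse))                # ~ 1.0
print(spectral_flatness(constant))               # ~ 0.0
```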

    Fundamental Limits of Blind Deconvolution Part I: Ambiguity Kernel

    Blind deconvolution is a ubiquitous non-linear inverse problem in applications like wireless communications and image processing. This problem is generally ill-posed, and there have been efforts to use sparse models for regularizing blind deconvolution to promote signal identifiability. Part I of this two-part paper characterizes the ambiguity space of blind deconvolution and shows unidentifiability of this inverse problem for almost every pair of unconstrained input signals. The approach involves lifting the deconvolution problem to a rank-one matrix recovery problem and analyzing the rank-two null space of the resultant linear operator. A measure-theoretically tight (parametric and recursive) representation of the key rank-two null space is stated and proved. This representation is a novel foundational result for signal and code design strategies promoting identifiability under convolutive observation models. Part II of this paper analyzes the identifiability of sparsity-constrained blind deconvolution and establishes surprisingly strong negative results on scaling laws for the sparsity-ambiguity trade-off.
    Comment: 20 pages, 4 figures
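    Two well-known members of that ambiguity space for the circular-convolution model, scaling and opposite circular shifts, can be checked in a few lines (a simple illustration, not the paper's full parametric characterization):

```python
import numpy as np

rng = np.random.default_rng(3)
n = 16
u, v = rng.standard_normal(n), rng.standard_normal(n)

def cconv(a, b):
    return np.real(np.fft.ifft(np.fft.fft(a) * np.fft.fft(b)))

y = cconv(u, v)

# Scaling ambiguity: (alpha * u, v / alpha) yields the same observation.
alpha = 2.5
print(np.allclose(y, cconv(alpha * u, v / alpha)))            # True

# Shift ambiguity: circularly shifting u by k and v by -k leaves y unchanged.
k = 5
print(np.allclose(y, cconv(np.roll(u, k), np.roll(v, -k))))   # True
```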

    Sparse Model Uncertainties in Compressed Sensing with Application to Convolutions and Sporadic Communication

    The success of the compressed sensing paradigm has shown that a substantial reduction in sampling and storage complexity can be achieved in certain linear and non-adaptive estimation problems. It is therefore an advisable strategy for noncoherent information retrieval in, for example, sporadic blind and semi-blind communication and sampling problems. However, the conventional model is not practical here, since the compressible signals have to be estimated from samples taken solely at the output of an uncalibrated system which is unknown during measurement but often itself compressible. Conventionally, one either has to operate at suboptimal sampling rates, or the recovery performance suffers substantially from the dominance of model mismatch. In this work we discuss this type of estimation problem, with a focus on bilinear inverse problems. We link this problem to the recovery of low-rank and sparse matrices and establish stable low-dimensional embeddings of the uncalibrated receive signals, thereby also addressing efficient communication-oriented methods like universal random demodulation. As an example, we investigate in more detail sparse convolutions serving as a basic communication channel model. Using some recent results from additive combinatorics, we show that such signals can be efficiently sampled at low rates by semi-blind methods. Finally, we present a further application of these results in the field of phase retrieval from intensity Fourier measurements.
    Comment: Book chapter, submitted to "Compressed Sensing and its Applications", 31 pages, revised version

    Parametric Bilinear Generalized Approximate Message Passing

    We propose a scheme to estimate the parameters $b_i$ and $c_j$ of the bilinear form $z_m=\sum_{i,j} b_i z_m^{(i,j)} c_j$ from noisy measurements $\{y_m\}_{m=1}^M$, where $y_m$ and $z_m$ are related through an arbitrary likelihood function and $z_m^{(i,j)}$ are known. Our scheme is based on generalized approximate message passing (G-AMP): it treats $b_i$ and $c_j$ as random variables and $z_m^{(i,j)}$ as an i.i.d. Gaussian 3-way tensor in order to derive a tractable simplification of the sum-product algorithm in the large-system limit. It generalizes previous instances of bilinear G-AMP, such as those that estimate matrices $\boldsymbol{B}$ and $\boldsymbol{C}$ from a noisy measurement of $\boldsymbol{Z}=\boldsymbol{B}\boldsymbol{C}$, allowing the application of AMP methods to problems such as self-calibration, blind deconvolution, and matrix compressive sensing. Numerical experiments confirm the accuracy and computational efficiency of the proposed approach.
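    A minimal numpy sketch of the measurement model in this abstract (forward model only; the G-AMP inference itself is not sketched, and the AWGN likelihood and dimensions are illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(4)
M, Nb, Nc = 200, 5, 7                  # measurements, number of b_i, number of c_j

Z = rng.standard_normal((M, Nb, Nc))   # known coefficients z_m^{(i,j)}
b = rng.standard_normal(Nb)            # unknown parameters b_i
c = rng.standard_normal(Nc)            # unknown parameters c_j

# Bilinear form z_m = sum_{i,j} b_i z_m^{(i,j)} c_j, i.e. z_m = b^T Z_m c.
z = np.einsum('i,mij,j->m', b, Z, c)

# One choice of likelihood relating y_m to z_m: additive Gaussian noise.
y = z + 0.05 * rng.standard_normal(M)
print(y.shape)                         # (200,)
```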

    Blind Identification of Graph Filters

    Network processes are often represented as signals defined on the vertices of a graph. To untangle the latent structure of such signals, one can view them as outputs of linear graph filters modeling underlying network dynamics. This paper deals with the problem of joint identification of a graph filter and its input signal, thus broadening the scope of classical blind deconvolution of temporal and spatial signals to the less-structured graph domain. Given a graph signal $\mathbf{y}$ modeled as the output of a graph filter, the goal is to recover the vector of filter coefficients $\mathbf{h}$, and the input signal $\mathbf{x}$ which is assumed to be sparse. While $\mathbf{y}$ is a bilinear function of $\mathbf{x}$ and $\mathbf{h}$, the filtered graph signal is also a linear combination of the entries of the lifted rank-one, row-sparse matrix $\mathbf{x}\mathbf{h}^T$. The blind graph-filter identification problem can thus be tackled via rank and sparsity minimization subject to linear constraints, an inverse problem amenable to convex relaxations offering provable recovery guarantees under simplifying assumptions. Numerical tests using both synthetic and real-world networks illustrate the merits of the proposed algorithms, as well as the benefits of leveraging multiple signals to aid the blind identification task.
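    A minimal cvxpy sketch of the kind of convex relaxation alluded to above: minimize a nuclear norm plus a row-sparsity surrogate over the lifted matrix, subject to the linear observation constraints. The random shift operator, filter order, and weight below are illustrative assumptions, not the paper's setup, and exact recovery is not guaranteed for this particular instance:

```python
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(5)
N, L, s = 20, 3, 3                       # nodes, filter order, sparsity of x

# Illustrative symmetric graph shift operator (random unweighted graph).
adj = (rng.random((N, N)) < 0.2).astype(float)
S = np.triu(adj, 1); S = S + S.T

h = rng.standard_normal(L)               # unknown filter coefficients
x = np.zeros(N)
x[rng.choice(N, s, replace=False)] = rng.standard_normal(s)   # sparse input

Spow = [np.linalg.matrix_power(S, l) for l in range(L)]
y = sum(h[l] * (Spow[l] @ x) for l in range(L))               # observed graph signal

# y is linear in the lifted matrix M = x h^T: y = sum_l S^l M[:, l].
M = cp.Variable((N, L))
y_model = sum(Spow[l] @ M[:, l] for l in range(L))
tau = 1.0                                # illustrative trade-off weight
objective = cp.normNuc(M) + tau * cp.sum(cp.norm(M, axis=1))  # rank + row-sparsity surrogates
prob = cp.Problem(cp.Minimize(objective), [y_model == y])
prob.solve()

print(np.linalg.matrix_rank(M.value, tol=1e-6))  # ideally 1, the rank of x h^T
```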