
    Bilinear inverse problems with sparsity: Optimal identifiability conditions and efficient recovery

    Bilinear inverse problems (BIPs), the recovery of two vectors from their image under a bilinear mapping, arise in many applications. Without further constraints, BIPs are usually ill-posed. In practice, parsimonious structures of natural signals (e.g., subspace or sparsity) are exploited, but there are few theoretical justifications for using such structures in BIPs. We consider two types of BIPs, blind deconvolution (BD) and blind gain and phase calibration (BGPC), with subspace or sparsity structures. Our contributions are twofold: we derive optimal identifiability conditions, and we propose efficient algorithms that solve these problems.

In previous work, we provided the first algebraic sample complexities for BD that hold for Lebesgue almost all bases or frames. We showed that for BD of a pair of vectors in C^n, with subspace constraints of dimensions m_1 and m_2, respectively, a sample complexity of n >= m_1 m_2 is sufficient. This result is suboptimal, since the number of degrees of freedom is merely m_1 + m_2 - 1. We provided analogous results, with similar suboptimality, for BD with sparsity or mixed subspace and sparsity constraints. In Chapter 2, taking advantage of recent progress on the information-theoretic limits of unique low-rank matrix recovery, we finally bridge this gap and derive an optimal sample complexity result for BD with generic bases or frames. We show that for BD of an arbitrary pair (resp. all pairs) of vectors in C^n, with sparsity constraints of sparsity levels s_1 and s_2, a sample complexity of n > s_1 + s_2 (resp. n > 2(s_1 + s_2)) is sufficient. We also present analogous results for BD with subspace constraints or mixed constraints, with the subspace dimension replacing the sparsity level.
Last but not least, in all the above scenarios, if the bases or frames follow a probabilistic distribution specified in Chapter 2, the recovery is not only unique, but also stable against small perturbations in the measurements, under the same sample complexities. In previous work, we proposed studying the identifiability in bilinear inverse problems up to transformation groups. In particular, we studied several special cases of blind gain and phase calibration, including the cases of subspace and joint sparsity models on the signals, and gave sufficient and necessary conditions for identifiability up to certain transformation groups. However, there were gaps between the sample complexities in the sufficient conditions and in the necessary conditions. In Chapter 3, under the mild assumption that the signals and models are generic, we bridge the gaps by deriving tight sufficient conditions with optimal or near-optimal sample complexities. Recently there has been renewed interest in solutions to BGPC with careful analysis of error bounds. In Chapter 4, we formulate BGPC as an eigenvalue/eigenvector problem, and propose to solve it via power iteration, or, in the sparsity or joint sparsity case, via truncated power iteration (which we show is equivalent to a sparsity-projected gradient descent). Under certain assumptions, the unknown gains, phases, and the unknown signal can be recovered simultaneously. Numerical experiments show that the power iteration algorithms work not only in the regime predicted by our main results, but also in regimes where theoretical analysis is limited. We also show that our power iteration algorithms for BGPC compare favorably with competing algorithms under adversarial conditions, e.g., with noisy measurements or a bad initial estimate.
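The truncated power iteration mentioned above can be sketched generically. The snippet below is a minimal illustration of truncated power iteration for recovering a sparse leading eigenvector of a symmetric matrix; the BGPC-specific construction of that matrix in Chapter 4 is not reproduced here, so the matrix `A`, the sparsity level `s`, and the iteration count are illustrative assumptions.

```python
import numpy as np

def truncated_power_iteration(A, s, iters=100, seed=0):
    """Approximate the leading eigenvector of a symmetric matrix A
    under an s-sparsity constraint (generic sketch, not the
    BGPC-specific formulation)."""
    rng = np.random.default_rng(seed)
    n = A.shape[0]
    x = rng.standard_normal(n)
    x /= np.linalg.norm(x)
    for _ in range(iters):
        y = A @ x                          # power step
        # truncation: zero all but the s largest-magnitude entries
        small = np.argsort(np.abs(y))[:-s]
        y[small] = 0.0
        x = y / np.linalg.norm(y)          # renormalize
    return x
```

Each iteration is a power step followed by hard thresholding onto s-sparse vectors, which is what makes the method interpretable as a sparsity-projected gradient descent.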
A problem related to BGPC is multichannel blind deconvolution (MBD) with a circular convolution model, i.e., the recovery of an unknown signal f and multiple unknown filters x_i from the circular convolutions y_i = x_i ⊛ f (i = 1, 2, ..., N). In Chapter 5, we consider the case where the x_i's are sparse and convolution with f is invertible. Our nonconvex optimization formulation solves for a filter h on the unit sphere that produces sparse outputs y_i ⊛ h. Under some technical assumptions, we show that all local minima of the objective function correspond to the inverse filter of f up to an inherent sign and shift ambiguity, and that all saddle points have strictly negative curvatures. This geometric structure allows successful recovery of f and the x_i's using a simple manifold gradient descent algorithm with random initialization. Our theoretical findings are complemented by numerical experiments, which demonstrate superior performance of the proposed approach over previous methods.
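The nonconvex formulation above (find h on the unit sphere so that the outputs y_i ⊛ h are sparse) lends itself to a compact sketch. The code below is a minimal illustration of manifold gradient descent on the sphere, using the negative ℓ4-norm of the outputs as an assumed smooth sparsity-promoting surrogate; the exact objective, step size, and stopping rule of Chapter 5 are not reproduced.

```python
import numpy as np

def circ_conv(a, b):
    """Circular convolution of two equal-length real vectors via FFT."""
    return np.real(np.fft.ifft(np.fft.fft(a) * np.fft.fft(b)))

def circ_corr(a, b):
    """Circular cross-correlation: out[m] = sum_n a[n] * b[n + m]."""
    return np.real(np.fft.ifft(np.conj(np.fft.fft(a)) * np.fft.fft(b)))

def manifold_gd(ys, h0, step=0.05, iters=200):
    """Riemannian gradient descent on the unit sphere for the assumed
    surrogate f(h) = -(1/4) * sum_i ||y_i (circ) h||_4^4."""
    h = h0 / np.linalg.norm(h0)
    for _ in range(iters):
        g = np.zeros_like(h)
        for y in ys:
            z = circ_conv(y, h)
            g -= circ_corr(y, z ** 3)    # Euclidean gradient of f
        g -= (h @ g) * h                 # project onto tangent space at h
        h = h - step * g                 # gradient step in the tangent space
        h /= np.linalg.norm(h)           # retract back to the sphere
    return h
```

The tangent-space projection followed by renormalization is a standard Riemannian gradient step with metric-projection retraction; since the projected gradient is orthogonal to h, the pre-retraction iterate can never vanish, so the normalization is always well defined.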

    Manifold Gradient Descent Solves Multi-Channel Sparse Blind Deconvolution Provably and Efficiently

    Multi-channel sparse blind deconvolution, or convolutional sparse coding, refers to the problem of learning an unknown filter by observing its circulant convolutions with multiple input signals that are sparse. This problem finds numerous applications in signal processing, computer vision, and inverse problems. However, it is challenging to learn the filter efficiently due to the bilinear structure of the observations with respect to the unknown filter and inputs, as well as the sparsity constraint. In this paper, we propose a novel approach based on nonconvex optimization over the sphere manifold, minimizing a smooth surrogate of the sparsity-promoting loss function. It is demonstrated that manifold gradient descent with random initialization will provably recover the filter, up to a scaling and shift ambiguity, as soon as the number of observations is sufficiently large under an appropriate random data model. Numerical experiments are provided to illustrate the performance of the proposed method with comparisons to existing ones.
Comment: accepted by IEEE Transactions on Information Theory

    Signal eigen-analysis and L1 inversion of seismic data

    This thesis covers seismic signal analysis and inversion, and can be divided into two parts. The first part includes principal component analysis (PCA) and singular spectrum analysis (SSA); the objectives of these two eigen-analyses are extracting weak signals and designing an optimal spatial sampling interval. The second part is on least-squares inverse problems with an L1-norm constraint. The study covers seismic reflectivity inversion, in which L1 regularization provides a sparse solution for the reflectivity series, and seismic reverse time migration, in which L1 regularization generates high-resolution images.

PCA is a well-known eigenvector-based multivariate analysis technique which decomposes a data set into principal components, in order to maximize the information content in the recorded data with fewer dimensions. PCA can be described from two viewpoints: one is derived by maximizing the variance of the principal components, and the other draws a connection between the representation of data variance and the representation of the data themselves using the Singular Value Decomposition (SVD). Each approach has a unique motivation, and thus comparing the two provides further understanding of the PCA theory. While the dominant components contain the primary energy of the original seismic data, the remaining components may be used to reconstruct weak signals, which reflect the geometrical properties of fractures and pores and the fluid properties in the reservoirs.

Whereas PCA is conducted on time-domain data, singular spectrum analysis (SSA) is applied to frequency-domain data to analyse signal characteristics related to spatial sampling. For a given frequency, this technique transforms the spatial acquisition data into a Hankel matrix. Ideally, the rank of this matrix is the total number of plane waves within the selected spatial window. However, the presence of noise and the absence of seismic traces may increase the rank of the Hankel matrix. Thus rank deflation can be an effective way to attenuate noise and handle missing traces. In this thesis, SSA is conducted on seismic data to find an optimal spatial sampling interval.

Seismic reflectivity inversion is a deconvolution process which compresses the seismic wavelet and retrieves the reflectivity series from seismic records. It is a key technique for further inversion, as seismic reflectivity series are required to retrieve impedance and other elastic parameters. Sparseness is an important feature of the reflectivity series: under the sparseness assumption, the location of a reflectivity spike indicates the position of an impedance-contrast interface, and its amplitude indicates the reflection energy. When L1 regularization is used as the sparseness constraint, the inverse problem becomes nonlinear. It is therefore posed as a Basis Pursuit Denoising (BPDN) or Least Absolute Shrinkage and Selection Operator (LASSO) optimization problem and solved by the spectral projected gradient (SPG) algorithm.

Migration is a key technique to image the Earth's subsurface structures by moving dipping reflections to their true subsurface locations and collapsing diffractions. Reverse time migration (RTM) is a depth migration method which constructs wavefields along the time axis. RTM extrapolates wavefields using a two-way wave equation in the time-space domain, and uses the adjoint operator, instead of the inverse operator, to migrate the record. To improve the signal-to-noise ratio and the resolution of RTM images, RTM may be implemented as a least-squares inverse problem with an L1-norm constraint. In this way, the advantages of RTM itself, least-squares RTM, and L1 regularization are combined to obtain a high-resolution, two-way wave-equation-based depth migration image.
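The rank property of the Hankel matrix in the SSA discussion above can be checked numerically. The sketch below builds a Hankel matrix from single-frequency spatial data composed of p complex plane waves (an illustrative data model; the amplitudes, wavenumbers, and window length are arbitrary choices) and verifies that its numerical rank equals p.

```python
import numpy as np

def hankel_matrix(d, L):
    """L x (len(d) - L + 1) Hankel matrix: H[l, k] = d[l + k]."""
    K = len(d) - L + 1
    return np.array([d[i:i + L] for i in range(K)]).T

# Single-frequency spatial data: a superposition of p plane waves,
# d[x] = sum_j exp(1j * k_j * x)  (illustrative model).
nx, p = 40, 3
x = np.arange(nx)
ks = [0.3, 0.8, 1.7]                  # distinct spatial wavenumbers
d = sum(np.exp(1j * k * x) for k in ks)

H = hankel_matrix(d, L=nx // 2)
svals = np.linalg.svd(H, compute_uv=False)
rank = int(np.sum(svals > 1e-8 * svals[0]))   # numerical rank equals p
```

Noise or missing traces would lift the small singular values above the threshold, which is why SSA truncates (deflates) the SVD back to the expected number of plane waves before reconstructing the data.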
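The BPDN/LASSO formulation of reflectivity inversion described above can be illustrated with a simpler solver than the SPG algorithm used in the thesis: the iterative soft-thresholding algorithm (ISTA) below minimizes the same objective 0.5 * ||W r - y||^2 + lam * ||r||_1, where W stands for the linear forward operator (e.g., convolution with the seismic wavelet); the matrix and regularization weight in the sketch are arbitrary.

```python
import numpy as np

def soft_threshold(v, t):
    """Proximal operator of t * ||.||_1 (entrywise shrinkage)."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def ista(W, y, lam, iters=3000):
    """ISTA for min_r 0.5 * ||W r - y||_2^2 + lam * ||r||_1.
    (Shown as a compact proximal-gradient alternative to the SPG
    solver used in the thesis; same BPDN/LASSO objective.)"""
    L = np.linalg.norm(W, 2) ** 2        # Lipschitz constant of the gradient
    r = np.zeros(W.shape[1])
    for _ in range(iters):
        grad = W.T @ (W @ r - y)         # gradient of the data-fit term
        r = soft_threshold(r - grad / L, lam / L)
    return r
```

With a small lam, the recovered r is a sparse spike series whose nonzero locations mark the impedance-contrast interfaces, matching the interpretation of the reflectivity series given above.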