
    Wavefield recovery with limited-subspace weighted matrix factorizations

    Modern-day seismic imaging and monitoring technology increasingly relies on dense full-azimuth sampling. Unfortunately, the costs of acquiring densely sampled data rapidly become prohibitive, and we need to look for ways to collect data sparsely, e.g. from sparsely distributed ocean-bottom nodes, from which we then derive densely sampled surveys through wavefield reconstruction. Because of its relatively cheap and simple computations, wavefield reconstruction via matrix factorizations has proven to be a viable and scalable alternative to the more commonly used transform-based methods. While this method is capable of processing full-azimuth data frequency slice by frequency slice, its performance degrades at higher frequencies because monochromatic data at these frequencies is not as well approximated by low-rank factorizations. We address this problem by proposing a recursive recovery technique, involving weighted matrix factorizations in which recovered wavefields at the lower frequencies serve as prior information for the recovery of the higher frequencies. To limit the adverse effects of potential overfitting, we propose a limited-subspace recursively weighted matrix factorization approach in which the size of the row and column subspaces used to construct the weight matrices is constrained. We apply our method to data collected from the Gulf of Suez, and our results show that our limited-subspace weighted recovery method significantly improves recovery quality.
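
    A minimal sketch of this kind of weighted recovery step, assuming the prior subspaces come from the truncated SVD of the slice recovered at the previous (lower) frequency; the function and parameter names are hypothetical and this is not the authors' implementation:

        import numpy as np

        def weighted_slice_recovery(Y, mask, U_prior, V_prior, rank, w=0.5,
                                    iters=200, lr=2e-3):
            # Weight matrices shrink components inside the prior subspaces by
            # w < 1, encoding trust in the wavefield recovered at the lower
            # frequency; the factorization lives in the weighted variables.
            m, n = Y.shape
            Pu, Pv = U_prior @ U_prior.T, V_prior @ V_prior.T
            Wu_inv = (1.0 / w) * Pu + (np.eye(m) - Pu)
            Wv_inv = (1.0 / w) * Pv + (np.eye(n) - Pv)
            rng = np.random.default_rng(0)
            L = 0.1 * rng.standard_normal((m, rank))
            R = 0.1 * rng.standard_normal((n, rank))
            for _ in range(iters):
                # data misfit on the observed (sparsely acquired) entries only
                E = mask * (Wu_inv @ L @ R.T @ Wv_inv - Y)
                L -= lr * Wu_inv.T @ E @ Wv_inv.T @ R
                R -= lr * (Wu_inv.T @ E @ Wv_inv.T).T @ L
            return Wu_inv @ L @ R.T @ Wv_inv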

    Transfer learning in large-scale ocean bottom seismic wavefield reconstruction

    Achieving desirable receiver sampling in ocean-bottom acquisition is often not possible because of cost considerations. Assuming adequate source sampling is available, which is achievable by virtue of reciprocity and the use of modern randomized (simultaneous-source) marine acquisition technology, we are in a position to train convolutional neural networks (CNNs) to bring the receiver sampling to the same spatial grid as the dense source sampling. To accomplish this task, we form training pairs consisting of densely sampled data and artificially subsampled data, using a reciprocity argument and the assumption that the source-side sampling is dense. While this approach has been used successfully to recover monochromatic frequency slices, its application in practice calls for wavefield reconstruction of time-domain data. Despite having the option to parallelize, the overall costs of this approach can become prohibitive if we carry out the training and recovery independently for each frequency. Because different frequency slices share information, we propose to use transfer training to make our approach computationally more efficient by warm starting the training with CNN weights obtained from a neighboring frequency slice. If the two neighboring frequency slices share information, we expect the training to improve and converge faster. Our aim is to demonstrate this principle by carrying out a series of carefully selected experiments on a relatively large-scale five-dimensional synthetic data volume associated with wide-azimuth 3D ocean-bottom node acquisition. From these experiments, we observe that transfer training yields a significant speedup in training, especially at relatively higher frequencies where consecutive frequency slices are more correlated.
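
    A hedged PyTorch sketch of the warm-start idea: the CNN trained on frequency slice f_{k-1} initializes the CNN for slice f_k instead of random weights. The architecture, names, and dummy data are placeholders, not the network or data used in the paper:

        import torch
        import torch.nn as nn

        class SliceCNN(nn.Module):
            # stand-in network mapping a subsampled slice (real/imag channels)
            # to its densely sampled counterpart
            def __init__(self):
                super().__init__()
                self.net = nn.Sequential(
                    nn.Conv2d(2, 32, 3, padding=1), nn.ReLU(),
                    nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
                    nn.Conv2d(32, 2, 3, padding=1))

            def forward(self, x):
                return self.net(x)

        def train_slice(model, pairs, epochs=10, lr=1e-3):
            opt = torch.optim.Adam(model.parameters(), lr=lr)
            for _ in range(epochs):
                for subsampled, dense in pairs:  # pairs built via reciprocity
                    opt.zero_grad()
                    loss = nn.functional.mse_loss(model(subsampled), dense)
                    loss.backward()
                    opt.step()
            return model

        # dummy stand-in batches; real pairs come from the acquired data
        pairs = [(torch.randn(4, 2, 64, 64), torch.randn(4, 2, 64, 64))]

        model_prev = train_slice(SliceCNN(), pairs)          # slice f_{k-1}
        model_next = SliceCNN()
        model_next.load_state_dict(model_prev.state_dict())  # warm start
        model_next = train_slice(model_next, pairs)          # slice f_k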

    Multi-weight Nuclear Norm Minimization for Low-rank Matrix Recovery in Presence of Subspace Prior Information

    Weighted nuclear norm minimization has recently been recognized as a technique for reconstructing a low-rank matrix from compressively sampled measurements when some prior information about the column and row subspaces of the matrix is available. In this work, we study the recovery conditions and the associated recovery guarantees of weighted nuclear norm minimization when multiple weights are allowed. This setup can be used when one has access to prior subspaces forming multiple angles with the column and row subspaces of the ground-truth matrix. While existing works in this field use a single weight to penalize all the angles, we propose a multi-weight problem designed to penalize each angle independently using a distinct weight. Specifically, we prove that our proposed multi-weight problem is stable and robust under weaker conditions on the measurement operator than the analogous conditions for the single-weight scenario and standard nuclear norm minimization. Moreover, it yields smaller reconstruction error than state-of-the-art methods. We illustrate our results with extensive numerical experiments that demonstrate the advantages of allowing multiple weights in the recovery procedure.
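
    Assuming the prior subspaces are available as orthonormal bases, the multi-weight program might look as follows in CVXPY, with one weight per direction on the diagonal; the names are illustrative and the exact formulation is in the paper:

        import numpy as np
        import cvxpy as cp

        def multi_weight_nnm(A, b, U_t, V_t, w_u, w_v, shape):
            # Q_U shrinks each direction of the prior column subspace U_t by
            # its own weight in w_u; the single-weight scheme is the special
            # case w_u = w * ones. Similarly for the row subspace V_t.
            m, n = shape
            Q_U = U_t @ np.diag(w_u) @ U_t.T + np.eye(m) - U_t @ U_t.T
            Q_V = V_t @ np.diag(w_v) @ V_t.T + np.eye(n) - V_t @ V_t.T
            X = cp.Variable((m, n))
            # A acts on the column-major vectorization of X
            prob = cp.Problem(cp.Minimize(cp.normNuc(Q_U @ X @ Q_V)),
                              [A @ cp.vec(X) == b])
            prob.solve()
            return X.value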

    A Greedy Algorithm for Matrix Recovery with Subspace Prior Information

    Matrix recovery is the problem of recovering a low-rank matrix from a few linear measurements. Recently, this problem has gained a lot of attention as it arises in many applications, such as the Netflix prize problem, seismic data interpolation, and collaborative filtering. In these applications, one might have access to additional prior information about the column and row spaces of the matrix. This extra information can potentially enhance matrix recovery performance. In this paper, we propose an efficient greedy algorithm that exploits prior information in the recovery procedure. The performance of the proposed algorithm is measured in terms of the rank-restricted isometry property (R-RIP). Our proposed algorithm with prior subspace information converges under a milder condition on the R-RIP than the case where we do not use prior information. Additionally, our algorithm performs much better than nuclear norm minimization in terms of both computational complexity and success rate.
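
    The paper's algorithm is not reproduced here; the following singular-value-projection-style sketch illustrates how a greedy iteration can fold prior subspaces (U0, V0) into its low-rank projection step. All names are hypothetical; A and At are callables for the measurement operator and its adjoint:

        import numpy as np

        def svp_with_prior(A, At, b, shape, rank, U0, V0, iters=100, step=1.0):
            X = np.zeros(shape)
            for _ in range(iters):
                G = X - step * At(A(X) - b)   # gradient step on the data fit
                # greedy projection: keep the top singular directions of G,
                # augmented with the prior subspaces
                U, s, Vt = np.linalg.svd(G, full_matrices=False)
                Ur = np.linalg.qr(np.hstack([U[:, :rank], U0]))[0]
                Vr = np.linalg.qr(np.hstack([Vt[:rank].T, V0]))[0]
                M = Ur.T @ G @ Vr             # restrict to the joint subspace
                Uc, sc, Vct = np.linalg.svd(M)
                X = (Ur @ Uc[:, :rank] @ np.diag(sc[:rank])
                     @ Vct[:rank] @ Vr.T)     # rank-r iterate
            return X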

    Limitations of Implicit Bias in Matrix Sensing: Initialization Rank Matters

    In matrix sensing, we first numerically identify sensitivity to the initialization rank as a new limitation of the implicit bias of gradient flow. We then partially quantify this phenomenon mathematically, establishing that the gradient flow of the empirical risk is implicitly biased towards low-rank outcomes and successfully learns the planted low-rank matrix, provided that the initialization is low-rank and within a specific "capture neighborhood". This capture neighborhood is far larger than the corresponding neighborhood in local refinement results; the former contains all models with zero training error, whereas the latter is a small neighborhood of a model with zero test error. These new insights enable us to design an alternative algorithm for matrix sensing that complements the high-rank, near-zero initialization scheme predominant in the existing literature.
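
    To make the setup concrete, here is a small numerical sketch of the regime under study: gradient descent (a discretization of gradient flow) on the factorized sensing loss, started from a rank-r initialization rather than the high-rank near-zero one. Dimensions and step size are arbitrary choices, not the paper's:

        import numpy as np

        rng = np.random.default_rng(0)
        n, r, p = 30, 2, 300
        G = rng.standard_normal((n, r))
        X_star = G @ G.T                                 # planted rank-r PSD matrix
        A = rng.standard_normal((p, n, n)) / np.sqrt(p)  # Gaussian sensing matrices
        b = np.einsum('kij,ij->k', A, X_star)            # measurements

        F = 0.3 * rng.standard_normal((n, r))            # low-rank initialization
        lr = 2e-3
        for _ in range(5000):
            X = F @ F.T
            res = np.einsum('kij,ij->k', A, X) - b       # residuals of the fit
            grad = np.einsum('k,kij->ij', res, A)        # gradient w.r.t. X
            F -= lr * (grad + grad.T) @ F                # chain rule through X = F F^T
        print(np.linalg.norm(F @ F.T - X_star) / np.linalg.norm(X_star))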

    Optimal Weighted Low-rank Matrix Recovery with Subspace Prior Information

    Matrix sensing is the problem of reconstructing a low-rank matrix from a few linear measurements. In many applications, such as collaborative filtering, the famous Netflix prize problem, and seismic data interpolation, there exists some prior information about the column and row spaces of the ground-truth low-rank matrix. In this paper, we exploit this prior information by proposing a weighted optimization problem whose objective function promotes both rank and prior subspace information. Using recent results in conic integral geometry, we obtain the unique optimal weights that minimize the required number of measurements. As simulation results confirm, the proposed convex program with optimal weights requires substantially fewer measurements than regular nuclear norm minimization.
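
    The optimal weights in this line of work depend on the principal angles between the prior and ground-truth subspaces. A small hedged illustration of measuring those angles; the heuristic weight below is a stand-in, not the paper's closed-form conic-geometry solution:

        import numpy as np
        from scipy.linalg import subspace_angles

        rng = np.random.default_rng(1)
        U = np.linalg.qr(rng.standard_normal((60, 4)))[0]   # true column subspace
        U_prior = np.linalg.qr(                             # noisy prior estimate
            U + 0.2 * rng.standard_normal((60, 4)))[0]
        theta = subspace_angles(U_prior, U)                 # principal angles (rad)
        # Heuristic: trust the prior more (weight closer to 0) when the largest
        # principal angle is small; the paper instead derives the unique
        # measurement-minimizing weights from conic integral geometry.
        w = float(np.sin(theta.max()))
        print(np.degrees(theta), w)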

    Matrix Completion with Prior Subspace Information via Maximizing Correlation

    This paper studies the problem of completing a low-rank matrix from a few of its random entries with the aid of prior information. We suggest a strategy to incorporate prior information into the standard matrix completion procedure by maximizing the correlation between the original signal and the prior information. We also establish performance guarantees for the proposed method, which show that with suitable prior information, the proposed procedure can reduce the sample complexity of standard matrix completion by a logarithmic factor. To illustrate the theory, we further analyze an important practical application where prior subspace information is available. Both synthetic and real-world experiments are provided to verify the validity of the theory.
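
    A minimal CVXPY sketch of the correlation-maximizing modification: the usual nuclear-norm objective is augmented with a term rewarding correlation with a prior matrix Phi. The names and the trade-off parameter lam are illustrative assumptions:

        import numpy as np
        import cvxpy as cp

        def mc_max_correlation(Y, M, Phi, lam=1.0):
            # M is the 0/1 observation mask; Phi encodes the prior (e.g. a
            # matrix built from prior subspace information); lam trades off
            # the nuclear norm against the correlation <Phi, X>.
            X = cp.Variable(Y.shape)
            obj = cp.Minimize(cp.normNuc(X) - lam * cp.trace(Phi.T @ X))
            constraints = [cp.multiply(M, X) == M * Y]  # match observed entries
            cp.Problem(obj, constraints).solve()
            return X.value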

    Projection-based QLP Algorithm for Efficiently Computing Low-Rank Approximation of Matrices

    Matrices with low numerical rank are omnipresent in many signal processing and data analysis applications. The pivoted QLP (p-QLP) algorithm constructs a highly accurate approximation to an input low-rank matrix, but it is computationally prohibitive for large matrices. In this paper, we introduce a new algorithm, termed Projection-based Partial QLP (PbP-QLP), that efficiently approximates the p-QLP with high accuracy. Fundamental to our work is the exploitation of randomization; in contrast to the p-QLP, PbP-QLP does not use a pivoting strategy. As such, PbP-QLP can harness modern computer architectures even better than competing randomized algorithms. The efficiency and effectiveness of our proposed PbP-QLP algorithm are investigated through various classes of synthetic and real-world data matrices.
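
    A hedged numpy sketch of the projection-based idea: randomized range finding replaces column pivoting, and the QLP shape comes from two unpivoted QR factorizations of the compressed matrix. This illustrates the strategy, not the paper's exact algorithm:

        import numpy as np

        def pbp_qlp_sketch(A, k, oversample=10):
            m, n = A.shape
            rng = np.random.default_rng(0)
            Omega = rng.standard_normal((n, k + oversample))
            Q = np.linalg.qr(A @ Omega)[0]  # randomized basis for range(A)
            B = Q.T @ A                     # small (k+oversample) x n matrix
            Q1, R = np.linalg.qr(B)         # first unpivoted QR
            V, S = np.linalg.qr(R.T)        # R = S.T @ V.T, L = S.T lower-tri
            return Q @ Q1, S.T, V           # A ~ (Q @ Q1) @ L @ V.T

        # quick check on a synthetic low-rank matrix
        A = (np.random.default_rng(1).standard_normal((300, 40))
             @ np.random.default_rng(2).standard_normal((40, 200)))
        Qf, L, P = pbp_qlp_sketch(A, k=40)
        print(np.linalg.norm(Qf @ L @ P.T - A) / np.linalg.norm(A))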

    Nonconvex Matrix Completion with Linearly Parameterized Factors

    Techniques of matrix completion aim to impute a large portion of missing entries in a data matrix from a small portion of observed ones, with broad machine learning applications including collaborative filtering, pairwise ranking, etc. In practice, additional structure is usually employed to improve the accuracy of matrix completion. Examples include subspace constraints formed by side information in collaborative filtering, and skew-symmetry in pairwise ranking. This paper performs a unified analysis of nonconvex matrix completion with linearly parameterized factorization, which covers the aforementioned examples as special cases. Importantly, uniform upper bounds on estimation errors are established for all local minima, provided that the sampling rate satisfies certain conditions determined by the rank, condition number, and incoherence parameter of the ground-truth low-rank matrix. The empirical efficiency of the proposed method is further illustrated by numerical simulations.
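
    As a concrete instance of a linear parametrization, consider side-information feature matrices A and B constraining the factors, U = A M and V = B N. A gradient-descent sketch on the observed entries, with all names and step sizes illustrative:

        import numpy as np

        def complete_with_side_info(Y, mask, A, B, r, lr=1e-3, iters=2000):
            # Factors are linearly parameterized by known feature matrices:
            # X = (A @ M) @ (B @ N).T, so only M and N are learned.
            rng = np.random.default_rng(0)
            M = 0.1 * rng.standard_normal((A.shape[1], r))
            N = 0.1 * rng.standard_normal((B.shape[1], r))
            for _ in range(iters):
                E = mask * (A @ M @ (B @ N).T - Y)  # residual on observed entries
                gM = A.T @ E @ B @ N
                gN = B.T @ E.T @ A @ M
                M -= lr * gM
                N -= lr * gN
            return A @ M @ (B @ N).T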

    Frank-Wolfe Methods with an Unbounded Feasible Region and Applications to Structured Learning

    The Frank-Wolfe (FW) method is a popular algorithm for solving large-scale convex optimization problems arising in structured statistical learning. However, the traditional Frank-Wolfe method can only be applied when the feasible region is bounded, which limits its applicability in practice. Motivated by two applications in statistical learning, the ℓ1 trend filtering problem and matrix optimization problems with generalized nuclear norm constraints, we study a family of convex optimization problems where the unbounded feasible region is the direct sum of an unbounded linear subspace and a bounded constraint set. We propose two new Frank-Wolfe methods, the unbounded Frank-Wolfe method (uFW) and the unbounded Away-Step Frank-Wolfe method (uAFW), for solving convex optimization problems with this class of unbounded feasible regions. We show that under proper regularity conditions, the unbounded Frank-Wolfe method has an O(1/k) sublinear convergence rate, and the unbounded Away-Step Frank-Wolfe method has a linear convergence rate, matching the best-known results for the Frank-Wolfe method when the feasible region is bounded. Furthermore, computational experiments indicate that our proposed methods outperform alternative solvers.
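
    A minimal sketch of the direct-sum structure for a quadratic objective f(x) = 0.5*||Ax - b||^2: the unbounded subspace component (columns of S) is resolved exactly each iteration, while the bounded component (here an ℓ1 ball, as in trend filtering) takes classical FW steps. This illustrates the setting, not the paper's exact uFW updates:

        import numpy as np

        def unbounded_fw(A, b, S, tau, iters=200):
            n = A.shape[1]
            v = np.zeros(n)                       # bounded component (l1 ball)
            for k in range(iters):
                # exact minimization over the unbounded subspace: x = S @ c + v
                c = np.linalg.lstsq(A @ S, b - A @ v, rcond=None)[0]
                x = S @ c + v
                g = A.T @ (A @ x - b)             # gradient of f at x
                i = np.argmax(np.abs(g))          # LMO over the l1 ball
                s = np.zeros(n)
                s[i] = -tau * np.sign(g[i])
                v = v + 2.0 / (k + 2) * (s - v)   # standard FW step size
            c = np.linalg.lstsq(A @ S, b - A @ v, rcond=None)[0]
            return S @ c + v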