
    Fast methods for denoising matrix completion formulations, with applications to robust seismic data interpolation

    Recent SVD-free matrix factorization formulations have enabled rank minimization for systems with millions of rows and columns, paving the way for matrix completion in extremely large-scale applications, such as seismic data interpolation. In this paper, we consider matrix completion formulations designed to hit a target data-fitting error level provided by the user, and propose an algorithm called LR-BPDN that is able to exploit factorized formulations to solve the corresponding optimization problem. Since practitioners typically have strong prior knowledge about the target error level, this innovation makes it easy to apply the algorithm in practice, leaving only the factor rank to be determined. Within the established framework, we propose two extensions that are highly relevant to solving practical challenges of data interpolation. First, we propose a weighted extension that allows known subspace information to improve the results of matrix completion formulations. We show how this weighting can be used in the context of frequency continuation, an essential aspect of seismic data interpolation. Second, we propose matrix completion formulations that are robust to large measurement errors in the available data. We illustrate the advantages of LR-BPDN on the collaborative filtering problem using the MovieLens 1M, 10M, and Netflix 100M datasets. Then, we use the new method, along with its robust and subspace re-weighted extensions, to obtain high-quality reconstructions for large-scale seismic interpolation problems with real data, even in the presence of data contamination. Comment: 26 pages, 13 figures.
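    The SVD-free idea behind such formulations can be sketched in a few lines: instead of penalizing the nuclear norm of the full matrix X, one writes X = L R^T with thin factors and alternately fits the observed entries. The snippet below is a minimal, generic illustration of this factorized approach, not the paper's LR-BPDN solver; the rank k, the fixed regularization weight lam, and the iteration count are placeholder choices (the paper instead steers the data misfit toward a user-specified error level).

```python
import numpy as np

def factorized_completion(X_obs, mask, k=10, lam=1e-2, n_iter=50, seed=0):
    """Fill a partially observed matrix with a rank-k factorization X ~ L @ R.T.

    X_obs : (m, n) array with arbitrary values at unobserved entries
    mask  : (m, n) boolean array, True where X_obs is observed
    """
    rng = np.random.default_rng(seed)
    m, n = X_obs.shape
    L = rng.standard_normal((m, k))
    R = rng.standard_normal((n, k))
    for _ in range(n_iter):
        # Update each row of L from the observed entries in that row (ridge-regularized LS).
        for i in range(m):
            cols = mask[i]
            if cols.any():
                A = R[cols]
                L[i] = np.linalg.solve(A.T @ A + lam * np.eye(k), A.T @ X_obs[i, cols])
        # Update each row of R symmetrically from the observed entries in each column.
        for j in range(n):
            rows = mask[:, j]
            if rows.any():
                A = L[rows]
                R[j] = np.linalg.solve(A.T @ A + lam * np.eye(k), A.T @ X_obs[rows, j])
    return L @ R.T
```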

    Convex recovery of continuous domain piecewise constant images from non-uniform Fourier samples

    We consider the recovery of a continuous domain piecewise constant image from its non-uniform Fourier samples using a convex matrix completion algorithm. We assume the discontinuities/edges of the image are localized to the zero level set of a bandlimited function. This assumption induces linear dependencies between the Fourier coefficients of the image, which results in a two-fold block Toeplitz matrix constructed from the Fourier coefficients being low-rank. The proposed algorithm reformulates the recovery of the unknown Fourier coefficients as a structured low-rank matrix completion problem, where the nuclear norm of the matrix is minimized subject to structure and data constraints. We show that exact recovery is possible with high probability when the edge set of the image satisfies an incoherency property. We also show that the incoherency property depends on the geometry of the edge set curve, implying a higher sampling burden for smaller curves. This paper generalizes recent work on the super-resolution recovery of isolated Diracs or signals with finite rate of innovation to the recovery of piecewise constant images. Comment: Supplementary material is attached to the main manuscript.
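    A 1-D analog makes the low-rank structure concrete: a piecewise constant signal with K jumps admits a (K+1)-tap annihilating filter, so a Hankel matrix built from (derivative-weighted) Fourier coefficients has rank at most K. The paper works with a two-fold block Toeplitz matrix from 2-D samples and recovers missing entries by nuclear-norm minimization; the check below is only an assumption-laden numerical illustration of why such structured matrices are rank-deficient.

```python
import numpy as np
from scipy.linalg import hankel

# Fourier coefficients of the *derivative* of a 1-periodic piecewise constant
# signal with K jumps: a pure sum of K complex exponentials in the frequency index k.
def derivative_fourier_coeffs(jump_locs, jump_amps, k):
    return sum(a * np.exp(-2j * np.pi * k * t) for t, a in zip(jump_locs, jump_amps))

k = np.arange(-20, 21)                                              # 41 low-pass indices
g = derivative_fourier_coeffs([0.1, 0.35, 0.7], [1.0, -0.5, 2.0], k)  # K = 3 jumps

# Because a length-(K+1) annihilating filter exists, any Hankel matrix formed from
# consecutive coefficients g[k] has rank at most K.
H = hankel(g[:25], g[24:])
print(np.linalg.matrix_rank(H))   # -> 3
```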

    Optimization on the Hierarchical Tucker manifold - applications to tensor completion

    In this work, we develop an optimization framework for problems whose solutions are well-approximated by Hierarchical Tucker (HT) tensors, an efficient structured tensor format based on recursive subspace factorizations. By exploiting the smooth manifold structure of these tensors, we construct standard optimization algorithms such as Steepest Descent and Conjugate Gradient for completing tensors from missing entries. Our algorithmic framework is fast and scalable to large problem sizes because, unlike other methods, we do not require SVDs on the ambient tensor space. Moreover, we exploit the structure of the Gramian matrices associated with the HT format to regularize our problem, reducing overfitting for high subsampling ratios. We also find that the organization of the tensor can have a major impact on completion from realistic seismic acquisition geometries. These samplings are far from the idealized randomized samplings usually considered in the literature, but are realizable in practical scenarios. Using these algorithms, we successfully interpolate large-scale seismic data sets and demonstrate the competitive computational scaling of our algorithms as the problem sizes grow.

    Applications of Compressed Sensing in Communications Networks

    This paper presents a tutorial for CS applications in communications networks. Shannon's sampling theorem states that to recover a signal, the sampling rate must be at least the Nyquist rate. Compressed sensing (CS) is based on the surprising fact that to recover a signal that is sparse in certain representations, one can sample at a rate far below the Nyquist rate. Since its inception in 2006, CS has attracted much interest in the research community and found wide-ranging applications from astronomy, biology, communications, image and video processing, and medicine, to radar. CS has also found successful applications in communications networks, where it has been applied to the detection and estimation of wireless signals, source coding, multi-access channels, data collection in sensor networks, and network monitoring. In many cases, CS was shown to bring performance gains on the order of 10X. We believe this is just the beginning of CS applications in communications networks, and the future will see even more fruitful applications of CS in our field. Comment: 18 pages.
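    The recovery mechanism underlying most of the surveyed applications is sparse reconstruction from a small number of linear measurements y = A x. The sketch below is a generic example, not taken from the tutorial: it solves the l1-regularized least-squares problem with the iterative soft-thresholding algorithm (ISTA) for a random Gaussian sensing matrix; the problem sizes and the weight lam are illustrative choices.

```python
import numpy as np

def ista(A, y, lam=0.05, n_iter=500):
    """Solve min_x 0.5*||A x - y||^2 + lam*||x||_1 by iterative soft-thresholding."""
    L = np.linalg.norm(A, 2) ** 2              # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        x = x - (A.T @ (A @ x - y)) / L        # gradient step on the quadratic term
        x = np.sign(x) * np.maximum(np.abs(x) - lam / L, 0.0)   # soft threshold
    return x

rng = np.random.default_rng(0)
n, m, k = 256, 64, 5                           # ambient dim, measurements, sparsity
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
A = rng.standard_normal((m, n)) / np.sqrt(m)   # random Gaussian sensing matrix
y = A @ x_true                                 # m << n linear measurements
x_hat = ista(A, y, lam=0.01)
print(np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))   # small relative error
```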

    k-Space Deep Learning for Reference-free EPI Ghost Correction

    Nyquist ghost artifacts in EPI originate from a phase mismatch between the even and odd echoes. However, conventional correction methods using reference scans often produce erroneous results, especially in high-field MRI, due to the non-linear and time-varying local magnetic field changes. Recently, it was shown that the problem of ghost correction can be reformulated as a k-space interpolation problem that can be solved using structured low-rank Hankel matrix approaches. Another recent work showed that data-driven Hankel matrix decomposition can be reformulated to exhibit structures similar to a deep convolutional neural network. By synergistically combining these findings, we propose a k-space deep learning approach that immediately corrects the phase mismatch without a reference scan in both accelerated and non-accelerated EPI acquisitions. To take advantage of the even and odd-phase directional redundancy, the k-space data is divided into two channels configured with even and odd phase encodings. The redundancies between coils are also exploited by stacking the multi-coil k-space data into additional input channels. Then, our k-space ghost correction network is trained to learn the interpolation kernel that estimates the missing virtual k-space data. For accelerated EPI data, the same neural network is trained to directly estimate the interpolation kernels for k-space data missing due to both ghosting and subsampling. Reconstruction results using 3T and 7T in-vivo data showed that the proposed method yields better image quality than existing methods at a much shorter computing time. The proposed k-space deep learning for EPI ghost correction is highly robust and fast, and can be combined with acceleration, so that it can be used as a promising correction tool for high-field MRI without changing the current acquisition protocol. Comment: To appear in Magnetic Resonance in Medicine.
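    The channel layout the abstract describes can be illustrated with a few lines of array manipulation: each coil's k-space is split into its even and odd phase-encoding lines (zero-filling the complementary lines) and all coils are stacked along the channel axis. The sketch below only shows this data reorganization, not the network or its training; the [coils, PE lines, readout] layout and the real/imaginary channel split are assumptions for illustration.

```python
import numpy as np

def split_even_odd_channels(kspace):
    """Reorganize multi-coil EPI k-space into even/odd phase-encoding channels.

    kspace : complex array of shape (n_coils, n_pe, n_read)
    returns: real array of shape (4 * n_coils, n_pe, n_read)
             (even/odd split, with real and imaginary parts as separate channels)
    """
    even = np.zeros_like(kspace)
    odd = np.zeros_like(kspace)
    even[:, 0::2, :] = kspace[:, 0::2, :]      # keep even phase-encoding lines
    odd[:, 1::2, :] = kspace[:, 1::2, :]       # keep odd phase-encoding lines
    chans = np.concatenate([even, odd], axis=0)               # coils x {even, odd}
    return np.concatenate([chans.real, chans.imag], axis=0)   # real-valued network input

kspace = np.random.randn(8, 128, 128) + 1j * np.random.randn(8, 128, 128)
net_input = split_even_odd_channels(kspace)
print(net_input.shape)   # (32, 128, 128)
```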

    A Super-Resolution Framework for Tensor Decomposition

    This work considers a super-resolution framework for overcomplete tensor decomposition. Specifically, we view tensor decomposition as a super-resolution problem of recovering a sum of Dirac measures on the sphere and solve it by minimizing a continuous analog of the $\ell_1$ norm on the space of measures. The optimal value of this optimization defines the tensor nuclear norm. Similar to the separation condition in the super-resolution problem, by explicitly constructing a dual certificate, we develop incoherence conditions on the tensor factors under which they form the unique optimal solution of the continuous analog of $\ell_1$-norm minimization. Remarkably, the derived incoherence conditions are satisfied with high probability by random tensor factors uniformly distributed on the sphere, implying global identifiability of random tensor factors.

    Deep Learning Methods for Parallel Magnetic Resonance Image Reconstruction

    Following the success of deep learning in a wide range of applications, neural network-based machine learning techniques have received interest as a means of accelerating magnetic resonance imaging (MRI). A number of ideas inspired by deep learning techniques from computer vision and image processing have been successfully applied to non-linear image reconstruction in the spirit of compressed sensing for both low-dose computed tomography and accelerated MRI. The additional integration of multi-coil information to recover missing k-space lines in the MRI reconstruction process is studied less frequently, even though it is the de facto standard for currently used accelerated MR acquisitions. This manuscript provides an overview of the recent machine learning approaches that have been proposed specifically for improving parallel imaging. A general background introduction to parallel MRI is given, structured around the classical view of image-space and k-space based methods. Both linear and non-linear methods are covered, followed by a discussion of recent efforts to further improve parallel imaging using machine learning, and specifically using artificial neural networks. Image-domain techniques that introduce improved regularizers are covered as well as k-space based methods, where the focus is on better interpolation strategies using neural networks. Issues and open problems are discussed, as well as recent efforts to produce open datasets and benchmarks for the community. Comment: 14 pages, 7 figures.

    Reconstruction by Calibration over Tensors for Multi-Coil Multi-Acquisition Balanced SSFP Imaging

    Purpose: To develop a rapid imaging framework for balanced steady-state free precession (bSSFP) that jointly reconstructs undersampled data (by a factor of R) across multiple coils (D) and multiple acquisitions (N). To devise a multi-acquisition coil compression technique for improved computational efficiency. Methods: The bSSFP image for a given coil and acquisition is modeled to be modulated by a coil sensitivity and a bSSFP profile. The proposed reconstruction by calibration over tensors (ReCat) recovers missing data by tensor interpolation over the coil and acquisition dimensions. Coil compression is achieved using a new method based on multilinear singular value decomposition (MLCC). ReCat is compared with iterative self-consistent parallel imaging (SPIRiT) and profile encoding (PE-SSFP) reconstructions. Results: Compared to parallel imaging or profile-encoding methods, ReCat attains sensitive depiction of high-spatial-frequency information even at higher R. In the brain, ReCat improves peak SNR (PSNR) by 1.1±1.0 dB over SPIRiT and by 0.9±0.3 dB over PE-SSFP (mean±std across subjects; average for N=2-8, R=8-16). Furthermore, reconstructions based on MLCC achieve 0.8±0.6 dB higher PSNR compared to those based on geometric coil compression (GCC) (average for N=2-8, R=4-16). Conclusion: ReCat is a promising acceleration framework for banding-artifact-free bSSFP imaging with high image quality, and MLCC offers improved computational efficiency for tensor-based reconstructions. Comment: To be published in Magnetic Resonance in Medicine. http://onlinelibrary.wiley.com/doi/10.1002/mrm.26902/abstrac
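    The coil-compression step can be pictured with a plain multilinear SVD: unfold the (x, y, coil, acquisition) data along the coil mode, compute an SVD, and project onto the leading left singular vectors. The snippet below is a generic mode-unfolding compression in numpy, not the paper's MLCC procedure, which may differ in how the compression matrices are calibrated and applied; array shapes and the number of virtual coils are illustrative.

```python
import numpy as np

def compress_coils_mlsvd(data, n_virtual):
    """Compress the coil dimension of multi-acquisition k-space data.

    data      : complex array of shape (nx, ny, n_coils, n_acq)
    n_virtual : number of virtual coils to keep
    """
    nc = data.shape[2]
    # Mode-3 unfolding: each column is the coil profile of one (x, y, acquisition) sample.
    unfold = data.transpose(2, 0, 1, 3).reshape(nc, -1)
    U, _, _ = np.linalg.svd(unfold, full_matrices=False)
    W = U[:, :n_virtual]                        # compression matrix (n_coils x n_virtual)
    compressed = np.einsum('xycn,cv->xyvn', data, W.conj())   # project along the coil mode
    return compressed, W

data = np.random.randn(64, 64, 32, 4) + 1j * np.random.randn(64, 64, 32, 4)
small, W = compress_coils_mlsvd(data, n_virtual=8)
print(small.shape)   # (64, 64, 8, 4)
```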

    Value function approximation via low-rank models

    We propose a novel value function approximation technique for Markov decision processes. We consider the problem of compactly representing the state-action value function using a low-rank and sparse matrix model. The problem is to decompose a matrix that encodes the true value function into low-rank and sparse components, and we achieve this using Robust Principal Component Analysis (PCA). Under minimal assumptions, this Robust PCA problem can be solved exactly via the Principal Component Pursuit convex optimization problem. We test the procedure on several examples and demonstrate that our method yields approximations essentially identical to the true function. Comment: arXiv admin note: substantial text overlap with arXiv:0912.3599 by other authors.
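    Principal Component Pursuit splits the value matrix Q into a low-rank part L and a sparse part S by solving min ||L||_* + lam*||S||_1 subject to L + S = Q. Below is a minimal, standard augmented-Lagrangian solver for this problem (conventional default weight lam = 1/sqrt(max(m, n)), a heuristic penalty mu, and a fixed iteration count instead of a stopping test); it is a sketch of the generic method, not the paper's exact implementation.

```python
import numpy as np

def robust_pca(Q, lam=None, mu=None, n_iter=200):
    """Decompose Q into low-rank L plus sparse S via Principal Component Pursuit."""
    m, n = Q.shape
    lam = lam if lam is not None else 1.0 / np.sqrt(max(m, n))   # standard l1 weight
    mu = mu if mu is not None else 0.25 * m * n / np.sum(np.abs(Q))  # heuristic penalty
    S = np.zeros_like(Q)
    Y = np.zeros_like(Q)                        # dual variable for L + S = Q
    for _ in range(n_iter):
        # L-update: singular-value thresholding of Q - S + Y/mu
        U, sig, Vt = np.linalg.svd(Q - S + Y / mu, full_matrices=False)
        L = U @ np.diag(np.maximum(sig - 1.0 / mu, 0.0)) @ Vt
        # S-update: entrywise soft-thresholding of Q - L + Y/mu
        T = Q - L + Y / mu
        S = np.sign(T) * np.maximum(np.abs(T) - lam / mu, 0.0)
        # dual ascent on the equality constraint
        Y = Y + mu * (Q - L - S)
    return L, S
```

    The low-rank factor then serves as the compact representation of the value function, while the sparse part absorbs entries that deviate from the low-rank structure.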

    Multi-dimensional imaging data recovery via minimizing the partial sum of tubal nuclear norm

    In this paper, we investigate tensor recovery problems within the tensor singular value decomposition (t-SVD) framework. We propose the partial sum of the tubal nuclear norm (PSTNN) of a tensor. The PSTNN is a surrogate of the tensor tubal multi-rank. We build two PSTNN-based minimization models for two typical tensor recovery problems, i.e., tensor completion and tensor principal component analysis. We give two algorithms based on the alternating direction method of multipliers (ADMM) to solve the proposed PSTNN-based tensor recovery models. Experimental results on synthetic and real-world data reveal the superiority of the proposed PSTNN.
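    Within the t-SVD framework, the basic proximal step in such ADMM schemes is applied slice by slice in the Fourier domain along the third mode. The sketch below implements plain tubal singular-value thresholding with an option to leave the leading singular values of each frontal slice untouched, which is only a rough illustration of the partial-sum idea; the function name, parameters, and the exact penalty used in the paper's algorithms are assumptions.

```python
import numpy as np

def tsvd_threshold(X, tau, keep=0):
    """Singular-value thresholding of a 3-way tensor in the t-SVD sense.

    X    : real array of shape (n1, n2, n3)
    tau  : threshold applied to the singular values of each frontal slice
    keep : number of leading singular values left untouched per slice
           (keep=0 gives the ordinary tubal nuclear norm prox;
            keep>0 mimics a partial-sum style penalty)
    """
    Xf = np.fft.fft(X, axis=2)                  # frontal slices in the Fourier domain
    Yf = np.zeros_like(Xf)
    for k in range(X.shape[2]):
        U, s, Vt = np.linalg.svd(Xf[:, :, k], full_matrices=False)
        s_thr = s.copy()
        s_thr[keep:] = np.maximum(s[keep:] - tau, 0.0)   # shrink only trailing values
        Yf[:, :, k] = (U * s_thr) @ Vt
    return np.real(np.fft.ifft(Yf, axis=2))

X = np.random.randn(30, 30, 10)
Y = tsvd_threshold(X, tau=1.0, keep=2)
print(Y.shape)   # (30, 30, 10)
```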