Optimization Guarantees for ISTA and ADMM Based Unfolded Networks
Unfolding techniques have recently been widely used to solve inverse problems in various applications. In this paper, we study optimization guarantees for two popular unfolded networks: those derived from the iterative soft thresholding algorithm (ISTA) and those derived from the Alternating Direction Method of Multipliers (ADMM). Our guarantees, which leverage the Polyak-Lojasiewicz* (PL*) condition, state that the training (empirical) loss decreases to zero as the number of gradient descent epochs increases, provided that the number of training samples is less than a threshold that depends on various quantities underlying the desired information processing task. Our guarantees also show that this threshold is larger for unfolded ISTA than for unfolded ADMM, suggesting that there are regimes of training-sample counts in which the training error of unfolded ADMM does not converge to zero whereas that of unfolded ISTA does. Numerical results are provided that back up our theoretical findings.
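An unfolded ISTA network treats each ISTA iteration as one network layer. A minimal numpy sketch (with the weights fixed to their classical ISTA values; in a trained unfolded network the matrices and thresholds would be learned per layer, and all names here are illustrative, not from the paper):

```python
import numpy as np

def soft_threshold(v, theta):
    # Element-wise soft-thresholding: the proximal operator of the l1 norm.
    return np.sign(v) * np.maximum(np.abs(v) - theta, 0.0)

def ista_unfolded(y, A, n_layers=500, theta=0.01):
    # Each "layer" computes soft(W1 @ y + W2 @ x, theta / L). Classical
    # ISTA fixes W1 = A^T / L and W2 = I - A^T A / L; an unfolded network
    # would instead learn W1, W2, and the threshold at every layer.
    L = np.linalg.norm(A, 2) ** 2            # Lipschitz constant of the gradient
    W1 = A.T / L
    W2 = np.eye(A.shape[1]) - A.T @ A / L
    x = np.zeros(A.shape[1])
    for _ in range(n_layers):
        x = soft_threshold(W1 @ y + W2 @ x, theta / L)
    return x

# Toy sparse recovery problem: y = A @ x_true with a 2-sparse x_true.
rng = np.random.default_rng(0)
A = rng.standard_normal((30, 50)) / np.sqrt(30)
x_true = np.zeros(50)
x_true[[4, 17]] = [1.5, -2.0]
y = A @ x_true
x_hat = ista_unfolded(y, A)
```

With noiseless measurements and a small threshold, the iterate recovers the support of the 2-sparse signal; the depth of the "network" is just the number of iterations.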
ATASI-Net: An Efficient Sparse Reconstruction Network for Tomographic SAR Imaging with Adaptive Threshold
The tomographic SAR (TomoSAR) technique has attracted remarkable interest
for its ability to resolve scatterers in three dimensions along the elevation
direction via a stack of SAR images collected from different cross-track
angles. Compressed sensing (CS)-based algorithms have been introduced into
TomoSAR for their super-resolution ability with limited samples. However,
conventional CS-based methods suffer from several drawbacks, including weak
noise resistance, high computational complexity, and complex parameter
fine-tuning.
Aiming at efficient TomoSAR imaging, this paper proposes a novel efficient
sparse unfolding network based on the analytic learned iterative shrinkage
thresholding algorithm (ALISTA) architecture with adaptive threshold, named
Adaptive Threshold ALISTA-based Sparse Imaging Network (ATASI-Net). The weight
matrix in each layer of ATASI-Net is pre-computed as the solution of an
off-line optimization problem, leaving only two scalar parameters to be learned
from data, which significantly simplifies the training stage. In addition,
an adaptive threshold is introduced for each azimuth-range pixel, making the
threshold shrinkage not only layer-varied but also element-wise. Moreover,
the final learned thresholds can be visualized and combined with the SAR
image semantics for mutual feedback. Finally, extensive experiments on
simulated and real data demonstrate the effectiveness and efficiency of the
proposed method.
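The element-wise, layer-varied shrinkage can be sketched as follows: the analysis matrix is fixed offline, and each layer applies a gradient step followed by soft-thresholding with a threshold array the same shape as the unknown, so every pixel gets its own shrinkage. This is a simplified illustration under my own naming, not the paper's exact ATASI-Net update:

```python
import numpy as np

def soft_threshold(v, theta):
    # theta may be a scalar or an array broadcastable to v,
    # giving per-element (per-pixel) shrinkage.
    return np.sign(v) * np.maximum(np.abs(v) - theta, 0.0)

def adaptive_layer(x, y, W, gamma, theta_map):
    # One ALISTA-style layer: W is pre-computed offline; only the
    # scalar step size gamma and the threshold map would be learned.
    r = x - gamma * (W.T @ (W @ x - y))      # gradient step with fixed W
    return soft_threshold(r, theta_map)      # element-wise threshold

rng = np.random.default_rng(1)
W = rng.standard_normal((20, 32)) / np.sqrt(20)
x_true = np.zeros(32)
x_true[[3, 9, 20]] = [2.0, -1.0, 1.5]
y = W @ x_true

gamma = 1.0 / np.linalg.norm(W, 2) ** 2      # stable step size
x = np.zeros(32)
for layer in range(200):
    theta_map = np.full(32, 0.02)            # stand-in for a learned, per-layer map
    x = adaptive_layer(x, y, W, gamma, theta_map)
x_hat = x
```

Because `theta_map` is an array rather than a scalar, a learned version can shrink noisy pixels harder than pixels known (from SAR image semantics) to contain scatterers.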
Learning-based Optimization for Signal and Image Processing
Incorporating machine learning techniques into optimization problems and solvers has attracted increasing attention. Given a particular type of optimization problem that must be solved repeatedly, machine learning techniques can identify features of this class of problems and yield algorithms with excellent performance. This thesis deals with algorithms and convergence analysis in learning-based optimization in three aspects: learning dictionaries, learning optimization solvers, and learning regularizers.

Learning dictionaries for sparse coding is significant for signal processing. Convolutional sparse coding is a form of sparse coding with a structured, translation-invariant dictionary. Most convolutional dictionary learning algorithms to date operate in batch mode, requiring simultaneous access to all training images during the learning process, which results in very high memory usage and severely limits the size of the training data that can be used. I proposed two online convolutional dictionary learning algorithms that offer far better scaling of memory and computational cost than batch methods, and provided a rigorous theoretical analysis of these methods.

Learning fast solvers for optimization is a rising research topic. In recent years, unfolding iterative algorithms as neural networks has been an empirical success in solving sparse recovery problems. However, its theoretical understanding is still immature, which prevents us from fully exploiting the power of neural networks. I studied unfolded ISTA (Iterative Shrinkage Thresholding Algorithm) for sparse signal recovery and established its convergence. Based on the properties of the parameters required for convergence, the model can be significantly simplified and, consequently, has a much lower training cost and better recovery performance.

Learning regularizers or priors improves the performance of optimization solvers, especially for signal and image processing tasks.
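The online dictionary-learning idea mentioned above can be sketched in a simplified, non-convolutional form in the style of classical online dictionary learning: samples are processed one at a time, and only fixed-size sufficient statistics are kept, so memory does not grow with the number of training samples. All names and parameter choices here are illustrative, not the thesis algorithms:

```python
import numpy as np

def sparse_code(y, D, lam=0.1, n_iter=50):
    # Sparse-code one sample with a few ISTA steps (inner solver).
    L = np.linalg.norm(D, 2) ** 2
    x = np.zeros(D.shape[1])
    for _ in range(n_iter):
        x = x - (D.T @ (D @ x - y)) / L
        x = np.sign(x) * np.maximum(np.abs(x) - lam / L, 0.0)
    return x

def online_dict_learning(samples, n_atoms, seed=0):
    # Stream samples one by one, accumulating A = sum x x^T and
    # B = sum y x^T; the dictionary is refreshed atom-by-atom from
    # these statistics, so memory is independent of the sample count.
    rng = np.random.default_rng(seed)
    dim = samples.shape[1]
    D = rng.standard_normal((dim, n_atoms))
    D /= np.linalg.norm(D, axis=0)
    A = np.zeros((n_atoms, n_atoms))
    B = np.zeros((dim, n_atoms))
    for y in samples:
        x = sparse_code(y, D)
        A += np.outer(x, x)
        B += np.outer(y, x)
        for j in range(n_atoms):          # block-coordinate atom update
            if A[j, j] > 1e-10:
                u = D[:, j] + (B[:, j] - D @ A[:, j]) / A[j, j]
                D[:, j] = u / max(np.linalg.norm(u), 1.0)
    return D

# Stream a few toy samples through the learner.
rng = np.random.default_rng(3)
samples = rng.standard_normal((30, 16))
D_learned = online_dict_learning(samples, n_atoms=24)
```

The key design point is that the per-sample update touches only `A`, `B`, and `D`, whose sizes are fixed, in contrast to batch methods that must hold every training image at once.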
Plug-and-play (PnP) is a non-convex framework that integrates modern priors, such as BM3D or deep learning-based denoisers, into ADMM or other proximal algorithms. Although PnP has recently been studied extensively with great empirical success, theoretical analysis addressing even the most basic question of convergence has been insufficient. In this thesis, the theoretical convergence of PnP-FBS and PnP-ADMM was established, without using diminishing step sizes, under a certain Lipschitz condition on the denoisers. Furthermore, real spectral normalization was proposed for training deep learning-based denoisers to satisfy the proposed Lipschitz condition.
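Structurally, PnP-ADMM is ordinary ADMM with the proximal step of the regularizer replaced by a denoiser. A minimal numpy sketch, with a 3-tap moving average standing in for BM3D or a trained deep denoiser (the problem setup and all names are illustrative):

```python
import numpy as np

def prox_data(v, rho, AtA, Aty):
    # Proximal step for f(x) = 0.5 * ||A x - y||^2:
    # argmin_x f(x) + (rho/2) ||x - v||^2, in closed form.
    return np.linalg.solve(AtA + rho * np.eye(AtA.shape[0]), Aty + rho * v)

def denoise(v):
    # Stand-in denoiser: a 3-tap moving average. In PnP this slot
    # holds BM3D or a deep learning-based denoiser.
    return np.convolve(v, np.ones(3) / 3.0, mode="same")

def pnp_admm(A, y, rho=1.0, n_iter=50):
    n = A.shape[1]
    AtA, Aty = A.T @ A, A.T @ y
    x = z = u = np.zeros(n)
    for _ in range(n_iter):
        x = prox_data(z - u, rho, AtA, Aty)  # data-fidelity step
        z = denoise(x + u)                   # denoiser replaces the prior's prox
        u = u + x - z                        # dual update
    return z

# Toy problem: smooth signal observed through a mildly perturbed identity.
rng = np.random.default_rng(2)
n = 64
t = np.linspace(0, 1, n)
x_true = np.sin(2 * np.pi * t)
A = np.eye(n) + 0.01 * rng.standard_normal((n, n))
y = A @ x_true + 0.05 * rng.standard_normal(n)
x_hat = pnp_admm(A, y)
```

The Lipschitz condition mentioned in the abstract constrains exactly the `denoise` slot: if the denoiser is suitably nonexpansive, the three-step iteration above converges without diminishing step sizes.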
Theoretical Perspectives on Deep Learning Methods in Inverse Problems
In recent years, there have been significant advances
in the use of deep learning methods in inverse problems such as
denoising, compressive sensing, inpainting, and super-resolution.
While this line of work has predominantly been driven by
practical algorithms and experiments, it has also given rise to
a variety of intriguing theoretical problems. In this paper, we
survey some of the prominent theoretical developments in this line
of work, focusing in particular on generative priors, untrained
neural network priors, and unfolding algorithms. In addition to
summarizing existing results on these topics, we highlight several
ongoing challenges and open problems.
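Of the three topics surveyed, the untrained-network prior is perhaps the easiest to illustrate: a small, randomly initialized network is fit to the observed data by gradient descent, with early stopping as the only regularizer. The dense two-layer toy below only demonstrates the fitting procedure; real deep-image-prior results depend on convolutional architectures, and everything here is my own illustrative setup:

```python
import numpy as np

rng = np.random.default_rng(4)
n = 128
t = np.linspace(0, 1, n)
clean = np.sin(2 * np.pi * t)
noisy = clean + 0.3 * rng.standard_normal(n)

# Untrained network: out = W2 @ tanh(W1 @ z), with z a fixed random input.
z = rng.standard_normal(16)
W1 = 0.1 * rng.standard_normal((32, 16))
W2 = 0.1 * rng.standard_normal((n, 32))

lr, losses = 0.01, []
for step in range(200):                          # early stopping: few steps
    a = np.tanh(W1 @ z)
    out = W2 @ a
    grad_out = out - noisy                       # grad of 0.5*||out - noisy||^2
    losses.append(0.5 * np.sum(grad_out ** 2))
    ga = W2.T @ grad_out                         # backprop through W2
    W2 -= lr * np.outer(grad_out, a)
    W1 -= lr * np.outer(ga * (1.0 - a ** 2), z)  # backprop through tanh
x_fit = W2 @ np.tanh(W1 @ z)
```

The theoretical questions the survey covers concern exactly this procedure: what the network architecture implicitly biases the fit toward, and when stopping early provably recovers the clean signal rather than the noise.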