SALSA-Net: Explainable Deep Unrolling Networks for Compressed Sensing
Deep unrolling networks (DUNs) have emerged as a promising approach to solving compressed sensing (CS) problems, owing to their superior explainability, speed, and performance compared to classical deep network models. However, further improving CS reconstruction efficiency and accuracy remains a principal challenge. In this paper, we propose a novel deep unrolling model, SALSA-Net, to solve the image CS problem. The network architecture of SALSA-Net is inspired by unrolling and truncating the split augmented Lagrangian shrinkage algorithm (SALSA), which is used to solve sparsity-induced CS reconstruction problems. SALSA-Net inherits the interpretability of the SALSA algorithm while incorporating the learning ability and fast reconstruction speed of deep neural networks. By converting the SALSA algorithm into a deep network structure, SALSA-Net comprises a gradient update module, a threshold denoising module, and an auxiliary update module. All parameters, including the shrinkage thresholds and gradient steps, are optimized through end-to-end learning and are subject to forward constraints to ensure faster convergence. Furthermore, we introduce learned sampling in place of traditional sampling methods, so that the sampling matrix better preserves the feature information of the original signal and improves sampling efficiency. Experimental results demonstrate that SALSA-Net achieves significant reconstruction performance gains over state-of-the-art methods while inheriting the advantages of explainable recovery and high speed from the DUNs paradigm.
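The three modules described above can be sketched as a single unrolled layer. The following NumPy sketch is only illustrative, assuming soft-thresholding as the denoiser and a particular update order; the function names, the learnable parameters (gradient step `alpha`, shrinkage threshold `theta`, auxiliary step `mu`), and the omission of the learned sampling stage are simplifications, not the paper's exact architecture:

```python
import numpy as np

def soft_threshold(v, theta):
    # shrinkage denoiser: sign(v) * max(|v| - theta, 0)
    return np.sign(v) * np.maximum(np.abs(v) - theta, 0.0)

def salsa_layer(x, d, y, A, alpha, theta, mu):
    """One unrolled SALSA-style iteration. In a trained network, alpha
    (gradient step), theta (shrinkage threshold), and mu (auxiliary step)
    would be learned end-to-end, one set per layer."""
    # gradient update module: push the estimate toward data consistency
    r = x - alpha * A.T @ (A @ x - y)
    # threshold denoising module: shrink the split variable
    v = soft_threshold(r + d, theta)
    # auxiliary update module: update the multiplier-like variable
    d = d + mu * (r - v)
    return v, d
```

Stacking a fixed number of such layers and training their parameters jointly is what turns the truncated iterative algorithm into a feed-forward network.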
Learning to Warm-Start Fixed-Point Optimization Algorithms
We introduce a machine-learning framework to warm-start fixed-point
optimization algorithms. Our architecture consists of a neural network mapping
problem parameters to warm starts, followed by a predefined number of
fixed-point iterations. We propose two loss functions designed to either
minimize the fixed-point residual or the distance to a ground truth solution.
In this way, the neural network predicts warm starts with the end-to-end goal
of minimizing the downstream loss. An important feature of our architecture is
its flexibility, in that it can predict a warm start for fixed-point algorithms
run for any number of steps, without being limited to the number of steps it
has been trained on. We provide PAC-Bayes generalization bounds on unseen data
for common classes of fixed-point operators: contractive, linearly convergent,
and averaged. Applying this framework to well-known applications in control,
statistics, and signal processing, we observe a significant reduction in the
number of iterations and solution time required to solve these problems,
through learned warm starts.
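The architecture can be illustrated on a toy contractive affine operator. In this sketch the linear map `W` stands in for the neural network that predicts warm starts, and the names `fixed_point_op`, `warm_start_loss`, and the matrix `M` are hypothetical choices for illustration; the loss shown is the fixed-point-residual variant (the other variant would measure distance to a ground-truth solution):

```python
import numpy as np

# contractive linear part of the operator (spectral radius < 1)
M = np.array([[0.5, 0.1], [0.0, 0.4]])

def fixed_point_op(z, theta):
    # affine fixed-point operator T(z) = M z + theta, parameterized by theta
    return M @ z + theta

def warm_start_loss(W, theta, k=5):
    """Predict a warm start z0 = W @ theta (a linear stand-in for the
    neural network), run k fixed-point iterations, and return the
    fixed-point residual ||T(z_k) - z_k|| used as the training loss."""
    z = W @ theta
    for _ in range(k):
        z = fixed_point_op(z, theta)
    return np.linalg.norm(fixed_point_op(z, theta) - z)
```

Training would minimize this residual over a distribution of problem parameters `theta`. Because the prediction is just an initial point, the same learned warm start can be handed to the fixed-point algorithm run for any number of steps, matching the flexibility the abstract highlights.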