Standard regularization methods that are used to compute solutions to
ill-posed inverse problems require knowledge of the forward model. In many
real-life applications, the forward model is not known, but training data is
readily available. In this paper, we develop a new framework that uses training
data, as a substitute for knowledge of the forward model, to compute an optimal
low-rank regularized inverse matrix directly, allowing for very fast
computation of a regularized solution. We consider a statistical framework
based on Bayes and empirical Bayes risk minimization to analyze theoretical
properties of the problem. We propose an efficient rank update approach for
computing an optimal low-rank regularized inverse matrix for various error
measures. Numerical experiments demonstrate the benefits and potential
applications of our approach to problems in signal and image processing.
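To make the central idea concrete, the following is a minimal sketch of one common instantiation of the empirical risk minimization described above: given training pairs of true signals and observations, compute the rank-r matrix Z that minimizes the sum of squared reconstruction errors, here via the closed-form reduced-rank regression solution under a squared-error (Frobenius-norm) loss. The function name and variable names are hypothetical, and the paper's actual error measures, statistical framework, and rank-update algorithm may differ; this is only an illustrative sketch, not the authors' method.

```python
import numpy as np

def low_rank_regularized_inverse(B, X, r):
    """Sketch: rank-r matrix Z minimizing ||Z B - X||_F^2 over the training set.

    B : (m, n) array whose columns are observed (noisy) data b_i
    X : (d, n) array whose columns are the corresponding true signals x_i
    r : target rank of the regularized inverse matrix
    """
    # Unconstrained least-squares reconstruction matrix: Z_ls = X B^+
    Z_ls = X @ np.linalg.pinv(B)
    # Reduced-rank regression: project the fitted values X B^+ B onto
    # their top-r left singular subspace to obtain the optimal rank-r Z
    U, _, _ = np.linalg.svd(Z_ls @ B, full_matrices=False)
    Ur = U[:, :r]
    return Ur @ (Ur.T @ Z_ls)

# Hypothetical usage: once Z is computed offline from training data,
# regularized solutions for new observations cost only a matrix-vector product.
rng = np.random.default_rng(0)
X_train = rng.standard_normal((50, 200))          # true training signals
B_train = X_train[:40] + 0.1 * rng.standard_normal((40, 200))  # observations
Z = low_rank_regularized_inverse(B_train, X_train, r=10)
x_rec = Z @ B_train[:, 0]                         # fast regularized reconstruction
```

The point of the sketch is the workflow the abstract emphasizes: all expensive computation (here a pseudoinverse and an SVD) happens once on the training data, after which each new regularized solution requires only multiplication by the precomputed low-rank matrix.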