CTprintNet: An Accurate and Stable Deep Unfolding Approach for Few-View CT Reconstruction
In this paper, we propose a new deep learning approach based on unfolded neural networks for the reconstruction of X-ray computed tomography images from few views. We start from a model-based approach in a compressed sensing framework, described by the minimization of a least squares function plus an edge-preserving prior on the solution. In particular, the proposed network automatically estimates the internal parameters of a proximal interior point method for the solution of the optimization problem. The numerical tests performed on both a synthetic and a real dataset show the effectiveness of the framework in terms of accuracy and robustness to noise on the input sinogram when compared to other data-driven approaches.
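As a rough illustration of the deep-unfolding idea behind approaches like this one, the sketch below (hypothetical code, not CTprintNet itself) turns a fixed number of gradient-plus-prior iterations for a least-squares-plus-edge-preserving objective into network layers, with the per-layer stepsize and regularization weight as trainable parameters; the paper's proximal interior point scheme and its barrier parameters are not reproduced here.

import torch
import torch.nn as nn

class UnfoldedRecon(nn.Module):
    def __init__(self, A, n_layers=10):
        super().__init__()
        self.A = A                                               # forward (projection) operator, m x n
        self.step = nn.Parameter(torch.full((n_layers,), 1e-2))  # per-layer stepsizes
        self.lam = nn.Parameter(torch.full((n_layers,), 1e-1))   # per-layer prior weights

    def prior_grad(self, x, eps=1e-3):
        # gradient of a smoothed edge-preserving (total-variation-like) penalty, 1D for brevity
        d = x[1:] - x[:-1]
        g = d / torch.sqrt(d * d + eps)
        zero = g.new_zeros(1)
        return torch.cat([-g, zero]) + torch.cat([zero, g])

    def forward(self, y):
        x = y.new_zeros(self.A.shape[1])
        for t in range(len(self.step)):
            data_grad = self.A.T @ (self.A @ x - y)              # gradient of 0.5*||A x - y||^2
            x = x - self.step[t] * (data_grad + self.lam[t] * self.prior_grad(x))
        return x

# usage (hypothetical sizes): A = torch.randn(30, 100); y = A @ torch.randn(100)
# x_hat = UnfoldedRecon(A)(y)   # all stepsizes/weights trainable with any torch optimizer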
Image Deblurring: Comparing the Performance of Analytical and Learning Methods
Blurring is a common degradation during image formation, caused by factors such as motion between the camera and the object, atmospheric turbulence, or the camera failing to keep the object in focus. Each pixel then interacts with its neighbors, and the captured image is blurry as a result; this 'spread' is represented by the Point Spread Function. Image deblurring has many applications, for example in astronomy and medical imaging, where the acquired image is often deformed and the exact image cannot be captured directly because of such limiting factors. In these cases it is necessary to choose a suitable deblurring algorithm, keeping factors such as performance and runtime in mind. This thesis analyzes the performance of learning-based and analytical methods for image deblurring.
Inverse problems are discussed first, and why ill-posed inverse problems such as image deblurring cannot be tackled by naive deconvolution. This is followed by the need for regularization, which is necessary to control the fluctuations resulting from the extreme sensitivity of the solution to noise.
The image reconstruction problem takes the form of a convex variational problem, with prior knowledge acting as inequality constraints that create a feasible region for the optimal solution; interior point methods iterate within this feasible region. This thesis uses the iRestNet method, which follows a forward-backward iterative scheme, as the learning-based algorithm, and a total variation approach implemented with the FlexBox tool, which uses a primal-dual scheme, as the analytical method.
Performance is measured using SSIM indices for a range of blur kernels, and the SSIM map is also analyzed to compare deblurring quality.
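For context, a minimal sketch of the analytical side of such a comparison (an assumed setup, not FlexBox's primal-dual solver and not iRestNet): blur a test image with a Gaussian point spread function, deblur it with a few gradient steps on a smoothed total variation objective, and score the result with SSIM. The stepsize and regularization weight are illustrative.

import numpy as np
from scipy.ndimage import gaussian_filter
from skimage import data
from skimage.metrics import structural_similarity as ssim

def tv_grad(x, eps=1e-3):
    # gradient of a smoothed (differentiable) total variation penalty
    dx = np.diff(x, axis=1, append=x[:, -1:])
    dy = np.diff(x, axis=0, append=x[-1:, :])
    mag = np.sqrt(dx**2 + dy**2 + eps)
    px, py = dx / mag, dy / mag
    div = (px - np.roll(px, 1, axis=1)) + (py - np.roll(py, 1, axis=0))
    return -div

x_true = data.camera().astype(float) / 255.0
blur = lambda z: gaussian_filter(z, sigma=2.0)        # stand-in PSF (approximately self-adjoint)
y = blur(x_true) + 0.01 * np.random.randn(*x_true.shape)

x, step, lam = y.copy(), 0.5, 2e-3
for _ in range(100):
    x -= step * (blur(blur(x) - y) + lam * tv_grad(x))  # gradient step on data term + smoothed TV

print("SSIM blurred :", ssim(x_true, y, data_range=1.0))
print("SSIM restored:", ssim(x_true, x, data_range=1.0))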
A Flexible Framework for Designing Trainable Priors with Adaptive Smoothing and Game Encoding
We introduce a general framework for designing and training neural network layers whose forward passes can be interpreted as solving non-smooth convex optimization problems, and whose architectures are derived from an optimization algorithm. We focus on convex games, solved by local agents represented by the nodes of a graph and interacting through regularization functions. This approach is appealing for solving imaging problems, as it allows the use of classical image priors within deep models that are trainable end to end. The priors used in this presentation include variants of total variation, Laplacian regularization, bilateral filtering, sparse coding on learned dictionaries, and non-local self-similarities. Our models are fully interpretable as well as parameter- and data-efficient. Our experiments demonstrate their effectiveness on a large diversity of tasks ranging from image denoising and compressed sensing for fMRI to dense stereo matching.
Comment: NeurIPS 202
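As a rough sketch of what a layer with a convex optimization problem as its forward pass can look like (illustrative only; the paper's convex-game formulation and adaptive smoothing are not reproduced), the following unrolls proximal gradient (ISTA) iterations for sparse coding on a dictionary that is itself a trainable parameter.

import torch
import torch.nn as nn

def soft_threshold(v, thr):
    return torch.sign(v) * torch.clamp(v.abs() - thr, min=0.0)

class SparseCodingLayer(nn.Module):
    def __init__(self, signal_dim, n_atoms, n_iters=20, lam=0.1):
        super().__init__()
        self.D = nn.Parameter(0.1 * torch.randn(signal_dim, n_atoms))  # learned dictionary
        self.n_iters, self.lam = n_iters, lam

    def forward(self, x):                                    # x: (batch, signal_dim)
        D = self.D
        L = torch.linalg.matrix_norm(D, ord=2) ** 2 + 1e-6   # Lipschitz constant of the smooth term
        z = x.new_zeros(x.shape[0], D.shape[1])
        for _ in range(self.n_iters):
            grad = (z @ D.T - x) @ D                         # gradient of 0.5*||z D^T - x||^2 w.r.t. z
            z = soft_threshold(z - grad / L, self.lam / L)   # prox of (lam/L)*||z||_1
        return z @ D.T                                       # reconstruction, differentiable in D

# usage (hypothetical sizes): layer = SparseCodingLayer(64, 128)
# out = layer(torch.randn(8, 64))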
Learning Variational Models with Unrolling and Bilevel Optimization
In this paper we consider the problem of learning variational models in the context of supervised learning via risk minimization. Our goal is to provide a deeper understanding of the two approaches of learning variational models via bilevel optimization and via algorithm unrolling. The former considers the variational model as a lower-level optimization problem below the risk minimization problem, while the latter replaces the lower-level optimization problem by an algorithm that solves said problem approximately. Both approaches are used in practice, but unrolling is much simpler from a computational point of view. To analyze and compare the two approaches, we consider a simple toy model, and compute all risks and the respective estimators explicitly. We show that unrolling can be better than the bilevel optimization approach, but also that the performance of unrolling can depend significantly on further parameters, sometimes in unexpected ways: while the stepsize of the unrolled algorithm matters a lot (and learning the stepsize gives a significant improvement), the number of unrolled iterations plays a minor role.
Graph Signal Restoration Using Nested Deep Algorithm Unrolling
Graph signal processing is a ubiquitous task in many applications such as sensor, social, transportation and brain networks, point cloud processing, and graph neural networks. Graph signals are often corrupted through sensing processes and need to be restored for the above applications. In this paper, we propose two graph signal restoration methods based on deep algorithm unrolling (DAU). First, we present a graph signal denoiser by unrolling iterations of the alternating direction method of multipliers (ADMM). We then propose a general restoration method for linear degradation by unrolling iterations of Plug-and-Play ADMM (PnP-ADMM). In the second method, the unrolled ADMM-based denoiser is incorporated as a submodule, so our restoration method has a nested DAU structure. Thanks to DAU, parameters in the proposed denoising/restoration methods are trainable in an end-to-end manner. Since the proposed restoration methods are based on iterations of a (convex) optimization algorithm, they are interpretable and keep the number of parameters small, because we only need to tune graph-independent regularization parameters. We address two main problems in existing graph signal restoration methods: 1) the limited performance of convex optimization algorithms due to fixed parameters, which are often determined manually, and 2) the large number of parameters of graph neural networks, which makes training difficult. Several experiments on graph signal denoising and interpolation are performed on synthetic and real-world data. The proposed methods show performance improvements over several existing methods in terms of root mean squared error in both tasks.
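A minimal sketch of the first ingredient, an unrolled-ADMM graph signal denoiser, under an assumed formulation with an l1 penalty on the graph incidence operator (the paper's exact network and parameterization are not reproduced): each ADMM iteration becomes a layer, and the per-iteration regularization and penalty parameters are trainable.

import torch
import torch.nn as nn
import torch.nn.functional as F

def soft_threshold(v, thr):
    return torch.sign(v) * torch.clamp(v.abs() - thr, min=0.0)

class UnrolledADMMDenoiser(nn.Module):
    def __init__(self, B, n_iters=10):
        super().__init__()
        self.B = B                                             # graph incidence matrix (edges x nodes)
        self.lam = nn.Parameter(torch.full((n_iters,), 0.5))   # per-iteration regularization weights
        self.rho = nn.Parameter(torch.full((n_iters,), 1.0))   # per-iteration ADMM penalties

    def forward(self, y):                                      # y: noisy graph signal, shape (nodes,)
        B = self.B
        n = B.shape[1]
        x, z, u = y.clone(), B @ y, torch.zeros(B.shape[0])
        for t in range(len(self.lam)):
            rho = F.softplus(self.rho[t])                      # keep parameters positive
            lam = F.softplus(self.lam[t])
            # x-update: solve (I + rho B^T B) x = y + rho B^T (z - u)
            x = torch.linalg.solve(torch.eye(n) + rho * (B.T @ B),
                                   y + rho * (B.T @ (z - u)))
            # z-update: proximal operator of (lam/rho)*||.||_1
            z = soft_threshold(B @ x + u, lam / rho)
            # dual update
            u = u + B @ x - z
        return x

# usage on a 5-node path graph (hypothetical toy example):
# B = torch.tensor([[-1., 1, 0, 0, 0], [0, -1, 1, 0, 0], [0, 0, -1, 1, 0], [0, 0, 0, -1, 1]])
# x_hat = UnrolledADMMDenoiser(B)(torch.randn(5))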
Learning-based Optimization for Signal and Image Processing
Incorporating machine learning techniques into optimization problems and solvers attracts increasing attention. Given a particular type of optimization problem that needs to be solved repeatedly, machine learning techniques can find features of this category of optimization and develop algorithms with excellent performance. This thesis deals with algorithms and convergence analysis in learning-based optimization in three aspects: learning dictionaries, learning optimization solvers, and learning regularizers.
Learning dictionaries for sparse coding is significant for signal processing. Convolutional sparse coding is a form of sparse coding with a structured, translation-invariant dictionary. Most convolutional dictionary learning algorithms to date operate in batch mode, requiring simultaneous access to all training images during the learning process, which results in very high memory usage and severely limits the training data size that can be used. I proposed two online convolutional dictionary learning algorithms that offer far better scaling of memory and computational cost than batch methods, and provided a rigorous theoretical analysis of these methods.
Learning fast solvers for optimization is a rising research topic. In recent years, unfolding iterative algorithms as neural networks has become an empirical success in solving sparse recovery problems. However, its theoretical understanding is still immature, which prevents us from fully utilizing the power of neural networks. I studied unfolded ISTA (Iterative Shrinkage Thresholding Algorithm) for sparse signal recovery and established its convergence. Based on the properties of the parameters required for convergence, the model can be significantly simplified and, consequently, has much lower training cost and better recovery performance.
Learning regularizers or priors improves the performance of optimization solvers, especially for signal and image processing tasks. Plug-and-play (PnP) is a non-convex framework that integrates modern priors, such as BM3D or deep learning-based denoisers, into ADMM or other proximal algorithms. Although PnP has recently been studied extensively with great empirical success, theoretical analysis addressing even the most basic question of convergence has been insufficient. In this thesis, the theoretical convergence of PnP-FBS and PnP-ADMM was established, without using diminishing stepsizes, under a certain Lipschitz condition on the denoisers. Furthermore, real spectral normalization was proposed for training deep learning-based denoisers to satisfy the proposed Lipschitz condition.
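For the "learning fast solvers" part, a generic LISTA-style sketch of unfolded ISTA (illustrative only; the thesis studies a specific parameterization and its convergence guarantees, which are not reproduced here): each layer mimics one ISTA step with trainable matrices and thresholds.

import torch
import torch.nn as nn

def soft_threshold(v, thr):
    return torch.sign(v) * torch.clamp(v.abs() - thr, min=0.0)

class UnfoldedISTA(nn.Module):
    def __init__(self, A, n_layers=16):
        super().__init__()
        n = A.shape[1]
        L = torch.linalg.matrix_norm(A, ord=2) ** 2            # Lipschitz constant of the data term
        # initialize every layer at the plain ISTA step  z <- soft(z - (1/L) A^T (A z - y), theta)
        self.W_y = nn.ParameterList([nn.Parameter((A / L).T.clone()) for _ in range(n_layers)])
        self.W_z = nn.ParameterList([nn.Parameter(torch.eye(n) - (A.T @ A) / L) for _ in range(n_layers)])
        self.theta = nn.Parameter(torch.full((n_layers,), 0.1))  # per-layer thresholds

    def forward(self, y):                                      # y: measurements, shape (batch, m)
        z = y.new_zeros(y.shape[0], self.W_z[0].shape[0])      # sparse code estimate
        for Wy, Wz, th in zip(self.W_y, self.W_z, self.theta):
            z = soft_threshold(y @ Wy.T + z @ Wz.T, th)
        return z

# usage (hypothetical sizes): A = torch.randn(50, 100) / 50 ** 0.5
# net = UnfoldedISTA(A); z_hat = net(torch.randn(8, 50))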
Equivariant Hypergraph Diffusion Neural Operators
Hypergraph neural networks (HNNs), which use neural networks to encode hypergraphs, provide a promising way to model higher-order relations in data and to solve prediction tasks built upon such higher-order relations. However, higher-order relations in practice contain complex patterns and are often highly irregular, so it is challenging to design an HNN that is expressive enough for those relations while remaining computationally efficient. Inspired by hypergraph diffusion algorithms, this work proposes a new HNN architecture named ED-HNN, which provably represents any continuous equivariant hypergraph diffusion operator; such operators can model a wide range of higher-order relations. ED-HNN can be implemented efficiently by combining star expansions of hypergraphs with standard message passing neural networks. ED-HNN further shows great superiority in processing heterophilic hypergraphs and in constructing deep models. We evaluate ED-HNN for node classification on nine real-world hypergraph datasets. ED-HNN uniformly outperforms the best baselines over these nine datasets and achieves gains of more than 2% in prediction accuracy on four of them.
Comment: Code: https://github.com/Graph-COM/ED-HN