Bilevel learning of regularization models and their discretization for image deblurring and super-resolution
Bilevel learning is a powerful optimization technique that has been extensively
employed in recent years to bridge the world of model-driven variational
approaches with data-driven methods. Upon suitable parametrization of the
desired quantities of interest (e.g., regularization terms or discretization
filters), such an approach computes optimal parameter values by solving a nested
optimization problem in which the variational model acts as a constraint. In this
work, we consider two different use cases of bilevel learning for the problem
of image restoration. First, we focus on learning scalar weights and
convolutional filters defining a Field of Experts regularizer to restore
natural images degraded by blur and noise. To improve practical performance,
the lower-level problem is solved by means of a gradient descent scheme
combined with a line-search strategy based on the Barzilai-Borwein rule.
As a second application, the bilevel setup is employed for learning a
discretization of the popular total variation regularizer for solving image
restoration problems (in particular, deblurring and super-resolution).
Numerical results show the effectiveness of the approach and its
generalization to multiple tasks.
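The Barzilai-Borwein step-size rule used for the lower-level solver can be sketched on a toy problem. The sketch below is illustrative only: the function name `bb_gradient_descent`, the Tikhonov-regularized least-squares stand-in for the variational restoration model, and all parameter values are assumptions, not the paper's actual implementation.

```python
import numpy as np

def bb_gradient_descent(grad, x0, alpha0=1e-2, max_iter=500, tol=1e-8):
    """Gradient descent with Barzilai-Borwein (BB1) step sizes.

    At each iteration the step size is chosen from the previous
    iterate and gradient differences: alpha = <s, s> / <s, y>,
    with s = x_{k} - x_{k-1} and y = g_{k} - g_{k-1}.
    """
    x = x0.copy()
    g = grad(x)
    alpha = alpha0
    for _ in range(max_iter):
        x_new = x - alpha * g
        g_new = grad(x_new)
        s, y = x_new - x, g_new - g
        sy = s @ y
        # BB1 step size, guarded against numerical breakdown
        alpha = (s @ s) / sy if sy > 1e-12 else alpha0
        x, g = x_new, g_new
        if np.linalg.norm(g) < tol:
            break
    return x

# Hypothetical lower-level problem:
#   min_x 0.5*||A x - b||^2 + 0.5*lam*||x||^2
rng = np.random.default_rng(0)
A = rng.standard_normal((30, 10))
b = rng.standard_normal(30)
lam = 0.1
grad = lambda x: A.T @ (A @ x - b) + lam * x

x_star = bb_gradient_descent(grad, np.zeros(10))
# Closed-form solution of the same quadratic, for comparison
x_ref = np.linalg.solve(A.T @ A + lam * np.eye(10), A.T @ b)
```

On quadratic problems like this one the BB step adapts to the local curvature without any function evaluations, which is the main appeal of pairing it with a line search in the lower-level solve.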
Biomedical Image Classification via Dynamically Early Stopped Artificial Neural Network
It is well known that biomedical imaging analysis plays a crucial role in the healthcare sector and produces a huge quantity of data. These data can be exploited to study diseases and their evolution in greater depth or to predict their onset. In particular, image classification represents one of the main problems in the biomedical imaging context. Due to the data complexity, biomedical image classification can be carried out by trainable mathematical models, such as artificial neural networks. When employing a neural network, one of the main challenges is to determine the optimal duration of the training phase to achieve the best performance. This paper introduces a new adaptive early stopping technique that sets the optimal training time based on dynamic selection strategies for the learning rate and the mini-batch size of the stochastic gradient method used as the optimizer. The numerical experiments, carried out on different artificial neural networks for image classification, show that the developed adaptive early stopping procedure matches the performance reported in the literature while finalizing the training in fewer epochs. The numerical examples have been performed on the CIFAR100 dataset and on two distinct MedMNIST2D datasets, a large-scale lightweight benchmark for biomedical image classification.
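The coupling of early stopping with dynamic hyperparameter selection can be sketched as a monitor object. The abstract does not specify the paper's exact selection rules, so everything below is a hypothetical stand-in: the class name `AdaptiveEarlyStopper`, the patience-based stall criterion, and the particular update rules (halving the learning rate, doubling the mini-batch size on a stall) are assumptions for illustration only.

```python
class AdaptiveEarlyStopper:
    """Early-stopping monitor that also adapts SGD hyperparameters.

    Training stops when the validation loss has not improved for
    `patience` consecutive checks; each stall first shrinks the
    learning rate and grows the mini-batch size, mimicking a dynamic
    selection strategy for the stochastic gradient optimizer.
    """
    def __init__(self, patience=3, lr=0.1, batch_size=32,
                 lr_decay=0.5, batch_growth=2, max_batch=512):
        self.patience, self.best, self.stall = patience, float("inf"), 0
        self.lr, self.batch_size = lr, batch_size
        self.lr_decay, self.batch_growth = lr_decay, batch_growth
        self.max_batch = max_batch

    def step(self, val_loss):
        """Record one validation check; return True to stop training."""
        if val_loss < self.best - 1e-6:
            self.best, self.stall = val_loss, 0
            return False
        self.stall += 1
        # On a stall, adapt the optimizer hyperparameters instead of
        # stopping immediately.
        self.lr *= self.lr_decay
        self.batch_size = min(self.batch_size * self.batch_growth,
                              self.max_batch)
        return self.stall >= self.patience

# Simulated validation losses: improving, then plateauing
stopper = AdaptiveEarlyStopper(patience=3)
losses = [1.0, 0.8, 0.7, 0.69, 0.695, 0.70, 0.71]
stopped_at = next(i for i, l in enumerate(losses) if stopper.step(l))
```

In this simulated run the monitor tolerates three stalled checks, decaying the learning rate at each one, before signalling a stop; a real training loop would read `stopper.lr` and `stopper.batch_size` when building the next epoch's optimizer and data loader.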