
    Recent Progress in Image Deblurring

    This paper comprehensively reviews recent developments in image deblurring, including non-blind/blind and spatially invariant/variant deblurring techniques. These techniques share the objective of inferring a latent sharp image from one or several corresponding blurry images, while blind deblurring techniques must additionally derive an accurate blur kernel. Given the critical role of image restoration in modern imaging systems, which must deliver high-quality images under complex conditions such as motion, undesirable lighting, and imperfect system components, image deblurring has attracted growing attention in recent years. From the viewpoint of how the ill-posedness, a crucial issue in deblurring tasks, is handled, existing methods can be grouped into five categories: Bayesian inference frameworks, variational methods, sparse representation-based methods, homography-based modeling, and region-based methods. Despite a certain level of progress, image deblurring, especially the blind case, remains limited by complex application conditions that make the blur kernel hard to obtain and spatially variant. This review provides a holistic understanding of and deep insight into image deblurring. An analysis of the empirical evidence for representative methods, practical issues, and a discussion of promising future directions are also presented. (Comment: 53 pages, 17 figures)
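    The review's common starting point is the linear blur model y = k * x + n. As a hedged illustration of one representative non-blind, spatially invariant technique (not taken from the paper itself), the sketch below applies a frequency-domain Wiener-style deconvolution; the box kernel, the synthetic image, and the noise-to-signal ratio are assumptions for the demo.

```python
import numpy as np

def wiener_deblur(blurred, kernel, nsr=1e-2):
    """Frequency-domain Wiener deconvolution for a spatially invariant blur.

    blurred : 2-D observed image; kernel : 2-D PSF (zero-padded to the image size);
    nsr : assumed noise-to-signal power ratio, which regularizes the inversion.
    """
    K = np.fft.fft2(kernel, s=blurred.shape)          # transfer function of the blur
    Y = np.fft.fft2(blurred)
    H = np.conj(K) / (np.abs(K) ** 2 + nsr)           # Wiener inverse filter
    return np.real(np.fft.ifft2(H * Y))

# Synthetic demo: blur a random "sharp" image with a box PSF, add noise, restore.
rng = np.random.default_rng(0)
sharp = rng.random((64, 64))
psf = np.zeros((64, 64)); psf[:5, :5] = 1.0 / 25.0    # 5x5 box blur, top-left anchored
blurred = np.real(np.fft.ifft2(np.fft.fft2(sharp) * np.fft.fft2(psf)))
blurred += 0.01 * rng.standard_normal(blurred.shape)
restored = wiener_deblur(blurred, psf, nsr=1e-2)
```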

    A Deconvolution Framework with Applications in Medical and Biological Imaging

    A deconvolution framework is presented in this thesis and applied to several problems in medical and biological imaging. The framework is designed to contain state-of-the-art deconvolution methods, to be easily extensible, and to allow different components to be combined arbitrarily. Deconvolution is an inverse problem, and to cope with its ill-posed nature, suitable regularization techniques and additional restrictions are required. A main objective of deconvolution methods is to restore degraded images acquired by fluorescence microscopy, which has become an important tool in the biological and medical sciences. Fluorescence microscopy images are degraded by out-of-focus blurring and noise, and the deconvolution algorithms that restore them are usually called deblurring methods. Many deblurring methods proposed over the last decade to restore such images are included in the deconvolution framework. In addition, existing deblurring techniques are improved and new components for the framework are developed. A considerable improvement is obtained by combining a state-of-the-art regularization technique with an additional non-negativity constraint. A real biological screen analysing a specific protein in human cells is presented and shows the need to analyse structural information in fluorescence images; such an analysis requires good image quality, which the deblurring methods aim to provide when it is not given. For a reliable understanding of cells and cellular processes, high-resolution 3D images of the investigated cells are necessary. However, the ability of fluorescence microscopes to image a cell in 3D is limited, since the resolution along the optical axis is worse than the transversal resolution by a factor of three. Standard microscopy deblurring techniques improve the resolution, but the lower resolution along the optical axis remains. This problem can, however, be overcome using Axial Tomography, which provides tilted views of the object by rotating it under the microscope. The rotated images contain additional information about the object that can be used to improve the resolution along the optical axis. In this thesis, a sophisticated method to reconstruct a high-resolution Axial Tomography image on the basis of the developed deblurring methods is presented. The deconvolution methods are also used to reconstruct the dose distribution in proton therapy from measured PET images. Positron emitters are activated by proton beams, but a PET image is not directly proportional to the delivered radiation dose distribution. A PET signal can be predicted by a convolution of the planned dose with specific filter functions. In this thesis, a dose reconstruction method based on PET images which reverses this convolution approach is presented, and its potential to reconstruct the actually delivered dose distribution from measured PET images is investigated. Last but not least, a new denoising method using higher-order statistical information of a given Gaussian noise signal is presented and compared to state-of-the-art denoising methods.
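    For fluorescence microscopy deblurring with a non-negativity constraint, a classical baseline is the Richardson-Lucy iteration, whose multiplicative updates keep the estimate non-negative by construction. The sketch below is a hedged generic baseline, not the thesis framework; the PSF and image are synthetic assumptions.

```python
import numpy as np

def richardson_lucy(observed, psf, n_iter=50, eps=1e-12):
    """Classical Richardson-Lucy deconvolution (non-negative by construction)."""
    PSF = np.fft.fft2(psf, s=observed.shape)
    PSF_conj = np.conj(PSF)                              # correlation = flipped-PSF convolution
    convolve = lambda img, H: np.real(np.fft.ifft2(np.fft.fft2(img) * H))

    estimate = np.full_like(observed, observed.mean())   # flat, strictly positive start
    for _ in range(n_iter):
        reblurred = convolve(estimate, PSF)
        ratio = observed / (reblurred + eps)             # data / model ratio
        estimate *= convolve(ratio, PSF_conj)            # multiplicative update
    return estimate

# Toy usage on a synthetic out-of-focus-like blur.
rng = np.random.default_rng(7)
truth = rng.random((64, 64))
psf = np.zeros((64, 64)); psf[:7, :7] = 1.0 / 49.0
observed = np.real(np.fft.ifft2(np.fft.fft2(truth) * np.fft.fft2(psf))) + 1e-3
restored = richardson_lucy(observed, psf, n_iter=30)
```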

    First order algorithms in variational image processing

    Variational methods in imaging have developed into a quite universal and flexible tool, allowing for highly successful approaches to tasks like denoising, deblurring, inpainting, segmentation, super-resolution, disparity, and optical flow estimation. The overall structure of such approaches is of the form $\mathcal{D}(Ku) + \alpha \mathcal{R}(u) \rightarrow \min_u$, where the functional $\mathcal{D}$ is a data fidelity term, depending on some input data $f$ and measuring the deviation of $Ku$ from it, and $\mathcal{R}$ is a regularization functional. Moreover, $K$ is an (often linear) forward operator modeling the dependence of the data on an underlying image, and $\alpha$ is a positive regularization parameter. While $\mathcal{D}$ is often smooth and (strictly) convex, current practice almost exclusively uses nonsmooth regularization functionals. The majority of successful techniques use nonsmooth convex functionals such as the total variation and its generalizations, or $\ell_1$-norms of coefficients arising from scalar products with some frame system. The efficient solution of such variational problems in imaging demands appropriate algorithms. Taking into account their specific structure as a sum of two very different terms to be minimized, splitting algorithms are a quite canonical choice. Consequently, this field has revived interest in techniques like operator splittings or augmented Lagrangians. Here we provide an overview of currently developed methods and recent results, as well as some computational studies comparing different methods and illustrating their success in applications. (Comment: 60 pages, 33 figures)
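    The simplest splitting scheme for this structure is forward-backward (proximal gradient). A minimal hedged sketch for the special case $\mathcal{D}(v) = \tfrac{1}{2}\|v - f\|^2$, $\mathcal{R} = \ell_1$-norm, and $K$ given as a matrix; the step size $1/\|K\|^2$ and the synthetic data are assumptions, not taken from the survey.

```python
import numpy as np

def ista(K, f, alpha, n_iter=200):
    """Forward-backward splitting for  0.5*||K u - f||^2 + alpha*||u||_1."""
    tau = 1.0 / np.linalg.norm(K, 2) ** 2      # step size <= 1/L with L = ||K||^2
    u = np.zeros(K.shape[1])
    for _ in range(n_iter):
        grad = K.T @ (K @ u - f)               # gradient of the smooth data term
        v = u - tau * grad                     # forward (gradient) step
        u = np.sign(v) * np.maximum(np.abs(v) - tau * alpha, 0.0)  # backward (prox) step
    return u

# Example: sparse recovery from a random underdetermined system.
rng = np.random.default_rng(1)
K = rng.standard_normal((60, 120))
u_true = np.zeros(120); u_true[rng.choice(120, 8, replace=False)] = 1.0
f = K @ u_true + 0.01 * rng.standard_normal(60)
u_hat = ista(K, f, alpha=0.05)
```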

    Video event detection and visual data processing for multimedia applications

    This dissertation (i) describes an automatic procedure for estimating the stopping condition of non-regularized iterative deconvolution methods, based on an orthogonality criterion between the estimated signal and its gradient at a given iteration; (ii) presents a decomposition method that splits the image into geometric (or cartoon) and texture parts using anisotropic diffusion with orthogonality-based parameter estimation and stopping condition, exploiting the fact that the cartoon and texture components of an image should be independent of each other; (iii) describes a method for moving foreground object extraction in sequences taken by a wearable camera with strong motion, where camera-motion-compensated frame differencing is enhanced with a novel kernel-based estimation of the probability density function of the background pixels. The presented methods have been thoroughly tested and compared to similar state-of-the-art algorithms.
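    For part (iii), a generic way to model the background pixels' probability density is a per-pixel Gaussian kernel density estimate over recent motion-compensated frames, flagging pixels with low background likelihood as foreground. The sketch below is only this generic baseline, not the dissertation's estimator; the bandwidth, threshold, and synthetic frames are assumptions.

```python
import numpy as np

def foreground_mask(history, frame, bandwidth=0.05, threshold=0.2):
    """Flag foreground pixels via a Gaussian KDE over past (motion-compensated) frames.

    history : array (T, H, W) of aligned previous frames, values in [0, 1]
    frame   : array (H, W), current aligned frame
    """
    diff = frame[None, :, :] - history                      # (T, H, W) deviations
    kernels = np.exp(-0.5 * (diff / bandwidth) ** 2)        # Gaussian kernel per sample
    density = kernels.mean(axis=0) / (bandwidth * np.sqrt(2 * np.pi))
    return density < threshold                              # low background density => foreground

# Toy usage with synthetic data.
rng = np.random.default_rng(2)
history = 0.5 + 0.02 * rng.standard_normal((10, 48, 64))    # static background + noise
frame = history[-1].copy(); frame[10:20, 10:20] = 0.9       # a bright moving object
mask = foreground_mask(history, frame)
```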

    Variable metric line-search based methods for nonconvex optimization

    The aim of this thesis is to propose novel iterative first-order methods tailored to a wide class of nonconvex, nondifferentiable optimization problems, in which the objective function is given by the sum of a differentiable, possibly nonconvex function and a convex, possibly nondifferentiable term. Such problems have become ubiquitous in scientific applications such as image and signal processing, where the first term plays the role of the fit-to-data term, describing the relation between the desired object and the measured data, whereas the second is the penalty term, aimed at restricting the search for the object to those satisfying specific properties. Our approach is twofold: on one hand, we accelerate the proposed methods by making use of suitable adaptive strategies to choose the involved parameters; on the other hand, we ensure convergence by imposing a sufficient decrease condition on the objective function at each iteration. Our first contribution is a novel proximal-gradient method named the Variable Metric Inexact Line-search based Algorithm (VMILA). The proposed approach is innovative from several points of view. First of all, VMILA allows the adoption of a variable metric in the computation of the proximal point with relative freedom of choice; the only assumption we make is that the parameters involved belong to bounded sets. This is unusual with respect to state-of-the-art proximal-gradient methods, where the parameters are usually chosen by a fixed rule or are tightly related to the Lipschitz constant of the problem. Second, we introduce an inexactness criterion for computing the proximal point which can be practically implemented in some cases of interest. This aspect becomes relevant whenever the proximal operator is not available in closed form, which is often the case. Third, the VMILA iterates are computed by performing a line search along the feasible direction and according to a specific Armijo-like condition, which can be considered an extension of the classical Armijo rule proposed in the context of differentiable optimization. The second contribution is given for a special instance of the previously considered optimization problem, where the convex term is assumed to be a finite sum of the indicator functions of closed, convex sets. In other words, we consider a problem of constrained differentiable optimization in which the constraints have a separable structure. The most suited method to deal with this problem is undoubtedly the nonlinear Gauss-Seidel (GS) or block coordinate descent method, where the minimization of the objective function is cyclically alternated over each block of variables of the problem. In this thesis, we propose an inexact version of the GS scheme, named the Cyclic Block Generalized Gradient Projection (CBGGP) method, in which the partial minimization over each block of variables is performed inexactly by means of a fixed number of gradient projection steps. The novelty of the proposed approach consists in the introduction of non-Euclidean metrics in the computation of the gradient projection. As for VMILA, the sufficient decrease of the function is imposed by means of a block version of the Armijo line search. For both methods, we prove that each limit point of the sequence of iterates is stationary, without any convexity assumptions. In the case of VMILA, strong convergence of the iterates to a stationary point is also proved when the objective function satisfies the Kurdyka-Lojasiewicz property. Extensive numerical experience in image processing applications, such as image deblurring and denoising in the presence of non-Gaussian noise, image compression, phase estimation and image blind deconvolution, shows the flexibility of our methods in addressing different nonconvex problems, as well as their ability to effectively accelerate progress towards the solution of the treated problem.
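    To make the structure concrete, the following is a hedged sketch in the spirit of a variable-metric proximal-gradient step with an Armijo-type line search; it is not the authors' VMILA code. It assumes a smooth quadratic data term, a non-negativity constraint (whose exact proximal map is the componentwise clamp, which for a diagonal metric coincides with the scaled projection), and a diagonal scaling chosen from column norms.

```python
import numpy as np

def vmila_like_step(x, grad_f, f, g_prox, metric, tau=1.0,
                    beta=0.5, sigma=1e-4, max_backtracks=30):
    """One variable-metric proximal-gradient step with an Armijo-type line search.

    Generic sketch: `metric` is a diagonal scaling (array), `g_prox` the proximal
    map of the convex term, here assumed exact (componentwise clamp).
    """
    g = grad_f(x)
    y = g_prox(x - tau * metric * g)      # scaled forward step + backward (prox) step
    d = y - x                             # feasible descent direction (g.d < 0 if d != 0)
    lam, f0 = 1.0, f(x)
    for _ in range(max_backtracks):
        if f(x + lam * d) <= f0 + sigma * lam * np.dot(g, d):   # Armijo-like decrease
            break
        lam *= beta
    return x + lam * d

# Toy problem: min 0.5*||A x - b||^2  subject to x >= 0.
rng = np.random.default_rng(3)
A = rng.standard_normal((40, 20)); b = A @ np.abs(rng.standard_normal(20))
f = lambda x: 0.5 * np.sum((A @ x - b) ** 2)
grad_f = lambda x: A.T @ (A @ x - b)
proj_nonneg = lambda z: np.maximum(z, 0.0)
metric = 1.0 / (np.sum(A ** 2, axis=0) + 1e-8)   # diagonal scaling from column norms
x = np.zeros(20)
for _ in range(100):
    x = vmila_like_step(x, grad_f, f, proj_nonneg, metric)
```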

    Blind image deconvolution: nonstationary Bayesian approaches to restoring blurred photos

    High-quality digital images have become pervasive in modern scientific and everyday life, in areas from photography to astronomy, CCTV, microscopy, and medical imaging. However, there are always limits to the quality of these images due to uncertainty and imprecision in the measurement systems. Modern signal processing methods offer the promise of overcoming some of these problems by post-processing blurred and noisy images. In this thesis, novel methods using nonstationary statistical models are developed for the removal of blur from out-of-focus and other types of degraded photographic images. The work tackles the fundamental problem of blind image deconvolution (BID); its goal is to restore a sharp image from a blurred observation when the blur itself is completely unknown. This is a "doubly ill-posed" problem: an extreme lack of information must be countered by strong prior constraints about sensible types of solution. In this work, the hierarchical Bayesian methodology is used as a robust and versatile framework to impart the required prior knowledge. The thesis is arranged in two parts. In the first part, the BID problem is reviewed, along with techniques and models for its solution. Observation models are developed, with an emphasis on photographic restoration, concluding with a discussion of how these are reduced to the common linear spatially-invariant (LSI) convolutional model. Classical methods for the solution of ill-posed problems are summarised to provide a foundation for the main theoretical ideas used under the Bayesian framework. This is followed by an in-depth review and discussion of the various prior image and blur models appearing in the literature, and of their application to solving the problem with both Bayesian and non-Bayesian techniques. The second part covers novel restoration methods, making use of the theory presented in Part I. Firstly, two new nonstationary image models are presented. The first models local variance in the image, and the second extends this with locally adaptive noncausal autoregressive (AR) texture estimation and local mean components. These models allow recovery of image details including edges and texture, whilst preserving smooth regions. Most existing methods do not model the boundary conditions correctly for deblurring of natural photographs, and a chapter is devoted to exploring Bayesian solutions to this topic. Due to the complexity of the models used and of the problem itself, many challenges must be overcome for tractable inference. Using the new models, three different inference strategies are investigated: first, the Bayesian maximum marginalised a posteriori (MMAP) method with deterministic optimisation; then the stochastic methods of variational Bayesian (VB) distribution approximation and simulation of the posterior distribution using the Gibbs sampler. Of these, we find the Gibbs sampler to be the most effective way to deal with a variety of different types of unknown blur. Along the way, details are given of the numerical strategies developed to give accurate results and to accelerate performance. Finally, the thesis demonstrates state-of-the-art results in blind restoration of synthetic and real degraded images, such as recovering details in out-of-focus photographs.
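    To illustrate the alternating structure of blind deconvolution in its crudest form, the sketch below alternates closed-form MAP-like updates of the image and the blur in the Fourier domain under simple quadratic priors. This is a deliberately simplified, hedged baseline (it has none of the thesis's nonstationary models, boundary handling, VB or Gibbs machinery); the regularization weights and synthetic data are assumptions.

```python
import numpy as np

def alternating_blind_deconv(y, n_iter=30, lam_x=1e-2, lam_k=1e-1):
    """Naive alternating estimation of image and blur in the Fourier domain.

    Only meant to show the alternating structure of blind deconvolution with
    quadratic priors; real BID methods need much stronger prior constraints.
    """
    Y = np.fft.fft2(y)
    K = np.ones_like(Y)                               # all-ones spectrum = identity blur
    for _ in range(n_iter):
        X = np.conj(K) * Y / (np.abs(K) ** 2 + lam_x)  # image step given the blur
        K = np.conj(X) * Y / (np.abs(X) ** 2 + lam_k)  # blur step given the image
    return np.real(np.fft.ifft2(X)), np.real(np.fft.ifft2(K))

# Toy usage on a synthetic motion-like blur.
rng = np.random.default_rng(8)
sharp = rng.random((64, 64))
psf = np.zeros((64, 64)); psf[:3, :9] = 1.0 / 27.0
y = np.real(np.fft.ifft2(np.fft.fft2(sharp) * np.fft.fft2(psf)))
x_est, k_est = alternating_blind_deconv(y)
```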

    Optimization Methods for Image Regularization from Poisson Data

    This work concerns optimization techniques for image restoration problems in the presence of Poisson noise. In several imaging applications (e.g. astronomy, microscopy, medical imaging) such noise is predominant; hence regularization techniques are needed in order to obtain satisfactory restored images. In a variational framework, the image restoration problem consists in finding a minimum of a functional which is the sum of two terms: the fit-to-data term and the regularization term. The trade-off between these two terms is measured by a regularization parameter, whose estimation is very difficult due to the presence of Poisson noise. In this thesis we investigate three models for this parameter: a Discrepancy Model, a Constrained Model and the Bregman procedure. The former two provide an estimate of the regularization parameter but, in some cases, such as low-count images, they do not yield satisfactory results. In the presence of such images, by contrast, the Bregman procedure provides reliable results and, moreover, it allows the use of an overestimate of the regularization parameter while still giving satisfactory restored images; furthermore, this procedure yields a contrast enhancement in the final result. In the first part of the work, the basics of image restoration problems are recalled and a survey of state-of-the-art methods is given, with an original contribution regarding scaling techniques in ε-subgradient methods. Then, the Discrepancy and Constrained Models are analyzed from both the theoretical and the practical point of view, developing suitable numerical techniques for their solution; furthermore, an inexact version of the Bregman procedure is introduced, which has a lower computational cost while retaining the theoretical features of the exact version. Finally, in the last part, a wide experimentation shows the computational efficiency of the inexact Bregman procedure; furthermore, the three models are compared, showing that for high-count images they provide similar results, while for low-count images the Bregman procedure provides reliable restored images. This last observation is evident not only on test problems, but also in problems coming from astronomical imaging, particularly in the case of High Dynamic Range images, as shown in the final part of the experimental section.
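    For Poisson data, the natural fit-to-data term is the generalized Kullback-Leibler divergence, and a discrepancy-type rule picks the regularization parameter so that this divergence roughly matches half the number of pixels. The sketch below is a hedged, generic illustration of that idea, not the thesis's solver: `restore` is a stand-in Tikhonov-type smoother and the bisection bounds are assumptions.

```python
import numpy as np

def gen_kl(y, z, eps=1e-12):
    """Generalized Kullback-Leibler divergence, the natural Poisson fit-to-data term."""
    return np.sum(y * np.log((y + eps) / (z + eps)) - y + z)

def discrepancy_alpha(y, restore, alpha_lo=1e-4, alpha_hi=1e2, n_bisect=40):
    """Bisection on alpha so that the generalized KL divergence is about y.size / 2.

    `restore(alpha)` is assumed to return the model (e.g. the reblurred restored
    image) for that regularization parameter; KL grows with alpha.
    """
    target = y.size / 2.0
    for _ in range(n_bisect):
        alpha = np.sqrt(alpha_lo * alpha_hi)        # geometric midpoint
        if gen_kl(y, restore(alpha)) > target:
            alpha_hi = alpha                        # too much regularization: shrink alpha
        else:
            alpha_lo = alpha                        # fit is too tight: increase alpha
    return alpha

# Toy usage: Poisson counts around a smooth 1-D signal, smoothed by a Tikhonov filter.
rng = np.random.default_rng(4)
truth = 50.0 + 20.0 * np.sin(np.linspace(0, 3 * np.pi, 256))
y = rng.poisson(truth).astype(float)
freqs = np.fft.fftfreq(y.size)
restore = lambda a: np.real(np.fft.ifft(np.fft.fft(y) / (1.0 + a * (2 * np.pi * freqs) ** 2)))
alpha_star = discrepancy_alpha(y, restore)
```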

    Cosmic cartography

    The cosmic origin and evolution are encoded in the large-scale matter distribution observed in astronomical surveys. Galaxy redshift surveys have become in recent years one of the best probes of cosmic large-scale structure. They are complementary to other information sources like the cosmic microwave background, since they trace a different epoch of the Universe, the time after reionization at which the Universe became transparent, covering roughly the last twelve billion years. Given that the Universe is about thirteen billion years old, galaxy surveys cover a huge range of time, even if the sensitivity limitations of the detectors do not permit reaching the furthermost sources in the transparent Universe. This makes galaxy surveys extremely interesting for studies of cosmological evolution. The observables (galaxy position on the sky, galaxy magnitude and redshift), however, give an incomplete representation of the real structures in the Universe, not only because of limitations and uncertainties in the measurements, but also because of their biased nature: they trace the underlying continuous dark matter field only partially, being a discrete sample of the luminous baryonic distribution. In addition, galaxy catalogues are plagued by many complications; some have a physical origin, as mentioned before, while others are due to the observation process. The problem of reconstructing the underlying density field, which permits cosmological studies, thus requires a statistical approach. This thesis describes a cosmic cartography project. The necessary concepts, mathematical framework, and numerical algorithms are thoroughly analyzed, and on that basis a Bayesian software tool is implemented. The resulting Argo code allows the characteristics of the large-scale cosmological structure to be investigated with unprecedented accuracy and flexibility. This is achieved by jointly estimating the large-scale density along with a variety of other parameters (such as the cosmic flow, the small-scale peculiar velocity field, and the power spectrum) from the information provided by galaxy redshift surveys. Furthermore, Argo is capable of dealing with many observational issues like mask effects, galaxy selection criteria, blurring and noise in a very efficient implementation of an operator-based formalism carefully derived for this purpose. Thanks to the high efficiency achieved by Argo, the application of iterative sampling algorithms based on Markov Chain Monte Carlo is now possible. This will ultimately lead to a full description of the matter distribution with all its relevant parameters, such as velocities, power spectra and galaxy bias, including the associated uncertainties. Some applications of these techniques are shown. A rejection sampling scheme is successfully applied to correct for the observational redshift-distortion effect, which is especially severe in regimes of non-linear structure formation, causing the so-called finger-of-god effect. A Gibbs sampling algorithm for power-spectrum determination is also presented, with preliminary results in which the correct level and shape of the power spectrum are recovered solely from the data. In an additional appendix we present the gravitational collapse and subsequent neutrino-driven explosion of stars at the low-mass end of those that undergo core-collapse supernovae, obtaining results which are for the first time compatible with the Crab Nebula.
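    A standard linear building block for this kind of Bayesian density-field reconstruction is the Wiener filter, d -> S (S + N)^{-1} d, applied mode by mode. The 1-D toy below is only a hedged illustration of that building block, not Argo's operator formalism; masks, selection functions and galaxy bias are ignored, and the signal power is assumed known.

```python
import numpy as np

def wiener_reconstruct(data, power_spectrum, noise_var):
    """Per-mode Wiener filter  d -> S (S + N)^{-1} d  for a 1-D periodic field."""
    D = np.fft.fft(data)
    S = power_spectrum                       # signal power per Fourier mode
    W = S / (S + noise_var * data.size)      # white-noise power per mode in this FFT convention
    return np.real(np.fft.ifft(W * D))

# Toy demo: recover a smooth 1-D "density" field from noisy observations.
rng = np.random.default_rng(5)
n = 512
truth = np.real(np.fft.ifft(np.fft.fft(rng.standard_normal(n)) *
                            np.exp(-0.5 * (np.fft.fftfreq(n) / 0.02) ** 2)))
noise_var = 0.01
data = truth + np.sqrt(noise_var) * rng.standard_normal(n)
power = np.abs(np.fft.fft(truth)) ** 2       # assume the signal power is known
recon = wiener_reconstruct(data, power, noise_var)
```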

    Bregman gradient methods for problems with relative smoothness (Méthodes de gradient de Bregman pour problèmes à régularité relative)

    We study large-scale optimization problems with applications to signal processing and machine learning. Such problems are typically solved with first-order methods, which perform iterative updates using the gradient of the objective function. We focus on the class of Bregman first-order methods, for which the direction of the gradient step is determined by the Bregman divergence induced by a convex kernel function. The choice of the kernel is guided by the relative smoothness condition, which requires the kernel to be compatible with the objective through a descent inequality. This condition was introduced by Bauschke, Bolte and Teboulle in 2017 and has opened new perspectives in first-order optimization. In the first part, we apply Bregman methods to minimization problems on the space of low-rank semidefinite matrices. By leveraging the matrix structure and using the relative smoothness property, we show that well-chosen Bregman kernels improve performance over standard Euclidean methods. Then, we study the theoretical complexity of these algorithms. An important question is to determine whether there exists an accelerated version of Bregman gradient descent which achieves a better convergence rate in the same setting. In the general case, we show that the answer is negative, as the complexity of the standard Bregman gradient method cannot be improved for generic kernels. The proof relies on a pathological example discovered by analyzing the worst-case behavior of Bregman methods with a computer-aided technique called performance estimation. We also detail an attempt to improve the convergence rate in a more restricted setting by specializing the performance estimation framework to the entropic geometry. Finally, we study a stochastic variant of Bregman gradient descent for expectation minimization problems, which are pervasive in machine learning, along with variance reduction methods for finite-sum objectives.
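    To make the Bregman gradient step concrete, the sketch below uses the flagship example from Bauschke, Bolte and Teboulle: a Poisson-type likelihood, which is smooth relative to Burg's entropy kernel, for which the Bregman step has a closed form. This is a hedged, generic illustration of the method class, not the thesis's code, and it uses this scalar example rather than the low-rank matrix kernels studied in the first part.

```python
import numpy as np

def bregman_gradient_poisson(A, b, n_iter=200):
    """Bregman (NoLips-style) gradient descent for f(x) = sum_i [(Ax)_i - b_i*log((Ax)_i)].

    f is smooth relative to Burg's entropy h(x) = -sum_j log(x_j) with constant
    L = ||b||_1, so the constant step 1/L is admissible; the Bregman step with this
    kernel solves 1/x_new = 1/x + step*grad componentwise.
    """
    L = np.sum(b)                                   # relative-smoothness constant
    step = 1.0 / L
    x = np.ones(A.shape[1])                         # strictly positive start
    for _ in range(n_iter):
        Ax = A @ x
        grad = A.T @ (1.0 - b / Ax)                 # gradient of f at x
        x = x / np.maximum(1.0 + step * x * grad, 1e-12)   # closed-form Bregman step
    return x

# Toy Poisson inverse problem.
rng = np.random.default_rng(6)
A = rng.random((30, 10)) + 0.1
x_true = rng.random(10) + 0.5
b = rng.poisson(A @ x_true).astype(float)
x_hat = bregman_gradient_poisson(A, b)
```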