
    Application of an Alternating Minimization Algorithm to Experimental DIC Microscopy Data for the Quantitative Determination of Sample Optical Properties

    Differential Interference Contrast (DIC) microscopy is commonly chosen for imaging unstained transparent samples. One limitation of DIC microscopy is that its results are qualitative and must be post-processed to extract meaningful information. The Alternating Minimization (AM) algorithm studied in this thesis is an iterative approach to recovering a quantitative estimate of a sample's complex-valued transmittance function. The AM algorithm is validated using simulated data. Additionally, the bias retardation and shear distance, two characteristic features of the DIC system, must be measured to ensure the system model is accurate. This is accomplished by introducing a calibrated liquid crystal device into the system. Algorithm performance is verified using an experimental test object before finally being applied to biological samples. Overall, the results demonstrate the accuracy of the algorithm's object estimates, verified through comparison to similar data processing techniques.
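The abstract does not give the algorithm's update equations, but the generic alternating-minimization pattern it relies on is easy to illustrate. Below is a minimal, self-contained sketch on a toy bilinear least-squares problem (not the thesis's DIC image model): each iteration exactly minimizes the cost over one block of unknowns while the other is held fixed, which guarantees a monotonically decreasing objective.

```python
import numpy as np

# Toy alternating-minimization (AM) loop: recover x and b from y = (A @ x) * b + noise
# by alternating exact least-squares updates of one block while the other is fixed.
# This is a bilinear stand-in for illustration, not the DIC forward model.
rng = np.random.default_rng(0)
m, n = 200, 20
A = rng.standard_normal((m, n))
x_true = rng.standard_normal(n)
b_true = 1.0 + 0.1 * rng.standard_normal(m)
y = (A @ x_true) * b_true + 0.01 * rng.standard_normal(m)

x, b = rng.standard_normal(n), np.ones(m)       # initial guesses
for _ in range(50):
    # x-update with b fixed: least squares on diag(b) @ A
    x, *_ = np.linalg.lstsq(A * b[:, None], y, rcond=None)
    # b-update with x fixed: closed form per component (guard tiny denominators)
    z = A @ x
    b = y * z / np.maximum(z ** 2, 1e-12)
print("residual:", np.linalg.norm(y - (A @ x) * b))
```

Note that the recovered pair (x, b) is only determined up to a scalar ambiguity; the point of the sketch is the monotone block-wise descent that characterizes AM schemes.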

    Poisson noise reduction with non-local PCA

    Photon-limited imaging arises when the number of photons collected by a sensor array is small relative to the number of detector elements. Photon limitations are an important concern for many applications such as spectral imaging, night vision, nuclear medicine, and astronomy. Typically a Poisson distribution is used to model these observations, and the inherent heteroscedasticity of the data combined with standard noise removal methods yields significant artifacts. This paper introduces a novel denoising algorithm for photon-limited images which combines elements of dictionary learning and sparse patch-based representations of images. The method employs both an adaptation of Principal Component Analysis (PCA) for Poisson noise and recently developed sparsity-regularized convex optimization algorithms for photon-limited images. A comprehensive empirical evaluation of the proposed method helps characterize the performance of this approach relative to other state-of-the-art denoising methods. The results reveal that, despite its conceptual simplicity, Poisson PCA-based denoising appears to be highly competitive in very low light regimes. Comment: erratum: the image "man" is wrongly named "pepper" in the journal version.
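As a rough illustration of the patch-based PCA pipeline (not the paper's exact Poisson PCA, which works with the Poisson likelihood directly rather than through a variance-stabilizing transform), the sketch below denoises overlapping patches by projecting them onto their top principal components after an Anscombe transform. All function names here are ad hoc.

```python
import numpy as np

def anscombe(x):
    # Variance-stabilizing transform: Poisson -> approximately unit-variance Gaussian.
    return 2.0 * np.sqrt(x + 3.0 / 8.0)

def inverse_anscombe(z):
    # Simple algebraic inverse (an exact unbiased inverse does better at low counts).
    return (z / 2.0) ** 2 - 3.0 / 8.0

def patch_pca_denoise(img, patch=8, keep=8):
    """Denoise by projecting overlapping patches onto their top principal components."""
    H, W = img.shape
    coords = [(i, j) for i in range(0, H - patch + 1, patch // 2)
                     for j in range(0, W - patch + 1, patch // 2)]
    P = np.stack([img[i:i+patch, j:j+patch].ravel() for i, j in coords])
    mean = P.mean(axis=0)
    U, s, Vt = np.linalg.svd(P - mean, full_matrices=False)
    P_hat = (U[:, :keep] * s[:keep]) @ Vt[:keep] + mean    # low-rank reconstruction
    out = np.zeros((H, W)); wgt = np.zeros((H, W))
    for (i, j), p in zip(coords, P_hat):                   # average overlapping patches
        out[i:i+patch, j:j+patch] += p.reshape(patch, patch)
        wgt[i:i+patch, j:j+patch] += 1.0
    return out / np.maximum(wgt, 1.0)

# Simulate photon-limited data and denoise.
rng = np.random.default_rng(1)
t = np.linspace(0, 6, 128)
clean = 5.0 * (1 + np.sin(t)[:, None] * np.cos(t))         # nonnegative intensities
noisy = rng.poisson(clean).astype(float)
denoised = inverse_anscombe(patch_pca_denoise(anscombe(noisy)))
```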

    Variable metric line-search based methods for nonconvex optimization

    The aim of this thesis is to propose novel iterative first-order methods tailored to a wide class of nonconvex, nondifferentiable optimization problems, in which the objective function is the sum of a differentiable, possibly nonconvex function and a convex, possibly nondifferentiable term. Such problems are ubiquitous in scientific applications such as image and signal processing, where the first term plays the role of the fit-to-data term, describing the relation between the desired object and the measured data, whereas the second is the penalty term, which restricts the search to objects satisfying specific properties. Our approach is twofold: on one hand, we accelerate the proposed methods by using suitable adaptive strategies to choose the parameters involved; on the other hand, we ensure convergence by imposing a sufficient decrease condition on the objective function at each iteration.

    Our first contribution is a novel proximal-gradient method called the Variable Metric Inexact Line-search based Algorithm (VMILA). The proposed approach is innovative from several points of view. First, VMILA allows a variable metric to be adopted in the computation of the proximal point, with considerable freedom of choice; the only assumption we make is that the parameters involved belong to bounded sets. This is unusual with respect to state-of-the-art proximal-gradient methods, where the parameters are usually chosen by a fixed rule or tied tightly to the Lipschitz constant of the problem. Second, we introduce an inexactness criterion for computing the proximal point which can be practically implemented in some cases of interest. This matters whenever the proximal operator is not available in closed form, which is often the case. Third, the VMILA iterates are computed by performing a line search along the feasible direction according to a specific Armijo-like condition, which can be considered an extension of the classical Armijo rule from differentiable optimization.

    The second contribution concerns a special instance of the optimization problem above, in which the convex term is a finite sum of indicator functions of closed, convex sets. In other words, we consider constrained differentiable optimization in which the constraints have a separable structure. The classical method for this problem is the nonlinear Gauss-Seidel (GS) or block coordinate descent method, where the minimization of the objective function is cyclically alternated over each block of variables. In this thesis, we propose an inexact version of the GS scheme, the Cyclic Block Generalized Gradient Projection (CBGGP) method, in which the partial minimization over each block of variables is performed inexactly by a fixed number of gradient projection steps. The novelty of the proposed approach is the introduction of non-Euclidean metrics in the computation of the gradient projection. As in VMILA, sufficient decrease of the function is imposed by a block version of the Armijo line search. For both methods, we prove that each limit point of the sequence of iterates is stationary, without any convexity assumption. In the case of VMILA, strong convergence of the iterates to a stationary point is also proved when the objective function satisfies the Kurdyka-Lojasiewicz property. Extensive numerical experience with image processing applications, such as image deblurring and denoising in the presence of non-Gaussian noise, image compression, phase estimation, and image blind deconvolution, shows the flexibility of our methods in addressing different nonconvex problems, as well as their ability to effectively accelerate progress towards the solution of the problem at hand.
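As a concrete illustration of the core VMILA-style iteration, here is a minimal sketch in the Euclidean metric with an exact proximal operator (the thesis's contribution is precisely to relax both of these, allowing variable metrics and inexact proximal points). The instance is a standard l1-regularized least-squares problem, with Armijo-like backtracking performed along the direction toward the proximal-gradient point.

```python
import numpy as np

# Sketch of one proximal-gradient method with line search for min f(x) + g(x):
# f(x) = 0.5||Ax - y||^2 (smooth), g(x) = lam*||x||_1 (prox = soft threshold).
rng = np.random.default_rng(2)
m, n, lam = 40, 100, 0.1
A = rng.standard_normal((m, n))
y = A @ np.where(rng.random(n) < 0.1, 1.0, 0.0)     # sparse ground truth

f      = lambda x: 0.5 * np.sum((A @ x - y) ** 2)
grad_f = lambda x: A.T @ (A @ x - y)
g      = lambda x: lam * np.sum(np.abs(x))
prox_g = lambda x, a: np.sign(x) * np.maximum(np.abs(x) - a * lam, 0.0)

x, alpha, beta, sigma = np.zeros(n), 1e-2, 0.5, 1e-4
for _ in range(200):
    u = prox_g(x - alpha * grad_f(x), alpha)        # proximal-gradient point
    d = u - x                                       # feasible descent direction
    h = grad_f(x) @ d + g(u) - g(x)                 # predicted decrease (<= 0)
    t = 1.0
    # Armijo-like backtracking: sufficient decrease of the full objective f + g.
    while f(x + t * d) + g(x + t * d) > f(x) + g(x) + sigma * t * h and t > 1e-10:
        t *= beta
    x = x + t * d
```

The quantity h plays the role of the sufficient-decrease measure in the Armijo condition; for the exact proximal point it is nonpositive and vanishes only at stationary points.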

    A Variational Aggregation Framework for Patch-Based Optical Flow Estimation

    We propose a variational aggregation method for optical flow estimation. It consists of a two-step framework: first estimating a collection of parametric motion models to generate motion candidates, and then reconstructing a global dense motion field. The aggregation step is designed as a motion reconstruction problem from spatially varying sets of motion candidates given by the parametric motion models. Our method is designed to capture large displacements in a variational framework without requiring any coarse-to-fine strategy. We handle occlusion with a motion inpainting approach in the candidate computation step. By performing parametric motion estimation, we combine the robustness to noise of local parametric methods with the accuracy yielded by global regularization. We demonstrate the performance of our aggregation approach by comparing it to standard variational methods and a discrete aggregation approach on the Middlebury and MPI Sintel datasets.
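A crude stand-in for the two-step idea, assuming candidate flow fields have already been produced by parametric motion models: pick, per pixel, the candidate with the smallest warping error, then smooth the selection. The paper performs this aggregation variationally with a global regularizer; the median filter below is only a placeholder for that term, and all names are hypothetical.

```python
import numpy as np
from scipy.ndimage import map_coordinates, median_filter

def warp_error(I0, I1, flow):
    # Squared brightness-constancy error of warping I1 back by the candidate flow.
    H, W = I0.shape
    yy, xx = np.mgrid[0:H, 0:W].astype(float)
    warped = map_coordinates(I1, [yy + flow[..., 1], xx + flow[..., 0]],
                             order=1, mode='nearest')
    return (I0 - warped) ** 2

def aggregate(I0, I1, candidates):
    # candidates: list of (H, W, 2) flow fields proposed by parametric motion models.
    H, W = I0.shape
    costs = np.stack([warp_error(I0, I1, f) for f in candidates])   # (K, H, W)
    best = np.argmin(costs, axis=0)                                 # per-pixel winner
    flow = np.stack(candidates)[best, np.arange(H)[:, None], np.arange(W)]
    # Cheap spatial smoothing standing in for the variational regularizer.
    flow[..., 0] = median_filter(flow[..., 0], size=5)
    flow[..., 1] = median_filter(flow[..., 1], size=5)
    return flow
```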

    Robust Algorithms for Low-Rank and Sparse Matrix Models

    Data in statistical signal processing problems is often inherently matrix-valued, and a natural first step in working with such data is to impose a model with structure that captures the distinctive features of the underlying data. Under the right model, one can design algorithms that can reliably tease weak signals out of highly corrupted data. In this thesis, we study two important classes of matrix structure: low-rankness and sparsity. In particular, we focus on robust principal component analysis (PCA) models that decompose data into the sum of low-rank and sparse (in an appropriate sense) components. Robust PCA models are popular because they are useful models for data in practice and because efficient algorithms exist for solving them. This thesis focuses on developing new robust PCA algorithms that advance the state-of-the-art in several key respects. First, we develop a theoretical understanding of the effect of outliers on PCA and the extent to which one can reliably reject outliers from corrupted data using thresholding schemes. We apply these insights and other recent results from low-rank matrix estimation to design robust PCA algorithms with improved low-rank models that are well-suited for processing highly corrupted data. On the sparse modeling front, we use sparse signal models like spatial continuity and dictionary learning to develop new methods with important adaptive representational capabilities. We also propose efficient algorithms for implementing our methods, including an extension of our dictionary learning algorithms to the online or sequential data setting. The underlying theme of our work is to combine ideas from low-rank and sparse modeling in novel ways to design robust algorithms that produce accurate reconstructions from highly undersampled or corrupted data. We consider a variety of application domains for our methods, including foreground-background separation, photometric stereo, and inverse problems such as video inpainting and dynamic magnetic resonance imaging.
    PhD thesis, Electrical Engineering: Systems, University of Michigan, Horace H. Rackham School of Graduate Studies. https://deepblue.lib.umich.edu/bitstream/2027.42/143925/1/brimoor_1.pd
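For readers unfamiliar with robust PCA, the textbook principal component pursuit formulation this line of work builds on decomposes a data matrix M into L + S by solving min ||L||_* + lam*||S||_1 subject to L + S = M. A minimal sketch of the standard augmented-Lagrangian iteration for this baseline (not the thesis's improved algorithms) follows.

```python
import numpy as np

def svt(X, tau):
    # Singular value thresholding: prox of tau * (nuclear norm).
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return (U * np.maximum(s - tau, 0.0)) @ Vt

def soft(X, tau):
    # Entrywise soft threshold: prox of tau * (l1 norm).
    return np.sign(X) * np.maximum(np.abs(X) - tau, 0.0)

def rpca(M, lam=None, mu=None, iters=100):
    """Robust PCA via the standard ALM iteration for
    min ||L||_* + lam*||S||_1  subject to  L + S = M."""
    m, n = M.shape
    lam = lam or 1.0 / np.sqrt(max(m, n))          # common default weight
    mu = mu or 0.25 * m * n / np.sum(np.abs(M))    # common default penalty
    L = np.zeros_like(M); S = np.zeros_like(M); Y = np.zeros_like(M)
    for _ in range(iters):
        L = svt(M - S + Y / mu, 1.0 / mu)          # low-rank update
        S = soft(M - L + Y / mu, lam / mu)         # sparse update
        Y = Y + mu * (M - L - S)                   # dual ascent on the constraint
    return L, S

# Toy demo: low-rank background plus sparse outliers.
rng = np.random.default_rng(3)
L0 = rng.standard_normal((60, 5)) @ rng.standard_normal((5, 80))
S0 = 10.0 * rng.standard_normal((60, 80)) * (rng.random((60, 80)) < 0.05)
L_hat, S_hat = rpca(L0 + S0)
```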

    Image analysis and statistical modeling for applications in cytometry and bioprocess control

    Today, signal processing has a central role in many of the advancements in systems biology. Modern signal processing is required to provide efficient computational solutions to complex problems whose answers are arduous or impossible to obtain using conventional approaches. For example, imaging-based high-throughput experiments enable cells to be examined even at the subcellular level, yielding huge amounts of image data. Cytometry is an integral part of such experiments and involves measurement of different cell parameters, which requires extraction of quantitative experimental values from cell microscopy images. To do that for such a large number of images, fast and accurate automated image analysis methods are required. In another example, modeling of bioprocesses and their scale-up is a challenging task where different scales have different parameters, and often there are more variables than available observations, thus requiring special methodology.

    In many biomedical cell microscopy studies, it is necessary to analyze the images at the single-cell or even subcellular level since, owing to the heterogeneity of cell populations, population-averaged measurements are often inconclusive. Moreover, the emergence of imaging-based high-content screening experiments, especially for drug design, has put single-cell analysis at the forefront, since studying the dynamics of single-cell gene expression is required for tracking and quantifying cell phenotypic variations. The ability to perform single-cell analysis depends on the accuracy of image segmentation in detecting individual cells. However, clumping of cells at both the nuclei and cytoplasm level hinders accurate cell image segmentation. Part of this thesis work concentrates on developing accurate automated methods for segmentation of bright field as well as multichannel fluorescence microscopy images of cells, with an emphasis on clump splitting, so that cells are separated from each other as well as from the background.

    The complexity of bioprocess development and control calls for computational modeling and data analysis approaches for process optimization and scale-up, particularly because the a priori knowledge needed to develop traditional scale-up criteria may at times be difficult to obtain. Moreover, efficient process modeling may provide the added advantage of automatically identifying influential control parameters. Determining the values of the identified parameters and being able to predict them at different scales help in process control and scale-up. Bioprocess modeling and control can also benefit from single-cell analysis, which could add a new dimension once imaging-based in-line sensors allow the key variables governing the processes to be monitored. In this thesis we exploited signal processing techniques for statistical modeling of bioprocesses and their scale-up, as well as for the development of fully automated methods for biomedical cell microscopy image segmentation, from image pre-processing and initial segmentation to clump splitting and image post-processing, with the goal of facilitating high-throughput analysis. To highlight the contribution of this work, we present three application case studies in which we applied the developed methods to cell image segmentation and to bioprocess modeling and scale-up.
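As context for the clump-splitting subproblem, a common baseline is marker-controlled watershed on the distance transform of the binary cell mask. The sketch below uses the standard scikit-image recipe; the thesis develops its own splitting methods, so this is only an illustration of the task, and the function name is ad hoc.

```python
import numpy as np
from scipy import ndimage as ndi
from skimage.segmentation import watershed
from skimage.feature import peak_local_max

def split_clumps(mask, min_distance=10):
    """Label a binary cell mask, separating touching cells via watershed."""
    dist = ndi.distance_transform_edt(mask)            # peaks ~ cell centers
    peaks = peak_local_max(dist, min_distance=min_distance,
                           labels=mask.astype(int))    # one marker per cell
    markers = np.zeros(mask.shape, dtype=int)
    markers[tuple(peaks.T)] = np.arange(1, len(peaks) + 1)
    return watershed(-dist, markers, mask=mask)        # one label per cell
```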

    From representation learning to thematic classification - Application to hierarchical analysis of hyperspectral images

    Numerous frameworks have been developed to analyze the increasing amount of available image data. Among these methods, supervised classification has received considerable attention, leading to the development of state-of-the-art classification methods. These methods aim at inferring the class of each observation, given a specific class nomenclature, by exploiting a set of labeled observations. Thanks to extensive research efforts of the community, classification methods have become very efficient. Nevertheless, the result of a classification remains a high-level interpretation of the scene, since it only gives a single class to summarize all the information in a given pixel. Contrary to classification methods, representation learning methods are model-based approaches designed especially to handle high-dimensional data and extract meaningful latent variables. By using physics-based models, these methods allow the user to extract very meaningful variables and obtain a very detailed interpretation of the considered image. The main objective of this thesis is to develop a unified framework for classification and representation learning. These two methods are complementary, allowing the problem to be addressed through hierarchical modeling. The representation learning approach is used to build a low-level model of the data, whereas classification is used to incorporate supervised information and may be seen as a high-level interpretation of the data. Two different paradigms, namely Bayesian models and optimization approaches, are explored to set up this hierarchical model. The proposed models are then tested in the specific context of hyperspectral imaging, where the representation learning task is specified as a spectral unmixing problem.
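To make the final sentence concrete: in linear spectral unmixing, each pixel spectrum y is modeled as y ≈ Ea, where the columns of E are endmember spectra and the abundance vector a is nonnegative and sums to one. A minimal fully constrained solver via projected gradient, assuming the endmembers E are known, might look as follows (all names are illustrative).

```python
import numpy as np

def project_simplex(v):
    # Euclidean projection onto {a : a >= 0, sum(a) = 1} (Duchi et al., 2008).
    u = np.sort(v)[::-1]
    css = np.cumsum(u)
    rho = np.nonzero(u + (1.0 - css) / np.arange(1, len(v) + 1) > 0)[0][-1]
    theta = (css[rho] - 1.0) / (rho + 1.0)
    return np.maximum(v - theta, 0.0)

def unmix(y, E, iters=500):
    """Estimate abundances a minimizing ||y - E a||^2 on the probability simplex."""
    a = np.full(E.shape[1], 1.0 / E.shape[1])
    step = 1.0 / np.linalg.norm(E, 2) ** 2      # 1/L, with L = ||E||_2^2
    for _ in range(iters):
        a = project_simplex(a - step * E.T @ (E @ a - y))
    return a

# Toy example: 3 endmembers, one mixed pixel.
rng = np.random.default_rng(4)
E = np.abs(rng.standard_normal((50, 3)))
a_true = np.array([0.6, 0.3, 0.1])
a_hat = unmix(E @ a_true, E)
```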