
    DIC image reconstruction using an energy minimization framework to visualize optical path length distribution

    Label-free microscopy techniques have numerous advantages, such as low phototoxicity, a simple setup, and no need for fluorophores or other contrast agents. Despite these advantages, most label-free techniques cannot visualize specific cellular compartments or the location of proteins, and their image formation limits quantitative evaluation. Differential interference contrast (DIC) is a qualitative microscopy technique that shows the optical path length differences within a specimen. We propose a variational framework for DIC image reconstruction. The proposed method largely outperforms state-of-the-art methods on synthetic, artificial, and real tests and turns DIC microscopy into an automated high-content imaging tool. Image sets and the source code of the examined algorithms are made publicly available. Peer reviewed.
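    As a hedged illustration of the energy-minimization idea (not the paper's exact model or algorithm), one can reconstruct an OPL-like map from a DIC image by assuming the DIC signal approximates the directional derivative of the OPL along the shear direction and minimizing a quadratic energy by gradient descent:

```python
import numpy as np

def reconstruct_opl(dic_img, theta=np.pi / 4, lam=0.1, lr=0.1, n_iter=500):
    """Illustrative energy-minimization DIC reconstruction (a sketch, not
    the published method). Assumes the DIC image g approximates the
    directional derivative of the OPL map u along the shear direction theta,
    and minimizes  E(u) = ||D_theta u - g||^2 + lam * ||grad u||^2
    by plain gradient descent with periodic boundary conditions."""
    g = dic_img.astype(float)
    u = np.zeros_like(g)
    cx, cy = np.cos(theta), np.sin(theta)

    def d_theta(a):  # forward-difference directional derivative
        return cx * (np.roll(a, -1, axis=1) - a) + cy * (np.roll(a, -1, axis=0) - a)

    def d_theta_adj(a):  # its adjoint (backward difference)
        return cx * (np.roll(a, 1, axis=1) - a) + cy * (np.roll(a, 1, axis=0) - a)

    def laplacian(a):
        return (np.roll(a, 1, 0) + np.roll(a, -1, 0)
                + np.roll(a, 1, 1) + np.roll(a, -1, 1) - 4 * a)

    for _ in range(n_iter):
        grad = d_theta_adj(d_theta(u) - g) - lam * laplacian(u)
        u -= lr * grad
    return u
```

    Here `theta`, `lam`, and the gradient-descent schedule are illustrative choices; the published framework uses a more sophisticated variational formulation.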

    Phase measurement using DIC microscopy

    The development of fluorescent probes and proteins has helped make light microscopy more popular by allowing the visualization of specific subcellular components and of the location and dynamics of biomolecules. However, it is not always feasible to label cells, as labeling may be phototoxic or may perturb their function. Label-free microscopy techniques allow us to work with live cells without perturbation and to evaluate morphological differences, which in turn can provide useful information for high-throughput assays. In this study, we use one of the most popular label-free techniques, differential interference contrast (DIC) microscopy, to estimate the phase of cells and other nearly transparent objects and to instantly estimate their height. DIC images provide detailed information about the optical path length (OPL) differences in the sample and are visually similar to a gradient image. Our previous DIC reconstruction algorithm outputs an image whose values are proportional to the OPL (or, implicitly, the phase) of the sample. Although the reconstructed images are capable of describing cellular morphology and, to a certain extent, turn DIC into a quantitative technique, the actual OPL has to be computed from the input DIC image and the microscope calibration settings. Here we propose a computational method to measure the phase and approximate height of cells after microscope calibration, assuming a linear image formation model. After a calibration step, the phase of further samples can be determined when the refractive indices of the sample and the surrounding medium are known. The precision of the method is demonstrated by reconstructing the thickness of known objects and of real cellular samples.
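    The conversion from a reconstructed OPL to phase and height rests on two standard relations, OPL = (n_sample - n_medium) * height and phase = 2*pi*OPL / wavelength. A minimal sketch (the wavelength and refractive indices below are placeholder values, not the paper's calibration):

```python
import math

def phase_and_height(opl_nm, wavelength_nm=550.0, n_sample=1.37, n_medium=1.33):
    """Phase shift (rad) and physical height (nm) from an OPL value (nm).

    Uses the standard relations OPL = (n_sample - n_medium) * height and
    phase = 2*pi*OPL / wavelength; the default wavelength and refractive
    indices are illustrative placeholders, not calibrated values.
    """
    phase_rad = 2.0 * math.pi * opl_nm / wavelength_nm
    height_nm = opl_nm / (n_sample - n_medium)
    return phase_rad, height_nm
```

    For example, an OPL of 100 nm at these placeholder settings corresponds to a height of 2500 nm, illustrating how sensitive the height estimate is to the refractive-index difference.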

    Advanced data analysis for traction force microscopy and data-driven discovery of physical equations

    The plummeting cost of collecting and storing data and the increasingly available computational power in the last decade have led to the emergence of new data analysis approaches in various scientific fields. Frequently, the new statistical methodology is employed for analyzing data involving incomplete or unknown information. In this thesis, new statistical approaches are developed for improving the accuracy of traction force microscopy (TFM) and data-driven discovery of physical equations. TFM is a versatile method for the reconstruction of a spatial image of the traction forces exerted by cells on elastic gel substrates. The traction force field is calculated from a linear mechanical model connecting the measured substrate displacements with the sought-for cell-generated stresses in real or Fourier space, which is an inverse and ill-posed problem. This inverse problem is commonly solved making use of regularization methods. Here, we systematically test the performance of new regularization methods and Bayesian inference for quantifying the parameter uncertainty in TFM. We compare two classical schemes, L1- and L2-regularization with three previously untested schemes, namely Elastic Net regularization, Proximal Gradient Lasso, and Proximal Gradient Elastic Net. We find that Elastic Net regularization, which combines L1 and L2 regularization, outperforms all other methods with regard to accuracy of traction reconstruction. Next, we develop two methods, Bayesian L2 regularization and Advanced Bayesian L2 regularization, for automatic, optimal L2 regularization. We further combine the Bayesian L2 regularization with the computational speed of Fast Fourier Transform algorithms to develop a fully automated method for noise reduction and robust, standardized traction-force reconstruction that we call Bayesian Fourier transform traction cytometry (BFTTC). This method is made freely available as a software package with graphical user-interface for intuitive usage. 
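    The L2 (Tikhonov) scheme that the Bayesian methods automate can be sketched as a regularized linear solve; `G` and `u` below stand for a toy forward matrix and displacement vector, not a real elastic Green's function:

```python
import numpy as np

def tikhonov_traction(G, u, lam=1e-2):
    """L2-regularized (Tikhonov) solution of the linear model u = G f.

    Minimizes ||G f - u||^2 + lam * ||f||^2 in closed form; a simplified
    stand-in for the regularization schemes compared in the thesis, with an
    illustrative forward matrix rather than an elastic Green's function.
    """
    n = G.shape[1]
    return np.linalg.solve(G.T @ G + lam * np.eye(n), G.T @ u)
```

    In the thesis, Bayesian L2 regularization selects `lam` automatically from the data; here it is fixed by hand.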
Using synthetic and experimental data, we show that these Bayesian methods enable robust reconstruction of tractions without requiring a difficult, data-set-specific selection of regularization parameters. Next, we employ our methodology developed for the solution of inverse problems for the automated, data-driven discovery of ordinary differential equations (ODEs), partial differential equations (PDEs), and stochastic differential equations (SDEs). To find the equations governing a measured time-dependent process, we construct dictionaries of non-linear candidate terms. These candidates are evaluated against the measured data, which allows us to construct a likelihood function for the candidate equations. Optimization then yields a linear inverse problem that is to be solved under a sparsity constraint. We combine Bayesian compressive sensing using Laplace priors with automated thresholding to develop a new approach, automatic threshold sparse Bayesian learning (ATSBL). ATSBL is a robust method to identify ODEs, PDEs, and SDEs involving Gaussian noise, which is also referred to as type I noise. We extensively test the method with synthetic datasets describing physical processes. For SDEs, we combine data-driven inference using ATSBL with a novel entropy-based heuristic for discarding data points with high uncertainty. Finally, we develop an automatic iterative sampling-optimization technique akin to umbrella sampling. Therewith, we demonstrate that data-driven inference of SDEs can be substantially improved through feedback during the inference process if the stochastic process under investigation can be manipulated either experimentally or in simulations.
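    A deliberately simplified stand-in for this sparse dictionary regression (sequentially thresholded least squares rather than the Bayesian ATSBL) illustrates the inference step, here recovering dx/dt = -2x from noiseless samples:

```python
import numpy as np

def threshold_sparse_fit(Theta, dx, thresh=0.1, n_iter=10):
    """Sequentially thresholded least squares on a candidate dictionary.

    A simple stand-in for ATSBL: regress the measured derivative dx onto the
    dictionary Theta of candidate terms, zeroing small coefficients and
    refitting at each pass."""
    xi = np.linalg.lstsq(Theta, dx, rcond=None)[0]
    for _ in range(n_iter):
        small = np.abs(xi) < thresh
        xi[small] = 0.0
        big = ~small
        if big.any():
            xi[big] = np.linalg.lstsq(Theta[:, big], dx, rcond=None)[0]
    return xi

# recover dx/dt = -2x from samples of x(t) = exp(-2t)
t = np.linspace(0.0, 1.0, 200)
x = np.exp(-2.0 * t)
dx = -2.0 * x
Theta = np.column_stack([np.ones_like(x), x, x**2, x**3])
xi = threshold_sparse_fit(Theta, dx)  # only the x-term survives
```

    ATSBL replaces the hard threshold with sparse Bayesian learning under Laplace priors, which also yields uncertainty estimates that this sketch cannot provide.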

    Optical flow analysis reveals that Kinesin-mediated advection impacts on the orientation of microtubules in the Drosophila oocyte.

    The orientation of microtubule networks is exploited by motors to deliver cargoes to specific intracellular destinations, and it is thus essential for cell polarity and function. Reconstituted in vitro systems have contributed greatly to understanding the molecular framework regulating the behavior of microtubule filaments. In cells, however, microtubules are exposed to various biomechanical forces that might impact their orientation, yet little is known about this. Oocytes, which display forceful cytoplasmic streaming, are excellent model systems to study the impact of motion forces on cytoskeletons in vivo. Here we implement variational optical flow analysis as a new approach to analyze the polarity of microtubules in the Drosophila oocyte, a cell that displays distinct Kinesin-dependent streaming. After validating the method as robust for describing microtubule orientation from confocal movies, we find that increasing the speed of flows results in an aberrant plus-end growth direction. Furthermore, we find that in oocytes where Kinesin is unable to induce cytoplasmic streaming, the growth direction of microtubule plus ends is also altered. These findings lead us to propose that cytoplasmic streaming - and thus motion by advection - contributes to the correct orientation of microtubules in vivo. Finally, we propose a possible mechanism by which a specialised cytoplasmic actin network (the actin mesh) acts as a regulator of flow speeds by counteracting the recruitment of Kinesin to microtubules.
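    Variational optical flow can be sketched with the classic Horn-Schunck iteration; this is a generic textbook scheme for two frames, not the specific implementation used in the study:

```python
import numpy as np

def horn_schunck(im1, im2, alpha=1.0, n_iter=100):
    """Minimal variational (Horn-Schunck) optical flow between two frames.

    Minimizes the brightness-constancy residual Ix*u + Iy*v + It plus an
    alpha-weighted smoothness term, via the standard fixed-point iteration
    with a 4-neighbour average and periodic boundaries. Illustrative only."""
    Ix = np.gradient(im1, axis=1)
    Iy = np.gradient(im1, axis=0)
    It = im2 - im1
    u = np.zeros_like(im1)
    v = np.zeros_like(im1)
    for _ in range(n_iter):
        u_avg = 0.25 * (np.roll(u, 1, 0) + np.roll(u, -1, 0)
                        + np.roll(u, 1, 1) + np.roll(u, -1, 1))
        v_avg = 0.25 * (np.roll(v, 1, 0) + np.roll(v, -1, 0)
                        + np.roll(v, 1, 1) + np.roll(v, -1, 1))
        d = (Ix * u_avg + Iy * v_avg + It) / (alpha**2 + Ix**2 + Iy**2)
        u = u_avg - Ix * d
        v = v_avg - Iy * d
    return u, v
```

    Averaging the resulting flow vectors over a region, e.g. `np.arctan2(v.mean(), u.mean())`, gives a dominant flow orientation that can then be compared with the measured plus-end growth directions.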

    Iterative X-ray Spectroscopic Ptychography

    Spectroscopic ptychography is a powerful technique for determining the chemical composition of a sample with high spatial resolution. In spectro-ptychography, a sample is rastered through a focused x-ray beam with varying photon energy, so that a series of phaseless diffraction data is recorded. Each chemical component in the material under investigation has a characteristic absorption and phase contrast as a function of photon energy. Using a dictionary formed by the contrast values at each energy for each chemical component, it is possible to obtain the chemical composition of the material from high-resolution multi-spectral images. This paper presents SPA (Spectroscopic Ptychography with ADMM), a novel algorithm that iteratively solves the spectroscopic blind ptychography problem. We first design a nonlinear spectro-ptychography model based on a Poisson maximum likelihood, and then construct the proposed method from fast iterative splitting operators. SPA can be used to retrieve spectral contrast with either a fully known or an incomplete (partially known) dictionary of reference spectra. By exploiting the redundancy across different spectral measurements, the proposed algorithm achieves higher reconstruction quality than standard state-of-the-art two-step methods. We demonstrate how SPA can recover accurate chemical maps from Poisson-noised measurements, and also show its enhanced robustness when reconstructing reduced-redundancy ptychography data acquired with large scanning step sizes.
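    The "two-step" baseline that SPA is compared against can be sketched as a per-pixel least-squares unmixing of already-reconstructed multi-spectral contrast images against a reference-spectra dictionary (SPA itself works iteratively on the phaseless diffraction data, which this sketch does not attempt):

```python
import numpy as np

def unmix_chemical_maps(stack, dictionary):
    """Least-squares spectral unmixing of per-energy contrast images.

    stack:      (E, H, W) array of contrast images, one per photon energy
    dictionary: (E, C) array of reference spectra, one column per component
    Returns the (C, H, W) chemical maps solving stack ~= dictionary @ maps
    at every pixel. Illustrative shapes; not the SPA algorithm itself."""
    E, H, W = stack.shape
    Y = stack.reshape(E, H * W)
    X, *_ = np.linalg.lstsq(dictionary, Y, rcond=None)
    return X.reshape(dictionary.shape[1], H, W)
```

    Coupling the spectral unmixing with the phase retrieval itself, as SPA does, is what allows the redundancy across energies to be exploited rather than treating each step independently.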

    Unsupervised methods for large-scale, cell-resolution neural data analysis

    In order to keep up with the volume of data, as well as the complexity of experiments and models in modern neuroscience, we need scalable and principled analytic programmes that take into account the scientific goals and the challenges of biological experiments. This work focuses on algorithms that tackle problems throughout the whole data analysis process. I first investigate how to best transform two-photon calcium imaging recordings – sets of contiguous images – into an easier-to-analyse matrix containing the time courses of individual neurons. To this end, I first estimate how the true fluorescence signal is transformed by tissue artefacts and the microscope setup, by learning the parameters of a realistic physical model from recorded data. Next, I describe how individual neural cell bodies may be segmented from the images, based on a cost function tailored to neural characteristics. Finally, I describe an interpretable non-linear dynamical model of neural population activity, which provides immediate scientific insight into complex system behaviour and may spawn a new way of investigating stochastic non-linear dynamical systems. I hope that the algorithms described here will not only be integrated into analytic pipelines for neural recordings, but will also show that algorithmic design should be informed by communication with the broader community, understanding and tackling the challenges inherent in experimental biological science.
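    Once cell bodies are segmented, turning the image stack into a neurons-by-time matrix reduces to averaging each mask over the movie; a minimal sketch (the array shapes are assumptions, not the thesis' data format):

```python
import numpy as np

def roi_time_courses(movie, masks):
    """Extract per-neuron fluorescence traces from a calcium-imaging movie.

    movie: (T, H, W) array of frames
    masks: list of boolean (H, W) cell-body masks from segmentation
    Returns an (n_neurons, T) matrix of mean-fluorescence time courses."""
    return np.array([movie[:, m].mean(axis=1) for m in masks])
```

    Real pipelines additionally correct for the tissue and microscope transformations estimated in the first part of the thesis before averaging; this sketch assumes those corrections have already been applied.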

    Variable metric line-search based methods for nonconvex optimization

    The aim of this thesis is to propose novel iterative first-order methods tailored to a wide class of nonconvex, nondifferentiable optimization problems, in which the objective function is given by the sum of a differentiable, possibly nonconvex function and a convex, possibly nondifferentiable term. Such problems have become ubiquitous in scientific applications such as image and signal processing, where the first term plays the role of the fit-to-data term, describing the relation between the desired object and the measured data, whereas the second is the penalty term, aimed at restricting the search to objects satisfying specific properties. Our approach is twofold: on one hand, we accelerate the proposed methods by making use of suitable adaptive strategies to choose the involved parameters; on the other hand, we ensure convergence by imposing a sufficient decrease condition on the objective function at each iteration. Our first contribution is the development of a novel proximal-gradient method, the Variable Metric Inexact Line-search based Algorithm (VMILA). The proposed approach is innovative from several points of view. First of all, VMILA allows a variable metric to be adopted in the computation of the proximal point with relative freedom of choice; indeed, the only assumption we make is that the involved parameters belong to bounded sets. This is unusual with respect to state-of-the-art proximal-gradient methods, where the parameters are usually chosen by means of a fixed rule or are tightly related to the Lipschitz constant of the problem. 
Second, we introduce an inexactness criterion for computing the proximal point that can be practically implemented in some cases of interest. This aspect is relevant whenever the proximal operator is not available in closed form, which is often the case. Third, the VMILA iterates are computed by performing a line-search along the feasible direction according to a specific Armijo-like condition, which can be considered an extension of the classical Armijo rule proposed in the context of differentiable optimization. The second contribution is given for a special instance of the previously considered optimization problem, where the convex term is assumed to be a finite sum of indicator functions of closed, convex sets. In other words, we consider a problem of constrained differentiable optimization in which the constraints have a separable structure. The classical method for this problem is the nonlinear Gauss-Seidel (GS), or block coordinate descent, method, where the minimization of the objective function is cyclically alternated over the blocks of variables of the problem. In this thesis, we propose an inexact version of the GS scheme, the Cyclic Block Generalized Gradient Projection (CBGGP) method, in which the partial minimization over each block of variables is performed inexactly by means of a fixed number of gradient projection steps. The novelty of the proposed approach consists in the introduction of non-Euclidean metrics in the computation of the gradient projection. As for VMILA, the sufficient decrease of the function is imposed by means of a block version of the Armijo line-search. For both methods, we prove that each limit point of the sequence of iterates is stationary, without any convexity assumption. In the case of VMILA, strong convergence of the iterates to a stationary point is also proved when the objective function satisfies the Kurdyka-Lojasiewicz property. 
Extensive numerical experiments in image processing applications, such as image deblurring and denoising in the presence of non-Gaussian noise, image compression, phase estimation, and blind image deconvolution, show the flexibility of our methods in addressing different nonconvex problems, as well as their ability to effectively accelerate progress towards the solution of the treated problem.
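    A minimal fixed-metric sketch of a VMILA-style iteration (exact l1 prox and a scalar Euclidean metric, so far simpler than the variable-metric, inexact-prox algorithm analysed in the thesis) shows the line-search along the feasible direction:

```python
import numpy as np

def soft_threshold(x, t):
    """Exact proximal operator of t * ||.||_1."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def prox_grad_linesearch(grad_f, f, x0, lam=0.1, step=1.0,
                         beta=0.5, sigma=1e-4, n_iter=100):
    """Proximal-gradient iteration with an Armijo-type backtracking
    line-search along the feasible direction d = y - x, minimizing
    h(x) = f(x) + lam * ||x||_1. Illustrative sketch only."""
    x = np.asarray(x0, dtype=float).copy()
    for _ in range(n_iter):
        grad = grad_f(x)
        y = soft_threshold(x - step * grad, step * lam)  # proximal point
        d = y - x                                        # feasible direction
        # predicted decrease used in the Armijo-like sufficient decrease test
        delta = grad @ d + lam * (np.abs(y).sum() - np.abs(x).sum())
        h_x = f(x) + lam * np.abs(x).sum()
        t = 1.0
        while f(x + t * d) + lam * np.abs(x + t * d).sum() > h_x + sigma * t * delta:
            t *= beta
            if t < 1e-12:
                break
        x = x + t * d
    return x
```

    VMILA additionally allows the metric defining the proximal point to vary across iterations and the proximal point itself to be computed inexactly, which is what this sketch deliberately omits.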