
    Traction force microscopy on soft elastic substrates: a guide to recent computational advances

    The measurement of cellular traction forces on soft elastic substrates has become a standard tool for many labs working on mechanobiology. Here we review the basic principles and different variants of this approach. In general, the extraction of the substrate displacement field from image data and the reconstruction procedure for the forces are closely linked to each other and limited by the presence of experimental noise. We discuss different strategies to reconstruct cellular forces as they follow from the foundations of elasticity theory, including two- versus three-dimensional, inverse versus direct and linear versus non-linear approaches. We also discuss how biophysical models can improve force reconstruction and comment on practical issues like substrate preparation, image processing and the availability of software for traction force microscopy. Comment: RevTeX, 29 pages, 3 PDF figures, 2 tables. BBA - Molecular Cell Research, online since 27 May 2015, special issue on mechanobiology.
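
For the inverse, linear, two-dimensional case discussed above, a common implementation is regularized Fourier-transform traction cytometry (FTTC). The following is a minimal NumPy sketch of that idea, assuming a displacement field sampled on a regular grid, a substrate modeled as a linear elastic half-space (Boussinesq Green's function in Fourier space) and zeroth-order Tikhonov regularization; the function name and the values of E, nu and lam are illustrative, not taken from the review.

```python
import numpy as np

def fttc_tractions(ux, uy, spacing, E=10e3, nu=0.5, lam=1e-9):
    """Tikhonov-regularized FTTC sketch: tractions from displacements.

    ux, uy : 2D displacement components on a regular grid (m)
    spacing: grid spacing (m); E: Young's modulus (Pa); nu: Poisson ratio
    lam    : Tikhonov regularization weight (illustrative value)
    """
    ny, nx = ux.shape
    kx = 2 * np.pi * np.fft.fftfreq(nx, d=spacing)
    ky = 2 * np.pi * np.fft.fftfreq(ny, d=spacing)
    KX, KY = np.meshgrid(kx, ky)
    k = np.sqrt(KX**2 + KY**2)
    k[0, 0] = 1.0  # avoid division by zero; the k=0 mode is fixed below

    # Fourier-space Boussinesq Green's function of an elastic half-space
    c = 2 * (1 + nu) / (E * k**3)
    Gxx = c * ((1 - nu) * k**2 + nu * KY**2)
    Gyy = c * ((1 - nu) * k**2 + nu * KX**2)
    Gxy = -c * nu * KX * KY

    Ux, Uy = np.fft.fft2(ux), np.fft.fft2(uy)

    # Solve (G^T G + lam I) F = G^T U wavevector by wavevector (G symmetric)
    A11 = Gxx**2 + Gxy**2 + lam
    A12 = Gxy * (Gxx + Gyy)
    A22 = Gyy**2 + Gxy**2 + lam
    b1 = Gxx * Ux + Gxy * Uy
    b2 = Gxy * Ux + Gyy * Uy
    det = A11 * A22 - A12**2
    Fx = (A22 * b1 - A12 * b2) / det
    Fy = (A11 * b2 - A12 * b1) / det
    Fx[0, 0] = Fy[0, 0] = 0.0  # enforce zero net traction (force balance)

    return np.fft.ifft2(Fx).real, np.fft.ifft2(Fy).real
```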

    Bayesian field theoretic reconstruction of bond potential and bond mobility in single molecule force spectroscopy

    Quantifying the forces between and within macromolecules is a necessary first step in understanding the mechanics of molecular structure, protein folding, and enzyme function and performance. In such macromolecular settings, dynamic single-molecule force spectroscopy (DFS) has been used to distort bonds. The resulting responses, in the form of rupture forces, work applied, and trajectories of displacements, have been used to reconstruct bond potentials. Such approaches often rely on simple parameterizations of one-dimensional bond potentials, assumptions on equilibrium starting states, and/or large amounts of trajectory data. Parametric approaches typically fail at inferring complex-shaped bond potentials with multiple minima, while piecewise estimation may not guarantee smooth results with the appropriate behavior at large distances. Existing techniques, particularly those based on work theorems, also do not address spatial variations in the diffusivity that may arise from spatially inhomogeneous coupling to other degrees of freedom in the macromolecule, thereby presenting an incomplete picture of the overall bond dynamics. To solve these challenges, we have developed a comprehensive empirical Bayesian approach that incorporates data and regularization terms directly into a path integral. All experimental and statistical parameters in our method are estimated empirically directly from the data. Upon testing our method on simulated data, our regularized approach requires fewer data and allows simultaneous inference of both complex bond potentials and diffusivity profiles. Comment: In review - Python source code available on GitHub. Abridged abstract on arXiv.
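
The paper's path-integral empirical-Bayes machinery is beyond a short snippet, but the baseline it improves on is easy to illustrate: simulate an overdamped Langevin trajectory in a double-well bond potential and recover the potential by Boltzmann inversion of the sampled histogram, which assumes an equilibrium start and constant diffusivity (exactly the assumptions the authors aim to relax). All names and parameter values below are illustrative.

```python
import numpy as np

# Double-well "bond potential" (in kT units) and the corresponding force
U = lambda x: (x**2 - 1.0)**2
F = lambda x: -4.0 * x * (x**2 - 1.0)

def simulate_trajectory(n_steps=200_000, dt=1e-3, D=1.0, x0=1.0, seed=0):
    """Overdamped Langevin dynamics: dx = D*F(x)*dt + sqrt(2*D*dt)*noise (kT = 1)."""
    rng = np.random.default_rng(seed)
    x = np.empty(n_steps)
    x[0] = x0
    noise = rng.standard_normal(n_steps - 1)
    for i in range(n_steps - 1):
        x[i + 1] = x[i] + D * F(x[i]) * dt + np.sqrt(2 * D * dt) * noise[i]
    return x

# Boltzmann inversion: U(x) = -kT * ln p(x), valid only at equilibrium and with
# constant diffusivity -- the simple baseline the Bayesian method improves on.
traj = simulate_trajectory()
hist, edges = np.histogram(traj, bins=60, density=True)
centers = 0.5 * (edges[:-1] + edges[1:])
mask = hist > 0
U_est = -np.log(hist[mask])
U_est -= U_est.min()                        # fix the arbitrary additive constant
print(np.c_[centers[mask][:5], U_est[:5]])  # sampled positions vs estimated potential (kT)
```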

    Model-based and machine learning techniques for non-linear image reconstruction in diffuse optical tomography

    Diffuse optical tomography (DOT) is a low-cost and noninvasive 3D biomedical imaging technique for reconstructing the optical properties of biological tissues. Image reconstruction in DOT is inherently difficult because the inversion process is nonlinear and ill-posed: the optical properties of the medium must be recovered from boundary measurements taken at its surface. In this work, two approaches are proposed for non-linear DOT image reconstruction. The first approach relies on iterative model-based image reconstruction, an approach still under development in the literature. A 3D forward model is developed based on the diffusion equation, an approximation of the radiative transfer equation; it can simulate light propagation in complex geometries and handles different types of optical data, such as continuous-wave (CW) and time-domain (TD) measurements, for both intrinsic and fluorescence signals.
First, a multispectral image reconstruction algorithm is developed to reconstruct the concentrations of different tissue chromophores simultaneously from a set of CW measurements at different wavelengths. A second image reconstruction algorithm is developed to reconstruct the fluorescence lifetimes (FLTs) of different fluorescent markers from time-domain fluorescence measurements. In this algorithm, all the information contained in the full temporal curves is used, along with an acceleration technique that makes the algorithm practical. Moreover, the proposed algorithm has the potential to distinguish more than three FLTs, which would be a first in fluorescence imaging. The second approach is based on machine learning, specifically deep learning. A deep generative model is proposed to reconstruct the fluorescence distribution map from CW fluorescence measurements; this is the first time such a model has been applied to fluorescence DOT image reconstruction. The method is validated with an optical phantom of known optical properties containing fluorescent markers, and it recovers the fluorescence distribution even from very noisy and sparse measurements, a major limitation in fluorescence DOT imaging.
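
As a minimal sketch of the iterative model-based idea described above (not the thesis' actual algorithms), the loop below fits unknown parameters to measurements through a nonlinear forward model using damped Gauss-Newton updates; a toy exponential-attenuation model stands in for the diffusion-approximation forward model, and all names and values are illustrative.

```python
import numpy as np

def forward(mu, L):
    """Toy nonlinear forward model y = exp(-L @ mu).
    L plays the role of path lengths, mu of absorption coefficients
    (a stand-in for the diffusion-approximation forward model)."""
    return np.exp(-L @ mu)

def jacobian(mu, L):
    """Analytic Jacobian of the toy model: dy_i/dmu_j = -L_ij * y_i."""
    return -forward(mu, L)[:, None] * L

def reconstruct(y_meas, L, n_iter=20, lam=1e-2):
    """Damped Gauss-Newton (Levenberg-Marquardt-style) iterations."""
    mu = np.zeros(L.shape[1])
    for _ in range(n_iter):
        r = y_meas - forward(mu, L)              # data residual
        J = jacobian(mu, L)
        H = J.T @ J + lam * np.eye(L.shape[1])   # damped normal equations
        mu = mu + np.linalg.solve(H, J.T @ r)    # Gauss-Newton update
    return mu

rng = np.random.default_rng(1)
L = rng.uniform(0.5, 2.0, size=(40, 10))          # toy "geometry"
mu_true = rng.uniform(0.01, 0.1, size=10)         # toy optical properties
y = forward(mu_true, L) + 1e-3 * rng.standard_normal(40)
print(np.round(reconstruct(y, L), 3))
print(np.round(mu_true, 3))
```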

    Molecular Imaging

    This book gives a broad overview of molecular imaging. A practical approach runs as a common thread through the whole book, while detailed background information reaches deep into the molecular and cellular level. Ideas on how molecular imaging may develop in the near future are a particular highlight. The book should be of special interest since the contributors are members of leading research groups from around the world.

    Using state-of-the-art inverse problem techniques to develop reconstruction methods for fluorescence diffuse optical tomography

    An inverse problem is a mathematical framework used to obtain information about a physical object or system from observed measurements. It typically arises when we wish to infer internal properties from external measurements, and it has many applications in science and technology, such as medical imaging, geophysical imaging, image deblurring, image inpainting, electromagnetic scattering, acoustics, machine learning, mathematical finance and physics. The main goal of this PhD thesis was to use state-of-the-art inverse problem techniques to develop modern reconstruction methods for solving the fluorescence diffuse optical tomography (fDOT) problem. fDOT is a molecular imaging technique that enables the quantification of tomographic (3D) bio-distributions of fluorescent tracers in small animals. One of the main difficulties in fDOT is that the high absorption and scattering of biological tissues lead to an ill-posed inverse problem, yielding multiple non-unique and unstable solutions, so the problem requires regularization to achieve a stable solution. The so-called non-contact fDOT scanners use CCDs as virtual detectors instead of optical fibers in contact with the sample. These non-contact systems generate huge datasets that lead to a computationally demanding inverse problem; techniques that minimize the size of the acquired datasets without losing image performance are therefore highly desirable. The first part of this thesis addresses the optimization of experimental setups to reduce the dataset size using l₂-based regularization techniques. The second part, motivated by the success of l₁ regularization for denoising and image reconstruction, is devoted to advanced regularization using l₁-based techniques, and the last part introduces compressed sensing (CS) theory, which enables further reduction of the acquired dataset size.
The main contributions of this thesis are: 1) A feasibility study (to our knowledge, the first for fDOT) of the automatic U-curve method for selecting the regularization parameter (l₂-norm). The U-curve method proved to be an excellent automatic method for dealing with large datasets because it reduces the regularization parameter search to a suitable interval. 2) Once an automatic method to choose the l₂ regularization parameter was established, singular value analysis (SVA) of the fDOT forward matrix was used to maximize the information content of the acquired measurements and minimize the computational cost. It was shown for the first time that large meshes can be made coarser in the z direction without any loss in imaging performance, while reducing computational time and memory requirements. 3) Turning to l₁-based regularization, we presented a novel iterative algorithm, ART-SB, that combines the ability of the algebraic reconstruction technique (ART) to handle large datasets with Split Bregman (SB) denoising, an approach shown to be optimal for total variation (TV) denoising. SB is implemented in a cost-efficient way to handle large datasets, which makes ART-SB more computationally efficient than previous TV-based reconstruction algorithms and most splitting approaches. 4) Finally, we proposed a novel approach to CS for fDOT, named the SB-SVA iterative method. This approach is based on the analysis-based co-sparse representation model, in which an analysis operator multiplies the image, transforming it into a sparse one. Taking advantage of the CS-SB algorithm, we restrict the solution reached at each CS-SB iteration to a space where the singular values of the forward matrix and the sparsity structure combine beneficially, i.e. where very small singular values are not associated with non-zero entries of the sparse solution. In this way, SB-SVA indirectly enforces the well-conditioning of the forward matrix while designing (learning) the analysis operator and finding the solution. Furthermore, SB-SVA outperforms the CS-SB algorithm in terms of image quality and needs fewer acquisition parameters. The approaches presented here have been validated with experimental data.
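
Contribution 1) above concerns automatic selection of the l₂ (Tikhonov) regularization parameter with the U-curve method. The sketch below shows one common form of that criterion on a toy ill-conditioned problem: compute the Tikhonov solution over a grid of parameters and pick the one minimizing the sum of the reciprocals of the squared residual and solution norms. The forward matrix and all values are illustrative, not the fDOT system matrix.

```python
import numpy as np

def tikhonov(A, b, lam):
    """Tikhonov-regularized solution x = argmin ||Ax - b||^2 + lam^2 ||x||^2."""
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + lam**2 * np.eye(n), A.T @ b)

def u_curve_lambda(A, b, lambdas):
    """U-curve criterion: choose lam minimizing 1/residual^2 + 1/solution_norm^2."""
    best_lam, best_u = None, np.inf
    for lam in lambdas:
        x = tikhonov(A, b, lam)
        rho = np.linalg.norm(A @ x - b)   # residual norm
        eta = np.linalg.norm(x)           # solution norm
        u = 1.0 / rho**2 + 1.0 / eta**2
        if u < best_u:
            best_lam, best_u = lam, u
    return best_lam

rng = np.random.default_rng(0)
A = rng.standard_normal((200, 50)) @ np.diag(np.logspace(0, -4, 50))  # ill-conditioned toy matrix
x_true = rng.standard_normal(50)
b = A @ x_true + 1e-3 * rng.standard_normal(200)
print("U-curve lambda:", u_curve_lambda(A, b, np.logspace(-6, 1, 50)))
```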

    Accelerated High-Resolution Photoacoustic Tomography via Compressed Sensing

    Current 3D photoacoustic tomography (PAT) systems offer either high image quality or high frame rates, but they are not able to deliver high spatial and temporal resolution simultaneously, which limits their ability to image dynamic processes in living tissue. A particular example is the planar Fabry-Perot (FP) scanner, which yields high-resolution images but takes several minutes to sequentially map the photoacoustic field on the sensor plane, point by point. However, as the spatio-temporal complexity of many absorbing tissue structures is rather low, the data recorded in such a conventional, regularly sampled fashion is often highly redundant. We demonstrate that combining variational image reconstruction methods using spatial sparsity constraints with novel PAT acquisition systems capable of sub-sampling the acoustic wave field can dramatically increase the acquisition speed while maintaining good spatial resolution. First, we describe and model two general spatial sub-sampling schemes. Then, we discuss how to implement them using the FP scanner and demonstrate the potential of these novel compressed sensing PAT devices through simulated data from a realistic numerical phantom and through measured data from a dynamic experimental phantom as well as from in-vivo experiments. Our results show that images with good spatial resolution and contrast can be obtained from highly sub-sampled PAT data if variational image reconstruction methods that describe the tissue structures with suitable sparsity constraints are used. In particular, we examine the use of total variation regularization enhanced by Bregman iterations. These novel reconstruction strategies offer new opportunities to dramatically increase the acquisition speed of PAT scanners that employ point-by-point sequential scanning, as well as to reduce the channel count of parallelized schemes that use detector arrays.
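
The reconstruction pipeline described above combines sub-sampled acquisition with sparsity-constrained variational reconstruction (TV with Bregman iterations). The snippet below is not that pipeline; it is a minimal illustration of the underlying compressed sensing principle, recovering a sparse 1D signal from a few random measurements with ISTA (iterative soft-thresholding) applied to an l1-regularized least-squares problem. The sensing matrix, sparsity level and threshold are illustrative.

```python
import numpy as np

def ista(A, b, lam=0.05, n_iter=500):
    """Iterative soft-thresholding (ISTA) for min 0.5*||Ax - b||^2 + lam*||x||_1."""
    L = np.linalg.norm(A, 2) ** 2            # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = A.T @ (A @ x - b)             # gradient of the data term
        z = x - grad / L
        x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft threshold
    return x

rng = np.random.default_rng(0)
n, m, k = 256, 64, 8                         # signal size, #measurements, sparsity
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
A = rng.standard_normal((m, n)) / np.sqrt(m) # random sub-sampling-like sensing matrix
b = A @ x_true + 1e-3 * rng.standard_normal(m)

x_rec = ista(A, b)
print("recovered support:", np.sort(np.flatnonzero(np.abs(x_rec) > 0.05)))
print("true support:     ", np.sort(np.flatnonzero(x_true)))
```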