
    Visual Quality Enhancement in Optoacoustic Tomography using Active Contour Segmentation Priors

    Segmentation of biomedical images is essential for studying and characterizing anatomical structures, as well as for detecting and evaluating pathological tissues. Segmentation has further been shown to enhance reconstruction performance in many tomographic imaging modalities by accounting for heterogeneities of the excitation field and of the tissue properties in the imaged region. This is particularly relevant in optoacoustic tomography, where discontinuities in the optical and acoustic tissue properties, if not properly accounted for, may degrade the imaging performance. Efficient segmentation of optoacoustic images is often hampered by the relatively low intrinsic contrast of large anatomical structures, which is further impaired by the limited angular coverage of some commonly employed tomographic imaging configurations. Herein, we analyze the performance of active contour models for boundary segmentation in cross-sectional optoacoustic tomography. The segmented mask is employed to construct a two-compartment model for the acoustic and optical parameters of the imaged tissues, which is subsequently used to improve the accuracy of the image reconstruction routines. The performance of the suggested segmentation and modeling approach is showcased in tissue-mimicking phantoms and small animal imaging experiments. Comment: Accepted for publication in IEEE Transactions on Medical Imaging.
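    To illustrate the general idea, the minimal sketch below segments a cross-sectional image with a morphological active-contour (Chan-Vese) model and turns the resulting mask into a two-compartment map of acoustic and optical parameters. It assumes NumPy and scikit-image; the iteration count, smoothing settings and tissue parameter values are illustrative placeholders, not those of the paper.

```python
# Illustrative sketch only: active-contour segmentation followed by a
# two-compartment parameter assignment. Library choice and all numeric values
# are assumptions, not the authors' implementation.
import numpy as np
from skimage.filters import gaussian
from skimage.segmentation import morphological_chan_vese

def two_compartment_model(image, smoothing_sigma=2.0, n_iter=200):
    """Return a tissue mask plus speed-of-sound and absorption maps."""
    smoothed = gaussian(image, sigma=smoothing_sigma)
    # Level-set evolution settles on the outer tissue boundary.
    mask = morphological_chan_vese(smoothed, n_iter, init_level_set="checkerboard")
    mask = mask.astype(bool)

    # Two compartments: tissue vs. coupling medium (values illustrative).
    speed_of_sound = np.where(mask, 1580.0, 1495.0)   # m/s
    absorption     = np.where(mask, 0.2, 0.02)        # 1/mm
    return mask, speed_of_sound, absorption
```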

    Measuring cellular traction forces on non-planar substrates

    Animal cells use traction forces to sense the mechanics and geometry of their environment. Measuring these traction forces requires a workflow combining cell experiments, image processing and force reconstruction based on elasticity theory. Such procedures have been established before mainly for planar substrates, in which case one can use the Green's function formalism. Here we introduce a workflow to measure traction forces of cardiac myofibroblasts on non-planar elastic substrates. Soft elastic substrates with a wave-like surface topography were micromolded from polydimethylsiloxane (PDMS) and fluorescent marker beads were distributed homogeneously in the substrate. Using feature-vector-based tracking of these marker beads, we first constructed a hexahedral mesh for the substrate. We then solved the direct elastic boundary volume problem on this mesh using the finite element method (FEM). Using data simulations, we show that the traction forces can be reconstructed from the substrate deformations by solving the corresponding inverse problem with an L1-norm for the residual and an L2-norm for zeroth-order Tikhonov regularization. Applying this procedure to the experimental data, we find that cardiac myofibroblast cells tend to align both their shapes and their forces with the long axis of the deformable wavy substrate. Comment: 34 pages, 9 figures.
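    The mixed-norm inversion described above can be prototyped in a few lines once a deformation-to-traction transfer matrix is available. The toy sketch below uses CVXPY with a random stand-in matrix G in place of the FEM-derived operator; the sizes, noise level and regularization weight are arbitrary choices for illustration only.

```python
# Toy illustration of the mixed-norm inversion: L1-norm data residual plus
# L2-norm (zeroth-order Tikhonov) penalty on the traction field. G stands in
# for the FEM-derived transfer matrix; here it is random, purely to keep the
# snippet self-contained.
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(0)
n_def, n_trac = 300, 120                 # measured deformations, unknown tractions
G = rng.normal(size=(n_def, n_trac))
f_true = np.zeros(n_trac)
f_true[rng.choice(n_trac, 10, replace=False)] = rng.normal(size=10)
u = G @ f_true + 0.05 * rng.normal(size=n_def)   # noisy substrate deformations

lam = 0.5                                 # regularization weight (tuned in practice)
f = cp.Variable(n_trac)
objective = cp.Minimize(cp.norm1(G @ f - u) + lam * cp.sum_squares(f))
cp.Problem(objective).solve()
print("relative error:", np.linalg.norm(f.value - f_true) / np.linalg.norm(f_true))
```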

    Pigment Melanin: Pattern for Iris Recognition

    Iris recognition based on Visible Light (VL) imaging is a difficult problem because of light reflections from the cornea. Nonetheless, pigment melanin provides a rich feature source in VL that is unavailable in Near-Infrared (NIR) imaging. This is due to the biological spectroscopy of eumelanin, a chemical that is not stimulated in NIR. A plausible way to observe such patterns is an adaptive procedure using a variational technique on the image histogram. To describe the patterns, a shape analysis method is used to derive a feature code for each subject. An important question is to what extent the melanin patterns extracted in VL are independent of the iris texture seen in NIR. With this question in mind, the present investigation proposes the fusion of features extracted from NIR and VL images to boost recognition performance. We have collected our own database (UTIRIS), consisting of both NIR and VL images of 158 eyes of 79 individuals. This investigation demonstrates that the proposed algorithm is highly sensitive to the patterns of chromophores and improves the iris recognition rate. Comment: To be published in the Special Issue on Biometrics, IEEE Transactions on Instrumentation and Measurement, Volume 59, Issue 4, April 2010.
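    As a schematic illustration of the fusion step, the sketch below concatenates L2-normalized NIR and VL feature vectors and matches them by cosine similarity. The paper derives shape-based feature codes from the melanin patterns; this generic feature-level fusion is only a stand-in, and the modality weights are hypothetical.

```python
# Generic feature-level fusion sketch, not the paper's matcher: concatenate
# normalized NIR and VL feature vectors and score by cosine similarity.
import numpy as np

def l2_normalize(v, eps=1e-12):
    return v / (np.linalg.norm(v) + eps)

def fuse(nir_features, vl_features, w_nir=0.5, w_vl=0.5):
    """Weighted concatenation of the two normalized modality vectors."""
    return np.concatenate([w_nir * l2_normalize(nir_features),
                           w_vl  * l2_normalize(vl_features)])

def match_score(probe, gallery):
    """Cosine similarity between fused probe and gallery templates."""
    return float(np.dot(l2_normalize(probe), l2_normalize(gallery)))
```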

    Magnetic Doppler imaging of alpha^2 Canum Venaticorum in all four Stokes parameters. Unveiling the hidden complexity of stellar magnetic fields

    Strong organized magnetic fields have been studied in upper main sequence chemically peculiar stars for more than half a century. However, only recently have observational methods and numerical techniques become sufficiently mature to allow us to record and interpret high-resolution spectra in all four Stokes parameters, leading to the first assumption-free magnetic field models of these stars. Here we present a detailed magnetic Doppler imaging analysis of spectropolarimetric observations of the prototypical magnetic Ap star alpha^2 CVn. The surface abundance distributions of Fe and Cr and a full vector map of the stellar magnetic field are reconstructed in a self-consistent inversion using our state-of-the-art magnetic Doppler imaging code Invers10. We succeeded in reproducing most of the details of the available spectropolarimetric observations of alpha^2 CVn with a magnetic map that combines a global dipole-like field topology with localized spots of higher field intensity. We demonstrate that these small-scale magnetic structures are required to fit the linear polarization spectra; however, their presence cannot be inferred from the Stokes I and V observations alone. Our magnetic Doppler imaging analysis of alpha^2 CVn and previous results for 53 Cam support the view that upper main sequence stars can harbour fairly complex surface magnetic fields which resemble oblique dipoles only at the largest spatial scales. Spectra in all four Stokes parameters are essential to unveil and meaningfully characterize this field complexity in Ap stars. We therefore suggest that our understanding of the magnetism of stars in other parts of the H-R diagram is similarly incomplete without investigation of their linear polarization spectra. Comment: 16 pages, 12 figures; Accepted for publication by Astronomy & Astrophysics.

    Characterization of vertical cracks using lock-in vibrothermography

    214 p. This thesis focuses on the application of lock-in vibrothermography to the detection and characterization of buried vertical defects. In this technique, the sample is excited with ultrasound, which generates heat at the defects through friction or plastic deformation. This heat diffuses through the material, and its effect can be detected by measuring the surface temperature with an infrared camera. In order to characterize defects, it is necessary to solve the inverse problem, which consists of recovering the geometry of the heat sources from the measured surface temperature distribution. This is an ill-posed problem, since its solution depends strongly on small errors in the data and the inversion is unstable. A robust inversion algorithm has been implemented, based on least-squares minimization stabilized by penalty terms built on the Tikhonov, Total Variation and L1 functionals, capable of reconstructing heat-source distributions from vibrothermography data. The algorithm has been analyzed with synthetic data and optimized in order to extend its application to the widest possible range of heat-source geometries. The results have been verified with experimental data obtained in lock-in vibrothermography tests on samples containing calibrated vertical heat sources. Finally, the inversion algorithm has been used to characterize real cracks in a welded Inconel 718 sample, and the results are in good qualitative agreement with a dye penetrant test performed afterwards.
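    A minimal 1-D sketch of the stabilized inversion idea is given below: a toy Gaussian kernel matrix stands in for the thermal forward model, and only the quadratic (Tikhonov) penalty is shown, since it admits a closed-form solution; the Total Variation and L1 penalties used in the thesis would replace the quadratic term and require an iterative solver.

```python
# Penalized least-squares sketch: recover a heat-source profile q from a
# surface-temperature profile T. K is a toy blurring kernel standing in for
# the real thermal model; all values are illustrative.
import numpy as np

n = 200
x = np.linspace(-1.0, 1.0, n)
K = np.exp(-((x[:, None] - x[None, :]) ** 2) / (2 * 0.05 ** 2))  # toy kernel
K /= K.sum(axis=1, keepdims=True)

q_true = np.zeros(n)
q_true[90:110] = 1.0                                   # buried vertical heat source
T = K @ q_true + 0.01 * np.random.default_rng(1).normal(size=n)  # noisy data

alpha = 1e-2                                           # Tikhonov weight (tuned in practice)
q_rec = np.linalg.solve(K.T @ K + alpha * np.eye(n), K.T @ T)
```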

    Doctor of Philosophy

    Inverse Electrocardiography (ECG) aims to noninvasively estimate the electrophysiological activity of the heart from the voltages measured at the body surface, with promising clinical applications in diagnosis and therapy. The main challenge of this emerging technique lies in its mathematical foundation: an inverse source problem governed by partial differential equations (PDEs) which is severely ill-conditioned. Essential to the success of inverse ECG are computational methods that reliably achieve accurate inverse solutions while harnessing the ever-growing complexity and realism of bioelectric simulation. This dissertation focuses on the formulation, optimization, and solution of the inverse ECG problem based on finite element methods, and consists of two research thrusts. The first thrust explores the optimal finite element discretization specifically oriented towards the inverse ECG problem; in contrast, most existing discretization strategies are designed for forward problems and may become inappropriate for the corresponding inverse problems. Based on a Fourier analysis of how discretization relates to ill-conditioning, this work proposes refinement strategies that optimize the approximation accuracy of the inverse ECG problem while mitigating its ill-conditioning. To realize these strategies, two refinement techniques are developed: one uses hybrid-shaped finite elements whereas the other adapts high-order finite elements. The second research thrust involves a new methodology for inverse ECG solutions called PDE-constrained optimization, an optimization framework that flexibly allows convex objectives and various physically-based constraints. This work features three contributions: (1) formulating the optimization in the continuous space, (2) deriving rigorous finite element solutions, and (3) carrying out the subsequent numerical optimization by a primal-dual interior-point method tailored to the algebraic structure of the given optimization problem. The efficacy of this new method is shown by its application to the localization of cardiac ischemic disease, in which the method, under realistic settings, achieves promising solutions to a previously intractable inverse ECG problem involving the bidomain heart model. In summary, this dissertation advances the computational research of inverse ECG, helping it evolve toward an image-based, patient-specific modality for biomedical research.
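    The constrained, regularized formulation can be illustrated with a generic bounded least-squares toy problem. The sketch below uses a random stand-in transfer matrix and SciPy's lsq_linear in place of the tailored primal-dual interior-point solver; all sizes, bounds and weights are assumptions for illustration.

```python
# Schematic of the constrained formulation: after finite element discretization,
# the inverse ECG problem becomes a regularized least-squares problem in the
# heart potentials x, with physiologically motivated bounds acting as the
# convex constraint. A is a toy transfer matrix, not a torso/heart FEM model.
import numpy as np
from scipy.optimize import lsq_linear

rng = np.random.default_rng(2)
n_body, n_heart = 120, 80
A = rng.normal(size=(n_body, n_heart))            # body-surface transfer matrix (toy)
x_true = np.clip(rng.normal(scale=30.0, size=n_heart), -90.0, 20.0)
b = A @ x_true + rng.normal(scale=0.5, size=n_body)

lam = 1e-2
L = np.eye(n_heart)                               # zeroth-order regularization operator
A_aug = np.vstack([A, np.sqrt(lam) * L])          # stack data term and penalty
b_aug = np.concatenate([b, np.zeros(n_heart)])

sol = lsq_linear(A_aug, b_aug, bounds=(-90.0, 20.0))  # bounded least squares
x_rec = sol.x
```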

    Reconstruction and Simulation of Cellular Traction Forces

    Biological cells are able to sense the stiffness, geometry and topography of their environment and respond sensitively to it. For this purpose, they actively apply contractile forces to the extracellular space, which can be determined by traction force microscopy. In this approach, cells are cultured on elastically deformable substrates and cellular traction patterns are quantitatively reconstructed from measured substrate deformations by solving the inverse elastic problem. In this thesis we investigate the influence of environmental topography on cellular force generation and the distribution of intracellular tension. For this purpose, we reconstruct traction forces on wavy elastic substrates using a novel technique based on finite element methods. In order to relate forces to single cell-matrix contacts and to different structures of the cytoskeleton, we then introduce another novel variant of traction force microscopy, which incorporates cell contraction modeling into the process of traction reconstruction. This approach is robust against experimental noise and does not need regularisation. We apply this method to experimental data to demonstrate that different types of actin fibers in the cell statistically show different contractilities. We complete our investigation with simulation studies that treat cell colonies and single cells as thermoelastically contracting continua coupled to an elastic substrate. In particular, we examine the effect of geometry on cellular behavior in collective cell migration and on tissue invasion during tumor metastasis.
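    The model-based reconstruction idea can be sketched as a non-negative least-squares fit of contraction strengths for a set of modeled contractile elements, which is one way to avoid explicit regularization. In the toy example below the unit-response matrix is random; in the thesis it would come from the finite element model of cell and substrate.

```python
# Rough sketch of model-based traction reconstruction: explain the measured
# substrate deformation as a non-negative combination of precomputed unit
# responses, one per contractile element (e.g. an actin fiber pulling between
# two adhesions). Everything here is synthetic and illustrative.
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(3)
n_meas, n_fibers = 500, 40
unit_responses = rng.normal(size=(n_meas, n_fibers))   # FEM-derived in practice
c_true = np.zeros(n_fibers)
c_true[rng.choice(n_fibers, 8, replace=False)] = rng.uniform(0.5, 2.0, size=8)
deformation = unit_responses @ c_true + 0.05 * rng.normal(size=n_meas)

# Non-negative least squares: contractile elements can only pull, not push.
c_rec, residual = nnls(unit_responses, deformation)
```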

    Uncertainty Quantification and Reduction in Cardiac Electrophysiological Imaging

    Cardiac electrophysiological (EP) imaging involves solving an inverse problem that infers cardiac electrical activity from body-surface electrocardiography data on a physical domain defined by the body torso. To avoid unreasonable solutions that merely fit the data, this inference is often guided by data-independent prior assumptions about properties of the cardiac electrical sources as well as the physical domain. However, these prior assumptions may involve errors and uncertainties that affect the accuracy of the inference. For example, common prior assumptions on the source properties, such as fixed spatial and/or temporal smoothness or sparseness assumptions, may not match the true source properties under different conditions, introducing uncertainty into the inference. Furthermore, prior assumptions on the physical domain, such as the anatomy and tissue conductivity of the different organs in the thorax model, represent an approximation of the physical domain, introducing errors into the inference. To determine the robustness of EP imaging systems for future clinical practice, it is important to identify these errors and uncertainties and assess their impact on the solution. This dissertation focuses on quantifying and reducing the impact on the EP imaging solution of uncertainties caused by prior assumptions and models of the cardiac source properties, as well as of anatomical modeling uncertainties. To assess the effect of fixed prior assumptions and models about cardiac source properties on the solution of EP imaging, we propose a novel yet simple Lp-norm regularization method for volumetric cardiac EP imaging. This study demonstrates the need for an adaptive prior model (rather than a fixed one) to constrain the complex, spatiotemporally changing properties of the cardiac sources. We then propose a multiple-model Bayesian approach to cardiac EP imaging that employs a continuous combination of prior models, each reflecting a specific spatial property of volumetric sources. The 3D source estimate is then obtained as a weighted combination of the solutions across all models. By including a continuous combination of prior models, the proposed method reduces the chance of a mismatch between the prior models and the true source properties, which in turn enhances the robustness of the EP imaging solution. To quantify the impact of anatomical modeling uncertainties on the EP imaging solution, we propose a systematic statistical framework. Built on statistical shape modeling and the unscented transform, our method quantifies anatomical modeling uncertainties and establishes their relation to the EP imaging solution. Applied to anatomical models generated from different image resolutions and different segmentations, it assesses the robustness of the EP imaging solution to these anatomical shape-detail variations. We then propose a simplified anatomical model of the heart that incorporates only certain subject-specific anatomical parameters while discarding local shape details. Requiring fewer resources and less processing, this simplified model provides a simple, clinically compatible anatomical modeling pipeline for EP imaging systems. The different components of our proposed methods are validated through a comprehensive set of synthetic and real-data experiments, including various typical pathological conditions and/or diagnostic procedures, such as myocardial infarction and pacing.
    Overall, the methods presented in this dissertation for the quantification and reduction of uncertainties in cardiac EP imaging enhance the robustness of EP imaging, helping to close the gap between EP imaging in research and its clinical application.
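    The Lp-norm regularization idea can be illustrated with a generic iteratively reweighted least-squares (IRLS) sketch that interpolates between a sparsity-promoting prior (p near 1) and a smooth prior (p = 2). The operator, sizes and parameter choices below are placeholders, not the dissertation's volumetric EP imaging model.

```python
# Generic IRLS sketch for the Lp-norm regularized inverse problem
#   min_x ||A x - b||_2^2 + lam * ||x||_p^p,   1 <= p <= 2.
# A and all parameters are stand-ins for illustration only.
import numpy as np

def lp_irls(A, b, p=1.2, lam=1e-2, n_iter=30, eps=1e-6):
    n = A.shape[1]
    x = np.zeros(n)
    AtA, Atb = A.T @ A, A.T @ b
    for _ in range(n_iter):
        # Quadratic majorizer of |x_i|^p yields per-coefficient weights.
        w = (x ** 2 + eps) ** ((p - 2.0) / 2.0)
        x = np.linalg.solve(AtA + lam * np.diag(w), Atb)
    return x
```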

    A proximal iteration for deconvolving Poisson noisy images using sparse representations

    We propose an image deconvolution algorithm for data contaminated by Poisson noise. The image to restore is assumed to be sparsely represented in a dictionary of waveforms such as the wavelet or curvelet transforms. Our key contributions are the following. First, we handle the Poisson noise properly by using the Anscombe variance stabilizing transform, leading to a non-linear degradation equation with additive Gaussian noise. Second, the deconvolution problem is formulated as the minimization of a convex functional with a data-fidelity term reflecting the noise properties and a non-smooth, sparsity-promoting penalty over the image representation coefficients (e.g. the ℓ1-norm). Third, a fast iterative backward-forward splitting algorithm is proposed to solve the minimization problem. We derive existence and uniqueness conditions for the solution and establish convergence of the iterative algorithm. Finally, a GCV-based model selection procedure is proposed to objectively select the regularization parameter. Experimental results demonstrate the striking benefits gained from taking into account the Poisson statistics of the noise. These results also suggest that using sparse-domain regularization may be tractable in many deconvolution applications with Poisson noise, such as astronomy and microscopy.
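    A simplified sketch of the resulting scheme is shown below: the Anscombe-stabilized data term is minimized together with an l1 penalty by a proximal gradient (forward-backward) loop. For brevity, sparsity is imposed directly in the image domain and the blur is an isotropic Gaussian; the paper works with wavelet/curvelet dictionaries, proves convergence, and selects the regularization parameter by GCV, none of which is reproduced here.

```python
# Simplified proximal-gradient sketch, illustrative only.
import numpy as np
from scipy.ndimage import gaussian_filter

def blur(x, sigma=2.0):
    # Isotropic Gaussian PSF; the kernel is symmetric, so H^T = H.
    return gaussian_filter(x, sigma)

def deconvolve_poisson(y, lam=0.05, step=0.2, n_iter=200):
    """Forward-backward iteration on the Anscombe-stabilized data term."""
    z = 2.0 * np.sqrt(y + 3.0 / 8.0)          # Anscombe variance stabilization
    x = np.maximum(y.astype(float), 0.0)      # initial estimate
    for _ in range(n_iter):
        s = np.sqrt(blur(x) + 3.0 / 8.0)
        grad = blur((2.0 * s - z) / s)        # gradient of the smooth data term
        # Proximal step for lam*||x||_1 plus positivity; 'step' is hand-tuned
        # here, small enough for the locally Lipschitz gradient.
        x = np.maximum(x - step * grad - step * lam, 0.0)
    return x
```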