Deep Learning Formulation of ECGI for Data-driven Integration of Spatiotemporal Correlations and Imaging Information
The challenge of non-invasive Electrocardiographic Imaging (ECGI) is to recreate the electrical activity of the heart from body surface potentials, a task complicated by the ill-posed nature of the inverse problem. We propose a novel method based on Conditional Variational Autoencoders, using deep generative neural networks, to overcome this challenge. By conditioning the electrical activity on heart shape and electrical potentials, our model generates activation maps with good accuracy on simulated data (mean square error, MSE = 0.095). This method differs from other formulations because it naturally accounts for spatio-temporal correlations, as well as the imaging substrate, through convolutions and conditioning. We believe these features can help improve ECGI results.
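The core mechanism the abstract describes, a decoder that receives a latent code concatenated with conditioning information, can be sketched in a few lines. This is a minimal illustration of the conditional-VAE idea only; all dimensions, weights, and the single decoder layer are illustrative assumptions, not the paper's architecture.

```python
import numpy as np

rng = np.random.default_rng(5)

def reparameterize(mu, log_var):
    # z = mu + sigma * eps, the reparameterization trick used to train VAEs
    eps = rng.standard_normal(mu.shape)
    return mu + np.exp(0.5 * log_var) * eps

# Illustrative sizes: latent code, conditioning vector (standing in for heart
# geometry / body-surface potentials), and output activation map
latent_dim, cond_dim, out_dim = 8, 16, 32
W_dec = rng.standard_normal((latent_dim + cond_dim, out_dim)) * 0.1

c = rng.standard_normal(cond_dim)           # conditioning information
mu, log_var = np.zeros(latent_dim), np.zeros(latent_dim)
z = reparameterize(mu, log_var)

# The decoder sees [z, c], so generation is conditioned on c (one toy layer)
activation_map = np.tanh(np.concatenate([z, c]) @ W_dec)
```

The key design point is the concatenation: the decoder can only produce activation maps consistent with the conditioning vector it is given.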
FDOT reconstruction and setting optimization using singular value analysis with automatic thresholding
Paper presented at: 2009 IEEE Nuclear Science Symposium Conference Record, held in Orlando, Florida, USA, 24 October to 1 November 2009. Fluorescence Enhanced Diffuse Optical Tomography (FDOT) retrieves 3D distributions of fluorophore concentration in small animals, non-invasively and in vivo. The FDOT problem can be formulated as a system of equations, d = Wf, where W is a weight matrix that couples the measurements (d) to the unknown spatial distribution (f) of the fluorophore concentration (forward problem). The Singular Value Decomposition (SVD) of W has been previously employed to solve the inverse problem (image reconstruction) and to study the imaging performance of FDOT. To achieve good image quality, it is necessary to determine the number of useful singular values to retain. We use an automatic method that analytically calculates a threshold to select the significant singular values for SVD reconstruction of FDOT experiments, previously validated in our laboratory. This work then appraises the effect of different settings of the acquisition parameters (distribution of mesh points, density of sources and detectors) of a parallel-plate non-contact FDOT system, in order to achieve the best possible imaging performance, i.e., minimum number of singular values of W, maximum information content in the acquired measurements, and minimum computational cost. We conclude that a mesh with lower density in the direction perpendicular to the plates achieves better performance than the usual isotropic mesh point distribution. Any increase in the number of mesh points, sources, and detectors at distances shorter than the photon mean free path leads to only slight improvements in image quality while increasing computational burden.
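The truncated-SVD reconstruction of d = Wf that the abstract builds on can be sketched as follows. The matrix sizes, the simple relative threshold, and the toy ill-conditioning are illustrative assumptions; the paper computes its threshold analytically.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy forward problem d = W f (illustrative sizes, not a real FDOT weight matrix)
n_meas, n_vox = 40, 60
W = rng.standard_normal((n_meas, n_vox))
W[:, 30:] *= 1e-6            # make some directions nearly unobservable (ill-posed)
f_true = np.zeros(n_vox)
f_true[5] = 1.0              # a single fluorescent voxel
d = W @ f_true + 1e-8 * rng.standard_normal(n_meas)

# SVD of W, then keep only singular values above a threshold
U, s, Vt = np.linalg.svd(W, full_matrices=False)
threshold = 1e-3 * s[0]      # hypothetical relative cut-off for this sketch
k = int(np.sum(s > threshold))

# Truncated pseudoinverse reconstruction: f = V_k S_k^-1 U_k^T d
f_rec = Vt[:k].T @ ((U[:, :k].T @ d) / s[:k])
```

Retaining too many singular values amplifies noise through the small 1/s terms; retaining too few discards signal, which is why the choice of k (the threshold) drives image quality.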
Feasibility of U-curve method to select the regularization parameter for fluorescence diffuse optical tomography in phantom and small animal studies
When dealing with ill-posed problems such as fluorescence diffuse optical tomography (fDOT), the choice of the regularization parameter is extremely important for computing a reliable reconstruction. Several automatic methods for selecting the regularization parameter have been introduced over the years, and their performance depends on the particular inverse problem. Herein, a U-curve-based algorithm for selecting the regularization parameter has been applied for the first time to fDOT. To increase computational efficiency for large systems, an interval for the regularization parameter is desirable. The U-curve provided a suitable selection of the regularization parameter in terms of Picard's condition, image resolution, and image noise. Results are shown on both phantom and mouse data. This work was supported in part by Fundación Caja Navarra (#12180), Ministerio de Ciencia e Innovación (FPI program, TEC2008-06715 and TEC2007-64731), and EU-FP7 project FMTXCT-201792.
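The U-curve criterion selects the Tikhonov parameter by minimizing the sum of the reciprocals of the squared residual norm and the squared solution norm over a grid of candidate values. The sketch below applies it to a toy ill-posed system; the matrix, spectrum decay, and grid are illustrative assumptions, not the paper's fDOT setup.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy ill-posed system d = W f + noise (stand-in for an fDOT weight matrix):
# impose a rapidly decaying singular spectrum on a random 50x50 matrix
W0 = rng.standard_normal((50, 50))
U_, s, Vt = np.linalg.svd(W0)
s = s * np.exp(-0.3 * np.arange(50))
W = U_ @ np.diag(s) @ Vt
f_true = np.sin(np.linspace(0, np.pi, 50))
d = W @ f_true + 1e-3 * rng.standard_normal(50)

def tikhonov(lam):
    # f_lam = argmin ||W f - d||^2 + lam^2 ||f||^2, via the filtered SVD solution
    return Vt.T @ (s / (s**2 + lam**2) * (U_.T @ d))

def ucurve(lam):
    # U(lam) = 1/||W f_lam - d||^2 + 1/||f_lam||^2: large at both extremes,
    # so its minimum balances data fit against solution size
    f = tikhonov(lam)
    return 1.0 / np.sum((W @ f - d) ** 2) + 1.0 / np.sum(f ** 2)

lams = np.logspace(-6, 1, 200)
lam_best = lams[np.argmin([ucurve(l) for l in lams])]
f_rec = tikhonov(lam_best)
```

For very small lam the residual vanishes and 1/residual blows up; for very large lam the solution shrinks and 1/||f|| blows up, so the criterion is minimized in between, which is what makes an a priori search interval useful.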
A method for small-animal PET/CT alignment calibration. Physics in Medicine & Biology
Small-animal positron-emission tomography/computed tomography (PET/CT) scanners provide anatomical and molecular imaging, which enables the joint visualization and analysis of both types of data. A proper alignment calibration procedure is essential for small-animal imaging, since resolution is much higher than in human devices. This work presents an alignment phantom and two different calibration methods that provide a reliable and repeatable measurement of the spatial geometrical alignment between the PET and CT subsystems of a hybrid scanner. The phantom can be built using laboratory materials and is meant to estimate the rigid spatial transformation that aligns both modalities. It consists of three glass capillaries filled with a positron-emitter solution and positioned in a non-coplanar triangular geometry inside the system field of view. The calibration methods proposed are both based on automatic line detection, but with different approaches to calculating the transformation of the lines between the modalities. Our results show an average accuracy of the alignment estimation of 0.39 mm over the whole field of view. This study was funded by CDTI under the CENIT Program (AMIT Project), projects ARTEMIS S2009/DPI-1802 (CAM) and TEC2010-21619-C04-01, and supported by the Spanish Ministry of Economy and Competitiveness.
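Estimating the rigid transformation that aligns two modalities is, at its core, a Procrustes problem. The sketch below uses the standard Kabsch algorithm on matched fiducial points; this is a simplified point-based analogue of the paper's line-based calibration, and all coordinates and the ground-truth transform are made up for illustration.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical matched fiducial points from the CT volume (mm)
ct_pts = rng.uniform(-20, 20, size=(6, 3))

# Ground-truth rigid transform: small rotation about z plus a translation
theta = np.deg2rad(5.0)
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0,            0.0,           1.0]])
t_true = np.array([1.2, -0.7, 3.0])
pet_pts = ct_pts @ R_true.T + t_true          # same points seen in PET space

# Kabsch: subtract centroids, SVD the cross-covariance, rebuild R and t
ct_c, pet_c = ct_pts.mean(axis=0), pet_pts.mean(axis=0)
H = (ct_pts - ct_c).T @ (pet_pts - pet_c)
U, _, Vt = np.linalg.svd(H)
D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # avoid reflections
R_est = Vt.T @ D @ U.T
t_est = pet_c - R_est @ ct_c
```

With noise-free correspondences the transform is recovered exactly; with real detections the same closed form gives the least-squares rigid fit, which is why line or point fiducials in a non-coplanar geometry suffice to calibrate the two subsystems.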
High-resolution dynamic cardiac MRI on small animals using reconstruction based on Split Bregman methodology
Paper presented at: Nuclear Science Symposium and Medical Imaging Conference (NSS/MIC), held 23-29 October 2011 in Valencia. Dynamic cardiac magnetic resonance imaging in small animals is an important tool in the study of cardiovascular diseases. Reducing the long acquisition times required for cardiovascular applications is crucial to achieve good spatiotemporal resolution and signal-to-noise ratio. Many acceleration techniques can reduce acquisition time, including compressed sensing. Compressed sensing allows image reconstruction from undersampled data by means of a nonlinear reconstruction that minimizes the total variation of the image. The recently introduced Split Bregman methodology has proved more computationally efficient at solving this problem than classic optimization methods. In dynamic magnetic resonance imaging, compressed sensing can also exploit temporal sparsity by minimizing total variation across both space and time. In this work, we propose and validate the Split Bregman method to minimize spatial and temporal total variation, and apply it to accelerate cardiac cine acquisitions in rats. We found that, applying a quasi-random variable-density pattern along the phase-encoding direction, accelerations up to a factor of 5 are possible with low error. In the future, we expect to obtain higher accelerations using spatiotemporal undersampling.
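The heart of Split Bregman is splitting the total-variation term into an auxiliary variable solved in closed form by soft-thresholding, alternated with a quadratic subproblem. The sketch below applies it to 1-D TV denoising; the paper's setting is 2-D + time MRI reconstruction from undersampled k-space, and all parameters here are illustrative assumptions.

```python
import numpy as np

def shrink(x, gamma):
    # Soft-thresholding: the closed-form d-subproblem in Split Bregman
    return np.sign(x) * np.maximum(np.abs(x) - gamma, 0.0)

def tv_denoise_1d(f, mu, lam=1.0, n_iter=100):
    """Split Bregman for min_u 0.5*||u - f||^2 + mu*||D u||_1 (1-D TV).
    A minimal sketch of the methodology, not the paper's MRI solver."""
    n = len(f)
    D = np.diff(np.eye(n), axis=0)       # forward-difference operator
    A = np.eye(n) + lam * D.T @ D        # u-subproblem normal matrix
    u = f.copy()
    d = np.zeros(n - 1)                  # auxiliary variable d ~ D u
    b = np.zeros(n - 1)                  # Bregman variable
    for _ in range(n_iter):
        u = np.linalg.solve(A, f + lam * D.T @ (d - b))  # quadratic subproblem
        d = shrink(D @ u + b, mu / lam)                   # L1 subproblem
        b = b + D @ u - d                                 # Bregman update
    return u

# Noisy piecewise-constant signal: TV minimization should recover flat pieces
rng = np.random.default_rng(3)
clean = np.repeat([0.0, 1.0, 0.0], 40)
noisy = clean + 0.1 * rng.standard_normal(clean.size)
denoised = tv_denoise_1d(noisy, mu=0.2)
```

Because both subproblems are cheap (a fixed sparse linear solve and an element-wise shrink), the iteration is what makes Split Bregman faster than generic nonlinear optimizers on TV problems.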
Influence of absorption and scattering on the quantification of fluorescence diffuse optical tomography using normalized data
Reconstruction algorithms for imaging fluorescence in the near-infrared range usually normalize fluorescence light with respect to excitation light. Using this approach, we investigated the influence of absorption and scattering heterogeneities on quantification accuracy when assuming a homogeneous model, and explored possible reconstruction improvements from using a heterogeneous model. To do so, we created several computer-simulated phantoms: a homogeneous slab phantom (P1), slab phantoms including a region with a two- to six-fold increase in scattering (P2) and in absorption (P3), and an atlas-based mouse phantom that modeled different liver and lung scattering (P4). For P1, reconstruction with the wrong optical properties yielded quantification errors that increased almost linearly with the scattering coefficient, while they were mostly negligible with respect to the absorption coefficient. This observation agreed with the theoretical results. Taking the quantification of a homogeneous phantom as a reference, the relative quantification errors obtained when wrongly assuming homogeneous media were in the range +41 to +94% (P2), 0.1 to −7% (P3), and −39 to +44% (P4). Using a heterogeneous model, the overall error ranged from −7 to +7%. In conclusion, this work demonstrates that assuming homogeneous media leads to noticeable quantification errors that can be reduced by adopting heterogeneous models. This study was supported by Ministerio de Ciencia e Innovación (FPI program, TEC2008-06715, and CENIT AMIT CEN-20101014), Comunidad de Madrid and European Regional Development Fund ARTEMIS S2009/DPI-1802, and EU-FP7 project FMTXCT-201792.
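The normalization the abstract refers to divides each fluorescence measurement by the excitation measurement at the same source-detector pair, cancelling unknown per-channel gains. The toy example below illustrates only that cancellation; the channel gains and the linear fluorescence response are made-up values, and in real tissue the residual effect of absorption/scattering heterogeneity (the paper's subject) does not cancel.

```python
import numpy as np

rng = np.random.default_rng(4)
n_pairs = 100

# Unknown per-channel factors (source strength, detector gain, coupling)
coupling = rng.uniform(0.5, 2.0, n_pairs)

# Ideal signals at each source-detector pair; the toy fluorescence response
# is a fixed fraction of the excitation signal
excitation_ideal = rng.uniform(1.0, 5.0, n_pairs)
fluo_ideal = 0.3 * excitation_ideal

# Both measurements pass through the same channel, so the gains cancel
meas_exc = coupling * excitation_ideal
meas_fluo = coupling * fluo_ideal
born_ratio = meas_fluo / meas_exc
```

In this idealized case the ratio is exactly 0.3 for every pair; with heterogeneous absorption and scattering, the excitation and fluorescence photons sample slightly different paths, which is why the homogeneous-model errors quantified above remain after normalization.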