
    A Time-Evolving 3D Method Dedicated to the Reconstruction of Solar Plumes and Results Using Extreme Ultra-Violet Data

    An important issue in the tomographic reconstruction of the solar poles is the relatively rapid evolution of the polar plumes. We demonstrate that it is possible to take this temporal evolution into account in the reconstruction. The difficulty of the problem comes from the fact that we want a 4D reconstruction (three spatial dimensions plus time) while we only have 3D data (2D images plus time). To overcome this difficulty, we introduce a model that describes polar plumes as stationary objects whose intensity varies homogeneously with time. This assumption can be physically justified if one accepts the stability of the magnetic structure. The model leads to a bilinear inverse problem, and we describe how to extend linear inversion methods to this kind of problem. Simulation studies demonstrate the reliability of our method. Results on SOHO/EIT data show that we are able to estimate the temporal evolution of polar plumes and thereby improve the reconstruction of the solar poles from a single vantage point. We expect further improvements from STEREO/EUVI data once the two probes are separated by about 60 degrees.
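The bilinear structure of this inverse problem suggests alternating linear inversions: fix the per-image gains and solve a linear system for the static emission map, then update the gains in closed form. Below is a minimal numpy sketch of that idea, under placeholder projection matrices `P[t]` and plain least-squares solves; it is not the authors' tomographic operator or regularization.

```python
# Minimal sketch (not the paper's implementation) of alternating updates for the
# bilinear model y_t ≈ gamma_t * (P_t @ x): a static emission map x whose intensity
# is scaled homogeneously over time by per-image gains gamma_t.
import numpy as np

rng = np.random.default_rng(0)
n_vox, n_pix, n_times = 200, 120, 10
P = [rng.random((n_pix, n_vox)) for _ in range(n_times)]   # placeholder projection matrices
x_true = rng.random(n_vox)
g_true = 1.0 + 0.5 * rng.random(n_times)                   # "homogeneous" temporal gains
y = [g_true[t] * P[t] @ x_true for t in range(n_times)]    # synthetic noiseless data

x = np.ones(n_vox)                                          # initial emission estimate
for _ in range(50):                                         # alternating least squares
    # gains given x: closed-form 1D least squares per image
    g = np.array([y[t] @ (P[t] @ x) / ((P[t] @ x) @ (P[t] @ x)) for t in range(n_times)])
    # emission given gains: stacked linear least squares (a regularized solve in practice)
    A = np.vstack([g[t] * P[t] for t in range(n_times)])
    x, *_ = np.linalg.lstsq(A, np.concatenate(y), rcond=None)
```

In a solar-rotation setting, each `P[t]` would encode a different line-of-sight geometry, which is what makes the static map recoverable from a single vantage point.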

    Identifying Humans by the Shape of Their Heartbeats and Materials by Their X-Ray Scattering Profiles

    Security needs at access control points take the form of human identification and/or material identification. The field of biometrics deals with the problem of identifying individuals based on signals measured from them. One approach to material identification involves matching x-ray scattering profiles against a database of known materials. Classical biometric traits such as fingerprints, facial images, speech, iris and retinal scans are plagued by potential circumvention: they could be copied and later used by an impostor. To address this problem, other bodily traits such as the electrical signals acquired from the brain (electroencephalogram) or the heart (electrocardiogram) and the mechanical signals acquired from the heart (heart sounds, laser Doppler vibrometry measurements of the carotid pulse) have been investigated. These signals depend on the physiology of the body and require the individual to be alive and present during acquisition, potentially overcoming circumvention. We investigate the use of the electrocardiogram (ECG) and the carotid laser Doppler vibrometry (LDV) signal, both individually and in unison, for biometric identity recognition. A parametric modeling approach to system design is employed, where the system parameters are estimated from training data. The estimated model is then validated using testing data. A typical identity recognition system can operate in either the authentication (verification) or identification mode. The performance of the biometric identity recognition systems is evaluated using receiver operating characteristic (ROC) or detection error tradeoff (DET) curves in the authentication mode, and cumulative match characteristic (CMC) curves in the identification mode. The performance of the ECG- and LDV-based identity recognition systems is comparable, but worse than that of classical biometric systems. Authentication performance below 1% equal error rate (EER) can be attained when the training and testing data are obtained from a single measurement session. When the training and testing data are obtained from different measurement sessions, allowing for potential short-term or long-term changes in the physiology, the authentication EER degrades to about 6 to 7%. Leveraging both the electrical (ECG) and mechanical (LDV) aspects of the heart, we obtain a performance gain of over 50% relative to each individual ECG-based or LDV-based identity recognition system, bringing us closer to the performance of classical biometrics with the added advantage of resistance to circumvention. We also consider the problem of designing combined x-ray attenuation and scatter systems and the algorithms to reconstruct images from them. As is typical within a computational imaging framework, we tackle the problem through joint system and algorithm design. Accurate modeling of the attenuation of incident and scattered photons within a scatter imaging setup ultimately leads to more accurate estimates of the scatter densities of an illuminated object; these scatter densities can then be used for material classification. In x-ray scatter imaging, tomographic measurements of the forward scatter distribution are used to infer scatter densities within a volume, and a mask placed between the object and the detector array provides information about scatter angles. An efficient computational implementation of the forward and backward models facilitates iterative algorithms based upon a Poisson log-likelihood.
The design of the scatter imaging system influences the algorithmic choices we make; in turn, the need for efficient algorithms guides the system design. We begin by analyzing an x-ray scatter system fitted with a fanbeam source distribution and flat-panel energy-integrating detectors, and develop efficient algorithms for reconstructing object scatter densities from scatter measurements made on this system. Building on the fanbeam-source, energy-integrating flat-panel detection model, we develop a pencil-beam model and an energy-sensitive detection model. The scatter forward models and reconstruction algorithms are validated on simulated, Monte Carlo, and real data. We describe a prototype x-ray attenuation scanner, co-registered with the scatter system, which was built to provide complementary attenuation information for the scatter reconstruction, and present results of applying alternating minimization reconstruction algorithms to measurements from the scanner.
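The Poisson log-likelihood mentioned above admits classical multiplicative updates; the sketch below shows the ML-EM iteration for a generic nonnegative linear forward model. The operator `A` is a random placeholder rather than the dissertation's fanbeam or pencil-beam scatter model, and the alternating minimization algorithms used on the real systems may differ.

```python
# Minimal ML-EM sketch for measurements y ~ Poisson(A @ x), with A a placeholder
# nonnegative forward operator mapping scatter densities to detector counts.
import numpy as np

rng = np.random.default_rng(1)
n_det, n_vox = 300, 100
A = rng.random((n_det, n_vox))            # placeholder forward model (nonnegative)
x_true = rng.random(n_vox)
y = rng.poisson(A @ x_true)               # Poisson-distributed measurements

x = np.ones(n_vox)                        # strictly positive initialization
sens = A.T @ np.ones(n_det)               # sensitivity image A^T 1
for _ in range(100):                      # multiplicative ML-EM updates
    ratio = y / np.clip(A @ x, 1e-12, None)
    x *= (A.T @ ratio) / sens             # each update increases the Poisson log-likelihood
```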

    Uncertainty Quantification and Reduction in Cardiac Electrophysiological Imaging

    Cardiac electrophysiological (EP) imaging involves solving an inverse problem that infers cardiac electrical activity from body-surface electrocardiography data on a physical domain defined by the body torso. To avoid unreasonable solutions that may nevertheless fit the data, this inference is often guided by data-independent prior assumptions about different properties of the cardiac electrical sources as well as the physical domain. However, these prior assumptions may involve errors and uncertainties that affect the accuracy of the inference. For example, common prior assumptions on the source properties, such as fixed spatial and/or temporal smoothness or sparseness assumptions, may not match the true source properties under different conditions, leading to uncertainties in the inference. Furthermore, prior assumptions on the physical domain, such as the anatomy and tissue conductivity of different organs in the thorax model, represent an approximation of the physical domain, introducing errors into the inference. To determine the robustness of EP imaging systems for future clinical practice, it is important to identify these errors and uncertainties and assess their impact on the solution. This dissertation focuses on quantifying and reducing the impact on the EP imaging solution of uncertainties caused by prior assumptions/models on cardiac source properties as well as by anatomical modeling uncertainties. To assess the effect of fixed prior assumptions/models about cardiac source properties on the solution of EP imaging, we propose a novel yet simple Lp-norm regularization method for volumetric cardiac EP imaging. This study demonstrates the need for an adaptive prior model (rather than a fixed model) to constrain the complex, spatiotemporally changing properties of the cardiac sources. We then propose a multiple-model Bayesian approach to cardiac EP imaging that employs a continuous combination of prior models, each reflecting a specific spatial property of volumetric sources. The 3D source estimate is then obtained as a weighted combination of solutions across all models. By including a continuous combination of prior models, the proposed method reduces the chance of a mismatch between the prior models and the true source properties, which in turn enhances the robustness of the EP imaging solution. To quantify the impact of anatomical modeling uncertainties on the EP imaging solution, we propose a systematic statistical framework. Built on statistical shape modeling and the unscented transform, our method quantifies anatomical modeling uncertainties and establishes their relation to the EP imaging solution. Applied to anatomical models generated from different image resolutions and different segmentations, it demonstrates the robustness of the EP imaging solution to these variations in anatomical shape detail. We then propose a simplified anatomical model for the heart that incorporates only certain subject-specific anatomical parameters while discarding local shape details. Requiring fewer resources and less processing for successful EP imaging, this simplified model offers a clinically compatible anatomical modeling workflow for EP imaging systems. The different components of our proposed methods are validated through a comprehensive set of synthetic and real-data experiments, covering typical pathological conditions and diagnostic procedures such as myocardial infarction and pacing.
Overall, the methods presented in this dissertation for quantifying and reducing uncertainties in cardiac EP imaging enhance its robustness, helping to close the gap between EP imaging in research and its clinical application.
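As a concrete illustration of the Lp-norm regularization idea, the sketch below solves a generic linear EP-imaging-style problem by iteratively reweighted least squares (IRLS). The transfer matrix `H`, the choice of `p` and `lam`, and the solver details are assumptions for illustration, not the dissertation's formulation.

```python
# Minimal IRLS sketch for min_x ||H x - y||^2 + lam * sum_i |x_i|^p, with 1 <= p <= 2.
# H stands in for a torso transfer (lead-field) matrix; it is not a real BEM model.
import numpy as np

def lp_irls(H, y, lam=1e-2, p=1.2, n_iter=30, eps=1e-6):
    x = np.linalg.lstsq(H, y, rcond=None)[0]          # minimum-norm initialization
    HtH, Hty = H.T @ H, H.T @ y
    for _ in range(n_iter):
        w = (np.abs(x) + eps) ** (p - 2)              # IRLS weights for the Lp penalty
        x = np.linalg.solve(HtH + lam * (p / 2) * np.diag(w), Hty)
    return x

rng = np.random.default_rng(2)
H = rng.standard_normal((120, 300))                   # placeholder forward matrix
x_true = np.zeros(300)
x_true[rng.choice(300, 15, replace=False)] = 1.0      # sparse "source" for the demo
y = H @ x_true + 0.01 * rng.standard_normal(120)
x_hat = lp_irls(H, y)
```

Smaller values of p favor sparse, localized sources while p = 2 reproduces standard Tikhonov smoothing, which is exactly the kind of property mismatch an adaptive or multiple-model prior is meant to absorb.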

    Bayesian ECG reconstruction using denoising diffusion generative models

    In this work, we propose a denoising diffusion generative model (DDGM) trained on healthy electrocardiogram (ECG) data that focuses on ECG morphology and inter-lead dependence. Our results show that this generative model can successfully produce realistic ECG signals. Furthermore, we explore the application of recent advances in solving Bayesian linear inverse problems with DDGMs. This approach enables the development of several important clinical tools, including the calculation of corrected QT intervals (QTc), effective noise suppression of ECG signals, recovery of missing ECG leads, and identification of anomalous readings, enabling significant advances in cardiac health monitoring and diagnosis.
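To give a flavor of how a trained DDGM can address a linear inverse task such as missing-lead recovery, the sketch below runs a standard DDPM reverse loop and enforces data consistency on the observed leads at every step (a RePaint-style masking strategy). The noise-prediction network `eps_theta` is a dummy placeholder for the trained model, and the paper's actual Bayesian conditioning scheme may differ from this simplification.

```python
# Conceptual sketch: recover missing ECG leads by treating them as unobserved entries
# of a linear (masking) inverse problem and sampling with a diffusion prior.
import numpy as np

T = 200
betas = np.linspace(1e-4, 0.02, T)
alphas = 1.0 - betas
abar = np.cumprod(alphas)                               # cumulative product of alphas

def eps_theta(x_t, t):                                  # placeholder for the trained network
    return np.zeros_like(x_t)

rng = np.random.default_rng(3)
n_leads, n_samples = 12, 500
y = rng.standard_normal((n_leads, n_samples))           # "observed" ECG (placeholder values)
mask = np.ones((n_leads, 1))
mask[[2, 7]] = 0.0                                      # leads 2 and 7 are missing

x = rng.standard_normal((n_leads, n_samples))           # start the reverse process from noise
for t in range(T - 1, -1, -1):
    z = rng.standard_normal(x.shape) if t > 0 else 0.0
    # reverse DDPM step using the (placeholder) noise prediction
    x = (x - betas[t] / np.sqrt(1 - abar[t]) * eps_theta(x, t)) / np.sqrt(alphas[t])
    x += np.sqrt(betas[t]) * z
    # data consistency: keep the observed leads, re-noised to the current level
    if t > 0:
        y_t = np.sqrt(abar[t - 1]) * y + np.sqrt(1 - abar[t - 1]) * rng.standard_normal(y.shape)
    else:
        y_t = y
    x = mask * y_t + (1 - mask) * x                     # observed leads fixed, missing leads sampled
```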

    Deep Learning in Cardiology

    The medical field is generating large amounts of data that physicians are unable to decipher and use efficiently. Moreover, rule-based expert systems are inefficient at solving complicated medical tasks or creating insights from big data. Deep learning has emerged as a more accurate and effective technology for a wide range of medical problems such as diagnosis, prediction and intervention. Deep learning is a representation learning method that consists of layers that transform the data non-linearly, thus revealing hierarchical relationships and structures. In this review we survey deep learning application papers that use structured data, signal and imaging modalities from cardiology. We discuss the advantages and limitations of applying deep learning in cardiology that also apply to medicine in general, while proposing certain directions as the most viable for clinical use.
Comment: 27 pages, 2 figures, 10 tables
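For orientation, the snippet below sketches a tiny 1D convolutional network for multi-lead ECG classification, showing the stacked non-linear layers the review refers to. The architecture, layer sizes, and class count are arbitrary illustrations, not taken from any particular surveyed paper.

```python
# Illustrative 1D CNN for ECG signals: convolutions learn local waveform features,
# pooling builds the hierarchical representation, a linear head produces class scores.
import torch
import torch.nn as nn

class TinyECGNet(nn.Module):
    def __init__(self, n_leads=12, n_classes=5):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(n_leads, 32, kernel_size=7, padding=3), nn.ReLU(), nn.MaxPool1d(4),
            nn.Conv1d(32, 64, kernel_size=5, padding=2), nn.ReLU(), nn.MaxPool1d(4),
            nn.AdaptiveAvgPool1d(1),
        )
        self.classifier = nn.Linear(64, n_classes)

    def forward(self, x):                               # x: (batch, n_leads, n_samples)
        return self.classifier(self.features(x).squeeze(-1))

logits = TinyECGNet()(torch.randn(8, 12, 1000))         # e.g. a batch of 12-lead recordings
```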

    Learning with Limited Labeled Data in Biomedical Domain by Disentanglement and Semi-Supervised Learning

    In this dissertation, we are interested in improving the generalization of deep neural networks for biomedical data (e.g., electrocardiogram signals, x-ray images, etc.). Although deep neural networks have attained state-of-the-art performance and, thus, deployment across a variety of domains, similar performance in the clinical setting remains challenging due to their inability to generalize to unseen data (e.g., a new patient cohort). We address this challenge of generalization in deep neural networks from two perspectives: 1) learning disentangled representations from the deep network, and 2) developing efficient semi-supervised learning (SSL) algorithms using the deep network. In the former, we are interested in designing specific architectures and objective functions to learn representations in which variations in the data are well separated, i.e., disentangled. In the latter, we are interested in designing regularizers that encourage the behavior of the underlying neural function toward a common inductive bias, to avoid over-fitting the function to small labeled data. Our end goal is to improve the generalization of the deep network for the diagnostic model in both of these approaches. For disentangled representations, this translates to appropriately learning latent representations from the data, capturing the observed input's underlying explanatory factors in an independent and interpretable way. With the data's explanatory factors well separated, such a disentangled latent space can be useful for a large variety of tasks and domains within the data distribution even with a small amount of labeled data, thus improving generalization. For efficient semi-supervised algorithms, this translates to utilizing a large volume of unlabeled data to assist learning from the limited labeled dataset, a situation commonly encountered in the biomedical domain. Drawing ideas from different areas within deep learning, such as representation learning (e.g., autoencoders), variational inference (e.g., variational autoencoders), Bayesian nonparametrics (e.g., the beta-Bernoulli process), learning theory (e.g., analytical learning theory), and function smoothing (Lipschitz smoothness), we propose several learning algorithms to improve generalization on the associated tasks. We test our algorithms on real-world clinical data and show that our approach yields significant improvements over existing methods. Moreover, we demonstrate the efficacy of the proposed models on benchmark and simulated data to understand different aspects of the proposed learning methods. We conclude by identifying some of the limitations of the proposed methods, areas for further improvement, and broader future directions for the successful adoption of AI models in the clinical environment.
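As one concrete example of the regularizer family mentioned above, the sketch below combines a supervised cross-entropy term on the small labeled set with a consistency penalty that pushes the network toward similar predictions for an unlabeled input and a perturbed copy of it. This is a generic SSL illustration under an assumed Gaussian perturbation, not the dissertation's specific algorithm.

```python
# Minimal semi-supervised loss: supervised term on labeled data plus a consistency
# (smoothness) term on unlabeled data, encouraging a common inductive bias.
import torch
import torch.nn.functional as F

def ssl_loss(model, x_lab, y_lab, x_unlab, lam=1.0, noise_std=0.05):
    sup = F.cross_entropy(model(x_lab), y_lab)                    # labeled term
    with torch.no_grad():
        p_clean = F.softmax(model(x_unlab), dim=1)                # "teacher" prediction
    x_pert = x_unlab + noise_std * torch.randn_like(x_unlab)      # perturbed copy
    log_p_pert = F.log_softmax(model(x_pert), dim=1)
    cons = F.kl_div(log_p_pert, p_clean, reduction="batchmean")   # consistency term
    return sup + lam * cons

# usage with any classifier mapping inputs to class logits:
model = torch.nn.Sequential(torch.nn.Linear(50, 64), torch.nn.ReLU(), torch.nn.Linear(64, 4))
loss = ssl_loss(model, torch.randn(16, 50), torch.randint(0, 4, (16,)), torch.randn(64, 50))
```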

    Tracking the Position of the Heart From Body Surface Potential Maps and Electrograms

    The accurate generation of forward models is an important element of research in electrocardiography in general, and in particular of techniques for ElectroCardioGraphic Imaging (ECGI). Recent research efforts have been devoted to the reliable and fast generation of forward models. However, these models can suffer from several sources of inaccuracy, which in turn can lead to considerable error in the forward simulation of body surface potentials and, even more so, in ECGI solutions. In particular, the accurate localization of the heart within the torso is sensitive to movements due to respiration and changes in the position of the subject, a problem that cannot be resolved with better imaging and segmentation alone. Here, we propose an algorithm to localize the position of the heart using electrocardiographic recordings on both the heart and torso surfaces over a sequence of cardiac cycles. We leverage the dependence of electrocardiographic forward models on the underlying geometry to parameterize the forward model with respect to the position (translation) and orientation of the heart, and then estimate these parameters from heart and body surface potentials in a numerical inverse problem. We show that this approach is capable of localizing the position of the heart in synthetic experiments and that it reduces the modeling error in the forward models and the resulting inverse solutions in canine experiments. Our results show a consistent decrease in the error of both simulated body surface potentials and inverse reconstructed heart surface potentials after re-localizing the heart based on our estimated geometric correction. These results suggest that the method is capable of improving electrocardiographic models used in research settings, and they lay the groundwork for extending the model presented here to a purely inverse setting, where the heart potentials are unknown.
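The geometric correction described above can be pictured as a small optimization over a rigid transform: three translations and three rotation angles are adjusted so that forward-modeled body-surface potentials best match the measurements. In the sketch below, `build_forward_matrix` is a crude 1/r placeholder for a real boundary-element forward model, and the synthetic setup is purely illustrative.

```python
# Minimal sketch: estimate the heart's translation and orientation by minimizing the
# misfit between measured and forward-modeled body-surface potentials.
import numpy as np
from scipy.optimize import minimize
from scipy.spatial.transform import Rotation

rng = np.random.default_rng(4)
heart_nodes = rng.standard_normal((100, 3))             # placeholder heart-surface nodes
torso_nodes = 10 * rng.standard_normal((200, 3))        # placeholder torso electrode positions
egm = rng.standard_normal(100)                          # "measured" heart-surface potentials

def build_forward_matrix(heart_xyz):
    # crude 1/r transfer matrix standing in for a boundary-element forward model
    d = np.linalg.norm(torso_nodes[:, None, :] - heart_xyz[None, :, :], axis=2)
    return 1.0 / (d + 1.0)

def transform(params, xyz):
    t, angles = params[:3], params[3:]
    return Rotation.from_euler("xyz", angles).apply(xyz) + t

true_params = np.array([0.5, -0.3, 0.2, 0.05, -0.02, 0.04])
bsp = build_forward_matrix(transform(true_params, heart_nodes)) @ egm   # synthetic body-surface data

def misfit(params):
    return np.sum((build_forward_matrix(transform(params, heart_nodes)) @ egm - bsp) ** 2)

est = minimize(misfit, np.zeros(6), method="Nelder-Mead")   # recovered translation + rotation
```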

    Dynamic Analysis of X-ray Angiography for Image-Guided Coronary Interventions

    Percutaneous coronary intervention (PCI) is a minimally-invasive procedure for treating patients with coronary artery disease. PCI is typically performed with image guidance using X-ray angiograms (XA) in which coronary arter

    Coronary Artery Segmentation and Motion Modelling

    Conventional coronary artery bypass surgery requires an invasive sternotomy and the use of a cardiopulmonary bypass, which leads to a long recovery period and a high risk of infection. Totally endoscopic coronary artery bypass (TECAB) surgery based on image-guided robotic surgical approaches has been developed to allow clinicians to conduct the bypass surgery off-pump with only three pinhole incisions in the chest cavity, through which two robotic arms and one stereo endoscopic camera are inserted. However, the restricted field of view of the stereo endoscopic images leads to possible vessel misidentification and coronary artery mis-localization, resulting in conversion rates of 20-30% from TECAB surgery to the conventional approach. We have constructed patient-specific 3D + time coronary artery and left ventricle motion models from preoperative 4D Computed Tomography Angiography (CTA) scans. By temporally and spatially aligning this model with the intraoperative endoscopic views of the patient's beating heart, this work assists the surgeon in identifying and locating the correct coronaries during TECAB procedures, and thus has the prospect of reducing the conversion rate from TECAB to conventional coronary bypass procedures. This thesis mainly focuses on designing segmentation and motion tracking methods for the coronary arteries in order to build pre-operative patient-specific motion models. Various vessel centreline extraction and lumen segmentation algorithms are presented, including intensity-based approaches, a geometric model-matching method and a morphology-based method. A probabilistic atlas of the coronary arteries is formed from a group of subjects to facilitate the vascular segmentation and registration procedures. Non-rigid registration frameworks based on a free-form deformation model and on multi-level multi-channel large deformation diffeomorphic metric mapping are proposed to track the coronary motion. The methods are applied to 4D CTA images acquired from various groups of patients and quantitatively evaluated.
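As a small illustration of the intensity- and morphology-based end of the segmentation methods listed above, the sketch below enhances tubular structures with a Frangi vesselness filter, thresholds the response into a vessel mask, and skeletonizes the mask into a rough centreline. The input image, filter scales, and threshold are placeholder choices, not the thesis's actual pipelines.

```python
# Minimal vesselness-based centreline sketch on a placeholder 2D angiographic image.
import numpy as np
from skimage.filters import frangi
from skimage.morphology import skeletonize

angiogram = np.random.rand(256, 256)                    # placeholder image (use real XA/CTA data)
vesselness = frangi(angiogram, sigmas=range(1, 6))      # enhance tubular (vessel-like) structures
vessel_mask = vesselness > 0.5 * vesselness.max()       # crude, data-dependent threshold
centreline = skeletonize(vessel_mask)                   # one-pixel-wide centreline estimate
```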

    Coronary motion modelling for CTA to X-ray angiography registration
