
    Advances in computational modelling for personalised medicine after myocardial infarction

    Myocardial infarction (MI) is a leading cause of premature morbidity and mortality worldwide. Determining which patients will experience heart failure and sudden cardiac death after an acute MI is notoriously difficult for clinicians. The extent of heart damage after an acute MI is informed by cardiac imaging, typically using echocardiography or, sometimes, cardiac magnetic resonance (CMR). These scans provide complex data sets that are only partially exploited by clinicians in daily practice, implying potential for improved risk assessment. Computational modelling of left ventricular (LV) function can bridge the gap towards personalised medicine using cardiac imaging in post-MI patients. Several novel biomechanical parameters have theoretical prognostic value and may be useful to reflect the biomechanical effects of novel preventive therapy for adverse remodelling post-MI. These parameters include myocardial contractility (regional and global), stiffness and stress. Further, the parameters can be delineated spatially to correspond with infarct pathology and the remote zone. While these parameters hold promise, there are challenges for translating MI modelling into clinical practice, including model uncertainty, validation and verification, as well as time-efficient processing. More research is needed to (1) simplify CMR imaging in post-MI patients while preserving diagnostic accuracy and patient tolerance, and (2) assess and validate novel biomechanical parameters against established prognostic biomarkers, such as LV ejection fraction and infarct size. Accessible software packages with minimal user interaction are also needed. Translating benefits to patients will be achieved through a multidisciplinary approach including clinicians, mathematicians, statisticians and industry partners
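
    As a rough, generic illustration of the kind of biomechanical parameter mentioned above (not the personalised finite-element models the review discusses), the thin-walled Laplace approximation estimates global LV wall stress from cavity pressure, radius and wall thickness; the numerical values below are assumed, representative examples only.

```python
def laplace_wall_stress(pressure_kpa: float, cavity_radius_mm: float,
                        wall_thickness_mm: float) -> float:
    """Approximate mid-wall stress (kPa) of a thin-walled spherical ventricle."""
    return pressure_kpa * cavity_radius_mm / (2.0 * wall_thickness_mm)

# Assumed, representative end-systolic values: ~16 kPa (~120 mmHg) pressure,
# 25 mm cavity radius, 10 mm wall thickness.
print(laplace_wall_stress(16.0, 25.0, 10.0))  # -> 20.0 kPa
```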

    From Fully-Supervised Single-Task to Semi-Supervised Multi-Task Deep Learning Architectures for Segmentation in Medical Imaging Applications

    Medical imaging is routinely performed in clinics worldwide for the diagnosis and treatment of numerous medical conditions in children and adults. With modern imaging modalities, radiologists can visualize both the structure of the body and the tissues within it. However, analyzing these high-dimensional (2D/3D/4D) images demands a significant amount of time and effort from radiologists. Hence, there is an ever-growing need for medical image computing tools that extract relevant information from the image data and help radiologists work efficiently. Image analysis based on machine learning has the potential to improve the entire medical imaging pipeline, providing support for clinical decision-making and computer-aided diagnosis. Deep learning approaches have shown significant performance improvements on challenging image analysis tasks such as classification, detection, registration, and segmentation in medical imaging applications. While deep learning has shown its potential in a variety of medical image analysis problems, including segmentation and motion estimation, generalizability remains an unsolved problem, and many of these successes are achieved at the cost of large pools of annotated data. For most practical applications, such copious datasets can be very difficult, often impossible, to obtain. Annotation is tedious and time-consuming, and this cost is further amplified when annotation must be done by a clinical expert, as in medical imaging applications. Additionally, the use of deep learning in real-world clinical settings is still limited by the lack of reliability caused by the limited prediction capabilities of some deep learning models. Moreover, when using a CNN in an automated image analysis pipeline, it is critical to understand which segmentation results are problematic and require further manual examination. To this end, uncertainty calibration in a semi-supervised setting for medical image segmentation is still rarely reported. This thesis focuses on developing and evaluating optimized machine learning models for a variety of medical imaging applications, ranging from fully-supervised, single-task learning to semi-supervised, multi-task learning that makes efficient use of annotated training data. The contributions of this dissertation are as follows: (1) developing fully-supervised, single-task transfer learning for surgical instrument segmentation from laparoscopic images; (2) utilizing supervised, single-task transfer learning to segment and digitally remove surgical instruments from endoscopic/laparoscopic videos, allowing visualization of the anatomy obscured by the tool; the tool removal algorithms use a tool segmentation mask and either instrument-free reference frames or previous instrument-containing frames to fill in (inpaint) the instrument segmentation mask; (3) developing fully-supervised, single-task learning via efficient weight pruning and learned group convolution for accurate left ventricle (LV), right ventricle (RV) blood pool and myocardium localization and segmentation from 4D cine cardiac MR images; (4) demonstrating the use of our fully-supervised, memory-efficient model to generate dynamic patient-specific right ventricle (RV) models from cine cardiac MRI datasets via an unsupervised, learning-based deformable registration field; (5) integrating Monte Carlo dropout into our fully-supervised, memory-efficient model for inherent uncertainty estimation, with the overall goal of estimating the uncertainty and error associated with the obtained segmentation, as a means to flag regions with less than optimal segmentation results; (6) developing semi-supervised, single-task learning via self-training (through meta pseudo-labeling), in which a Teacher network instructs the Student network by generating pseudo-labels from unlabeled input data; (7) proposing largely-unsupervised, multi-task learning that demonstrates the power of a simple combination of a disentanglement block, a variational autoencoder (VAE), a generative adversarial network (GAN), and a conditioning layer-based reconstructor for performing two of the most critical tasks in medical imaging: segmentation of cardiac structures and reconstruction of cine cardiac MR images; and (8) demonstrating the use of 3D semi-supervised, multi-task learning for jointly learning multiple tasks in a single backbone module: uncertainty estimation, geometric shape generation, and segmentation of the left atrial cavity from 3D gadolinium-enhanced magnetic resonance (GE-MR) images. In summary, the contributions of this work demonstrate the adaptation and use of deep learning architectures featuring different levels of supervision to build a variety of image segmentation tools and techniques that can be used across a wide spectrum of medical image computing applications, facilitating and promoting widespread computer-integrated diagnosis and therapy
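
    As a hedged sketch of the Monte Carlo dropout idea behind contribution (5), the fragment below keeps dropout layers stochastic at inference time and uses the spread of repeated forward passes as a per-pixel uncertainty map; the segmentation model, tensor shapes and sample count are placeholders, not the thesis's memory-efficient architecture.

```python
import torch
import torch.nn as nn

def enable_mc_dropout(model: nn.Module) -> None:
    """Keep dropout layers stochastic while the rest of the model stays in eval mode."""
    for m in model.modules():
        if isinstance(m, (nn.Dropout, nn.Dropout2d, nn.Dropout3d)):
            m.train()

@torch.no_grad()
def mc_dropout_predict(model: nn.Module, image: torch.Tensor, n_samples: int = 20):
    """Return the mean softmax prediction and a per-pixel predictive entropy map."""
    model.eval()
    enable_mc_dropout(model)
    probs = torch.stack([torch.softmax(model(image), dim=1) for _ in range(n_samples)])
    mean_probs = probs.mean(dim=0)                                     # (B, C, H, W)
    entropy = -(mean_probs * torch.log(mean_probs + 1e-8)).sum(dim=1)  # (B, H, W)
    return mean_probs, entropy
```

    Pixels with high entropy can then be flagged for manual review, which is the use case described above.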

    Modelling mitral valvular dynamics–current trend and future directions

    Dysfunction of the mitral valve causes morbidity and premature mortality and remains a leading medical problem worldwide. Computational modelling aims to understand the biomechanics of the human mitral valve and could lead to the development of new treatments and improved prevention and diagnosis of mitral valve diseases. Compared with the aortic valve, the mitral valve has been much less studied owing to its highly complex structure and strong interaction with the blood flow and the ventricles. However, interest in mitral valve modelling is growing, and its sophistication is increasing with advances in computational technology and imaging tools. This review summarises the state of the art in mitral valve modelling, including static and dynamic models, models with fluid-structure interaction, and models with left ventricle interaction. Challenges and future directions are also discussed

    Respiratory-induced organ motion compensation for MRgHIFU

    Summary: High Intensity Focused Ultrasound is an emerging non-invasive technology for the precise thermal ablation of pathological tissue deep within the body. The fitful, respiratory-induced motion of abdominal organs, such as the liver, renders targeting challenging. The work in hand describes methods for imaging, modelling and managing respiratory-induced organ motion. The main objective is to enable 3D motion prediction of liver tumours for treatment with Magnetic Resonance guided High Intensity Focused Ultrasound (MRgHIFU). To model and predict respiratory motion, the liver motion is first observed in 3D space. Fast acquired 2D magnetic resonance images are retrospectively reconstructed into time-resolved volumes, hence called 4DMRI (3D + time). From these volumes, dense deformation fields describing the motion from time-step to time-step are extracted using an intensity-based non-rigid registration algorithm. 4DMRI sequences of 20 subjects, providing long-term recordings of the variability in liver motion under free breathing, serve as the basis for this study. Based on the obtained motion data, three main types of models were investigated and evaluated in clinically relevant scenarios. In particular, subject-specific motion models, inter-subject population-based motion models and the combination of both are compared in comprehensive studies. The analysis of the prediction experiments showed that statistical models based on Principal Component Analysis are well suited to describe the motion of a single subject as well as of a population of different and unobserved subjects. To enable target prediction, the respiratory state of the respective organ is tracked in near-real-time and a temporal prediction of its future position is estimated. The time span provided by the prediction is used to calculate the new target position and to readjust the treatment focus. In addition, novel methods for faster acquisition of subject-specific 3D data based on a manifold learner are presented and compared with the state-of-the-art 4DMRI method. The developed methods provide motion compensation techniques for the non-invasive and radiation-free treatment of pathological tissue in moving abdominal organs with MRgHIFU
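
    A minimal sketch of the PCA-based, subject-specific motion model idea described above; the array sizes, stand-in random data and the choice of five components are illustrative assumptions rather than the thesis's actual configuration.

```python
import numpy as np
from sklearn.decomposition import PCA

# Stand-in for the registration output: one dense displacement field per time step.
n_t, nx, ny, nz = 200, 32, 32, 16
deformation_fields = np.random.randn(n_t, nx, ny, nz, 3)

X = deformation_fields.reshape(n_t, -1)      # one flattened field per time step
pca = PCA(n_components=5)
scores = pca.fit_transform(X)                # low-dimensional respiratory coefficients

# A future motion state is synthesised from predicted coefficients, e.g. produced
# by a temporal predictor that looks ahead by the system latency.
predicted_scores = scores[-1:]               # here simply the last observed state
predicted_field = pca.inverse_transform(predicted_scores).reshape(nx, ny, nz, 3)
```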

    Real-time myocardial landmark tracking for MRI-guided cardiac radio-ablation using Gaussian Processes

    The high speed of cardiorespiratory motion introduces a unique challenge for cardiac stereotactic radio-ablation (STAR) treatments with the MR-linac. Such treatments require tracking myocardial landmarks with a maximum latency of 100 ms, which includes the acquisition of the required data. The aim of this study is to present a new method for tracking myocardial landmarks from a few readouts of MRI data, thereby achieving a latency sufficient for STAR treatments. We present a tracking framework that requires only a few readouts of k-space data as input, which can be acquired at least an order of magnitude faster than MR images. Combined with the real-time tracking speed of a probabilistic machine learning framework called Gaussian Processes, this allows myocardial landmarks to be tracked with a sufficiently low latency for cardiac STAR guidance, including both the acquisition of the required data and the tracking inference. The framework is demonstrated in 2D on a motion phantom, and in vivo on volunteers and a ventricular tachycardia (arrhythmia) patient. Moreover, the feasibility of an extension to 3D was demonstrated by in silico 3D experiments with a digital motion phantom. The framework was compared with template matching (a reference, image-based method) and with linear regression methods. Results indicate an order of magnitude lower total latency (<10 ms) for the proposed framework in comparison with the alternative methods. The root-mean-square distances and mean end-point distance relative to the reference tracking method were less than 0.8 mm for all experiments, showing excellent (sub-voxel) agreement. The high accuracy, in combination with a total latency of less than 10 ms including data acquisition and processing, makes the proposed method a suitable candidate for tracking during STAR treatments
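
    A hedged sketch of the central regression step, mapping a handful of k-space readout samples to landmark coordinates with Gaussian Process regression; the kernel, feature layout and calibration data below are assumptions, not the authors' implementation.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

# Calibration data: real and imaginary parts of one 256-sample readout per time
# point, paired with the landmark position from a slower reference method.
n_train, n_features = 300, 2 * 256
X_train = np.random.randn(n_train, n_features)   # stand-in k-space readouts
y_train = np.random.randn(n_train, 2)            # stand-in landmark (x, y) in mm

gp = GaussianProcessRegressor(kernel=RBF(length_scale=10.0) + WhiteKernel(),
                              normalize_y=True)
gp.fit(X_train, y_train)

# At treatment time, a single new readout is mapped to a landmark position with a
# predictive uncertainty; inference reduces to small matrix products, which is
# what keeps the per-estimate latency in the millisecond range.
x_new = np.random.randn(1, n_features)
pos_mean, pos_std = gp.predict(x_new, return_std=True)
```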

    ICoNIK: Generating Respiratory-Resolved Abdominal MR Reconstructions Using Neural Implicit Representations in k-Space

    Motion-resolved reconstruction for abdominal magnetic resonance imaging (MRI) remains a challenge due to the trade-off between residual motion blurring caused by discretized motion states and undersampling artefacts. In this work, we propose to generate blurring-free motion-resolved abdominal reconstructions by learning a neural implicit representation directly in k-space (NIK). Using measured sampling points and a data-derived respiratory navigator signal, we train a network to generate continuous signal values. To aid the regularization of sparsely sampled regions, we introduce an additional informed correction layer (ICo), which leverages information from neighboring regions to correct NIK's prediction. Our proposed generative reconstruction methods, NIK and ICoNIK, outperform standard motion-resolved reconstruction techniques and provide a promising solution to address motion artefacts in abdominal MRI
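
    A hedged sketch of the neural-implicit-in-k-space idea: a small MLP is trained to map continuous k-space coordinates plus a navigator value to a complex signal sample. The network size, input encoding and training loop below are assumptions and do not include the informed correction layer that distinguishes ICoNIK.

```python
import torch
import torch.nn as nn

class NIKMLP(nn.Module):
    """Small MLP mapping (kx, ky, navigator) to a (real, imaginary) signal value."""
    def __init__(self, in_dim: int = 3, hidden: int = 256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 2),
        )

    def forward(self, coords: torch.Tensor) -> torch.Tensor:
        return self.net(coords)

model = NIKMLP()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

# One training step on measured samples: continuous coordinates -> measured values.
coords = torch.rand(1024, 3) * 2 - 1   # normalised (kx, ky, navigator), stand-in
target = torch.randn(1024, 2)          # stand-in measured complex samples
loss = nn.functional.mse_loss(model(coords), target)
optimizer.zero_grad()
loss.backward()
optimizer.step()

# After training, the network can be queried at arbitrary, respiratory-resolved
# k-space locations to fill a fully sampled grid for standard reconstruction.
```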
