
    A Novel Compressed Sensing Method for Magnetic Resonance Imaging: Exponential Wavelet Iterative Shrinkage-Thresholding Algorithm with Random Shift

    Aim. Accelerating magnetic resonance imaging (MRI) scanning can help improve hospital throughput, and patients benefit from shorter waiting times. Task. In the last decade, various rapid MRI techniques based on compressed sensing (CS) were proposed. However, neither the computation time nor the reconstruction quality of traditional CS-MRI met the requirements of clinical use. Method. In this study, a novel method was proposed, named the exponential wavelet iterative shrinkage-thresholding algorithm with random shift (abbreviated as EWISTARS). It is composed of three successful components: (i) exponential wavelet transform, (ii) iterative shrinkage-thresholding algorithm, and (iii) random shift. Results. Experimental results validated that, compared to state-of-the-art approaches, EWISTARS obtained the least mean absolute error, the least mean-squared error, and the highest peak signal-to-noise ratio. Conclusion. EWISTARS is superior to state-of-the-art approaches.
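    The core of such CS-MRI reconstructions is an iterative shrinkage-thresholding loop that alternates a gradient step on the k-space data-fidelity term with soft-thresholding in a sparsifying transform domain. The sketch below shows a generic ISTA reconstruction for Cartesian undersampling; the sparsifying transform is left as a pair of user-supplied callables, standing in for (but not reproducing) the paper's exponential wavelet transform with random shift, and all parameter values are illustrative.

```python
import numpy as np

def ista_cs_mri(y, mask, Psi, Psi_inv, lam=0.01, step=1.0, n_iter=100):
    """Generic ISTA reconstruction for Cartesian CS-MRI (illustrative sketch).

    y       : undersampled k-space data (2D complex array, zeros where not sampled)
    mask    : binary sampling mask with the same shape as y
    Psi     : sparsifying transform, image -> coefficient array (e.g. a wavelet decomposition)
    Psi_inv : inverse transform, coefficient array -> image
    """
    x = np.fft.ifft2(y)                                   # zero-filled starting estimate
    for _ in range(n_iter):
        # gradient step on the data-fidelity term ||mask * F(x) - y||^2
        residual = mask * np.fft.fft2(x) - y
        x = x - step * np.fft.ifft2(mask * residual)
        # complex-safe soft-thresholding of the transform coefficients
        c = Psi(x)
        shrink = np.maximum(1.0 - lam * step / np.maximum(np.abs(c), 1e-12), 0.0)
        x = Psi_inv(c * shrink)
    return x
```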

    Reasoning with Uncertainty in Deep Learning for Safer Medical Image Computing

    Deep learning is now ubiquitous in the research field of medical image computing. As such technologies progress towards clinical translation, the question of safety becomes critical. Once deployed, machine learning systems unavoidably face situations where the correct decision or prediction is ambiguous. However, current methods disproportionately rely on deterministic algorithms and lack a mechanism to represent and manipulate uncertainty. In safety-critical applications such as medical imaging, reasoning under uncertainty is crucial for developing a reliable decision-making system. Probabilistic machine learning provides a natural framework to quantify the degree of uncertainty over different variables of interest, be it the prediction, the model parameters and structures, or the underlying data (images and labels). Probability distributions are used to represent all the uncertain unobserved quantities in a model and how they relate to the data, and probability theory is used as a language to compute and manipulate these distributions. In this thesis, we explore probabilistic modelling as a framework to integrate uncertainty information into deep learning models, and demonstrate its utility in various high-dimensional medical imaging applications. In the process, we make several fundamental enhancements to current methods. We categorise our contributions into three groups according to the types of uncertainties being modelled: (i) predictive, (ii) structural and (iii) human uncertainty. Firstly, we discuss the importance of quantifying predictive uncertainty and understanding its sources for developing a risk-averse and transparent medical image enhancement application. We demonstrate how a measure of predictive uncertainty can be used as a proxy for predictive accuracy in the absence of ground truths. Furthermore, assuming the structure of the model is flexible enough for the task, we introduce a way to decompose the predictive uncertainty into its orthogonal sources, i.e. aleatoric and parameter uncertainty. We show the potential utility of such decoupling in providing quantitative “explanations” of model performance. Secondly, we introduce our recent attempts at learning model structures directly from data. One work proposes a method based on variational inference to learn a posterior distribution over connectivity structures within a neural network architecture for multi-task learning, and shares some preliminary results in the MR-only radiotherapy planning application. Another work explores how the training algorithm of decision trees can be extended to grow the architecture of a neural network to adapt to the given availability of data and the complexity of the task. Lastly, we develop methods to model the “measurement noise” (e.g., biases and skill levels) of human annotators, and integrate this information into the learning process of the neural network classifier. In particular, we show that explicitly modelling the uncertainty involved in the annotation process not only leads to an improvement in robustness to label noise, but also yields useful insights into the patterns of errors that characterise individual experts.
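    A concrete form of the predictive-uncertainty decomposition mentioned above, for a heteroscedastic regression model sampled T times (e.g. with dropout kept active at test time), is sketched below. It uses the standard variance decomposition into an aleatoric term (mean predicted noise) and a parameter/epistemic term (variance of predicted means); it is a generic illustration rather than the exact formulation used in the thesis.

```python
import numpy as np

def decompose_predictive_uncertainty(means, variances):
    """Monte Carlo decomposition of predictive variance (illustrative sketch).

    means     : (T, ...) predicted means from T stochastic forward passes
    variances : (T, ...) predicted aleatoric variances from the same passes
    """
    aleatoric = variances.mean(axis=0)   # expected data noise, irreducible by more data
    parameter = means.var(axis=0)        # spread of means across model samples (epistemic)
    total = aleatoric + parameter        # approximate total predictive variance
    return total, aleatoric, parameter

# usage with hypothetical network outputs stacked over T stochastic passes:
# total, alea, param = decompose_predictive_uncertainty(mu_samples, var_samples)
```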

    Registration and analysis of dynamic magnetic resonance image series

    Cystic fibrosis (CF) is an autosomal-recessive inherited metabolic disorder that affects all organs in the human body. Patients affected with CF suffer particularly from chronic inflammation and obstruction of the airways. Through early detection, continuous monitoring methods, and new treatments, the life expectancy of patients with CF has increased drastically in the last decades. However, continuous monitoring of the disease progression is essential for a successful treatment. The current state-of-the-art methods for lung disease detection and monitoring are computed tomography (CT) and X-ray. These techniques are ill-suited for monitoring disease progression because of the ionizing radiation the patient is exposed to during the examination. Through the development of new magnetic resonance imaging (MRI) sequences and evaluation methods, MRI is able to measure physiological changes in the lungs. The process to create physiological maps, i.e. ventilation and perfusion maps, of the lungs using MRI can be split into three parts: MR acquisition, image registration, and image analysis. In this work, we present different methods for the image registration and image analysis parts. We developed a graph-based registration method for 2D dynamic MR image series of the lungs in order to overcome the problem of sliding motion at organ boundaries. Furthermore, we developed a human-inspired learning-based registration method. Here, the registration is defined as a sequence of local transformations. The sequence-based approach combines the advantage of dense transformation models, i.e. a large space of transformations, with the advantage of interpolating transformation models, i.e. smooth local transformations. We also developed a general registration framework called Autograd Image Registration Laboratory (AIRLab), which performs automatic calculation of the gradients for the registration process; this allows rapid prototyping and an easy implementation of existing registration algorithms. For the image analysis part, we developed a deep-learning approach based on gated recurrent units that is able to calculate ventilation maps with less than a third of the number of images required by the current method. Automatic defect detection in the estimated MRI ventilation and perfusion maps is essential in the clinical routine to automatically evaluate treatment progression. We developed a weakly supervised method that is able to infer a pixel-wise defect segmentation by using only a continuous global label during training. In this case, we directly use the lung clearance index (LCI) as a global weak label, without any further manual annotations. The LCI is a global measure that describes ventilation inhomogeneities of the lungs and is obtained by a multiple-breath washout test.
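    The idea behind autograd-driven registration, as in AIRLab, is that the similarity metric and regulariser are written as differentiable operations so that gradients with respect to the transformation parameters come for free from automatic differentiation. The PyTorch sketch below illustrates this for a dense 2D displacement field; it is a minimal illustration of the principle, not the AIRLab API, and the loss, regulariser and hyperparameters are placeholders.

```python
import torch
import torch.nn.functional as F

def register(fixed, moving, n_iter=200, lam=0.1, lr=0.05):
    """Minimal autograd-driven 2D registration sketch. fixed/moving: (H, W) float tensors."""
    H, W = fixed.shape
    disp = torch.zeros(1, H, W, 2, requires_grad=True)      # dense displacement field
    opt = torch.optim.Adam([disp], lr=lr)

    # identity sampling grid in normalized [-1, 1] coordinates, as expected by grid_sample
    ys, xs = torch.meshgrid(torch.linspace(-1, 1, H), torch.linspace(-1, 1, W), indexing="ij")
    identity = torch.stack((xs, ys), dim=-1).unsqueeze(0)

    for _ in range(n_iter):
        opt.zero_grad()
        warped = F.grid_sample(moving[None, None], identity + disp, align_corners=True)
        similarity = F.mse_loss(warped[0, 0], fixed)                 # image similarity term
        smoothness = (disp[:, 1:] - disp[:, :-1]).pow(2).mean() + \
                     (disp[:, :, 1:] - disp[:, :, :-1]).pow(2).mean()  # displacement regulariser
        loss = similarity + lam * smoothness
        loss.backward()                                              # gradients via autograd
        opt.step()
    return disp.detach()
```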

    Deep learning for fast and robust medical image reconstruction and analysis

    Medical imaging is an indispensable component of modern medical research as well as clinical practice. Nevertheless, imaging techniques such as magnetic resonance imaging (MRI) and computed tomography (CT) are costly and are less accessible to the majority of the world. To make medical devices more accessible, affordable and efficient, it is crucial to re-calibrate our current imaging paradigm for smarter imaging. In particular, as medical imaging techniques have highly structured forms in the way they acquire data, they provide us with an opportunity to optimise the imaging techniques holistically by leveraging data. The central theme of this thesis is to explore different opportunities where we can exploit data and deep learning to improve the way we extract information for better, faster and smarter imaging. This thesis explores three distinct problems. The first problem is the time-consuming nature of dynamic MR data acquisition and reconstruction. We propose deep learning methods for accelerated dynamic MR image reconstruction, resulting in up to a 10-fold reduction in imaging time. The second problem is the redundancy in our current imaging pipeline. Traditionally, the imaging pipeline has treated acquisition, reconstruction and analysis as separate steps. However, we argue that one can approach them holistically and optimise the entire pipeline jointly for a specific target goal. To this end, we propose deep learning approaches for obtaining high-fidelity cardiac MR segmentation directly from significantly undersampled data, greatly exceeding the undersampling limit for image reconstruction. The final part of this thesis tackles the problem of interpretability of deep learning algorithms. We propose attention models that can implicitly focus on salient regions in an image to improve accuracy for ultrasound scan plane detection and CT segmentation. More crucially, these models can provide explainability, which is a crucial stepping stone for the harmonisation of smart imaging and current clinical practice.
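    A building block that typically appears in such learned reconstruction pipelines is a data-consistency step, which re-imposes the acquired k-space samples on the network's intermediate estimate. The sketch below shows a hard data-consistency operation for single-coil Cartesian sampling; it is a generic illustration of the concept, not the specific cascade architecture proposed in the thesis.

```python
import torch

def data_consistency(x_rec, k_sampled, mask):
    """Hard data-consistency step (illustrative sketch, single-coil Cartesian case).

    x_rec     : (H, W) complex image estimate produced by a CNN
    k_sampled : (H, W) complex measured k-space, zeros at unsampled positions
    mask      : (H, W) binary sampling mask
    """
    k_rec = torch.fft.fft2(x_rec)
    k_dc = (1 - mask) * k_rec + mask * k_sampled   # keep measured samples untouched
    return torch.fft.ifft2(k_dc)
```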

    The anthropometric, environmental and genetic determinants of right ventricular structure and function

    BACKGROUND Measures of right ventricular (RV) structure and function have significant prognostic value. The right ventricle is currently assessed by global measures, or point surrogates, which are insensitive to regional and directional changes. We aim to create a high-resolution three-dimensional RV model to improve understanding of its structural and functional determinants. These may be of particular interest in pulmonary hypertension (PH), a condition in which RV function and outcome are strongly linked. PURPOSE To investigate the feasibility and additional benefit of applying three-dimensional phenotyping and contemporary statistical and genetic approaches to large patient populations. METHODS Healthy subjects and incident PH patients were prospectively recruited. Using a semi-automated atlas-based segmentation algorithm, 3D models characterising RV wall position and displacement were developed, validated and compared with anthropometric, physiological and genetic influences. Statistical techniques were adapted from other high-dimensional approaches to deal with the problems of multiple testing, contiguity, sparsity and computational burden. RESULTS 1527 healthy subjects successfully completed high-resolution 3D CMR and automated segmentation. Of these, 927 subjects underwent next-generation sequencing of the sarcomeric gene titin and 947 subjects completed genotyping of common variants for genome-wide association study. 405 incident PH patients were recruited, of whom 256 completed phenotyping. 3D modelling demonstrated significant reductions in sample size compared to two-dimensional approaches. 3D analysis demonstrated that RV basal-freewall function reflects global functional changes most accurately and that a similar region in PH patients provides stronger survival prediction than all anthropometric, haemodynamic and functional markers. Vascular stiffness, titin-truncating variants and common variants may also contribute to changes in RV structure and function. CONCLUSIONS High-resolution phenotyping coupled with computational analysis methods can improve insights into the determinants of RV structure and function in both healthy subjects and PH patients. Large, population-based approaches offer physiological insights relevant to clinical care in selected patient groups.

    Development of multifrequency magnetic resonance elastography for quantifying the biophysical properties of human brain tissue

    Magnetic resonance elastography (MRE) is an emerging technique for the quantitative imaging of the biophysical properties of soft tissues in humans. Following its successful clinical application in detecting and characterizing liver fibrosis, the scientific community is investigating the use of viscoelasticity as a biomarker for neurological diseases. Clinical implementation requires a thorough understanding of brain tissue mechanics in conjunction with innovative techniques in new research areas. Therefore, three in vivo studies were conducted to analyze the inherent stiffness dispersion of brain tissue over a wide frequency range, to investigate real-time MRE for monitoring the viscoelastic response of brain tissue during the Valsalva maneuver (VM), and to study mechanical alterations of small lesions in multiple sclerosis (MS). Ultra-low-frequency MRE with profile-based wave analysis was developed in 14 healthy subjects to determine large-scale brain stiffness, from pulsation-induced shear waves (1 Hz) through ultra-low frequencies (5 – 10 Hz) to the conventional range (20 – 40 Hz). Furthermore, multifrequency real-time MRE with a frame rate of 5.4 Hz was introduced to analyze stiffness and fluidity changes in response to respiratory challenges and cerebral autoregulation in 17 healthy subjects. 2D and 3D wavenumber-based stiffness reconstruction of the brain was established for conventional MRE in 12 MS patients. MS lesions were analyzed in terms of mechanical contrast with surrounding tissue in relation to white matter (WM) heterogeneity. We found superviscous properties of brain tissue at large scales, with a strong stiffness dispersion and a relatively high model-based viscosity of η = 6.6 ± 0.3 Pa∙s. The brain’s viscoelasticity was affected by perfusion changes during VM, which was associated with an increase in brain stiffness of 6.7 ± 4.1% (p<.001), whereas fluidity decreased by 2.1 ± 1.4% (p<.001). In the diseased brain, the analysis of 147 MS lesions revealed 46% of lesions to be softer and 54% of lesions to be stiffer than surrounding tissue. However, due to the heterogeneity of WM stiffness, the results provide no significant evidence for a systematic pattern of mechanical variations in MS. Nevertheless, the results may explain, for the first time, the gap between static ex vivo and dynamic in vivo methods. Fluidity-induced dispersion provides rich information on the structure of tissue compartments. Moreover, viscoelasticity is affected by perfusion during cerebral autoregulation and thus may be sensitive to intracranial pressure modulation. The overall heterogeneity of stiffness obscures changes in MS lesions, and MS may not exhibit sclerosis as a mechanical signature. In summary, this thesis contributes to the field of human brain MRE by presenting new methods developed in studies conducted in new research areas using state-of-the-art technology. The results advance clinical applications and open exciting possibilities for future in vivo studies of human brain tissue.
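    For orientation, stiffness maps in MRE are commonly recovered by inverting a wave equation at each driving frequency. The sketch below shows the textbook algebraic Helmholtz inversion for a single-frequency complex wave image, which assumes local homogeneity; the wavenumber-based 2D/3D reconstructions developed in the thesis are more elaborate, so this is only an illustration of the principle.

```python
import numpy as np

def helmholtz_inversion(u, freq, pixel_size, rho=1000.0):
    """Algebraic Helmholtz inversion (illustrative sketch).

    u          : 2D complex-valued wave image at a single vibration frequency
    freq       : vibration frequency in Hz
    pixel_size : in-plane pixel size in metres
    rho        : assumed tissue density in kg/m^3
    Returns |G*|, the magnitude of the complex shear modulus, in Pa.
    """
    omega = 2.0 * np.pi * freq
    # discrete Laplacian of the complex displacement field
    lap = (np.roll(u, 1, 0) + np.roll(u, -1, 0) +
           np.roll(u, 1, 1) + np.roll(u, -1, 1) - 4.0 * u) / pixel_size**2
    G = -rho * omega**2 * u / (lap + 1e-12)
    return np.abs(G)
```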

    An analysis of human movement accelerometery data for stroke rehabilitation assessment

    Human Activity Recognition (HAR) is concerned with the automated inference of what a person is doing at any given time. Recently, small unobtrusive wrist-worn accelerometer sensors have become affordable. Since these sensors are worn by the user, data can be collected, and inference performed, no matter where the user may be. This makes for a more flexible activity recognition method compared to other modalities such as in-home video analysis or lab-based observation. This thesis is concerned both with recognizing subjects' activities and with assessing recovery levels from movement-related disorders such as stroke. In order to perform activity recognition or to assess the degree to which a subject is affected by a movement-related disease (such as stroke), we need to create predictive models. These models output either the inferred activity (e.g. running or walking) in a classification model, or the inferred disease recovery level using either classification or regression (e.g. the inferred Chedoke Arm and Hand Activity Inventory score for stroke rehabilitation assessment). These models use preprocessed data as inputs, and a review of preprocessing methods for accelerometer data is given. In this thesis, we provide a systematic exploration of deep learning models for HAR, testing the feasibility of recurrent neural network models for this task. We also discuss modelling recovery levels from stroke based on the number of occurrences of events (based on mixture model components) on each side of the body. We also apply a Multi-Instance Learning model to model stroke rehabilitation using accelerometer data, which has both visualization advantages and the potential to be applicable to other diseases.
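    As a concrete example of the recurrent models explored for HAR, the sketch below defines a small GRU classifier over fixed-length windows of tri-axial accelerometer data. The architecture and hyperparameters are illustrative placeholders, not those evaluated in the thesis.

```python
import torch
import torch.nn as nn

class GRUActivityClassifier(nn.Module):
    """Recurrent classifier for windows of tri-axial accelerometer data (sketch)."""
    def __init__(self, n_channels=3, hidden=64, n_classes=6):
        super().__init__()
        self.gru = nn.GRU(n_channels, hidden, num_layers=2, batch_first=True)
        self.head = nn.Linear(hidden, n_classes)

    def forward(self, x):           # x: (batch, time, channels)
        _, h = self.gru(x)          # h: (num_layers, batch, hidden)
        return self.head(h[-1])     # logits per activity class

# usage with a hypothetical batch of 8 windows, 128 samples, 3 axes:
# logits = GRUActivityClassifier()(torch.randn(8, 128, 3))
```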

    A retinal vasculature tracking system guided by a deep architecture

    Many diseases, such as diabetic retinopathy (DR) and cardiovascular diseases, show their early signs on the retinal vasculature. Analysing the vasculature in fundus images may provide a tool for ophthalmologists to diagnose eye-related diseases and to monitor their progression. These analyses may also facilitate the discovery of new relations between changes in the retinal vasculature and the existence or progression of related diseases, or help validate known relations. In this thesis, a data-driven method, namely a Translational Deep Belief Net (TDBN), is adapted to vasculature segmentation. The segmentation performance of the TDBN on low-resolution images was found to be comparable to that of the best-performing methods. This network is then used for the implementation of super-resolution for the segmentation of high-resolution images. This approach provides an acceleration during segmentation that relates to the down-sampling ratio of the input fundus image. Finally, the TDBN is extended to generate probability maps for the existence of vessel parts, namely vessel interior, centreline, boundary and crossing/bifurcation patterns in centrelines. These probability maps are used to guide a probabilistic vasculature tracking system. Although segmentation can indicate the existence of vasculature in a fundus image, it does not give quantifiable measures for the vasculature, which have more practical value in medical clinics. In the second half of the thesis, a retinal vasculature tracking system is presented. This system uses particle filters to describe vessel morphology and topology. In contrast to previous studies, the guidance for tracking is provided by a combination of probability maps generated by the TDBN. Experiments on a publicly available dataset, REVIEW, showed that the consistency of vessel widths predicted by the proposed method was better than that obtained from observers. Moreover, very noisy and low-contrast vessel boundaries, which were hardly identifiable to the naked eye, were accurately estimated by the proposed tracking system. Bifurcation/crossing locations were also detected almost completely during the course of tracking. Considering these promising initial results, future work involves analysing the performance of the tracking system for automatic detection of complete vessel networks in fundus images.
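    The tracking system described above rests on sequential importance resampling: particles carrying a position and heading are propagated along the vessel and re-weighted by how well they agree with the guidance maps. The simplified sketch below uses a single vessel-probability map as guidance (the thesis combines several TDBN-derived maps) and a fixed step length, so treat it as a schematic of the idea rather than the actual tracker.

```python
import numpy as np

def track_step(particles, directions, prob_map, step=2.0, sigma=0.3, rng=np.random):
    """One sequential importance-resampling step of a simplified vessel tracker.

    particles  : (N, 2) array of (row, col) positions
    directions : (N,) array of heading angles in radians
    prob_map   : 2D map of vessel probabilities (e.g. from a segmentation network)
    """
    # propagate: perturb heading, then move a fixed step along it
    directions = directions + rng.normal(0.0, sigma, size=directions.shape)
    particles = particles + step * np.stack([np.sin(directions), np.cos(directions)], axis=1)

    # weight by the probability of landing on a vessel pixel
    r = np.clip(particles[:, 0].astype(int), 0, prob_map.shape[0] - 1)
    c = np.clip(particles[:, 1].astype(int), 0, prob_map.shape[1] - 1)
    w = prob_map[r, c] + 1e-12
    w /= w.sum()

    # resample particles in proportion to their weights
    idx = rng.choice(len(particles), size=len(particles), p=w)
    return particles[idx], directions[idx]
```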

    High resolution laboratory x-ray tomography for biomedical research: From design to application

    Laboratory x-ray micro- and nano-tomography are emerging techniques in biomedical research. Through the use of phase contrast, sufficient contrast can be achieved in soft tissue to support medical studies. With ongoing developments of x-ray sources and detectors, biomedical studies can increasingly be performed in the laboratory and do not necessarily require synchrotron radiation. Particularly nano-focus x-ray sources offer new possibilities for the study of soft tissue. However, with increasing resolution, the complexity and stability requirements on laboratory systems advance as well. This thesis describes the design and implementation of two systems, a micro-CT and a nano-CT, which are used for biomedical imaging. To increase the resolution of the micro-CT, super-resolution imaging is adopted and evaluated for x-ray imaging, grating-based imaging and computed tomography, utilising electromagnetic stepping of the x-ray source to acquire shifted low-resolution images from which a high-resolution image is estimated. The experiments have shown that super-resolution can significantly improve the resolution in 2D and 3D imaging, but also that upscaling during the reconstruction can be a viable approach in tomography, which does not require additional images. Element-specific information can be obtained by using photon-counting detectors with energy-discriminating thresholds. By performing a material decomposition, a dataset can be split into multiple different materials. Tissue contains a variety of elements with absorption edges in the range of 4 – 11 keV, which can be identified by placing energy thresholds just below and above these edges, as we have demonstrated using human atherosclerotic plaques. An evaluation of radiopaque dyes as alternative contrast agents to identify vessels in lung tissue was performed using phase-contrast micro-tomography. We showed that the dye solutions have a sufficiently low density to not cause any artefacts while still being separable from the tissue and distinguishable from each other. Finally, the design and implementation of the nano-CT system is discussed. The system performance is assessed in 2D and 3D, achieving sub-micron resolution and satisfactory tissue contrast through phase contrast. Application examples are presented using lung tissue, a mouse heart, and freeze-dried leaves.
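    The source-stepping super-resolution scheme amounts to placing each shifted low-resolution frame onto a finer grid and combining the overlapping samples. The sketch below is a naive shift-and-add combination assuming known shifts that fall on the high-resolution grid; the actual reconstruction used for the micro-CT would handle arbitrary sub-pixel shifts and regularisation more carefully.

```python
import numpy as np

def shift_and_add(low_res_images, shifts, factor):
    """Naive shift-and-add super-resolution (illustrative sketch).

    low_res_images : list of (h, w) arrays acquired at shifted source positions
    shifts         : list of (dy, dx) shifts in low-resolution pixel units
    factor         : integer upsampling factor of the high-resolution grid
    """
    h, w = low_res_images[0].shape
    acc = np.zeros((h * factor, w * factor))
    cnt = np.zeros_like(acc)
    for img, (dy, dx) in zip(low_res_images, shifts):
        r0 = int(round(dy * factor))
        c0 = int(round(dx * factor))
        rows = (np.arange(h) * factor + r0) % (h * factor)
        cols = (np.arange(w) * factor + c0) % (w * factor)
        acc[np.ix_(rows, cols)] += img          # scatter frame onto the fine grid
        cnt[np.ix_(rows, cols)] += 1
    return acc / np.maximum(cnt, 1)             # average where samples overlap
```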