Medical Image Analysis using Deep Relational Learning
Over the past decade, medical image analysis has made remarkable progress,
driven by deep learning and in particular the rapid development of deep neural
networks. However, effectively exploiting the relational information between
tissues and organs in medical images remains a challenging and under-studied
problem. In this thesis, we propose two
novel solutions to this problem based on deep relational learning. First, we
propose a context-aware fully convolutional network that effectively models
implicit relation information between features to perform medical image
segmentation. The network achieves state-of-the-art segmentation results on
the Multimodal Brain Tumor Segmentation 2017 (BraTS 2017) and 2018
(BraTS 2018) datasets. Subsequently, we propose a new
hierarchical homography estimation network to achieve accurate medical image
mosaicing by learning the explicit spatial relationship between adjacent
frames. We conduct experiments on the UCL Fetoscopy Placenta dataset, where
our hierarchical homography estimation network outperforms other
state-of-the-art mosaicing methods while generating robust and meaningful
mosaics on unseen frames.
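The homography estimation described above rests on a simple geometric operation: a 3×3 homography maps pixels of one frame into the coordinate system of an adjacent frame in homogeneous coordinates, which is what lets the frames be stitched into a mosaic. As an illustrative sketch only (the function name and matrices are hypothetical, not the thesis's implementation):

```python
def warp_point(H, x, y):
    """Apply a 3x3 homography H (row-major nested lists) to pixel (x, y)."""
    xh = H[0][0] * x + H[0][1] * y + H[0][2]
    yh = H[1][0] * x + H[1][1] * y + H[1][2]
    w  = H[2][0] * x + H[2][1] * y + H[2][2]
    # Divide by the homogeneous coordinate to return to pixel coordinates.
    return xh / w, yh / w

# Identity homography leaves points unchanged; a translation shifts them.
I = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]
T = [[1, 0, 5], [0, 1, -3], [0, 0, 1]]
print(warp_point(I, 10, 20))  # (10.0, 20.0)
print(warp_point(T, 10, 20))  # (15.0, 17.0)
```

A learned estimator such as the thesis's network predicts the entries of H between adjacent frames; composing the per-pair homographies places every frame in a common mosaic coordinate system.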
A deep learning based dual encoder–decoder framework for anatomical structure segmentation in chest X-ray images
Automated multi-organ segmentation plays an essential part in the computer-aided diagnosis (CAD) of chest X-ray fluoroscopy. However, developing a CAD system for anatomical structure segmentation remains challenging due to several indistinct structures, variations in anatomical structure shape among individuals, the presence of medical tools such as pacemakers and catheters, and various artifacts in chest radiographic images. In this paper, we propose a robust deep learning segmentation framework for anatomical structures in chest radiographs that utilizes a dual encoder–decoder convolutional neural network (CNN). The first network in the dual encoder–decoder structure effectively utilizes a pre-trained VGG19 as an encoder for the segmentation task. The pre-trained encoder output is fed into a squeeze-and-excitation (SE) block to boost the network’s representational power, enabling dynamic channel-wise feature recalibration. The calibrated features are passed into the first decoder to generate the mask. We integrate the generated mask with the input image and pass it through a second encoder–decoder network with recurrent residual blocks and an attention gate module to capture additional contextual features and improve the segmentation of smaller regions. Three public chest X-ray datasets are used to evaluate the proposed method for multi-organ segmentation of the heart, lungs, and clavicles, and for single-organ segmentation of the lungs alone. The experimental results show that our proposed technique outperforms existing multi-class and single-class segmentation methods.
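The squeeze-and-excitation step mentioned above has a compact structure: global average pooling per channel ("squeeze"), a small bottleneck network ending in a sigmoid ("excitation"), and per-channel rescaling. The sketch below uses hand-set toy weights and list-based tensors purely for illustration; the paper's learned weights and tensor layout are not reproduced here.

```python
import math

def se_recalibrate(feats, w1, w2):
    """Squeeze-and-excitation channel recalibration.

    feats: list of C channels, each a flat list of activations.
    w1:    bottleneck weights, shape C_reduced x C (squeeze -> ReLU).
    w2:    gate weights, shape C x C_reduced (bottleneck -> sigmoid).
    """
    # Squeeze: global average pooling collapses each channel to one scalar.
    z = [sum(ch) / len(ch) for ch in feats]
    # Excitation: FC + ReLU bottleneck, then FC + sigmoid -> gates in (0, 1).
    h = [max(0.0, sum(w * zi for w, zi in zip(row, z))) for row in w1]
    gates = [1.0 / (1.0 + math.exp(-sum(w * hi for w, hi in zip(row, h))))
             for row in w2]
    # Scale: reweight every activation in a channel by that channel's gate.
    return [[g * a for a in ch] for g, ch in zip(gates, feats)]

# Toy example: two channels, one bottleneck unit attending to channel 0.
feats = [[1.0, 1.0], [2.0, 2.0]]
w1 = [[1.0, 0.0]]
w2 = [[1.0], [0.0]]   # channel 1 receives a neutral gate of sigmoid(0) = 0.5
print(se_recalibrate(feats, w1, w2))
```

The point of the mechanism is that channels carrying informative features for the current input receive gates near 1, while less useful channels are suppressed, which is the "dynamic channel-wise feature recalibration" the abstract refers to.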
Deep Learning Approaches for Data Augmentation in Medical Imaging: A Review
Deep learning has become a popular tool for medical image analysis, but the
limited availability of training data remains a major challenge, particularly
in the medical field where data acquisition can be costly and subject to
privacy regulations. Data augmentation techniques offer a solution by
artificially increasing the number of training samples, but these techniques
often produce limited and unconvincing results. To address this issue, a
growing number of studies have proposed the use of deep generative models to
generate more realistic and diverse data that conform to the true distribution
of the data. In this review, we focus on three types of deep generative models
for medical image augmentation: variational autoencoders, generative
adversarial networks, and diffusion models. We provide an overview of the
current state of the art in each of these models and discuss their potential
for use in different downstream tasks in medical imaging, including
classification, segmentation, and cross-modal translation. We also evaluate the
strengths and limitations of each model and suggest directions for future
research in this field. Our goal is to provide a comprehensive review of the
use of deep generative models for medical image augmentation and to highlight
the potential of these models for improving the performance of deep learning
algorithms in medical image analysis.
Artificial Intelligence: Development and Applications in Neurosurgery
The last decade has witnessed a significant increase in the relevance of artificial intelligence (AI) in neuroscience. Having gained prominence for its potential to revolutionize medical decision making, data analytics, and clinical workflows, AI is poised to be increasingly implemented into neurosurgical practice. However, certain considerations pose significant challenges to its immediate and widespread implementation. Hence, this chapter explores current developments in AI as they pertain to the field of clinical neuroscience, with a primary focus on neurosurgery. Also included is a brief discussion of important economic and ethical considerations related to the feasibility and implementation of AI-based technologies in the neurosciences, including future horizons such as the operational integration of human and non-human capabilities.
The 2023 wearable photoplethysmography roadmap
Photoplethysmography is a key sensing technology used in wearable devices such as smartwatches and fitness trackers. Currently, photoplethysmography sensors are used to monitor physiological parameters including heart rate and heart rhythm, and to track activities like sleep and exercise. Yet wearable photoplethysmography has the potential to provide much more information on health and wellbeing, which could inform clinical decision making. This Roadmap outlines directions for research and development to realise the full potential of wearable photoplethysmography. Experts discuss key topics within the areas of sensor design, signal processing, clinical applications, and research directions. Their perspectives provide valuable guidance to researchers developing wearable photoplethysmography technology.
Is attention all you need in medical image analysis? A review
Medical imaging is a key component in clinical diagnosis, treatment planning
and clinical trial design, accounting for almost 90% of all healthcare data.
CNNs have achieved notable performance gains in medical image analysis (MIA)
in recent years: they efficiently model local pixel interactions and can be
trained on small-scale medical imaging data. The main disadvantage of typical
CNN models, however, is that they ignore global pixel relationships within
images, which limits their ability to generalise to
out-of-distribution data with different
'global' information. The recent progress of Artificial Intelligence gave rise
to Transformers, which can learn global relationships from data. However, full
Transformer models need to be trained on large-scale data and involve
tremendous computational complexity. Attention and Transformer components
(Transf/Attention), which preserve the capacity to model global
relationships, have been proposed as lighter alternatives to full Transformers.
Recently, there has been an increasing trend to cross-pollinate complementary
local-global properties from CNN and Transf/Attention architectures, which has led
to a new era of hybrid models. The past years have witnessed substantial growth
in hybrid CNN-Transf/Attention models across diverse MIA problems. In this
systematic review, we survey existing hybrid CNN-Transf/Attention models,
review and unravel key architectural designs, analyse breakthroughs, and
evaluate current and future opportunities as well as challenges. We also
introduce a comprehensive analysis framework for generalisation opportunities
with scientific and clinical impact, from which new data-driven domain
generalisation and adaptation methods can be stimulated.
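The "global relationships" that distinguish Transformers from CNNs in the review above come from scaled dot-product attention, in which every query position mixes information from all positions at once. The following is a minimal illustrative sketch (not code from any surveyed model), using plain Python lists as tiny matrices:

```python
import math

def attention(Q, K, V):
    """Scaled dot-product attention on small matrices (lists of row vectors)."""
    d = len(K[0])
    # Similarity of every query with every key, scaled by sqrt(d).
    scores = [[sum(q * k for q, k in zip(qr, kr)) / math.sqrt(d) for kr in K]
              for qr in Q]
    out = []
    for row in scores:
        # Softmax over ALL keys: each query mixes information from every
        # position, which is the global-context property CNNs lack.
        m = max(row)
        exps = [math.exp(s - m) for s in row]
        total = sum(exps)
        weights = [e / total for e in exps]
        out.append([sum(w * v[j] for w, v in zip(weights, V))
                    for j in range(len(V[0]))])
    return out

# One query aligned with the first key attends mostly to the first value.
Q = [[1.0, 0.0]]
K = [[1.0, 0.0], [0.0, 1.0]]
V = [[1.0], [0.0]]
print(attention(Q, K, V))  # roughly [[0.67]]
```

The quadratic cost in the number of positions is visible in the nested loops over all query-key pairs, which is why the review notes that full Transformers carry tremendous computational complexity and why lighter Transf/Attention components are attractive.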
Novel 129Xe Magnetic Resonance Imaging and Spectroscopy Measurements of Pulmonary Gas-Exchange
Gas-exchange is the primary function of the lungs and involves removing carbon dioxide from the body and exchanging it within the alveoli for inhaled oxygen. Several different pulmonary, cardiac and cardiovascular abnormalities have negative effects on pulmonary gas-exchange. Unfortunately, clinical tests do not always pinpoint the problem; sensitive and specific measurements are needed to probe the individual components participating in gas-exchange for a better understanding of pathophysiology, disease progression and response to therapy.
In vivo Xenon-129 gas-exchange magnetic resonance imaging (129Xe gas-exchange MRI) has the potential to overcome these challenges. When participants inhale hyperpolarized 129Xe gas, it exhibits different MR spectral properties as a gas, as it diffuses through the alveolar membrane, and as it binds to red blood cells. 129Xe MR spectroscopy and imaging provide a way to tease out the different anatomic components of gas-exchange simultaneously and provide spatial information about where abnormalities may occur.
In this thesis, I developed and applied 129Xe MR spectroscopy and imaging to measure gas-exchange in the lungs alongside other clinical and imaging measurements. I measured 129Xe gas-exchange in asymptomatic congenital heart disease and in prospective, controlled studies of long COVID. I also developed mathematical tools to model 129Xe MR signals during acquisition and reconstruction. The insights gained from my work underscore the potential of 129Xe gas-exchange MRI biomarkers to advance our understanding of cardiopulmonary disease. My work also provides a way to generate a deeper imaging and physiologic understanding of gas-exchange in vivo in healthy participants and in patients with chronic lung and heart disease.
The optimal connection model for blood vessels segmentation and the MEA-Net
Vascular diseases have long been regarded as a significant health concern.
Accurately detecting the location, shape, and afflicted regions of blood
vessels from a diverse range of medical images has proven to be a major
challenge. Obtaining blood vessels that retain their correct topological
structures is currently a crucial research issue. Numerous efforts have sought
to reinforce neural networks' learning of vascular geometric features,
including measures to ensure the correct topological structure of the
segmentation result's vessel centerline. Typically, these methods extract
topological features from the network's segmentation result and then apply
regular constraints to reinforce the accuracy of critical components and the
overall topological structure. However, as blood vessels are three-dimensional
structures, it is essential to achieve complete local vessel segmentation,
which necessitates enhancing the segmentation of vessel boundaries.
Furthermore, current methods are limited to handling 2D blood vessel
fragmentation cases. To address these issues, we propose a boundary attention
module that directly extracts boundary voxels from the network's segmentation
result. Additionally, we have
established an optimal connection model based on minimal surfaces to determine
the connection order between blood vessels. Our method achieves
state-of-the-art performance in 3D multi-class vascular segmentation tasks, as
evidenced by the high values of Dice Similarity Coefficient (DSC) and
Normalized Surface Dice (NSD) metrics. Furthermore, our approach improves the
Betti error, LR error, and BR error indicators of vessel richness and
structural integrity by more than 10% compared with other methods, effectively
addresses vessel fragmentation, and yields blood vessels with a more precise
topological structure.
Development of Quantitative Bone SPECT Analysis Methods for Metastatic Bone Disease
Prostate cancer is one of the most prevalent types of cancer in males in the United States. Bone is a common site of metastases for metastatic prostate cancer. However, bone metastases are often considered “unmeasurable” using standard anatomic imaging and the RECIST 1.1 criteria. As a result, response to therapy is often suboptimally evaluated by visual interpretation of planar bone scintigraphy, with response criteria based on the presence or absence of new lesions. With the commercial availability of quantitative single-photon emission computed tomography (SPECT) methods, it is now feasible to establish quantitative metrics of therapy response in skeletal metastases. Quantitative bone SPECT (QBSPECT) may estimate bone lesion uptake, volume, and the number of lesions more accurately than planar imaging. However, the accuracy of activity quantification in QBSPECT relies heavily on the precision with which bone metastases and bone structures are delineated. In this research, we aim to develop automated image segmentation methods for fast and accurate delineation of bone and bone metastases in QBSPECT. First, we developed registration methods to generate a dataset of realistic, anatomically varying computerized phantoms for use in QBSPECT simulations. Using these simulations, we developed supervised, computer-automated segmentation methods to minimize intra- and inter-observer variation in delineating bone metastases. This project provides accurate segmentation techniques for QBSPECT and paves the way for the development of QBSPECT methods for assessing the therapy response of bone metastases.
The Liver Tumor Segmentation Benchmark (LiTS)
In this work, we report the set-up and results of the Liver Tumor Segmentation Benchmark (LiTS), which was organized in conjunction with the IEEE International Symposium on Biomedical Imaging (ISBI) 2017 and the International Conferences on Medical Image Computing and Computer-Assisted Intervention (MICCAI) 2017 and 2018. The image dataset is diverse and contains primary and secondary tumors with varied sizes and appearances and various lesion-to-background contrast levels (hyper-/hypo-dense), created in collaboration with seven hospitals and research institutions. Seventy-five submitted liver and liver tumor segmentation algorithms were trained on a set of 131 computed tomography (CT) volumes and were tested on 70 unseen test images acquired from different patients. We found that no single algorithm performed best for both liver and liver tumors across the three events. The best liver segmentation algorithm achieved a Dice score of 0.963, whereas, for tumor segmentation, the best algorithms achieved Dice scores of 0.674 (ISBI 2017), 0.702 (MICCAI 2017), and 0.739 (MICCAI 2018). Retrospectively, we performed additional analysis on liver tumor detection and revealed that not all top-performing segmentation algorithms worked well for tumor detection. The best liver tumor detection method achieved a lesion-wise recall of 0.458 (ISBI 2017), 0.515 (MICCAI 2017), and 0.554 (MICCAI 2018), indicating the need for further research. LiTS remains an active benchmark and resource for research, e.g., contributing liver-related segmentation tasks to http://medicaldecathlon.com/. In addition, both data and online evaluation are accessible via https://competitions.codalab.org/competitions/17094
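The Dice scores reported in the benchmark above follow the standard overlap formula: twice the intersection of the predicted and reference masks divided by the sum of their sizes. A minimal sketch for binary masks (the helper name and toy masks are illustrative, not LiTS evaluation code):

```python
def dice_score(pred, truth):
    """Dice Similarity Coefficient between two binary masks (flat 0/1 lists)."""
    intersection = sum(1 for p, t in zip(pred, truth) if p == 1 and t == 1)
    total = sum(pred) + sum(truth)
    if total == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return 2.0 * intersection / total

pred  = [1, 1, 0, 1, 0, 0]
truth = [1, 0, 0, 1, 1, 0]
print(dice_score(pred, truth))  # 2*2 / (3+3) ≈ 0.667
```

A score of 1.0 means perfect overlap and 0.0 means none, which puts the benchmark's best liver result (0.963) and best tumor results (0.674 to 0.739) in perspective: small structures with fuzzy boundaries, like tumors, are penalized far more heavily by this metric than large organs.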