Fabric Image Representation Encoding Networks for Large-scale 3D Medical Image Analysis
Deep neural networks are parameterised by weights that encode feature
representations, and their performance hinges on generalisation, which in turn
requires large-scale, feature-rich datasets. The lack of large-scale labelled
3D medical imaging datasets restricts the construction of such generalised
networks. In this work, a novel 3D segmentation network, the Fabric Image
Representation Encoding Network (FIRENet), is proposed to extract and encode
generalisable feature representations from multiple medical image datasets in
a large-scale manner.
FIRENet learns image-specific feature representations via a 3D fabric
network architecture that contains an exponential number of sub-architectures,
enabling it to handle varied protocols and coverage of anatomical regions and
structures. The
fabric network uses Atrous Spatial Pyramid Pooling (ASPP) extended to 3D to
extract local and image-level features at a fine selection of scales. The
fabric is constructed with weighted edges allowing the learnt features to
dynamically adapt to the training data at an architecture level. Conditional
padding modules, which are integrated into the network to reinsert voxels
discarded by feature pooling, allow the network to inherently process
different-size images at their original resolutions. FIRENet was trained for
feature learning via automated semantic segmentation of pelvic structures and
obtained a state-of-the-art median DSC score of 0.867. FIRENet was also
simultaneously trained on MR (Magnetic Resonance) images acquired from 3D
examinations of the hip, knee, and shoulder joints, together with a public OAI
knee dataset, to perform automated segmentation of bone across
anatomy. Transfer learning was used to show that the features learnt through
the pelvic segmentation helped achieve improved mean DSC scores of 0.962,
0.963, 0.945 and 0.986 for automated segmentation of bone across datasets.
Comment: 12 pages, 10 figures
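To make the multi-scale idea concrete, below is a minimal sketch of what an ASPP block extended to 3D can look like in PyTorch. It illustrates the general technique the abstract names, not the authors' implementation; the module name, dilation rates, and projection layer are all assumptions.

```python
import torch
import torch.nn as nn

class ASPP3D(nn.Module):
    """Illustrative 3D Atrous Spatial Pyramid Pooling: parallel dilated
    3D convolutions extract features at several scales, plus a global
    image-level branch. Dilation rates here are assumptions."""
    def __init__(self, in_ch, out_ch, rates=(1, 2, 4, 8)):
        super().__init__()
        self.branches = nn.ModuleList([
            nn.Conv3d(in_ch, out_ch, kernel_size=3, padding=r, dilation=r)
            for r in rates
        ])
        # Image-level branch: global pooling followed by a 1x1x1 convolution.
        self.global_branch = nn.Sequential(
            nn.AdaptiveAvgPool3d(1),
            nn.Conv3d(in_ch, out_ch, kernel_size=1),
        )
        self.project = nn.Conv3d(out_ch * (len(rates) + 1), out_ch, kernel_size=1)

    def forward(self, x):
        feats = [b(x) for b in self.branches]
        # Broadcast the pooled image-level feature back to the voxel grid.
        g = nn.functional.interpolate(self.global_branch(x), size=x.shape[2:],
                                      mode='trilinear', align_corners=False)
        feats.append(g)
        return self.project(torch.cat(feats, dim=1))
```

Each parallel branch sees the same voxels with a different receptive field, which is what lets a single block capture both local and image-level context.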
Convolutional Sparse Kernel Network for Unsupervised Medical Image Analysis
The availability of large-scale annotated image datasets and recent advances
in supervised deep learning methods enable the end-to-end derivation of
representative image features that can impact a variety of image analysis
problems. Such supervised approaches, however, are difficult to implement in
the medical domain, where large volumes of labelled data are hard to obtain
due to the complexity of manual annotation and inter- and intra-observer
variability in label assignment. We propose a new convolutional sparse kernel
network (CSKN), which is a hierarchical unsupervised feature learning framework
that addresses the challenge of learning representative visual features in
medical image analysis domains where there is a lack of annotated training
data. Our framework has three contributions: (i) We extend kernel learning to
identify and represent invariant features across image sub-patches in an
unsupervised manner. (ii) We initialise our kernel learning with a layer-wise
pre-training scheme that leverages the sparsity inherent in medical images to
extract initial discriminative features. (iii) We adapt a multi-scale spatial
pyramid pooling (SPP) framework to capture subtle geometric differences between
learned visual features. We evaluated our framework in medical image retrieval
and classification on three public datasets. Our results show that our CSKN had
better accuracy when compared to other conventional unsupervised methods and
comparable accuracy to methods that used state-of-the-art supervised
convolutional neural networks (CNNs). Our findings indicate that our
unsupervised CSKN provides an opportunity to leverage unannotated big data in
medical imaging repositories.
Comment: Accepted by Medical Image Analysis (with a new title 'Convolutional
Sparse Kernel Network for Unsupervised Medical Image Analysis'). The
manuscript is available at https://doi.org/10.1016/j.media.2019.06.005
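The multi-scale SPP component in contribution (iii) can be sketched generically. The following PyTorch snippet is a plain spatial pyramid pooling function under assumed pyramid levels; it mirrors the standard SPP technique rather than the CSKN-specific variant.

```python
import torch
import torch.nn.functional as F

def spatial_pyramid_pool(feature_map, levels=(1, 2, 4)):
    """Illustrative multi-scale SPP: pool a feature map into grids of
    increasing resolution and concatenate the results into a single
    fixed-length descriptor. Level sizes are assumptions."""
    n, c = feature_map.shape[:2]
    pooled = []
    for size in levels:
        # Adaptive pooling yields a size x size grid at each pyramid level.
        p = F.adaptive_max_pool2d(feature_map, output_size=size)
        pooled.append(p.reshape(n, -1))
    return torch.cat(pooled, dim=1)  # shape: (n, c * sum(s * s for s in levels))
```

Because adaptive pooling always produces the same grid sizes, the concatenated descriptor has a fixed length regardless of input image size, which is what makes SPP useful on top of convolutional feature maps.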
Automated Distinct Bone Segmentation from Computed Tomography Images using Deep Learning
Large-scale CT scans are frequently performed for forensic and diagnostic purposes, to plan and
direct surgical procedures, and to track the development of bone-related diseases. This often
involves radiologists who have to annotate bones manually or in a semi-automatic way, which is
a time-consuming task. Their annotation workload can be reduced by automated segmentation
and detection of individual bones. This automation of distinct bone segmentation not only has
the potential to accelerate current workflows but also opens up new possibilities for processing
and presenting medical data for planning, navigation, and education.
In this thesis, we explored the use of deep learning for automating the segmentation of all
individual bones within an upper-body CT scan. To do so, we had to find a network architecture
that provides a good trade-off between the problem’s high computational demands and the
results’ accuracy. After finding a baseline method and having enlarged the dataset, we set out
to eliminate the most prevalent types of error. To this end, we introduced a novel method called
binary-prediction-enhanced multi-class (BEM) inference, which separates the task into two:
distinguishing bone from non-bone is conducted separately from identifying the individual bones.
Both predictions are then merged, which leads to superior results. Another type of error is
tackled by our newly developed architecture, the Sneaky-Net, which receives additional inputs with larger
fields of view but at a smaller resolution. We can thus sneak more extensive areas of the input
into the network while keeping the growth of additional pixels in check.
Overall, we present a deep-learning-based method that reliably segments most of the over
one hundred distinct bones present in upper-body CT scans, in an end-to-end trained manner,
quickly enough to be used in interactive software. Our algorithm has been included in our
group’s virtual-reality medical image visualisation software SpectoVR, with the plan to be used
as one of the puzzle pieces in surgical planning and navigation, as well as in the education of
future doctors.
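The BEM inference step lends itself to a short sketch. The merge rule below is an assumption based on the description above (the binary prediction decides where bone is, the multi-class prediction decides which bone each voxel belongs to); the thesis' exact rule may differ.

```python
import numpy as np

def bem_merge(binary_logits, multiclass_logits):
    """Illustrative BEM-style merge of two predictions.

    binary_logits:      (2, D, H, W)  background vs. bone
    multiclass_logits:  (C, D, H, W)  background + individual bone classes
    """
    # Voxels the binary network judges to be bone of any kind.
    bone_mask = binary_logits.argmax(axis=0) == 1
    # Among bone voxels, pick the most likely individual bone (skip class 0).
    bone_labels = multiclass_logits[1:].argmax(axis=0) + 1
    merged = np.where(bone_mask, bone_labels, 0)
    return merged.astype(np.int32)
```

Gating the per-bone labels with the binary mask lets each network specialise: one learns the easier bone/non-bone boundary, the other the harder bone-identity question.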
SegmentAnyBone: A Universal Model that Segments Any Bone at Any Location on MRI
Magnetic Resonance Imaging (MRI) is pivotal in radiology, offering
non-invasive and high-quality insights into the human body. Precise
segmentation of MRIs into different organs and tissues would be highly
beneficial since it would allow for a higher level of understanding of the
image content and enable important measurements, which are essential for
accurate diagnosis and effective treatment planning. Specifically, segmenting
bones in MRI would allow for more quantitative assessments of musculoskeletal
conditions, while such assessments are largely absent in current radiological
practice. The difficulty of bone MRI segmentation is illustrated by the fact
that few algorithms are publicly available for use, and those reported in
the literature typically address a specific anatomic area. In our study, we
propose a versatile, publicly available deep-learning model for bone
segmentation in MRI across multiple standard MRI locations. The proposed model
can operate in two modes: fully automated segmentation and prompt-based
segmentation. Our contributions include (1) collecting and annotating a new MRI
dataset across various MRI protocols, encompassing over 300 annotated volumes
and 8485 annotated slices across diverse anatomic regions; (2) investigating
several standard network architectures and strategies for automated
segmentation; (3) introducing SegmentAnyBone, an innovative foundational
model-based approach that extends Segment Anything Model (SAM); (4) comparative
analysis of our algorithm and previous approaches; and (5) generalization
analysis of our algorithm across different anatomical locations and MRI
sequences, as well as an external dataset. We publicly release our model at
https://github.com/mazurowski-lab/SegmentAnyBone.
Comment: 15 pages, 15 figures
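The two operating modes can be pictured with a small, explicitly hypothetical wrapper; `model.automatic` and `model.with_prompts` are invented names for this sketch and are not the API of the released repository.

```python
def segment_bone(model, volume, prompts=None):
    """Hypothetical two-mode dispatch mirroring the abstract's description.

    The `automatic` and `with_prompts` attributes are invented for this
    sketch; consult the released repository for the actual interface.
    """
    if prompts is None:
        # Fully automated mode: no user interaction required.
        return model.automatic(volume)
    # Prompt-based mode: e.g. user clicks or boxes guide the segmentation.
    return model.with_prompts(volume, prompts)
```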
Image Processing and Analysis for Preclinical and Clinical Applications
Radiomics is one of the most successful branches of research in the field of image processing and analysis, as it provides valuable quantitative information for personalized medicine. It has the potential to discover features of the disease that cannot be appreciated with the naked eye in both preclinical and clinical studies. In general, all quantitative approaches based on biomedical images, such as positron emission tomography (PET), computed tomography (CT) and magnetic resonance imaging (MRI), have a positive clinical impact in the detection of biological processes and diseases, as well as in predicting response to treatment. This Special Issue, “Image Processing and Analysis for Preclinical and Clinical Applications”, addresses some gaps in this field to improve the quality of research in the clinical and preclinical environment. It consists of fourteen peer-reviewed papers covering a range of topics and applications related to biomedical image processing and analysis.
Explainable artificial intelligence (XAI) in deep learning-based medical image analysis
With an increase in deep learning-based methods, the call for explainability
of such methods grows, especially in high-stakes decision making areas such as
medical image analysis. This survey presents an overview of eXplainable
Artificial Intelligence (XAI) used in deep learning-based medical image
analysis. A framework of XAI criteria is introduced to classify deep
learning-based medical image analysis methods. Papers on XAI techniques in
medical image analysis are then surveyed and categorized according to the
framework and according to anatomical location. The paper concludes with an
outlook of future opportunities for XAI in medical image analysis.
Comment: Submitted for publication. Comments welcome by email to the first author.
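As a concrete example of the kind of technique such a survey covers, here is a minimal gradient-based saliency map in PyTorch; it is a generic illustration, not a method from any specific surveyed paper.

```python
import torch

def gradient_saliency(model, image):
    """Minimal gradient-based saliency, one of the simplest XAI techniques:
    the magnitude of the input gradient indicates which pixels most
    influence the model's top prediction."""
    model.eval()
    image = image.clone().requires_grad_(True)
    score = model(image).max()  # score of the top predicted class (batch of 1)
    score.backward()
    # Aggregate gradient magnitude over channels to get a 2D heat map.
    return image.grad.abs().amax(dim=1)
```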