18 research outputs found
Kidney and Kidney-tumor Segmentation Using Cascaded V-Nets
Kidney cancer is the seventh most common cancer worldwide, accounting for an estimated 140,000 deaths globally each year. Kidney segmentation in volumetric medical images plays an important role in clinical diagnosis, radiotherapy planning, interventional guidance and patient follow-up; however, to our knowledge, no automatic kidney-tumor segmentation method is present in the literature. In this paper, we address the challenge of simultaneous semantic segmentation of kidney and tumor by adopting a cascaded V-Net framework. The first V-Net in our pipeline produces a region of interest around the probable location of the kidney and tumor, which facilitates the removal of the unwanted regions of the CT volume. The second set of V-Nets is trained separately for the kidney and the tumor, producing the kidney and tumor masks respectively. The final segmentation is achieved by combining the kidney and tumor masks. Our method is trained and validated on 190 and 20 patient scans, respectively, accessed from the 2019 Kidney Tumor Segmentation Challenge database. We achieved a validation accuracy, in terms of the Sørensen-Dice coefficient, of about 97%.
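The cascade's surrounding steps can be illustrated with a minimal numpy sketch: crop the CT volume to the stage-1 region of interest, then merge the two stage-2 binary masks into a single label map. The helper names `crop_to_roi` and `combine_masks` are hypothetical; the V-Nets themselves are not shown.

```python
import numpy as np

def crop_to_roi(volume, roi_mask, margin=2):
    """Crop a CT volume to the bounding box of the coarse stage-1 ROI mask."""
    coords = np.argwhere(roi_mask)
    lo = np.maximum(coords.min(axis=0) - margin, 0)
    hi = np.minimum(coords.max(axis=0) + 1 + margin, roi_mask.shape)
    slices = tuple(slice(l, h) for l, h in zip(lo, hi))
    return volume[slices], slices

def combine_masks(kidney_mask, tumor_mask):
    """Merge binary kidney/tumor masks into one label map:
    0 = background, 1 = kidney, 2 = tumor (tumor takes precedence)."""
    labels = np.zeros(kidney_mask.shape, dtype=np.uint8)
    labels[kidney_mask > 0] = 1
    labels[tumor_mask > 0] = 2
    return labels
```

The precedence rule in `combine_masks` reflects that a tumor voxel is also kidney tissue, so the more specific label wins when the two masks overlap.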
Fabric Image Representation Encoding Networks for Large-scale 3D Medical Image Analysis
Deep neural networks are parameterised by weights that encode feature representations, and their ability to generalise depends on training with large-scale, feature-rich datasets. The lack of large-scale labelled 3D medical imaging datasets restricts the construction of such generalised networks. In this work, a novel 3D segmentation network, the Fabric Image Representation Encoding Network (FIRENet), is proposed to extract and encode generalisable feature representations from multiple medical image datasets in a large-scale manner. FIRENet learns image-specific feature representations by way of a 3D fabric network architecture that contains an exponential number of sub-architectures to handle various protocols and coverage of anatomical regions and structures. The fabric network uses Atrous Spatial Pyramid Pooling (ASPP), extended to 3D, to extract local and image-level features at a fine selection of scales. The fabric is constructed with weighted edges, allowing the learnt features to adapt dynamically to the training data at an architecture level. Conditional padding modules, integrated into the network to reinsert voxels discarded by feature pooling, allow the network to inherently process different-size images at their original resolutions. FIRENet was trained for feature learning via automated semantic segmentation of pelvic structures and obtained a state-of-the-art median DSC score of 0.867. FIRENet was also simultaneously trained on MR (Magnetic Resonance) images acquired from 3D examinations of musculoskeletal elements of the hip, knee and shoulder joints, and on a public OAI knee dataset, to perform automated segmentation of bone across anatomy. Transfer learning was used to show that the features learnt through the pelvic segmentation helped achieve improved mean DSC scores of 0.962, 0.963, 0.945 and 0.986 for automated segmentation of bone across datasets.
Comment: 12 pages, 10 figures
Simultaneous regression and classification for drug sensitivity prediction using an advanced random forest method
Machine learning methods trained on cancer cell line panels are intensively studied for the prediction of optimal anti-cancer therapies. While classification approaches distinguish effective from ineffective drugs, regression approaches aim to quantify the degree of drug effectiveness. However, the high specificity of most anti-cancer drugs induces a skewed distribution of drug response values in favor of the more drug-resistant cell lines, negatively affecting the classification performance (class imbalance) and regression performance (regression imbalance) for the sensitive cell lines. Here, we present a novel approach called SimultAneoUs Regression and classificatiON Random Forests (SAURON-RF), based on the idea of performing a joint regression and classification analysis. We demonstrate that SAURON-RF improves the classification and regression performance for the sensitive cell lines at the expense of a moderate loss for the resistant ones. Furthermore, our results show that simultaneous classification and regression can be superior to regression or classification alone.
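A joint split criterion of the kind the abstract describes can be sketched as a weighted sum of the classification (Gini) and regression (variance) impurity reductions for a candidate split. This is a simplified illustration of the joint-objective idea, not SAURON-RF's actual criterion; `alpha` and the function names are assumptions.

```python
import numpy as np

def gini(y_cls):
    """Gini impurity of a vector of class labels."""
    _, counts = np.unique(y_cls, return_counts=True)
    p = counts / counts.sum()
    return 1.0 - np.sum(p ** 2)

def joint_split_score(y_reg, y_cls, left_idx, alpha=0.5):
    """Score a candidate split by a weighted sum of the Gini-impurity
    reduction (classification) and variance reduction (regression).
    `left_idx` is a boolean mask selecting the left child's samples;
    `alpha` balances the two objectives."""
    right_idx = ~left_idx
    n, nl, nr = len(y_reg), left_idx.sum(), right_idx.sum()
    if nl == 0 or nr == 0:
        return -np.inf  # degenerate split: one child is empty
    d_gini = (gini(y_cls)
              - (nl / n) * gini(y_cls[left_idx])
              - (nr / n) * gini(y_cls[right_idx]))
    d_var = (np.var(y_reg)
             - (nl / n) * np.var(y_reg[left_idx])
             - (nr / n) * np.var(y_reg[right_idx]))
    return alpha * d_gini + (1 - alpha) * d_var
```

A split that separates sensitive from resistant cell lines while also reducing the spread of response values scores higher than one that helps only one of the two objectives.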
Automated segmentation of dental CBCT image with prior-guided sequential random forests
Cone-beam computed tomography (CBCT) is an increasingly utilized imaging modality for the diagnosis and treatment planning of patients with craniomaxillofacial (CMF) deformities. Accurate segmentation of CBCT images is an essential step in generating 3D models for the diagnosis and treatment planning of patients with CMF deformities. However, due to the image artifacts caused by beam hardening, imaging noise, inhomogeneity, truncation, and maximal intercuspation, it is difficult to segment the CBCT images.
Combining Shape and Learning for Medical Image Analysis
Automatic methods with the ability to make accurate, fast and robust assessments of medical images are in high demand in medical research and clinical care. Excellent automatic algorithms are characterized by speed, allowing for scalability, and an accuracy comparable to that of an expert radiologist. They should produce morphologically and physiologically plausible results while generalizing well to unseen and rare anatomies. Still, there are few, if any, applications where today's automatic methods succeed in meeting these requirements. The focus of this thesis is two tasks essential for enabling automatic medical image assessment: medical image segmentation and medical image registration. Medical image registration, i.e. aligning two separate medical images, is used as an important sub-routine in many image analysis tools as well as in image fusion, disease progress tracking and population statistics. Medical image segmentation, i.e. delineating anatomically or physiologically meaningful boundaries, is used for both diagnostic and visualization purposes in a wide range of applications, e.g. in computer-aided diagnosis and surgery. The thesis comprises five papers addressing medical image registration and/or segmentation for a diverse set of applications and modalities: pericardium segmentation in cardiac CTA, brain region parcellation in MRI, multi-organ segmentation in CT, heart ventricle segmentation in cardiac ultrasound, and tau PET registration. The five papers propose competitive registration and segmentation methods enabled by machine learning techniques, e.g. random decision forests and convolutional neural networks, as well as by shape modelling, e.g. multi-atlas segmentation and conditional random fields.
Real-Time Ultrasound Segmentation, Analysis and Visualisation of Deep Cervical Muscle Structure
Despite widespread availability of ultrasound and a need for personalised muscle diagnosis (neck/back pain-injury, work related disorder, myopathies, neuropathies), robust, online segmentation of muscles within complex groups remains unsolved by existing methods. For example, Cervical Dystonia (CD) is a prevalent neurological condition causing painful spasticity in one or multiple muscles in the cervical muscle system. Clinicians currently have no method for targeting/monitoring treatment of deep muscles. Automated methods of muscle segmentation would enable clinicians to study, target, and monitor the deep cervical muscles via ultrasound. We have developed a method for segmenting five bilateral cervical muscles and the spine via ultrasound alone, in real-time. Magnetic Resonance Imaging (MRI) and ultrasound data were collected from 22 participants (age: 29.0 ± 6.6, male: 12). To acquire ultrasound muscle segment labels, a novel multimodal registration method was developed, involving MRI image annotation, and shape registration to MRI-matched ultrasound images, via approximation of the tissue deformation. We then applied polynomial regression to transform our annotations and textures into a mean space, before using shape statistics to generate a texture-to-shape dictionary. For segmentation, test images were compared to dictionary textures giving an initial segmentation, and then we used a customized Active Shape Model to refine the fit. Using ultrasound alone, on unseen participants, our technique currently segments a single image in ≈0.45 s to over 86% accuracy (Jaccard index). We propose this approach is applicable generally to segment, extrapolate and visualise deep muscle structure, and analyse statistical features online.
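The quoted accuracy measure, the Jaccard index (intersection over union), is straightforward to compute for a pair of binary masks; a minimal numpy version:

```python
import numpy as np

def jaccard_index(pred, truth):
    """Jaccard index (intersection over union) of two binary masks."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    union = np.logical_or(pred, truth).sum()
    if union == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return np.logical_and(pred, truth).sum() / union
```

Unlike plain pixel accuracy, the Jaccard index ignores the (typically dominant) background, so it is a stricter measure for small structures such as deep cervical muscles.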
Joint classification-regression forests for spatially structured multi-object segmentation.
In many segmentation scenarios, labeled images contain rich structural information about the spatial arrangement and shapes of the objects. Integrating this rich information into supervised learning techniques is promising, as it generates models that go beyond learning class association alone. This paper proposes a new supervised forest model for joint classification-regression which exploits both class and structural information. Training our model is achieved by optimizing a joint objective function of pixel classification and shape regression. Shapes are represented implicitly via signed distance maps obtained directly from ground-truth label maps. Thus, we can associate each image point not only with its class label, but also with its distances to object boundaries, at no additional annotation cost. The regression component acts as spatial regularization learned from data and yields a predictor with both class and spatial consistency. In the challenging context of simultaneous multi-organ segmentation, we demonstrate the potential of our approach through experimental validation on a large dataset of 80 three-dimensional CT scans.
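The signed-distance regression targets can indeed be derived directly from a binary label map. A brute-force numpy sketch, assuming the sign convention of negative inside the object and positive outside; it computes each pixel's distance to the nearest pixel of the opposite region, and is only suitable for small images (a real pipeline would use a fast distance transform).

```python
import numpy as np

def signed_distance_map(mask):
    """Brute-force signed distance map of a binary mask: for each pixel,
    the Euclidean distance to the nearest pixel of the opposite region,
    negated inside the object. O(n^2) reference implementation."""
    mask = mask.astype(bool)
    pts = np.argwhere(np.ones_like(mask))   # every pixel coordinate, C order
    inside = np.argwhere(mask)
    outside = np.argwhere(~mask)

    def min_dist(points, targets):
        if len(targets) == 0:
            return np.full(len(points), np.inf)
        d = np.linalg.norm(points[:, None, :] - targets[None, :, :], axis=-1)
        return d.min(axis=1)

    dist = np.where(mask.ravel(),
                    -min_dist(pts, outside),   # inside: negative distance
                    min_dist(pts, inside))     # outside: positive distance
    return dist.reshape(mask.shape)
```

Because the map comes straight from the ground-truth labels, the regression targets require no annotation beyond the class labels already present, matching the paper's "no additional cost" observation.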