PSACNN: Pulse Sequence Adaptive Fast Whole Brain Segmentation
With the advent of convolutional neural networks~(CNN), supervised learning
methods are increasingly being used for whole brain segmentation. However, a
large, manually annotated training dataset of brain images, as required to
train such supervised methods, is frequently difficult to obtain or create. In
addition, existing training datasets are generally acquired with a homogeneous
magnetic resonance imaging~(MRI) acquisition protocol. CNNs trained on such
datasets are unable to generalize to test data with different acquisition
protocols. Modern neuroimaging studies and clinical trials are necessarily
multi-center initiatives with a wide variety of acquisition protocols. Despite
stringent protocol harmonization practices, it is very difficult to standardize
the gamut of MR imaging parameters across scanners, field strengths, receive
coils, etc., that affect image contrast. In this paper, we propose a CNN-based
segmentation algorithm that, in addition to being highly accurate and fast, is
also resilient to variation in the input acquisition. Our approach relies on
building approximate forward models of pulse sequences that produce a typical
test image. For a given pulse sequence, we use its forward model to generate
plausible, synthetic training examples that appear as if they were acquired in
a scanner with that pulse sequence. Sampling over a wide variety of pulse
sequences results in a wide variety of augmented training examples that help
build an image contrast invariant model. Our method trains a single CNN that
can segment input MRI images with acquisition parameters as disparate as
T1-weighted and T2-weighted contrasts with only T1-weighted training
data. The segmentations generated are highly accurate, with state-of-the-art
overall Dice overlap, a fast run time~(approximately 45 seconds), and
consistent results across a wide range of acquisition protocols.
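The forward-model augmentation idea can be sketched with a toy example. The sketch below uses the textbook spin-echo signal equation, S = PD · (1 − e^(−TR/T1)) · e^(−TE/T2), as a stand-in for the paper's approximate pulse-sequence forward models; the function names, parameter ranges, and tissue values are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def spin_echo_forward(pd, t1, t2, tr, te):
    """Synthesize a spin-echo image from per-voxel tissue parameter maps.

    Signal model (a stand-in for the paper's forward models):
        S = PD * (1 - exp(-TR/T1)) * exp(-TE/T2)
    """
    return pd * (1.0 - np.exp(-tr / t1)) * np.exp(-te / t2)

def augment(pd, t1, t2, rng, n=4):
    """Sample n random (TR, TE) pairs to emulate different acquisitions,
    producing training images with varied contrast from one subject."""
    images = []
    for _ in range(n):
        tr = rng.uniform(400.0, 4000.0)   # repetition time in ms (hypothetical range)
        te = rng.uniform(10.0, 120.0)     # echo time in ms (hypothetical range)
        images.append(spin_echo_forward(pd, t1, t2, tr, te))
    return images

# Toy tissue parameter maps: uniform gray-matter-like values.
rng = np.random.default_rng(0)
pd = np.full((4, 4), 0.8)
t1 = np.full((4, 4), 900.0)   # T1 in ms
t2 = np.full((4, 4), 90.0)    # T2 in ms
contrasts = augment(pd, t1, t2, rng)
```

Training a segmentation CNN on such resampled contrasts, rather than on a single fixed acquisition, is what pushes the model toward contrast invariance.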
Medical imaging analysis with artificial neural networks
Given that neural networks have been widely reported in the medical imaging research community, we provide a focused literature survey of recent neural network developments in computer-aided diagnosis, in medical image segmentation and edge detection for visual content analysis, and in medical image registration for its pre-processing and post-processing, with the aims of increasing awareness of how neural networks can be applied to these areas and of providing a foundation for further research and practical development. Representative techniques and algorithms are explained in detail to provide inspiring examples illustrating: (i) how a known neural network with a fixed structure and training procedure can be applied to resolve a medical imaging problem; (ii) how medical images can be analysed, processed, and characterised by neural networks; and (iii) how neural networks can be expanded further to resolve problems relevant to medical imaging. The concluding section highlights comparisons among many neural network applications to provide a global view of computational intelligence with neural networks in medical imaging.
Prediction of progression in idiopathic pulmonary fibrosis using CT scans at baseline: A quantum particle swarm optimization - Random forest approach
Idiopathic pulmonary fibrosis (IPF) is a fatal lung disease characterized by an unpredictable, progressive decline in lung function. The natural history of IPF is unknown, and prediction of disease progression at the time of diagnosis is notoriously difficult. High-resolution computed tomography (HRCT) has been used for the diagnosis of IPF, but not generally for monitoring purposes. The objective of this work is to develop a novel predictive model for the radiological progression pattern at the voxel-wise level using only baseline HRCT scans. There are two main challenges: (a) obtaining a dataset of features for regions of interest (ROIs) on baseline HRCT scans together with their follow-up status; and (b) simultaneously selecting important features from a high-dimensional space while optimizing prediction performance. We resolved the first challenge by implementing a study design in which an expert radiologist contoured ROIs on baseline scans according to their progression status at follow-up visits. For the second challenge, we integrated feature selection with prediction by developing a wrapper-method algorithm that combines quantum particle swarm optimization, to select a small number of features, with a random forest, to classify early patterns of progression. We applied the proposed algorithm to anonymized HRCT images from 50 IPF subjects from a multi-center clinical trial. It yields a parsimonious model with 81.8% sensitivity, 82.2% specificity, and an overall accuracy of 82.1% at the ROI level. These results are superior to those of other popular feature selection and classification methods, in that our method produces higher accuracy in predicting progression and more balanced sensitivity and specificity with a smaller number of selected features. Our work is the first to show that it is possible to use only baseline HRCT scans to predict progressive ROIs at 6-month to 1-year follow-ups using artificial intelligence.
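The wrapper idea of swarm-driven feature selection can be sketched in miniature, with two loud simplifications: a plain binary particle swarm stands in for quantum PSO, and a nearest-centroid training accuracy stands in for the random forest fitness. All names, update rules, and parameter values below are illustrative assumptions, not the paper's algorithm.

```python
import numpy as np

def fitness(X, y, mask):
    """Toy fitness: nearest-centroid training accuracy on the selected
    feature subset (a stand-in for the paper's random forest)."""
    if mask.sum() == 0:
        return 0.0
    Xs = X[:, mask]
    c0 = Xs[y == 0].mean(axis=0)
    c1 = Xs[y == 1].mean(axis=0)
    pred = (np.linalg.norm(Xs - c1, axis=1) <
            np.linalg.norm(Xs - c0, axis=1)).astype(int)
    return float((pred == y).mean())

def bpso_select(X, y, n_particles=8, iters=20, seed=0):
    """Binary particle-swarm feature selection (crude sketch).

    Each particle holds a per-feature inclusion probability; probabilities
    drift toward the best mask found so far."""
    rng = np.random.default_rng(seed)
    d = X.shape[1]
    prob = np.full((n_particles, d), 0.5)
    best_mask, best_fit = None, -1.0
    for _ in range(iters):
        masks = rng.random((n_particles, d)) < prob   # sample binary masks
        for m in masks:
            f = fitness(X, y, m)
            if f > best_fit:
                best_fit, best_mask = f, m.copy()
        prob = 0.9 * prob + 0.1 * best_mask           # drift toward global best
    return best_mask, best_fit

# Synthetic data: only feature 0 carries class signal, the rest are noise.
rng = np.random.default_rng(1)
y = rng.integers(0, 2, 200)
X = rng.normal(size=(200, 10))
X[:, 0] += 2.0 * y
mask, acc = bpso_select(X, y)
```

The real pipeline would cross-validate the fitness and use a random forest in place of the centroid rule; the structure of the search loop is the same.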
Automatic PET-CT Image Registration Method Based on Mutual Information and Genetic Algorithms
Hybrid PET/CT scanners can simultaneously visualize coronary artery disease, as revealed by computed tomography (CT), and myocardial perfusion, as measured by positron emission tomography (PET). Manual registration is usually required in clinical practice to compensate for spatial mismatch between datasets. In this paper, we present a registration algorithm that is able to automatically align PET/CT cardiac images. The algorithm is based on mutual information (MI) as the registration metric and on a genetic algorithm as the optimization method. A multiresolution approach was used to optimize the processing time. The algorithm was tested on computerized models of volumetric PET/CT cardiac data and on real PET/CT datasets. The proposed automatic registration algorithm smooths the pattern of the MI and allows it to reach the global maximum of the similarity function. The implemented method also allows the definition of the correct spatial transformation that matches both synthetic and real PET and CT volumetric datasets.
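The MI metric at the core of such methods can be sketched as follows. For brevity, an exhaustive 1-D shift search stands in for the paper's genetic-algorithm optimization over full transform parameters, and the inverted-intensity "moving" image is a toy proxy for a second modality.

```python
import numpy as np

def mutual_information(a, b, bins=32):
    """MI estimated from a joint intensity histogram:
    MI = sum_xy p(x,y) * log( p(x,y) / (p(x) p(y)) )."""
    hist, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pxy = hist / hist.sum()
    px = pxy.sum(axis=1, keepdims=True)   # marginal over rows
    py = pxy.sum(axis=0, keepdims=True)   # marginal over columns
    nz = pxy > 0                          # avoid log(0)
    return float((pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])).sum())

# Toy alignment: recover a horizontal shift between two "modalities".
# The moving image has inverted intensities, so MI (not correlation)
# is the appropriate similarity measure.
rng = np.random.default_rng(0)
fixed = rng.integers(0, 256, size=(64, 64)).astype(float)
moving = np.roll(255.0 - fixed, 3, axis=1)   # misaligned by 3 voxels

best_shift = max(range(-5, 6),
                 key=lambda s: mutual_information(
                     fixed, np.roll(moving, s, axis=1)))
```

In the full algorithm, a genetic algorithm would search rotation, translation, and scaling jointly, and the multiresolution pyramid would smooth the MI landscape so the global maximum is reachable.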
An Integrated Multi-modal Registration Technique for Medical Imaging
Registration of medical images is essential for aligning different modalities in time and space, thereby consolidating their strengths for enhanced diagnosis and for the effective planning of treatment or therapeutic interventions. The primary objective of this study is to develop an integrated registration method that is effective for registering both brain and whole-body images. In the proposed method, we seek to combine in one setting the excellent registration results that the FMRIB Software Library (FSL) produces with brain images and the excellent results of Statistical Parametric Mapping (SPM) when registering whole-body images. To assess attainment of these objectives, the following registration tasks were performed: (1) FDG_CT with FLT_CT images, (2) pre-operation MRI with intra-operation CT images, (3) brain-only MRI with corresponding PET images, and (4) MRI T1 with T2, T1 with FLAIR, and T1 with GE images. The results of the proposed method were then compared to those obtained using existing state-of-the-art registration methods such as SPM and FSL.
Initially, three slices were chosen from the reference image, and the normalized mutual information (NMI) was calculated between each of them and every slice in the moving image. The three pairs with the highest NMI values were chosen. The wavelet decomposition method is applied to minimize the computational requirements. An initial search applying a genetic algorithm is conducted on the three pairs to obtain three sets of registration parameters. The Powell method is applied to the reference and moving images to validate the three sets of registration parameters. A linear interpolation method is then used to obtain the registration parameters for all remaining slices. Finally, the registered image is displayed aligned with the reference image to show the performance of the three methods (the proposed method, SPM, and FSL), gauged by the average NMI values obtained in the registration results. Visual observations are also provided in support of these NMI values. For comparative purposes, tests using different multi-modal imaging platforms were performed.
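The slice-pair selection step can be sketched as below. This is a toy version: intensity-inverted slices play the role of a second modality, NMI is computed as (H(A) + H(B)) / H(A, B), and the function names are hypothetical, not from any of the packages named above.

```python
import numpy as np

def entropy(p):
    """Shannon entropy (nats) of a probability array, ignoring zeros."""
    nz = p[p > 0]
    return float(-(nz * np.log(nz)).sum())

def nmi(a, b, bins=32):
    """Normalized mutual information: (H(A) + H(B)) / H(A, B).
    Equals ~2 for perfectly dependent images, ~1 for independent ones."""
    hist, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pxy = hist / hist.sum()
    return (entropy(pxy.sum(axis=1)) + entropy(pxy.sum(axis=0))) / entropy(pxy.ravel())

def top_pairs(ref_slices, mov_slices, k=3):
    """Score every (reference, moving) slice pair by NMI; keep the top k."""
    scores = [(nmi(r, m), i, j)
              for i, r in enumerate(ref_slices)
              for j, m in enumerate(mov_slices)]
    scores.sort(reverse=True)
    return scores[:k]

# Toy volumes: the moving slices are shuffled, intensity-inverted copies
# of the reference slices, so the true correspondences are known.
rng = np.random.default_rng(0)
ref = [rng.integers(0, 256, (64, 64)).astype(float) for _ in range(4)]
order = [2, 0, 3, 1]                    # mov[j] corresponds to ref[order[j]]
mov = [255.0 - ref[i] for i in order]

top = top_pairs(ref, mov, k=3)
```

The three selected pairs would then seed the genetic-algorithm search, with the Powell method refining the resulting parameters.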