
    PSACNN: Pulse Sequence Adaptive Fast Whole Brain Segmentation

    With the advent of convolutional neural networks (CNN), supervised learning methods are increasingly being used for whole brain segmentation. However, the large, manually annotated training datasets of labeled brain images required to train such supervised methods are frequently difficult to obtain or create. In addition, existing training datasets are generally acquired with a homogeneous magnetic resonance imaging (MRI) acquisition protocol. CNNs trained on such datasets are unable to generalize to test data with different acquisition protocols. Modern neuroimaging studies and clinical trials are necessarily multi-center initiatives with a wide variety of acquisition protocols. Despite stringent protocol harmonization practices, it is very difficult to standardize the gamut of MRI parameters across scanners, field strengths, receive coils, etc., that affect image contrast. In this paper we propose a CNN-based segmentation algorithm that, in addition to being highly accurate and fast, is also resilient to variation in the input acquisition. Our approach relies on building approximate forward models of pulse sequences that produce a typical test image. For a given pulse sequence, we use its forward model to generate plausible, synthetic training examples that appear as if they were acquired in a scanner with that pulse sequence. Sampling over a wide variety of pulse sequences results in a wide variety of augmented training examples that help build an image contrast invariant model. Our method trains a single CNN that can segment input MRI images with acquisition parameters as disparate as T1-weighted and T2-weighted contrasts using only T1-weighted training data. The segmentations generated are highly accurate, with state-of-the-art results (overall Dice overlap = 0.94), a fast run time (approximately 45 seconds), and consistency across a wide range of acquisition protocols.
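    The abstract does not reproduce the forward models themselves. As a rough illustration of the augmentation idea only, the sketch below synthesizes images of varying contrast from assumed per-voxel tissue parameter maps using the standard spoiled gradient-echo (FLASH) steady-state signal equation; all function names, parameter ranges, and tissue values here are illustrative assumptions, not the authors' implementation.

    ```python
    import numpy as np

    def flash_signal(pd, t1, t2s, tr, te, alpha):
        """Approximate spoiled gradient-echo (FLASH) steady-state signal.

        pd, t1, t2s: per-voxel proton density, T1, and T2* maps (arrays, s).
        tr, te: repetition and echo times (s); alpha: flip angle (radians).
        """
        e1 = np.exp(-tr / t1)
        return pd * np.sin(alpha) * (1 - e1) / (1 - np.cos(alpha) * e1) * np.exp(-te / t2s)

    def sample_augmented_image(pd, t1, t2s, rng):
        """Draw random acquisition parameters and synthesize one training image."""
        tr = rng.uniform(0.01, 3.0)             # repetition time (s), assumed range
        te = rng.uniform(0.003, 0.1)            # echo time (s), assumed range
        alpha = np.deg2rad(rng.uniform(5, 90))  # flip angle, assumed range
        img = flash_signal(pd, t1, t2s, tr, te, alpha)
        return img / img.max()                  # normalize peak intensity to 1

    # Toy two-voxel "maps": white-matter-like vs. CSF-like tissue parameters
    pd = np.array([0.7, 1.0])
    t1 = np.array([0.8, 4.0])
    t2s = np.array([0.07, 0.5])
    rng = np.random.default_rng(0)
    img = sample_augmented_image(pd, t1, t2s, rng)
    ```

    Repeating the sampling step yields training images whose relative tissue intensities vary with the drawn acquisition parameters, which is the contrast-invariance mechanism the abstract describes.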

    Medical imaging analysis with artificial neural networks

    Given that neural networks have been widely reported in the research community of medical imaging, we provide a focused literature survey on recent neural network developments in computer-aided diagnosis, medical image segmentation and edge detection towards visual content analysis, and medical image registration for its pre-processing and post-processing, with the aims of increasing awareness of how neural networks can be applied to these areas and of providing a foundation for further research and practical development. Representative techniques and algorithms are explained in detail to provide inspiring examples illustrating: (i) how a known neural network with fixed structure and training procedure could be applied to resolve a medical imaging problem; (ii) how medical images could be analysed, processed, and characterised by neural networks; and (iii) how neural networks could be expanded further to resolve problems relevant to medical imaging. In the concluding section, a highlight of comparisons among many neural network applications is included to provide a global view on computational intelligence with neural networks in medical imaging.

    Automatic PET-CT Image Registration Method Based on Mutual Information and Genetic Algorithms

    Hybrid PET/CT scanners can simultaneously visualize coronary artery disease, as revealed by computed tomography (CT), and myocardial perfusion, as measured by positron emission tomography (PET). Manual registration is usually required in clinical practice to compensate for spatial mismatch between the datasets. In this paper, we present a registration algorithm that is able to automatically align PET/CT cardiac images. The algorithm is based on mutual information (MI) as the registration metric and on a genetic algorithm as the optimization method. A multiresolution approach was used to reduce the processing time. The algorithm was tested on computerized models of volumetric PET/CT cardiac data and on real PET/CT datasets. The proposed automatic registration algorithm smooths the pattern of the MI and allows it to reach the global maximum of the similarity function. The implemented method also allows the definition of the correct spatial transformation that matches both synthetic and real PET and CT volumetric datasets.
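    The abstract names MI as the registration metric but gives no implementation details. A minimal sketch of the metric itself, estimated from a joint intensity histogram, is shown below; the function name and bin count are illustrative choices, and the optimization (here evaluated once, not driven by a genetic algorithm) is omitted.

    ```python
    import numpy as np

    def mutual_information(a, b, bins=32):
        """Plug-in MI estimate between two images from their joint histogram.

        Higher values indicate stronger statistical dependence between the
        intensities of the two images, i.e. better spatial alignment.
        """
        joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
        pxy = joint / joint.sum()                 # joint probability table
        px = pxy.sum(axis=1, keepdims=True)       # marginal of image a
        py = pxy.sum(axis=0, keepdims=True)       # marginal of image b
        nz = pxy > 0                              # avoid log(0)
        return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])))

    # A perfectly aligned pair (identical images) scores far higher than an
    # unrelated pair, which is what the optimizer exploits.
    rng = np.random.default_rng(1)
    img = rng.random((64, 64))
    mi_same = mutual_information(img, img)
    mi_indep = mutual_information(img, rng.random((64, 64)))
    ```

    In a full registration loop, a candidate spatial transform would be applied to the moving image before each MI evaluation, and the genetic algorithm would search transform parameters for the global maximum.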

    An Integrated Multi-modal Registration Technique for Medical Imaging

    Registration of medical images is essential for aligning different modalities in time and space, and hence for consolidating their strengths for enhanced diagnosis and for the effective planning of treatment or therapeutic interventions. The primary objective of this study is to develop an integrated registration method that is effective for registering both brain and whole-body images. In the proposed method, we seek to combine in one setting the excellent registration results that the FMRIB Software Library (FSL) produces with brain images and the excellent results of Statistical Parametric Mapping (SPM) when registering whole-body images. To assess attainment of these objectives, the following registration tasks were performed: (1) FDG_CT with FLT_CT images, (2) pre-operation MRI with intra-operation CT images, (3) brain-only MRI with corresponding PET images, and (4) MRI T1 with T2, T1 with FLAIR, and T1 with GE images. The results of the proposed method were then compared to those obtained using existing state-of-the-art registration methods, namely SPM and FSL. Initially, three slices were chosen from the reference image, and the normalized mutual information (NMI) was calculated between each of them and every slice in the moving image. The three pairs with the highest NMI values were chosen. A wavelet decomposition method was applied to minimize the computational requirements. An initial search applying a genetic algorithm was conducted on the three pairs to obtain three sets of registration parameters. The Powell method was then applied to the reference and moving images to validate the three sets of registration parameters. A linear interpolation method was used to obtain the registration parameters for all remaining slices. Finally, the registered image was displayed aligned with the reference image to compare the performance of the three methods, namely the proposed method, SPM, and FSL, by gauging the average NMI values obtained in the registration results. 
Visual observations are also provided in support of these NMI values. For comparative purposes, tests using different multi-modal imaging platforms were performed.
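    The NMI-based slice-pairing step can be sketched as follows, using Studholme's normalized mutual information, NMI = (H(A) + H(B)) / H(A, B). The helper names, histogram settings, and exhaustive pairing loop are assumptions for illustration, not the authors' code.

    ```python
    import numpy as np

    def entropy(p):
        """Shannon entropy (nats) of a probability vector, ignoring zeros."""
        p = p[p > 0]
        return -np.sum(p * np.log(p))

    def normalized_mutual_information(a, b, bins=32):
        """Studholme's NMI = (H(A) + H(B)) / H(A, B); ranges from 1 up to 2."""
        joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
        pxy = joint / joint.sum()
        hx = entropy(pxy.sum(axis=1))   # marginal entropy of a
        hy = entropy(pxy.sum(axis=0))   # marginal entropy of b
        hxy = entropy(pxy.ravel())      # joint entropy
        return (hx + hy) / hxy

    def best_slice_pairs(ref_slices, mov_slices, k=3):
        """Score every reference/moving slice pair by NMI and keep the top k."""
        scores = [(normalized_mutual_information(r, m), i, j)
                  for i, r in enumerate(ref_slices)
                  for j, m in enumerate(mov_slices)]
        top = sorted(scores, reverse=True)[:k]
        return [(i, j) for _, i, j in top]

    # Demo: identical slice content attains the maximal NMI of 2.0, so the
    # matching pairs are selected ahead of the mismatched ones.
    rng = np.random.default_rng(2)
    s0, s1 = rng.random((32, 32)), rng.random((32, 32))
    pairs = best_slice_pairs([s0, s1], [s0, s1], k=2)
    ```

    In the full pipeline described above, the genetic-algorithm search and the Powell refinement would then run only on these selected pairs, with the remaining slices interpolated.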