Quantification of Ultrasonic Texture Heterogeneity via Volumetric Stochastic Modeling for Tissue Characterization
Intensity variations in image texture can provide powerful quantitative
information about physical properties of biological tissue. However, tissue
patterns can vary according to the utilized imaging system and are
intrinsically correlated to the scale of analysis. In the case of ultrasound,
the Nakagami distribution is a general model of the ultrasonic backscattering
envelope under various scattering conditions and densities where it can be
employed for characterizing image texture, but the subtle intra-heterogeneities
within a given mass are difficult to capture via this model as it works at a
single spatial scale. This paper proposes a locally adaptive 3D
multi-resolution Nakagami-based fractal feature descriptor that extends
Nakagami-based texture analysis to accommodate subtle speckle spatial frequency
tissue intensity variability in volumetric scans. Local textural fractal
descriptors - which are invariant to affine intensity changes - are extracted
from volumetric patches at different spatial resolutions from voxel
lattice-based generated shape and scale Nakagami parameters. Using ultrasound
radio-frequency datasets we found that after applying an adaptive fractal
decomposition label transfer approach on top of the generated Nakagami voxels,
tissue characterization results were superior to the state of the art. Experimental
results on real 3D ultrasonic pre-clinical and clinical datasets suggest that
describing tumor intra-heterogeneity via this descriptor may facilitate
improved prediction of therapy response and disease characterization.
Comment: Supplementary data associated with this article can be found, in the online version, at http://dx.doi.org/10.1016/j.media.2014.12. 00
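The building block this abstract describes, a Nakagami fit to local envelope patches, can be sketched with the standard moment-based estimator; this is my illustration under assumed patch sizes, not the authors' code, and the function names are hypothetical.

```python
import numpy as np

def nakagami_moments(envelope):
    """Moment-based Nakagami fit to ultrasound envelope samples.

    Returns (m, omega): shape m = E[x^2]^2 / Var(x^2), scale omega = E[x^2].
    """
    x2 = np.asarray(envelope, dtype=float) ** 2
    omega = x2.mean()           # scale: mean backscattered power
    m = omega ** 2 / x2.var()   # shape: inverse normalized variance of power
    return m, omega

def local_nakagami_maps(volume, patch=8):
    """Fit Nakagami (m, omega) per non-overlapping cubic patch of a 3D scan,
    producing the voxel-lattice parameter maps the descriptor is built on
    (one spatial scale of the multi-resolution scheme)."""
    zs, ys, xs = (s // patch for s in volume.shape)
    m_map = np.empty((zs, ys, xs))
    o_map = np.empty((zs, ys, xs))
    for i in range(zs):
        for j in range(ys):
            for k in range(xs):
                block = volume[i*patch:(i+1)*patch,
                               j*patch:(j+1)*patch,
                               k*patch:(k+1)*patch]
                m_map[i, j, k], o_map[i, j, k] = nakagami_moments(block.ravel())
    return m_map, o_map
```

For a Rayleigh-distributed envelope (fully developed speckle) the estimated shape parameter should come out near m = 1, which gives a quick sanity check on the fit.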
Radiological images and machine learning: trends, perspectives, and prospects
The application of machine learning to radiological images is an increasingly
active research area that is expected to grow in the next five to ten years.
Recent advances in machine learning have the potential to recognize and
classify complex patterns from different radiological imaging modalities such
as x-rays, computed tomography, magnetic resonance imaging and positron
emission tomography imaging. In many applications, machine learning based
systems have shown comparable performance to human decision-making. The
applications of machine learning are the key ingredients of future clinical
decision making and monitoring systems. This review covers the fundamental
concepts behind various machine learning techniques and their applications in
several radiological imaging areas, such as medical image segmentation, brain
function studies and neurological disease diagnosis, as well as computer-aided
systems, image registration, and content-based image retrieval systems.
Additionally, we briefly discuss current challenges and future
directions regarding the application of machine learning in radiological
imaging. By giving insight into how to take advantage of machine learning
powered applications, we expect that clinicians will be able to prevent and
diagnose diseases more accurately and efficiently.
Comment: 13 figures
Diagnosis of Alzheimer's Disease via Multi-modality 3D Convolutional Neural Network
Alzheimer's Disease (AD) is one of the neurodegenerative diseases of greatest
concern. In the last decade, studies on AD diagnosis have attached great
significance to artificial intelligence (AI)-based diagnostic algorithms. Among
the diverse imaging modalities, T1-weighted MRI and 18F-FDG PET are widely
researched for this task. In this paper, we propose a novel convolutional
neural network (CNN) to fuse the multi-modality information including T1-MRI
and FDG-PET images around the hippocampal area for the diagnosis of AD.
Different from traditional machine learning algorithms, this method does
not require manually extracted features, and utilizes state-of-the-art 3D
image-processing CNNs to learn features for the diagnosis and prognosis of AD.
To validate the performance of the proposed network, we trained the classifier
with paired T1-MRI and FDG-PET images using the ADNI datasets, including 731
Normal (NL) subjects, 647 AD subjects, 441 stable MCI (sMCI) subjects and 326
progressive MCI (pMCI) subjects. We obtained the maximal accuracies of 90.10%
for NL/AD task, 87.46% for NL/pMCI task, and 76.90% for sMCI/pMCI task. The
proposed framework yields comparative results against state-of-the-art
approaches. Moreover, the experimental results demonstrate that (1)
segmentation is not a prerequisite when using CNNs, and (2) the hippocampal
area provides enough information to serve as a reference for AD diagnosis.
Keywords: Alzheimer's Disease, Multi-modality, Image Classification, CNN, Deep Learning, Hippocampal
Comment: 21 pages, 5 figures, 9 tables
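The fusion step this abstract describes (combining co-registered T1-MRI and FDG-PET around the hippocampus) can be sketched as channel-wise stacking of cropped patches; the patch size and center coordinate below are illustrative assumptions, not values from the paper.

```python
import numpy as np

def crop_patch(volume, center, size=32):
    """Crop a cubic patch of side `size` centered at `center` (z, y, x)."""
    h = size // 2
    slices = tuple(slice(c - h, c + h) for c in center)
    return volume[slices]

def fuse_modalities(t1, pet, hippocampus_center, size=32):
    """Stack co-registered T1-MRI and FDG-PET hippocampal patches into a
    2-channel array, the kind of input a multi-modality 3D CNN consumes."""
    patches = [crop_patch(v, hippocampus_center, size) for v in (t1, pet)]
    return np.stack(patches, axis=0)  # shape: (2, size, size, size)
```

In practice each modality would be intensity-normalized (e.g. z-scored) before stacking, since MRI and PET values live on very different scales.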
Development and validation of a novel dementia of Alzheimer's type (DAT) score based on metabolism FDG-PET imaging
Fluorodeoxyglucose positron emission tomography (FDG-PET) imaging based 3D
topographic brain glucose metabolism patterns from normal controls (NC) and
individuals with dementia of Alzheimer's type (DAT) are used to train a novel
multi-scale ensemble classification model. This ensemble model outputs a
FDG-PET DAT score (FPDS) between 0 and 1 denoting the probability of a subject
to be clinically diagnosed with DAT based on their metabolism profile. A novel
seven-group image stratification scheme is devised that groups images not only
based on their associated clinical diagnosis but also on past and future
trajectories of the clinical diagnoses, yielding a more continuous
representation of the different stages of DAT spectrum that mimics a real-world
clinical setting. The potential for using FPDS as a DAT biomarker was validated
on a large number of FDG-PET images (N=2984) obtained from the Alzheimer's
Disease Neuroimaging Initiative (ADNI) database taken across the proposed
stratification, and a good classification AUC (area under the curve) of 0.78
was achieved in distinguishing between images belonging to subjects on a DAT
trajectory and those images taken from subjects not progressing to a DAT
diagnosis. Further, the FPDS biomarker achieved state-of-the-art performance on
the mild cognitive impairment (MCI) to DAT conversion prediction task with an
AUC of 0.81, 0.80, and 0.77 for the 2-, 3-, and 5-year conversion windows,
respectively.
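The AUC figures quoted above can be computed for any probabilistic score such as the FPDS via the rank-based (Mann-Whitney) formulation; this is a generic sketch of the metric, not the authors' evaluation code.

```python
def auc(scores_pos, scores_neg):
    """Area under the ROC curve via the Mann-Whitney U statistic:
    the probability that a randomly chosen positive outscores a randomly
    chosen negative (ties count as 0.5)."""
    wins = 0.0
    for p in scores_pos:
        for n in scores_neg:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(scores_pos) * len(scores_neg))
```

A score that perfectly separates the two groups yields an AUC of 1.0, while a random score yields about 0.5, which makes the reported 0.77-0.81 range easy to interpret.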
Prediction of Progression to Alzheimer's disease with Deep InfoMax
Arguably, unsupervised learning plays a crucial role in the majority of
algorithms for processing brain imaging. A recently introduced unsupervised
approach, Deep InfoMax (DIM), is a promising tool for exploring brain structure
in a flexible non-linear way. In this paper, we investigate the use of variants
of DIM in a setting of progression to Alzheimer's disease in comparison with
supervised AlexNet and ResNet inspired convolutional neural networks. As a
benchmark, we use a classification task between four groups: patients with
stable, and progressive mild cognitive impairment (MCI), with Alzheimer's
disease, and healthy controls. Our dataset comprises 828 subjects from
the Alzheimer's Disease Neuroimaging Initiative (ADNI) database. Our
experiments highlight encouraging evidence of the high potential utility of DIM
in future neuroimaging studies.
Comment: Accepted to 2019 IEEE Biomedical and Health Informatics (BHI) as a conference paper
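DIM trains an encoder by maximizing an estimate of mutual information between representations; a common estimator for such objectives is the InfoNCE lower bound, sketched below on plain score matrices. This is a generic illustration of that family of objectives, not the paper's implementation.

```python
import math

def infonce_bound(scores):
    """InfoNCE mutual-information lower bound.

    `scores[i][j]` is a critic score for pairing sample i's first view
    with sample j's second view; diagonal entries are the positive pairs.
    Returns the mean over i of log( exp(s_ii) / mean_j exp(s_ij) ),
    a lower bound on mutual information that is capped at log(N)."""
    n = len(scores)
    total = 0.0
    for i in range(n):
        row = [math.exp(s) for s in scores[i]]
        total += math.log(row[i] / (sum(row) / n))
    return total / n
```

When the critic scores positives far above negatives the bound approaches its ceiling log(N); when it cannot tell pairs apart the bound is zero, so the encoder is pushed toward representations whose views identify each other.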
3D Inception-based CNN with sMRI and MD-DTI data fusion for Alzheimer's Disease diagnostics
In the last decade, computer-aided early diagnostics of Alzheimer's Disease
(AD) and its prodromal form, Mild Cognitive Impairment (MCI), has been the
subject of extensive research. Some recent studies have shown promising results
in the AD and MCI determination using structural and functional Magnetic
Resonance Imaging (sMRI, fMRI), Positron Emission Tomography (PET) and
Diffusion Tensor Imaging (DTI) modalities. Furthermore, the fusion of imaging
modalities in a supervised machine learning framework has proven a promising
direction of research.
In this paper we first review major trends in automatic classification
methods, such as feature extraction-based methods as well as deep learning
approaches in medical image analysis applied to the field of Alzheimer's
Disease diagnostics. Then we propose our own design of a 3D Inception-based
Convolutional Neural Network (CNN) for Alzheimer's Disease diagnostics. The
network is designed with an emphasis on the interior resource utilization and
uses sMRI and DTI modalities fusion on hippocampal ROI. The comparison with the
conventional AlexNet-based network using data from the Alzheimer's Disease
Neuroimaging Initiative (ADNI) dataset (http://adni.loni.usc.edu) demonstrates
significantly better performance of the proposed 3D Inception-based CNN.
Comment: arXiv admin note: substantial text overlap with arXiv:1801.0596
Review on Computer Vision in Gastric Cancer: Potential Efficient Tools for Diagnosis
Rapid diagnosis of gastric cancer is a great challenge for clinical doctors.
Dramatic progress of computer vision on gastric cancer has been made recently
and this review focuses on advances during the past five years. Different
methods for data generation and augmentation are presented, and various
approaches to extract discriminative features compared and evaluated.
Classification and segmentation techniques are carefully discussed for
assisting more precise diagnosis and timely treatment. For classification,
various methods have been developed to better process specific image types,
such as images that are rotated and estimated in real time (endoscopy),
high-resolution images (histopathology), images with low diagnostic accuracy
(X-ray), poor-contrast images of soft tissue with cavities (CT), and images
with insufficient annotation. For detection and segmentation, traditional
methods and machine learning methods are compared. Application of these
methods will greatly reduce the labor and time required for the diagnosis of
gastric cancers.
Medical Image Generation using Generative Adversarial Networks
Generative adversarial networks (GANs) are an unsupervised deep learning
approach in the computer vision community that has gained significant
attention over the last few years for identifying the internal structure of
multimodal medical imaging data. The adversarial network simultaneously
generates realistic medical images and corresponding annotations, which has
proven useful in many cases such as image augmentation, image registration,
medical image generation, image reconstruction, and image-to-image translation.
These properties have attracted the attention of researchers in the field of
medical image analysis, and we are witnessing rapid adoption in many novel and
traditional applications. This chapter presents state-of-the-art progress in
GAN-based clinical applications in medical image generation and cross-modality
synthesis. The various GAN frameworks that have gained popularity in the
interpretation of medical images, such as Deep Convolutional GAN (DCGAN),
Laplacian GAN (LAPGAN), pix2pix, CycleGAN, and the unsupervised image-to-image
translation model (UNIT), and that continue to improve their performance by
incorporating additional hybrid architectures, are discussed. Further, some
of the recent applications of these frameworks for image reconstruction and
synthesis, and future research directions in the area, are covered.
Comment: 19 pages, 3 figures, 5 tables
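The adversarial objective shared by all the frameworks listed above can be stated numerically: the discriminator minimizes binary cross-entropy on real versus generated samples, while the (non-saturating) generator pushes its samples' discriminator logits up. A minimal sketch of those two losses, detached from any particular network:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def discriminator_loss(real_logits, fake_logits):
    """Binary cross-entropy: real samples should score 1, generated 0."""
    real = -np.log(sigmoid(real_logits))
    fake = -np.log(1.0 - sigmoid(fake_logits))
    return real.mean() + fake.mean()

def generator_loss(fake_logits):
    """Non-saturating generator loss: push D(G(z)) toward 1."""
    return -np.log(sigmoid(fake_logits)).mean()
```

Variants such as DCGAN differ mainly in network architecture, while pix2pix and CycleGAN add reconstruction or cycle-consistency terms on top of this adversarial core.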
A Survey on Deep Learning for Neuroimaging-based Brain Disorder Analysis
Deep learning has been recently used for the analysis of neuroimages, such as
structural magnetic resonance imaging (MRI), functional MRI, and positron
emission tomography (PET), and has achieved significant performance
improvements over traditional machine learning in computer-aided diagnosis of
brain disorders. This paper reviews the applications of deep learning methods
for neuroimaging-based brain disorder analysis. We first provide a
comprehensive overview of deep learning techniques and popular network
architectures, by introducing various types of deep neural networks and recent
developments. We then review deep learning methods for computer-aided analysis
of four typical brain disorders, including Alzheimer's disease, Parkinson's
disease, Autism spectrum disorder, and Schizophrenia, where the first two
diseases are neurodegenerative disorders and the last two are
neurodevelopmental and psychiatric disorders, respectively. More importantly,
we discuss the limitations of existing studies and present possible future
directions.
Comment: 30 pages, 7 figures
The Ultrasound Visualization Pipeline - A Survey
Ultrasound is one of the most frequently used imaging modalities in medicine.
Its high spatial resolution, interactive nature, and non-invasiveness make it
the first choice in many examinations. Image interpretation is one of
ultrasound's main challenges. Much training is required to obtain a confident
skill level in ultrasound-based diagnostics. State-of-the-art graphics
techniques are needed to provide meaningful visualizations of ultrasound in
real-time. In this paper we present the process-pipeline for ultrasound
visualization, including an overview of the tasks performed in the specific
steps. To provide an insight into the trends of ultrasound visualization
research, we have selected a set of significant publications and divided them
into a technique-based taxonomy covering the topics pre-processing,
segmentation, registration, rendering and augmented reality. For the different
technique types we discuss the differences between ultrasound-based techniques
and techniques for other modalities.