A Survey on Deep Learning in Medical Image Analysis
Deep learning algorithms, in particular convolutional networks, have rapidly
become a methodology of choice for analyzing medical images. This paper reviews
the major deep learning concepts pertinent to medical image analysis and
summarizes over 300 contributions to the field, most of which appeared in the
last year. We survey the use of deep learning for image classification, object
detection, segmentation, registration, and other tasks and provide concise
overviews of studies per application area. Open challenges and directions for
future research are discussed.

Comment: Revised survey includes expanded discussion section and reworked introductory section on common deep architectures. Added missed papers from before Feb 1st 201
Improving Patch-Based Convolutional Neural Networks for MRI Brain Tumor Segmentation by Leveraging Location Information.
The manual brain tumor annotation process is time-consuming and resource-intensive; therefore, an automated and accurate brain tumor segmentation tool is greatly in demand. In this paper, we introduce a novel method to integrate location information with state-of-the-art patch-based neural networks for brain tumor segmentation. This is motivated by the observation that lesions are not uniformly distributed across different brain parcellation regions and that a locality-sensitive segmentation is likely to obtain better segmentation accuracy. Toward this, we use an existing brain parcellation atlas in the Montreal Neurological Institute (MNI) space and map this atlas to the individual subject data. The mapped atlas in the subject data space is integrated with structural Magnetic Resonance (MR) imaging data, and patch-based neural networks, including 3D U-Net and DeepMedic, are trained to classify the different brain lesions. Multiple state-of-the-art neural networks are trained and integrated with XGBoost fusion in the proposed two-level ensemble method: the first level reduces the uncertainty of same-type models with different seed initializations, and the second level leverages the advantages of different types of neural network models. The proposed location information fusion method improves the segmentation performance of state-of-the-art networks including 3D U-Net and DeepMedic. Our proposed ensemble also achieves better segmentation performance than the state-of-the-art networks on BraTS 2017 and rivals state-of-the-art networks on BraTS 2018. Detailed results are provided on the public multimodal brain tumor segmentation (BraTS) benchmarks.
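The two-level ensemble described in this abstract can be sketched as follows. The function names and array shapes below are illustrative assumptions, and the level-2 gradient-boosted classifier (XGBoost in the paper) is represented only by the per-voxel feature matrix that would be fed to it:

```python
import numpy as np

def level1_average(seed_prob_maps):
    """Level 1: average the class-probability maps produced by the same
    architecture trained with different random seeds, reducing
    seed-initialization variance (shapes here are illustrative)."""
    # seed_prob_maps: (n_seeds, n_classes, H, W)
    return np.mean(seed_prob_maps, axis=0)

def level2_features(family_prob_maps):
    """Level 2: stack the per-voxel class probabilities of different
    model families (e.g. 3D U-Net, DeepMedic) into one feature vector
    per voxel; in the paper these features are fused with XGBoost."""
    # family_prob_maps: list of (n_classes, H, W) arrays
    stacked = np.concatenate(family_prob_maps, axis=0)  # (F*C, H, W)
    return stacked.reshape(stacked.shape[0], -1).T      # (H*W, F*C)

# Toy demonstration on random "probability" maps.
rng = np.random.default_rng(0)
unet_seeds = rng.random((5, 4, 8, 8))       # 5 seeds, 4 classes, 8x8 slice
deepmedic_seeds = rng.random((5, 4, 8, 8))

unet_avg = level1_average(unet_seeds)
dm_avg = level1_average(deepmedic_seeds)
X = level2_features([unet_avg, dm_avg])     # voxel-wise fusion features
```

A gradient-boosted classifier trained on `X` (with ground-truth labels per voxel) would then produce the final fused segmentation.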
Brain Tumor Segmentation with Deep Neural Networks
In this paper, we present a fully automatic brain tumor segmentation method
based on Deep Neural Networks (DNNs). The proposed networks are tailored to
glioblastomas (both low and high grade) pictured in MR images. By their very
nature, these tumors can appear anywhere in the brain and have almost any kind
of shape, size, and contrast. These reasons motivate our exploration of a
machine learning solution that exploits a flexible, high capacity DNN while
being extremely efficient. Here, we give a description of different model
choices that we've found to be necessary for obtaining competitive performance.
We explore in particular different architectures based on Convolutional Neural
Networks (CNN), i.e. DNNs specifically adapted to image data.
We present a novel CNN architecture which differs from those traditionally
used in computer vision. Our CNN exploits both local features as well as more
global contextual features simultaneously. Also, different from most
traditional uses of CNNs, our networks use a final layer that is a
convolutional implementation of a fully connected layer, which allows a
40-fold speed-up. We also describe a two-phase training procedure that allows us to
tackle difficulties related to the imbalance of tumor labels. Finally, we
explore a cascade architecture in which the output of a basic CNN is treated as
an additional source of information for a subsequent CNN. Results reported on
the 2013 BRATS test dataset reveal that our architecture improves over the
currently published state-of-the-art while being over 30 times faster.
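The "convolutional implementation of a fully connected layer" mentioned above can be illustrated with a small NumPy sketch (patch size, class count, and image size are arbitrary assumptions): applying an FC layer to every k×k patch is equivalent to sliding the same weights over the image as convolution kernels, which is what lets a patch classifier label a whole image in one pass instead of one patch at a time.

```python
import numpy as np

rng = np.random.default_rng(0)
k, C = 5, 2                       # patch size and number of classes (arbitrary)
W = rng.normal(size=(C, k * k))   # fully connected weights for one k x k patch
img = rng.normal(size=(12, 12))   # toy single-channel image

# Patch-wise form: run the FC layer on every k x k patch independently.
H = img.shape[0] - k + 1
patchwise = np.empty((C, H, H))
for i in range(H):
    for j in range(H):
        patchwise[:, i, j] = W @ img[i:i + k, j:j + k].ravel()

# Convolutional form: the same weights reshaped into C kernels of size
# k x k and slid over the whole image (cross-correlation, stride 1).
kernels = W.reshape(C, k, k)
convform = np.empty((C, H, H))
for c in range(C):
    for i in range(H):
        for j in range(H):
            convform[c, i, j] = np.sum(kernels[c] * img[i:i + k, j:j + k])

assert np.allclose(patchwise, convform)  # identical outputs
```

The speed-up comes from the convolutional form reusing overlapping computation across neighbouring patches inside an optimized convolution routine, rather than re-extracting and re-classifying each patch.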
Image Processing, Segmentation and Machine Learning Models to Classify and Delineate Tumor Volumes to Support Medical Decision
Techniques for processing and analysing images and medical data have become
central to translational applications and research in clinical and pre-clinical
environments. The advantages of these techniques are improved diagnostic
accuracy and the assessment of treatment response by means of quantitative biomarkers
in an efficient way. In the era of personalized medicine, the early and
effective prediction of therapy response in patients is still a critical issue.
In radiation therapy planning, Magnetic Resonance Imaging (MRI) provides high
quality detailed images and excellent soft-tissue contrast, while Computed
Tomography (CT) images provide attenuation maps and very good hard-tissue
contrast. In this context, Positron Emission Tomography (PET) is a non-invasive
imaging technique which has the advantage, over morphological imaging techniques,
of providing functional information about the patient’s disease.
In the last few years, several criteria to assess therapy response in oncological
patients have been proposed, ranging from anatomical to functional assessments.
Changes in tumour size are not necessarily correlated with changes in tumour
viability and outcome. In addition, morphological changes resulting from therapy
occur more slowly than functional changes. Inclusion of PET images in radiotherapy
protocols is desirable because it is predictive of treatment response and provides
crucial information to accurately target the oncological lesion and to escalate the
radiation dose without increasing normal tissue injury. For this reason, PET may be
used for improving the Planning Target Volume (PTV). Nevertheless, due to the
nature of PET images (low spatial resolution, high noise and weak boundary),
metabolic image processing is a critical task.
The aim of this Ph.D. thesis is to develop smart methodologies for the
medical imaging field to address different kinds of problems related to medical
image and data analysis, working closely with radiologist physicians.
Various issues in the clinical environment have been addressed and improvements
have been produced in various fields, such as organ and tissue
segmentation and classification to delineate tumour volumes using machine learning
techniques to support medical decisions.
In particular, the following topics have been object of this study:
• A Technique for Crohn’s Disease Classification using a Kernel Support Vector
Machine;
• Automatic Multi-Seed Detection For MR Breast Image Segmentation;
• Tissue Classification in PET Oncological Studies;
• KSVM-Based System for the Definition, Validation and Identification of the
Incisional Hernia Recurrence Risk Factors;
• A smart and operator independent system to delineate tumours in Positron
Emission Tomography scans;
• Active Contour Algorithm with Discriminant Analysis for Delineating
Tumors in Positron Emission Tomography;
• K-Nearest Neighbor driving Active Contours to Delineate Biological Tumor
Volumes;
• Tissue Classification to Support Local Active Delineation of Brain Tumors;
• A fully automatic system of Positron Emission Tomography Study
segmentation.
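As a point of reference for the PET delineation topics listed above, a common baseline for biological tumour volume (BTV) delineation is fixed thresholding at a fraction of SUVmax (often around 40%). This is not the thesis's operator-independent method, only the simple baseline such methods improve upon, sketched here on synthetic data:

```python
import numpy as np

def fixed_threshold_btv(suv, fraction=0.40):
    """Baseline BTV delineation: keep every voxel whose uptake is at
    least `fraction` of the maximum SUV in the volume."""
    return suv >= fraction * suv.max()

# Synthetic PET slice: a bright Gaussian "lesion" on a noisy background.
rng = np.random.default_rng(42)
y, x = np.mgrid[0:64, 0:64]
lesion = 10.0 * np.exp(-((x - 32) ** 2 + (y - 32) ** 2) / (2 * 5.0 ** 2))
suv = lesion + rng.random((64, 64))   # background uptake in [0, 1)

mask = fixed_threshold_btv(suv, fraction=0.40)
```

Fixed thresholding is sensitive to noise, partial-volume effects, and the weak boundaries typical of PET, which is precisely what motivates the active-contour and classifier-driven delineation methods developed in the thesis.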
This work has been developed in collaboration with the medical staff and
colleagues at the:
• Dipartimento di Biopatologia e Biotecnologie Mediche e Forensi
(DIBIMED), University of Palermo
• Cannizzaro Hospital of Catania
• Istituto di Bioimmagini e Fisiologia Molecolare (IBFM), Consiglio Nazionale
delle Ricerche (CNR), Cefalù
• School of Electrical and Computer Engineering at Georgia Institute of
Technology
The proposed contributions have produced scientific publications in indexed
computer science and medical journals and conferences. They are very useful
for PET and MRI image segmentation and may be used daily as Medical
Decision Support Systems to enhance the current methodology performed by
healthcare operators in radiotherapy treatments.
The future developments of this research concern the integration of data acquired
by image analysis with the management and processing of big data coming from a wide
range of heterogeneous sources.