Anatomy-specific classification of medical images using deep convolutional nets
Automated classification of human anatomy is an important prerequisite for
many computer-aided diagnosis systems. The spatial complexity and variability
of anatomy throughout the human body makes classification difficult. "Deep
learning" methods such as convolutional networks (ConvNets) outperform other
state-of-the-art methods in image classification tasks. In this work, we
present a method for organ- or body-part-specific anatomical classification of
medical images acquired using computed tomography (CT) with ConvNets. We train
a ConvNet, using 4,298 separate axial 2D key-images to learn 5 anatomical
classes. Key-images were mined from a hospital PACS archive, using a set of
1,675 patients. We show that a data augmentation approach can help to enrich
the data set and improve classification performance. Using ConvNets and data
augmentation, we achieve an anatomy-specific classification error of 5.9% and
an average area-under-the-curve (AUC) value of 0.998 in testing. We
demonstrate that deep learning can be used to train very reliable and accurate
classifiers that could initialize further computer-aided diagnosis.Comment: Presented at: 2015 IEEE International Symposium on Biomedical
Imaging, April 16-19, 2015, New York Marriott at Brooklyn Bridge, NY, US
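The abstract above reports that data augmentation enriched the training set of axial 2D key-images. A minimal sketch of label-preserving augmentation for 2D slices is shown below; the flips and small pixel shifts are illustrative assumptions, not the paper's documented pipeline, and the function names are hypothetical.

```python
import numpy as np

def augment_slice(image, rng):
    """Return a randomly transformed copy of a 2D key-image.

    Uses simple label-preserving transforms (a horizontal flip and a
    small translation); the paper's exact augmentation steps are not
    reproduced here.
    """
    out = image.copy()
    if rng.random() < 0.5:
        out = np.fliplr(out)                 # mirror left-right
    shift = rng.integers(-5, 6, size=2)      # shift by up to 5 pixels
    out = np.roll(out, tuple(shift), axis=(0, 1))
    return out

def augment_dataset(images, labels, copies, seed=0):
    """Enrich (images, labels) with `copies` augmented versions per image."""
    rng = np.random.default_rng(seed)
    aug_images, aug_labels = list(images), list(labels)
    for img, lab in zip(images, labels):
        for _ in range(copies):
            aug_images.append(augment_slice(img, rng))
            aug_labels.append(lab)           # anatomy label is unchanged
    return aug_images, aug_labels
```

Because the transforms only rearrange pixels, every augmented slice keeps the same anatomical class label, which is what makes the enlarged set usable for training.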
A Survey on Deep Learning in Medical Image Analysis
Deep learning algorithms, in particular convolutional networks, have rapidly
become a methodology of choice for analyzing medical images. This paper reviews
the major deep learning concepts pertinent to medical image analysis and
summarizes over 300 contributions to the field, most of which appeared in the
last year. We survey the use of deep learning for image classification, object
detection, segmentation, registration, and other tasks and provide concise
overviews of studies per application area. Open challenges and directions for
future research are discussed.
Comment: Revised survey includes expanded discussion section and reworked
introductory section on common deep architectures. Added missed papers from
before Feb 1st 2017
A Novel Approach for the Visualisation and Progression Tracking of Metastatic Bone Disease
Metastatic bone disease (MBD) is a common secondary feature of cancer that can cause significant complications, including severe pain and death. Current methods of diagnosis require a highly trained radiologist capable of interpreting medical images and recognising the sites of MBD. These medical images are often noisy, two-dimensional, greyscale and usually of poor resolution.
To help address these issues, several studies have shown that computer-aided methods can locate MBD within medical images. However, these methods are limited in scope, accuracy, sensitivity and explainability, and they do not improve upon the poor visualisations of the underlying medical imaging data.
To address these limitations, I have developed a novel method of automatic MBD assessment and visualisation using computed tomography (CT) imaging data as the input. The method is fully automated and requires no human interaction -- although users can interact with a viewer that visualises the results. It has been tested on CT data from prostate cancer patients, as prostate cancer is one of the most common sources of MBD.
The method described in this thesis has a sensitivity of 0.871 when detecting sclerotic and lytic lesions within a single data set. This sensitivity is comparable to that of existing methods; however, previous studies limited lesion detection to the vertebrae. My method significantly expands this scope to include the ribs, vertebrae, pelvis and proximal femurs.
The work in this thesis also provides novel visualisations of the disease and does not suffer from the explainability issues that plague modern machine learning algorithms.
In addition, I developed a novel method of tracking the spread of MBD at multiple time points using longitudinal CT data. This method is capable of calculating the change in lesion volume across multiple time points, providing a novel numerical assessment.
The Armstrong Trust
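The longitudinal tracking described above rests on computing lesion volume from segmented CT at each time point and differencing across scans. A minimal sketch of that calculation is below; the binary-mask representation, millimetre voxel spacing, and function names are assumptions for illustration, not the thesis's actual implementation.

```python
import numpy as np

def lesion_volume_ml(mask, spacing_mm):
    """Volume of a binary lesion mask in millilitres.

    mask: 3D 0/1 array from a segmented CT scan.
    spacing_mm: (z, y, x) voxel spacing in millimetres.
    """
    voxel_mm3 = float(np.prod(spacing_mm))
    return mask.sum() * voxel_mm3 / 1000.0   # 1 ml = 1000 mm^3

def volume_changes(masks, spacing_mm):
    """Change in total lesion volume between consecutive time points."""
    volumes = [lesion_volume_ml(m, spacing_mm) for m in masks]
    return [b - a for a, b in zip(volumes, volumes[1:])]
```

In practice the scans would first need to be registered so that lesion masks at different time points correspond to the same anatomy; that step is outside this sketch.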
Tumor Segmentation and Classification Using Machine Learning Approaches
Medical image processing has recently developed progressively in terms of methodologies and applications to increase serviceability in health care management. Modern medical image processing employs various methods to diagnose tumors due to the burgeoning demand in the related industry. This study uses the PG-DBCWMF, the HV region method, and CTSIFT extraction to identify brain tumors and pancreatic tumors. In terms of efficiency, precision, creativity, and other factors, these strategies offer improved performance in therapeutic settings. The suggested method combines three techniques: PG-DBCWMF, the HV region algorithm, and CTSIFT extraction. The PG-DBCWMF (Patch Group Decision Couple Window Median Filter) works well in the preprocessing stage and eliminates noise. The HV region technique precisely calculates the vertical and horizontal angles of the known images. CTSIFT is a feature extraction method that recognizes the affected area of tumor images. The brain tumor and pancreatic tumor databases, which produce the best PSNR, MSE, and other results, were used for the experimental evaluation.
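The preprocessing stage above relies on median filtering to suppress noise before feature extraction. The sketch below shows a plain sliding-window median filter as a baseline stand-in; the patch-group decision and coupled-window logic of the actual PG-DBCWMF is not reproduced here, and the function name is hypothetical.

```python
import numpy as np

def median_filter(image, size=3):
    """Plain sliding-window median filter for a 2D greyscale image.

    A baseline denoiser standing in for PG-DBCWMF; borders are handled
    by reflecting the image at its edges before filtering.
    """
    pad = size // 2
    padded = np.pad(image, pad, mode="reflect")
    out = np.empty(image.shape, dtype=float)
    for i in range(image.shape[0]):
        for j in range(image.shape[1]):
            # median of the size x size neighbourhood centred on (i, j)
            out[i, j] = np.median(padded[i:i + size, j:j + size])
    return out
```

A median filter removes impulse ("salt-and-pepper") noise while preserving edges better than a mean filter, which is why median-based variants are common in tumor-image preprocessing.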