Automated Assessment of Cardiothoracic Ratios on Chest Radiographs Using Deep Learning
Introduction: The cardiothoracic ratio (CTR) is a quantitative measure of cardiac size that can be measured from chest radiography (CXR). Although radiologists using digital workstations are able to calculate the CTR, clinical demands prevent its calculation for every case. In this study, the efficacy of a deep convolutional neural network (dCNN) for assessing the CTR was evaluated.
Methods: 611 HIPAA-compliant, de-identified CXRs were obtained from [institution blinded] and public databases. Using ImageJ, a board-certified radiologist (reader #1) and a medical student (reader #2) measured the CTR by marking four pixels on each CXR: the right- and left-most points of the chest wall and the right- and left-most points of the heart border. The CTR follows directly from these four x-coordinates, as illustrated in the sketch below.
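As an illustration only (the function and coordinate names are ours, not the authors'), computing the CTR from the four marked x-coordinates reduces to a ratio of two horizontal distances:

```python
# Hypothetical helper: CTR from the four marked pixel x-coordinates.
def cardiothoracic_ratio(chest_right_x: float, chest_left_x: float,
                         heart_right_x: float, heart_left_x: float) -> float:
    """Return CTR = maximal cardiac width / maximal thoracic width."""
    cardiac_width = abs(heart_left_x - heart_right_x)
    thoracic_width = abs(chest_left_x - chest_right_x)
    return cardiac_width / thoracic_width

# Example: a heart spanning x = 310..610 inside a thorax spanning x = 150..800
# gives CTR = 300 / 650 ≈ 0.46, below the common 0.5 cutoff for cardiomegaly.
print(cardiothoracic_ratio(150, 800, 310, 610))
```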
The TensorFlow framework (v2.0, Google LLC, Mountain View, CA) and the Keras library (v2.3, https://keras.io) were used to train the dCNN. The images were split into training (511 images), validation (50 images), and test (50 images) sets. A U-Net architecture with an intersection-over-union (IoU) loss function was employed to predict oval masks on new CXRs and calculate the CTR.
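The abstract does not reproduce the loss implementation; one common differentiable ("soft") IoU formulation in TensorFlow 2.0, sketched here with names of our choosing, is:

```python
import tensorflow as tf

def iou_loss(y_true, y_pred, smooth=1e-6):
    """Soft IoU loss: 1 - |A ∩ B| / |A ∪ B|, computed on probability maps."""
    y_true = tf.cast(y_true, tf.float32)
    intersection = tf.reduce_sum(y_true * y_pred, axis=[1, 2, 3])
    union = (tf.reduce_sum(y_true, axis=[1, 2, 3])
             + tf.reduce_sum(y_pred, axis=[1, 2, 3])
             - intersection)
    return 1.0 - (intersection + smooth) / (union + smooth)

# model.compile(optimizer="adam", loss=iou_loss)  # hypothetical U-Net model
```

The smoothing term keeps the loss defined when both masks are empty.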
Results: 45 test cases were analyzed. The mean absolute difference in the calculated CTR was 0.026 (SD: 0.039) for reader 1 vs. dCNN, 0.024 (SD: 0.039) for reader 2 vs. dCNN, and 0.022 (SD: 0.024) for reader 1 vs. reader 2. The intraclass correlation coefficient was 0.84 (95% CI: 0.73-0.91), 0.84 (95% CI: 0.72-0.91), and 0.92 (95% CI: 0.82-0.96) for reader 1 vs. dCNN, reader 2 vs. dCNN, and reader 1 vs. reader 2, respectively.
Discussion: The dCNN trained in this study produced CTR measurements similar to those of the human readers, achieving good reliability with each reader, while the readers achieved excellent reliability with one another. This study demonstrates the feasibility of using a dCNN to perform automated CTR assessment from CXRs. Future improvements to the algorithm may allow the dCNN to closely approach the expected limits of inter-observer human agreement.
3D Convolutional Neural Networks for the diagnosis of 6 unique pathologies on head CT
Introduction: Head CT scans are a standard first-line tool used by physicians in the diagnosis of neurological pathologies. Recently, the development of deep learning models such as convolutional neural networks (CNNs) has allowed the rapid identification of bleeds and other pathologies on CT scans. This study aims to show that, by training 3D CNNs with a larger, curated dataset, a more comprehensive list of potential diagnoses can be included in the model.
Methods: A retrospective study was performed using a dataset of 66,000 head CT studies from the Thomas Jefferson University health system. Studies were acquired using a natural language processor that searched for 60 different diagnoses, and the scans were then grouped into six distinct classes. Images were preprocessed by converting CT Hounsfield units to greyscale, cropping to remove negative (background) area, normalizing pixel values, and resizing to fit the input dimensions of the neural network (a plausible sketch of this pipeline follows). To automatically classify the studies, a three-dimensional residual neural network (3D-ResNet) was trained using 80% of the dataset as a training set and 20% as a test set.
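The exact preprocessing parameters are not stated in the abstract; the sketch below shows one plausible version of the described steps, in which the brain-window values, target shape, and function name are our assumptions:

```python
import numpy as np
from scipy.ndimage import zoom

def preprocess_ct(volume_hu: np.ndarray,
                  window_center: float = 40.0,   # assumed brain window
                  window_width: float = 80.0,
                  target_shape=(64, 128, 128)) -> np.ndarray:
    """Window HU values to greyscale, normalize to [0, 1], and resize."""
    lo = window_center - window_width / 2
    hi = window_center + window_width / 2
    vol = np.clip(volume_hu, lo, hi)      # Hounsfield units -> bounded greyscale
    vol = (vol - lo) / (hi - lo)          # normalize pixel values
    factors = [t / s for t, s in zip(target_shape, vol.shape)]
    return zoom(vol, factors, order=1)    # resize to the network's input size
```

Cropping of background area is omitted here for brevity.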
Results: To achieve the most accurate results, a 3D-ResNet with 34 residual layers was used. Following training, the model achieved an accuracy of 0.47 on the test set and 0.915 on the training set.
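ResNet-34 is built by stacking basic blocks of two convolutions with an identity shortcut; a minimal 3D analogue in Keras, illustrative only and not the authors' code, might look like:

```python
import tensorflow as tf
from tensorflow.keras import layers

def residual_block_3d(x, filters: int, stride: int = 1):
    """One basic 3D residual block: two 3x3x3 convolutions plus a skip path."""
    shortcut = x
    y = layers.Conv3D(filters, 3, strides=stride, padding="same")(x)
    y = layers.BatchNormalization()(y)
    y = layers.Activation("relu")(y)
    y = layers.Conv3D(filters, 3, padding="same")(y)
    y = layers.BatchNormalization()(y)
    # Project the shortcut when the spatial size or channel count changes.
    if stride != 1 or shortcut.shape[-1] != filters:
        shortcut = layers.Conv3D(filters, 1, strides=stride,
                                 padding="same")(shortcut)
    return layers.Activation("relu")(layers.Add()([y, shortcut]))
```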
Discussion: These results represent a promising initial step toward machine-assisted diagnosis of head CT scans. As more potential diagnoses are added to such models, their utility increases, and more studies can be processed quickly. Going forward, neural networks could potentially be used to prioritize radiology worklists and perform automatic diagnosis of urgent scans.
Assessment of Dobhoff Tube Malposition on Radiographs Using Deep Learning
Introduction: Dobhoff tubes (DHTs) are narrow-bore flexible devices that deliver enteral nutrition to critically ill patients. Tracheobronchial insertion of DHTs presents a significant risk of pulmonary complications. Thus, DHT insertion requires radiologist confirmation of correct placement with a chest x-ray (CXR), increasing clinical delays. To address this, we demonstrate the novel application of deep convolutional neural networks (DCNNs) to automatically and accurately identify DHTs on CXRs in real time.
Methods: 141 de-identified, HIPAA-compliant frontal-view chest radiographs containing DHTs in various positions were obtained. The DHTs were first manually segmented, and the segmentations were verified by a board-certified radiologist. Images were split into training (126) and test (15) sets. Data augmentation consisted of horizontal flipping, rotation, shear, and translation. A pretrained deep convolutional neural network with the U-Net architecture was employed. The network was trained using TensorFlow 2.0 on an NVIDIA GTX 1080 Ti GPU for 300 epochs with an Adam optimizer (learning rate = 0.0001) and an intersection over union (IOU) loss function.
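For illustration, the named augmentations map naturally onto Keras's ImageDataGenerator; the numeric ranges and array shapes below are assumptions, as the abstract lists only the transform types:

```python
import numpy as np
from tensorflow.keras.preprocessing.image import ImageDataGenerator

# Hypothetical stand-ins for the real data: 126 single-channel 512x512 images.
train_images = np.zeros((126, 512, 512, 1), dtype=np.float32)
train_masks = np.zeros((126, 512, 512, 1), dtype=np.float32)

augmenter = ImageDataGenerator(
    horizontal_flip=True,    # horizontal flipping
    rotation_range=10,       # rotation, in degrees (assumed range)
    shear_range=0.1,         # shear (assumed range)
    width_shift_range=0.1,   # translation (assumed range)
    height_shift_range=0.1,
)

# A shared seed keeps image and mask augmentations aligned for segmentation.
image_flow = augmenter.flow(train_images, batch_size=8, seed=42)
mask_flow = augmenter.flow(train_masks, batch_size=8, seed=42)
```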
Results: The fully trained network achieved a Sørensen–Dice coefficient of 0.7 between the predicted and ground-truth segmentations, suggesting that the DCNN was able to identify DHTs accurately across a variety of cases. Run time was less than one second per image, demonstrating the efficiency of this automated method.
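For reference, the Sørensen–Dice coefficient compares two binary masks as 2|A ∩ B| / (|A| + |B|); a minimal NumPy sketch, with names of our choosing:

```python
import numpy as np

def dice_coefficient(pred: np.ndarray, truth: np.ndarray) -> float:
    """Sørensen–Dice = 2|A ∩ B| / (|A| + |B|) for binary segmentation masks."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    return 2.0 * intersection / (pred.sum() + truth.sum())
```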
Discussion: A Dice coefficient of 0.7 represents strong accuracy and supports the hypothesis that a DCNN may be employed to automatically identify DHT positioning. This suggests that deep learning can segment and highlight DHTs, potentially aiding clinical teams. Performance could improve with more training cases and standardized preprocessing. Future directions include research on the real-world impact of such solutions on clinical teams, including whether such a system improves the safety of DHT placement on hospital floors.
Is Android or iPhone the Platform for Innovation in Imaging Informatics?
It is clear that ubiquitous mobile computing platforms will be a disruptive technology in the delivery of healthcare in the near future. While radiologists are fairly sedentary, their customers, the referring physicians and the patients, are not. Closer collaboration and interaction with referring physicians is seen as a key to maintaining relationships and integrating tightly with the patient management team. While today patients have to settle for receiving their images on a CD, in a short time they will be taking them home on their cell phones. As PACS vendors move ever outward into the enterprise, they are already actively developing clients for mobile platforms. The two major contenders are Apple's iPhone and the Android platform developed by Google. These two designs represent entirely different architectures and business models.
Generalization of Artificial Intelligence Models in Medical Imaging: A Case-Based Review
The discussions around Artificial Intelligence (AI) and medical imaging are centered around the success of deep learning algorithms. As new algorithms enter the market, it is important for practicing radiologists to understand the pitfalls of various AI algorithms. This entails having a basic understanding of how algorithms are developed, the kind of data they are trained on, and the settings in which they will be deployed. As with all new technologies, use of AI should be preceded by a fundamental understanding of the risks and benefits to those it is intended to help. This case-based review is intended to point out specific factors that practicing radiologists who intend to use AI should consider.
Automated Detection of Radiology Reports that Document Non-routine Communication of Critical or Significant Results
The purpose of this investigation is to develop an automated method to accurately detect radiology reports that indicate non-routine communication of critical or significant results. Such a classification system would be valuable for performance monitoring and accreditation. Using a database of 2.3 million free-text radiology reports, a rule-based query algorithm was developed after analyzing hundreds of radiology reports that indicated communication of critical or significant results to a healthcare provider. This algorithm consisted of words and phrases used by radiologists to indicate such communications, combined with specific handcrafted rules. The algorithm was iteratively refined and retested on hundreds of reports until the precision and recall did not change significantly between iterations. It was then validated on the entire database of 2.3 million reports, excluding those used during the testing and refinement process, with human review as the reference standard. The accuracy of the algorithm was determined using precision, recall, and the F measure; confidence intervals were calculated using the adjusted Wald method. The developed algorithm for detecting critical result communication has a precision of 97.0% (95% CI, 93.5–98.8%), a recall of 98.2% (95% CI, 93.4–100%), and an F measure of 97.6% (β = 1). Our query algorithm is accurate for identifying radiology reports that contain non-routine communication of critical or significant results, and it can be applied to a radiology report database for quality control purposes and to help satisfy accreditation requirements.
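The published phrase list and handcrafted rules are not reproduced in the abstract; the sketch below only illustrates the general shape of such a rule-based query (every pattern is hypothetical) and verifies the reported F measure:

```python
import re

# Hypothetical trigger phrases; the study's actual word list and rules differ.
COMMUNICATION_PATTERNS = [
    r"\bresults? (were|was) (communicated|discussed|relayed) (to|with)\b",
    r"\b(critical|significant) (result|finding)s? .{0,40}(communicated|called)\b",
    r"\b(notified|paged|called) (dr\.?|the (referring|ordering)) \w+",
]

def flags_communication(report_text: str) -> bool:
    """Return True if any trigger phrase appears in the report."""
    text = report_text.lower()
    return any(re.search(pattern, text) for pattern in COMMUNICATION_PATTERNS)

def f_measure(precision: float, recall: float, beta: float = 1.0) -> float:
    """F = (1 + beta^2)*P*R / (beta^2*P + R); beta = 1 weights P and R equally."""
    return (1 + beta**2) * precision * recall / (beta**2 * precision + recall)

print(flags_communication("Critical results were communicated to Dr. Smith."))
print(f_measure(0.970, 0.982))  # ~0.976, matching the reported F measure
```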