    Automated Pneumothorax Diagnosis using Deep Neural Networks

    Thoracic ultrasound can provide information leading to rapid diagnosis of pneumothorax with improved accuracy over the standard physical examination and with higher sensitivity than anteroposterior chest radiography. However, the clinical interpretation of a patient medical image is highly operator dependent. Furthermore, remote environments, such as the battlefield or deep-space exploration, may lack the expertise for diagnosing certain pathologies. We have developed an automated image interpretation pipeline for the analysis of thoracic ultrasound data and the classification of pneumothorax events to provide decision support in such situations. Our pipeline consists of image preprocessing, data augmentation, and deep learning architectures for medical diagnosis. In this work, we demonstrate that robust, accurate interpretation of chest images and video can be achieved using deep neural networks. A number of novel image processing techniques were employed to achieve this result. Affine transformations were applied for data augmentation. Hyperparameters, including learning rate, dropout regularization, batch size, and number of epochs, were optimized using a sequential model-based Bayesian approach. In addition, we utilized pretrained architectures, applying transfer learning and fine-tuning techniques to the fully connected layers. Our pipeline yielded binary classification validation accuracies of 98.3% for M-mode images and 99.8% for B-mode video frames.
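    The abstract does not include code, but the pipeline it describes (affine-transform augmentation, a pretrained convolutional backbone, and fine-tuning of the fully connected layers for binary classification) maps onto a fairly standard workflow. The sketch below is a minimal illustration under assumed details: the VGG16 backbone, 224x224 input size, layer widths, dropout rate, and the train_dir directory of labeled frames are assumptions, not details from the paper, and the sequential model-based Bayesian search mentioned in the abstract (which would tune learning rate, dropout, batch size, and epoch count) is omitted.

```python
# Minimal sketch of a transfer-learning pipeline for binary pneumothorax
# classification from ultrasound frames. Backbone, input size, and layer
# widths are illustrative assumptions, not details from the abstract.
import tensorflow as tf
from tensorflow.keras import layers, models
from tensorflow.keras.applications import VGG16
from tensorflow.keras.preprocessing.image import ImageDataGenerator

# Affine data augmentation: rotations, shifts, shear, and zoom.
augmenter = ImageDataGenerator(
    rescale=1.0 / 255,
    rotation_range=10,
    width_shift_range=0.1,
    height_shift_range=0.1,
    shear_range=0.1,
    zoom_range=0.1,
    validation_split=0.2,
)

# Pretrained convolutional base; only the new fully connected head is trained.
base = VGG16(weights="imagenet", include_top=False, input_shape=(224, 224, 3))
base.trainable = False  # fine-tuning could later unfreeze the top conv block

model = models.Sequential([
    base,
    layers.Flatten(),
    layers.Dense(256, activation="relu"),
    layers.Dropout(0.5),                    # dropout rate is a tunable hyperparameter
    layers.Dense(1, activation="sigmoid"),  # binary output: pneumothorax vs. normal
])

model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=1e-4),
              loss="binary_crossentropy",
              metrics=["accuracy"])

# "train_dir" is a hypothetical folder of labeled M-mode/B-mode frames,
# with one subdirectory per class.
train_gen = augmenter.flow_from_directory(
    "train_dir", target_size=(224, 224), batch_size=32,
    class_mode="binary", subset="training")
val_gen = augmenter.flow_from_directory(
    "train_dir", target_size=(224, 224), batch_size=32,
    class_mode="binary", subset="validation")

model.fit(train_gen, validation_data=val_gen, epochs=20)
```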

    Medics: Medical Decision Support System for Long-Duration Space Exploration

    The Autonomous Medical Operations (AMO) group at NASA Ames is developing a medical decision support system to enable astronauts on long-duration exploration missions to operate autonomously. The system will support clinical actions by providing medical interpretation advice and procedural recommendations during emergent care and clinical work performed by the crew. The current state of development of the system, called MedICS (Medical Interpretation Classification and Segmentation), includes two separate aspects: a set of machine learning diagnostic models trained to analyze organ images and patient health records, and an interface to ultrasound diagnostic hardware and to medical repositories. Three sets of images of different organs, plus medical records, were used to train machine learning models for the following analyses:
    1. Pneumothorax condition (collapsed lung). The trained model provides a positive or negative diagnosis of the condition.
    2. Carotid artery occlusion. The trained model produces a diagnosis at five occlusion levels (including normal).
    3. Ocular retinal images. The model extracts optic disc pixels (image segmentation). This is a precursor step for advanced autonomous fundus clinical evaluation algorithms to be implemented in FY20.
    4. Medical health records. The model produces a differential diagnosis for any particular individual, based on symptoms and other health and demographic information. A probability is calculated for each of the 25 most common conditions. The same model also provides the likelihood of survival.
    All results are provided with a confidence level. Item 1 images were provided by the US Army and were part of a data set for the clinical treatment of injured battlefield soldiers; this condition is relevant to possible space mishaps due to pressure management issues. Item 2 images were provided by Houston Methodist Hospital, and item 4 health records were acquired from the MIT Laboratory of Computational Physiology. The machine learning technology used is deep multilayer networks (deep learning), and new models will continue to be produced as relevant data become available and specific health needs of astronaut crews are identified. The interfacing aspects of the system include a GUI for running the different models and for retrieving and storing data, as well as support for integration with an augmented reality (AR) system deployed at JSC by Tietronix Software Inc. (HoloLens). The AR system provides guidance for the placement of an ultrasound transducer that captures images to be sent to the MedICS system for diagnosis. The captured image and the associated diagnosis appear in the technician's AR visual display.
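    The abstract describes MedICS as a set of trained models behind a common interface that returns each result with a confidence level. The sketch below shows one way such a dispatcher could be structured; the class, method, and file names are hypothetical illustrations, not the actual MedICS API, which the abstract does not describe at this level of detail.

```python
# Structural sketch of a MedICS-style dispatcher that routes inputs to
# trained models and returns a diagnosis with a confidence level.
# All names and file paths here are hypothetical, not the real MedICS API.
from dataclasses import dataclass

import numpy as np
import tensorflow as tf


@dataclass
class Diagnosis:
    label: str          # e.g. "pneumothorax: positive" or an occlusion level
    probability: float  # model output for the reported label
    confidence: float   # confidence level reported alongside the result


class DecisionSupport:
    """Loads the trained models and exposes one predict call per task."""

    def __init__(self, model_dir: str):
        # Model filenames are assumptions; the abstract only names the tasks.
        self.pneumothorax = tf.keras.models.load_model(f"{model_dir}/pneumothorax.h5")
        self.carotid = tf.keras.models.load_model(f"{model_dir}/carotid_occlusion.h5")
        self.differential = tf.keras.models.load_model(f"{model_dir}/differential.h5")

    def diagnose_pneumothorax(self, frame: np.ndarray) -> Diagnosis:
        # Binary output: probability that the frame shows a pneumothorax.
        p = float(self.pneumothorax.predict(frame[None, ...], verbose=0)[0, 0])
        label = "pneumothorax: positive" if p >= 0.5 else "pneumothorax: negative"
        # Simple placeholder notion of confidence: distance from the 0.5 boundary.
        return Diagnosis(label, p, confidence=max(p, 1.0 - p))

    def diagnose_carotid(self, frame: np.ndarray) -> Diagnosis:
        # Five occlusion levels (including normal); report the most likely one.
        probs = self.carotid.predict(frame[None, ...], verbose=0)[0]
        i = int(np.argmax(probs))
        return Diagnosis(f"carotid occlusion level {i}", float(probs[i]), float(probs[i]))

    def differential_diagnosis(self, record: np.ndarray, top_k: int = 3):
        # Probability over 25 common conditions from symptoms/demographics;
        # return the top_k most likely (condition index, probability) pairs.
        probs = self.differential.predict(record[None, :], verbose=0)[0]
        top = np.argsort(probs)[::-1][:top_k]
        return [(int(i), float(probs[i])) for i in top]
```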