
    The Development and Usability Testing of a Decision Support Mobile App for the Essential Care for Every Baby (ECEB) Program

    mHealth is a pervasive and ubiquitous technology that has transformed healthcare for both health providers and patients (Wang et al. 2016). Each year, about 15 million babies worldwide are born too soon (premature) or too small (low birthweight or small for gestational age); among these, 2.7 million newborns die every year from complications of prematurity (Every New Born 2014). Common complications of prematurity, such as feeding problems and hypothermia, lead to high rates of morbidity and mortality among prematurely born babies each year. Delivery of evidence-based essential newborn care interventions, from birth through the first 24 h of postnatal life, has been shown to improve health and well-being and to reduce mortality among newborns. However, due to a variety of barriers, bottlenecks, and challenges, many babies born in resource-limited settings do not receive the full complement of these lifesaving interventions. To address these challenges, the American Academy of Pediatrics (AAP) developed Essential Care for Every Baby (ECEB), an integrated educational and training curriculum for health care providers and family stakeholders in LMICs. ECEB includes an Action Plan (Figure 1), which serves as a decision support tool and job aid for health care providers and synthesizes over a decade of research on helping babies survive (Essential Care for Every Baby 2018). The program teaches health care providers essential newborn care practices to keep all babies healthy from the time of birth to discharge from the facility. Yet the nuances of monitoring, tracking, and caring for multiple babies simultaneously in neonatal wards place a heavy cognitive load on nurses, who must perform tasks every few minutes for each baby. The care is divided into three phases based on the time after birth: Phase 1 (0–60 min), Phase 2 (60–90 min), and Phase 3 (90 min–24 h).
We iteratively developed and tested the usability of the ECEB Action Plan app as part of the mobile Helping Babies Survive (mHBS) suite of apps, and we plan to field test the app in the near future.
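The three time-based care phases described above can be sketched as a simple lookup, of the kind a decision-support app might use to decide which tasks to surface for a given baby. This is an illustrative sketch only; the function name and phase labels are assumptions, not the actual mHBS app logic.

```python
# Hypothetical sketch of mapping time since birth to an ECEB care phase.
# Phase boundaries are taken from the abstract; everything else is illustrative.

def eceb_phase(minutes_since_birth: float) -> str:
    """Return the ECEB care phase for a given time after birth, in minutes."""
    if minutes_since_birth < 0:
        raise ValueError("time since birth cannot be negative")
    if minutes_since_birth <= 60:
        return "Phase 1 (0-60 min)"
    if minutes_since_birth <= 90:
        return "Phase 2 (60-90 min)"
    if minutes_since_birth <= 24 * 60:
        return "Phase 3 (90 min-24 h)"
    return "Beyond 24 h (discharge assessment)"
```

A real app would attach phase-specific tasks (e.g., temperature checks, feeding assessments) to each phase and track them per baby, which is where the cognitive-load reduction comes from.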

    Natural language processing of MIMIC-III clinical notes for identifying diagnosis and procedures with neural networks

    Coding diagnoses and procedures in medical records is a crucial process in the healthcare industry, supporting accurate billing, reimbursement from payers, and standardized patient care records. In the United States, billing- and insurance-related activities cost around $471 billion in 2012, about 25% of all U.S. hospital spending. In this paper, we report the performance of a natural language processing model that maps clinical notes to medical codes and predicts final diagnoses from unstructured entries such as the history of present illness and symptoms at the time of admission. Previous studies have demonstrated that deep learning models perform better at such mapping than conventional machine learning models. We therefore applied a state-of-the-art deep learning method, ULMFiT, to the largest emergency department clinical notes dataset, MIMIC-III, which has 1.2M clinical notes, selecting the top-10 and top-50 diagnosis and procedure codes. Our models predicted the top-10 diagnoses and procedures with 80.3% and 80.5% accuracy, while the top-50 ICD-9 diagnosis and procedure codes were predicted with 70.7% and 63.9% accuracy. Predicting diagnoses and procedures from unstructured clinical notes can help human coders save time, eliminate errors, and minimize costs. Given these promising scores, the next step would be to deploy the model in a small-scale real-world setting and compare it with human coders as the gold standard. We believe that further research on this approach can produce highly accurate predictions that ease workflow in a clinical setting.
    Comment: This is a shortened version of the Capstone Project that was accepted by the Faculty of Indiana University, in partial fulfillment of the requirements for the degree of Master of Science in Health Informatics in Dec 201
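The "top-10 / top-50" setup described above implies a preprocessing step: restricting the labelled corpus to notes whose code is among the k most frequent ICD-9 codes. A minimal sketch of that step, assuming a simple (note, code) record layout (this is not the authors' ULMFiT pipeline, just an illustration of the data selection):

```python
from collections import Counter

def filter_top_k_codes(records, k):
    """Keep only notes labelled with one of the k most frequent ICD-9 codes.

    records: iterable of (note_text, icd9_code) pairs (layout is an assumption).
    Returns (subset, top_codes), where top_codes is ordered by frequency.
    """
    counts = Counter(code for _, code in records)
    top_codes = [code for code, _ in counts.most_common(k)]
    keep = set(top_codes)
    subset = [(text, code) for text, code in records if code in keep]
    return subset, top_codes
```

The filtered subset would then be tokenized and fed to the classifier; restricting to frequent codes is what makes the top-10 task easier (80.3% accuracy) than the top-50 task (70.7%).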

    Phronesis of AI in radiology: Superhuman meets natural stupidity

    Advances in AI in the last decade have clearly led economists, politicians, journalists, and the citizenry in general to believe that machines are coming to take human jobs. We review 'superhuman' AI performance claims in radiology and then provide a self-reflection on our own work in the area in the form of a critical review, a tribute of sorts to McDermott's 1976 paper, asking the field for some self-discipline. There is clearly an opportunity to replace humans, but there are better opportunities, as we have discovered, in fitting together the cognitive abilities of humans and non-humans. We performed one of the first studies in radiology of how human and AI performance can complement and improve each other in detecting pneumonia in chest X-rays. We ask whether there is a practical wisdom, or phronesis, that we need to demonstrate in AI today as well as in our field. Using this lens, we articulate what AI as a field has already learned, and probably can in the future learn, from psychology, cognitive science, sociology, and science and technology studies.

    A Prospective study on the assessment of risk factors for type 2 diabetes mellitus in outpatients department of a south Indian tertiary care hospital: A case-control study

    Background: Type 2 diabetes mellitus (T2DM) is the most common type of diabetes. In India, the risk factors for diabetes (modifiable and nonmodifiable) are seen more frequently, and there is a lack of awareness about this problem.
    Objective: The objective of the study was to assess the incidence of and risk factors for T2DM in a south Indian tertiary care hospital.
    Materials and Methods: A prospective study was conducted on 1161 subjects (with or without T2DM) from November 2014 to April 2015 in the general medicine department of Dr. Pinnamaneni Siddhartha Institute of Medical Sciences and Research Foundation, Andhra Pradesh, south India. The chi-square test was used to evaluate the incidence of T2DM, and odds ratios were calculated in univariate logistic regression analysis for the risk factors.
    Results: T2DM was significantly more frequent than in non-diabetic subjects among those aged above 41 years (86.3%, P<0.0001), married subjects (95.4%, P=0.002), those educated to degree level and above (13.2%, P<0.0001), those with a known family history (50.8%, P<0.0001), BMI >25 kg/m2 (58.7%, P<0.0001), government job holders (5.5%, P<0.0001), business people (12%, P<0.0001), housewives (38.3%, P<0.0001), those of high economic status (34.9%, P<0.0004), preexisting hypertension (40.2%, P<0.0001), urban residence (50.4%, P<0.0001), physical inactivity (45.3%, P<0.001), stress (61.0%, P=0.01), and consumption of tea and coffee (three or more times daily, 6.3%, P=0.0003), soft drinks (three or more times weekly, 4%, P=0.0008), and junk food (three or more times weekly, 2.6%, P=0.025). Univariate logistic regression analysis showed that age above 41 years, marital status, education, family history, BMI >25 kg/m2, high economic status, comorbidities (hypertension and thyroid disorders), urban residence, physical inactivity, stress, and consumption of tea and coffee (three or more times daily), soft drinks (three or more times weekly), and junk food were significant risk factors for T2DM.
    Conclusion: The present study suggests paying attention to hypertension, thyroid disorders, physical inactivity, stress, soft drinks, and junk food, which are major risk factors for T2DM.
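For a single binary risk factor, the univariate odds ratio reported in studies like this one reduces to the cross-product of a 2x2 exposure/outcome table, and the chi-square test to the standard Pearson statistic. A minimal sketch, with the usual cell convention as an assumption (the counts below are illustrative, not the study's data):

```python
def odds_ratio(a, b, c, d):
    """Odds ratio from a 2x2 table.

    a = exposed cases, b = exposed controls,
    c = unexposed cases, d = unexposed controls.
    """
    return (a * d) / (b * c)

def chi_square_2x2(a, b, c, d):
    """Pearson chi-square for a 2x2 table, without continuity correction."""
    n = a + b + c + d
    return n * (a * d - b * c) ** 2 / ((a + b) * (c + d) * (a + c) * (b + d))
```

With multiple levels or continuous covariates the study's logistic-regression formulation is needed, but for binary factors the fitted univariate odds ratio equals this cross-product ratio.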

    Evaluating the Implementation of Deep Learning in LibreHealth Radiology on Chest X-Rays

    Respiratory diseases are a leading cause of death worldwide. In the US, deaths due to chronic lung infections (mostly pneumonia and tuberculosis), lung cancer, and chronic obstructive pulmonary disease have increased. Timely and accurate diagnosis is therefore imperative to reduce these deaths. The chest X-ray is a vital diagnostic tool for lung disease, but delays in X-ray diagnosis are common, largely because radiographs are arduous to interpret: their complex visual content contains superimposed anatomical structures. A shortage of trained radiologists further increases workload and thus delay. We integrated CheXNet, a neural network algorithm, into the LibreHealth Radiology Information System, which allows physicians to upload chest X-rays and obtain diagnosis probabilities. Uploaded images are evaluated against labels for 14 thoracic diseases. The turnaround time for each evaluation is about 30 seconds, which does not affect clinical workflow. A Python Flask web service uploads radiographs to a GPU server hosting the algorithm, so use of the system is not limited to clients with their own GPU server. To evaluate the model, we randomly split the dataset into training (70%), validation (10%), and test (20%) sets. With over 86% accuracy and turnaround time under 30 seconds, the application demonstrates the feasibility of a web service for machine-learning-based diagnosis of 14 lung pathologies from chest X-rays.
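The 70/10/20 evaluation split mentioned above can be sketched as a reproducible random partition of the image list. The ratios come from the abstract; the function name, seed, and record format are assumptions for illustration:

```python
import random

def split_dataset(items, seed=42):
    """Randomly partition items into 70% train, 10% validation, 20% test.

    Seeded shuffle makes the split reproducible across runs.
    """
    rng = random.Random(seed)
    shuffled = list(items)
    rng.shuffle(shuffled)
    n = len(shuffled)
    n_train = int(0.7 * n)
    n_val = int(0.1 * n)
    train = shuffled[:n_train]
    val = shuffled[n_train:n_train + n_val]
    test = shuffled[n_train + n_val:]
    return train, val, test
```

For chest X-ray datasets, a per-patient split (keeping all images from one patient in the same partition) would be the safer variant, since per-image splits can leak patient-level information between train and test.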