
    Assessing the accuracy of an inter-institutional automated patient-specific health problem list

    Background: Health problem lists are a key component of electronic health records and are instrumental in the development of decision-support systems that encourage best practices and optimal patient safety. Most health problem lists require initial clinical information to be entered manually, and few integrate information across care providers and institutions. This study assesses the accuracy of a novel approach to creating an inter-institutional automated health problem list in a computerized medical record (MOXXI) that integrates three sources of information for an individual patient: diagnostic codes from medical-services claims from all treating physicians, therapeutic indications from electronic prescriptions, and single-indication drugs.

    Methods: Data for this study were obtained from 121 general practitioners and all medical services provided for 22,248 of their patients. At the opening of a patient's file, all health problems detected through medical-service utilization or single-indication drug use were flagged to the physician in the MOXXI system. Each newly arising health problem was presented as 'potential', and physicians were prompted to specify whether the health problem was valid (Y) or not (N), or whether they preferred to reassess its validity at a later time.

    Results: A total of 263,527 health problems, representing 891 unique problems, were identified for the group of 22,248 patients. Medical-services claims contributed the majority of problems identified (77%), followed by therapeutic indications from electronic prescriptions (14%) and single-indication drugs (9%). Physicians actively chose to assess 41.7% (n = 106,950) of health problems. Overall, 73% of the problems assessed were considered valid; 42% originated from medical-service diagnostic codes, 11% from single-indication drugs, and 47% from prescription indications. Twelve percent of problems identified through other treating physicians were considered valid, compared to 28% identified through study-physician claims.

    Conclusion: Automation of an inter-institutional problem list added over half of all validated problems to the health problem list, of which 12% were generated by conditions treated by other physicians. Automating the integration of existing information sources provides timely access to accurate and relevant health problem information. It may also accelerate the uptake and use of electronic medical record systems.
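    A minimal Python sketch of the kind of source integration described above, not the MOXXI implementation itself: the input structures (claims, prescriptions, single_indication_drugs) and their field names are hypothetical, and every merged entry is flagged as 'potential' pending physician validation.

        from collections import defaultdict

        def build_candidate_problem_list(claims, prescriptions, single_indication_drugs):
            """Merge three hypothetical information sources into a per-patient
            list of 'potential' health problems awaiting physician validation."""
            raw = defaultdict(list)  # patient_id -> [(problem, source), ...]

            # 1. Diagnostic codes from medical-services claims of all treating physicians.
            for claim in claims:
                raw[claim["patient_id"]].append((claim["diagnostic_code"], "claim"))

            for rx in prescriptions:
                # 2. Therapeutic indications documented on electronic prescriptions.
                if rx.get("indication"):
                    raw[rx["patient_id"]].append((rx["indication"], "prescription indication"))
                # 3. A drug with a single approved indication implies the underlying problem.
                implied = single_indication_drugs.get(rx["drug_code"])
                if implied:
                    raw[rx["patient_id"]].append((implied, "single-indication drug"))

            # Deduplicate per patient, keeping the first source that surfaced each problem.
            candidate_list = {}
            for patient_id, entries in raw.items():
                seen = {}
                for problem, source in entries:
                    seen.setdefault(problem, source)
                candidate_list[patient_id] = [
                    {"problem": p, "source": s, "status": "potential"}
                    for p, s in seen.items()
                ]
            return candidate_list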

    Automated Detection of Systematic Off-label Drug Use in Free Text of Electronic Medical Records.

    Off-label use of a drug occurs when it is used in a manner that deviates from its FDA label. Studies estimate that 21% of prescriptions are off-label, with only 27% of those uses supported by evidence of safety and efficacy. We have developed methods to detect population-level off-label usage using computationally efficient annotation of free text from clinical notes to generate features encoding empirical information about drug-disease mentions. By including additional features encoding prior knowledge about drugs, diseases, and known usage, we trained a highly accurate predictive model that was used to detect novel candidate off-label usages in a very large clinical corpus. We show that the candidate uses are plausible and can be prioritized for further analysis in terms of safety and efficacy.
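    A minimal Python sketch of the general idea, not the authors' pipeline: mention counts stand in for the empirical features annotated from clinical notes, the known_usage set stands in for prior knowledge about established drug-disease usage, and a scikit-learn random forest stands in for the predictive model. All names and features below are illustrative assumptions.

        import numpy as np
        from sklearn.ensemble import RandomForestClassifier

        def featurize(pair, mention_counts, cooccurrence_counts, known_usage):
            """Encode one (drug, disease) pair as empirical + prior-knowledge features."""
            drug, disease = pair
            return [
                mention_counts.get(drug, 0),                  # drug mentions in the corpus
                mention_counts.get(disease, 0),               # disease mentions in the corpus
                cooccurrence_counts.get((drug, disease), 0),  # joint mentions in the same note
                int((drug, disease) in known_usage),          # prior knowledge: established usage
            ]

        def train_model(labeled_pairs, labels, mention_counts, cooccurrence_counts, known_usage):
            X = np.array([featurize(p, mention_counts, cooccurrence_counts, known_usage)
                          for p in labeled_pairs])
            return RandomForestClassifier(n_estimators=200, random_state=0).fit(X, np.array(labels))

        def rank_candidates(model, candidate_pairs, mention_counts, cooccurrence_counts, known_usage):
            """Score unlabeled (drug, disease) pairs; high-scoring pairs are candidate
            off-label usages to prioritize for safety and efficacy review."""
            X = np.array([featurize(p, mention_counts, cooccurrence_counts, known_usage)
                          for p in candidate_pairs])
            scores = model.predict_proba(X)[:, 1]
            return sorted(zip(candidate_pairs, scores), key=lambda t: t[1], reverse=True)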

    Deep Learning for the Radiographic Detection of Periodontal Bone Loss

    We applied deep convolutional neural networks (CNNs) to detect periodontal bone loss (PBL) on panoramic dental radiographs. We synthesized a set of 2001 image segments from panoramic radiographs. Our reference test was the measured % of PBL. A deep feed-forward CNN was trained and validated via 10-times repeated group shuffling. Model architectures and hyperparameters were tuned using grid search. The final model was a seven-layer deep neural network, parameterized by a total of 4,299,651 weights. For comparison, six dentists assessed the image segments for PBL. Averaged over 10 validation folds, the mean (SD) classification accuracy of the CNN was 0.81 (0.02). Mean (SD) sensitivity and specificity were 0.81 (0.04) and 0.81 (0.05), respectively. The mean (SD) accuracy of the dentists was 0.76 (0.06), but the CNN was not statistically significantly superior to the examiners (p = 0.067, t-test). Mean sensitivity and specificity of the dentists were 0.92 (0.02) and 0.63 (0.14), respectively. A CNN trained on a limited amount of radiographic image segments showed at least similar discrimination ability as dentists for assessing PBL on panoramic radiographs. Dentists’ diagnostic efforts when using radiographs may be reduced by applying machine-learning-based technologies.
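    A minimal PyTorch sketch of a small feed-forward CNN for binary PBL classification; the layer sizes, input resolution, and training setup below are illustrative assumptions, not the paper's tuned seven-layer, 4,299,651-parameter architecture.

        import torch
        import torch.nn as nn

        class SmallPBLNet(nn.Module):
            def __init__(self):
                super().__init__()
                # Three convolution/pooling stages over grayscale radiograph segments.
                self.features = nn.Sequential(
                    nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
                    nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
                    nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
                )
                self.classifier = nn.Sequential(
                    nn.Flatten(),
                    nn.Linear(64 * 16 * 16, 128), nn.ReLU(),
                    nn.Linear(128, 1),  # single logit: PBL present after sigmoid
                )

            def forward(self, x):   # x: (batch, 1, 128, 128) image segments
                return self.classifier(self.features(x))

        # Training would minimize binary cross-entropy against the reference test
        # (% bone loss thresholded to a binary label).
        model = SmallPBLNet()
        criterion = nn.BCEWithLogitsLoss()
        optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

    In the study itself, architectural choices and hyperparameters of this kind were selected by grid search and evaluated with 10-times repeated group shuffling, as noted above.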

    A method to quantify residents' jargon use during counseling of standardized patients about cancer screening

    Background: Jargon is a barrier to effective patient-physician communication, especially when health literacy is low or the topic is complicated. Jargon is addressed by medical schools and residency programs, but reducing jargon usage by the many physicians already in practice may require the population-scale methods used in Quality Improvement.

    Objective: To assess the amount of jargon used and explained during discussions about prostate or breast cancer screening. Effective communication is recommended before screening for prostate or breast cancer because of the large number of false-positive results and the possible complications from evaluation or treatment.

    Participants: Primary care internal medicine residents.

    Measurements: Transcripts of 86 conversations between residents and standardized patients were abstracted using an explicit-criteria data dictionary. Time lag from jargon words to explanations was measured using “statements,” each of which contains one subject and one predicate.

    Results: Duplicate abstraction revealed reliability κ = 0.92. The average number of unique jargon words per transcript was 19.6 (SD = 6.1); the total jargon count was 53.6 (SD = 27.2). There was an average of 4.5 jargon explanations per transcript (SD = 2.3). The ratio of explained to total jargon was 0.15. When jargon was explained, the average time lag from the first usage to the explanation was 8.4 statements (SD = 13.4).

    Conclusions: The large number of jargon words and low number of explanations suggest that many patients may not understand counseling about cancer screening tests. Educational programs and faculty development courses should continue to discourage jargon usage. The methods presented here may be useful for feedback and quality improvement efforts.
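    A minimal Python sketch of the measurement idea; JARGON_TERMS and EXPLANATIONS are tiny illustrative stand-ins for the study's explicit-criteria data dictionary, and sentence splitting is used as a crude proxy for the "statement" unit (one subject plus one predicate).

        import re

        JARGON_TERMS = {"psa", "biopsy", "mammogram"}                    # illustrative only
        EXPLANATIONS = {"psa": "blood test that checks the prostate"}    # illustrative only

        def split_statements(transcript):
            """Crude proxy for 'statements': split on sentence-ending punctuation."""
            return [s.strip() for s in re.split(r"[.?!]", transcript) if s.strip()]

        def jargon_metrics(transcript):
            statements = split_statements(transcript)
            first_use, explained_at = {}, {}
            unique_terms, total_count = set(), 0

            for i, stmt in enumerate(statements):
                low = stmt.lower()
                for term in JARGON_TERMS:
                    if term in low:                    # count each term once per statement
                        total_count += 1
                        unique_terms.add(term)
                        first_use.setdefault(term, i)
                for term, lay_phrase in EXPLANATIONS.items():
                    if lay_phrase in low:              # lay phrasing counts as an explanation
                        explained_at.setdefault(term, i)

            lags = [explained_at[t] - first_use[t] for t in explained_at if t in first_use]
            return {
                "unique_jargon": len(unique_terms),
                "total_jargon": total_count,
                "explanations": len(explained_at),
                "mean_lag_statements": sum(lags) / len(lags) if lags else None,
            }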