2,066 research outputs found

    Ten years of the International Patient Decision Aid Standards Collaboration: evolution of the core dimensions for assessing the quality of patient decision aids

    In 2003, the International Patient Decision Aid Standards (IPDAS) Collaboration was established to enhance the quality and effectiveness of patient decision aids through an evidence-informed framework for improving their content, development, implementation, and evaluation. Over this 10-year period, the Collaboration has established: a) the background document on 12 core dimensions to inform the original modified Delphi process used to create the IPDAS checklist (74 items); b) the valid and reliable IPDAS instrument (47 items); and c) the IPDAS qualifying (6 items), certifying (6 items + 4 items for screening), and quality criteria (28 items). The objective of this paper is to describe the evolution of the IPDAS Collaboration and discuss the standardized process used to update the background documents on the theoretical rationales, evidence, and emerging issues underlying the 12 core dimensions for assessing the quality of patient decision aids. © 2013 Volk et al; licensee BioMed Central Ltd.

    Pharmacological risk factors associated with hospital readmission rates in a psychiatric cohort identified using prescriptome data mining

    Background: Worldwide, over 14% of individuals hospitalized for psychiatric reasons are readmitted to hospital within 30 days after discharge. Predicting patients at risk and leveraging accelerated interventions can reduce the rates of early readmission, a negative clinical outcome (i.e., a treatment failure) that affects the quality of life of patients. To implement individualized interventions, it is necessary to predict which individuals are at highest risk for 30-day readmission. In this study, our aim was to conduct a data-driven investigation to find the pharmacological factors influencing 30-day all-cause, intra- and interdepartmental readmissions after an index psychiatric admission, using the compendium of prescription data (prescriptome) from electronic medical records (EMR). Methods: The data scientists in the project received a deidentified database from the Mount Sinai Data Warehouse, which was used to perform all analyses. Data were stored in a secured MySQL database, normalized, and indexed using a unique hexadecimal identifier associated with the data for psychiatric illness visits. We used Bayesian logistic regression models to evaluate the association of prescription data with 30-day readmission risk. We constructed individual models and compiled results after adjusting for covariates, including drug exposure, age, and gender. We also performed a digital comorbidity survey using EMR data, combined with an estimation of shared genetic architecture using genomic annotations to disease phenotypes. Results: Using an automated, data-driven approach, we identified prescription medications, side effects (primary side effects), and drug-drug interaction-induced side effects (secondary side effects) associated with readmission risk in a cohort of 1275 patients using prescriptome analytics. We identified 28 drugs associated with risk for readmission among psychiatric patients. Based on prescription data, pravastatin had the highest risk of readmission (OR = 13.10; 95% CI (2.82, 60.8)). We also identified enrichment of primary side effects (n = 4006) and secondary side effects (n = 36) induced by prescription drugs in the subset of readmitted patients (n = 89) compared to the non-readmitted subgroup (n = 1186). Digital comorbidity analyses and shared genetic analyses further reveal that cardiovascular disease and psychiatric conditions are comorbid and share functional gene modules (cardiomyopathy and anxiety disorder: shared genes, n = 37; P = 1.06815E-06). Conclusions: Large-scale prescriptome data are now available from EMRs and accessible for analytics that could improve healthcare outcomes. Such analyses could also drive hypothesis-driven and data-driven research. In this study, we explored the utility of prescriptome data to identify factors driving readmission in a psychiatric cohort. Converging digital health data from EMRs with systems biology investigations reveals that the subset of patients with significant cardiovascular comorbidities is more likely to be readmitted. Further, the genetic architecture of psychiatric illness also suggests overlap with cardiovascular disease. In summary, assessing medications, side effects, and drug-drug interactions in a clinical setting, together with genomic information, via a data mining approach could help identify factors that lower readmission rates in patients with mental illness.
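    The per-drug analysis above lends itself to a compact illustration. A minimal sketch follows, assuming a pandas DataFrame with hypothetical columns ('readmitted_30d', 'age', 'gender', plus one binary exposure column per drug); it uses a plain maximum-likelihood logistic regression from statsmodels as a stand-in for the paper's Bayesian models, returning the odds ratio and 95% CI for one drug.

    import numpy as np
    import pandas as pd
    import statsmodels.api as sm

    def drug_readmission_odds(visits: pd.DataFrame, drug: str):
        """Regress 30-day readmission on exposure to `drug`, adjusted
        for age and gender; return the odds ratio and its 95% CI."""
        X = sm.add_constant(visits[[drug, "age", "gender"]])
        fit = sm.Logit(visits["readmitted_30d"], X).fit(disp=0)
        odds_ratio = np.exp(fit.params[drug])
        lo, hi = np.exp(fit.conf_int().loc[drug])
        return odds_ratio, (lo, hi)

    # One model per drug, compiled into a ranked table, e.g.:
    # results = {d: drug_readmission_odds(visits, d) for d in drug_cols}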

    Fast PCA for processing calcium-imaging data from the brain of Drosophila melanogaster

    Background: The calcium-imaging technique allows us to record movies of brain activity in the antennal lobe of the fruitfly Drosophila melanogaster, a brain compartment dedicated to processing information about odors. Signal processing, e.g. with source separation techniques, can be slow on the large movie datasets. Method: We have developed an approximate Principal Component Analysis (PCA) for fast dimensionality reduction. The method samples relevant pixels from the movies, such that PCA can be performed on a smaller matrix. Utilising a priori knowledge about the nature of the data, we minimise the risk of missing important pixels. Results: Our method allows for fast approximate computation of PCA with adaptive resolution and running time. Utilising a priori knowledge about the data enables us to concentrate more biological signal in a small pixel sample than a general sampling method based on vector norms. Conclusions: Fast dimensionality reduction with approximate PCA removes a computational bottleneck and leads to running-time improvements for subsequent algorithms. Once in PCA space, we can efficiently perform source separation, e.g. to detect biological signals in the movies or to remove artifacts.
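    As a rough illustration of the sampling idea, the sketch below treats the movie as a (frames × pixels) matrix, keeps the pixels with the highest temporal variance (a hypothetical stand-in for the paper's a priori selection criterion), and runs ordinary PCA on the smaller matrix.

    import numpy as np
    from sklearn.decomposition import PCA

    def approx_pca(movie: np.ndarray, n_pixels: int, n_components: int):
        score = movie.var(axis=0)            # crude per-pixel activity score
        idx = np.argsort(score)[-n_pixels:]  # keep the most active pixels
        pca = PCA(n_components=n_components)
        frames = pca.fit_transform(movie[:, idx])  # PCA on the smaller matrix
        return frames, idx, pca

    # e.g. frames, idx, pca = approx_pca(movie, n_pixels=2000, n_components=20)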

    A method for discovering and inferring appropriate eligibility criteria in clinical trial protocols without labeled data

    BACKGROUND: We consider the user task of designing clinical trial protocols and propose a method that discovers and outputs the most appropriate eligibility criteria from a potentially huge set of candidates. Each document d in our collection D is a clinical trial protocol which itself contains a set of eligibility criteria. Given a small set of sample documents D' ⊂ D that a user has initially identified as relevant (e.g., via a user query interface), our scoring method automatically suggests eligibility criteria from D by ranking them according to how appropriate they are to the clinical trial protocol currently being designed. Appropriateness is measured by the degree to which they are consistent with the user-supplied sample documents D'. METHOD: We propose a novel three-step method called LDALR which views documents as a mixture of latent topics. First, we infer the latent topics in the sample documents using Latent Dirichlet Allocation (LDA). Next, we use logistic regression models to compute the probability that a given candidate criterion belongs to a particular topic. Lastly, we score each criterion by computing its expected value: the probability-weighted sum of the topic proportions inferred from the set of sample documents. Intuitively, the greater the probability that a candidate criterion belongs to the topics that are dominant in the samples, the higher its expected value or score. RESULTS: Our experiments have shown that LDALR is 8 and 9 times better (resp., for inclusion and exclusion criteria) than randomly choosing from a set of candidates obtained from relevant documents. In user simulation experiments using LDALR, we were able to automatically construct eligibility criteria that are on average 75% and 70% similar (resp., for inclusion and exclusion criteria) to the correct eligibility criteria. CONCLUSIONS: We have proposed LDALR, a practical method for discovering and inferring appropriate eligibility criteria in clinical trial protocols without labeled data. Results from our experiments suggest that LDALR models can be used to effectively find appropriate eligibility criteria from a large repository of clinical trial protocols.
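    The expected-value scoring in the third step can be sketched in a few lines. The simplified variant below substitutes the LDA posterior topic distribution of each candidate criterion for the paper's per-topic logistic regression models, so it illustrates the scoring idea rather than the published implementation.

    import numpy as np
    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.decomposition import LatentDirichletAllocation

    def score_criteria(sample_docs, candidates, n_topics=20):
        vec = CountVectorizer(stop_words="english")
        X = vec.fit_transform(sample_docs)
        lda = LatentDirichletAllocation(n_components=n_topics, random_state=0)
        # Topic proportions dominant in the user-supplied samples D'.
        sample_topics = lda.fit_transform(X).mean(axis=0)
        # Expected value: probability-weighted sum of sample topic proportions.
        cand_topics = lda.transform(vec.transform(candidates))
        return cand_topics @ sample_topics  # higher = more appropriate

    # ranked = sorted(zip(score_criteria(D_prime, C), C), reverse=True)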

    A Longitudinal Feature Selection Method Identifies Relevant Genes to Distinguish Complicated Injury and Uncomplicated Injury Over Time

    Background: Feature selection and gene set analysis are of increasing interest in the field of bioinformatics. While these two approaches have been developed for different purposes, we describe how some gene set analysis methods can be utilized to conduct feature selection. Methods: We adopted a gene set analysis method, the significance analysis of microarray gene set reduction (SAMGSR) algorithm, to carry out feature selection for longitudinal gene expression data. Results: Using a real-world application and simulated data, it is demonstrated that the proposed SAMGSR extension outperforms other relevant methods. In this study, we illustrate that a gene’s expression profiles over time can be regarded as a gene set and then a suitable gene set analysis method can be utilized directly to select relevant genes associated with the phenotype of interest over time. Conclusions: We believe this work will motivate more research to bridge feature selection and gene set analysis, with the development of novel algorithms capable of carrying out feature selection for longitudinal gene expression data
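    The core idea (not the SAMGSR reduction step itself, which is not reproduced here) can be sketched as follows: treat each gene's time course as a gene set, compute a per-timepoint group-comparison statistic, and aggregate it into one score per gene. The squared t-statistic sum below is a hypothetical stand-in for SAMGSR's statistic, and `expr` is assumed to be a (samples × genes × timepoints) array.

    import numpy as np
    from scipy import stats

    def longitudinal_gene_scores(expr: np.ndarray, pheno: np.ndarray):
        cases, ctrls = expr[pheno == 1], expr[pheno == 0]
        t, _ = stats.ttest_ind(cases, ctrls, axis=0)  # (genes x timepoints)
        return np.sqrt((t ** 2).sum(axis=1))          # one score per gene

    # selected = np.argsort(scores)[-k:]  # top-k genes across all timepoints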

    Data mining of audiology patient records: factors influencing the choice of hearing aid type

    Background: This paper describes the analysis of a database of over 180,000 patient records, collected from over 23,000 patients, by the hearing aid clinic at James Cook University Hospital in Middlesbrough, UK. These records consist of audiograms (graphs of the faintest sounds audible to the patient at six different pitches), categorical data (such as age, gender, diagnosis and hearing aid type) and brief free-text notes made by the technicians. These data are mined to determine which factors contribute to the decision to fit a BTE (worn behind the ear) hearing aid as opposed to an ITE (worn in the ear) hearing aid. Methods: From PCA (principal component analysis), four main audiogram types are determined and related to the type of hearing aid chosen. The effects of age, gender, diagnosis, masker, mould and individual audiogram frequencies are combined into a single model by means of logistic regression. Some significant keywords are also discovered in the free-text fields by using the chi-squared (χ²) test, and these can also be used in the model. The final model can act as a decision support tool to help decide whether an individual patient should be offered a BTE or an ITE hearing aid. Results: The final model was tested using 5-fold cross validation, and was able to replicate the decisions of audiologists whether to fit an ITE or a BTE hearing aid with precision in the range 0.79 to 0.87. Conclusions: A decision support system was produced to predict the type of hearing aid which should be prescribed, with an explanation facility explaining how each decision was arrived at. This system should prove useful in providing a "second opinion" for audiologists.
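    The final model combines PCA-reduced audiograms with the categorical factors in a logistic regression, validated by 5-fold cross validation. A minimal sketch follows; the array layout and variable names are illustrative, not taken from the actual database schema, and the keyword features from the χ² analysis are omitted for brevity.

    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import cross_val_score
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler

    # audiograms: (patients x 6) thresholds; factors: numerically encoded
    # age, gender, diagnosis, masker, mould; y: 1 for BTE, 0 for ITE.
    def fit_bte_ite_model(audiograms, factors, y):
        pcs = PCA(n_components=4).fit_transform(audiograms)  # 4 audiogram types
        X = np.hstack([pcs, factors])
        model = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
        prec = cross_val_score(model, X, y, cv=5, scoring="precision")
        return model.fit(X, y), prec  # fitted model and per-fold precision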