18 research outputs found

    Robotic proctectomy for rectal cancer: analysis of 71 patients from a single institution

    Full text link
    Background: Despite increasing use of robotic surgery for rectal cancer, few series have been published from the practice of generalizable US surgeons. Methods: A retrospective chart review was performed for 71 consecutive patients who underwent robotic low anterior resection (LAR) or abdominoperineal resection (APR) for rectal adenocarcinoma between 2010 and 2014. Results: 46 LARs (65%) and 25 APRs (35%) were identified. Median procedure time was 219 minutes (IQR 184–275) and mean blood loss was 164.9 cc (SD 155.9 cc). The radial margin was negative in 70/71 (99%) patients. Total mesorectal excision integrity was complete or near complete in 38/39 (97%) of graded specimens. A mean of 16.8 (SD ± 8.9) lymph nodes were retrieved. At a median follow-up of 21.9 months, there were no local recurrences. Conclusions: Robotic proctectomy for rectal cancer was introduced into typical colorectal surgery practice by a single surgeon, with a low conversion rate, low complication rate, and satisfactory oncologic outcomes.
    Peer Reviewed
    https://deepblue.lib.umich.edu/bitstream/2027.42/139933/1/rcs1841_am.pdf
    https://deepblue.lib.umich.edu/bitstream/2027.42/139933/2/rcs1841.pd

    An eHARS Dashboard for State HIV Surveillance

    Get PDF
    State HIV offices routinely produce fact sheets, epidemiologic profiles, and other reports from the eHARS (Enhanced HIV/AIDS Reporting System) database. Because the eHARS software is used throughout the United States with limited variability between states, software developed to analyze and visualize data using the eHARS database schema may be useful to many state HIV offices. The R software environment was used to create a data dashboard for the eHARS database schema. The eharsDash package imports data from eHARS into the R environment and then analyzes and visualizes the data.
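    The import-summarize-visualize pattern described above can be sketched as follows. The actual eharsDash package is written in R; this Python/pandas version is only an illustration under assumed inputs, and the export file name and column name (diagnosis_year) are hypothetical rather than the real eHARS schema.

```python
# Illustrative sketch only: the real eharsDash package is an R package.
# File name and column names below are hypothetical, not the eHARS schema.
import pandas as pd
import matplotlib.pyplot as plt

def load_ehars_export(path: str) -> pd.DataFrame:
    """Read a flat-file export of person-level surveillance records."""
    return pd.read_csv(path)

def diagnoses_by_year(df: pd.DataFrame) -> pd.Series:
    """Count new diagnoses per year, the kind of table a fact sheet reports."""
    return df.groupby("diagnosis_year").size()

if __name__ == "__main__":
    records = load_ehars_export("ehars_export.csv")  # hypothetical export file
    counts = diagnoses_by_year(records)
    counts.plot(kind="bar", title="New diagnoses by year")
    plt.tight_layout()
    plt.show()
```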

    Assessing document section heterogeneity across multiple electronic health record systems for computational phenotyping: A case study of heart-failure phenotyping algorithm.

    No full text
    Background: The incorporation of information from clinical narratives is critical for computational phenotyping. The accurate interpretation of clinical terms depends highly on their associated context, especially the corresponding clinical section information. However, heterogeneity across different Electronic Health Record (EHR) systems poses challenges in utilizing the section information. Objectives: Leveraging the eMERGE heart failure (HF) phenotyping algorithm, we assessed this heterogeneity quantitatively by comparing the performance of machine learning (ML) classifiers that map clinical sections containing HF-relevant terms across different EHR systems to standard sections in the Health Level 7 (HL7) Clinical Document Architecture (CDA). Methods: We experimented with both random forest models with sentence-embedding features and bidirectional encoder representations from transformers (BERT) models. We trained the ML classifiers using an automatically labeled corpus from an EHR system that adopted the HL7 CDA standard. We assessed performance using a blind test set (n = 300) from the same EHR system and a gold standard (n = 900) manually annotated from three other EHR systems. Results: The F-measure of these ML models varied widely (0.00–0.91), indicating that ML classifiers with a single tuning-parameter set were insufficient to capture sections across different EHR systems. The error analysis indicates that section content does not always comply with the corresponding standardized sections, leading to low performance. Conclusions: We presented the potential use of ML techniques to map the sections containing HF-relevant terms in multiple EHR systems to standard sections. However, the findings suggest that the quality and heterogeneity of section structure across different EHRs affect downstream applications, owing to poor adoption of documentation standards.
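    A minimal sketch of the section-mapping setup described above: embed the text of a clinical section, then train a random forest to assign an HL7 CDA standard section label. The embedding model, example snippets, labels, and hyperparameters below are placeholder assumptions, not the study's actual configuration or corpus.

```python
# Hedged sketch of section mapping with sentence embeddings + random forest.
# Model choice, snippets, and labels are placeholders, not the paper's setup.
from sentence_transformers import SentenceTransformer
from sklearn.ensemble import RandomForestClassifier

sections = [
    "HPI: patient reports worsening dyspnea on exertion",            # hypothetical snippets
    "Echo shows LVEF 35%, consistent with systolic dysfunction",
    "Current medications include lisinopril and furosemide",
]
cda_labels = ["History of Present Illness", "Results", "Medications"]  # target standard sections

encoder = SentenceTransformer("all-MiniLM-L6-v2")   # assumed embedding model
X = encoder.encode(sections)                        # one dense vector per section

clf = RandomForestClassifier(n_estimators=200, random_state=0)
clf.fit(X, cda_labels)

new_section = ["Assessment: acute on chronic heart failure exacerbation"]
print(clf.predict(encoder.encode(new_section)))
```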

    Identification of delirium from real-world electronic health record clinical notes

    Get PDF
    Introduction: We tested the ability of our natural language processing (NLP) algorithm to identify delirium episodes in a large-scale study using real-world clinical notes. Methods: We used the Rochester Epidemiology Project to identify persons ≥ 65 years who were hospitalized between 2011 and 2017. We identified all persons with an International Classification of Diseases (ICD) code for delirium within ±14 days of a hospitalization. We independently applied our NLP algorithm to all clinical notes for the same population. We calculated rates using the number of delirium episodes as the numerator and the number of hospitalizations as the denominator. Rates were estimated overall, by demographic characteristics, and by year of episode, and differences were tested using Poisson regression. Results: In total, 14,255 persons had 37,554 hospitalizations between 2011 and 2017. The code-based delirium rate was 3.02 per 100 hospitalizations (95% CI: 2.85, 3.20). The NLP-based rate was 7.36 per 100 (95% CI: 7.09, 7.64). Rates increased with age (both p < 0.0001). Code-based rates were higher in men than in women (p = 0.03), but NLP-based rates were similar by sex (p = 0.89). Code-based rates were similar by race and ethnicity, but NLP-based rates were higher in the White population than in the Black and Asian populations (p = 0.001). Both types of rates increased significantly over time (both p values < 0.001). Conclusions: The NLP algorithm identified more delirium episodes than the ICD code method. However, NLP may still underestimate delirium cases because of limitations in real-world clinical notes, including incomplete documentation, practice changes over time, and missing clinical notes in some time periods.
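    A minimal sketch of the rate calculation and Poisson comparison described above, using made-up counts rather than the Rochester Epidemiology Project data; the grouping variables and numbers below are illustrative only.

```python
# Hedged sketch: episodes per 100 hospitalizations, then a Poisson model with a
# log(hospitalizations) offset to test for group differences. Counts are invented.
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Hypothetical aggregate counts by sex and period (not the study's data).
df = pd.DataFrame({
    "sex": ["F", "F", "M", "M"],
    "period": ["2011-2013", "2014-2017", "2011-2013", "2014-2017"],
    "episodes": [250, 370, 205, 310],
    "hospitalizations": [9500, 10500, 8300, 9254],
})

# Rate per 100 hospitalizations, as reported in the abstract.
df["rate_per_100"] = 100 * df["episodes"] / df["hospitalizations"]
print(df)

# Poisson regression with an offset tests whether rates differ by sex.
model = smf.glm(
    "episodes ~ sex",
    data=df,
    family=sm.families.Poisson(),
    offset=np.log(df["hospitalizations"]),
).fit()
print(model.summary())
```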

    The IMPACT framework and implementation for accessible in silico clinical phenotyping in the digital era

    No full text
    Clinical phenotyping is often a foundational requirement for obtaining the datasets needed to develop digital health applications. Traditionally done via manual abstraction, this task is often a development bottleneck because of its time and cost requirements, which has raised significant interest in accomplishing it via in silico means. Nevertheless, current in silico phenotyping development tends to focus on a single phenotyping task, resulting in a dearth of reusable tools supporting cross-task, generalizable in silico phenotyping. In addition, in silico phenotyping remains largely inaccessible to a substantial portion of potentially interested users. Here, we highlight the barriers to the use of in silico phenotyping and potential solutions, in the form of a framework of several desiderata observed during our implementation of such tasks. In addition, we introduce an example implementation of this framework as a software application, with a focus on ease of adoption, cross-task reusability, and facilitating the clinical phenotyping algorithm development process.

    BETA: a comprehensive benchmark for computational drug–target prediction

    No full text
    Internal validation is the most popular evaluation strategy for drug-target predictive models. Simple random shuffling in cross-validation, however, is not always ideal for large, diverse, and copious datasets, as it can introduce bias. As a result, these predictive models cannot be comprehensively evaluated to provide insight into their general performance across a variety of use cases (e.g., permutations of different levels of connectivity and categories in drug and target space, as well as validations based on different data sources). In this work, we introduce a benchmark, BETA, that aims to address this gap by (i) providing an extensive multipartite network consisting of 0.97 million biomedical concepts and 8.5 million associations, in addition to 62 million drug-drug and protein-protein similarities, and (ii) presenting evaluation strategies that reflect seven cases (i.e., general, screening with different connectivity, target and drug screening based on categories, searching for specific drugs and targets, and drug repurposing for specific diseases), a total of seven Tests comprising 344 Tasks across multiple sampling and validation strategies. Six state-of-the-art methods, covering two broad input data types (chemical structure- and gene sequence-based, and network-based), were tested across all of the developed Tasks. The best- and worst-performing cases were analyzed to demonstrate the ability of the proposed benchmark to identify limitations of the tested methods on the benchmark tasks. The results highlight BETA as a benchmark for selecting computational strategies for drug repurposing and target discovery.
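    The evaluation-strategy point above (that a plain random split can bias results) can be illustrated with a toy example: a random split lets the same drug appear in both training and test pairs, whereas a group-wise "cold drug" split holds out every pair for the unseen drugs. The drug-target pairs below are invented and do not come from BETA.

```python
# Toy illustration of random vs. group-wise ("cold drug") splits for
# drug-target pairs; the pairs are made up, not BETA data.
from sklearn.model_selection import train_test_split, GroupShuffleSplit

pairs = [("drugA", "T1"), ("drugA", "T2"), ("drugB", "T1"),
         ("drugB", "T3"), ("drugC", "T2"), ("drugC", "T3")]
drugs = [d for d, _ in pairs]

# Random split: pairs for the same drug can land on both sides,
# which can inflate apparent accuracy.
train, test = train_test_split(pairs, test_size=0.33, random_state=0)
print("random test:", test)

# Cold-drug split: every pair for a held-out drug goes to the test set.
splitter = GroupShuffleSplit(n_splits=1, test_size=0.33, random_state=0)
train_idx, test_idx = next(splitter.split(pairs, groups=drugs))
print("cold-drug test:", [pairs[i] for i in test_idx])
```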

    The overview of assessing heterogeneity of clinical sections across three electronic health records using embedding-based machine learning approaches.

    No full text
    EHR = Electronic health record; HL7-CDA = Health Level 7 Clinical Document Architecture; ML = Machine learning; RF = Random forest; BERT = Bidirectional Encoder Representations from Transformers; GEC = General Electric Centricity EHR.