
    Reusing routinely collected clinical data for medical device surveillance

    Background Following the public healthcare scandal surrounding Poly Implant Prothèse breast implants, there is an increased focus on the surveillance of medical devices. A number of clinical specialties in the UK prospectively collect clinical data on the procedures they perform. We explore a surveillance programme that reuses routinely collected data, using prosthetic aortic heart valve implants as a case study. Methods Pseudonymised demographic, comorbidity and operative data from the UK National Adult Cardiac Surgery Audit registry were extracted for all patients undergoing an aortic valve replacement (AVR) operation from 1998 onwards. Rules were developed to classify implants, recorded as free text, by manufacturer, series, model and prosthesis type, and cleaning algorithms were applied to the dataset. Patient outcomes were assessed across implants. Long-term mortality was tracked by record linkage to the Office for National Statistics death register, and surgical re-intervention by reoccurrence in the registry. Results Data on 95,000 AVR operations were extracted. Prosthetic implants were classified into 97 models from ten manufacturers. There were substantial differences in implant volumes by manufacturer, deconstructed into temporal trends, prosthesis types and models, and healthcare providers. Significant differences were observed in outcomes between models; these differences are influenced by case-mix selection bias. Conclusion Reuse of routinely collected clinical data for medical device surveillance is viable and cost-effective. Properly analysed, the data can potentially be used to detect inferior devices, inform manufacturers and clinicians about device quality, supplement research, facilitate the development of (inter)national clinical guidelines for implant choice, and inform businesses and healthcare procurement officers about market access. Linkage to other routinely collected data, including Hospital Episode Statistics, product data and other audits, offers richer surveillance capabilities.
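The rule-based classification of free-text implant entries described in the Methods can be sketched as follows. This is a minimal illustration only: the patterns and the mapped labels here are examples chosen for the sketch, not the registry's actual rules, which the abstract does not publish.

```python
import re

# Illustrative rules mapping free-text registry entries to
# (manufacturer, model). Patterns are assumptions for this sketch.
RULES = [
    (re.compile(r"\bperimount\b", re.I), ("Edwards Lifesciences", "Perimount")),
    (re.compile(r"\bmosaic\b", re.I), ("Medtronic", "Mosaic")),
    (re.compile(r"\bst\.?\s*jude\b", re.I), ("St. Jude Medical", "Standard")),
]

def classify(entry: str):
    """Return (manufacturer, model) for a free-text entry, or None if unmatched."""
    for pattern, label in RULES:
        if pattern.search(entry):
            return label
    return None
```

Unmatched entries (None) would fall into the missing/unmatchable category and can be reviewed manually to extend the rule set.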

    Gold Standard Online Debates Summaries and First Experiments Towards Automatic Summarization of Online Debate Data

    Usage of online textual media is steadily increasing. Daily, more and more news stories, blog posts and scientific articles are added to the online volumes. These are all freely accessible and have been employed extensively in multiple research areas, e.g. automatic text summarization, information retrieval and information extraction. Meanwhile, online debate forums have recently become popular but remain largely unexplored, and there are no sufficient resources of annotated debate data available for conducting research in this genre. In this paper, we collect and annotate debate data for an automatic summarization task. As in extractive gold-standard summary generation, our data contain sentences deemed worthy of inclusion in a summary. Five human annotators performed this task. Inter-annotator agreement, based on semantic similarity, is 36% for Cohen's kappa and 48% for Krippendorff's alpha. Moreover, we implement an extractive summarization system for online debates and discuss prominent features for the task of summarizing online debate data automatically. Comment: accepted and presented at CICLING 2017 - 18th International Conference on Intelligent Text Processing and Computational Linguistics
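For reference, the standard (nominal-label) form of Cohen's kappa mentioned above can be computed as below. Note the paper's agreement figures incorporate semantic similarity between selected sentences; this sketch shows only the plain kappa on exact label matches.

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa for two raters labelling the same items.
    kappa = (p_observed - p_expected) / (1 - p_expected)."""
    assert len(rater_a) == len(rater_b)
    n = len(rater_a)
    # Observed agreement: fraction of items with identical labels.
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Chance agreement: product of each rater's marginal label rates.
    ca, cb = Counter(rater_a), Counter(rater_b)
    expected = sum((ca[l] / n) * (cb[l] / n) for l in set(ca) | set(cb))
    return (observed - expected) / (1 - expected)
```

Perfect agreement gives kappa = 1, while agreement no better than chance gives kappa = 0.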

    Coronary bypass grafting using crossclamp fibrillation does not result in reliable reperfusion of the myocardium when the crossclamp is intermittently released: a prospective cohort study

    BACKGROUND: Cross-clamp fibrillation is a well-established method of performing coronary grafting, but its clinical effect on the myocardium is unknown. We sought to measure these effects clinically using the Khuri intramyocardial pH monitor. METHODS: 50 episodes of cross-clamping were recorded in 16 patients who underwent CABG with cross-clamp fibrillation. An intramyocardial pH probe measured the level of acidosis in the anterior and posterior myocardium in real time. The pH at the start and end of each period of cross-clamping was recorded. RESULTS: The pH of some patients recovered quickly, while that of others entirely failed to recover. The patients were therefore split into two groups according to whether the pH recovered to above 6.8 after the first cross-clamp release (N = 8 in each group). Initial pH was 7.133 (range 6.974–7.239). After the first period of cross-clamping the pH dropped to 6.381 (range 6.034–6.684). The pH in recoverers prior to the second cross-clamp application was 6.990 (range 6.808–7.222), compared to only 6.455 (range 6.200–6.737) in patients whose myocardium did not recover (P < 0.0005). This finding was repeated after the second cross-clamp release (mean pH 7.005 vs 6.537) and the third (mean pH 6.736 vs 6.376). However, prior to separation from bypass the pH was close to the initial pH in both groups (7.062 vs 7.038). CONCLUSION: Cross-clamp fibrillation does not result in reliable reperfusion of the myocardium between periods of cross-clamping.

    The UK Heart Valve Registry web-tool

    Introduction The UK Heart Valve Registry (Taylor, 1997) was established in 1986, but funding for the project was withdrawn in 2004. Since then, the SCTS has collected implant data through the National Adult Cardiac Surgery Audit (NACSA) registry. We developed an interactive web tool that allows users to analyse trends and long-term survival by prosthesis manufacturer, model, valve and time period. Materials and Methods In addition to the clinical data collected in the NACSA registry, valve name and model data are also recorded, but in free-text format. An algorithm was developed to map each AVR record to brand and model. An interactive web tool was built using RStudio Shiny. Trends are shown using Google Charts, and a reactive Kaplan-Meier plot allows survival curves to be displayed and compared using record linkage to the ONS death register. Results The tool currently exists for the aortic valve. There were >8000 distinct entries in the database, which mapped to ~100 valves. Missing data or completely unmatchable valves account for ~17% of the data. In total there are data on >87,000 AVR implants. The web tool gives access to a number of interesting trends in implantation rates, market-share dynamics and survival data. We will demonstrate the app live at the meeting presentation. Discussion Following the PIP (Poly Implant Prothèse) breast implant saga, there has been an increased focus on the monitoring of medical devices. There is a range of prostheses and sub-prostheses, with novel technology emerging, and it is important to be able to track and monitor them. This web tool, which is still in development, begins to address these issues. In time it will be expanded to cover other valves and repair devices. Conclusion The UK Heart Valve Registry was disbanded a decade ago, leaving a large gap in knowledge of the valve market and outcomes. Using routinely collected data and modern web app development tools, we can explore this data. References Taylor K. The United Kingdom Heart Valve Registry: the first 10 years. Heart. 1997 April; 77(4): 295–296.
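The Kaplan-Meier estimate underlying the tool's reactive survival plot can be sketched as below. The tool itself is written in R/Shiny; this is a language-agnostic illustration of the estimator only, with invented example data, not the tool's implementation.

```python
def kaplan_meier(times, events):
    """Kaplan-Meier survival estimate.
    times: follow-up time per patient; events: 1 = death observed, 0 = censored.
    Returns [(event_time, survival_probability), ...]."""
    data = sorted(zip(times, events))
    at_risk = len(data)
    surv = 1.0
    curve = []
    i = 0
    while i < len(data):
        t = data[i][0]
        deaths = leaving = 0
        # Group all patients whose follow-up ends at time t.
        while i < len(data) and data[i][0] == t:
            deaths += data[i][1]
            leaving += 1
            i += 1
        if deaths:
            surv *= 1 - deaths / at_risk  # step down at each event time
            curve.append((t, surv))
        at_risk -= leaving  # censored patients leave the risk set silently
    return curve
```

Censored patients reduce the number at risk without producing a step, which is what distinguishes the estimator from a naive survival fraction.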

    Statistical and data reporting guidelines for the European Journal of Cardio-Thoracic Surgery and the Interactive CardioVascular and Thoracic Surgery

    As part of the peer review process for the European Journal of Cardio-Thoracic Surgery (EJCTS) and the Interactive CardioVascular and Thoracic Surgery (ICVTS), a statistician reviews any manuscript that includes a statistical analysis. To assist authors considering submitting a manuscript, and to clarify the expectations of the statistical reviewers, we present up-to-date guidelines for authors on statistical and data reporting specifically in these journals. The number of statistical methods used in the cardiothoracic literature is vast, as are the ways in which data are presented. Therefore, we narrow the scope of these guidelines to cover the most common applications submitted to the EJCTS and ICVTS, focusing in particular on those that the statistical reviewers most frequently comment on.

    Statistical primer: sample size and power calculations-why, when and how?

    When designing a clinical study, a fundamental aspect is the sample size. In this article, we describe the rationale for sample size calculations, when they should be performed and the components necessary to calculate them. For simple studies, standard formulae can be used; however, for more advanced studies, it is generally necessary to use specialized statistical software and to consult a biostatistician. Sample size calculations for non-randomized studies are also discussed, and two clinical examples are used for illustration.
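For the simple case mentioned above, the standard formula for a two-sample comparison of means (normal approximation) is n per group = 2σ²(z₁₋α/₂ + z₁₋β)² / δ², where δ is the minimum difference to detect and σ the common standard deviation. A minimal sketch using only the Python standard library:

```python
from math import ceil
from statistics import NormalDist

def n_per_group(delta, sigma, alpha=0.05, power=0.8):
    """Sample size per group for a two-sample comparison of means:
    n = 2 * sigma^2 * (z_{1-alpha/2} + z_{1-beta})^2 / delta^2."""
    z = NormalDist().inv_cdf
    n = 2 * sigma ** 2 * (z(1 - alpha / 2) + z(power)) ** 2 / delta ** 2
    return ceil(n)  # round up: a fraction of a patient is not recruitable
```

For example, detecting an effect of half a standard deviation (delta = 0.5, sigma = 1) at 5% significance and 80% power gives 63 patients per group under the normal approximation; exact t-based calculations give a slightly larger figure.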

    EACTS/ESCVS best practice guidelines for reporting treatment results in the thoracic aorta

    Endovascular treatment of the thoracic aorta (TEVAR) is rapidly expanding, with new devices and techniques combined with classical surgical approaches in hybrid procedures. The present guidelines provide a standard format for reporting results of treatment in the thoracic aorta and facilitate the analysis of clinical results across therapeutic approaches. These guidelines specify the essential information and definitions that should be provided in each article about TEVAR:
    - Definitions of disease conditions
    - Extent of the disease
    - Comorbidities
    - Exact demographics of the patient population
    - Description of the procedure performed
    - Devices which were utilized
    - Methods for reporting early and late mortality and morbidity
    - Reinterventions and additional procedures
    - Statistical evaluation
    It is hoped that strict adherence to these criteria will make future publications about TEVAR more comparable and will enable the readership to draw their own, scientifically validated conclusions about the reports.

    Malthusian Reinforcement Learning

    Here we explore a new algorithmic framework for multi-agent reinforcement learning, called Malthusian reinforcement learning, which extends self-play to include fitness-linked population size dynamics that drive ongoing innovation. In Malthusian RL, increases in a subpopulation's average return drive subsequent increases in its size, just as Thomas Malthus argued in 1798 was the relationship between preindustrial income levels and population growth. Malthusian reinforcement learning harnesses the competitive pressures arising from growing and shrinking population sizes to drive agents to explore regions of state and policy spaces that they could not otherwise reach. Furthermore, in environments where there are potential gains from specialization and division of labor, we show that Malthusian reinforcement learning is better positioned to take advantage of such synergies than algorithms based on self-play. Comment: 9 pages, 2 tables, 4 figures
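The fitness-linked size dynamic described above, where a subpopulation's average return drives subsequent changes in its size, can be sketched as a simple replicator-style update. This is an illustrative toy only, not the paper's actual update rule; the `rate` parameter and the fixed-total-population normalisation are assumptions of the sketch, and `rate` should be small enough that sizes stay positive.

```python
def update_sizes(sizes, avg_returns, rate=0.5):
    """One fitness-linked population-size update (illustrative).
    Subpopulations with above-average return grow; below-average shrink.
    Total population is held fixed by rescaling."""
    total = sum(sizes)
    # Population-weighted mean return across all subpopulations.
    mean_r = sum(s * r for s, r in zip(sizes, avg_returns)) / total
    # Grow/shrink each subpopulation in proportion to its return advantage.
    grown = [s * (1 + rate * (r - mean_r)) for s, r in zip(sizes, avg_returns)]
    scale = total / sum(grown)
    return [g * scale for g in grown]
```

Iterating this update shifts agents toward higher-return subpopulations, creating the crowding pressure the framework exploits.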