12 research outputs found

    THE CMJ REVIEWS THE GIANT HEALTH EVENT 2017

    No full text

    The Small Molecule Components of Human Synovial Fluid and Bovine Calf Serum that Correlate with Cobalt Chrome (CoCrMo) Wear

    No full text
    Abstract Background: Implant wear in joint replacements is influenced by the chemical and physical properties of human synovial fluid (HSF). In vitro testing for implant wear uses 25% weight bovine calf serum (25BCS) as a substitute for HSF, owing to its similar rheology and total protein content. However, previous studies have shown differences in macromolecular composition. We aimed to evaluate the differences in small molecule composition between the fluids and correlate these differences with their effects on implant material wear. Methods: HSF was harvested from osteoarthritis patients undergoing primary knee replacement (n=14). Nuclear magnetic resonance (NMR) spectroscopy with linear regression modelling was used to analyse the metabolites present in HSF and in commercially sourced 25BCS and to investigate the differences. Wear properties of the fluids were evaluated using a validated quantitative laboratory bench test in which a cobalt/chromium/molybdenum (CoCrMo) ball oscillated against a CoCrMo disc, and the resulting wear scar was analysed. The variation in metabolite levels in both HSF and 25BCS was correlated with the wear properties of the fluids. Results: Differences in the levels of metabolites, lipids, and glycosaminoglycans (GAG) were observed between HSF and 25BCS; significance was confirmed by O-PLS-DA (p<0.05). The wear of CoCrMo was found to correlate strongly with the macromolecules GAG and proteins that potentially bind to glucose and citrate. Conclusions: The small molecule concentration differences between the fluids question the validity of 25BCS as a model for wear analysis. The demonstration of variable metabolites in HSF that correlate with material wear has implications for implant failure and for targeted therapeutic manipulation of these metabolites. Trial Registration: Ethical approval was granted by the NRES Committee London, Chelsea, UK, on 12 May 2015. The study was performed in accordance with the ethical standards of the 1964 Declaration of Helsinki.
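The wear-correlation step described in the abstract above — relating per-sample metabolite levels measured by NMR to bench-test wear — can be sketched as a Pearson correlation plus a least-squares fit. The numbers below are illustrative placeholders, not data from the study, and the choice of GAG level versus wear scar area as the variable pair is an assumption for the sketch.

```python
import numpy as np

# Hypothetical per-sample data: relative GAG level (NMR peak area, a.u.)
# and measured wear scar area (mm^2) for each test fluid sample.
gag_level = np.array([0.9, 1.1, 1.4, 1.8, 2.0, 2.3, 2.7, 3.1])
wear_area = np.array([0.30, 0.34, 0.41, 0.52, 0.55, 0.63, 0.70, 0.78])

# Pearson correlation between metabolite level and wear.
r = np.corrcoef(gag_level, wear_area)[0, 1]
print(f"Pearson r = {r:.3f}")

# Simple linear regression (least squares) of wear on metabolite level,
# mirroring the linear regression modelling mentioned in the abstract.
slope, intercept = np.polyfit(gag_level, wear_area, 1)
print(f"wear ~= {slope:.3f} * GAG + {intercept:.3f}")
```

In practice each candidate metabolite would be screened this way (with multiple-comparison control) before declaring a strong correlate of wear.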

    Differences between infected and noninfected synovial fluid

    No full text
    Aims The diagnosis of joint infections is an inexact science using combinations of blood inflammatory markers and microscopy, culture, and sensitivity of synovial fluid (SF). There is potential for small molecule metabolites in infected SF to act as infection markers that could improve accuracy and speed of detection. The objective of this study was to use nuclear magnetic resonance (NMR) spectroscopy to identify small molecule differences between infected and noninfected human SF. Methods In all, 16 SF samples (eight infected native and prosthetic joints plus eight noninfected joints requiring arthroplasty for end-stage osteoarthritis) were collected from patients. NMR spectroscopy was used to analyze the metabolites present in each sample. Principal component analysis and univariate statistical analysis were undertaken to investigate metabolic differences between the two groups. Results A total of 16 metabolites were found in significantly different concentrations between the groups. Three were in higher relative concentrations (lipids, cholesterol, and N-acetylated molecules) and 13 in lower relative concentrations in the infected group (citrate, glycine, glycosaminoglycans, creatinine, histidine, lysine, formate, glucose, proline, valine, dimethylsulfone, mannose, and glutamine). Conclusion Metabolites found in significantly greater concentrations in the infected cohort are markers of inflammation and infection. They play a role in lipid metabolism and the inflammatory response. Those found in significantly reduced concentrations were involved in carbohydrate metabolism, nucleoside metabolism, the glutamate metabolic pathway, increased oxidative stress in the diseased state, and reduced articular cartilage breakdown. 
This is the first study to demonstrate differences in the metabolic profile of infected and noninfected human SF, using a noninfected matched cohort, and may represent putative biomarkers that form the basis of new diagnostic tests for infected SF. Cite this article: Bone Joint Res 2021;10(1):85–95.
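The analysis pipeline this abstract describes — per-metabolite univariate testing between the two groups, plus principal component analysis of the sample-by-metabolite matrix — can be sketched as follows. The simulated concentrations, group means, and the three metabolite names chosen are assumptions for illustration, not the study's data.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Hypothetical NMR data: rows = 16 samples (8 infected, 8 noninfected),
# columns = relative concentrations of three metabolites. Lipids raised
# and glucose/citrate lowered in the infected group, as the abstract reports.
infected = rng.normal([1.4, 0.6, 0.5], 0.1, size=(8, 3))
noninfected = rng.normal([1.0, 1.0, 1.0], 0.1, size=(8, 3))
X = np.vstack([infected, noninfected])

# Univariate comparison: two-sample t-test per metabolite.
for j, name in enumerate(["lipids", "glucose", "citrate"]):
    t, p = stats.ttest_ind(infected[:, j], noninfected[:, j])
    print(f"{name}: t = {t:.2f}, p = {p:.4g}")

# Principal component analysis via SVD of the mean-centred matrix;
# the sample scores on the leading components separate the groups.
Xc = X - X.mean(axis=0)
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
scores = U * S
```

A real metabolomics workflow would add normalisation, scaling, and multiple-testing correction before interpreting the per-metabolite p-values.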

    Using a Secure, Continually Updating, Web Source Processing Pipeline to Support the Real-Time Data Synthesis and Analysis of Scientific Literature: Development and Validation Study (Preprint)

    No full text
    BACKGROUND The scale and quality of the global scientific response to the COVID-19 pandemic have unquestionably saved lives. However, the COVID-19 pandemic has also triggered an unprecedented “infodemic”; the velocity and volume of data production have overwhelmed many key stakeholders such as clinicians and policy makers, as they have been unable to process structured and unstructured data for evidence-based decision making. Solutions that aim to alleviate this data synthesis–related challenge are unable to capture heterogeneous web data in real time for the production of concomitant answers and are not based on the high-quality information in responses to a free-text query. OBJECTIVE The main objective of this project is to build a generic, real-time, continuously updating curation platform that can support the data synthesis and analysis of a scientific literature framework. Our secondary objective is to validate this platform and the curation methodology for COVID-19–related medical literature by expanding the COVID-19 Open Research Dataset via the addition of new, unstructured data. METHODS To create an infrastructure that addresses our objectives, the PanSurg Collaborative at Imperial College London has developed a unique data pipeline based on a web crawler extraction methodology. This data pipeline uses a novel curation methodology that adopts a human-in-the-loop approach for the characterization of quality, relevance, and key evidence across a range of scientific literature sources. RESULTS REDASA (Realtime Data Synthesis and Analysis) is now one of the world’s largest and most up-to-date sources of COVID-19–related evidence; it consists of 104,000 documents. By capturing curators’ critical appraisal methodologies through the discrete labeling and rating of information, REDASA rapidly developed a foundational, pooled, data science data set of over 1400 articles in under 2 weeks. 
These articles provide COVID-19–related information and represent around 10% of all papers about COVID-19. CONCLUSIONS This data set can act as ground truth for the future implementation of a live, automated systematic review. The three benefits of REDASA’s design are as follows: (1) it adopts a user-friendly, human-in-the-loop methodology by embedding an efficient, user-friendly curation platform into a natural language processing search engine; (2) it provides a curated data set in the JavaScript Object Notation format for experienced academic reviewers’ critical appraisal choices and decision-making methodologies; and (3) due to the wide scope and depth of its web crawling method, REDASA has already captured one of the world’s largest COVID-19–related data corpora for searches and curation.
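The abstract mentions a curated data set delivered in JavaScript Object Notation, with discrete labels and ratings captured from curators. A minimal sketch of what one such record and its round-trip serialization might look like — the field names and rating scale here are assumptions for illustration, not the actual REDASA schema:

```python
import json

# Illustrative curation record; field names and scales are assumptions,
# not the published REDASA schema.
record = {
    "document_id": "doc-000123",
    "source_url": "https://example.org/covid-paper",
    "title": "Example COVID-19 study",
    "curation": {
        "relevance": "high",        # discrete label assigned by a curator
        "quality_rating": 4,        # e.g. on a 1-5 scale
        "key_evidence": "Reports outcomes for hospitalised patients.",
    },
}

# Serialize for storage/exchange, then parse back to verify fidelity.
serialized = json.dumps(record, indent=2)
restored = json.loads(serialized)
print(serialized)
```

Storing curator decisions as structured records like this is what allows the labels to serve later as ground truth for an automated systematic review.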

    Using a Secure, Continually Updating, Web Source Processing Pipeline to Support the Real-Time Data Synthesis and Analysis of Scientific Literature: Development and Validation Study

    No full text

    Using a secure, continually updating, web source processing pipeline to support the real-time data synthesis and analysis of scientific literature: development and validation study

    Get PDF

    Tocilizumab in patients admitted to hospital with COVID-19 (RECOVERY): a randomised, controlled, open-label, platform trial

    No full text

    Aspirin in patients admitted to hospital with COVID-19 (RECOVERY): a randomised, controlled, open-label, platform trial

    No full text