
    Impact of cardiometabolic multimorbidity and ethnicity on cardiovascular/renal complications in patients with COVID-19

    OBJECTIVE: Using a large national database of people hospitalised with COVID-19, we investigated the contribution of cardiometabolic conditions, multimorbidity and ethnicity to the risk of in-hospital cardiovascular complications and death. METHODS: A multicentre, prospective cohort study in 302 UK healthcare facilities of adults hospitalised with COVID-19 between 6 February 2020 and 16 March 2021. Logistic models were used to explore associations between baseline patient ethnicity, cardiometabolic conditions and multimorbidity (0, 1, 2, >2 conditions), and in-hospital cardiovascular complications (heart failure, arrhythmia, cardiac ischaemia, cardiac arrest, coagulation complications, stroke), renal injury and death. RESULTS: Of 65 624 patients hospitalised with COVID-19, 44 598 (68.0%) reported at least one cardiometabolic condition on admission. Cardiovascular/renal complications or death occurred in 24 609 (38.0%) patients. Baseline cardiometabolic conditions were independently associated with increased odds of in-hospital complications, and this risk increased in the presence of cardiometabolic multimorbidity. For example, compared with having no cardiometabolic conditions, 1, 2 or ≥3 conditions was associated with 1.46 (95% CI 1.39 to 1.54), 2.04 (95% CI 1.93 to 2.15) and 3.10 (95% CI 2.92 to 3.29) times higher odds of any cardiovascular/renal complication, respectively. A similar pattern was observed for all-cause death. Compared with the white group, the South Asian (OR 1.19, 95% CI 1.10 to 1.29) and black (OR 1.53, 95% CI 1.37 to 1.72) ethnic groups had a higher risk of any cardiovascular/renal complication. CONCLUSIONS: In hospitalised patients with COVID-19, cardiovascular/renal complications or death affected just under half of all patients, with the highest risk in those of South Asian or black ethnicity and in patients with cardiometabolic multimorbidity.
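The odds ratios and confidence intervals quoted above come from exponentiating logistic-regression coefficients and their Wald limits. A minimal sketch of that arithmetic, using a hypothetical coefficient and standard error (chosen for illustration, not taken from the study):

```python
import math

def odds_ratio_ci(beta, se, z=1.96):
    """Odds ratio with 95% Wald CI from a logistic coefficient and its standard error."""
    return (math.exp(beta),
            math.exp(beta - z * se),
            math.exp(beta + z * se))

# Hypothetical coefficient/SE for "1 cardiometabolic condition" vs none
or_, lo, hi = odds_ratio_ci(beta=0.378, se=0.026)
print(f"OR {or_:.2f} (95% CI {lo:.2f} to {hi:.2f})")
```

Because the model is fitted on the log-odds scale, the CI is symmetric around the coefficient but asymmetric around the odds ratio, as in the intervals reported above.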

    Onset and window of SARS-CoV-2 infectiousness and temporal correlation with symptom onset: a prospective, longitudinal, community cohort study.

    BACKGROUND: Knowledge of the window of SARS-CoV-2 infectiousness is crucial in developing policies to curb transmission. Mathematical modelling based on scarce empirical evidence and key assumptions has driven isolation and testing policy, but real-world data are needed. We aimed to characterise infectiousness across the full course of infection in a real-world community setting. METHODS: The Assessment of Transmission and Contagiousness of COVID-19 in Contacts (ATACCC) study was a UK prospective, longitudinal, community cohort of contacts of newly diagnosed, PCR-confirmed SARS-CoV-2 index cases. Household and non-household exposed contacts aged 5 years or older were eligible for recruitment if they could provide informed consent and agree to self-swabbing of the upper respiratory tract. The primary objective was to define the window of SARS-CoV-2 infectiousness and its temporal correlation with symptom onset. We quantified viral RNA load by RT-PCR and infectious viral shedding by enumerating cultivable virus daily across the course of infection. Participants completed a daily diary to track the emergence of symptoms. Outcomes were assessed with empirical data and a phenomenological Bayesian hierarchical model. FINDINGS: Between Sept 13, 2020, and March 31, 2021, we enrolled 393 contacts from 327 households (the SARS-CoV-2 pre-alpha and alpha variant waves); and between May 24, 2021, and Oct 28, 2021, we enrolled 345 contacts from 215 households (the delta variant wave). 173 of these 738 contacts were PCR positive for more than one timepoint, 57 of which were at the start of infection and comprised the final study population. The onset and end of infectious viral shedding were captured in 42 cases and the median duration of infectiousness was 5 (IQR 3-7) days. Although 24 (63%) of 38 cases had PCR-detectable virus before symptom onset, only seven (20%) of 35 shed infectious virus presymptomatically. 
Symptom onset was a median of 3 days before both peak viral RNA and peak infectious viral load (viral RNA IQR 3-5 days, n=38; plaque-forming units IQR 3-6 days, n=35). Notably, 22 (65%) of 34 cases and eight (24%) of 34 cases continued to shed infectious virus 5 days and 7 days post-symptom onset, respectively (survival probabilities 67% and 35%). Correlation of lateral flow device (LFD) results with infectious viral shedding was poor during the viral growth phase (sensitivity 67% [95% CI 59-75]), but high during the decline phase (92% [86-96]). Infectious virus kinetic modelling suggested that the initial rate of viral replication determines the course of infection and infectiousness. INTERPRETATION: Less than a quarter of COVID-19 cases shed infectious virus before symptom onset; under a crude 5-day self-isolation period from symptom onset, two-thirds of cases released into the community would still be infectious, but with reduced infectious viral shedding. Our findings support a role for LFDs to safely accelerate deisolation but not for early diagnosis, unless used daily. These high-resolution, community-based data provide evidence to inform infection control guidance. FUNDING: National Institute for Health and Care Research.
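Sensitivity figures such as the LFD estimates above are binomial proportions, with (in the simplest case) a normal-approximation confidence interval. A minimal sketch using hypothetical counts, not the study's raw data:

```python
import math

def sensitivity_ci(true_pos, total_pos, z=1.96):
    """Sensitivity with a normal-approximation (Wald) 95% CI, clipped to [0, 1]."""
    p = true_pos / total_pos
    half = z * math.sqrt(p * (1 - p) / total_pos)
    return p, max(0.0, p - half), min(1.0, p + half)

# Hypothetical counts: 100 of 150 infectious-phase samples LFD-positive
sens, lo, hi = sensitivity_ci(100, 150)
print(f"sensitivity {sens:.0%} (95% CI {lo:.0%}-{hi:.0%})")
```

For small counts or proportions near 0% or 100%, an exact (Clopper-Pearson) or Wilson interval would be preferred over this simple approximation.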

    Prostate-specific antigen at or before age 50 as a predictor of advanced prostate cancer diagnosed up to 25 years later: A case-control study

    BACKGROUND: Based on a large, representative unscreened cohort from Malmö, Sweden, we have recently reported that a single prostate-specific antigen (PSA) measurement at or before age 50 is a strong predictor of prostate cancer occurring up to 25 years subsequently. We aimed to determine whether this association holds for advanced cancers, defined as clinical stage T3 or higher, or skeletal metastasis at the time of the cancer diagnosis. METHODS: In 1974-1986, blood samples were obtained from a cohort of 21,277 men aged up to 50. Through 1999, 498 men were diagnosed with prostate cancer, and of these 161 had locally advanced or metastatic prostate cancers. Three controls, matched for age and date of venipuncture, were selected for each case. Conditional logistic regression was used to test associations between molecular markers and advanced cancer. RESULTS: Median time from venipuncture to diagnosis was 17 years. Levels of all PSA forms and hK2 were associated with case status. Total PSA was a strong and statistically significant predictor of subsequent advanced cancer (area under the curve 0.791; p < 0.0005). Two-thirds of the advanced cancer cases occurred in men in the top 20% of PSA levels (0.9 ng/ml or higher). CONCLUSION: A single PSA test taken at or before age 50 is a very strong predictor of advanced prostate cancer diagnosed up to 25 years later. This suggests the possibility of using an early PSA test to risk-stratify patients so that men at highest risk are the focus of the most intensive screening efforts.
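The reported area under the curve (AUC 0.791) can be read as the probability that a randomly chosen case has a higher PSA level than a randomly chosen control; it equals the Mann-Whitney U statistic divided by the number of case-control pairs. A minimal sketch on hypothetical PSA values:

```python
def auc(case_values, control_values):
    """AUC as P(case > control), counting ties as half (Mann-Whitney U / (n*m))."""
    wins = 0.0
    for x in case_values:
        for y in control_values:
            if x > y:
                wins += 1.0
            elif x == y:
                wins += 0.5
    return wins / (len(case_values) * len(control_values))

# Hypothetical PSA levels (ng/ml)
cases = [1.2, 0.9, 2.5, 0.7]
controls = [0.4, 0.6, 0.8, 0.5]
print(auc(cases, controls))  # 15 of 16 case-control pairs correctly ordered -> 0.9375
```

An AUC of 0.5 corresponds to a useless marker; 1.0 to perfect discrimination.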

    Detection and quantification of antibody to SARS CoV 2 receptor binding domain provides enhanced sensitivity, specificity and utility

    Accurate and sensitive detection of antibody to SARS-CoV-2 remains an essential component of the pandemic response. Measuring antibody that predicts neutralising activity and the vaccine response is an absolute requirement for laboratory-based confirmatory and reference activity. The viral receptor binding domain (RBD) constitutes the prime target antigen for neutralising antibody. A double antigen binding assay (DABA), which provides the most sensitive format, has been exploited in a novel hybrid manner employing a solid-phase S1 preferentially presenting RBD, coupled with a labelled RBD conjugate, used in a two-step sequential assay for detection and measurement of antibody to RBD (anti-RBD). This class- and species-neutral assay showed a specificity of 100% on 825 pre-COVID-19 samples and a potential sensitivity of 99.6% on 276 recovery samples, predicting quantitatively the presence of neutralising antibody determined by pseudo-type neutralisation and by plaque reduction. Anti-RBD is also measurable in ferrets immunised with ChAdOx1 nCoV-19 vaccine and in humans immunised with both AstraZeneca and Pfizer vaccines. This assay detects anti-RBD at presentation with illness, demonstrates its elevation with disease severity, its sequel to asymptomatic infection and its persistence after the loss of antibody to the nucleoprotein (anti-NP). It also provides serological confirmation of prior infection and offers a secure measure for seroprevalence and studies of vaccine immunisation in human and animal populations. The hybrid DABA also displays the attributes necessary for the detection and quantification of anti-RBD to be used in clinical practice. An absence of detectable anti-RBD by this assay predicates the need for passive immune prophylaxis in at-risk patients.

    Outcomes research in the development and evaluation of practice guidelines

    BACKGROUND: Practice guidelines have been developed in response to the observation that variations exist in clinical medicine that are not related to variations in the clinical presentation and severity of the disease. Despite their widespread use, however, practice guideline evaluation lacks a rigorous scientific methodology to support its development and application. DISCUSSION: Firstly, we review the major epidemiological foundations of practice guideline development. Secondly, we propose a chronic disease epidemiological model in which practice patterns are viewed as the exposure, and outcomes of interest such as quality or cost are viewed as the disease. Sources of selection, information, confounding and temporal trend bias are identified and discussed. SUMMARY: The proposed methodological framework for outcomes research to evaluate practice guidelines reflects the selection, information and confounding biases inherent in its observational nature, which must be accounted for in both the design and the analysis phases of any outcomes research study.

    Follow-up of patients with curatively resected colorectal cancer: a practice guideline

    BACKGROUND: A systematic review was conducted to evaluate the literature regarding the impact of follow-up on colorectal cancer patient survival and, in a second phase, recommendations were developed. METHODS: The MEDLINE, CANCERLIT, and Cochrane Library databases, and abstracts published in the 1997 to 2002 proceedings of the annual meeting of the American Society of Clinical Oncology were systematically searched for evidence. Study selection was limited to randomized trials and meta-analyses that examined different programs of follow-up after curative resection of colorectal cancer where five-year overall survival was reported. External review by Ontario practitioners was obtained through a mailed survey. Final approval of the practice guideline report was obtained from the Practice Guidelines Coordinating Committee. RESULTS: Six randomized trials and two published meta-analyses of follow-up were obtained. Of six randomized trials comparing one follow-up program to a more intense program, only two individual trials detected a statistically significant survival benefit favouring the more intense follow-up program. Pooling of all six randomized trials demonstrated a significant improvement in survival favouring more intense follow-up (relative risk ratio 0.80; 95% CI 0.70 to 0.91; p = 0.0008). Although the rate of recurrence was similar in both follow-up groups, asymptomatic recurrences and re-operations for cure of recurrences were more common in patients with more intensive follow-up. Trials including CEA monitoring and liver imaging also had significant results, whereas trials not including these tests did not. CONCLUSION: Follow-up programs for patients with curatively resected colorectal cancer do improve survival. These follow-up programs include frequent visits and performance of blood CEA, chest x-rays, liver imaging and colonoscopy; however, it is not clear which tests or frequency of visits is optimal.
There is a suggestion that improved survival is due to diagnosis of recurrence at an earlier, asymptomatic stage, which allows for more curative resection of recurrence. Based on this evidence and consideration of the biology of colorectal cancer and present practices, a guideline was developed. Patients should be made aware of the risk of disease recurrence or second bowel cancer, the potential benefits of follow-up and the uncertainties requiring further clinical trials. For patients at high risk of recurrence (stages IIb and III), clinical assessment is recommended when symptoms occur or at least every 6 months for the first 3 years, and yearly for at least 5 years. At the time of those visits, patients may have blood CEA, chest x-ray and liver imaging. For patients at lower risk of recurrence (stages I and Ia) or those with co-morbidities impairing future surgery, only yearly visits, or visits when symptoms occur, are recommended. All patients should have a colonoscopy before or within 6 months of initial surgery, repeated yearly if villous or tubular adenomas >1 cm are found; otherwise, repeat every 3 to 5 years. All patients having recurrences should be assessed by a multidisciplinary team in a cancer centre.
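The pooled estimate above (relative risk 0.80, 95% CI 0.70 to 0.91) is the type of result produced by a fixed-effect, inverse-variance meta-analysis on the log scale. A minimal sketch with hypothetical trial-level relative risks and confidence intervals (not the six trials' actual data):

```python
import math

def pool_fixed_effect(trials):
    """Fixed-effect inverse-variance pooling of relative risks.

    Each entry is (RR, ci_low, ci_high); the SE of log(RR) is recovered
    from the CI width: se = (log(hi) - log(lo)) / (2 * 1.96).
    """
    num = den = 0.0
    for rr, lo, hi in trials:
        se = (math.log(hi) - math.log(lo)) / (2 * 1.96)
        w = 1.0 / se ** 2          # weight = 1 / variance of log(RR)
        num += w * math.log(rr)
        den += w
    pooled_log = num / den
    half = 1.96 / math.sqrt(den)   # SE of pooled log(RR) = 1 / sqrt(sum of weights)
    return (math.exp(pooled_log),
            math.exp(pooled_log - half),
            math.exp(pooled_log + half))

# Hypothetical trial-level results: individually non-significant, pooled effect clearer
trials = [(0.75, 0.55, 1.02), (0.90, 0.70, 1.16), (0.78, 0.60, 1.01)]
print(pool_fixed_effect(trials))
```

This illustrates why pooling can reach significance when no individual trial does: the pooled variance shrinks as the weights accumulate, narrowing the combined interval.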

    Policymakers' experience of a capacity-building intervention designed to increase their use of research: A realist process evaluation

    Background: An intervention’s success depends on how participants interact with it in local settings. Process evaluation examines these interactions, indicating why an intervention was or was not effective, and how it (and similar interventions) can be improved for better contextual fit. This is particularly important for innovative trials like Supporting Policy In health with Research: an Intervention Trial (SPIRIT), where causal mechanisms are poorly understood. SPIRIT was testing a multi-component intervention designed to increase the capacity of health policymakers to use research. Methods: Our mixed-methods process evaluation sought to explain variation in observed process effects across the six agencies that participated in SPIRIT. Data collection included observations of intervention workshops (n = 59), purposively sampled interviews (n = 76) and participant feedback forms (n = 553). Using a realist approach, data were coded for context-mechanism-process effect configurations (retroductive analysis) by two authors. Results: Intervention workshops were very well received. There was greater variation of views regarding other aspects of SPIRIT, such as data collection, communication and the intervention’s overall value. We identified nine inter-related mechanisms that were crucial for engaging participants in these policy settings: (1) Accepting the premise (agreeing with the study’s assumptions); (2) Self-determination (participative choice); (3) The Value Proposition (seeing potential gain); (4) ‘Getting good stuff’ (identifying useful ideas, resources or connections); (5) Self-efficacy (believing ‘we can do this!’); (6) Respect (feeling that SPIRIT understands and values one’s work); (7) Confidence (believing in the study’s integrity and validity); (8) Persuasive leadership (authentic and compelling advocacy from leaders); and (9) Strategic insider facilitation (local translation and mediation).
These findings were used to develop tentative explanatory propositions and to revise the programme theory. Conclusion: This paper describes how SPIRIT functioned in six policy agencies, including why strategies that worked well in one site were less effective in others. Findings indicate a complex interaction between participants’ perception of the intervention, shifting contextual factors, and the form that the intervention took in each site. Our propositions provide transferable lessons about contextualised areas of strength and weakness that may be useful in the development and implementation of similar studies.

    Complications related to deep venous thrombosis prophylaxis in trauma: a systematic review of the literature

    Deep venous thrombosis (DVT) prophylaxis is essential to the appropriate management of multisystem trauma patients. Without thromboprophylaxis, the rate of venous thrombosis and subsequent pulmonary embolism is substantial. Three prophylactic modalities are common: pharmacologic anticoagulation, mechanical compression devices, and inferior vena cava filtration. A systematic review was completed using PRISMA guidelines to evaluate the potential complications of DVT prophylactic options. Level one evidence currently supports the use of low molecular weight heparins for thromboprophylaxis in the trauma patient. Unfortunately, multiple techniques are often required for complex multisystem trauma patients, and each modality has potential complications. The risks of heparin include bleeding and heparin-induced thrombocytopenia. Mechanical compression devices can result in local soft tissue injury, bleeding and patient non-compliance. Inferior vena cava filters can migrate, cause inferior vena cava occlusion, and penetrate the vessel wall. While the use of these techniques can be life-saving, they must be appropriately utilized.

    A922 Sequential measurement of 1 hour creatinine clearance (1-CRCL) in critically ill patients at risk of acute kidney injury (AKI)

    Meeting abstract.