    Evolution of long-term vaccine-induced and hybrid immunity in healthcare workers after different COVID-19 vaccine regimens

    BACKGROUND: Both infection and vaccination, alone or in combination, generate antibody and T cell responses against severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2). However, the maintenance of such responses, and hence protection from disease, requires careful characterization. In a large prospective study of UK healthcare workers (HCWs) (Protective Immunity from T Cells in Healthcare Workers [PITCH], within the larger SARS-CoV-2 Immunity and Reinfection Evaluation [SIREN] study), we previously observed that prior infection strongly affected subsequent cellular and humoral immunity induced after long and short dosing intervals of BNT162b2 (Pfizer/BioNTech) vaccination. METHODS: Here, we report longer follow-up of 684 HCWs in this cohort over 6-9 months following two doses of BNT162b2 or AZD1222 (Oxford/AstraZeneca) vaccination and up to 6 months following a subsequent mRNA booster vaccination. FINDINGS: We make three observations: first, the dynamics of humoral and cellular responses differ; binding and neutralizing antibodies declined, whereas T and memory B cell responses were maintained after the second vaccine dose. Second, vaccine boosting restored immunoglobulin (Ig) G levels; broadened neutralizing activity against variants of concern, including Omicron BA.1, BA.2, and BA.5; and boosted T cell responses above the 6-month level after dose 2. Third, prior infection maintained its impact, driving larger and broader T cell responses compared with never-infected people, a feature maintained until 6 months after the third dose. CONCLUSIONS: Broadly cross-reactive T cell responses are well maintained over time, especially in those with combined vaccine- and infection-induced immunity ("hybrid" immunity), and may contribute to continued protection against severe disease
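
    As a hedged illustration of the antibody waning described above: one common way to summarize such a decline is an exponential-decay half-life estimated between two sampling timepoints. The function and values below are assumptions for illustration only, not PITCH/SIREN data or the study's actual analysis.

```python
# Hedged sketch: summarizing antibody decline as an exponential-decay half-life.
# The titres and timepoints are placeholders, not data from the study above.
import math

def half_life_days(titre_t1: float, titre_t2: float, days_between: float) -> float:
    """Half-life (in days) assuming exponential decay between two measurements."""
    decay_rate = math.log(titre_t1 / titre_t2) / days_between
    return math.log(2) / decay_rate

# Example: a titre falling from 1200 to 300 AU/ml over 120 days implies a ~60-day half-life.
print(round(half_life_days(1200, 300, 120)))  # 60
```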

    Erratum to: Methods for evaluating medical tests and biomarkers

    [This corrects the article DOI: 10.1186/s41512-016-0001-y.]

    T-cell and antibody responses to first BNT162b2 vaccine dose in previously infected and SARS-CoV-2-naive UK health-care workers: a multicentre prospective cohort study

    Background Previous infection with SARS-CoV-2 affects the immune response to the first dose of the SARS-CoV-2 vaccine. We aimed to compare SARS-CoV-2-specific T-cell and antibody responses in health-care workers with and without previous SARS-CoV-2 infection following a single dose of the BNT162b2 (tozinameran; Pfizer–BioNTech) mRNA vaccine. Methods We sampled health-care workers enrolled in the PITCH study across four hospital sites in the UK (Oxford, Liverpool, Newcastle, and Sheffield). All health-care workers aged 18 years or older consenting to participate in this prospective cohort study were included, with no exclusion criteria applied. Blood samples were collected where possible before vaccination and 28 (±7) days following one or two doses (given 3–4 weeks apart) of the BNT162b2 vaccine. Previous infection was determined by a documented SARS-CoV-2-positive RT-PCR result or the presence of positive anti-SARS-CoV-2 nucleocapsid antibodies. We measured spike-specific IgG antibodies and quantified T-cell responses by interferon-γ enzyme-linked immunospot assay in all participants where samples were available at the time of analysis, comparing SARS-CoV-2-naive individuals to those with previous infection. Findings Between Dec 9, 2020, and Feb 9, 2021, 119 SARS-CoV-2-naive and 145 previously infected health-care workers received one dose, and 25 SARS-CoV-2-naive health-care workers received two doses, of the BNT162b2 vaccine. In previously infected health-care workers, the median time from previous infection to vaccination was 268 days (IQR 232–285). At 28 days (IQR 27–33) after a single dose, the spike-specific T-cell response measured in fresh peripheral blood mononuclear cells (PBMCs) was higher in previously infected (n=76) than in infection-naive (n=45) health-care workers (median 284 [IQR 150–461] vs 55 [IQR 24–132] spot-forming units [SFUs] per 10⁶ PBMCs; p<0·0001). With cryopreserved PBMCs, the T-cell response in previously infected individuals (n=52) after one vaccine dose was equivalent to that of infection-naive individuals (n=19) after receiving two vaccine doses (median 152 [IQR 119–275] vs 162 [104–258] SFUs per 10⁶ PBMCs; p=1·00). Anti-spike IgG antibody responses following a single dose in 142 previously infected health-care workers (median 270 373 [IQR 203 461–535 188] antibody units [AU] per mL) were higher than in 111 infection-naive health-care workers following one dose (35 001 [17 099–55 341] AU/mL; p<0·0001) and higher than in 25 infection-naive individuals given two doses (180 904 [108 221–242 467] AU/mL; p<0·0001). Interpretation A single dose of the BNT162b2 vaccine is likely to provide greater protection against SARS-CoV-2 infection in individuals with previous SARS-CoV-2 infection than in SARS-CoV-2-naive individuals, including against variants of concern. Future studies should determine the additional benefit of a second dose on the magnitude and durability of immune responses in individuals vaccinated following infection, alongside evaluation of the impact of extending the interval between vaccine doses. Funding UK Department of Health and Social Care, and UK Coronavirus Immunology Consortium
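
    As a hedged sketch of the kind of between-group comparison summarized above (medians, IQRs, and p-values for SFU counts): the specific statistical test is not stated in this summary, so the Mann-Whitney U test used below is an assumption, and the arrays are placeholder values rather than study data.

```python
# Hedged sketch: comparing spot-forming-unit (SFU) counts between previously
# infected and infection-naive groups. Test choice is an assumption; the
# arrays are placeholders, not data from the study above.
import numpy as np
from scipy.stats import mannwhitneyu

previously_infected_sfu = np.array([150, 284, 310, 461, 220])  # SFU per 10^6 PBMCs (placeholder)
infection_naive_sfu = np.array([24, 55, 80, 132, 40])          # SFU per 10^6 PBMCs (placeholder)

stat, p_value = mannwhitneyu(previously_infected_sfu, infection_naive_sfu,
                             alternative="two-sided")
print(f"median infected: {np.median(previously_infected_sfu):.0f}, "
      f"median naive: {np.median(infection_naive_sfu):.0f}, p = {p_value:.3f}")
```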

    Immunogenicity of standard and extended dosing intervals of BNT162b2 mRNA vaccine

    Extension of the interval between vaccine doses for the BNT162b2 mRNA vaccine was introduced in the United Kingdom to accelerate population coverage with a single dose. At this time, trial data were lacking, and we addressed this in a study of United Kingdom healthcare workers. The first vaccine dose induced protection against infection with the circulating alpha (B.1.1.7) variant over several weeks. In a substudy of 589 individuals, we show that this single dose induces severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) neutralizing antibody (NAb) responses and a sustained B and T cell response to the spike protein. NAb levels were higher after the extended dosing interval (6–14 weeks) compared with the conventional 3- to 4-week regimen, accompanied by enrichment of CD4+ T cells expressing interleukin-2 (IL-2). Prior SARS-CoV-2 infection amplified and accelerated the response. These data on dynamic cellular and humoral responses indicate that extension of the dosing interval is an effective immunogenic protocol

    Outcomes following small bowel obstruction due to malignancy in the national audit of small bowel obstruction

    Introduction Patients with cancer who develop small bowel obstruction are at high risk of malnutrition and morbidity following compromise of gastrointestinal tract continuity. This study aimed to characterise current management and outcomes following malignant small bowel obstruction. Methods A prospective, multicentre cohort study of patients with small bowel obstruction who presented to UK hospitals between 16th January and 13th March 2017. Patients who presented with small bowel obstruction due to primary tumours of the intestine (excluding left-sided colonic tumours) or disseminated intra-abdominal malignancy were included. Outcomes included 30-day mortality and in-hospital complications. Cox proportional hazards models were used to generate adjusted effect estimates, which are presented as hazard ratios (HR) alongside the corresponding 95% confidence interval (95% CI). The threshold for statistical significance was set at P ≤ 0.05 a priori. Results 205 patients with malignant small bowel obstruction presented to emergency surgery services during the study period. Of these patients, 50 had obstruction due to right-sided colon cancer, 143 due to disseminated intra-abdominal malignancy, 10 had primary tumours of the small bowel, and 2 had gastrointestinal stromal tumours. In total, 100 of 205 patients underwent a surgical intervention for obstruction. The 30-day in-hospital mortality rate was 11.3% for those with primary tumours and 19.6% for those with disseminated malignancy. Severe risk of malnutrition was an independent predictor of mortality in this cohort (adjusted HR 16.18, 95% CI 1.86 to 140.84, p = 0.012). Patients with right-sided colon cancer had high rates of morbidity. Conclusions Mortality rates were high in patients with disseminated malignancy and in those with right-sided colon cancer. Further research should identify the optimal management strategy to reduce morbidity for these patient groups
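
    For readers unfamiliar with the modelling mentioned above, the following is a minimal, hedged sketch of fitting a Cox proportional hazards model and reading off hazard ratios with 95% confidence intervals, assuming the Python lifelines package; the column names and values are illustrative, not the audit's dataset or variable definitions.

```python
# Hedged sketch of a Cox proportional hazards model, using lifelines.
# Placeholder cohort: follow-up time (days), death during follow-up (1/0),
# and a binary marker for severe malnutrition risk (all values illustrative).
import pandas as pd
from lifelines import CoxPHFitter

df = pd.DataFrame({
    "time_days":           [30, 12, 30, 5, 30, 22, 30, 8],
    "died":                [0,  1,  0,  1, 0,  1,  0,  1],
    "severe_malnutrition": [0,  1,  0,  1, 0,  0,  1,  1],
})

cph = CoxPHFitter()
cph.fit(df, duration_col="time_days", event_col="died")
cph.print_summary()  # hazard ratios (exp(coef)) with 95% confidence intervals
```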

    Evidence synthesis to inform model-based cost-effectiveness evaluations of diagnostic tests: a methodological systematic review of health technology assessments

    Background: Evaluations of diagnostic tests are challenging because of the indirect nature of their impact on patient outcomes. Model-based health economic evaluations of tests allow different types of evidence from various sources to be incorporated and enable cost-effectiveness estimates to be made beyond the duration of available study data. To parameterize a health-economic model fully, all the ways a test impacts on patient health must be quantified, including but not limited to diagnostic test accuracy. Methods: We assessed all UK NIHR HTA reports published May 2009-July 2015. Reports were included if they evaluated a diagnostic test, included a model-based health economic evaluation, and included a systematic review and meta-analysis of test accuracy. From each eligible report we extracted information on the following topics: 1) what evidence aside from test accuracy was searched for and synthesised, 2) which methods were used to synthesise test accuracy evidence and how the results informed the economic model, 3) how/whether threshold effects were explored, 4) how the potential dependency between multiple tests in a pathway was accounted for, and 5) for evaluations of tests targeted at the primary care setting, how evidence from differing healthcare settings was incorporated. Results: The bivariate or hierarchical summary receiver operating characteristic (HSROC) model was implemented in 20/22 reports that met all inclusion criteria. Test accuracy data for health economic modelling were obtained entirely from meta-analyses in four reports, partially in fourteen reports, and not at all in four reports. Only 2/7 reports that used a quantitative test gave clear threshold recommendations. All 22 reports explored the effect of uncertainty in accuracy parameters, but most of those that used multiple tests did not allow for dependence between test results. 7/22 reports evaluated tests potentially suitable for primary care, but the majority found limited evidence on test accuracy in primary care settings. Conclusions: The uptake of appropriate meta-analysis methods for synthesising evidence on diagnostic test accuracy in UK NIHR HTAs has improved in recent years. Future research should focus on other evidence requirements for cost-effectiveness assessment, threshold effects for quantitative tests, and the impact of multiple diagnostic tests
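
    The reports above pooled test accuracy with bivariate/HSROC models; a full bivariate random-effects implementation is beyond a short example, so the sketch below shows only a simplified, separate inverse-variance pooling of logit-sensitivity and logit-specificity to illustrate the inputs involved. The per-study counts are placeholders, not data from the reviewed HTAs.

```python
# Hedged, simplified sketch of pooling diagnostic accuracy (NOT the full
# bivariate/HSROC model): sensitivity and specificity are pooled separately
# on the logit scale with inverse-variance weights. Counts are placeholders.
import numpy as np

# Per-study 2x2 counts: (true positives, false negatives, false positives, true negatives)
studies = [(45, 5, 10, 90), (30, 10, 8, 72), (60, 15, 20, 105)]

def pool_logit(successes, failures):
    """Inverse-variance pooled proportion on the logit scale (0.5 continuity correction)."""
    logits, weights = [], []
    for s, f in zip(successes, failures):
        s, f = s + 0.5, f + 0.5
        logits.append(np.log(s / f))
        weights.append(1.0 / (1.0 / s + 1.0 / f))
    pooled = np.average(logits, weights=weights)
    return 1.0 / (1.0 + np.exp(-pooled))

sens = pool_logit([tp for tp, fn, fp, tn in studies], [fn for tp, fn, fp, tn in studies])
spec = pool_logit([tn for tp, fn, fp, tn in studies], [fp for tp, fn, fp, tn in studies])
print(f"pooled sensitivity = {sens:.2f}, pooled specificity = {spec:.2f}")
```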

    Contrast-Induced Nephropathy in Renal Transplant Recipients: A Single Center Experience

    Background: Contrast-induced nephropathy (CIN) in native kidneys is associated with a significant increase in mortality and morbidity. Data regarding CIN in renal allografts are limited, however. We retrospectively studied CIN in renal allografts at our institution: its incidence, risk factors, and effect on long-term outcomes including allograft loss and death. Methods: One hundred thirty-five renal transplant recipients undergoing 161 contrast-enhanced computed tomography (CT) scans or coronary angiograms (Cath) between 2000 and 2014 were identified. Contrast agents were iso- or low osmolar. CIN was defined as a rise in serum creatinine (SCr) by >0.3 mg/dl or 25% from baseline within 4 days of contrast exposure. After excluding 85 contrast exposures where patients had no SCr within 4 days of contrast administration, 76 exposures (CT: n = 45; Cath: n = 31) in 50 eligible patients were analyzed. Risk factors assessed included demographics, comorbid conditions, type/volume of contrast agent used, IV fluids, N-acetylcysteine administration, and calcineurin inhibitor use. Bivariate and multivariable analyses were used to assess the risk of CIN. Results: The incidence of CIN was 13% following both CT (6 out of 45) and Cath (4 out of 31). Significant bivariate predictors of CIN were IV fluid administration (p = 0.05), lower hemoglobin (p = 0.03), and lower albumin (p = 0.02). In a multivariable model, CIN was predicted by N-acetylcysteine (p = 0.03) and lower hemoglobin (p = 0.01). Calcineurin inhibitor use was not associated with CIN. At last follow-up, CIN did not affect allograft or patient survival. Conclusion: CIN is common in kidney transplant recipients, and there is room for quality improvement with regard to careful renal function monitoring post-contrast exposure. In our study, N-acetylcysteine exposure and lower hemoglobin were associated with CIN. Calcineurin inhibitor use was not associated with CIN. Our sample size is small, however, and larger prospective studies of CIN in renal allografts are needed
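
    As a minimal sketch of the CIN definition used above (an illustrative helper, not code from the study): a post-contrast SCr value qualifies if it rises by more than 0.3 mg/dl, or by at least 25% from baseline, within 4 days of exposure.

```python
# Illustrative sketch of the CIN definition described above (not study code).
# CIN: rise in serum creatinine (SCr) by >0.3 mg/dl or >=25% from baseline
# within 4 days of contrast exposure; thresholds follow the abstract's wording.

def is_cin(baseline_scr: float, post_scr: float, days_after_contrast: int) -> bool:
    """Return True if the post-contrast SCr meets the CIN definition."""
    if days_after_contrast > 4:
        return False  # only SCr values within 4 days of exposure count
    rise = post_scr - baseline_scr
    return rise > 0.3 or rise >= 0.25 * baseline_scr

# Example: baseline 1.6 mg/dl rising to 2.1 mg/dl on day 2 -> CIN
print(is_cin(1.6, 2.1, 2))  # True
```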

    Single-cell transcriptomic analysis of renal allograft rejection reveals novel insights into intragraft TCR clonality

    Bulk analysis of renal allograft biopsies (rBx) identified RNA transcripts associated with acute cellular rejection (ACR); however, these lacked the cellular context critical to a mechanistic understanding of how rejection occurs despite immunosuppression (IS). We performed combined single-cell RNA transcriptomic and TCR-α/β sequencing on rBx from patients with ACR under differing IS drugs: tacrolimus, iscalimab, and belatacept. We found distinct CD8+ T cell phenotypes (e.g., effector, memory, exhausted) depending on IS type, particularly within expanded CD8+ T cell clonotypes (CD8EXP). Gene expression of CD8EXP identified therapeutic targets that were influenced by IS type. TCR analysis revealed a highly restricted number of CD8EXP, independent of HLA mismatch or IS type. Subcloning of TCR-α/β cDNAs from CD8EXP into Jurkat 76 cells (TCR–/–) conferred alloreactivity in mixed lymphocyte reactions. Analysis of sequential rBx samples revealed persistence of CD8EXP that decreased, but were not eliminated, after successful antirejection therapy. In contrast, CD8EXP were maintained in treatment-refractory rejection. Finally, most rBx-derived CD8EXP were also observed in matching urine samples, providing precedent for using urine-derived CD8EXP as a surrogate for those found in the rejecting allograft. Overall, our data define the clonal CD8+ T cell response to ACR, paving the way for improved detection, assessment, and treatment of rejection
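
    As a hedged sketch of how expanded clonotypes (CD8EXP) might be flagged from paired TCR-α/β calls: the column names and the expansion threshold (at least two cells sharing an identical α/β pair) are assumptions for illustration, not the study's actual pipeline.

```python
# Hedged sketch: flagging "expanded" T cell clonotypes from a per-cell table of
# paired TCR-alpha/beta calls. Column names, identifiers, and the >=2-cell
# threshold are illustrative assumptions, not the study's pipeline.
import pandas as pd

cells = pd.DataFrame({
    "cell_barcode": ["c1", "c2", "c3", "c4", "c5", "c6"],
    "tcr_alpha":    ["TRAV1_CAVRDN", "TRAV1_CAVRDN", "TRAV8_CAMSAT", "TRAV1_CAVRDN",
                     "TRAV12_CALSGG", "TRAV8_CAMSAT"],
    "tcr_beta":     ["TRBV9_CASSLG", "TRBV9_CASSLG", "TRBV20_CSARDG", "TRBV9_CASSLG",
                     "TRBV7_CASSQE", "TRBV20_CSARDG"],
})

clone_sizes = (cells.groupby(["tcr_alpha", "tcr_beta"])
                    .size()
                    .rename("n_cells")
                    .reset_index())
expanded = clone_sizes[clone_sizes["n_cells"] >= 2]  # candidate expanded clonotypes
print(expanded)
```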