
    On the Sensitivity and Specificity of Postmortem Upper Respiratory Tract Testing for SARS-CoV-2

    Background Postmortem testing can improve our understanding of the impact of severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) if sufficiently sensitive and specific. Methods We investigated the postmortem sensitivity and specificity of reverse transcriptase polymerase chain reaction (RT-PCR) testing on upper respiratory swabs using a dataset of everyone tested for SARS-CoV-2 before and after death in England, 1 March to 29 October 2020. We analyzed sensitivity in those with a positive test before death by time to postmortem test. We developed a multivariate model and conducted time-to-negativity survival analysis. For specificity, we analyzed those with a negative test in the week before death. Results Postmortem testing within a week after death had a sensitivity of 96.8% if the person had tested positive within a week before death. There was no effect of age, sex, or specimen type on sensitivity, but individuals with coronavirus disease 2019 (COVID-19)–related codes on their death certificate were 5.65 times more likely to test positive after death (95% confidence interval, 2.31–13.9). Specificity was 94.2%, increasing to 97.5% in individuals without COVID-19 on the death certificate. Conclusion Postmortem testing has high sensitivity (96.8%) and specificity (94.2%) if performed within a week after death and could be a useful diagnostic tool.
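
    The reported sensitivity and specificity are standard two-by-two quantities. A minimal sketch in Python; the cell counts below are hypothetical placeholders chosen only to reproduce the reported percentages, not the study's actual data:

        # Sensitivity/specificity from a 2x2 table of postmortem vs antemortem results.
        # Counts are hypothetical placeholders, not the study's data.
        tp = 300   # positive before death, positive after death
        fn = 10    # positive before death, negative after death
        tn = 470   # negative in week before death, negative after death
        fp = 29    # negative in week before death, positive after death

        sensitivity = tp / (tp + fn)   # P(postmortem+ | antemortem+) -> 96.8%
        specificity = tn / (tn + fp)   # P(postmortem- | antemortem-) -> 94.2%
        print(f"sensitivity = {sensitivity:.1%}, specificity = {specificity:.1%}")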

    Mortality and causes of death in people diagnosed with HIV in the era of highly active antiretroviral therapy compared with the general population: an analysis of a national observational cohort

    BACKGROUND: Deaths in HIV-positive people have decreased since the introduction of highly active antiretroviral therapy (HAART) in 1996. Fewer AIDS-related deaths and an ageing cohort have resulted in an increase in the proportion of HIV patients dying from non-AIDS-related disorders. Here we describe mortality and causes of death in people diagnosed with HIV in the HAART era compared with the general population. METHODS: In this observational analysis, we linked cohort data collected by Public Health England (PHE) for individuals aged 15 years and older, diagnosed with HIV in England and Wales from 1997 to 2012, to the Office for National Statistics (ONS) national mortality register. Cohort inclusion began at diagnosis with follow-up clinical information collected every year from all 220 National Health Service (NHS) HIV outpatient clinics nationwide. To classify causes of death we used a modified Coding Causes of Death in HIV (CoDe) protocol, which uses death certificate data and clinical markers. We applied Kaplan-Meier analysis for survival curves and mortality rate estimation and Cox regression to establish independent predictors of all-cause mortality, adjusting for sex, infection route, age at diagnosis, region of birth, year of diagnosis, late diagnosis, and history of HAART. We used standardised mortality ratios (SMRs) to make comparisons with the general population. FINDINGS: Between 1997 and 2012, 88 994 people were diagnosed with HIV, contributing 448 839 person-years of follow-up. By the end of 2012, 5302 (6%) patients had died (all-cause mortality 118 per 10 000 person-years, 95% CI 115–121). In multivariable analysis, late diagnosis was a strong predictor of death (hazard ratio [HR] 3·50, 95% CI 3·13–3·92). People diagnosed more recently had a lower risk of death (2003–07: HR 0·66, 95% CI 0·62–0·70; 2008–12: HR 0·65, 95% CI 0·60–0·71). Cause of death was determinable for 4808 (91%) of 5302 patients; most deaths (2791 [58%] of 4808) were attributable to AIDS-defining illnesses. Cohort mortality was significantly higher than the general population for all causes (SMR 5·7, 95% CI 5·5–5·8), particularly non-AIDS infections (10·8, 9·8–12·0) and liver disease (3·7, 3·3–4·2). All-cause mortality was highest in the year after diagnosis (SMR 24·3, 95% CI 23·4–25·2). INTERPRETATION: Despite the availability of free treatment and care in the UK, AIDS continues to account for the majority of deaths in HIV-positive people, and mortality remains higher in HIV-positive people than in the general population. These findings highlight the importance of prompt diagnosis, care engagement, and optimum management of comorbidities in reducing mortality in people with HIV.
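
    The general-population comparison rests on the standardised mortality ratio: observed deaths divided by the deaths expected if general-population rates applied to the cohort's person-years. A minimal sketch in Python; only the observed deaths and reported SMR come from the abstract, and the stratum person-years and rates are hypothetical placeholders:

        # SMR = observed deaths / expected deaths, summed over age/sex strata.
        observed = 5302          # cohort deaths reported in the abstract
        smr_reported = 5.7       # all-cause SMR reported in the abstract
        print(f"implied expected deaths: {observed / smr_reported:.0f}")  # ~930

        # In practice `expected` is built from stratum person-years times the
        # matching ONS population rate; these figures are hypothetical placeholders.
        strata = [(150_000, 0.0010), (200_000, 0.0020), (98_839, 0.0045)]
        expected = sum(person_years * rate for person_years, rate in strata)
        smr = observed / expected
        print(f"expected = {expected:.0f}, SMR = {smr:.1f}")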

    Modeling peptide fragmentation with dynamic Bayesian networks for peptide identification

    Motivation: Tandem mass spectrometry (MS/MS) is an indispensable technology for identification of proteins from complex mixtures. Proteins are digested to peptides that are then identified by their fragmentation patterns in the mass spectrometer. Thus, at its core, MS/MS protein identification relies on the relative predictability of peptide fragmentation. Unfortunately, peptide fragmentation is complex and not fully understood, and what is understood is not always exploited by peptide identification algorithms.

    Where do we diagnose HIV infection? Monitoring new diagnoses made in nontraditional settings in England, Wales and Northern Ireland.

    OBJECTIVES: The objectives of the study were to describe 10-year trends in HIV diagnosis setting and to explore predictors of being diagnosed outside a sexual health clinic (SHC). METHODS: Analyses of national HIV surveillance data were restricted to adults (aged ≥ 15 years) diagnosed in 2005-2014 in England, Wales and Northern Ireland. Logistic regression identified factors associated with diagnosis outside an SHC (2011-2014). RESULTS: Between 2005 and 2014, 63 599 adults were newly diagnosed with HIV infection; 83% had a diagnosis setting reported. Most people were diagnosed in SHCs (69%) followed by: medical admissions/accident and emergency (A&E; 8.6%), general practice (6.4%), antenatal services (5.5%), out-patient services (3.6%), infectious disease units (2.7%) and other settings (4.0%). The proportion of people diagnosed outside SHCs increased from 2005 to 2014, overall (from 27% to 32%, respectively) and among men who have sex with men (MSM) (from 14% to 21%) and black African men (from 25% to 37%) and women (from 39% to 52%) (all trend P < 0.001). Median CD4 count increased across all settings, but was highest in SHCs (384 cells/μL) and lowest in medical admissions/A&E (94 cells/μL). Predictors of being diagnosed outside SHCs included: acquiring HIV through heterosexual contact [adjusted odds ratio (aOR) 1.99; 95% confidence interval (CI) 1.81-2.18] or injecting drug use (aOR: 3.28; 95% CI: 2.56-4.19; reference: MSM), being diagnosed late (< 350 cells/μL) (aOR: 2.55; 95% CI: 2.36-2.74; reference: diagnosed promptly) and being of older age at diagnosis (35-49 years: aOR: 1.60; 95% CI: 1.39-1.83; ≥ 50 years: aOR: 2.48; 95% CI: 2.13-2.88; reference: 15-24 years). CONCLUSIONS: The proportion of HIV diagnoses made outside SHCs has increased over the past decade in line with evolving HIV testing guidelines. However, the rate of late diagnosis remains high, indicating that further expansion of testing is necessary, as many people may have had missed opportunities for earlier diagnosis.
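
    Adjusted odds ratios of this kind are obtained by exponentiating logistic-regression coefficients. A minimal sketch with pandas and statsmodels, using synthetic data and hypothetical column names (outside_shc, route, late, age_group) standing in for the surveillance dataset:

        import numpy as np
        import pandas as pd
        import statsmodels.formula.api as smf

        # Tiny synthetic stand-in for the surveillance data; columns are hypothetical.
        rng = np.random.default_rng(0)
        n = 2000
        df = pd.DataFrame({
            "route": rng.choice(["MSM", "heterosexual", "IDU"], size=n),
            "late": rng.integers(0, 2, size=n),        # CD4 < 350 cells/uL at diagnosis
            "age_group": rng.choice(["15-24", "35-49", ">=50"], size=n),
        })
        df["outside_shc"] = rng.integers(0, 2, size=n)  # diagnosed outside an SHC

        # Reference categories mirror those in the abstract (MSM, 15-24 years).
        model = smf.logit("outside_shc ~ C(route, Treatment('MSM')) + late"
                          " + C(age_group, Treatment('15-24'))", data=df)
        result = model.fit(disp=False)
        print(np.exp(result.params))      # adjusted odds ratios (aORs)
        print(np.exp(result.conf_int()))  # 95% confidence intervals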

    A Geometric Characterization of the Power of Finite Adaptability in Multistage Stochastic and Adaptive Optimization

    In this paper, we show a significant role that geometric properties of uncertainty sets, such as symmetry, play in determining the power of robust and finitely adaptable solutions in multistage stochastic and adaptive optimization problems. We consider a fairly general class of multistage mixed integer stochastic and adaptive optimization problems and propose a good approximate solution policy with performance guarantees that depend on the geometric properties of the uncertainty sets. In particular, we show that a class of finitely adaptable solutions is a good approximation for both the multistage stochastic and the adaptive optimization problem. A finitely adaptable solution generalizes the notion of a static robust solution and specifies a small set of solutions for each stage; the solution policy implements the best solution from the given set, depending on the realization of the uncertain parameters in past stages. Therefore, it is a tractable approximation to a fully adaptable solution for the multistage problems. To the best of our knowledge, these are the first approximation results for the multistage problem in such generality. Moreover, the results and the proof techniques are quite general and also extend to include important constraints such as integrality and linear conic constraints. National Science Foundation (U.S.) (Grant EFRI-0735905).
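
    As a concrete illustration (a standard two-stage formulation sketch, not the paper's exact notation), a k-adaptable policy commits to a first-stage decision x and a menu of k recourse actions y_1, ..., y_k, then implements the cheapest feasible menu entry once the uncertain right-hand side b is realized:

        \[
        \min_{x,\, y_1,\dots,y_k} \;\; \max_{b \in \mathcal{U}} \;\; \min_{\{\, i \,:\, Ax + By_i \ge b \,\}} \; c^\top x + d^\top y_i
        \]

    Setting k = 1 recovers the static robust solution, while letting k grow approaches the fully adaptable problem; the paper's guarantees bound how well a small menu performs as a function of geometric properties, such as symmetry, of the uncertainty set \mathcal{U}.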

    Addressing statistical biases in nucleotide-derived protein databases for proteogenomic search strategies

    Proteogenomics has the potential to advance genome annotation through high quality peptide identifications derived from mass spectrometry experiments, which demonstrate a given gene or isoform is expressed and translated at the protein level. This can advance our understanding of genome function, discovering novel genes and gene structure that have not yet been identified or validated. Because of the high-throughput shotgun nature of most proteomics experiments, it is essential to carefully control for false positives and prevent any potential misannotation. A number of statistical procedures to deal with this are in wide use in proteomics, calculating false discovery rate (FDR) and posterior error probability (PEP) values for groups and individual peptide spectrum matches (PSMs). These methods control for multiple testing and exploit decoy databases to estimate statistical significance. Here, we show that database choice has a major effect on these confidence estimates leading to significant differences in the number of PSMs reported. We note that standard target:decoy approaches using six-frame translations of nucleotide sequences, such as assembled transcriptome data, apparently underestimate the confidence assigned to the PSMs. The source of this error stems from the inflated and unusual nature of the six-frame database, where for every target sequence there exists five “incorrect” targets that are unlikely to code for protein. The attendant FDR and PEP estimates lead to fewer accepted PSMs at fixed thresholds, and we show that this effect is a product of the database and statistical modeling and not the search engine. A variety of approaches to limit database size and remove noncoding target sequences are examined and discussed in terms of the altered statistical estimates generated and PSMs reported. These results are of importance to groups carrying out proteogenomics, aiming to maximize the validation and discovery of gene structure in sequenced genomes, while still controlling for false positives.
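
    The target:decoy estimate at issue is simple to state: at any score threshold, the number of decoy matches above the threshold approximates the number of false target matches, so their ratio estimates the FDR. A minimal sketch, assuming plain (score, is_decoy) pairs rather than any particular search-engine output:

        # Target-decoy FDR: decoy matches above a score threshold approximate
        # the number of false target matches above the same threshold.
        def fdr_at_threshold(psms, threshold):
            """psms: iterable of (score, is_decoy) pairs."""
            targets = sum(1 for s, d in psms if s >= threshold and not d)
            decoys = sum(1 for s, d in psms if s >= threshold and d)
            return decoys / targets if targets else 0.0

        # Hypothetical PSM scores, not real search-engine output.
        psms = [(9.1, False), (8.7, False), (8.2, True), (7.9, False),
                (7.5, True), (7.4, False), (6.8, True), (6.1, False)]
        print(fdr_at_threshold(psms, 7.0))  # 2 decoys / 4 targets = 0.5

    The abstract's point is that this estimate is only as good as the database it is computed over: a six-frame translation surrounds every real coding sequence with roughly five non-coding "target" frames, which distorts the score distributions and depresses the number of PSMs accepted at a fixed FDR threshold.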

    Green tea extract only affects markers of oxidative status postprandially: lasting antioxidant effect of flavonoid-free diet

    Epidemiological studies suggest that foods rich in flavonoids might reduce the risk of cardiovascular disease and cancer. The objective of the present study was to investigate the effect of green tea extract (GTE) used as a food antioxidant on markers of oxidative status after dietary depletion of flavonoids and catechins. The study was designed as a 2×3 weeks blinded human cross-over intervention study (eight smokers, eight non-smokers) with GTE corresponding to a daily intake of 18·6 mg catechins/d. The GTE was incorporated into meat patties and consumed with a strictly controlled diet otherwise low in flavonoids. GTE intervention increased plasma antioxidant capacity from 1·35 to 1·56 (P<0·02) in postprandially collected plasma, most prominently in smokers. The intervention did not significantly affect markers in fasting blood samples, including plasma or haemoglobin protein oxidation, plasma oxidation lagtime, or activities of the erythrocyte superoxide dismutase, glutathione peroxidase, glutathione reductase and catalase. Neither were fasting plasma triacylglycerol, cholesterol, α-tocopherol, retinol, β-carotene, or ascorbic acid affected by intervention. Urinary 8-oxo-deoxyguanosine excretion was also unaffected. Catechins from the extract were excreted into urine with a half-life of less than 2 h in accordance with the short-term effects on plasma antioxidant capacity. Since no long-term effects of GTE were observed, the study essentially served as a fruit and vegetables depletion study. The overall effect of the 10-week period without dietary fruits and vegetables was a decrease in oxidative damage to DNA, blood proteins, and plasma lipids, concomitantly with marked changes in antioxidative defence.

    Can agricultural cultivation methods influence the healthfulness of crops for foods?

    The aim of the current study was to investigate if there are any health effects of long-term consumption of organically grown crops using a rat model. Crops were retrieved over two years from a long-term field trial at three different locations in Denmark, using three different cultivation systems (OA, organic based on livestock manure; OB, organic based on green manure; and C, conventional with mineral fertilizers and pesticides) with two field replicates. The cultivation system had an impact on the nutritional quality, affecting γ-tocopherol, some amino acids, and fatty acid composition. Additionally, the nutritional quality was affected by harvest year and location. However, harvest year and location rather than cultivation system affected the measured health biomarkers. In conclusion, the differences in dietary treatments composed of ingredients from different cultivation systems did not lead to significant differences in the measured health biomarkers, except for a significant difference in plasma IgG levels.

    SARS-CoV-2 infection rates of antibody-positive compared with antibody-negative health-care workers in England: a large, multicentre, prospective cohort study (SIREN).

    BACKGROUND: Increased understanding of whether individuals who have recovered from COVID-19 are protected from future SARS-CoV-2 infection is an urgent requirement. We aimed to investigate whether antibodies against SARS-CoV-2 were associated with a decreased risk of symptomatic and asymptomatic reinfection. METHODS: A large, multicentre, prospective cohort study was done, with participants recruited from publicly funded hospitals in all regions of England. All health-care workers, support staff, and administrative staff working at hospitals who could remain engaged in follow-up for 12 months were eligible to join The SARS-CoV-2 Immunity and Reinfection Evaluation study. Participants were excluded if they had no PCR tests after enrolment, enrolled after Dec 31, 2020, or had insufficient PCR and antibody data for cohort assignment. Participants attended regular SARS-CoV-2 PCR and antibody testing (every 2-4 weeks) and completed questionnaires every 2 weeks on symptoms and exposures. At enrolment, participants were assigned to either the positive cohort (antibody positive, or previous positive PCR or antibody test) or negative cohort (antibody negative, no previous positive PCR or antibody test). The primary outcome was a reinfection in the positive cohort or a primary infection in the negative cohort, determined by PCR tests. Potential reinfections were clinically reviewed and classified according to case definitions (confirmed, probable, or possible) and symptom status, depending on the hierarchy of evidence. Primary infections in the negative cohort were defined as a first positive PCR test and seroconversions were excluded when not associated with a positive PCR test. A proportional hazards frailty model using a Poisson distribution was used to estimate incidence rate ratios (IRR) to compare infection rates in the two cohorts. FINDINGS: From June 18, 2020, to Dec 31, 2020, 30 625 participants were enrolled into the study. 51 participants withdrew from the study, 4913 were excluded, and 25 661 participants (with linked data on antibody and PCR testing) were included in the analysis. Data were extracted from all sources on Feb 5, 2021, and include data up to and including Jan 11, 2021. 155 infections were detected in the baseline positive cohort of 8278 participants, collectively contributing 2 047 113 person-days of follow-up. This compares with 1704 new PCR positive infections in the negative cohort of 17 383 participants, contributing 2 971 436 person-days of follow-up. The incidence density was 7·6 reinfections per 100 000 person-days in the positive cohort, compared with 57·3 primary infections per 100 000 person-days in the negative cohort, between June, 2020, and January, 2021. The adjusted IRR was 0·159 for all reinfections (95% CI 0·13-0·19) compared with PCR-confirmed primary infections. The median interval between primary infection and reinfection was more than 200 days. INTERPRETATION: A previous history of SARS-CoV-2 infection was associated with an 84% lower risk of infection, with median protective effect observed 7 months following primary infection. This time period is the minimum probable effect because seroconversions were not included. This study shows that previous infection with SARS-CoV-2 induces effective immunity to future infections in most individuals. FUNDING: Department of Health and Social Care of the UK Government, Public Health England, the National Institute for Health Research, with contributions from the Scottish, Welsh and Northern Irish governments.
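
    The incidence densities in the findings follow directly from the reported counts and person-days, which makes a quick arithmetic check easy; note that the crude ratio below ignores the frailty-model adjustment behind the published IRR of 0·159:

        # Crude incidence densities from the reported counts (per 100 000 person-days).
        reinfections, pos_persondays = 155, 2_047_113
        primaries, neg_persondays = 1704, 2_971_436

        pos_rate = reinfections / pos_persondays * 100_000   # ~7.6
        neg_rate = primaries / neg_persondays * 100_000      # ~57.3
        print(f"positive cohort: {pos_rate:.1f}, negative cohort: {neg_rate:.1f}")
        print(f"crude rate ratio: {pos_rate / neg_rate:.3f}")  # adjusted IRR: 0.159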