91 research outputs found

    Susceptibility and status of avian influenza in ostriches

    The extensive nature of ostrich farming production systems bears the continual risk of point introductions of avian influenza virus (AIV) from wild birds, but immune status, management, population density, and other causes of stress in ostriches are the ultimate determinants of the severity of the disease in this species. From January 2012 to December 2014, more than 70 incidents of AIV in ostriches were reported in South Africa. These included H5N2 and H7N1 low pathogenicity avian influenza (LPAI) in 2012, H7N7 LPAI in 2013, and H5N2 LPAI in 2014. To resolve the molecular epidemiology in South Africa, the entire South African viral repository from ostriches and wild birds from 1991 to 2013 (n = 42) was resequenced by next-generation sequencing technology to obtain complete genomes for comparison. The phylogenetic results were supplemented with serological data for ostriches from 2012 to 2014, and AIV-detection data from surveillance of 17,762 wild birds sampled over the same period. Phylogenetic evidence pointed to wild birds, e.g., African sacred ibis (Threskiornis aethiopicus), in the dissemination of H7N1 LPAI to ostriches in the Eastern and Western Cape provinces during 2012, in separate incidents that could not be epidemiologically linked. In contrast, the H7N7 LPAI outbreaks in 2013 that were restricted to the Western Cape Province appear to have originated from a single-point introduction from wild birds. Two H5N2 viruses detected in ostriches in 2012 were determined to be LPAI strains that were new introductions, epidemiologically unrelated to the 2011 highly pathogenic avian influenza (HPAI) outbreaks. Seventeen of 27 (63%) ostrich viruses contained the polymerase basic 2 (PB2) E627K marker, and 2 of the ostrich isolates that lacked E627K contained the compensatory Q591K mutation, whereas a third virus had a D701N mutation.
Ostriches maintain a low upper- to mid-tracheal temperature as part of their adaptive physiology for desert survival, which may explain the selection in ratites for E627K or its compensatory mutations, markers that facilitate AIV replication at lower temperatures. An AIV prevalence of 5.6% in wild birds was recorded between 2012 and 2014, considerably higher than the AIV prevalence of 2.5%-3.6% reported for the southern African region in 2007-2009. Serological prevalence of AI in ostriches was 3.7%, 3.6%, and 6.1% for 2012, 2013, and 2014, respectively. An annual seasonal dip in incidence was evident around March/April (late summer/autumn), with peaks around July/August (mid to late winter). H5, H6, H7, and unidentified serotypes were present at varying levels over the 3-yr period.
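The wild-bird prevalence figures reported above are point estimates from large samples; a confidence interval conveys their precision. A minimal Python sketch using the Wilson score interval, assuming (since the abstract gives only percentages, not raw counts) roughly 995 positives among the 17,762 wild birds sampled:

```python
import math

def wilson_ci(x, n, z=1.96):
    """Wilson score 95% confidence interval for a binomial proportion."""
    p = x / n
    denom = 1 + z * z / n
    centre = (p + z * z / (2 * n)) / denom
    half = (z / denom) * math.sqrt(p * (1 - p) / n + z * z / (4 * n * n))
    return centre - half, centre + half

# Illustrative only: ~995 positives is an assumed count consistent with
# the reported 5.6% prevalence among 17,762 sampled wild birds.
lo, hi = wilson_ci(995, 17762)
print(f"prevalence 95% CI: {lo:.3%} - {hi:.3%}")
```

With samples this large the interval is narrow (under a percentage point wide), which is why the jump from the 2.5%-3.6% range of 2007-2009 is unlikely to be sampling noise alone.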

    Treatment decision-making and the form of risk communication: results of a factorial survey

    BACKGROUND: Prospective users of preventive therapies often must evaluate complex information about therapeutic risks and benefits. The purpose of this study was to evaluate the effect of relative and absolute risk information on patient decision-making in scenarios typical of health information for patients. METHODS: Factorial experiments within a telephone survey of the Michigan adult, non-institutionalized, English-speaking population. The average interview lasted 23 minutes. Subjects and sample design: 952 randomly selected adults within a random-digit dial sample of Michigan households. The completion rate was 54.3%. RESULTS: When presented with hypothetical information regarding additional risks of breast cancer from a medication to prevent a bone disease, respondents reduced their willingness to recommend that a female friend take the medication compared to the baseline rate (66.8% = yes). The decrease was significantly greater with relative risk information. Additional information about the medication's benefit of preventing heart disease increased willingness to recommend it to a female friend relative to the baseline scenario, but this increase did not differ between absolute and relative risk formats. When information about both the increased risk of breast cancer and the reduced risk of heart disease was provided, typical respondents appeared to make rational decisions consistent with Expected Utility Theory, but the information presentation format affected choices. The 11%-33% making decisions contrary to the medical indications were more likely to be Hispanic, older, more educated, smokers, and to have children in the home. CONCLUSIONS: In scenarios typical of health risk information, relative risk information led respondents to make non-normative decisions that were "corrected" when the frame used absolute risk information. 
This population sample made generally rational decisions when presented with absolute risk information, even in the context of a telephone interview that required remembering the rates given. The lack of effect of gender and race suggests that a standard strategy of presenting absolute risk information may improve patient decision-making
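The contrast between relative and absolute framing of the same risk change can be made concrete with arithmetic. A small Python sketch using hypothetical numbers (not taken from the study):

```python
def risk_framings(baseline_risk, treated_risk):
    """Express the same risk change three common ways."""
    arr = treated_risk - baseline_risk            # absolute risk change
    rr = treated_risk / baseline_risk             # relative risk
    nnh = 1 / arr if arr != 0 else float("inf")   # number needed to harm (or treat)
    return arr, rr, nnh

# Hypothetical numbers, not from the study: breast-cancer risk rising
# from 1.0% to 1.5% on the medication.
arr, rr, nnh = risk_framings(0.010, 0.015)
print(f"absolute increase: {arr:.1%}")        # half a percentage point
print(f"relative risk:     {rr:.2f}")         # reads as "50% higher risk"
print(f"number needed to harm: {nnh:.0f}")
```

The same change reads as a 0.5-percentage-point increase in absolute terms but as "50% higher risk" in relative terms, which is exactly the framing gap the survey manipulated.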

    International consensus guidelines for scoring the histopathological growth patterns of liver metastasis

    BACKGROUND: Liver metastases present with distinct histopathological growth patterns (HGPs), including the desmoplastic, pushing and replacement HGPs and two rarer HGPs. The HGPs are defined owing to the distinct interface between the cancer cells and the adjacent normal liver parenchyma that is present in each pattern and can be scored from standard haematoxylin-and-eosin-stained (H&E) tissue sections. The current study provides consensus guidelines for scoring these HGPs. METHODS: Guidelines for defining the HGPs were established by a large international team. To assess the validity of these guidelines, 12 independent observers scored a set of 159 liver metastases and interobserver variability was measured. In an independent cohort of 374 patients with colorectal liver metastases (CRCLM), the impact of HGPs on overall survival after hepatectomy was determined. RESULTS: Good-to-excellent correlations (intraclass correlation coefficient >0.5) with the gold standard were obtained for the assessment of the replacement HGP and desmoplastic HGP. Overall survival was significantly superior in the desmoplastic HGP subgroup compared with the replacement or pushing HGP subgroup (P=0.006). CONCLUSIONS: The current guidelines allow for reproducible determination of liver metastasis HGPs. As HGPs impact overall survival after surgery for CRCLM, they may serve as a novel biomarker for individualised therapies
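Interobserver variability of the kind this validation measured is commonly summarised with an intraclass correlation coefficient. A minimal one-way random-effects ICC(1) sketch in Python, with invented observer scores; the abstract does not specify which ICC variant the study used, so the form below is an assumption:

```python
def icc1(ratings):
    """One-way random-effects intraclass correlation, ICC(1),
    for an n-subjects x k-raters table of scores."""
    n, k = len(ratings), len(ratings[0])
    grand = sum(sum(row) for row in ratings) / (n * k)
    row_means = [sum(row) / k for row in ratings]
    # between-subject and within-subject mean squares
    msb = k * sum((m - grand) ** 2 for m in row_means) / (n - 1)
    msw = sum((x - m) ** 2 for row, m in zip(ratings, row_means)
              for x in row) / (n * (k - 1))
    return (msb - msw) / (msb + (k - 1) * msw)

# Toy replacement-HGP percentage scores from two hypothetical observers.
ratings = [[90, 85], [10, 15], [50, 55], [70, 65]]
print(round(icc1(ratings), 3))
```

By the study's criterion, values above 0.5 indicate good-to-excellent agreement; perfectly agreeing raters give an ICC of 1.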

    Country, Sex, EDSS Change and Therapy Choice Independently Predict Treatment Discontinuation in Multiple Sclerosis and Clinically Isolated Syndrome

    We conducted a prospective study, MSBASIS, to assess factors leading to first treatment discontinuation in patients with a clinically isolated syndrome (CIS) and early relapsing-remitting multiple sclerosis (RRMS). The MSBASIS Study, conducted by MSBase Study Group members, enrols patients seen from CIS onset, recording baseline demographics, cerebral magnetic resonance imaging (MRI) features and Expanded Disability Status Scale (EDSS) scores. Follow-up visits record relapses, EDSS scores, and the start and end dates of MS-specific therapies. We performed a multivariable survival analysis to determine factors within this dataset that predict first treatment discontinuation. A total of 2314 CIS patients from 44 centres were followed for a median of 2.7 years, during which time 1247 commenced immunomodulatory drug (IMD) treatment. Ninety percent initiated IMD after a diagnosis of MS was confirmed, and 10% while still in CIS status. Over 40% of these patients stopped their first IMD during the observation period. Females were more likely to cease medication than males (HR 1.36, p = 0.003). Patients treated in Australia were twice as likely to cease their first IMD as patients treated in Spain (HR 1.98, p = 0.001). Increasing EDSS was associated with a higher rate of IMD cessation (HR 1.21 per EDSS unit, p<0.001), and intramuscular interferon-β-1a (HR 1.38, p = 0.028) and subcutaneous interferon-β-1a (HR 1.45, p = 0.012) had higher rates of discontinuation than glatiramer acetate, although this varied widely across countries. Onset cerebral MRI features, age, time to treatment initiation, and relapse on treatment were not associated with IMD cessation. In this multivariable survival analysis, female sex, country of residence, EDSS change and IMD choice independently predicted time to first IMD cessation
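Per-unit hazard ratios from such survival models compound multiplicatively under the proportional-hazards assumption, which is worth making explicit for the EDSS effect. A one-line Python sketch:

```python
def hazard_multiplier(hr_per_unit, delta):
    """Under proportional hazards, a per-unit hazard ratio compounds
    multiplicatively over a delta-unit covariate change."""
    return hr_per_unit ** delta

# From the abstract: HR 1.21 per EDSS unit, so a 3-point EDSS worsening
# multiplies the modelled cessation hazard by about 1.77.
print(round(hazard_multiplier(1.21, 3), 2))
```

This is purely an interpretation aid for the reported coefficient, not a re-analysis of the MSBASIS data.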

    Symptom-based stratification of patients with primary Sjögren's syndrome: multi-dimensional characterisation of international observational cohorts and reanalyses of randomised clinical trials

    Background Heterogeneity is a major obstacle to developing effective treatments for patients with primary Sjögren's syndrome. We aimed to develop a robust method for stratification, exploiting heterogeneity in patient-reported symptoms, and to relate these differences to pathobiology and therapeutic response. Methods We did hierarchical cluster analysis using five common symptoms associated with primary Sjögren's syndrome (pain, fatigue, dryness, anxiety, and depression), followed by multinomial logistic regression to identify subgroups in the UK Primary Sjögren's Syndrome Registry (UKPSSR). We assessed clinical and biological differences between these subgroups, including transcriptional differences in peripheral blood. Patients from two independent validation cohorts in Norway and France were used to confirm patient stratification. Data from two phase 3 clinical trials were similarly stratified to assess the differences between subgroups in treatment response to hydroxychloroquine and rituximab. Findings In the UKPSSR cohort (n=608), we identified four subgroups: low symptom burden (LSB), high symptom burden (HSB), dryness dominant with fatigue (DDF), and pain dominant with fatigue (PDF). Significant differences in peripheral blood lymphocyte counts, anti-SSA and anti-SSB antibody positivity, as well as serum IgG, κ-free light chain, β2-microglobulin, and CXCL13 concentrations were observed between these subgroups, along with differentially expressed transcriptomic modules in peripheral blood. Similar findings were observed in the independent validation cohorts (n=396). Reanalysis of trial data stratifying patients into these subgroups suggested a treatment effect with hydroxychloroquine in the HSB subgroup and with rituximab in the DDF subgroup compared with placebo. 
Interpretation Stratification on the basis of patient-reported symptoms of patients with primary Sjögren's syndrome revealed distinct pathobiological endotypes with distinct responses to immunomodulatory treatments. Our data have important implications for clinical management, trial design, and therapeutic development. Similar stratification approaches might be useful for patients with other chronic immune-mediated diseases. Funding UK Medical Research Council, British Sjogren's Syndrome Association, French Ministry of Health, Arthritis Research UK, Foundation for Research in Rheumatology
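Hierarchical clustering of the kind used for this stratification can be sketched in a few lines. A toy agglomerative clustering with average linkage over invented five-symptom vectors (the real analysis used the UKPSSR data, and this abstract does not state the linkage or distance actually chosen):

```python
import math

def euclid(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def average_linkage(points, ci, cj):
    """Mean pairwise distance between two clusters of point indices."""
    return sum(euclid(points[a], points[b])
               for a in ci for b in cj) / (len(ci) * len(cj))

def agglomerate(points, n_clusters):
    """Agglomerative hierarchical clustering: repeatedly merge the
    closest pair of clusters until n_clusters remain."""
    clusters = [[i] for i in range(len(points))]
    while len(clusters) > n_clusters:
        best = None
        for i in range(len(clusters)):
            for j in range(i + 1, len(clusters)):
                d = average_linkage(points, clusters[i], clusters[j])
                if best is None or d < best[0]:
                    best = (d, i, j)
        _, i, j = best
        clusters[i].extend(clusters[j])
        del clusters[j]
    return clusters

# Invented (pain, fatigue, dryness, anxiety, depression) scores for four
# hypothetical patients: two low-burden, two high-burden.
patients = [(1, 2, 1, 1, 2), (2, 1, 1, 2, 1), (8, 9, 8, 8, 9), (9, 8, 9, 9, 8)]
print(agglomerate(patients, 2))  # → [[0, 1], [2, 3]]
```

In practice a library routine (e.g. scipy's hierarchical clustering) would replace this sketch, but the merge-closest-pair logic is the same.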

    Blood transcriptional biomarkers of acute viral infection for detection of pre-symptomatic SARS-CoV-2 infection: a nested, case-control diagnostic accuracy study

    Background We hypothesised that host-response biomarkers of viral infections might contribute to early identification of individuals infected with SARS-CoV-2, which is critical to breaking the chains of transmission. We aimed to evaluate the diagnostic accuracy of existing candidate whole-blood transcriptomic signatures for viral infection to predict positivity of nasopharyngeal SARS-CoV-2 PCR testing. Methods We did a nested case-control diagnostic accuracy study among a prospective cohort of health-care workers (aged ≥18 years) at St Bartholomew’s Hospital (London, UK) undergoing weekly blood and nasopharyngeal swab sampling for whole-blood RNA sequencing and SARS-CoV-2 PCR testing, when fit to attend work. We identified candidate blood transcriptomic signatures for viral infection through a systematic literature search. We searched MEDLINE for articles published between database inception and Oct 12, 2020, using comprehensive MeSH and keyword terms for “viral infection”, “transcriptome”, “biomarker”, and “blood”. We reconstructed signature scores in blood RNA sequencing data and evaluated their diagnostic accuracy for contemporaneous SARS-CoV-2 infection, compared with the gold standard of SARS-CoV-2 PCR testing, by quantifying the area under the receiver operating characteristic curve (AUROC), sensitivities, and specificities at a standardised Z score of at least 2 based on the distribution of signature scores in test-negative controls. We used pairwise DeLong tests against the most discriminating signature to identify the subset of best-performing biomarkers. We evaluated associations between signature expression, viral load (using PCR cycle thresholds), and symptom status visually and using Spearman rank correlation. 
The primary outcome was the AUROC for discriminating between samples from participants who tested negative throughout the study (test-negative controls) and samples from participants with PCR-confirmed SARS-CoV-2 infection (test-positive participants) during their first week of PCR positivity. Findings We identified 20 candidate blood transcriptomic signatures of viral infection from 18 studies and evaluated their accuracy among 169 blood RNA samples from 96 participants over 24 weeks. Participants were recruited between March 23 and March 31, 2020. 114 samples were from 41 participants with SARS-CoV-2 infection, and 55 samples were from 55 test-negative controls. The median age of participants was 36 years (IQR 27–47) and 69 (72%) of 96 were women. Signatures had little overlap of component genes, but were mostly correlated as components of type I interferon responses. A single blood transcript for IFI27 provided the highest accuracy for discriminating between test-negative controls and test-positive individuals at the time of their first positive SARS-CoV-2 PCR result, with AUROC of 0·95 (95% CI 0·91–0·99), sensitivity 0·84 (0·70–0·93), and specificity 0·95 (0·85–0·98) at a predefined threshold (Z score >2). The transcript performed equally well in individuals with and without symptoms. Three other candidate signatures (comprising two to 48 transcripts) had statistically equivalent discrimination to IFI27 (AUROCs 0·91–0·95). Interpretation Our findings support further urgent evaluation and development of blood IFI27 transcripts as a biomarker for early phase SARS-CoV-2 infection for screening individuals at high risk of infection, such as contacts of index cases, to facilitate early case isolation and early use of antiviral treatments as they emerge
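The AUROC reported for each signature is equivalent to the Mann-Whitney probability that a randomly chosen positive sample scores higher than a randomly chosen negative one. A minimal Python sketch with invented signature scores (not the study's data):

```python
def auroc(neg_scores, pos_scores):
    """AUROC via the Mann-Whitney U statistic: the fraction of
    (positive, negative) pairs where the positive scores higher,
    counting ties as half a win."""
    wins = 0.0
    for p in pos_scores:
        for n in neg_scores:
            if p > n:
                wins += 1
            elif p == n:
                wins += 0.5
    return wins / (len(pos_scores) * len(neg_scores))

# Hypothetical signature scores for illustration only.
controls = [0.1, 0.3, 0.2, 0.4]   # test-negative participants
cases = [0.8, 0.9, 0.35, 1.2]     # samples at first PCR positivity
print(auroc(controls, cases))     # → 0.9375
```

The study's threshold-based sensitivity and specificity then come from dichotomising scores at a standardised Z score of 2 relative to the control distribution.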

    Effect of angiotensin-converting enzyme inhibitor and angiotensin receptor blocker initiation on organ support-free days in patients hospitalized with COVID-19

    IMPORTANCE Overactivation of the renin-angiotensin system (RAS) may contribute to poor clinical outcomes in patients with COVID-19. OBJECTIVE To determine whether angiotensin-converting enzyme (ACE) inhibitor or angiotensin receptor blocker (ARB) initiation improves outcomes in patients hospitalized for COVID-19. DESIGN, SETTING, AND PARTICIPANTS In an ongoing, adaptive platform randomized clinical trial, 721 critically ill and 58 non–critically ill hospitalized adults were randomized to receive a RAS inhibitor or control between March 16, 2021, and February 25, 2022, at 69 sites in 7 countries (final follow-up on June 1, 2022). INTERVENTIONS Patients were randomized to receive open-label initiation of an ACE inhibitor (n = 257), ARB (n = 248), ARB in combination with DMX-200 (a chemokine receptor-2 inhibitor; n = 10), or no RAS inhibitor (control; n = 264) for up to 10 days. MAIN OUTCOMES AND MEASURES The primary outcome was organ support–free days, a composite of hospital survival and days alive without cardiovascular or respiratory organ support through 21 days. The primary analysis was a bayesian cumulative logistic model. Odds ratios (ORs) greater than 1 represent improved outcomes. RESULTS On February 25, 2022, enrollment was discontinued due to safety concerns. Among 679 critically ill patients with available primary outcome data, the median age was 56 years and 239 participants (35.2%) were women. Median (IQR) organ support–free days among critically ill patients was 10 (–1 to 16) in the ACE inhibitor group (n = 231), 8 (–1 to 17) in the ARB group (n = 217), and 12 (0 to 17) in the control group (n = 231) (median adjusted odds ratios of 0.77 [95% bayesian credible interval, 0.58-1.06] for improvement for ACE inhibitor and 0.76 [95% credible interval, 0.56-1.05] for ARB compared with control). The posterior probabilities that ACE inhibitors and ARBs worsened organ support–free days compared with control were 94.9% and 95.4%, respectively. 
Hospital survival occurred in 166 of 231 critically ill participants (71.9%) in the ACE inhibitor group, 152 of 217 (70.0%) in the ARB group, and 182 of 231 (78.8%) in the control group (posterior probabilities that ACE inhibitor and ARB worsened hospital survival compared with control were 95.3% and 98.1%, respectively). CONCLUSIONS AND RELEVANCE In this trial, among critically ill adults with COVID-19, initiation of an ACE inhibitor or ARB did not improve, and likely worsened, clinical outcomes. TRIAL REGISTRATION ClinicalTrials.gov Identifier: NCT0273570
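The survival counts quoted above allow a crude, unadjusted odds ratio as a sanity check; note the trial's reported ORs come from a covariate-adjusted bayesian cumulative logistic model, so the arithmetic below is illustrative only:

```python
def odds_ratio(a_events, a_n, b_events, b_n):
    """Unadjusted odds ratio of an event in group A versus group B."""
    odds_a = a_events / (a_n - a_events)
    odds_b = b_events / (b_n - b_events)
    return odds_a / odds_b

# Hospital survival from the abstract: ACE inhibitor 166/231 vs
# control 182/231. A crude OR below 1 points the same direction as
# the trial's adjusted estimate of harm.
print(round(odds_ratio(166, 231, 182, 231), 2))  # → 0.69
```

An OR of 1 would mean identical survival odds in both groups.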