64 research outputs found

    Multiplex giant magnetoresistive biosensor microarrays identify interferon-associated autoantibodies in systemic lupus erythematosus.

    High-titer, class-switched autoantibodies are a hallmark of systemic lupus erythematosus (SLE). Dysregulation of the interferon (IFN) pathway is observed in individuals with active SLE, although the association of specific autoantibodies with the chemokine score, a combined measurement of three IFN-regulated chemokines, is not known. To identify autoantibodies associated with the chemokine score, we developed giant magnetoresistive (GMR) biosensor microarrays, which allow the parallel measurement of multiple serum antibodies to autoantigens and peptides. We used the microarrays to analyze serum samples from SLE patients and found that individuals with high chemokine scores had significantly greater reactivity to 13 autoantigens than individuals with low chemokine scores. Our findings demonstrate that multiple autoantibodies, including antibodies to U1-70K and modified histone H2B tails, are associated with IFN dysregulation in SLE. Further, they show that the microarrays are capable of identifying autoantibodies associated with relevant clinical manifestations of SLE, with potential for use as biomarkers in clinical practice.
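
    The chemokine score is described above only as a combined measurement of three IFN-regulated chemokines, without a formula. The sketch below illustrates one generic way such a composite can be built (a sum of standardized, log-transformed levels against a reference cohort); the chemokine names, reference statistics and scoring rule are placeholder assumptions for illustration, not the scoring used in this study.

```python
import math

def chemokine_score(levels, reference_stats):
    """Composite score: sum of standardized, log10-transformed chemokine levels.

    levels: chemokine name -> measured serum concentration (e.g. pg/mL)
    reference_stats: chemokine name -> (mean, sd) of log10 levels in a
        reference cohort (e.g. healthy controls); placeholder values below.
    """
    score = 0.0
    for name, value in levels.items():
        mean, sd = reference_stats[name]
        score += (math.log10(value) - mean) / sd  # z-score on the log scale
    return score

# Placeholder chemokine names, reference statistics and patient levels (illustrative only)
reference = {"chemokine_A": (2.1, 0.30), "chemokine_B": (2.4, 0.25), "chemokine_C": (2.9, 0.35)}
patient = {"chemokine_A": 310.0, "chemokine_B": 420.0, "chemokine_C": 1500.0}
print(f"chemokine score: {chemokine_score(patient, reference):.2f}")
```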

    Fenebrutinib in H1 antihistamine-refractory chronic spontaneous urticaria: a randomized phase 2 trial

    Bruton’s tyrosine kinase (BTK) is crucial for FcΔRI-mediated mast cell activation and essential for autoantibody production by B cells in chronic spontaneous urticaria (CSU). Fenebrutinib, an orally administered, potent, highly selective, reversible BTK inhibitor, may be effective in CSU. This double-blind, placebo-controlled, phase 2 trial (EudraCT ID 2016-004624-35) randomized 93 adults with antihistamine-refractory CSU to fenebrutinib 50 mg daily, 150 mg daily or 200 mg twice daily, or placebo, for 8 weeks. The primary end point was change from baseline in urticaria activity score over 7 d (UAS7) at week 8. Secondary end points were the change from baseline in UAS7 at week 4 and the proportion of well-controlled patients (UAS7 ≀ 6) at week 8. Fenebrutinib efficacy in patients with type IIb autoimmunity and effects on IgG-anti-FcΔRI were exploratory end points. Safety was also evaluated. The primary end point was met, with dose-dependent improvements in UAS7 at week 8 occurring at 200 mg twice daily and 150 mg daily, but not at 50 mg daily of fenebrutinib versus placebo. Asymptomatic, reversible grade 2 and 3 liver transaminase elevations occurred in the fenebrutinib 150 mg daily and 200 mg twice daily groups (2 patients each). Fenebrutinib diminished disease activity in patients with antihistamine-refractory CSU, including patients with refractory type IIb autoimmunity. These results support the potential use of BTK inhibition in antihistamine-refractory CSU.
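
    The primary end point, UAS7, is a composite patient-reported score. A minimal sketch of how UAS7 is commonly computed in CSU studies is shown below; the diary values are hypothetical, and the trial's exact scoring conventions may differ.

```python
def uas7(daily_itch, daily_hives):
    """UAS7 as commonly defined in CSU trials: for each of 7 consecutive days,
    an itch-severity score (0-3) is added to a hive-count score (0-3); the
    seven daily totals are then summed, giving a range of 0-42."""
    assert len(daily_itch) == len(daily_hives) == 7
    assert all(0 <= s <= 3 for s in daily_itch + daily_hives)
    return sum(i + h for i, h in zip(daily_itch, daily_hives))

# Hypothetical one-week symptom diary (not trial data)
itch = [2, 2, 3, 1, 2, 1, 2]
hives = [1, 2, 2, 1, 1, 0, 1]
score = uas7(itch, hives)
print(score, "well controlled" if score <= 6 else "not well controlled")
```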

    A Balance of BMP and Notch Activity Regulates Neurogenesis and Olfactory Nerve Formation

    Although the function of the adult olfactory system has been thoroughly studied, the molecular mechanisms regulating the initial formation of the olfactory nerve, the first cranial nerve, remain poorly defined. Here, we provide evidence that modulation of both Notch and bone morphogenetic protein (BMP) signaling affects the generation of neurons in the olfactory epithelium and reduces the number of migratory neurons, so-called epithelioid cells. We show that this reduction of epithelial and migratory neurons is followed by a subsequent failure or complete absence of olfactory nerve formation. These data provide new insights into the early generation of neurons in the olfactory epithelium and the initial formation of the olfactory nerve tract. Our results present a novel mechanism in which BMP signals negatively affect Notch activity in a dominant manner in the olfactory epithelium, thereby regulating neurogenesis, and explain why a balance of BMP and Notch activity is critical for the generation of neurons and proper development of the olfactory nerve.

    Injury rates and injury risk factors among Federal Bureau of Investigation new agent trainees

    Background: A one-year prospective examination of injury rates and injury risk factors was conducted in Federal Bureau of Investigation (FBI) new agent training.
    Methods: Injury incidents were obtained from medical records and injury compensation forms. Potential injury risk factors were acquired from a lifestyle questionnaire and existing data at the FBI Academy.
    Results: A total of 426 men and 105 women participated in the project. Thirty-five percent of men and 42% of women experienced one or more injuries during training. The injury incidence rate was 2.5 and 3.2 injuries/1,000 person-days for men and women, respectively (risk ratio (women/men) = 1.3, 95% confidence interval = 0.9-1.7). The activities most commonly associated with injuries (% of total) were defensive tactics training (58%), physical fitness training (20%), physical fitness testing (5%), and firearms training (3%). Among the men, higher injury risk was associated with older age, slower 300-meter sprint time, slower 1.5-mile run time, lower total points on the physical fitness test (PFT), lower self-rated physical activity, lower frequency of aerobic exercise, a prior upper or lower limb injury, and prior foot or knee pain that limited activity. Among the women, higher injury risk was associated with slower 300-meter sprint time, slower 1.5-mile run time, lower total points on the PFT, and prior back pain that limited activity.
    Conclusion: The results of this investigation supported those of a previous retrospective investigation emphasizing that lower fitness and self-reported pain limiting activity were associated with higher injury risk among FBI new agents.
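
    The results above report injuries per 1,000 person-days and a women/men ratio with a 95% confidence interval. The sketch below shows a standard way to compute these quantities from raw counts and exposure time, here as a rate ratio with a log-scale normal approximation; the counts and person-days are hypothetical placeholders chosen only to roughly reproduce the reported rates, since the abstract does not give the underlying numbers.

```python
import math

def incidence_rate(injuries, person_days, per=1000):
    """Injuries per `per` person-days of exposure."""
    return injuries / person_days * per

def rate_ratio_ci(injuries_a, person_days_a, injuries_b, person_days_b, z=1.96):
    """Rate ratio (group A / group B) with a normal-approximation 95% CI
    on the log scale (SE of the log ratio = sqrt(1/a + 1/b))."""
    rr = (injuries_a / person_days_a) / (injuries_b / person_days_b)
    se = math.sqrt(1 / injuries_a + 1 / injuries_b)
    return rr, rr * math.exp(-z * se), rr * math.exp(z * se)

# Hypothetical counts and exposure (NOT the study data, which reports only rates)
women = dict(injuries=60, person_days=19_000)
men = dict(injuries=190, person_days=77_000)
print(f"women: {incidence_rate(**women):.1f}, men: {incidence_rate(**men):.1f} injuries/1,000 person-days")
rr, lo, hi = rate_ratio_ci(women["injuries"], women["person_days"],
                           men["injuries"], men["person_days"])
print(f"rate ratio (women/men) = {rr:.2f} (95% CI {lo:.2f}-{hi:.2f})")
```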

    Global, regional, and national under-5 mortality, adult mortality, age-specific mortality, and life expectancy, 1970–2016: a systematic analysis for the Global Burden of Disease Study 2016

    BACKGROUND: Detailed assessments of mortality patterns, particularly age-specific mortality, represent a crucial input that enables health systems to target interventions to specific populations. Understanding how all-cause mortality has changed with respect to development status can identify exemplars for best practice. To accomplish this, the Global Burden of Diseases, Injuries, and Risk Factors Study 2016 (GBD 2016) estimated age-specific and sex-specific all-cause mortality between 1970 and 2016 for 195 countries and territories and at the subnational level for the five countries with a population greater than 200 million in 2016.
    METHODS: We have evaluated how well civil registration systems captured deaths using a set of demographic methods called death distribution methods for adults and from consideration of survey and census data for children younger than 5 years. We generated an overall assessment of completeness of registration of deaths by dividing registered deaths in each location-year by our estimate of all-age deaths generated from our overall estimation process. For 163 locations, including subnational units in countries with a population greater than 200 million with complete vital registration (VR) systems, our estimates were largely driven by the observed data, with corrections for small fluctuations in numbers and estimation for recent years where there were lags in data reporting (lags were variable by location, generally between 1 year and 6 years). For other locations, we took advantage of different data sources available to measure under-5 mortality rates (U5MR) using complete birth histories, summary birth histories, and incomplete VR with adjustments; we measured adult mortality rate (the probability of death in individuals aged 15-60 years) using adjusted incomplete VR, sibling histories, and household death recall. We used the U5MR and adult mortality rate, together with crude death rate due to HIV in the GBD model life table system, to estimate age-specific and sex-specific death rates for each location-year. Using various international databases, we identified fatal discontinuities, which we defined as increases in the death rate of more than one death per million, resulting from conflict and terrorism, natural disasters, major transport or technological accidents, and a subset of epidemic infectious diseases; these were added to estimates in the relevant years. In 47 countries with an identified peak adult prevalence for HIV/AIDS of more than 0·5% and where VR systems were less than 65% complete, we informed our estimates of age-sex-specific mortality using the Estimation and Projection Package (EPP)-Spectrum model fitted to national HIV/AIDS prevalence surveys and antenatal clinic serosurveillance systems. We estimated stillbirths, early neonatal, late neonatal, and childhood mortality using both survey and VR data in spatiotemporal Gaussian process regression models. We estimated abridged life tables for all location-years using age-specific death rates. We grouped locations into development quintiles based on the Socio-demographic Index (SDI) and analysed mortality trends by quintile. Using spline regression, we estimated the expected mortality rate for each age-sex group as a function of SDI. We identified countries with higher life expectancy than expected by comparing observed life expectancy to anticipated life expectancy on the basis of development status alone.
    FINDINGS: Completeness in the registration of deaths increased from 28% in 1970 to a peak of 45% in 2013; completeness was lower after 2013 because of lags in reporting. Total deaths in children younger than 5 years decreased from 1970 to 2016, and slower decreases occurred at ages 5-24 years. By contrast, numbers of adult deaths increased in each 5-year age bracket above the age of 25 years. The distribution of annualised rates of change in age-specific mortality rate differed over the period 2000 to 2016 compared with earlier decades: increasing annualised rates of change were less frequent, although rising annualised rates of change still occurred in some locations, particularly for adolescent and younger adult age groups. Rates of stillbirths and under-5 mortality both decreased globally from 1970. Evidence for global convergence of death rates was mixed; although the absolute difference between age-standardised death rates narrowed between countries at the lowest and highest levels of SDI, the ratio of these death rates, a measure of relative inequality, increased slightly. There was a strong shift between 1970 and 2016 toward higher life expectancy, most noticeably at higher levels of SDI. Among countries with populations greater than 1 million in 2016, life expectancy at birth was highest for women in Japan, at 86·9 years (95% UI 86·7-87·2), and for men in Singapore, at 81·3 years (78·8-83·7) in 2016. Male life expectancy was generally lower than female life expectancy between 1970 and 2016, an
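
    The methods above rest on standard demographic building blocks, including abridged life tables estimated from age-specific death rates. As a rough illustration of that single step, the sketch below converts age-specific death rates into life expectancy at birth using the simple mid-interval assumption; the age groups, rates and a-values are illustrative assumptions, not the GBD model life table system.

```python
def abridged_life_table(widths, mx, radix=100_000):
    """Life expectancy at birth from age-specific death rates (mx) for abridged
    age intervals of the given widths in years (None = open-ended final
    interval). Uses the mid-interval assumption a = n/2, a simplification of
    the refined a-values used in full life-table work."""
    lx, Lx = radix, []
    for n, m in zip(widths, mx):
        if n is None:                 # open-ended last interval: q = 1, L = l/m
            Lx.append(lx / m)
            lx = 0.0
        else:
            a = n / 2
            q = n * m / (1 + (n - a) * m)   # probability of dying in interval
            deaths = lx * q
            Lx.append(n * (lx - deaths) + a * deaths)  # person-years lived
            lx -= deaths
    return sum(Lx) / radix            # e0 = T0 / l0

# Hypothetical death rates for ages 0, 1-4, 5-14, 15-59, 60-79 and 80+
# (illustrative values, not GBD estimates)
widths = [1, 4, 10, 45, 20, None]
rates = [0.03, 0.004, 0.001, 0.004, 0.03, 0.15]
print(f"life expectancy at birth ≈ {abridged_life_table(widths, rates):.1f} years")
```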

    General anaesthetic and airway management practice for obstetric surgery in England: a prospective, multi-centre observational study

    There are no current descriptions of general anaesthesia characteristics for obstetric surgery, despite recent changes to patient baseline characteristics and airway management guidelines. This analysis of data from the direct reporting of awareness in maternity patients (DREAMY) study of accidental awareness during obstetric anaesthesia aimed to describe practice for obstetric general anaesthesia in England and compare it with earlier surveys and best-practice recommendations. Consenting patients who received general anaesthesia for obstetric surgery in 72 hospitals from May 2017 to August 2018 were included. Baseline characteristics, airway management, anaesthetic techniques and major complications were collected. Descriptive analysis, binary logistic regression modelling and comparisons with earlier data were conducted. Data were collected from 3117 procedures, including 2554 (81.9%) caesarean deliveries. Thiopental was the induction drug in 1649 (52.9%) patients, compared with propofol in 1419 (45.5%). Suxamethonium was the neuromuscular blocking drug for tracheal intubation in 2631 (86.1%), compared with rocuronium in 367 (11.8%). Difficult tracheal intubation was reported in 1 in 19 (95% CI 1 in 16-22) and failed intubation in 1 in 312 (95% CI 1 in 169-667). Obese patients were over-represented compared with national baselines, and obesity was associated with difficult, but not failed, intubation. There was more evidence of change in practice for induction drugs (increased use of propofol) than for neuromuscular blocking drugs (suxamethonium remains the most popular). There was evidence of improvement in practice, with increased monitoring and reversal of neuromuscular blockade (although this remains suboptimal). Despite a high risk of difficult intubation in this population, videolaryngoscopy was rarely used (1.9%).
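
    The incidences above are presented in "1 in N" form with 95% confidence intervals. The sketch below reproduces that style of presentation from raw counts using a Wilson score interval; both the event counts and the interval method are assumptions for illustration, since the abstract reports only the derived rates.

```python
import math

def wilson_ci(events, n, z=1.96):
    """Wilson score interval for a proportion (one common choice; the exact
    interval method used by the study is not stated in the abstract)."""
    p = events / n
    denom = 1 + z**2 / n
    centre = (p + z**2 / (2 * n)) / denom
    half = z * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2)) / denom
    return p, centre - half, centre + half

def as_one_in_n(p):
    return f"1 in {round(1 / p)}"

# Hypothetical event counts against the 3117 reported procedures
# (assumed for illustration; the abstract reports only the derived rates)
for label, events in [("difficult intubation", 164), ("failed intubation", 10)]:
    p, lo, hi = wilson_ci(events, 3117)
    # a higher proportion corresponds to a smaller "1 in N" denominator
    print(f"{label}: {as_one_in_n(p)} (95% CI {as_one_in_n(hi)} to {as_one_in_n(lo)})")
```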