
    Prediction of the binding affinities of peptides to class II MHC using a regularized thermodynamic model

    Background: The binding of peptide fragments of extracellular proteins to class II MHC is a crucial event in the adaptive immune response. Each MHC allotype generally binds a distinct subset of peptides, and the enormous number of possible peptide epitopes prevents their complete experimental characterization. Computational methods can use the limited experimental data to predict the binding affinities of peptides to class II MHC. Results: We have developed the Regularized Thermodynamic Average (RTA) method for predicting the affinities of peptides binding to class II MHC. RTA accounts for all possible peptide binding conformations using a thermodynamic average and includes a parameter constraint for regularization to improve accuracy on novel data. RTA achieved higher accuracy, as measured by AUC, than SMM-align on the same data for all 17 MHC allotypes examined, and gave the highest accuracy on all but three allotypes when compared with results from 9 different prediction methods applied to the same data. In addition, the method correctly predicted the peptide binding register for 17 of 18 peptide-MHC complexes. Finally, we found that suboptimal peptide binding registers, which are often ignored in other prediction methods, contributed at least 50% of the total binding energy for approximately 20% of the peptides. Conclusions: The RTA method accurately predicts peptide binding affinities to class II MHC and accounts for multiple peptide binding registers while reducing overfitting through regularization. The method has potential applications in vaccine design and in understanding autoimmune disorders. A web server implementing the RTA prediction method is available at http://bordnerlab.org/RTA/
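    The thermodynamic average over binding registers that the abstract describes can be illustrated with a Boltzmann-weighted sum of per-register binding free energies. This is a minimal sketch under assumed units and temperature, not the published RTA implementation; the energies below are hypothetical.

```python
import math

RT = 0.593  # kcal/mol at ~298 K (assumed temperature)

def effective_binding_energy(register_energies):
    """Boltzmann-weighted average of per-register binding free
    energies (kcal/mol): dG_eff = -RT * ln(sum_i exp(-dG_i / RT))."""
    z = sum(math.exp(-dg / RT) for dg in register_energies)
    return -RT * math.log(z)

def register_weights(register_energies):
    """Fractional contribution of each binding register to the total."""
    z = sum(math.exp(-dg / RT) for dg in register_energies)
    return [math.exp(-dg / RT) / z for dg in register_energies]

# hypothetical free energies for three candidate binding registers
dgs = [-7.2, -6.1, -5.8]
dg_eff = effective_binding_energy(dgs)  # at least as favorable as the best register
weights = register_weights(dgs)         # suboptimal registers still carry weight
```

    The weights make the abstract's point about suboptimal registers concrete: even when one register dominates, the remaining registers retain a nonzero share of the total that a fixed-register model would discard.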

    Sotigalimab and/or nivolumab with chemotherapy in first-line metastatic pancreatic cancer: clinical and immunologic analyses from the randomized phase 2 PRINCE trial

    Chemotherapy combined with immunotherapy has improved the treatment of certain solid tumors, but effective regimens remain elusive for pancreatic ductal adenocarcinoma (PDAC). We conducted a randomized phase 2 trial evaluating the efficacy of nivolumab (nivo; anti-PD-1) and/or sotigalimab (sotiga; CD40 agonistic antibody) with gemcitabine/nab-paclitaxel (chemotherapy) in patients with first-line metastatic PDAC (NCT03214250). In 105 patients analyzed for efficacy, the primary endpoint of 1-year overall survival (OS) was met for nivo/chemo (57.7%, P = 0.006 compared to historical 1-year OS of 35%, n = 34) but was not met for sotiga/chemo (48.1%, P = 0.062, n = 36) or sotiga/nivo/chemo (41.3%, P = 0.223, n = 35). Secondary endpoints were progression-free survival, objective response rate, disease control rate, duration of response and safety. Treatment-related adverse event rates were similar across arms. Multi-omic circulating and tumor biomarker analyses identified distinct immune signatures associated with survival for nivo/chemo and sotiga/chemo. Survival after nivo/chemo correlated with a less suppressive tumor microenvironment and higher numbers of activated, antigen-experienced circulating T cells at baseline. Survival after sotiga/chemo correlated with greater intratumoral CD4 T cell infiltration and circulating differentiated CD4 T cells and antigen-presenting cells. A patient subset benefitting from sotiga/nivo/chemo was not identified. Collectively, these analyses suggest potential treatment-specific correlates of efficacy and may enable biomarker-selected patient populations in subsequent PDAC chemoimmunotherapy trials

    Denotative and Connotative Semantics in Hypermedia: Proposal for a Semiotic-Aware Architecture

    In this article we claim that the linguistic-centered view within hypermedia systems needs refinement through a semiotic-based approach before real interoperation between media can be achieved. We discuss the problems of visual signification for images and video in dynamic systems, in which users can access visual material in a non-linear fashion. We describe how semiotics can help overcome such problems, by allowing descriptions of the material on both denotative and connotative levels. Finally we propose an architecture for a dynamic semiotic-aware hypermedia system

    Towards Universal Structure-Based Prediction of Class II MHC Epitopes for Diverse Allotypes

    The binding of peptide fragments of antigens to class II MHC proteins is a crucial step in initiating a helper T cell immune response. The discovery of these peptide epitopes is important for understanding the normal immune response and its misregulation in autoimmunity and allergies and also for vaccine design. In spite of their biomedical importance, the high diversity of class II MHC proteins combined with the large number of possible peptide sequences make comprehensive experimental determination of epitopes for all MHC allotypes infeasible. Computational methods can address this need by predicting epitopes for a particular MHC allotype. We present a structure-based method for predicting class II epitopes that combines molecular mechanics docking of a fully flexible peptide into the MHC binding cleft followed by binding affinity prediction using a machine learning classifier trained on interaction energy components calculated from the docking solution. Although the primary advantage of structure-based prediction methods over the commonly employed sequence-based methods is their applicability to essentially any MHC allotype, this has not yet been convincingly demonstrated. In order to test the transferability of the prediction method to different MHC proteins, we trained the scoring method on binding data for DRB1*0101 and used it to make predictions for multiple MHC allotypes with distinct peptide binding specificities including representatives from the other human class II MHC loci, HLA-DP and HLA-DQ, as well as for two murine allotypes. The results showed that the prediction method was able to achieve significant discrimination between epitope and non-epitope peptides for all MHC allotypes examined, based on AUC values in the range 0.632–0.821. 
We also discuss how accounting for peptide binding in multiple registers largely explains why prediction methods perform systematically worse for class II MHC than for class I MHC, based on quantitative estimates of prediction performance for class II peptide binding in a fixed register
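    The AUC values quoted above are the standard rank-based measure of discrimination between epitopes and non-epitopes. As a reference point, AUC can be computed directly as the Mann-Whitney statistic over positive/negative score pairs; the scores below are invented for illustration.

```python
def auc(pos_scores, neg_scores):
    """Area under the ROC curve, computed as the probability that a
    randomly chosen positive (epitope) outscores a randomly chosen
    negative (non-epitope), with ties counted as half a win."""
    wins = 0.0
    for p in pos_scores:
        for n in neg_scores:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(pos_scores) * len(neg_scores))

# hypothetical predicted binding scores: 5 of 6 pairs ranked correctly
score = auc([0.9, 0.8, 0.4], [0.5, 0.3])
```

    An AUC of 0.5 corresponds to random ranking, which is why values in the reported 0.632–0.821 range indicate significant, if imperfect, discrimination.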

    Financial Characteristics of Companies Audited by Large Audit Firms

    Purpose – The purpose of this paper is to examine how financial characteristics are associated with the choice of a big audit firm, with further investigation of the agency costs of free cash flows. Design/methodology/approach – The sample used for this work includes industrial listed companies from Germany and France. To test our hypothesis, we used a number of logit models, extending the standard audit-firm selection model to include the variables of interest. Following previous work, our dependent dummy variable is Big4 or non-Big4. Findings – We observed that most independent variables for the German companies show results similar to previous work, but we did not find the same results for the French industry. Moreover, our findings suggest that total debt and dividends can be an important factor in the choice of a large audit firm, reducing the agency costs of free cash flows. Research limitations/implications – This study has some limitations in the measurement of audit fees and also creates opportunities for additional research. Originality/value – The paper provides one aspect of the relationship between the agency costs of free cash flow and the choice of a large auditing firm, which stems from investors' demand for higher-quality audits

    ENIGMA and global neuroscience: A decade of large-scale studies of the brain in health and disease across more than 40 countries

    This review summarizes the last decade of work by the ENIGMA (Enhancing NeuroImaging Genetics through Meta Analysis) Consortium, a global alliance of over 1400 scientists across 43 countries, studying the human brain in health and disease. Building on large-scale genetic studies that discovered the first robustly replicated genetic loci associated with brain metrics, ENIGMA has diversified into over 50 working groups (WGs), pooling worldwide data and expertise to answer fundamental questions in neuroscience, psychiatry, neurology, and genetics. Most ENIGMA WGs focus on specific psychiatric and neurological conditions; other WGs study normal variation due to sex and gender differences, or development and aging; still other WGs develop methodological pipelines and tools to facilitate harmonized analyses of "big data" (i.e., genetic and epigenetic data, multimodal MRI, and electroencephalography data). These international efforts have yielded the largest neuroimaging studies to date in schizophrenia, bipolar disorder, major depressive disorder, post-traumatic stress disorder, substance use disorders, obsessive-compulsive disorder, attention-deficit/hyperactivity disorder, autism spectrum disorders, epilepsy, and 22q11.2 deletion syndrome. More recent ENIGMA WGs have formed to study anxiety disorders, suicidal thoughts and behavior, sleep and insomnia, eating disorders, irritability, brain injury, antisocial personality and conduct disorder, and dissociative identity disorder. Here, we summarize the first decade of ENIGMA's activities and ongoing projects, and describe the successes and challenges encountered along the way. We highlight the advantages of collaborative large-scale coordinated data analyses for testing reproducibility and robustness of findings, offering the opportunity to identify brain systems involved in clinical syndromes across diverse samples and associated genetic, environmental, demographic, cognitive, and psychosocial factors

    Evaluation of individual and ensemble probabilistic forecasts of COVID-19 mortality in the United States

    Short-term probabilistic forecasts of the trajectory of the COVID-19 pandemic in the United States have served as a visible and important communication channel between the scientific modeling community and both the general public and decision-makers. Forecasting models provide specific, quantitative, and evaluable predictions that inform short-term decisions such as healthcare staffing needs, school closures, and allocation of medical supplies. Starting in April 2020, the US COVID-19 Forecast Hub (https://covid19forecasthub.org/) collected, disseminated, and synthesized tens of millions of specific predictions from more than 90 different academic, industry, and independent research groups. A multimodel ensemble forecast that combined predictions from dozens of groups every week provided the most consistently accurate probabilistic forecasts of incident deaths due to COVID-19 at the state and national level from April 2020 through October 2021. The performance of 27 individual models that submitted complete forecasts of COVID-19 deaths consistently throughout this period showed high variability in forecast skill across time, geospatial units, and forecast horizons. Two-thirds of the models evaluated showed better accuracy than a naïve baseline model. Forecast accuracy degraded as models made predictions further into the future, with probabilistic error at a 20-wk horizon three to five times larger than when predicting at a 1-wk horizon. This project underscores the role that collaboration and active coordination between governmental public-health agencies, academic modeling teams, and industry partners can play in developing modern modeling capabilities to support local, state, and federal response to outbreaks
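    A multimodel ensemble of the kind described above can be sketched simply: for each reported quantile level, combine the member models' predicted values, for example by taking their median. This is an illustrative simplification (the Hub's actual ensemble procedures varied over time), and the forecasts below are invented.

```python
import statistics

def median_ensemble(model_forecasts, quantile_levels):
    """Combine per-model quantile forecasts into a single ensemble
    forecast by taking the median across models at each quantile level."""
    return {
        q: statistics.median(f[q] for f in model_forecasts)
        for q in quantile_levels
    }

# three hypothetical weekly death forecasts (quantile level -> value)
models = [
    {0.025: 80, 0.5: 100, 0.975: 130},
    {0.025: 90, 0.5: 120, 0.975: 160},
    {0.025: 70, 0.5: 110, 0.975: 150},
]
ensemble = median_ensemble(models, [0.025, 0.5, 0.975])
# ensemble[0.5] is the median of 100, 120, and 110, i.e. 110
```

    Taking the median at each quantile level makes the ensemble robust to any single outlier model, which is one reason such combinations tend to be more consistently accurate than individual members.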

    Rare coding variants in PLCG2, ABI3, and TREM2 implicate microglial-mediated innate immunity in Alzheimer's disease

    We identified rare coding variants associated with Alzheimer's disease (AD) in a 3-stage case-control study of 85,133 subjects. In stage 1, 34,174 samples were genotyped using a whole-exome microarray. In stage 2, we tested associated variants (P < 1×10^-4) in 35,962 independent samples using de novo genotyping and imputed genotypes. In stage 3, an additional 14,997 samples were used to test the most significant stage 2 associations (P < 5×10^-8) using imputed genotypes. We observed 3 novel genome-wide significant (GWS) AD-associated non-synonymous variants: a protective variant in PLCG2 (rs72824905/p.P522R, P = 5.38×10^-10, OR = 0.68, MAF cases = 0.0059, MAF controls = 0.0093), a risk variant in ABI3 (rs616338/p.S209F, P = 4.56×10^-10, OR = 1.43, MAF cases = 0.011, MAF controls = 0.008), and a novel GWS variant in TREM2 (rs143332484/p.R62H, P = 1.55×10^-14, OR = 1.67, MAF cases = 0.0143, MAF controls = 0.0089), a known AD susceptibility gene. These protein-coding changes are in genes highly expressed in microglia and highlight an immune-related protein-protein interaction network enriched for previously identified AD risk genes. These genetic findings provide additional evidence that the microglia-mediated innate immune response contributes directly to AD development

    The United States COVID-19 Forecast Hub dataset

    Academic researchers, government agencies, industry groups, and individuals have produced forecasts at an unprecedented scale during the COVID-19 pandemic. To leverage these forecasts, the United States Centers for Disease Control and Prevention (CDC) partnered with an academic research lab at the University of Massachusetts Amherst to create the US COVID-19 Forecast Hub. Launched in April 2020, the Forecast Hub is a dataset with point and probabilistic forecasts of incident cases, incident hospitalizations, incident deaths, and cumulative deaths due to COVID-19 at the county, state, and national levels in the United States. Included forecasts represent a variety of modeling approaches, data sources, and assumptions regarding the spread of COVID-19. The goal of this dataset is to establish a standardized and comparable set of short-term forecasts from modeling teams. These data can be used to develop ensemble models, communicate forecasts to the public, create visualizations, compare models, and inform policies regarding COVID-19 mitigation. These open-source data are available via download from GitHub, through an online API, and through R packages