7 research outputs found

    Thrittene, homologous with somatostatin-28(1-13), is a novel peptide in mammalian gut and circulation

    Preprosomatostatin is a gene expressed ubiquitously among vertebrates, and at least two duplications of this gene have occurred during evolution. Somatostatin-28 (S-28) and somatostatin-14 (S-14), C-terminal products of prosomatostatin (ProS), are differentially expressed in mammalian neurons, D cells, and enterocytes. One pathway for the generation of S-14 entails the excision of Arg13-Lys14 in S-28, leading to equivalent amounts of S-28(1-12). Using an antiserum (F-4) directed to the N-terminal region of S-28 that does not react with S-28(1-12), we detected a peptide, in addition to S-28 and ProS, that was present in human plasma and in the intestinal tract of rats and monkeys. This F-4-reactive peptide was purified from monkey ileum, and its amino acid sequence, molecular mass, and chromatographic characteristics conformed to those of S-28(1-13), a peptide not described heretofore. When extracts of the small intestine were measured by radioimmunoassay (RIA), there was a discordance in the ratio of peptides reacting with F-4 and those containing the C terminus of ProS, suggesting sites of synthesis for S-28(1-13) distinct from those for S-14 and S-28. This was supported by immunocytochemistry, wherein F-4 reactivity was localized in gastrointestinal (GI) endocrine cells and a widespread plexus of neurons within the wall of the distal gut, while immunoreactivity to the C-terminal domains of S-14 and S-28 was absent in these neurons. Further, F-4 immunoreactivity persisted in similar GI endocrine cells and myenteric neurons in mice with a targeted deletion of the preprosomatostatin gene. These data suggest a novel peptide, produced in the mammalian gut, that is homologous with the 13 proximal residues of S-28 but is not derived from the ProS gene. Pending characterization of the gene from which this peptide is derived, its distribution, and its function, we have designated this peptide thrittene. Its localization in both GI endocrine cells and gut neurons suggests that thrittene may function as both a hormone and a neurotransmitter.

    Author affiliations:
    Ensinck, John W. University of Washington, United States.
    Baskin, Denis G. University of Washington, United States; Department of Veterans Affairs Puget Sound Health Care System, United States.
    Vahl, Torsten P. University of Cincinnati, United States.
    Vogel, Robin E. University of Washington, United States.
    Laschansky, Ellen C. University of Washington, United States.
    Francis, Bruce H. University of Washington, United States.
    Hoffman, Ross C. ZymoGenetics, United States.
    Krakover, Jonathan D. ZymoGenetics, United States.
    Stamm, Michael R. ZymoGenetics, United States.
    Low, Malcolm J. Oregon Health Sciences University, United States.
    Rubinstein, Marcelo. Oregon Health Sciences University, United States; Consejo Nacional de Investigaciones Científicas y Técnicas, Instituto de Investigaciones en Ingeniería Genética y Biología Molecular "Dr. Héctor N. Torres", Argentina.
    Otero Corchon, Veronica. Oregon Health Sciences University, United States.
    D'Alessio, David A. University of Cincinnati, United States.

    Rational Design of a Multiepitope Vaccine Encoding T-Lymphocyte Epitopes for Treatment of Chronic Hepatitis B Virus Infections

    Protein sequences from multiple hepatitis B virus (HBV) isolates were analyzed for the presence of amino acid motifs characteristic of cytotoxic T-lymphocyte (CTL) and helper T-lymphocyte (HTL) epitopes, with the goal of identifying conserved epitopes suitable for use in a therapeutic vaccine. Specifically, sequences bearing HLA-A1, -A2, -A3, -A24, -B7, and -DR supertype binding motifs were identified, synthesized as peptides, and tested for binding to soluble HLA. The immunogenicity of peptides that bound with moderate to high affinity subsequently was assessed using HLA transgenic mice (CTL) and HLA cross-reacting H-2bxd (BALB/c × C57BL/6J) mice (HTL). Through this process, 30 CTL and 16 HTL epitopes were selected as the set most useful for vaccine design, based on epitope conservation among HBV sequences and HLA-based predicted population coverage in diverse ethnic groups. A plasmid DNA-based vaccine encoding the epitopes as a single gene product, with each epitope separated by spacer residues to enhance appropriate epitope processing, was designed. Immunogenicity testing in mice demonstrated the induction of multiple CTL and HTL responses. Furthermore, as a complementary approach, mass spectrometry allowed the identification of correctly processed and major histocompatibility complex-presented epitopes from human cells transfected with the DNA plasmid. A heterologous prime-boost immunization with the plasmid DNA and a recombinant modified vaccinia virus Ankara (MVA) gave further enhancement of the immune responses. Thus, a multiepitope therapeutic vaccine candidate capable of stimulating those cellular immune responses thought to be essential for controlling and clearing HBV infection was successfully designed and evaluated in vitro and in HLA transgenic mice.
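
    For illustration only, the sketch below shows one way a protein sequence could be scanned for simplified HLA-A2 supertype anchor-residue patterns in 9-mer windows, the kind of first-pass motif screen described above. The anchor sets, function name, and example sequence fragment are assumptions made for this sketch; they are not the motif definitions, tools, or binding measurements used by the authors.

```python
# Simplified, illustrative HLA-A2 supertype anchor pattern for 9-mers:
# a small/hydrophobic residue at position 2 and at the C terminus.
# This is an approximation for demonstration, not a validated motif library.
HLA_A2_ANCHORS = {"P2": set("LIVMATQ"), "P9": set("LIVMAT")}

def find_a2_motif_9mers(sequence: str):
    """Return (1-based start, peptide) for 9-mers whose anchors match the pattern."""
    hits = []
    for i in range(len(sequence) - 8):
        peptide = sequence[i:i + 9]
        if peptide[1] in HLA_A2_ANCHORS["P2"] and peptide[8] in HLA_A2_ANCHORS["P9"]:
            hits.append((i + 1, peptide))
    return hits

# Short fragment of an HBV core protein sequence, used here purely as an example.
example = "MDIDPYKEFGATVELLSFLPSDFFPSV"
for pos, pep in find_a2_motif_9mers(example):
    print(f"position {pos}: {pep}")
```

    In the actual study, candidate peptides identified this way would still need to be synthesized, tested for binding to soluble HLA, and assessed for immunogenicity; a motif scan only narrows the search space.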

    Beyond the Imitation Game: Quantifying and extrapolating the capabilities of language models

    Language models demonstrate both quantitative improvement and new qualitative capabilities with increasing scale. Despite their potentially transformative impact, these new capabilities are as yet poorly characterized. In order to inform future research, prepare for disruptive new model capabilities, and ameliorate socially harmful effects, it is vital that we understand the present and near-future capabilities and limitations of language models. To address this challenge, we introduce the Beyond the Imitation Game benchmark (BIG-bench). BIG-bench currently consists of 204 tasks, contributed by 442 authors across 132 institutions. Task topics are diverse, drawing problems from linguistics, childhood development, math, common-sense reasoning, biology, physics, social bias, software development, and beyond. BIG-bench focuses on tasks that are believed to be beyond the capabilities of current language models. We evaluate the behavior of OpenAI's GPT models, Google-internal dense transformer architectures, and Switch-style sparse transformers on BIG-bench, across model sizes spanning millions to hundreds of billions of parameters. In addition, a team of human expert raters performed all tasks in order to provide a strong baseline. Findings include: model performance and calibration both improve with scale, but are poor in absolute terms (and when compared with rater performance); performance is remarkably similar across model classes, though with benefits from sparsity; tasks that improve gradually and predictably commonly involve a large knowledge or memorization component, whereas tasks that exhibit "breakthrough" behavior at a critical scale often involve multiple steps or components, or brittle metrics; social bias typically increases with scale in settings with ambiguous context, but this can be improved with prompting.
    Comment: 27 pages, 17 figures + references and appendices; repo: https://github.com/google/BIG-bench
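
    As a rough illustration of how a single JSON-format BIG-bench task can be scored, the sketch below assumes the simple task schema used by many tasks in the repository (an "examples" list of records with "input" and "target" fields) and uses a placeholder query_model function standing in for whichever language model is being evaluated; the task path is hypothetical, and the real benchmark harness additionally supports multiple-choice tasks and metrics such as calibration.

```python
import json

def query_model(prompt: str) -> str:
    """Placeholder for a call to the language model under evaluation."""
    raise NotImplementedError("plug in a language model call here")

def exact_match_accuracy(task_path: str) -> float:
    """Score a simple JSON-format task by exact string match against its targets."""
    with open(task_path) as f:
        task = json.load(f)
    examples = task["examples"]
    correct = 0
    for ex in examples:
        prediction = query_model(ex["input"]).strip()
        # Some tasks list several acceptable targets; accept a match to any of them.
        targets = ex["target"] if isinstance(ex["target"], list) else [ex["target"]]
        if prediction in {t.strip() for t in targets}:
            correct += 1
    return correct / len(examples)

# Hypothetical usage with a task file from the repository checkout:
# print(exact_match_accuracy("benchmark_tasks/some_task/task.json"))
```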