10 research outputs found

    Ozone Measurements with Meteors: A Revisit

    Get PDF
    Understanding the role of ozone in the Mesosphere/Lower Thermosphere (MLT) region is essential for characterizing atmospheric processes in the upper atmosphere. Earlier studies have shown that it is possible to use overdense meteor trails to measure ozone concentration in the meteor region. Here we revisit this topic by comparing a compilation of radar observations with satellite measurements. We observe modest agreement between the values derived from the two methods, which confirms the usefulness of the meteor trail technique for measuring ozone content at certain heights in the MLT region. Future simultaneous measurements will help quantify the performance of this technique. Comment: MNRAS in press

    Distinct regulatory networks control toxin gene expression in elapid and viperid snakes

    Get PDF
    Background: Venom systems are ideal models for studying the genetic regulatory mechanisms that underpin evolutionary novelty. Snake venom glands are thought to share a common origin, but there are major distinctions between the venom toxins of the medically significant snake families Elapidae and Viperidae, and investigations of toxin gene regulation in elapid snakes have been limited. Here, we used high-throughput RNA sequencing to profile gene expression and microRNAs in active (milked) and resting (unmilked) venom glands of an elapid (the Eastern Brown Snake, Pseudonaja textilis), together with comparative genomics, to identify cis- and trans-acting regulation of venom production in an elapid relative to viperids (Crotalus viridis and C. tigris).

    Results: Although high-level mechanistic pathways regulating venom production are conserved (the unfolded protein response, Notch signaling, and cholesterol homeostasis), the two snake families differ in the regulation of histone methylation enzymes, transcription factors, and microRNAs in the venom gland. Histone methyltransferases and the transcription factor (TF) specificity protein 1 (Sp1) were highly upregulated in the milked elapid venom gland compared with the viperids, whereas nuclear factor I (NFI) TFs were upregulated after viperid venom milking. Sp1 and NFI cis-regulatory elements were common in toxin gene promoter regions, but many elements were unique to either elapid or viperid toxins. The presence of Sp1 binding sites across multiple elapid toxin gene promoter regions experimentally shown to regulate expression, together with the upregulation of Sp1 after venom milking, suggests that this transcription factor is involved in elapid toxin expression. microRNA profiles were distinct between milked and unmilked venom glands in both snake families, and microRNAs were predicted to target a diversity of toxin transcripts in the elapid P. textilis venom gland but only snake venom metalloproteinase transcripts in the viperid C. viridis venom gland. These results suggest differences in toxin gene post-transcriptional regulation between the elapid P. textilis and the viperid C. viridis.

    Conclusions: Our comparative transcriptomic and genomic analyses of toxin genes and isoforms in elapid and viperid snakes suggest independent toxin regulation in the two snake families, demonstrating that multiple regulatory mechanisms underpin a venomous phenotype.

    The molecular structures of starch components and their contribution to the architecture of starch granules: A comprehensive review

    No full text

    The Norman Transcript

    Get PDF
    Weekly newspaper from Norman, Oklahoma, that includes local, state, and national news along with advertising.

    Breast Cancer Resistance Protein (BCRP) or ABCG2

    No full text

    Beyond the imitation game: Quantifying and extrapolating the capabilities of language models

    No full text
    Language models demonstrate both quantitative improvement and new qualitative capabilities with increasing scale. Despite their potentially transformative impact, these new capabilities are as yet poorly characterized. In order to inform future research, prepare for disruptive new model capabilities, and ameliorate socially harmful effects, it is vital that we understand the present and near-future capabilities and limitations of language models. To address this challenge, we introduce the Beyond the Imitation Game benchmark (BIG-bench). BIG-bench currently consists of 204 tasks, contributed by 442 authors across 132 institutions. Task topics are diverse, drawing problems from linguistics, childhood development, math, common-sense reasoning, biology, physics, social bias, software development, and beyond. BIG-bench focuses on tasks that are believed to be beyond the capabilities of current language models. We evaluate the behavior of OpenAI's GPT models, Google-internal dense transformer architectures, and Switch-style sparse transformers on BIG-bench, across model sizes spanning millions to hundreds of billions of parameters. In addition, a team of human expert raters performed all tasks in order to provide a strong baseline. Findings include: model performance and calibration both improve with scale, but are poor in absolute terms (and when compared with rater performance); performance is remarkably similar across model classes, though with benefits from sparsity; tasks that improve gradually and predictably commonly involve a large knowledge or memorization component, whereas tasks that exhibit "breakthrough" behavior at a critical scale often involve multiple steps or components, or brittle metrics; social bias typically increases with scale in settings with ambiguous context, but this can be improved with prompting.
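    The benchmark evaluation described above amounts to scoring each model's outputs against per-task targets and comparing against a human-rater baseline. The following is a minimal illustrative sketch of exact-match task scoring in that spirit; the task format and model interface here are simplifying assumptions for demonstration, not BIG-bench's actual API.

    ```python
    def exact_match_score(model, examples):
        """Fraction of examples where the model's output equals the target."""
        hits = sum(1 for ex in examples if model(ex["input"]) == ex["target"])
        return hits / len(examples)

    # Toy task and "model" (hypothetical, for illustration only).
    toy_task = [
        {"input": "2 + 2 =", "target": "4"},
        {"input": "capital of France?", "target": "Paris"},
    ]

    def toy_model(prompt):
        # A canned lookup standing in for a real language model.
        return {"2 + 2 =": "4", "capital of France?": "Lyon"}.get(prompt, "")

    print(exact_match_score(toy_model, toy_task))  # 0.5
    ```

    Running the same scoring loop across many tasks and several model sizes is what makes scale-dependent trends (gradual improvement versus "breakthrough" behavior) visible.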

    Beyond the Imitation Game: Quantifying and extrapolating the capabilities of language models

    Get PDF
    Language models demonstrate both quantitative improvement and new qualitative capabilities with increasing scale. Despite their potentially transformative impact, these new capabilities are as yet poorly characterized. In order to inform future research, prepare for disruptive new model capabilities, and ameliorate socially harmful effects, it is vital that we understand the present and near-future capabilities and limitations of language models. To address this challenge, we introduce the Beyond the Imitation Game benchmark (BIG-bench). BIG-bench currently consists of 204 tasks, contributed by 442 authors across 132 institutions. Task topics are diverse, drawing problems from linguistics, childhood development, math, common-sense reasoning, biology, physics, social bias, software development, and beyond. BIG-bench focuses on tasks that are believed to be beyond the capabilities of current language models. We evaluate the behavior of OpenAI's GPT models, Google-internal dense transformer architectures, and Switch-style sparse transformers on BIG-bench, across model sizes spanning millions to hundreds of billions of parameters. In addition, a team of human expert raters performed all tasks in order to provide a strong baseline. Findings include: model performance and calibration both improve with scale, but are poor in absolute terms (and when compared with rater performance); performance is remarkably similar across model classes, though with benefits from sparsity; tasks that improve gradually and predictably commonly involve a large knowledge or memorization component, whereas tasks that exhibit "breakthrough" behavior at a critical scale often involve multiple steps or components, or brittle metrics; social bias typically increases with scale in settings with ambiguous context, but this can be improved with prompting. Comment: 27 pages, 17 figures + references and appendices, repo: https://github.com/google/BIG-benc

    Annals, Volume 107 Index

    No full text