9 research outputs found

    BLOOM: A 176B-Parameter Open-Access Multilingual Language Model

    Full text link
    Large language models (LLMs) have been shown to be able to perform new tasks based on a few demonstrations or natural language instructions. While these capabilities have led to widespread adoption, most LLMs are developed by resource-rich organizations and are frequently kept from the public. As a step towards democratizing this powerful technology, we present BLOOM, a 176B-parameter open-access language model designed and built thanks to a collaboration of hundreds of researchers. BLOOM is a decoder-only Transformer language model that was trained on the ROOTS corpus, a dataset comprising hundreds of sources in 46 natural and 13 programming languages (59 in total). We find that BLOOM achieves competitive performance on a wide variety of benchmarks, with stronger results after undergoing multitask prompted finetuning. To facilitate future research and applications using LLMs, we publicly release our models and code under the Responsible AI License.
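    The abstract notes that the models and code are publicly released. As a hedged illustration (not part of the paper itself): the checkpoints are hosted on the Hugging Face Hub under the bigscience organization, and the sketch below loads the small bloom-560m variant, since the full 176B model requires multi-GPU hardware. Library calls follow the standard transformers API.

```python
# Minimal sketch: querying an open BLOOM checkpoint with Hugging Face
# transformers. Assumes `pip install transformers torch`. The full 176B
# model ("bigscience/bloom") needs multi-GPU hardware, so this uses the
# small bloom-560m variant released under the same Responsible AI License.
from transformers import AutoModelForCausalLM, AutoTokenizer

checkpoint = "bigscience/bloom-560m"  # swap in "bigscience/bloom" for 176B
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForCausalLM.from_pretrained(checkpoint)

prompt = "Translate to French: The model is open access."
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=30)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```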

    Mine Over Matter: Gaming Google 'Quick, Draw!' Data to Explore Theory of Mind in Autism

    Full text link
    Linguistic coordination depends on some degree of interpretable, resolvable implicit communication in any act delivered or read as carrying communicative intent. Implicit communication takes many forms, from denotative versus connotative phrasal meaning to evaluations of holistic or stepwise behaviors (whether among humans or among nodes of a system more broadly) that signal implicit participation and cooperation. This thesis documents how various communicative channels can inform accommodative design that is receptive to unconventional indicators of participation. It frames these explorations through theories of silence and current work in conversational database querying, but its particular focus is research toward proof-of-concept game design attending to theory-of-mind presentations in children with autism spectrum disorder. The project is not only about informing accommodative design, but also about informing the means of informing accommodative design; as such, user studies and expert feedback sessions serve not only the design process but the design of the design process. Because similar processes and problems arise whether crafting a working set of linguistic definitions or setting expectations for free-form querying over small data, the theory-of-mind project serves as a unifying thread: a matter of finding, or building, the structures necessary for open-ended communicative liberty. Working to expand technological inclusivity via assumed cooperation, this project and its adjuncts explore how far the limits can be pushed on what a system or dataset may offer the user, and vice versa, for as long as communicative "benefit of the doubt" is granted and sustained.
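    Since the thesis builds on Google "Quick, Draw!" data, a minimal sketch of reading that dataset may help orient readers. The URL and record fields below come from the public dataset documentation (github.com/googlecreativelab/quickdraw-dataset), not from the thesis itself, and should be treated as assumptions.

```python
# Hedged sketch: streaming one category of Google "Quick, Draw!" simplified
# drawings. URL and field names follow the public dataset docs; they are
# assumptions here, not details taken from the thesis.
import json
import urllib.request

URL = "https://storage.googleapis.com/quickdraw_dataset/full/simplified/cat.ndjson"

with urllib.request.urlopen(URL) as resp:
    for i, line in enumerate(resp):
        record = json.loads(line)
        # Each record: "word" (the prompt), "recognized" (did the game's
        # classifier accept the sketch?), and "drawing" (a list of strokes,
        # each stroke a pair of parallel x/y coordinate lists).
        strokes = record["drawing"]
        print(record["word"], record["recognized"], f"{len(strokes)} strokes")
        if i >= 4:  # sample only a few records
            break
```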

    The Merits, Limitations, and Future Directions of Cost-Effectiveness Analysis in Cardiac MRI with a Focus on Coronary Artery Disease: A Literature Review

    No full text
    Cardiac magnetic resonance (CMR) imaging has a wide range of clinical applications with a high degree of accuracy for many myocardial pathologies. Recent literature has shown great utility of CMR in diagnosing many diseases, often changing the course of treatment. Despite this, it is often underutilized, possibly due to perceived costs, limiting patient factors such as comfort, and longer examination times compared with other imaging modalities. In this regard, we conducted a literature review using the keywords “Cost-Effectiveness” and “Cardiac MRI” and selected articles from the PubMed MEDLINE database that met our inclusion and exclusion criteria to examine the cost-effectiveness of CMR. Our search yielded 17 articles that met criteria for inclusion in our review. We found that CMR can be cost-effective in terms of quality-adjusted life years (QALYs) in select patient populations with various cardiac pathologies. Specifically, the use of CMR in coronary artery disease (CAD) patients with a pretest probability below a certain threshold may be more cost-effective than in patients with a higher pretest probability, although its use can be limited by geographic location, professional society guidelines, and differing reimbursement patterns. In addition, a stepwise combination of different imaging modalities, in conjunction with AHA/ACC guidelines, can further enhance the cost-effectiveness of CMR.
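    The review's cost-effectiveness claims rest on cost-per-QALY comparisons. A minimal sketch of the standard incremental cost-effectiveness ratio (ICER) calculation follows; all costs, QALY values, and the willingness-to-pay threshold are hypothetical placeholders, not figures from the reviewed studies.

```python
# Minimal sketch of the comparison underlying cost-per-QALY analyses:
# the incremental cost-effectiveness ratio,
#   ICER = (cost_new - cost_old) / (QALY_new - QALY_old),
# judged against a willingness-to-pay threshold. All numbers below are
# hypothetical placeholders, not figures from the reviewed studies.

def icer(cost_new: float, cost_old: float, qaly_new: float, qaly_old: float) -> float:
    """Incremental cost per QALY gained for a new strategy vs. a comparator."""
    return (cost_new - cost_old) / (qaly_new - qaly_old)

# Hypothetical example: a CMR-first workup vs. an angiography-first workup.
ratio = icer(cost_new=4200.0, cost_old=3500.0, qaly_new=8.15, qaly_old=8.05)
WTP_THRESHOLD = 50_000.0  # a commonly cited $/QALY benchmark (illustrative)

print(f"ICER: ${ratio:,.0f} per QALY gained")
print("cost-effective at threshold" if ratio <= WTP_THRESHOLD else "not cost-effective")
```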

    Anticipatory feelings: Neural correlates and linguistic markers

    No full text
