10 research outputs found

    Pilot study exploring if an augmented reality NeedleTrainer device improves novice performance of a simulated central venous catheter insertion on a phantom

    Introduction: The needle insertion and visualisation skills needed for ultrasound (US)-guided procedures can be challenging to acquire. The novel NeedleTrainer device superimposes a digital holographic needle on a real-time US image display without puncturing a surface. The aim of this randomised controlled study was to compare the success of trainees performing a simulated central venous catheter insertion on a phantom either with or without prior NeedleTrainer practice. Methods: West of Scotland junior trainees who had not previously performed central venous catheter insertion were randomised into two groups (n=20). All participants undertook standardised online training through a pre-recorded video and instruction in how to handle a US probe. Group 1 then had 10 minutes of supervised practice with the NeedleTrainer device; Group 2 served as the control group. Participants were assessed on needle insertion to a pre-defined target vein in a phantom. The outcome measures were the time taken for needle placement (seconds), the number of needle passes (n), operator confidence (0-10), assessor confidence (0-10), and the NASA task load index score. Results: The mean mental demand score in the control group was 7.65 (SD 3.5) compared to 12.8 (SD 2.2, p=0.005) in the NeedleTrainer group. There was no statistically significant difference between the groups in any of the other outcome measures. Discussion: This was a small pilot study, and the small participant numbers may have limited its statistical power. There is natural variation in skill among participants that could not be controlled for, and the difference in the pressure needed to use the NeedleTrainer compared with a real needle may also have affected the outcome measures.
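The reported mental-demand comparison (7.65, SD 3.5 vs. 12.8, SD 2.2) can be sanity-checked from the summary statistics alone. A minimal sketch, assuming 10 participants per group and using Welch's t statistic with the Welch-Satterthwaite degrees of freedom; the abstract does not state which test the authors used, so this is illustrative rather than a reproduction of their analysis:

```python
import math

def welch_t(mean1, sd1, n1, mean2, sd2, n2):
    """Welch's t statistic and Welch-Satterthwaite degrees of freedom
    computed from group summary statistics (mean, SD, n)."""
    v1, v2 = sd1**2 / n1, sd2**2 / n2       # per-group variance of the mean
    t = (mean2 - mean1) / math.sqrt(v1 + v2)
    df = (v1 + v2)**2 / (v1**2 / (n1 - 1) + v2**2 / (n2 - 1))
    return t, df

# Reported NASA-TLX mental demand: control 7.65 (SD 3.5),
# NeedleTrainer 12.8 (SD 2.2); n=10 per group is an assumption.
t, df = welch_t(7.65, 3.5, 10, 12.8, 2.2, 10)
print(f"t = {t:.2f}, df = {df:.1f}")
```

A t statistic near 4 on roughly 15 degrees of freedom is consistent with the small p-value reported, though the exact p=0.005 suggests the authors may have used a different test or group sizes.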

    The traditional boats of Venice : assessing a maritime heritage

    This Interdisciplinary Qualifying Project looks at the traditional boats of Venice, Italy. Traditional boats in Venice are becoming endangered, and in some cases extinct, because of the inception and growing popularity of the motor boat. Venetians have always had a special relationship with their boats because of Venice's unique environment. Through maps and databases we examine the entire traditional-boat community in Venice, with the aim of preventing Venice's loss of its traditional boats.

    How to achieve maximum charge carrier loading on heteroatom-substituted graphene nanoribbon edges: density functional theory study

    The practical number of charge carriers loaded is crucial to the evaluation of the capacity performance of carbon-based electrodes in service, and cannot be easily addressed experimentally. In this paper, we report a density functional theory study of charge carrier adsorption onto zigzag edge-shaped graphene nanoribbons (ZGNRs), both pristine and incorporating edge substitution with boron, nitrogen or oxygen atoms. All edge substitutions are found to be energetically favorable, especially in oxidized environments. The maximal loading of protons onto the substituted ZGNR edges obeys a rule of [8-n-1], where n is the number of valence electrons of the edge-site atom constituting the adsorption site. Hence, maximum charge loading is achieved with boron substitution. This result correlates in a transparent manner with the electronic structure characteristics of the edge atom: the boron edge atom, characterized by the emptiest p band, accommodates the valence electrons transferred from the ribbon upon proton adsorption more readily than the other substitutional cases. This result not only further confirms the possibility of enhancing the charge storage performance of carbon-based electrochemical devices through chemical functionalization but also, more importantly, provides a physical rationale for further design strategies.
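The [8-n-1] rule in the abstract can be made concrete with the standard valence-electron counts of the edge atoms. A minimal sketch; the element-to-valence mapping is ordinary main-group chemistry, and the loadings shown simply evaluate the rule as stated, not any additional DFT result:

```python
# Valence electrons of the edge-site atoms considered in the study.
VALENCE_ELECTRONS = {"B": 3, "C": 4, "N": 5, "O": 6}

def max_proton_loading(element: str) -> int:
    """Evaluate the reported [8 - n - 1] rule, where n is the number
    of valence electrons of the edge-site atom."""
    n = VALENCE_ELECTRONS[element]
    return 8 - n - 1

for el in ("B", "C", "N", "O"):
    print(f"{el}-substituted edge site: up to {max_proton_loading(el)} protons")
```

Boron, with the fewest valence electrons (n=3), gives the largest loading of 4 protons per edge site, which is why the maximum charge loading is found for boron substitution.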

    Determinants and Consequences of Corporate Tax Avoidance


    Beyond the imitation game: Quantifying and extrapolating the capabilities of language models

    Language models demonstrate both quantitative improvement and new qualitative capabilities with increasing scale. Despite their potentially transformative impact, these new capabilities are as yet poorly characterized. In order to inform future research, prepare for disruptive new model capabilities, and ameliorate socially harmful effects, it is vital that we understand the present and near-future capabilities and limitations of language models. To address this challenge, we introduce the Beyond the Imitation Game benchmark (BIG-bench). BIG-bench currently consists of 204 tasks, contributed by 442 authors across 132 institutions. Task topics are diverse, drawing problems from linguistics, childhood development, math, common-sense reasoning, biology, physics, social bias, software development, and beyond. BIG-bench focuses on tasks that are believed to be beyond the capabilities of current language models. We evaluate the behavior of OpenAI's GPT models, Google-internal dense transformer architectures, and Switch-style sparse transformers on BIG-bench, across model sizes spanning millions to hundreds of billions of parameters. In addition, a team of human expert raters performed all tasks in order to provide a strong baseline. Findings include: model performance and calibration both improve with scale, but are poor in absolute terms (and when compared with rater performance); performance is remarkably similar across model classes, though with benefits from sparsity; tasks that improve gradually and predictably commonly involve a large knowledge or memorization component, whereas tasks that exhibit "breakthrough" behavior at a critical scale often involve multiple steps or components, or brittle metrics; social bias typically increases with scale in settings with ambiguous context, but this can be improved with prompting.
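The evaluation loop the abstract describes — run a model over a task's examples and score the outputs — can be sketched in a few lines. A hypothetical exact-match harness in the spirit of BIG-bench's JSON tasks; the task structure ({"examples": [{"input", "target"}]}) and the model_fn callable are illustrative assumptions, not the benchmark's official API:

```python
from typing import Callable, Dict, List

def exact_match_score(task: Dict, model_fn: Callable[[str], str]) -> float:
    """Fraction of a task's examples where the model's output exactly
    matches the target string (after whitespace stripping)."""
    examples: List[Dict] = task["examples"]
    correct = sum(
        1 for ex in examples
        if model_fn(ex["input"]).strip() == ex["target"].strip()
    )
    return correct / len(examples)

# Toy task and a trivial lookup-table "model" for demonstration.
toy_task = {"examples": [
    {"input": "2 + 2 =", "target": "4"},
    {"input": "capital of France?", "target": "Paris"},
]}
answers = {"2 + 2 =": "4", "capital of France?": "Rome"}
score = exact_match_score(toy_task, lambda prompt: answers[prompt])
print(score)  # 0.5
```

Real BIG-bench tasks also support multiple-choice and generative metrics beyond exact match; this sketch only shows the shape of the scoring loop.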
