6 research outputs found

    Information-theoretic investigation of multi-unit activity properties under different stimulus conditions in mouse primary visual cortex

    Primary visual cortex (V1) is the first cortical processing stage receiving topographically mapped input from the retina, relayed through the thalamus. Electrophysiological studies established its important role in early sensory processing, particularly edge detection in single cells. However, little is known about how these activities relate at the population level. Orientation tuning in mouse V1 has long been reported as salt-and-pepper organised, lacking the apparent structure found in e.g. cat or primates. This work is a novel synthesis of specially designed in-vivo electrophysiological experiments aimed at making certain information-theoretic data-analysis approaches viable. Sophisticated state-of-the-art data-analysis techniques are applied to answer questions about stimulus information in mouse V1. Multi-unit electrophysiological experiments were devised, performed and evaluated in left-hemisphere V1 of both the anaesthetised and the awake, behaving, head-fixed mouse. A detailed laboratory and computational analysis is presented validating the use of multi-unit activity (MUA) and information-theoretic measures. Our results indicate that left-forward drifting gratings (moving from the temporal to the nasal visual field) consistently elicit the highest neuronal responses across cortical layers and columns, challenging the common understanding of random organisation. These directional biases of MUA were also observable at the population level. In addition to individual multi-unit analyses, population responses in terms of binary word distributions appear more similar between spontaneous activity and responses to natural movies than either is to responses to moving gratings, suggesting that mouse V1 processes natural scenes differently from sinusoidal drifting gratings. Response-pattern distributions for different gratings appear to be spatially but not orientationally clustered. Further computational analysis suggests that population firing rates can partially account for these differences. Electrophysiological experiments in the awake, behaving mouse indicate that V1 contains information about behavioural outcome in a GO/NOGO task. This, along with other statistical measures, is examined with statistical models such as the population tracking model, which suggests that population interactions are required to explain these observations.
    Open Access
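The comparison of binary word distributions described in this abstract can be sketched roughly as follows. This is an illustrative outline only, not the thesis's actual analysis pipeline: the function names and the choice of Jensen-Shannon divergence as the distance between word distributions are my assumptions.

```python
import math
from collections import Counter

def binary_words(spike_matrix):
    """Turn a list of time bins, each a tuple of per-unit 0/1 spike
    indicators, into a list of binary population 'words'."""
    return [tuple(row) for row in spike_matrix]

def word_distribution(words):
    """Empirical probability of each observed binary word."""
    counts = Counter(words)
    total = sum(counts.values())
    return {w: c / total for w, c in counts.items()}

def jensen_shannon(p, q):
    """Jensen-Shannon divergence (in bits) between two word
    distributions; 0 for identical, 1 for disjoint support."""
    support = set(p) | set(q)
    m = {w: 0.5 * (p.get(w, 0.0) + q.get(w, 0.0)) for w in support}
    def kl(a, b):
        return sum(pa * math.log2(pa / b[w]) for w, pa in a.items() if pa > 0)
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)
```

Under this sketch, "responses to natural movies resemble spontaneous activity more than gratings" would correspond to a smaller divergence between the first two distributions than between either and the gratings distribution.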


    Beyond the imitation game: Quantifying and extrapolating the capabilities of language models

    Language models demonstrate both quantitative improvement and new qualitative capabilities with increasing scale. Despite their potentially transformative impact, these new capabilities are as yet poorly characterized. In order to inform future research, prepare for disruptive new model capabilities, and ameliorate socially harmful effects, it is vital that we understand the present and near-future capabilities and limitations of language models. To address this challenge, we introduce the Beyond the Imitation Game benchmark (BIG-bench). BIG-bench currently consists of 204 tasks, contributed by 442 authors across 132 institutions. Task topics are diverse, drawing problems from linguistics, childhood development, math, common-sense reasoning, biology, physics, social bias, software development, and beyond. BIG-bench focuses on tasks that are believed to be beyond the capabilities of current language models. We evaluate the behavior of OpenAI's GPT models, Google-internal dense transformer architectures, and Switch-style sparse transformers on BIG-bench, across model sizes spanning millions to hundreds of billions of parameters. In addition, a team of human expert raters performed all tasks in order to provide a strong baseline. Findings include: model performance and calibration both improve with scale, but are poor in absolute terms (and when compared with rater performance); performance is remarkably similar across model classes, though with benefits from sparsity; tasks that improve gradually and predictably commonly involve a large knowledge or memorization component, whereas tasks that exhibit "breakthrough" behavior at a critical scale often involve multiple steps or components, or brittle metrics; social bias typically increases with scale in settings with ambiguous context, but this can be improved with prompting.
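A minimal sketch of the kind of scoring such a benchmark evaluation involves: exact-match accuracy for task performance, and expected calibration error, since the abstract reports calibration alongside performance. This is not the actual BIG-bench API; all names here are illustrative assumptions.

```python
def exact_match_accuracy(predictions, references):
    """Fraction of model outputs that exactly match the reference
    answer after whitespace/case normalisation."""
    norm = lambda s: s.strip().lower()
    hits = sum(norm(p) == norm(r) for p, r in zip(predictions, references))
    return hits / len(predictions)

def expected_calibration_error(confidences, correct, n_bins=10):
    """Expected calibration error: bin answers by stated confidence,
    then average the |confidence - accuracy| gap weighted by bin size."""
    bins = [[] for _ in range(n_bins)]
    for conf, ok in zip(confidences, correct):
        idx = min(int(conf * n_bins), n_bins - 1)  # clamp conf == 1.0
        bins[idx].append((conf, ok))
    n = len(confidences)
    ece = 0.0
    for b in bins:
        if not b:
            continue
        avg_conf = sum(c for c, _ in b) / len(b)
        accuracy = sum(1 for _, ok in b if ok) / len(b)
        ece += (len(b) / n) * abs(avg_conf - accuracy)
    return ece
```

A well-calibrated model's stated confidence tracks its accuracy (low ECE); the abstract's finding is that both accuracy and calibration improve with scale yet remain poor in absolute terms.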
