42 research outputs found
Standardized and reproducible measurement of decision-making in mice.
Progress in science requires standardized assays whose results can be readily shared, compared, and reproduced across laboratories. Reproducibility, however, has been a concern in neuroscience, particularly for measurements of mouse behavior. Here, we show that a standardized task to probe decision-making in mice produces reproducible results across multiple laboratories. We adopted a task for head-fixed mice that assays perceptual and value-based decision making, and we standardized the training protocol and the experimental hardware, software, and procedures. We trained 140 mice across seven laboratories in three countries, and we collected 5 million mouse choices into a publicly available database. Learning speed was variable across mice and laboratories, but once training was complete there were no significant differences in behavior across laboratories. Mice in different laboratories adopted similar reliance on visual stimuli, on past successes and failures, and on estimates of stimulus prior probability to guide their choices. These results reveal that a complex mouse behavior can be reproduced across multiple laboratories. They establish a standard for reproducible rodent behavior and provide an unprecedented dataset and open-access tools to study decision-making in mice. More generally, they indicate a path toward achieving reproducibility in neuroscience through collaborative open-science approaches.
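To make the abstract's three choice factors concrete, the sketch below (not the study's analysis code) fits a logistic regression of simulated rightward choices on signed stimulus contrast, a block-wise prior on stimulus side, and the side of the previous rewarded choice; all variable names and the synthetic data are hypothetical.

```python
# Illustrative sketch: how strongly do choices depend on the stimulus,
# the block prior, and trial history? (Synthetic data, hypothetical names.)
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n_trials = 5000

# Hypothetical per-trial regressors
signed_contrast = rng.choice([-1.0, -0.25, -0.125, 0.0, 0.125, 0.25, 1.0], n_trials)
block_prior = rng.choice([0.2, 0.5, 0.8], n_trials)      # P(stimulus on the right) in the current block
prev_rewarded_side = rng.choice([-1.0, 1.0], n_trials)   # side of the previous rewarded choice

# Simulate choices from known weights so the recovered coefficients can be checked
true_w = np.array([4.0, 1.5, 0.5])                        # stimulus, prior, history
X = np.column_stack([signed_contrast, block_prior - 0.5, prev_rewarded_side])
p_right = 1.0 / (1.0 + np.exp(-(X @ true_w)))
choice_right = (rng.random(n_trials) < p_right).astype(int)

# Fitted coefficients estimate how much each factor drives choices
model = LogisticRegression().fit(X, choice_right)
print("stimulus, prior, history weights:", model.coef_.ravel())
```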
Partitioning variability in animal behavioral videos using semi-supervised variational autoencoders
Action Control
Standardized and reproducible measurement of decision-making in mice
Progress in science requires standardized assays whose results can be readily shared, compared, and reproduced across laboratories. Reproducibility, however, has been a concern in neuroscience, particularly for measurements of mouse behavior. Here we show that a standardized task to probe decision-making in mice produces reproducible results across multiple laboratories. We designed a task for head-fixed mice that combines established assays of perceptual and value-based decision making, and we standardized the training protocol and the experimental hardware, software, and procedures. We trained 140 mice across seven laboratories in three countries, and we collected 5 million mouse choices into a publicly available database. Learning speed was variable across mice and laboratories, but once training was complete there were no significant differences in behavior across laboratories. Mice in different laboratories adopted similar reliance on visual stimuli, on past successes and failures, and on estimates of stimulus prior probability to guide their choices. These results reveal that a complex mouse behavior can be successfully reproduced across multiple laboratories. They establish a standard for reproducible rodent behavior and provide an unprecedented dataset and open-access tools to study decision-making in mice. More generally, they indicate a path towards achieving reproducibility in neuroscience through collaborative open-science approaches.
A standardized and reproducible method to measure decision-making in mice.
Progress in neuroscience is hindered by poor reproducibility of mouse behavior. Here we show that in a visual decision making task, reproducibility can be achieved by automating the training protocol and by standardizing experimental hardware, software, and procedures. We trained 101 mice in this task across seven laboratories at six different research institutions in three countries, and obtained 3 million mouse choices. In trained mice, variability in behavior between labs was indistinguishable from variability within labs. Psychometric curves showed no significant differences in visual threshold, bias, or lapse rates across labs. Moreover, mice across laboratories adopted similar strategies when stimulus location had asymmetrical probability that changed over time. We provide detailed instructions and open-source tools to set up and implement our method in other laboratories. These results establish a new standard for reproducibility of rodent behavior and provide accessible tools for the study of decision making in mice.
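The psychometric quantities compared across labs (visual threshold, bias, lapse rates) come from a curve of this general form; below is a minimal illustrative fit using scipy, with hypothetical contrast levels and choice fractions rather than the study's data.

```python
# Minimal sketch of a psychometric fit: erf core with bias, threshold, and lapses.
import numpy as np
from scipy.special import erf
from scipy.optimize import curve_fit

def psychometric(x, bias, threshold, lapse_low, lapse_high):
    """P(choose right | signed contrast x) with an erf core and lapse rates."""
    return lapse_low + (1 - lapse_low - lapse_high) * 0.5 * (1 + erf((x - bias) / (np.sqrt(2) * threshold)))

# Hypothetical data: signed contrasts and observed fraction of rightward choices
contrasts = np.array([-1, -0.25, -0.125, -0.0625, 0, 0.0625, 0.125, 0.25, 1])
frac_right = np.array([0.04, 0.10, 0.22, 0.35, 0.52, 0.68, 0.80, 0.92, 0.97])

params, _ = curve_fit(psychometric, contrasts, frac_right,
                      p0=[0.0, 0.1, 0.05, 0.05],
                      bounds=([-0.5, 0.01, 0.0, 0.0], [0.5, 1.0, 0.5, 0.5]))
bias, threshold, lapse_low, lapse_high = params
print(f"bias={bias:.3f}, threshold={threshold:.3f}, lapses=({lapse_low:.3f}, {lapse_high:.3f})")
```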
Knowledge across networks: how to build a global neuroscience collaboration
The International Brain Laboratory (IBL) is a collaboration of ~20 laboratories dedicated to developing a standardized mouse decision-making behavior, coordinating measurements of neural activity across the mouse brain, and utilizing theoretical approaches to formalize the neural computations that support decision-making. In contrast to traditional neuroscientific practice, in which individual laboratories each probe different behaviors and record from a few select brain areas, IBL aims to deliver a standardized, high-density approach to behavioral and neural assays. This approach relies on a highly distributed, collaborative network of ~50 researchers (postdocs, graduate students, and scientific staff) who coordinate the intellectual, administrative, and sociological aspects of the project. In this article, we examine this network, extract some lessons learned, and consider how IBL may represent a template for other team-based approaches in neuroscience, and beyond.
Understanding Learning Trajectories With Infinite Hidden Markov Models
Learning the contingencies of a complex experiment is not easy. Individuals learn in an idiosyncratic manner, revising their strategies multiple times as they are shaped, or shape themselves. They may even end up with different asymptotic strategies. This long-run learning is therefore a tantalizing target for the sort of quantitatively individualized characterization that descriptive models can provide. However, any such model requires a flexible and extensible structure which can capture the rapid introduction of radically new behaviours as well as slow changes in existing ones. We suggest a dynamic input-output infinite hidden semi-Markov model whose latent states are associated with specific behavioural patterns. This model encompasses a countably infinite number of potential states, and so can capture new behaviours by introducing states; equally, dynamical evolution of the behavioural pattern specified by a single state allows tracking of slow adaptations in existing behaviours. We fit this model to around 10,000 trials per mouse as they learned to perform a contrast detection task over multiple stages. We quantify different stages of learning via the number and psychometric characteristics of behavioural states, providing comprehensive insight into the highly individualised learning trajectories of animals.
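As a rough illustration of the modelling idea (not the authors' inference code), the sketch below generates choices from a truncated stick-breaking approximation to an infinite-state Markov model, with each behavioural state emitting choices from its own psychometric curve; it uses plain Markov dwell times rather than the semi-Markov dwell-time structure described above, and all parameters are invented.

```python
# Toy generative sketch: a countably infinite state space approximated by
# truncating a stick-breaking prior at K_MAX states; each state emits choices
# from its own (hypothetical) psychometric curve.
import numpy as np

rng = np.random.default_rng(1)
K_MAX, ALPHA, N_TRIALS = 15, 2.0, 10_000

# Stick-breaking weights over potential behavioural states
betas = rng.beta(1.0, ALPHA, K_MAX)
sticks = betas * np.cumprod(np.concatenate([[1.0], 1.0 - betas[:-1]]))
sticks /= sticks.sum()

# "Sticky" transitions: mostly stay put, otherwise resample from the stick weights
STAY = 0.98
transitions = STAY * np.eye(K_MAX) + (1 - STAY) * np.tile(sticks, (K_MAX, 1))

# Each state has its own psychometric slope and bias (slope 0 ~ stimulus-independent)
slopes = rng.uniform(0.0, 8.0, K_MAX)
biases = rng.normal(0.0, 1.0, K_MAX)

contrast = rng.choice([-1, -0.25, 0, 0.25, 1], N_TRIALS)
states = np.zeros(N_TRIALS, dtype=int)
choices = np.zeros(N_TRIALS, dtype=int)
states[0] = rng.choice(K_MAX, p=sticks)
for t in range(N_TRIALS):
    if t > 0:
        states[t] = rng.choice(K_MAX, p=transitions[states[t - 1]])
    p_right = 1.0 / (1.0 + np.exp(-(slopes[states[t]] * contrast[t] + biases[states[t]])))
    choices[t] = rng.random() < p_right

print("states actually visited:", np.unique(states))
```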
Fear and anxiety influences on probabilistic learning: A pilot online study and computational modeling
Learning the contingencies of a complex experiment is no easy task for animals. Individuals learn in an idiosyncratic manner, revising their strategies multiple times as they are shaped, or shape themselves, and potentially ending up with different asymptotic strategies. This long-run learning is therefore a tantalizing target for the sort of quantitatively individualized characterization that sophisticated modelling can provide. However, any such model requires a flexible and extensible structure which can capture radically new behaviours as well as slow changes in existing ones. To this end, we suggest a dynamic input-output infinite hidden Markov model whose latent states are associated with specific behavioural patterns. This model includes a countably infinite number of potential states and so has the capacity for describing new behaviour by introducing states, while the dynamics in the model allow it to capture adaptations to existing behaviours. We fit this model to data collected from mice as they learned a contrast detection task over multiple stages, around ten thousand trials per mouse. We quantify different stages of learning via the number and psychometric characteristics of behavioural states. Our approach provides in-depth insight into the process of animal learning and offers potentially valuable predictors for analyzing neural data.
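Once such a model is fit, each trial can be assigned to its most likely behavioural state; the sketch below illustrates that read-out step with a standard Viterbi decoder over three hypothetical states (disengaged, biased, stimulus-driven), using fixed made-up parameters in place of the learned ones.

```python
# Illustrative only: assign each trial to its most likely behavioural state
# via Viterbi decoding, given state-specific psychometric emission curves.
import numpy as np

def viterbi(log_lik, log_trans, log_init):
    """Most likely state path for per-trial log-likelihoods (T x K)."""
    T, K = log_lik.shape
    delta = np.zeros((T, K))
    back = np.zeros((T, K), dtype=int)
    delta[0] = log_init + log_lik[0]
    for t in range(1, T):
        scores = delta[t - 1][:, None] + log_trans      # K x K
        back[t] = scores.argmax(axis=0)
        delta[t] = scores.max(axis=0) + log_lik[t]
    path = np.zeros(T, dtype=int)
    path[-1] = delta[-1].argmax()
    for t in range(T - 2, -1, -1):
        path[t] = back[t + 1, path[t + 1]]
    return path

# Three hypothetical states: disengaged (flat), biased, and stimulus-driven
slopes, biases = np.array([0.0, 1.0, 8.0]), np.array([0.0, 2.0, 0.0])

rng = np.random.default_rng(2)
contrast = rng.choice([-1, -0.25, 0, 0.25, 1], 1000)
choice = (rng.random(1000) < 1 / (1 + np.exp(-8.0 * contrast))).astype(int)  # toy data

# Per-trial log-likelihood of the observed choice under each state
p_right = 1 / (1 + np.exp(-(np.outer(contrast, slopes) + biases)))           # T x K
log_lik = np.where(choice[:, None] == 1, np.log(p_right), np.log(1 - p_right))

log_trans = np.log(0.97 * np.eye(3) + 0.01)   # sticky transition matrix
states = viterbi(log_lik, log_trans, np.log(np.full(3, 1 / 3)))
print("trials per state:", np.bincount(states, minlength=3))
```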