Disagreeing about Crocs and socks: Creating profoundly ambiguous color displays
There is an increasing interest in the systematic disagreement about
profoundly ambiguous stimuli in the color domain. However, this research has
been hobbled by the fact that we could not create such stimuli at will. Here,
we describe a design principle that allows the creation of such stimuli and
apply this principle to create one such stimulus set - the crocs and socks.
Using this set, we probed the color perception of a large sample of observers,
showing that these stimuli are indeed categorically ambiguous and that we can
predict the percept from fabric priors resulting from experience. We also
relate the perception of these crocs to other color-ambiguous stimuli - the
dress and the sneaker - and conclude that differential priors likely underlie
polarized disagreement in cognition more generally.
Comment: 24 pages, 11 figures
Teaching Computation in Neuroscience: Notes on the 2019 Society for Neuroscience Professional Development Workshop on Teaching
The 2019 Society for Neuroscience Professional Development Workshop on Teaching reviewed current tools, approaches, and examples for teaching computation in neuroscience. Robert Kass described the statistical foundations that students need to properly analyze data. Pascal Wallisch compared MATLAB and Python as programming languages for teaching students. Adrienne Fairhall discussed computational methods, training opportunities, and curricular considerations. Walt Babiec provided a view from the trenches on practical aspects of teaching computational neuroscience. Mathew Abrams concluded the session with an overview of resources for teaching and learning computational modeling in neuroscience.
Neuromatch Academy: a 3-week, online summer school in computational neuroscience
Neuromatch Academy (https://academy.neuromatch.io; van Viegen et al., 2021) was designed as an online summer school to cover the basics of computational neuroscience in three weeks. The materials cover dominant and emerging computational neuroscience tools, how they complement one another, and specifically focus on how they can help us to better understand how the brain functions. An original component of the materials is its focus on modeling choices, i.e. how do we choose the right approach, how do we build models, and how can we evaluate models to determine whether they provide real (meaningful) insight. This meta-modeling component of the instructional materials asks what questions can be answered by different techniques, and how to apply them meaningfully to gain insight about brain function.
Crowdsourcing hypothesis tests: Making transparent how design choices shape research results
To what extent are research results influenced by subjective decisions that scientists make as they design studies? Fifteen research teams independently designed studies to answer five original research questions related to moral judgments, negotiations, and implicit cognition. Participants from two separate large samples (total N > 15,000) were then randomly assigned to complete one version of each study. Effect sizes varied dramatically across different sets of materials designed to test the same hypothesis: materials from different teams rendered statistically significant effects in opposite directions for four out of five hypotheses, with the narrowest range in estimates being d = -0.37 to +0.26. Meta-analysis and a Bayesian perspective on the results revealed overall support for two hypotheses, and a lack of support for three hypotheses. Overall, practically none of the variability in effect sizes was attributable to the skill of the research team in designing materials, while considerable variability was attributable to the hypothesis being tested. In a forecasting survey, predictions of other scientists were significantly correlated with study results, both across and within hypotheses. Crowdsourced testing of research hypotheses helps reveal the true consistency of empirical support for a scientific claim.
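The effect sizes reported above are on the Cohen's d scale (standardized mean difference). As a point of reference, a minimal sketch of how d is typically computed from two independent samples, using a pooled standard deviation (the function name and toy data here are illustrative, not from the study):

```python
import statistics

def cohens_d(group_a, group_b):
    """Standardized mean difference between two independent samples,
    using the pooled (n-weighted) standard deviation."""
    na, nb = len(group_a), len(group_b)
    mean_diff = statistics.mean(group_a) - statistics.mean(group_b)
    var_a = statistics.variance(group_a)  # sample variance (ddof=1)
    var_b = statistics.variance(group_b)
    pooled_sd = (((na - 1) * var_a + (nb - 1) * var_b) / (na + nb - 2)) ** 0.5
    return mean_diff / pooled_sd

# Toy illustration: a negative d means group_a scored lower than group_b.
d = cohens_d([1, 2, 3], [4, 5, 6])  # → -3.0
```

On this scale, the reported range of d = -0.37 to +0.26 spans effects in opposite directions for the same hypothesis, which is what makes the variability across teams striking.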