Cognitive Models as Bridge between Brain and Behavior
How can disparate neural and behavioral measures be integrated? Turner and colleagues propose joint modeling as a solution. Joint modeling mutually constrains the interpretation of brain and behavioral measures by exploiting their covariation structure. Simultaneous estimation allows for more accurate prediction than would be possible by considering these measures in isolation.
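A minimal sketch of the core idea, not Turner and colleagues' actual hierarchical implementation: here a made-up latent trait induces covariation between a neural and a behavioral measure, and exploiting that covariation improves prediction relative to treating the measures in isolation.

```python
import numpy as np

rng = np.random.default_rng(0)
n_subjects = 200

# Hypothetical generative story: a latent trait drives both a neural measure
# (e.g., mean BOLD in a region) and a behavioral parameter (e.g., drift rate),
# so the two observed measures covary.
latent = rng.normal(0.0, 1.0, n_subjects)
neural = latent + rng.normal(0.0, 0.5, n_subjects)           # noisy neural measure
behavior = 0.8 * latent + rng.normal(0.0, 0.5, n_subjects)   # noisy behavioral parameter

# "Joint" prediction: use the estimated covariance between measures to predict
# one from the other, rather than treating them in isolation.
cov = np.cov(neural, behavior)
slope = cov[0, 1] / cov[0, 0]
predicted_behavior = behavior.mean() + slope * (neural - neural.mean())

# Prediction error with vs. without the neural constraint.
rmse_joint = np.sqrt(np.mean((behavior - predicted_behavior) ** 2))
rmse_isolated = behavior.std()   # best guess is the mean when the neural measure is ignored
print(f"RMSE using covariation: {rmse_joint:.3f} vs. in isolation: {rmse_isolated:.3f}")
```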
Levels of biological plausibility
Notions of mechanism, emergence, reduction and explanation are all tied to levels of analysis. I cover the relationship between lower and higher levels, suggest a levels-of-mechanism approach for neuroscience in which the components of a mechanism can themselves be further decomposed, and argue that scientists' goals are best realized by focusing on pragmatic concerns rather than on metaphysical claims about what is 'real'. Inexplicably, neuroscientists are enchanted by both reduction and emergence. A fascination with reduction is misplaced given that theory is neither sufficiently developed nor formal to allow it, whereas metaphysical claims of emergence bring physicalism into question. Moreover, neuroscience's existence as a discipline is owed to higher-level concepts that prove useful in practice. Claims of biological plausibility are shown to be incoherent from a levels-of-mechanism view and, more generally, are vacuous. Instead, the relevant findings to address should be specified so that model selection procedures can adjudicate between competing accounts. Model selection can help reduce theoretical confusions and direct empirical investigations. Although measures themselves, such as behaviour, blood-oxygen-level-dependent (BOLD) and single-unit recordings, are not levels of analysis, like levels, no measure is fundamental, and understanding how measures relate can hasten scientific progress. This article is part of the theme issue 'Key relationships between non-invasive functional neuroimaging and the underlying neuronal activity'.
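As an illustration of the kind of model selection the abstract advocates (the data, the competing models, and the use of BIC here are hypothetical stand-ins, not taken from the article), an information criterion can adjudicate between competing accounts of the same findings by trading off fit against complexity.

```python
import numpy as np

def bic(log_likelihood, n_params, n_obs):
    """Bayesian Information Criterion: lower values favor the model."""
    return n_params * np.log(n_obs) - 2.0 * log_likelihood

def gaussian_loglik(y, y_hat):
    """Log likelihood of residuals under a Gaussian error model."""
    resid = y - y_hat
    sigma = resid.std()
    return np.sum(-0.5 * np.log(2 * np.pi * sigma ** 2) - resid ** 2 / (2 * sigma ** 2))

rng = np.random.default_rng(1)
n = 100
x = rng.uniform(-1, 1, n)
y = 2.0 * x + rng.normal(0, 0.3, n)   # hypothetical findings to be explained

# Competing accounts: a linear model (2 parameters) vs. a cubic model (4 parameters).
lin = np.polyfit(x, y, 1)
cub = np.polyfit(x, y, 3)
bic_lin = bic(gaussian_loglik(y, np.polyval(lin, x)), n_params=2, n_obs=n)
bic_cub = bic(gaussian_loglik(y, np.polyval(cub, x)), n_params=4, n_obs=n)
print(f"BIC linear: {bic_lin:.1f}, BIC cubic: {bic_cub:.1f}")
```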
Model-based fMRI analysis of memory
Recent advances in model-based fMRI approaches enable researchers to investigate hypotheses about the time course and latent structure in data that were previously inaccessible. Cognitive models, especially when validated on multiple datasets, allow for additional constraints to be marshalled when interpreting neuroimaging data. Models can be related to BOLD response in a variety of ways, such as constraining the cognitive model by neural data, interpreting the neural data in light of behavioural fit, or simultaneously accounting for both neural and behavioural data. Using cognitive models as a lens on fMRI data is complementary to popular multivariate decoding and representational similarity analysis approaches. Indeed, these approaches can realise greater theoretical significance when situated within a model-based approach.
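A rough sketch of one common way a cognitive model can be related to the BOLD response, as the abstract describes (the latent variable, HRF shape, and simulated voxel below are illustrative assumptions, not the paper's pipeline): a model-derived trial-by-trial quantity is convolved with a canonical HRF and entered as a regressor in a GLM.

```python
import numpy as np
from scipy.stats import gamma

rng = np.random.default_rng(2)
tr, n_scans = 2.0, 200

# Hypothetical model-derived latent variable (e.g., a trial-wise prediction error)
# placed at trial onsets on the scan grid.
latent = np.zeros(n_scans)
onsets = np.arange(5, n_scans, 10)
latent[onsets] = rng.uniform(0, 1, len(onsets))

# Rough canonical HRF (a crude stand-in for the usual double-gamma form).
t = np.arange(0, 30, tr)
hrf = gamma.pdf(t, 6) - 0.35 * gamma.pdf(t, 16)

# Convolve the latent variable with the HRF to form a model-based regressor.
regressor = np.convolve(latent, hrf)[:n_scans]

# Simulated voxel time series that tracks the latent variable plus noise.
bold = 2.0 * regressor + rng.normal(0, 0.5, n_scans)

# GLM: estimate how strongly the voxel tracks the model-derived quantity.
X = np.column_stack([regressor, np.ones(n_scans)])
beta, *_ = np.linalg.lstsq(X, bold, rcond=None)
print(f"Estimated effect of the model regressor: {beta[0]:.2f}")
```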
Bidirectional Influences of Information Sampling and Concept Learning
Contemporary models of categorization typically sidestep the problem of how information is initially encoded during decision making. Instead, a focus of this work has been to investigate how, through selective attention, stimulus representations are "contorted" such that behaviorally relevant dimensions are accentuated (or "stretched"), and the representations of irrelevant dimensions are ignored (or "compressed"). In high-dimensional real-world environments, it is computationally infeasible to sample all available information, and human decision makers selectively sample information from sources expected to provide relevant information. To address these and other shortcomings, we develop an active sampling model, Sampling Emergent Attention (SEA), which sequentially and strategically samples information sources until the expected cost of information exceeds the expected benefit. The model specifies the interplay of two components, one involved in determining the expected utility of different information sources and the other in representing knowledge and beliefs about the environment. These two components interact such that knowledge of the world guides information sampling, and what is sampled updates knowledge. Like human decision makers, the model displays strategic sampling behavior, such as terminating information search when sufficient information has been sampled and adaptively adjusting the search path in response to previously sampled information. The model also shows human-like failure modes. For example, when information exploitation is prioritized over exploration, the bidirectional influences between information sampling and learning can lead to the development of beliefs that systematically differ from reality.
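A toy sketch of the stopping rule described above, not the SEA model itself (the two-state source, the likelihoods, and the sampling cost are assumed for illustration): sampling continues until the expected benefit of one more observation no longer exceeds its expected cost.

```python
import numpy as np

rng = np.random.default_rng(3)

# Two hypothetical states of the world: a source that emits 1s with p=0.7 or p=0.3.
p_true = 0.7
belief = np.array([0.5, 0.5])          # belief over [p=0.7, p=0.3]
likelihood_one = np.array([0.7, 0.3])  # P(observe 1 | state)
cost_per_sample = 0.01                 # assumed cost of querying the source

def expected_accuracy(b):
    """Probability of deciding correctly if we commit to the more likely state now."""
    return b.max()

n_samples = 0
while True:
    # Expected accuracy after one more sample, averaged over the possible outcomes.
    p_one = np.sum(belief * likelihood_one)
    post_if_one = belief * likelihood_one / p_one
    post_if_zero = belief * (1 - likelihood_one) / (1 - p_one)
    gain = (p_one * expected_accuracy(post_if_one)
            + (1 - p_one) * expected_accuracy(post_if_zero)
            - expected_accuracy(belief))
    if gain <= cost_per_sample:   # stop when expected benefit no longer exceeds cost
        break
    obs = rng.random() < p_true
    belief = belief * (likelihood_one if obs else 1 - likelihood_one)
    belief /= belief.sum()
    n_samples += 1

print(f"Stopped after {n_samples} samples; belief over states: {belief.round(3)}")
```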
How decisions and the desire for coherency shape subjective preferences over time
Recent findings suggest a bidirectional relationship between preferences and choices such that what is chosen can become preferred. Yet, it is still commonly held that preferences for individual items are maintained, such as by caching a separate value estimate for each experienced option. Instead, we propose that all possible choice options and preferences are represented in a shared, continuous, multidimensional space that supports generalization. Decision making is cast as a learning process that seeks to align choices and preferences to maintain coherency. We formalized an error-driven learning model that updates preferences to align with past choices, which makes repeating those and related choices more likely in the future. The model correctly predicts that making a free choice increases preferences along related attributes. For example, after choosing a political candidate based on trivial information (e.g., they like cats), voters' views on abortion, immigration, and trade subsequently shifted to match their chosen candidate.
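A minimal sketch of the error-driven update described above (the candidates, attribute values, and learning rate are illustrative, not the paper's fitted model): after a choice, preferences shift toward the attributes of the chosen option, so later choices along related dimensions are more likely to agree with it.

```python
import numpy as np

# Attributes of two hypothetical candidates (positions coded on a -1..1 scale).
attributes = ["abortion", "immigration", "trade", "likes_cats"]
candidate_a = np.array([ 0.6,  0.4, -0.2,  1.0])
candidate_b = np.array([-0.5, -0.3,  0.4, -1.0])

# The voter initially cares only about the trivial attribute.
preference = np.array([0.0, 0.0, 0.0, 0.8])

def choose(pref, options):
    """Pick the option whose attributes best align with current preferences."""
    utilities = [pref @ opt for opt in options]
    return int(np.argmax(utilities))

learning_rate = 0.2
options = [candidate_a, candidate_b]
chosen = options[choose(preference, options)]

# Error-driven update: preferences move toward the chosen option's attributes,
# so related attitudes (abortion, immigration, trade) shift to match the choice.
preference += learning_rate * (chosen - preference)
print("Updated preferences:", dict(zip(attributes, preference.round(2))))
```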
Similarity as a Window on the Dimensions of Object Representation
Hebart et al. recently analysed 1.5 million human similarity judgments and found that natural objects are described by a small set of interpretable dimensions. Such large-scale analyses offer new opportunities to characterise how people represent their knowledge, but also challenges, including scaling to even larger data sets and integrating accounts of semantic representation.
A non-spatial account of place and grid cells based on clustering models of concept learning
One view is that conceptual knowledge is organized using the circuitry in the medial temporal lobe (MTL) that supports spatial processing and navigation. In contrast, we find that a domain-general learning algorithm explains key findings in both spatial and conceptual domains. When the clustering model is applied to spatial navigation tasks, so-called place and grid cell-like representations emerge because of the relatively uniform distribution of possible inputs in these tasks. The same mechanism applies to conceptual tasks, where the overall space can be higher-dimensional and sampling sparser, leading to representations more aligned with human conceptual knowledge. Although the types of memory supported by the MTL are superficially dissimilar, the information processing steps appear shared. Our account suggests that the MTL uses a general-purpose algorithm to learn and organize context-relevant information in a useful format, rather than relying on navigation-specific neural circuitry.
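A minimal sketch of the domain-general clustering idea (the recruitment rule, threshold, and input distributions are simplified assumptions, not the paper's fitted model): the same mechanism recruits many clusters that tile a uniformly sampled spatial environment, but only a few clusters for sparsely sampled conceptual structure.

```python
import numpy as np

rng = np.random.default_rng(4)

def cluster_inputs(inputs, recruit_threshold=0.2, lr=0.1):
    """Minimal online clustering: recruit a new cluster when no existing cluster
    is close enough to the input, otherwise nudge the nearest cluster toward it."""
    clusters = [inputs[0].copy()]
    for x in inputs[1:]:
        dists = [np.linalg.norm(x - c) for c in clusters]
        nearest = int(np.argmin(dists))
        if dists[nearest] > recruit_threshold:
            clusters.append(x.copy())
        else:
            clusters[nearest] += lr * (x - clusters[nearest])
    return np.array(clusters)

# Spatial task: inputs cover a 2D environment roughly uniformly, so the recruited
# clusters end up tiling the space (place-field-like coverage).
spatial_inputs = rng.uniform(0, 1, size=(2000, 2))
spatial_clusters = cluster_inputs(spatial_inputs)

# Conceptual task: the same mechanism with sparser, structured sampling yields
# a handful of clusters that track the category structure instead of tiling space.
concept_inputs = np.vstack([rng.normal(0.2, 0.03, size=(50, 2)),
                            rng.normal(0.8, 0.03, size=(50, 2))])
concept_clusters = cluster_inputs(concept_inputs)

print(f"Spatial clusters recruited: {len(spatial_clusters)}")
print(f"Conceptual clusters recruited: {len(concept_clusters)}")
```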
What the Success of Brain Imaging Implies about the Neural Code
The success of fMRI places constraints on the nature of the neural code. The fact that researchers can infer similarities between neural representations, despite limitations in what fMRI measures, implies that certain neural coding schemes are more likely than others. For fMRI to be successful given its low temporal and spatial resolution, the neural code must be smooth at the sub-voxel and functional level such that similar stimuli engender similar internal representations. Through proof and simulation, we evaluate a number of reasonable coding schemes and demonstrate that only a subset are plausible given both fMRI's successes and its limitations in measuring neural activity. Deep neural network approaches, which have been forwarded as computational accounts of the ventral stream, are consistent with the success of fMRI, though functional smoothness breaks down in the later network layers. These results have implications for the nature of the neural code and the ventral stream, as well as for what can be successfully investigated with fMRI.
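A small simulation in the spirit of the argument (the tuning widths, voxel pooling, and random comparison code are assumptions for illustration, not the paper's proofs): when the underlying code is functionally smooth, stimulus similarity survives voxel-level averaging, whereas an arbitrary code does not.

```python
import numpy as np

rng = np.random.default_rng(5)
n_neurons, n_voxels, n_stimuli = 1000, 20, 30
stimuli = np.linspace(0, 1, n_stimuli)

# Smooth code: broad tuning curves, so similar stimuli evoke similar responses.
centers = rng.uniform(0, 1, n_neurons)
smooth = np.exp(-(stimuli[:, None] - centers[None, :]) ** 2 / (2 * 0.1 ** 2))

# Non-smooth code: each stimulus gets an arbitrary random pattern.
random_code = rng.normal(size=(n_stimuli, n_neurons))

# Voxels average over many neurons (a crude stand-in for fMRI's coarse resolution).
assignment = rng.integers(0, n_voxels, n_neurons)

def to_voxels(code):
    return np.column_stack([code[:, assignment == v].mean(axis=1)
                            for v in range(n_voxels)])

def similarity_recovery(code):
    """Correlation between stimulus proximity and voxel-pattern similarity."""
    vox = to_voxels(code)
    pattern_sim = np.corrcoef(vox)
    stim_sim = -np.abs(stimuli[:, None] - stimuli[None, :])
    iu = np.triu_indices(n_stimuli, k=1)
    return np.corrcoef(pattern_sim[iu], stim_sim[iu])[0, 1]

print(f"Smooth code: r = {similarity_recovery(smooth):.2f}")
print(f"Random code: r = {similarity_recovery(random_code):.2f}")
```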
Medial prefrontal cortex compresses concept representations through learning
Prefrontal cortex (PFC) is thought to support the ability to focus on goal-relevant information by filtering out irrelevant information, a process akin to dimensionality reduction. Here, we find direct evidence of goal-directed data compression within medial PFC during learning, such that the degree of neural compression predicts an individual's ability to selectively attend to concept-specific information. These findings suggest a domain-general mechanism of learning through compression in mPFC.
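A minimal sketch of how compression through learning might be quantified (the simulated patterns and the participation-ratio measure are illustrative assumptions, not the paper's analysis): effective dimensionality drops when variance concentrates on a few goal-relevant directions.

```python
import numpy as np

rng = np.random.default_rng(6)

def effective_dimensionality(patterns):
    """Participation ratio of the eigenvalue spectrum: a common proxy for how
    many dimensions a set of patterns effectively occupies."""
    centered = patterns - patterns.mean(axis=0)
    eig = np.clip(np.linalg.eigvalsh(np.cov(centered, rowvar=False)), 0, None)
    return eig.sum() ** 2 / np.sum(eig ** 2)

n_trials, n_features = 200, 50

# Hypothetical "early learning" patterns: activity varies along many directions.
early = rng.normal(size=(n_trials, n_features))

# Hypothetical "late learning" patterns: variance concentrates on a few
# goal-relevant directions, i.e., the representation is compressed.
relevant = rng.normal(size=(n_features, 3))
late = rng.normal(size=(n_trials, 3)) @ relevant.T + 0.2 * rng.normal(size=(n_trials, n_features))

print(f"Effective dimensionality early: {effective_dimensionality(early):.1f}")
print(f"Effective dimensionality late:  {effective_dimensionality(late):.1f}")
```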