The Bayesian Case Model: A Generative Approach for Case-Based Reasoning and Prototype Classification
We present the Bayesian Case Model (BCM), a general framework for Bayesian
case-based reasoning (CBR) and prototype classification and clustering. BCM
brings the intuitive power of CBR to a Bayesian generative framework. The BCM
learns prototypes, the "quintessential" observations that best represent
clusters in a dataset, by performing joint inference on cluster labels,
prototypes and important features. Simultaneously, BCM pursues sparsity by
learning subspaces, the sets of features that play important roles in the
characterization of the prototypes. The prototype and subspace representation
provides quantitative benefits in interpretability while preserving
classification accuracy. Human subject experiments verify statistically
significant improvements to participants' understanding when using explanations
produced by BCM, compared to those given by prior art.
Comment: Published in Neural Information Processing Systems (NIPS) 2014.
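The prototype-and-subspace representation described above can be illustrated with a deliberately simplified sketch. This is not BCM's actual joint Bayesian inference (which samples cluster labels, prototypes, and feature subspaces together); it only mimics the end product using summary statistics, and all function and variable names are hypothetical:

```python
import numpy as np

def prototypes_and_subspaces(X, labels, n_subspace=2):
    """Toy stand-in for BCM's output: for each cluster, pick a feature
    subspace and a prototype observation.

    Here the subspace is simply the n_subspace features whose cluster
    mean deviates most from the global mean, and the prototype is the
    cluster member closest to the cluster mean on those features.
    (BCM instead infers both jointly in a generative model.)
    """
    results = {}
    global_mean = X.mean(axis=0)
    for c in np.unique(labels):
        members = np.where(labels == c)[0]
        cluster_mean = X[members].mean(axis=0)
        # features where this cluster is most distinctive
        subspace = np.argsort(-np.abs(cluster_mean - global_mean))[:n_subspace]
        # "quintessential" member: closest to the cluster mean on the subspace
        d = np.linalg.norm(X[members][:, subspace] - cluster_mean[subspace], axis=1)
        results[int(c)] = {
            "prototype": int(members[np.argmin(d)]),
            "subspace": sorted(int(f) for f in subspace),
        }
    return results

# toy data: two clusters that differ only on features 0 and 1
rng = np.random.default_rng(0)
X = rng.normal(size=(40, 5), scale=0.3)
X[:20, :2] += 3.0                      # cluster 0 shifted on features 0 and 1
labels = np.repeat([0, 1], 20)
out = prototypes_and_subspaces(X, labels)
```

In this toy setting both clusters are characterized by the same two informative features, so each cluster's subspace recovers features 0 and 1, and each prototype is an actual observation from its cluster, which is what makes the representation interpretable.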
On the adequacy of current empirical evaluations of formal models of categorization
Categorization is one of the fundamental building blocks of cognition, and the study of categorization is notable for the extent to which formal modeling has been a central and influential component of research. However, the field has seen a proliferation of noncomplementary models with little consensus on the relative adequacy of these accounts. Progress in assessing the relative adequacy of formal categorization models has, to date, been limited because (a) formal model comparisons are narrow in the number of models and phenomena considered and (b) models do not often clearly define their explanatory scope. Progress is further hampered by the practice of fitting models with arbitrarily variable parameters to each data set independently. Reviewing examples of good practice in the literature, we conclude that model comparisons are most fruitful when relative adequacy is assessed by comparing well-defined models on the basis of the number and proportion of irreversible, ordinal, penetrable successes (principles of minimal flexibility, breadth, good-enough precision, maximal simplicity, and psychological focus).
An introduction to time-resolved decoding analysis for M/EEG
The human brain is constantly processing and integrating information in order
to make decisions and interact with the world, for tasks from recognizing a
familiar face to playing a game of tennis. These complex cognitive processes
require communication between large populations of neurons. The non-invasive
neuroimaging methods of electroencephalography (EEG) and magnetoencephalography
(MEG) provide population measures of neural activity with millisecond precision
that allow us to study the temporal dynamics of cognitive processes. However,
multi-sensor M/EEG data are inherently high-dimensional, making it difficult to
parse important signal from noise. Multivariate pattern analysis (MVPA) or
"decoding" methods offer vast potential for understanding high-dimensional
M/EEG neural data. MVPA can be used to distinguish between different conditions
and map the time courses of various neural processes, from basic sensory
processing to high-level cognitive processes. In this chapter, we discuss the
practical aspects of performing decoding analyses on M/EEG data as well as the
limitations of the method, and then we discuss some applications for
understanding representational dynamics in the human brain.
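The core of a time-resolved decoding analysis can be sketched in a few lines: train and test a classifier at each time point, with cross-validation over trials, and plot accuracy over time. This is a minimal sketch assuming trials-by-sensors-by-time data; it uses a simple nearest-centroid classifier in place of the regularized linear classifiers typically used in practice, and all names are illustrative:

```python
import numpy as np

def time_resolved_decoding(X, y, n_folds=5, seed=0):
    """Cross-validated decoding accuracy at each time point.

    X : array (n_trials, n_sensors, n_times) of trial data
    y : array (n_trials,) of binary condition labels (0 or 1)
    Returns an array (n_times,) of decoding accuracies.
    """
    rng = np.random.default_rng(seed)
    n_trials, _, n_times = X.shape
    folds = np.array_split(rng.permutation(n_trials), n_folds)
    acc = np.zeros(n_times)
    for t in range(n_times):
        correct = 0
        for k in range(n_folds):
            test_idx = folds[k]
            train_idx = np.concatenate(
                [folds[j] for j in range(n_folds) if j != k])
            Xtr, Xte = X[train_idx, :, t], X[test_idx, :, t]
            ytr, yte = y[train_idx], y[test_idx]
            # nearest-centroid classifier over the sensor pattern at time t
            c0 = Xtr[ytr == 0].mean(axis=0)
            c1 = Xtr[ytr == 1].mean(axis=0)
            pred = (np.linalg.norm(Xte - c1, axis=1)
                    < np.linalg.norm(Xte - c0, axis=1)).astype(int)
            correct += int((pred == yte).sum())
        acc[t] = correct / n_trials
    return acc

# toy data: conditions become separable only after "stimulus onset" (t >= 10)
rng = np.random.default_rng(1)
X = rng.normal(size=(60, 8, 20))       # 60 trials, 8 sensors, 20 time points
y = np.repeat([0, 1], 30)
X[y == 1, :, 10:] += 1.5               # condition effect from t = 10 onward
acc = time_resolved_decoding(X, y)
```

On this toy data, accuracy hovers around chance (0.5) before the simulated onset and rises well above chance afterward, which is the signature time course such analyses are designed to reveal.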