Measuring category intuitiveness in unconstrained categorization tasks
What makes a category seem natural or intuitive? In this paper, an unsupervised categorization task was employed to examine observer agreement concerning the categorization of nine different stimulus sets. The stimulus sets were designed to capture different intuitions about classification structure. The main empirical index of category intuitiveness was the frequency of the preferred classification for different stimulus sets. With 169 participants and a within-participants design, the most frequent classification was produced over 50 times for some stimulus sets and no more than two or three times for others. The main empirical finding was that cluster tightness was more important than cluster separation in determining category intuitiveness. The results were considered in relation to the following models of unsupervised categorization: DIVA, the rational model, the simplicity model, SUSTAIN, an unsupervised version of the Generalized Context Model (UGCM), and a simple geometric model based on similarity. DIVA, the geometric approach, SUSTAIN, and the UGCM provided good, though not perfect, fits. Overall, the present work highlights several theoretical and practical issues regarding unsupervised categorization and reveals weaknesses in some of the corresponding formal models.
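The distinction between cluster tightness and cluster separation can be illustrated with a minimal sketch. The function names and the specific distance measures below are illustrative, not the indices used in the paper: tightness here is the mean distance of points to their cluster centroid (lower is tighter), and separation is the mean pairwise distance between centroids (higher is more separated).

```python
import numpy as np

def tightness(points, labels):
    """Mean distance of each point to its cluster centroid (lower = tighter)."""
    dists = []
    for k in np.unique(labels):
        cluster = points[labels == k]
        centroid = cluster.mean(axis=0)
        dists.extend(np.linalg.norm(cluster - centroid, axis=1))
    return float(np.mean(dists))

def separation(points, labels):
    """Mean pairwise distance between cluster centroids (higher = more separated)."""
    centroids = np.array([points[labels == k].mean(axis=0)
                          for k in np.unique(labels)])
    n = len(centroids)
    pair_dists = [np.linalg.norm(centroids[i] - centroids[j])
                  for i in range(n) for j in range(i + 1, n)]
    return float(np.mean(pair_dists))

# Two tight, well-separated 2D clusters
rng = np.random.default_rng(0)
points = np.vstack([rng.normal(0, 0.1, (20, 2)),
                    rng.normal(5, 0.1, (20, 2))])
labels = np.array([0] * 20 + [1] * 20)
print(tightness(points, labels))   # small: points sit close to their centroids
print(separation(points, labels))  # large: centroids lie far apart
```

A stimulus set can score well on one index and poorly on the other, which is what makes the paper's finding (tightness mattering more than separation) a substantive empirical claim.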
Supervised versus unsupervised categorization: Two sides of the same coin?
Supervised and unsupervised categorization have been studied in separate research traditions. A handful of studies have attempted to explore a possible convergence between the two. The present research builds on these studies by comparing the unsupervised categorization results of Pothos et al. (submitted; 2008) with the results from two procedures of supervised categorization. In two experiments, we tested 375 participants with nine different stimulus sets and examined the relation between ease of learning a classification, memory for a classification, and spontaneous preference for a classification. After taking into account the role of the number of category labels (clusters) in supervised learning, we found the three variables to be closely associated with each other. Our results provide encouragement for researchers seeking unified theoretical explanations for supervised and unsupervised categorization, but raise a range of challenging theoretical questions.
Effect of cognitive biases on human understanding of rule-based machine learning models
This PhD thesis investigates to what extent cognitive biases affect human understanding of interpretable machine learning models, in particular of rules discovered from data. Twenty cognitive biases (illusions, effects) are analysed in detail, including identification of possibly effective debiasing techniques that can be adopted by designers of machine learning algorithms and software. This qualitative research is complemented by multiple experiments aimed at verifying whether, and to what extent, selected cognitive biases influence human understanding of actual rule learning results. Two experiments were performed: one focused on eliciting plausibility judgments for pairs of inductively learned rules; the second replicated the Linda experiment with crowdsourcing, along with two of its modifications. Altogether nearly 3,000 human judgments were collected. We obtained empirical evidence for the insensitivity to sample size effect. There is also limited evidence for the disjunction fallacy, misunderstanding of "and", the weak evidence effect, and the availability heuristic.
While there seems to be no universal approach for eliminating all the identified cognitive biases, it follows from our analysis that the effect of many biases can be ameliorated by making rule-based models more concise. To this end, in the second part of the thesis we propose a novel machine learning framework which postprocesses the rules output by the seminal association rule classification algorithm CBA [Liu et al., 1998]. The framework uses the original undiscretized numerical attributes to optimize the discovered association rules, refining the boundaries of literals in the antecedents of the rules produced by CBA. Some rules, as well as literals within rules, can consequently be removed, which makes the resulting classifier smaller. A benchmark of our approach on 22 UCI datasets shows an average 53% decrease in the total size of the model, as measured by the total number of conditions in all rules. Model accuracy remains at the same level as for CBA.
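The core idea of refining literal boundaries with undiscretized values can be sketched as follows. This is a simplified illustration under assumed semantics, not the thesis's actual framework: the function name and the trimming criterion (shrinking a discretized interval to the tightest span of covered target-class instances) are hypothetical.

```python
# Illustrative sketch: a discretization step produced a coarse interval
# literal such as "age in [30, 50)". Using the raw (undiscretized) values,
# we shrink the interval to the tightest span that still covers the
# rule's target-class instances. Names and criterion are hypothetical.

def trim_literal(low, high, values, classes, target_class):
    """Shrink [low, high] to the tightest interval around the covered
    instances of target_class, using the raw attribute values.
    Returns None if the literal covers no target-class instances."""
    covered = [v for v, c in zip(values, classes)
               if low <= v <= high and c == target_class]
    if not covered:
        return None  # literal is useless for this rule; a candidate for removal
    return (min(covered), max(covered))

# Raw data covered by the rule only spans 34..47 for the rule's class 'y'.
values  = [25, 34, 38, 41, 47, 52, 44]
classes = ['n', 'y', 'y', 'y', 'y', 'n', 'n']
print(trim_literal(30, 50, values, classes, 'y'))  # -> (34, 47)
```

Trimming intervals this way can only tighten or drop literals, which is consistent with the abstract's claim that the postprocessed classifier is smaller than the CBA output.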
An Algorithmic Interpretation of Quantum Probability
The Everett (or relative-state, or many-worlds) interpretation of quantum mechanics has come under fire for inadequately dealing with the Born rule (the formula for calculating quantum probabilities). Numerous attempts have been made to derive this rule from the perspective of observers within the quantum wavefunction. These are not really analytic proofs, but are rather attempts to derive the Born rule as a synthetic a priori necessity, given the nature of human observers (a fact not fully appreciated even by all of those who have attempted such proofs). I show why existing attempts are unsuccessful or only partly successful, and postulate that Solomonoff's algorithmic approach to the interpretation of probability theory could clarify the problems with these approaches. The Sleeping Beauty probability puzzle is used as a springboard from which to deduce an objectivist, yet synthetic a priori framework for quantum probabilities that properly frames the role of self-location and self-selection (anthropic) principles in probability theory. I call this framework "algorithmic synthetic unity" (or ASU). I offer no new formal proof of the Born rule, largely because I feel that existing proofs (particularly that of Gleason) are already adequate, and as close to being a formal proof as one should expect or want. Gleason's one unjustified assumption (known as noncontextuality) is, I will argue, completely benign when considered within the algorithmic framework that I propose. I will also argue that, to the extent the Born rule can be derived within ASU, there is no reason to suppose that we could not also derive all the other fundamental postulates of quantum theory as well. There is nothing special here about the Born rule, and I suggest that a completely successful Born rule proof might only be possible once all the other postulates become part of the derivation.
As a start towards this end, I show how we can already derive the essential content of the fundamental postulates of quantum mechanics, at least in outline, and especially if we allow some educated and well-motivated guesswork along the way. The result is some steps towards a coherent and consistent algorithmic interpretation of quantum mechanics.
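For reference, the Born rule discussed in this abstract is the standard formula for measurement probabilities: for a system in state |ψ⟩ measured in an orthonormal basis {|i⟩},

```latex
P(i) = |\langle i \mid \psi \rangle|^{2}
```

Gleason's theorem, mentioned above, shows that (in Hilbert spaces of dimension three or more) this quadratic form is the only probability assignment consistent with noncontextuality, which is why the author treats Gleason's proof as essentially adequate.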