Inferring choice criteria with mixture IRT models: A demonstration using ad hoc and goal-derived categories
Whether it pertains to the foods to buy when one is on a diet, the items to take along to the beach on one’s day off or
(perish the thought) the belongings to save from one’s burning house, choice is ubiquitous. We aim to determine from
choices the criteria individuals use when they select objects from among a set of candidates. In order to do so, we employ
a mixture IRT (item-response theory) model that capitalizes on the insights that objects are chosen more often the better
they meet the choice criteria and that the use of different criteria is reflected in inter-individual selection differences. The
model is found to account for the inter-individual selection differences for 10 ad hoc and goal-derived categories. Its
parameters can be related to selection criteria that are frequently thought of in the context of these categories. These
results suggest that mixture IRT models allow one to infer from mere choice behavior the criteria individuals used to
select/discard objects. Potential applications of mixture IRT models in other judgment and decision-making contexts are
discussed.
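The abstract leaves the model unspecified beyond these two insights, but a minimal sketch of one common mixture-IRT formulation (a latent-class Rasch model, where each latent class represents one choice criterion) might look as follows. All parameter names and the logistic link are assumptions for illustration, not details taken from the paper.

```python
import numpy as np

def selection_loglik(x, theta, beta, pi):
    """Log-likelihood of binary selection data under a mixture Rasch (IRT) model.

    x     : (n_persons, n_objects) 0/1 selections
    theta : (n_persons,) person selectivity (how liberally one selects)
    beta  : (n_classes, n_objects) how poorly each object meets each latent
            class's choice criterion (higher beta -> selected less often)
    pi    : (n_classes,) mixing proportions over latent criterion classes
    """
    # Rasch model: logit P(person i selects object j | class c) = theta_i - beta_cj
    logits = theta[:, None, None] - beta[None, :, :]          # (n, c, j)
    p = 1.0 / (1.0 + np.exp(-logits))
    # Bernoulli log-likelihood of each person's selection pattern under each class
    ll = (x[:, None, :] * np.log(p) + (1 - x[:, None, :]) * np.log(1 - p)).sum(axis=2)
    # Marginalize over latent classes with a stable log-sum-exp
    m = ll.max(axis=1, keepdims=True)
    return (m[:, 0] + np.log((pi[None, :] * np.exp(ll - m)).sum(axis=1))).sum()
```

Fitting theta, beta and pi (e.g., by EM or gradient ascent on this likelihood) and inspecting the per-class beta profiles is what would let one read off which criterion each latent class of respondents appears to use.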
Incremental learning of independent, overlapping, and graded concept descriptions with an instance-based process framework
Supervised learning algorithms make several simplifying assumptions concerning the characteristics of the concept descriptions to be learned. For example, concepts are often assumed to (1) be defined with respect to the same set of relevant attributes, (2) be disjoint in instance space, and (3) have uniform instance distributions. While these assumptions constrain the learning task, they unfortunately limit an algorithm's applicability. We believe that supervised learning algorithms should learn attribute relevancies independently for each concept, allow instances to be members of any subset of concepts, and represent graded concept descriptions. This paper introduces a process framework for instance-based learning algorithms that exploit only specific instance and performance feedback information to guide their concept learning processes. We also introduce Bloom, a specific instantiation of this framework. Bloom is a supervised, incremental, instance-based learning algorithm that learns relative attribute relevancies independently for each concept, allows instances to be members of any subset of concepts, and represents graded concept memberships. We describe empirical evidence to support our claims that Bloom can learn independent, overlapping, and graded concept descriptions.
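Bloom's internal mechanics are not given in the abstract; the toy sketch below only illustrates the three advertised properties (per-concept attribute relevancies, membership in any subset of concepts, graded memberships) in a generic instance-based learner. The class name, similarity function and weight-update rule are all hypothetical, not Bloom's.

```python
import numpy as np

class PerConceptIBL:
    """Toy instance-based learner: one relevance-weight vector per concept,
    multi-concept instances, graded membership scores."""

    def __init__(self, n_attrs, concepts, lr=0.1):
        self.weights = {c: np.ones(n_attrs) for c in concepts}  # independent per concept
        self.store = []                                         # (instance, concept set)
        self.lr = lr

    def membership(self, x, concept, k=3):
        """Graded membership: similarity to the k nearest stored instances of
        `concept`, measured under that concept's own attribute weights."""
        pos = [xi for xi, labels in self.store if concept in labels]
        if not pos:
            return 0.0
        w = self.weights[concept]
        dists = np.sqrt((w * (np.array(pos) - x) ** 2).sum(axis=1))
        return float(np.mean([1.0 / (1.0 + d) for d in sorted(dists)[:k]]))

    def train(self, x, labels):
        """Incremental update from performance feedback, then store the
        instance; `labels` may be any subset of the known concepts."""
        x = np.asarray(x, dtype=float)
        for c, w in self.weights.items():
            correct = (self.membership(x, c) > 0.5) == (c in labels)
            # crude feedback rule: strengthen weights after a correct
            # prediction, weaken them after a miss (scaled by |attribute|)
            w += self.lr * (1.0 if correct else -1.0) * np.abs(x)
            np.clip(w, 0.0, None, out=w)
        self.store.append((x, set(labels)))
```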
Toward Optimal Run Racing: Application to Deep Learning Calibration
This paper aims at one-shot learning of deep neural nets, where a highly
parallel setting is considered to address the algorithm calibration problem:
selecting the best neural architecture and learning hyper-parameter values
depending on the dataset at hand. The notoriously expensive calibration problem
is optimally reduced by detecting and early-stopping non-optimal runs. The
theoretical contribution regards the optimality guarantees within the multiple
hypothesis testing framework. Experiments on the Cifar10, PTB and Wiki
benchmarks demonstrate the relevance of the approach, with a principled and
consistent improvement on the state of the art and no extra hyper-parameter.
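As a rough sketch of the racing idea, one can checkpoint all runs in parallel and stop any run whose optimistic performance bound falls below the leader's pessimistic bound; the Hoeffding radius with a union bound below stands in for the paper's actual multiple-hypothesis-testing machinery, and all names are illustrative.

```python
import math

def race(runs, n_epochs, delta=0.05):
    """Race configurations, early-stopping non-optimal runs.

    `runs` maps a config name to a callable that trains one more epoch and
    returns the current validation accuracy in [0, 1].
    """
    alive = set(runs)
    scores = {name: [] for name in runs}
    for t in range(1, n_epochs + 1):
        for name in list(alive):
            scores[name].append(runs[name]())
        # Hoeffding radius; the log term union-bounds over all tests performed
        eps = math.sqrt(math.log(2 * len(runs) * n_epochs / delta) / (2 * t))
        best = max(sum(scores[n]) / t for n in alive)
        # keep a run only if its upper bound can still reach the leader's lower bound
        alive = {n for n in alive if sum(scores[n]) / t + eps >= best - eps}
    return max(alive, key=lambda n: sum(scores[n]) / n_epochs)
```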
Towards Better Separation between Deterministic and Randomized Query Complexity
We show that there exists a Boolean function $F$ which observes the following
separations among deterministic query complexity $D(F)$, randomized zero-error
query complexity $R_0(F)$ and randomized one-sided error query complexity
$R_1(F)$: $R_1(F) = \widetilde{O}(\sqrt{D(F)})$ and
$R_0(F) = \widetilde{O}(D(F))^{3/4}$. This refutes the conjecture made by Saks
and Wigderson that for any Boolean function $f$,
$R_0(f) = \Omega(D(f)^{0.753\ldots})$. This also shows the widest separation between
$R_1(f)$ and $D(f)$ for any Boolean function. The function $F$ was defined by
Göös, Pitassi and Watson, who studied it for showing a separation
between deterministic decision tree complexity and unambiguous
non-deterministic decision tree complexity. Independently of us, Ambainis et al.
proved that different variants of the function certify an optimal (quadratic)
separation between $D(f)$ and $R_0(f)$, and a polynomial separation between
$R_0(f)$ and $R_1(f)$. Viewed as separation results, our results are subsumed
by those of Ambainis et al. However, while the functions considered in the work
of Ambainis et al. are different variants of $F$, we work with the original
function $F$ itself.
Don't Believe Everything You Hear: Preserving Relevant Information by Discarding Social Information
Integrating information gained by observing others via Social Bayesian Learning can be beneficial for an agent's performance, but can also enable population-wide information cascades that perpetuate false beliefs through the agent population. We show how agents can influence the observation network by changing their probability of observing others, and demonstrate the existence of a population-wide equilibrium, where the advantages and disadvantages of the Social Bayesian update are balanced. We also use the formalism of relevant information to illustrate how negative information cascades are characterized by processing increasing amounts of non-relevant information.
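A minimal sketch of one such update, assuming a binary world state, log-odds beliefs, and an attention probability `p_observe` standing in for the paper's observation-network knob (all names are illustrative):

```python
import random

def social_bayes_update(log_odds, private_llr, neighbor_log_odds, p_observe):
    """One belief update mixing private evidence with social information.

    log_odds          : agent's current log-odds for the hypothesis
    private_llr       : log-likelihood ratio of the agent's private signal
    neighbor_log_odds : beliefs reported by potentially observable neighbors
    p_observe         : probability of attending to each neighbor
    """
    log_odds += private_llr  # standard Bayesian update on private evidence
    for nb in neighbor_log_odds:
        if random.random() < p_observe:
            # naive social update: a neighbor's belief is treated as fresh
            # evidence, which is exactly what lets cascades amplify
            log_odds += nb
    return log_odds
```

Raising `p_observe` pulls in more social information (faster convergence when neighbors are right, runaway cascades when they are wrong), which is the trade-off the population-wide equilibrium balances.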
Subset selection in dimension reduction methods
Dimension reduction methods play an important role in multivariate statistical analysis, in particular with high-dimensional data. Linear methods can be seen as a linear mapping from the original feature space to a dimension reduction subspace. The aim is to transform the data so that the essential structure is more easily understood. However, highly correlated variables provide redundant information, whereas some other features may be irrelevant, and we would like to identify and then discard both while pursuing dimension reduction. Here we propose a greedy search algorithm, which avoids the search over all possible subsets, for ranking subsets of variables based on their ability to explain variation in the dimension reduction variates.
Keywords: Dimension reduction methods, Linear mapping, Subset selection, Greedy search
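A minimal sketch of such a greedy forward search, assuming the dimension reduction variates Z are already computed (e.g., principal component scores) and using multivariate R^2 as the measure of explained variation; the paper's exact criterion may differ.

```python
import numpy as np

def greedy_subset_ranking(X, Z):
    """Greedily rank the columns of X by how well the running subset
    explains the dimension reduction variates Z via an OLS fit."""
    n, p = X.shape

    def explained(cols):
        A = np.column_stack([np.ones(n), X[:, cols]])   # intercept + subset
        fitted = A @ np.linalg.lstsq(A, Z, rcond=None)[0]
        rss = ((Z - fitted) ** 2).sum()
        tss = ((Z - Z.mean(axis=0)) ** 2).sum()
        return 1.0 - rss / tss                          # multivariate R^2

    remaining, order = list(range(p)), []
    while remaining:
        # add the variable giving the largest gain in explained variation
        best = max(remaining, key=lambda j: explained(order + [j]))
        order.append(best)
        remaining.remove(best)
    return order
```

Each step costs one least-squares fit per candidate, so the whole ranking needs O(p^2) fits rather than the 2^p evaluations an exhaustive subset search would require.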