Communication and rational responsiveness to the world
Donald Davidson has long maintained that in order to be credited with the concept of objectivity – and, so, with language and thought – it is necessary to communicate with at least one other speaker. Here I examine Davidson’s central argument for this thesis and argue that it is unsuccessful. Subsequently, I turn to Robert Brandom’s defense of the thesis in Making It Explicit. I argue that, contrary to Brandom, in order to possess the concept of objectivity it is not necessary to engage in the practice of interpersonal reasoning, because possession of the concept is independently integral to the practice of intrapersonal reasoning.
Unknown Quantum States and Operations, a Bayesian View
The classical de Finetti theorem provides an operational definition of the
concept of an unknown probability in Bayesian probability theory, where
probabilities are taken to be degrees of belief instead of objective states of
nature. In this paper, we motivate and review two results that generalize de
Finetti's theorem to the quantum mechanical setting: namely, a de Finetti
theorem for quantum states and a de Finetti theorem for quantum operations. The
quantum-state theorem, in a closely analogous fashion to the original de
Finetti theorem, deals with exchangeable density-operator assignments and
provides an operational definition of the concept of an "unknown quantum state"
in quantum-state tomography. Similarly, the quantum-operation theorem gives an
operational definition of an "unknown quantum operation" in quantum-process
tomography. These results are especially important for a Bayesian
interpretation of quantum mechanics, where quantum states and (at least some)
quantum operations are taken to be states of belief rather than states of
nature.

Comment: 37 pages, 3 figures; to appear in "Quantum Estimation Theory," edited by M.G.A. Paris and J. Rehacek (Springer-Verlag, Berlin, 2004).
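The classical de Finetti setup the abstract builds on can be illustrated with a small simulation (a minimal sketch, not from the paper): a sequence generated by first drawing a latent bias p — the Bayesian's "unknown probability" — and then flipping a p-coin is exchangeable (order does not matter) without being i.i.d. For two flips, P(heads, tails) and P(tails, heads) should both come out near ∫ p(1−p) dp = 1/6.

```python
import random

def exchangeable_flips(k, rng):
    """Draw k flips that are exchangeable but not i.i.d.: first draw a
    latent bias p ~ Uniform(0, 1) (the 'unknown probability'), then
    flip a p-coin k times."""
    p = rng.random()
    return [rng.random() < p for _ in range(k)]

# Exchangeability check: P(heads, tails) should equal P(tails, heads).
rng = random.Random(0)
n = 200_000
counts = {(True, False): 0, (False, True): 0}
for _ in range(n):
    a, b = exchangeable_flips(2, rng)
    if (a, b) in counts:
        counts[(a, b)] += 1
```

Both Monte Carlo frequencies land near 1/6, and they agree with each other, which is exactly the symmetry that de Finetti's theorem turns into a mixture-of-i.i.d. representation.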
Random sets and exact confidence regions
An important problem in statistics is the construction of confidence regions
for unknown parameters. In most cases, asymptotic distribution theory is used
to construct confidence regions, so any coverage probability claims only hold
approximately, for large samples. This paper describes a new approach, using
random sets, which allows users to construct exact confidence regions without
appeal to asymptotic theory. In particular, if the user-specified random set
satisfies a certain validity property, confidence regions obtained by
thresholding the induced data-dependent plausibility function are shown to have
the desired coverage probability.

Comment: 14 pages, 2 figures.
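The thresholding idea can be made concrete with a toy case (a sketch for illustration only; the specific plausibility function below is a standard one for a normal mean with known variance, not taken from the paper). Thresholding pl(θ) at level α recovers the usual exact z-interval, so coverage holds exactly, not just asymptotically.

```python
import math

def plausibility(theta, xbar, sigma, n):
    """Plausibility of theta for a normal mean with known sigma,
    built from the standard z-pivot: pl = 1 - |2*Phi(z) - 1|."""
    z = (xbar - theta) * math.sqrt(n) / sigma
    phi = 0.5 * (1 + math.erf(z / math.sqrt(2)))
    return 1 - abs(2 * phi - 1)

def confidence_region(xbar, sigma, n, alpha, grid):
    """Threshold the plausibility function at alpha to get the
    100(1 - alpha)% confidence region (here, a set of grid points)."""
    return [t for t in grid if plausibility(t, xbar, sigma, n) > alpha]

grid = [i / 100 for i in range(-300, 301)]
region = confidence_region(0.0, 1.0, 1, 0.05, grid)
```

For xbar = 0, sigma = 1, n = 1, and alpha = 0.05, the region {θ : pl(θ) > α} is exactly (−1.96, 1.96), the familiar 95% interval.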
A Theoretical Analysis of Two-Stage Recommendation for Cold-Start Collaborative Filtering
In this paper, we present a theoretical framework for tackling the cold-start
collaborative filtering problem, where unknown targets (items or users) keep
coming to the system, and there is a limited number of resources (users or
items) that can be allocated and related to them. The solution requires a
trade-off between exploitation and exploration as with the limited
recommendation opportunities, we need to, on one hand, allocate the most
relevant resources right away, but, on the other hand, it is also necessary to
allocate resources that are useful for learning the target's properties in
order to recommend more relevant ones in the future. In this paper, we study a
simple two-stage recommendation combining a sequential and a batch solution
together. We first model the problem with the partially observable Markov
decision process (POMDP) and provide an exact solution. Then, through an
in-depth analysis over the POMDP value iteration solution, we identify that an
exact solution can be abstracted as selecting resources that are not only
highly relevant to the target according to the initial-stage information, but
also highly correlated, either positively or negatively, with other potential
resources for the next stage. With this finding, we propose an approximate
solution to ease the intractability of the exact solution. Our initial results
on synthetic data and the MovieLens 100K dataset confirm the performance gains
of our theoretical development and analysis.
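The selection principle the abstract identifies — prefer resources that are both relevant under initial-stage information and correlated with remaining candidates — can be sketched as a simple scoring rule (a hypothetical illustration; the weighting scheme and names below are assumptions, not the paper's approximate solution).

```python
def first_stage_score(i, relevance, corr, weight=0.5):
    """Score resource i by its prior relevance plus (with a
    hypothetical weight) its mean absolute correlation with the other
    candidates -- correlated picks are informative for stage two."""
    others = [abs(corr[i][j]) for j in range(len(relevance)) if j != i]
    return relevance[i] + weight * sum(others) / len(others)

def select_first_stage(relevance, corr, k, weight=0.5):
    """Pick the k resources with the highest combined score."""
    idx = range(len(relevance))
    scores = {i: first_stage_score(i, relevance, corr, weight) for i in idx}
    return sorted(idx, key=lambda i: -scores[i])[:k]

# Toy example: resource 0 is most relevant; resources 1 and 2 are
# strongly correlated with each other, so picking one of them is
# informative about the other.
relevance = [0.9, 0.5, 0.4]
corr = [[1.0, 0.1, 0.1],
        [0.1, 1.0, 0.9],
        [0.1, 0.9, 1.0]]
picks = select_first_stage(relevance, corr, k=2)
```

Here the rule selects resource 0 (highest relevance) and resource 1 (moderately relevant but strongly correlated with resource 2), mirroring the exploit/explore trade-off described above.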
Partial Truthfulness in Minimal Peer Prediction Mechanisms with Limited Knowledge
We study minimal single-task peer prediction mechanisms that have limited
knowledge about agents' beliefs. Without knowing what agents' beliefs are or
eliciting additional information, it is not possible to design a truthful
mechanism in a Bayesian-Nash sense. We go beyond truthfulness and explore
equilibrium strategy profiles that are only partially truthful. Using the
results from the multi-armed bandit literature, we give a characterization of
how inefficient these equilibria are compared to truthful reporting. We
measure the inefficiency of such strategies by counting the number of dishonest
reports that any minimal knowledge-bounded mechanism must have. We show that
the order of this number is , where is the number of
agents, and we provide a peer prediction mechanism that achieves this bound in
expectation.
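For context on what "minimal" means here, a minimal single-task mechanism pays an agent using only its own report and one peer's report, with no elicited beliefs. A standard instance of this family is output agreement (shown below purely as an illustration of the setting; it is not the mechanism proposed in the paper).

```python
def output_agreement(report_i, report_j):
    """Minimal peer prediction payment: agent i is paid 1 if its
    report matches peer j's report, else 0. The mechanism uses no
    knowledge of agents' beliefs -- only the two reports."""
    return 1 if report_i == report_j else 0

def payments(reports):
    """Pay each agent by comparing its report to the next agent's
    (cyclically), so every agent has exactly one peer."""
    n = len(reports)
    return [output_agreement(reports[i], reports[(i + 1) % n])
            for i in range(n)]
```

With reports ["a", "a", "b"], only the first agent matches its peer, so the payments are [1, 0, 0].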