Minimizing Statistical Bias with Queries
I describe an exploration criterion that attempts to minimize a learner's error by minimizing its estimated squared bias. In experiments with locally weighted regression on two simple kinematics problems, this "bias-only" approach outperforms the more common "variance-only" exploration approach, even in the presence of noise.
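To make the criterion concrete, here is a minimal sketch of bias-driven query selection with locally weighted regression. The bias proxy used here, the gap between a flexible local fit and a heavily smoothed one, is an illustrative assumption rather than the paper's estimator, and all function names are invented for the example.

```python
import numpy as np

def lwr_predict(X, y, x0, tau):
    """Locally weighted linear regression prediction at x0 with bandwidth tau."""
    w = np.exp(-(X - x0) ** 2 / (2 * tau ** 2))        # Gaussian kernel weights
    A = np.vstack([np.ones_like(X), X]).T              # local linear design matrix
    W = np.diag(w)
    beta = np.linalg.pinv(A.T @ W @ A) @ (A.T @ W @ y) # weighted least squares
    return beta[0] + beta[1] * x0

def next_query(X, y, candidates, tau=0.3):
    """Pick the candidate input where the estimated squared bias is largest."""
    bias_sq = []
    for x0 in candidates:
        smooth = lwr_predict(X, y, x0, tau=3 * tau)    # heavily smoothed fit
        flexible = lwr_predict(X, y, x0, tau=tau)      # flexible fit
        bias_sq.append((flexible - smooth) ** 2)       # crude plug-in bias^2 proxy
    return candidates[int(np.argmax(bias_sq))]
```

A "variance-only" learner would instead score candidates by the predictive variance of the local fit; the point of the abstract is that the bias-driven score above can win even under noise.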
Active Learning with Statistical Models
For many types of machine learning algorithms, one can compute the
statistically 'optimal' way to select training data. In this paper, we review
how optimal data selection techniques have been used with feedforward neural
networks. We then show how the same principles may be used to select data for
two alternative, statistically-based learning architectures: mixtures of
Gaussians and locally weighted regression. While the techniques for neural
networks are computationally expensive and approximate, the techniques for
mixtures of Gaussians and locally weighted regression are both efficient and
accurate. Empirically, we observe that the optimality criterion sharply
decreases the number of training examples the learner needs in order to achieve
good performance.
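As an illustration of the variance-reduction criterion this line of work builds on, the sketch below greedily selects the query input that minimizes the learner's average predictive variance over a reference set. It uses ordinary linear regression with a Sherman-Morrison update for tractability; the paper's closed-form derivations for mixtures of Gaussians and locally weighted regression are not reproduced, and all names are assumptions.

```python
import numpy as np

def expected_variance_after_query(XtX_inv, x_new, X_ref):
    """Average predictive variance over X_ref after adding the row x_new.

    Sherman-Morrison rank-one update of (X^T X)^{-1}; homoscedastic noise
    is assumed, so the constant sigma^2 factor is dropped.
    """
    v = XtX_inv @ x_new
    updated = XtX_inv - np.outer(v, v) / (1.0 + x_new @ v)
    # x_i^T M x_i for every reference input x_i
    return np.mean(np.einsum('ij,jk,ik->i', X_ref, updated, X_ref))

def select_query(X_train, candidates, X_ref):
    """Greedy variance-only selection: query where expected variance is lowest."""
    XtX_inv = np.linalg.pinv(X_train.T @ X_train)
    scores = [expected_variance_after_query(XtX_inv, c, X_ref) for c in candidates]
    return candidates[int(np.argmin(scores))]
```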
Unbiased Comparative Evaluation of Ranking Functions
Eliciting relevance judgments for ranking evaluation is labor-intensive and
costly, motivating careful selection of which documents to judge. Unlike
traditional approaches that make this selection deterministically,
probabilistic sampling has shown intriguing promise since it enables the design
of estimators that are provably unbiased even when reusing data with missing
judgments. In this paper, we first unify and extend these sampling approaches
by viewing the evaluation problem as a Monte Carlo estimation task that applies
to a large number of common IR metrics. Drawing on the theoretical clarity that
this view offers, we tackle three practical evaluation scenarios: comparing two
systems, comparing systems against a baseline, and ranking systems. For
each scenario, we derive an estimator and a variance-optimizing sampling
distribution while retaining the strengths of sampling-based evaluation,
including unbiasedness, reusability despite missing data, and ease of use in
practice. In addition to the theoretical contribution, we empirically evaluate
our methods against previously used sampling heuristics and find that they
generally cut the number of required relevance judgments at least in half.
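A minimal sketch of the kind of estimator involved: a Horvitz-Thompson (inverse-propensity) estimate of DCG under probabilistically sampled relevance judgments. DCG stands in for the "large number of common IR metrics" mentioned above; the inclusion probabilities p and all identifiers are illustrative, and the variance-optimizing sampling distributions the paper derives are not reproduced here.

```python
import numpy as np

def ht_dcg(ranking, judged_rel, p):
    """Horvitz-Thompson estimate of DCG for a ranking with sampled judgments.

    judged_rel maps each judged doc id to its relevance grade; p maps every
    doc id to its judgment-inclusion probability. Unjudged docs contribute 0,
    judged docs are reweighted by 1/p, which makes the estimate unbiased
    over the sampling design even with missing judgments.
    """
    est = 0.0
    for rank, d in enumerate(ranking, start=1):
        if d in judged_rel:
            est += (judged_rel[d] / np.log2(rank + 1)) / p[d]
    return est
```

Comparing two systems then reduces to estimating the difference of two such quantities from the same shared, reusable pool of judged documents.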
Optimum Statistical Estimation with Strategic Data Sources
We propose an optimum mechanism for providing monetary incentives to the data
sources of a statistical estimator such as linear regression, so that high
quality data is provided at low cost, in the sense that the sum of payments and
estimation error is minimized. The mechanism applies to a broad range of
estimators, including linear and polynomial regression, kernel regression, and,
under some additional assumptions, ridge regression. It also generalizes to
several objectives, including minimizing estimation error subject to budget
constraints. Besides our concrete results for regression problems, we
contribute a mechanism design framework through which to design and analyze
statistical estimators whose training examples are supplied by workers who
incur a cost for labeling them.
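As a toy illustration of tying payments to data quality, and emphatically not the paper's optimal mechanism, the sketch below scores each worker's contributed point by its marginal reduction of a linear-regression estimator's variance; a payment rule could then be keyed to this score. All names are hypothetical.

```python
import numpy as np

def marginal_variance_reduction(X, i):
    """How much row i of the design matrix X shrinks tr((X^T X)^{-1}).

    A toy payment signal: data that reduces the estimator's variance more
    earns a higher score. This only conveys the flavour of trading off
    payments against estimation error, not the paper's mechanism.
    """
    full = np.trace(np.linalg.pinv(X.T @ X))
    X_minus = np.delete(X, i, axis=0)              # leave worker i's row out
    without = np.trace(np.linalg.pinv(X_minus.T @ X_minus))
    return without - full
```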
On Measuring Bias in Online Information
Bias in online information has recently become a pressing issue, with search
engines, social networks and recommendation services being accused of
exhibiting some form of bias. In this vision paper, we make the case for a
systematic approach towards measuring bias. To this end, we discuss formal
measures for quantifying the various types of bias, we outline the system
components necessary for realizing them, and we highlight the related research
challenges and open problems.
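One way such a formal measure could look, purely as an illustrative sketch rather than a measure proposed in the paper: the KL divergence between the category mix of returned results and a reference distribution, where the categories, the reference, and the smoothing are all assumptions of the example.

```python
import numpy as np

def exposure_skew(result_labels, reference, categories):
    """KL divergence between the category mix of returned results and a
    reference distribution (assumed strictly positive on all categories)."""
    counts = np.array([result_labels.count(c) for c in categories], dtype=float)
    observed = (counts + 1e-9) / (counts.sum() + 1e-9 * len(categories))
    ref = np.asarray([reference[c] for c in categories], dtype=float)
    return float(np.sum(observed * np.log(observed / ref)))
```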