Sampling Correctors
In many situations, sample data is obtained from a noisy or imperfect source.
In order to address such corruptions, this paper introduces the concept of a
sampling corrector. Such algorithms use structure that the distribution is
purported to have in order to make "on-the-fly" corrections to
samples drawn from probability distributions. These algorithms then act as
filters between the noisy data and the end user.
We show connections between sampling correctors, distribution learning
algorithms, and distribution property testing algorithms. We show that these
connections can be utilized to expand the applicability of known distribution
learning and property testing algorithms as well as to achieve improved
algorithms for those tasks.
As a first step, we show how to design sampling correctors using proper
learning algorithms. We then focus on whether sampling correctors can be more
efficient, in terms of sample complexity, than learning algorithms for the
analogous families of distributions. When
correcting monotonicity, we show that this is indeed the case when also granted
query access to the cumulative distribution function. We also obtain sampling
correctors for monotonicity without this stronger type of access, provided that
the distribution is originally very close to monotone. In addition, we
consider a restricted error model
that aims at capturing "missing data" corruptions. In this model, we show that
distributions that are close to monotone have sampling correctors that are
significantly more efficient than achievable by the learning approach.
We also consider the question of whether sampling correctors require an
additional source of independent random bits to implement the correction
process.
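The "learn properly, then resample" route mentioned above lends itself to a small sketch. The toy corrector below (the function name, the isotonic-averaging learner, and all parameters are my illustration, not the paper's construction) fits a monotone approximation from training samples and then serves corrected samples, acting as the filter between the noisy source and the end user:

```python
import random
from collections import Counter

def proper_learning_corrector(noisy_sampler, domain_size, n_train=10_000):
    """Toy sampling corrector built from a proper learner.

    Draws training samples from the noisy source, fits the closest
    non-increasing (monotone) distribution via pool-adjacent-violators
    averaging, then serves fresh i.i.d. samples from the corrected
    distribution.  Illustrative only, not the paper's algorithm.
    """
    counts = Counter(noisy_sampler() for _ in range(n_train))
    emp = [counts[i] / n_train for i in range(domain_size)]

    # Pool adjacent violators: enforce p[0] >= p[1] >= ... by merging
    # adjacent blocks that violate monotonicity and averaging them.
    stack = []  # entries are [block mean, block size]
    for p in emp:
        stack.append([p, 1])
        while len(stack) > 1 and stack[-2][0] < stack[-1][0]:
            m2, s2 = stack.pop()
            m1, s1 = stack.pop()
            stack.append([(m1 * s1 + m2 * s2) / (s1 + s2), s1 + s2])
    corrected = [m for m, s in stack for _ in range(s)]

    # The returned sampler is the "filter" between source and user.
    support = list(range(domain_size))
    return lambda: random.choices(support, weights=corrected)[0]
```

Here correction is nothing more than learning followed by resampling; the point of the abstract is precisely that dedicated correctors can sometimes beat this approach in sample complexity.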
Maxing, Ranking and Preference Learning
PAC maximum selection (maxing) and ranking of elements via random pairwise comparisons have diverse applications and have been studied under many models and assumptions. We consider PAC maxing and ranking using pairwise comparisons for general probabilistic models. We present a comprehensive understanding of three important problems in PAC preference learning in the adaptive setting: maxing, ranking, and estimating all pairwise preference probabilities.

SST + STI: We consider PAC maximum selection and ranking using pairwise comparisons for general probabilistic models whose comparison probabilities satisfy strong stochastic transitivity (SST) and the stochastic triangle inequality (STI). Modifying the popular knockout tournament, we propose a simple maximum-selection algorithm whose number of comparisons is optimal up to a constant factor. We then derive a general framework that uses noisy binary search to speed up many ranking algorithms, and combine it with merge sort to obtain a ranking algorithm that uses $\mathcal{O}\left(\frac{n}{\epsilon^2}\log n(\log \log n)^3\right)$ comparisons, optimal up to a $(\log \log n)^3$ factor.

SST +/- STI and Borda: With just one simple, natural assumption, strong stochastic transitivity (SST), we show that maxing can be performed with linearly many comparisons, yet ranking requires quadratically many. With no assumptions at all, we show that for the Borda-score metric, maximum selection can be performed with linearly many comparisons and ranking with $\mathcal{O}(n\log n)$ comparisons.

General transitive models: With just weak stochastic transitivity (WST), we prove a lower bound on the number of comparisons maxing requires, and with the slightly more restrictive medium stochastic transitivity (MST), we present a linear-complexity maxing algorithm. With strong stochastic transitivity (SST) and the stochastic triangle inequality (STI), we derive a ranking algorithm with optimal complexity, and an optimal algorithm that estimates all pairwise preference probabilities.

Sequential and competitive: We extend the well-known secretary problem to a probabilistic setting, and apply the intuition gained to derive the first query-optimal sequential algorithm for probabilistic maxing. Furthermore, departing from previous assumptions, the algorithm and its performance guarantees apply even to infinitely many items, and hence in particular do not require a priori knowledge of the number of items. The algorithm has linear complexity and is optimal also in the streaming setting, for both traditional and dueling bandits. In a non-streaming setting, a modification of the algorithm is competitive in that it requires essentially the lowest number of queries not just in the worst case, but for every underlying distribution.
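The knockout-tournament idea from the SST + STI setting can be sketched in a few lines. In this toy version (function names are mine, and the fixed per-pair repetition budget is a placeholder for the paper's carefully tuned, epsilon-dependent budgets), each pair is decided by majority vote over repeated noisy comparisons:

```python
import random

def knockout_max(items, compare, reps=101):
    """Toy knockout tournament for maxing under noisy comparisons.

    compare(a, b) returns True with probability p(a beats b).  Each
    pair is decided by majority vote over `reps` repeats; the fixed
    budget is illustrative -- the actual algorithm adapts the budget
    per round to achieve an optimal total comparison count.
    """
    players = list(items)
    while len(players) > 1:
        random.shuffle(players)
        winners = []
        for i in range(0, len(players) - 1, 2):
            a, b = players[i], players[i + 1]
            a_wins = sum(compare(a, b) for _ in range(reps))
            winners.append(a if 2 * a_wins > reps else b)
        if len(players) % 2:        # odd player out gets a bye
            winners.append(players[-1])
        players = winners
    return players[0]
```

Under SST, majority voting amplifies each pairwise advantage, so with a large enough budget the best element survives every round with high probability.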