Learning Multi-item Auctions with (or without) Samples
We provide algorithms that learn simple auctions whose revenue is
approximately optimal in multi-item multi-bidder settings, for a wide range of
valuations including unit-demand, additive, constrained additive, XOS, and
subadditive. We obtain our learning results in two settings. The first is the
commonly studied setting where sample access to the bidders' distributions over
valuations is given, for both regular distributions and arbitrary distributions
with bounded support. Our algorithms require polynomially many samples in the
number of items and bidders. The second is a more general max-min learning
setting that we introduce, where we are given "approximate distributions," and
we seek to compute an auction whose revenue is approximately optimal
simultaneously for all "true distributions" that are close to the given ones.
These results are more general in that they imply the sample-based results, and
are also applicable in settings where we have no sample access to the
underlying distributions but have estimated them indirectly via market research
or by observation of previously run, potentially non-truthful auctions.
Our results hold for valuation distributions satisfying the standard (and
necessary) independence-across-items property. They also generalize and improve
upon recent works, which have provided algorithms that learn approximately
optimal auctions in more restricted settings with additive, subadditive and
unit-demand valuations using sample access to distributions. We generalize
these results to the complete unit-demand, additive, and XOS setting, to i.i.d.
subadditive bidders, and to the max-min setting.
Our results are enabled by new uniform convergence bounds for hypotheses
classes under product measures. Our bounds result in exponential savings in
sample complexity compared to bounds derived by bounding the VC dimension, and
are of independent interest. Comment: Appears in FOCS 201
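The sample-based setting can be illustrated on a far simpler toy problem than the multi-item auctions above: learning a posted price for a single item from samples of a single bidder's value. This is not the paper's algorithm, only a sketch of the empirical-revenue-maximization idea; the helper name is illustrative.

```python
import random

def best_reserve_from_samples(samples):
    """Illustrative sketch (not the paper's algorithm): choose the posted
    price maximizing empirical revenue on value samples from one bidder.
    Posting price r earns r whenever the sampled value v >= r, so the
    empirical revenue is r * |{v >= r}| / n, and the maximizer is always
    one of the sampled values."""
    n = len(samples)
    best_r, best_rev = 0.0, 0.0
    for r in sorted(set(samples)):
        rev = r * sum(1 for v in samples if v >= r) / n
        if rev > best_rev:
            best_r, best_rev = r, rev
    return best_r, best_rev

random.seed(0)
values = [random.uniform(0, 1) for _ in range(2000)]  # values drawn from U[0, 1]
r, rev = best_reserve_from_samples(values)
# for U[0, 1] the optimal posted price is 0.5 with expected revenue 0.25;
# with 2000 samples the empirical optimum should land nearby
print(round(r, 3), round(rev, 3))
```

Polynomial sample complexity, as in the abstract, means guarantees of this flavor: with enough samples, the empirically best mechanism in a simple class is near-optimal on the true distribution.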
Nonparametric Independence Screening in Sparse Ultra-High Dimensional Additive Models
A variable screening procedure via correlation learning was proposed by Fan and
Lv (2008) to reduce dimensionality in sparse ultra-high dimensional models.
Even when the true model is linear, the marginal regression can be highly
nonlinear. To address this issue, we further extend the correlation learning to
marginal nonparametric learning. Our method, nonparametric independence
screening (NIS), is a specific member of the sure independence screening family. Several
closely related variable screening procedures are proposed. Under
nonparametric additive models and some mild technical conditions, the proposed
independence screening methods are shown to enjoy a sure screening
property. The extent to which the dimensionality can be reduced by independence
screening is also explicitly quantified. As a methodological extension, an
iterative nonparametric independence screening (INIS) is also proposed to
enhance the finite sample performance for fitting sparse additive models. The
simulation results and a real data analysis demonstrate that the proposed
procedure works well with moderate sample size and large dimension and performs
better than competing methods. Comment: 48 page
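The screening idea can be sketched in a few lines: rank each predictor by how well a marginal nonparametric regression of the response on that predictor alone fits, then keep the top-ranked predictors. The sketch below uses crude quantile-binned means in place of the spline fits used in NIS; the helper name is illustrative.

```python
import random
import statistics

def marginal_r2(x, y, bins=10):
    """R^2 of a crude marginal nonparametric fit: bin x into quantile bins
    and predict y by the bin mean (a stand-in for the spline fits of NIS)."""
    n = len(x)
    order = sorted(range(n), key=lambda i: x[i])
    fitted = [0.0] * n
    size = n // bins
    for b in range(bins):
        idx = order[b * size:(b + 1) * size if b < bins - 1 else n]
        m = statistics.fmean(y[i] for i in idx)
        for i in idx:
            fitted[i] = m
    ybar = statistics.fmean(y)
    ss_tot = sum((v - ybar) ** 2 for v in y)
    ss_res = sum((y[i] - fitted[i]) ** 2 for i in range(n))
    return 1 - ss_res / ss_tot

random.seed(1)
n, p = 400, 50
X = [[random.gauss(0, 1) for _ in range(p)] for _ in range(n)]
# sparse additive truth: only predictors 0 and 1 matter, one of them nonlinearly
y = [X[i][0] ** 2 + 2 * X[i][1] + random.gauss(0, 0.5) for i in range(n)]
scores = [(marginal_r2([X[i][j] for i in range(n)], y), j) for j in range(p)]
top = [j for _, j in sorted(scores, reverse=True)[:5]]
print(top)  # the true predictors 0 and 1 should rank among the top
```

Note that predictor 0 enters quadratically, so it is uncorrelated with the response and plain correlation screening would miss it, while the marginal nonparametric fit picks it up — the motivation the abstract gives for moving beyond correlation learning.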
Consistency of Causal Inference under the Additive Noise Model
We analyze a family of methods for statistical causal inference from samples
under the so-called Additive Noise Model. While most work on the subject has
concentrated on establishing the soundness of the Additive Noise Model, the
statistical consistency of the resulting inference methods has received little
attention. We derive general conditions under which the given family of
inference methods consistently infers the causal direction in a nonparametric
setting.
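The family of methods in question can be sketched as: fit a nonparametric regression in each direction and prefer the direction whose residuals look independent of the regressor. The sketch below uses binned-mean regression and a crude heteroscedasticity score in place of the kernel independence measures (e.g. HSIC) used in practice; it is an illustration, not the estimator analyzed in the paper.

```python
import random
import statistics

def bin_fit(x, y, bins=10):
    """Crude nonparametric regression: predict y by the mean of its
    quantile bin of x (a stand-in for kernel or spline regression)."""
    n = len(x)
    order = sorted(range(n), key=lambda i: x[i])
    fitted = [0.0] * n
    size = n // bins
    for b in range(bins):
        idx = order[b * size:(b + 1) * size if b < bins - 1 else n]
        m = statistics.fmean(y[i] for i in idx)
        for i in idx:
            fitted[i] = m
    return fitted

def dependence(x, r, bins=10):
    """Crude residual-dependence score: R^2 of predicting the squared
    residuals from x with binned means. Near zero when r is independent
    of x; real methods use kernel independence tests instead."""
    r2 = [v * v for v in r]
    fitted = bin_fit(x, r2, bins)
    m = statistics.fmean(r2)
    tot = sum((v - m) ** 2 for v in r2) or 1.0
    res = sum((r2[i] - fitted[i]) ** 2 for i in range(len(r2)))
    return 1 - res / tot

random.seed(2)
n = 2000
x = [random.uniform(-1, 1) for _ in range(n)]           # cause
y = [xi + random.uniform(-0.3, 0.3) for xi in x]        # additive (non-Gaussian) noise

res_fwd = [y[i] - f for i, f in enumerate(bin_fit(x, y))]  # regress y on x
res_bwd = [x[i] - f for i, f in enumerate(bin_fit(y, x))]  # regress x on y
score_fwd = dependence(x, res_fwd)
score_bwd = dependence(y, res_bwd)
print(round(score_fwd, 3), round(score_bwd, 3))  # the causal direction should score lower
```

Consistency, in the abstract's sense, asks when this comparison converges on the true direction as the sample size grows.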
Penalized Likelihood and Bayesian Function Selection in Regression Models
Challenging research in various fields has driven a wide range of
methodological advances in variable selection for regression models with
high-dimensional predictors. In comparison, selection of nonlinear functions in
models with additive predictors has been considered only more recently. Several
competing suggestions have been developed at about the same time and often do
not refer to each other. This article provides a state-of-the-art review on
function selection, focusing on penalized likelihood and Bayesian concepts,
relating various approaches to each other in a unified framework. In an
empirical comparison, also including boosting, we evaluate several methods
through applications to simulated and real data, thereby providing some
guidance on their performance in practice.
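A minimal version of the function selection problem — deciding per covariate whether it should be dropped, enter linearly, or enter as a smooth function — can be sketched with a BIC criterion standing in for the penalized likelihood and Bayesian machinery the review covers; the binned "smooth" fit and the helper names are illustrative.

```python
import math
import random
import statistics

def bic(y, fitted, k):
    """BIC-style criterion: n*log(RSS/n) plus a complexity penalty,
    a crude stand-in for a penalized likelihood."""
    n = len(y)
    rss = sum((yi - fi) ** 2 for yi, fi in zip(y, fitted))
    return n * math.log(rss / n) + k * math.log(n)

def select_function(x, y, bins=8):
    """Choose 'null', 'linear', or 'smooth' for one covariate by BIC."""
    n = len(y)
    ybar = statistics.fmean(y)
    null_fit = [ybar] * n
    # linear least squares
    xbar = statistics.fmean(x)
    sxx = sum((xi - xbar) ** 2 for xi in x)
    slope = sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y)) / sxx
    lin_fit = [ybar + slope * (xi - xbar) for xi in x]
    # "smooth": piecewise-constant fit on quantile bins of x
    order = sorted(range(n), key=lambda i: x[i])
    smooth_fit = [0.0] * n
    size = n // bins
    for b in range(bins):
        idx = order[b * size:(b + 1) * size if b < bins - 1 else n]
        m = statistics.fmean(y[i] for i in idx)
        for i in idx:
            smooth_fit[i] = m
    scores = {"null": bic(y, null_fit, 1),
              "linear": bic(y, lin_fit, 2),
              "smooth": bic(y, smooth_fit, bins)}
    return min(scores, key=scores.get)

random.seed(3)
n = 300
x0 = [random.gauss(0, 1) for _ in range(n)]
x1 = [random.gauss(0, 1) for _ in range(n)]
x2 = [random.gauss(0, 1) for _ in range(n)]
y = [x0[i] ** 2 + 2 * x1[i] + random.gauss(0, 0.5) for i in range(n)]
choices = [select_function(xj, y) for xj in (x0, x1, x2)]
print(choices)  # x0 enters nonlinearly, x1 linearly; x2 should usually be dropped
```

The methods the review compares differ mainly in how this trade-off is penalized or given a prior, not in the underlying question of null vs. linear vs. smooth.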