Optimal Bounds on Approximation of Submodular and XOS Functions by Juntas
We investigate the approximability of several classes of real-valued
functions by functions of a small number of variables ({\em juntas}). Our main
results are tight bounds on the number of variables required to approximate a
function $f:\{0,1\}^n \rightarrow [0,1]$ within $\ell_2$-error $\epsilon$ over
the uniform distribution: 1. If $f$ is submodular, then it is $\epsilon$-close
to a function of $O(\frac{1}{\epsilon^2} \log \frac{1}{\epsilon})$ variables.
This is an exponential improvement over previously known results. We note that
$\Omega(\frac{1}{\epsilon^2})$ variables are necessary even for linear
functions. 2. If $f$ is fractionally subadditive (XOS) it is $\epsilon$-close
to a function of $2^{O(1/\epsilon^2)}$ variables. This result holds for all
functions with low total $\ell_1$-influence and is a real-valued analogue of
Friedgut's theorem for boolean functions. We show that $2^{\Omega(1/\epsilon)}$
variables are necessary even for XOS functions.
As applications of these results, we provide learning algorithms over the
uniform distribution. For XOS functions, we give a PAC learning algorithm that
runs in time $2^{poly(1/\epsilon)} \cdot poly(n)$. For submodular functions we
give an algorithm in the more demanding PMAC learning model (Balcan and Harvey,
2011), which requires a multiplicative $(1+\gamma)$-factor approximation with
probability at least $1-\epsilon$ over the target distribution. Our uniform
distribution algorithm runs in time $2^{poly(1/(\gamma\epsilon))} \cdot poly(n)$.
This is the first algorithm in the PMAC model that, over the uniform
distribution, can achieve a constant approximation factor arbitrarily close to 1
for all submodular functions. As follows from the lower bounds in (Feldman et
al., 2013), both of these algorithms are close to optimal. We also give
applications to proper learning, testing, and agnostic learning with value
queries of these classes.
Comment: Extended abstract appears in the proceedings of FOCS 2013.
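A minimal illustration of the junta idea above, in Python: estimate each coordinate's influence by sampling, keep the k most influential variables, and average over the rest. This is only a Monte Carlo sketch under an interface of our own choosing (the names f, n, k and the sample counts are illustrative), not the paper's learning algorithm.

import itertools
import random

def estimate_influences(f, n, samples=2000, seed=0):
    # Monte Carlo estimate of each coordinate's ell_2-type influence of
    # f: {0,1}^n -> R under the uniform distribution: flip one bit and
    # square the change. Illustrative only, not the paper's procedure.
    rng = random.Random(seed)
    inf = [0.0] * n
    for _ in range(samples):
        x = [rng.randint(0, 1) for _ in range(n)]
        fx = f(x)
        for i in range(n):
            x[i] ^= 1
            d = f(x) - fx
            x[i] ^= 1
            inf[i] += d * d / samples
    return inf

def junta_approximation(f, n, k, samples=2000, seed=1):
    # Approximate f by a function of its k empirically most influential
    # variables: for each setting of those variables, average f over
    # random settings of the remaining coordinates.
    rng = random.Random(seed)
    influences = estimate_influences(f, n)
    top = sorted(range(n), key=lambda i: -influences[i])[:k]
    table = {}
    for bits in itertools.product([0, 1], repeat=k):
        acc = 0.0
        for _ in range(samples):
            x = [rng.randint(0, 1) for _ in range(n)]
            for j, i in enumerate(top):
                x[i] = bits[j]
            acc += f(x)
        table[bits] = acc / samples
    return lambda x: table[tuple(x[i] for i in top)]

# Toy budget-additive (hence submodular) function on 8 variables.
w = [0.5, 0.4, 0.05, 0.04, 0.03, 0.02, 0.01, 0.01]
f = lambda x: min(1.0, sum(wi * xi for wi, xi in zip(w, x)))
g = junta_approximation(f, 8, 3)
print(g([1, 1, 1, 0, 0, 0, 0, 0]))  # approximates f using only 3 coordinates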
Assortment optimisation under a general discrete choice model: A tight analysis of revenue-ordered assortments
The assortment problem in revenue management is the problem of deciding which
subset of products to offer to consumers in order to maximise revenue. A simple
and natural strategy is to select the best assortment out of all those that are
constructed by fixing a revenue threshold and then choosing all products whose
revenue is at least that threshold. This is known as the revenue-ordered assortments
strategy. In this paper we study the approximation guarantees provided by
revenue-ordered assortments when customers are rational in the following sense:
the probability of selecting a specific product from the set being offered
cannot increase if the set is enlarged. This rationality assumption, known as
regularity, is satisfied by almost all discrete choice models considered in the
revenue management and choice theory literature, and in particular by random
utility models. The bounds we obtain are tight and improve on recent results in
that direction, such as those obtained for the Mixed Multinomial Logit model by
Rusmevichientong et al. (2014). An appealing feature of our analysis is its
simplicity, as it relies only on the regularity condition.
We also draw a connection between assortment optimisation and two pricing
problems called unit demand envy-free pricing and Stackelberg minimum spanning
tree: These problems can be restated as assortment problems under discrete
choice models satisfying the regularity condition, and moreover revenue-ordered
assortments then correspond to the well-studied uniform pricing heuristic. When
specialised to that setting, the general bounds we establish for
revenue-ordered assortments match and unify the best known results on uniform
pricing.
Comment: Minor changes following referees' comments.
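The revenue-ordered strategy above is simple enough to state in a few lines of Python. The sketch below assumes a caller-supplied choice model given as a function choice_prob(product, assortment); the multinomial logit example is one standard regular (random utility) model, and all names and data here are illustrative rather than from the paper.

import math

def revenue_ordered_assortment(revenues, choice_prob):
    # Try every threshold assortment {p : revenue(p) >= t} and return
    # the one with the highest expected revenue under the choice model.
    best_set, best_rev = frozenset(), 0.0
    for t in sorted(set(revenues.values()), reverse=True):
        S = frozenset(p for p, r in revenues.items() if r >= t)
        expected = sum(revenues[p] * choice_prob(p, S) for p in S)
        if expected > best_rev:
            best_set, best_rev = S, expected
    return best_set, best_rev

def mnl_choice_prob(utilities, u_outside=0.0):
    # Multinomial logit: a random utility model, hence regular.
    def prob(p, S):
        denom = math.exp(u_outside) + sum(math.exp(utilities[q]) for q in S)
        return math.exp(utilities[p]) / denom
    return prob

revenues = {"a": 10.0, "b": 6.0, "c": 3.0}
utilities = {"a": 0.2, "b": 1.0, "c": 1.5}
print(revenue_ordered_assortment(revenues, mnl_choice_prob(utilities)))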
List Decoding Tensor Products and Interleaved Codes
We design the first efficient algorithms and prove new combinatorial bounds
for list decoding tensor products of codes and interleaved codes. We show that
for {\em every} code, the ratio of its list decoding radius to its minimum
distance stays unchanged under the tensor product operation (rather than
squaring, as one might expect). This gives the first efficient list decoders
and new combinatorial bounds for some natural codes including multivariate
polynomials where the degree in each variable is bounded. We show that for {\em
every} code, its list decoding radius remains unchanged under $m$-wise
interleaving for any integer $m$. This generalizes a recent result of Dinur et
al. \cite{DGKS}, who proved such a result for interleaved Hadamard codes
(equivalently, linear transformations). Using the notion of generalized Hamming
weights, we give better list size bounds for {\em both} tensoring and
interleaving of binary linear codes. By analyzing the weight distribution of
these codes, we reduce the task of bounding the list size to bounding the
number of close-by low-rank codewords. For decoding linear transformations,
using rank-reduction together with other ideas, we obtain list size bounds that
are tight over small fields.
Comment: 32 pages.
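For concreteness, the tensor product operation referenced above takes codes C1, C2 to the code of all n1 x n2 matrices whose columns are codewords of C1 and whose rows are codewords of C2; the minimum distance multiplies (d1 * d2), which is why one might expect the list decoding radius to square as well. A brute-force Python sketch for tiny binary linear codes (the generator matrix and names are our own toy example, not from the paper):

import itertools

def tensor_codewords(G1, G2):
    # Codewords of C1 (x) C2 over GF(2): each k1 x k2 message matrix M
    # maps to the n1 x n2 codeword G1^T * M * G2, so every column lies
    # in C1 and every row lies in C2. Brute force; tiny codes only.
    k1, n1 = len(G1), len(G1[0])
    k2, n2 = len(G2), len(G2[0])
    words = []
    for bits in itertools.product([0, 1], repeat=k1 * k2):
        M = [bits[i * k2:(i + 1) * k2] for i in range(k1)]
        c = [[sum(G1[i][a] * M[i][j] * G2[j][b]
                  for i in range(k1) for j in range(k2)) % 2
              for b in range(n2)] for a in range(n1)]
        words.append(c)
    return words

# [3,2] single-parity-check code (distance 2) tensored with itself.
G = [[1, 0, 1],
     [0, 1, 1]]
words = tensor_codewords(G, G)
weights = sorted(sum(sum(row) for row in c) for c in words)
print(len(words), min(w for w in weights if w > 0))  # 16 codewords, distance 4 = 2*2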