Optimal Bounds on Approximation of Submodular and XOS Functions by Juntas
We investigate the approximability of several classes of real-valued functions by functions of a small number of variables ({\em juntas}). Our main results are tight bounds on the number of variables required to approximate a function $f:\{0,1\}^n \rightarrow [0,1]$ within $\ell_2$-error $\epsilon$ over the uniform distribution: 1. If $f$ is submodular, then it is $\epsilon$-close to a function of $O(\frac{1}{\epsilon^2}\log\frac{1}{\epsilon})$ variables. This is an exponential improvement over previously known results. We note that $\Omega(\frac{1}{\epsilon^2})$ variables are necessary even for linear functions. 2. If $f$ is fractionally subadditive (XOS) it is $\epsilon$-close to a function of $2^{O(1/\epsilon^2)}$ variables. This result holds for all functions with low total $\ell_1$-influence and is a real-valued analogue of Friedgut's theorem for boolean functions. We show that $2^{\Omega(1/\epsilon)}$ variables are necessary even for XOS functions.
As applications of these results, we provide learning algorithms over the uniform distribution. For XOS functions, we give a PAC learning algorithm that runs in time $2^{poly(1/\epsilon)} \cdot poly(n)$. For submodular functions we give an algorithm in the more demanding PMAC learning model (Balcan and Harvey, 2011), which requires a multiplicative $(1+\gamma)$-factor approximation with probability at least $1-\epsilon$ over the target distribution. Our uniform distribution algorithm runs in time $2^{poly(1/(\gamma\epsilon))} \cdot poly(n)$. This is the first algorithm in the PMAC model that, over the uniform distribution, can achieve a constant approximation factor arbitrarily close to 1 for all submodular functions. As follows from the lower bounds in (Feldman et al., 2013), both of these algorithms are close to optimal. We also give applications for proper learning, testing and agnostic learning with value
queries of these classes. Comment: Extended abstract appears in proceedings of FOCS 2013.
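The notion of $\ell_2$-closeness to a junta can be illustrated concretely (a toy sketch, not the paper's algorithm; the coverage function and variable subsets below are illustrative assumptions): averaging $f$ over the variables outside a set $J$ yields the $\ell_2$-optimal function of the variables in $J$, and for small $n$ the resulting error can be computed exhaustively under the uniform distribution.

```python
import itertools
import math

n = 6
# Hypothetical instance: each variable i "buys" a set of ground elements.
universe = {0: {0, 1}, 1: {1, 2}, 2: {2, 3}, 3: {3, 4}, 4: {4, 5}, 5: {5, 0}}

def f(x):
    # Normalized coverage function: fraction of the 6-element ground set
    # covered by the sets selected by indicator vector x. Coverage
    # functions are monotone submodular, so the theorem applies.
    covered = set()
    for i, bit in enumerate(x):
        if bit:
            covered |= universe[i]
    return len(covered) / 6

def l2_error_of_junta(J):
    # g(x) = E[f | coordinates in J fixed to x_J] is the l2-optimal
    # function depending only on the variables in J; return ||f - g||_2
    # under the uniform distribution on {0,1}^n.
    err2 = 0.0
    for x in itertools.product([0, 1], repeat=n):
        vals = [f(y) for y in itertools.product([0, 1], repeat=n)
                if all(y[j] == x[j] for j in J)]
        g = sum(vals) / len(vals)
        err2 += (f(x) - g) ** 2
    return math.sqrt(err2 / 2 ** n)

# Enlarging the junta can only reduce the l2 error on this example.
print(l2_error_of_junta([0, 1]))        # junta on 2 variables
print(l2_error_of_junta([0, 1, 2, 3]))  # junta on 4 variables
```

Conditioning on a superset of variables is a finer projection, so the error is non-increasing in $J$; the theorems above quantify how small $J$ can be taken as a function of $\epsilon$ alone.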
Technological Dynamics and Social Capability: Comparing U.S. States and European Nations
This paper analyzes factors that shape the technological capabilities of individual U.S. states and European countries, which are arguably comparable policy units. The analysis demonstrates convergence in technological capabilities from 2000 to 2007. The results indicate that social capabilities, such as a highly educated labor force, an egalitarian distribution of income, a participatory democracy and prevalence of public safety, condition the growth of technological capability. The analysis also considers other aspects of territorial dynamics, such as the possible effects of spatial agglomeration, urbanization economies, and differences in industrial specialization and knowledge spillovers from neighboring regions.
Keywords: innovation; technological capabilities; European Union; United States
The Limitations of Optimization from Samples
In this paper we consider the following question: can we optimize objective
functions from the training data we use to learn them? We formalize this
question through a novel framework we call optimization from samples (OPS). In
OPS, we are given sampled values of a function drawn from some distribution and
the objective is to optimize the function under some constraint.
While there are interesting classes of functions that can be optimized from
samples, our main result is an impossibility. We show that there are classes of
functions which are statistically learnable and optimizable, but for which no
reasonable approximation for optimization from samples is achievable. In
particular, our main result shows that there is no constant factor
approximation for maximizing coverage functions under a cardinality constraint
using polynomially many samples drawn from any distribution.
We also show tight approximation guarantees for maximization under a cardinality constraint for several interesting classes of functions, including unit-demand, additive, and general monotone submodular functions, as well as a constant-factor approximation for monotone submodular functions with bounded curvature.
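By way of contrast with the impossibility result (a minimal sketch, not from the paper, with an illustrative instance): the OPS hardness is about the samples-only access model. Given full value-oracle access, the classic greedy algorithm of Nemhauser, Wolsey and Fisher (1978) achieves a $(1-1/e)$-approximation for maximizing a monotone submodular function, such as coverage, under a cardinality constraint.

```python
def greedy_max_coverage(sets, k):
    # Greedy maximization of a coverage function under the cardinality
    # constraint |S| <= k, using value-oracle access: in each round,
    # pick the set with the largest marginal coverage gain.
    chosen, covered = [], set()
    for _ in range(k):
        best, best_gain = None, -1
        for i, s in sets.items():
            if i in chosen:
                continue
            gain = len(s - covered)  # marginal gain of adding set i
            if gain > best_gain:
                best, best_gain = i, gain
        chosen.append(best)
        covered |= sets[best]
    return chosen, len(covered)

# Hypothetical instance: 4 candidate sets over the ground set {1,...,6}.
sets = {0: {1, 2, 3}, 1: {3, 4}, 2: {4, 5, 6}, 3: {1, 6}}
print(greedy_max_coverage(sets, 2))  # -> ([0, 2], 6)
```

The OPS result says precisely that this kind of adaptive querying is essential: polynomially many passively drawn samples of a coverage function cannot replace the oracle, no matter the algorithm.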