
    Optimal Bounds on Approximation of Submodular and XOS Functions by Juntas

    We investigate the approximability of several classes of real-valued functions by functions of a small number of variables ({\em juntas}). Our main results are tight bounds on the number of variables required to approximate a function $f:\{0,1\}^n \rightarrow [0,1]$ within $\ell_2$-error $\epsilon$ over the uniform distribution: 1. If $f$ is submodular, then it is $\epsilon$-close to a function of $O(\frac{1}{\epsilon^2} \log \frac{1}{\epsilon})$ variables. This is an exponential improvement over previously known results. We note that $\Omega(\frac{1}{\epsilon^2})$ variables are necessary even for linear functions. 2. If $f$ is fractionally subadditive (XOS), it is $\epsilon$-close to a function of $2^{O(1/\epsilon^2)}$ variables. This result holds for all functions with low total $\ell_1$-influence and is a real-valued analogue of Friedgut's theorem for boolean functions. We show that $2^{\Omega(1/\epsilon)}$ variables are necessary even for XOS functions. As applications of these results, we provide learning algorithms over the uniform distribution. For XOS functions, we give a PAC learning algorithm that runs in time $2^{poly(1/\epsilon)} \cdot poly(n)$. For submodular functions, we give an algorithm in the more demanding PMAC learning model (Balcan and Harvey, 2011), which requires a multiplicative $1+\gamma$ factor approximation with probability at least $1-\epsilon$ over the target distribution. Our uniform-distribution algorithm runs in time $2^{poly(1/(\gamma\epsilon))} \cdot poly(n)$. This is the first algorithm in the PMAC model that, over the uniform distribution, can achieve a constant approximation factor arbitrarily close to 1 for all submodular functions. As follows from the lower bounds in (Feldman et al., 2013), both of these algorithms are close to optimal.
    We also give applications for proper learning, testing, and agnostic learning with value queries of these classes. Comment: Extended abstract appears in proceedings of FOCS 201
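
    To make the function class concrete, here is a minimal sketch (with hypothetical toy data, not taken from the paper) of a coverage function on $\{0,1\}^4$, one of the canonical submodular functions the abstract's bounds apply to, together with a brute-force check of the diminishing-returns property that defines submodularity:

    ```python
    from itertools import combinations

    # Hypothetical toy instance: each ground-set element covers some "items";
    # f(S) counts the items covered by S. Coverage functions are submodular.
    cover = {
        0: {"a", "b"},
        1: {"b", "c"},
        2: {"c", "d"},
        3: {"d", "e"},
    }

    def f(S):
        covered = set()
        for i in S:
            covered |= cover[i]
        return len(covered)

    def is_submodular(f, n):
        """Brute-force check of diminishing returns:
        f(S + x) - f(S) >= f(T + x) - f(T) for all S subset of T, x not in T."""
        ground = range(n)
        subsets = [frozenset(c) for r in range(n + 1)
                   for c in combinations(ground, r)]
        for S in subsets:
            for T in subsets:
                if S <= T:
                    for x in ground:
                        if x not in T:
                            if f(S | {x}) - f(S) < f(T | {x}) - f(T):
                                return False
        return True

    print(is_submodular(f, 4))  # → True
    ```

    The same checker returns False for functions with increasing returns, which is one quick way to see why the abstract's junta bounds are specific to classes like submodular and XOS functions.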

    Technological Dynamics and Social Capability: Comparing U.S. States and European Nations

    This paper analyzes factors that shape the technological capabilities of individual U.S. states and European countries, which are arguably comparable policy units. The analysis demonstrates convergence in technological capabilities from 2000 to 2007. The results indicate that social capabilities, such as a highly educated labor force, an egalitarian distribution of income, a participatory democracy and prevalence of public safety, condition the growth of technological capability. The analysis also considers other aspects of territorial dynamics, such as the possible effects of spatial agglomeration, urbanization economies, and differences in industrial specialization and knowledge spillovers from neighboring regions.
    Keywords: innovation; technological capabilities; European Union; United States

    The Limitations of Optimization from Samples

    In this paper we consider the following question: can we optimize objective functions from the training data we use to learn them? We formalize this question through a novel framework we call optimization from samples (OPS). In OPS, we are given sampled values of a function drawn from some distribution, and the objective is to optimize the function under some constraint. While there are interesting classes of functions that can be optimized from samples, our main result is an impossibility. We show that there are classes of functions which are statistically learnable and optimizable, but for which no reasonable approximation for optimization from samples is achievable. In particular, our main result shows that there is no constant-factor approximation for maximizing coverage functions under a cardinality constraint using polynomially many samples drawn from any distribution. We also show tight approximation guarantees for maximization under a cardinality constraint of several interesting classes of functions including unit-demand, additive, and general monotone submodular functions, as well as a constant-factor approximation for monotone submodular functions with bounded curvature.
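
    For contrast with the samples-only regime studied in the abstract: when full value-oracle access to $f$ is available, the classic greedy algorithm achieves a $(1 - 1/e)$-approximation for maximizing a monotone submodular function under a cardinality constraint (Nemhauser, Wolsey and Fisher, 1978). A minimal sketch on a hypothetical toy coverage instance (not from the paper):

    ```python
    def greedy_max(f, ground, k):
        """Greedy maximization of f under the cardinality constraint |S| <= k.
        For monotone submodular f with value-oracle access this achieves a
        (1 - 1/e)-approximation; the abstract shows no comparable constant
        factor is possible when f is seen only through polynomially many
        samples."""
        S = set()
        for _ in range(k):
            # Pick the element with the largest marginal gain f(S + {x}) - f(S).
            best = max((x for x in ground if x not in S),
                       key=lambda x: f(S | {x}) - f(S), default=None)
            if best is None:
                break
            S.add(best)
        return S

    # Hypothetical toy coverage instance: element i covers the items cover[i].
    cover = {0: {"a", "b"}, 1: {"b", "c"}, 2: {"d"}, 3: {"a"}}

    def f(S):
        covered = set()
        for i in S:
            covered |= cover[i]
        return len(covered)

    print(sorted(greedy_max(f, range(4), 2)))  # → [0, 1], covering {a, b, c}
    ```

    The gap the paper establishes is between this oracle setting, where each marginal gain can be queried exactly, and the OPS setting, where only passively sampled values of $f$ are available.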