
    From average case complexity to improper learning complexity

    The basic problem in the PAC model of computational learning theory is to determine which hypothesis classes are efficiently learnable. There is presently a dearth of results showing hardness of learning problems. Moreover, the existing lower bounds fall short of the best known algorithms. The biggest challenge in proving complexity results is to establish hardness of {\em improper learning} (a.k.a. representation independent learning). The difficulty in proving lower bounds for improper learning is that the standard reductions from $\mathbf{NP}$-hard problems do not seem to apply in this context. There is essentially only one known approach to proving lower bounds on improper learning. It was initiated in (Kearns and Valiant 89) and relies on cryptographic assumptions. We introduce a new technique for proving hardness of improper learning, based on reductions from problems that are hard on average. We put forward a (fairly strong) generalization of Feige's assumption (Feige 02) about the complexity of refuting random constraint satisfaction problems. Combining this assumption with our new technique yields far-reaching implications. In particular: 1. Learning $\mathrm{DNF}$'s is hard. 2. Agnostically learning halfspaces with a constant approximation ratio is hard. 3. Learning an intersection of $\omega(1)$ halfspaces is hard. Comment: 34 pages
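    For orientation, here is a rough paraphrase of the refutation task underlying this kind of assumption (an informal sketch, not the paper's exact hypothesis, which generalizes Feige's to broader classes of constraint satisfaction problems):
    \[
    \begin{aligned}
    &\textbf{Refuting random 3-SAT (informal).}\ \text{A refutation algorithm receives a 3-CNF formula with } n \text{ variables}\\
    &\text{and } \Delta n \text{ uniformly random clauses and outputs either ``unsatisfiable'' or ``don't know''. It must be sound}\\
    &\text{(it never says ``unsatisfiable'' on a satisfiable formula) and must say ``unsatisfiable'' on most random formulas.}\\
    &\text{Feige's assumption is, roughly, that no polynomial-time algorithm achieves this for any constant density } \Delta.
    \end{aligned}
    \]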

    Incentive Stackelberg Mean-payoff Games

    We introduce and study incentive equilibria for multi-player mean-payoff games. Incentive equilibria generalise well-studied solution concepts such as Nash equilibria and leader equilibria (also known as Stackelberg equilibria). Recall that a strategy profile is a Nash equilibrium if no player can improve his payoff by changing his strategy unilaterally. In the setting of incentive and leader equilibria, there is a distinguished player called the leader who can assign strategies to all other players, referred to as her followers. A strategy profile is a leader strategy profile if no player, except for the leader, can improve his payoff by changing his strategy unilaterally, and a leader equilibrium is a leader strategy profile with a maximal return for the leader. In the proposed case of incentive equilibria, the leader can additionally influence the behaviour of her followers by transferring parts of her payoff to her followers. The ability to incentivise her followers provides the leader with more freedom in selecting strategy profiles, and we show that this can indeed improve the payoff for the leader in such games. The key fundamental result of the paper is the existence of incentive equilibria in mean-payoff games. We further show that the decision problem related to constructing incentive equilibria is NP-complete. On a positive note, we show that, when the number of players is fixed, the complexity of the problem falls in the same class as two-player mean-payoff games. We also present an implementation of the proposed algorithms, and discuss experimental results that demonstrate the feasibility of the analysis of medium-sized games. Comment: 15 pages, references, appendix, 5 figures
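    As a toy illustration of the payoff notion involved (not code from the paper; the function name and example graph below are made up), the following sketch computes the mean payoff of an ultimately periodic play in an edge-weighted game graph:

    from fractions import Fraction

    def mean_payoff(weights, cycle):
        """Mean payoff of a play that eventually repeats `cycle` forever.
        `weights[(u, v)]` is the reward collected on edge (u, v); the
        limit-average value depends only on the repeated cycle, not on
        any finite prefix of the play."""
        edges = list(zip(cycle, cycle[1:] + cycle[:1]))
        return Fraction(sum(weights[e] for e in edges), len(edges))

    # Hypothetical example: a play that loops forever on b -> c -> b.
    weights = {("a", "b"): 0, ("b", "c"): 4, ("c", "b"): 2}
    print(mean_payoff(weights, cycle=["b", "c"]))  # 3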

    Counting hypergraph matchings up to uniqueness threshold

    We study the problem of approximately counting matchings in hypergraphs of bounded maximum degree and maximum size of hyperedges. With an activity parameter $\lambda$, each matching $M$ is assigned a weight $\lambda^{|M|}$. The counting problem is formulated as computing a partition function that gives the sum of the weights of all matchings in a hypergraph. This problem unifies two extensively studied statistical physics models in approximate counting: the hardcore model (graph independent sets) and the monomer-dimer model (graph matchings). For this model, the critical activity $\lambda_c = \frac{d^d}{k(d-1)^{d+1}}$ is the threshold for the uniqueness of Gibbs measures on the infinite $(d+1)$-uniform $(k+1)$-regular hypertree. Consider hypergraphs of maximum degree at most $k+1$ and maximum size of hyperedges at most $d+1$. We show that when $\lambda < \lambda_c$, there is an FPTAS for computing the partition function; and when $\lambda = \lambda_c$, there is a PTAS for computing the log-partition function. These algorithms are based on the decay of correlation (strong spatial mixing) property of Gibbs distributions. When $\lambda > 2\lambda_c$, there is no PRAS for the partition function or the log-partition function unless $\mathrm{NP}=\mathrm{RP}$. Towards obtaining a sharp transition of computational complexity of approximate counting, we study the local convergence from a sequence of finite hypergraphs to the infinite lattice with specified symmetry. We show a surprising connection between the local convergence and the reversibility of a natural random walk. This leads us to a barrier for the hardness result: the non-uniqueness of the infinite Gibbs measure is not realizable by any finite gadgets.
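    As a small, self-contained illustration of the quantity being approximated (not the paper's algorithm; the example hypergraph is made up), the brute-force sketch below sums $\lambda^{|M|}$ over all matchings of a tiny hypergraph:

    from itertools import combinations

    def matching_partition_function(hyperedges, lam):
        """Brute-force partition function: sum of lam**|M| over all sets M
        of pairwise-disjoint hyperedges (the empty matching contributes 1)."""
        total = 0.0
        for size in range(len(hyperedges) + 1):
            for subset in combinations(hyperedges, size):
                # A subset is a matching iff its hyperedges are pairwise disjoint.
                if all(e1.isdisjoint(e2) for e1, e2 in combinations(subset, 2)):
                    total += lam ** size
        return total

    # Hypothetical 3-uniform hypergraph on 6 vertices with three hyperedges.
    edges = [frozenset({1, 2, 3}), frozenset({3, 4, 5}), frozenset({4, 5, 6})]
    print(matching_partition_function(edges, lam=0.5))
    # matchings: {}, the three singletons, and {{1,2,3},{4,5,6}} -> 1 + 3*0.5 + 0.25 = 2.75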

    Weak Decay of Lambda Hypernuclei

    In this review we discuss the present status of strange nuclear physics, with special attention to the weak decay of Lambda hypernuclei. The models proposed for the evaluation of the Lambda decay widths are summarized and their results are compared with the data. Despite the recent intensive investigations, the main open problem remains a sound theoretical interpretation of the large experimental values of the ratio G_n/G_p. Although recent works offer a step forward in the solution of the puzzle, further efforts must be invested in order to understand the detailed dynamics of the non-mesonic decay. Even though, by means of single nucleon spectra measurements, the error bars on G_n/G_p have been considerably reduced very recently at KEK, a clean extraction of G_n/G_p is still needed. What is missing at present, but planned for the near future, are measurements of 1) nucleon energy spectra in double coincidence and 2) nucleon angular correlations: such observations make it possible to disentangle the nucleons produced in one- and two-body induced decays and lead to a direct determination of G_n/G_p. For the asymmetric non-mesonic decay of polarized hypernuclei the situation is even more puzzling. Indeed, strong inconsistencies appear already among the data. A recent experiment obtained a positive intrinsic Lambda asymmetry parameter, a_{Lambda}, for 5_{Lambda}He. This is in complete disagreement with a previous measurement, which obtained a large and negative a_{Lambda} for p-shell hypernuclei, and with theory, which predicts a negative value moderately dependent on nuclear structure effects. Also in this case, improved experiments establishing with certainty the sign and magnitude of a_{Lambda} for s- and p-shell hypernuclei will provide guidance for a deeper understanding of hypernuclear dynamics and decay mechanisms. Comment: 129 pages, 21 figures, Submitted to Phys. Rep.
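    For orientation, standard background (not specific to this review): the non-mesonic weak decay of a Lambda hypernucleus proceeds mainly through the one-nucleon-induced channels
    \[
    \Lambda n \to n n \quad (\text{width } \Gamma_n), \qquad \Lambda p \to n p \quad (\text{width } \Gamma_p),
    \]
    with an additional two-nucleon-induced channel $\Lambda N N \to N N N$; the ratio written G_n/G_p above is $\Gamma_n/\Gamma_p$, and the intrinsic asymmetry parameter a_{Lambda} characterizes the angular asymmetry of decay protons emitted by polarized hypernuclei.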

    Average-case Hardness of RIP Certification

    The restricted isometry property (RIP) for design matrices gives guarantees for optimal recovery in sparse linear models. It is of high interest in compressed sensing and statistical learning. This property is particularly important for computationally efficient recovery methods. As a consequence, even though it is in general NP-hard to check that RIP holds, there have been substantial efforts to find tractable proxies for it. These would allow the construction of RIP matrices and the polynomial-time verification of RIP given an arbitrary matrix. We consider the framework of average-case certifiers, which never wrongly declare that a matrix is RIP and are often correct for random instances. While there are such certifiers that are tractable in a suboptimal parameter regime, we show that certification is a computationally hard task in any better regime. Our results are based on a new, weaker assumption on the problem of detecting dense subgraphs.
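    For reference, the standard form of the property in question (common conventions; not necessarily the paper's exact normalization): a matrix $A \in \mathbb{R}^{n \times p}$ satisfies the RIP of order $k$ with constant $\delta$ if
    \[
    (1-\delta)\,\|x\|_2^2 \;\le\; \|Ax\|_2^2 \;\le\; (1+\delta)\,\|x\|_2^2 \qquad \text{for every } k\text{-sparse } x \in \mathbb{R}^p,
    \]
    i.e. $A$ acts as a near-isometry on all vectors with at most $k$ nonzero entries; verifying this directly would require examining all $\binom{p}{k}$ supports, which is why tractable certifiers are sought.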

    Spectral aspects of the Berezin transform

    We discuss the Berezin transform, a Markov operator associated to positive operator valued measures (POVMs), in a number of contexts including the Berezin-Toeplitz quantization, Donaldson's dynamical system on the space of Hermitian products on a complex vector space, representations of finite groups, and quantum noise. In particular, we calculate the spectral gap for quantization in terms of the fundamental tone of the phase space. Our results confirm a prediction of Donaldson for the spectrum of the Q-operator on Kähler manifolds with constant scalar curvature. Furthermore, viewing POVMs as data clouds, we study their spectral features via the geometry of metric measure spaces and the diffusion distance. Comment: Final version, 47 pages. Section on Donaldson's iterations revised.
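    For readers unfamiliar with the acronym (standard definition, not specific to this paper's conventions): a POVM on a measurable space $(\Omega, \mathcal{A})$ with values in operators on a Hilbert space $H$ is a map $F \colon \mathcal{A} \to \mathcal{L}(H)$ such that each $F(A)$ is positive semi-definite, $F(\Omega) = \mathbf{1}_H$, and $F$ is countably additive on disjoint sets; the Berezin transform studied here is a Markov operator built from such data.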