
    On the Complexity of the Inverse Semivalue Problem for Weighted Voting Games

    Weighted voting games are a family of cooperative games, typically used to model voting situations where a number of agents (players) vote for or against a proposal. In such games, a proposal is accepted if an appropriately weighted sum of the votes exceeds a prespecified threshold. As the influence of a player over the voting outcome is not in general proportional to her assigned weight, various power indices have been proposed to measure each player's influence. The inverse power index problem is the problem of designing a weighted voting game that achieves a set of target influences according to a predefined power index. In this work, we study the computational complexity of the inverse problem when the power index belongs to the class of semivalues. We prove that the inverse problem is computationally intractable for a broad family of semivalues, including all regular semivalues. As a special case of our general result, we establish computational hardness of the inverse problem for the Banzhaf indices and the Shapley values, arguably the most popular power indices.

    Comment: To appear in AAAI 201
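
    To make the setup concrete, the sketch below builds a small weighted voting game and computes normalized Banzhaf indices by brute force. The weights, quota, and player count are illustrative assumptions, not taken from the paper; the sketch only shows the effect the abstract mentions, namely that a player's influence need not be proportional to her weight.

        from itertools import combinations

        def banzhaf_indices(weights, quota):
            """Normalized Banzhaf indices of the weighted voting game [quota; weights],
            computed by brute force over all coalitions (exponential in the number of players)."""
            n = len(weights)
            swings = [0] * n
            for i in range(n):
                others = [j for j in range(n) if j != i]
                for r in range(len(others) + 1):
                    for coalition in combinations(others, r):
                        total = sum(weights[j] for j in coalition)
                        # Player i is pivotal: the coalition loses without her but wins with her.
                        if total < quota <= total + weights[i]:
                            swings[i] += 1
            total_swings = sum(swings)
            return [s / total_swings for s in swings]

        # Illustrative game (not from the paper): the players with weights 3 and 2
        # receive identical Banzhaf indices under quota 6, so influence is not
        # proportional to weight.
        print(banzhaf_indices([4, 3, 2, 1], quota=6))  # [5/12, 1/4, 1/4, 1/12]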

    Nearly Tight Bounds for Robust Proper Learning of Halfspaces with a Margin

    We study the problem of {\em properly} learning large margin halfspaces in the agnostic PAC model. In more detail, we study the complexity of properly learning $d$-dimensional halfspaces on the unit ball within misclassification error $\alpha \cdot \mathrm{OPT}_{\gamma} + \epsilon$, where $\mathrm{OPT}_{\gamma}$ is the optimal $\gamma$-margin error rate and $\alpha \geq 1$ is the approximation ratio. We give learning algorithms and computational hardness results for this problem, for all values of the approximation ratio $\alpha \geq 1$, that are nearly matching for a range of parameters. Specifically, for the natural setting that $\alpha$ is any constant bigger than one, we provide an essentially tight complexity characterization. On the positive side, we give an $\alpha = 1.01$-approximate proper learner that uses $O(1/(\epsilon^2\gamma^2))$ samples (which is optimal) and runs in time $\mathrm{poly}(d/\epsilon) \cdot 2^{\tilde{O}(1/\gamma^2)}$. On the negative side, we show that {\em any} constant factor approximate proper learner has runtime $\mathrm{poly}(d/\epsilon) \cdot 2^{(1/\gamma)^{2-o(1)}}$, assuming the Exponential Time Hypothesis.
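
    As a rough illustration of the quantity being approximated, the sketch below computes the empirical $\gamma$-margin error of a given unit-norm halfspace, using the common convention that an example counts as a margin error when its signed margin $y \langle w, x \rangle$ falls below $\gamma$; under that convention, $\mathrm{OPT}_{\gamma}$ is the minimum of this quantity over all unit vectors $w$. The data, noise rate, and function names are hypothetical and not taken from the paper.

        import numpy as np

        def gamma_margin_error(w, X, y, gamma):
            """Empirical gamma-margin error of the halfspace sign(<w, x>): the fraction
            of labeled examples whose signed margin y * <w, x> is below gamma.
            Assumes w and the rows of X are normalized to the unit ball."""
            margins = y * (X @ w)
            return float(np.mean(margins < gamma))

        # Illustrative data (hypothetical): unit-norm points in R^d labeled by a
        # target halfspace, with a small amount of agnostic label noise.
        rng = np.random.default_rng(0)
        d, n, gamma = 5, 1000, 0.1
        X = rng.normal(size=(n, d))
        X /= np.linalg.norm(X, axis=1, keepdims=True)   # project points onto the unit sphere
        w_true = np.zeros(d)
        w_true[0] = 1.0                                 # target unit-norm halfspace
        y = np.sign(X @ w_true)
        y[rng.random(n) < 0.05] *= -1                   # flip 5% of labels

        print("zero-one error:    ", gamma_margin_error(w_true, X, y, gamma=0.0))
        print("gamma-margin error:", gamma_margin_error(w_true, X, y, gamma=gamma))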