    Tight Combinatorial Generalization Bounds for Threshold Conjunction Rules

    Abstract. We propose a combinatorial technique for obtaining tight data-dependent generalization bounds based on a splitting and connectivity graph (SC-graph) of the set of classifiers. We apply this approach to a parametric set of conjunctive rules and propose an algorithm for effective SC-bound computation. Experiments on 6 data sets from the UCI ML Repository show that the SC-bound helps to learn more reliable rule-based classifiers as compositions of less overfitted rules.
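
    To make the rule class concrete, the following is a minimal sketch of a threshold conjunction rule, assuming the usual reading of the title: a classifier that fires only when every selected feature clears its threshold. The feature indices and thresholds below are illustrative, not taken from the paper.

        # A threshold conjunction rule: predict the positive class only
        # when every selected feature clears its threshold. The specific
        # features and thresholds here are hypothetical examples.

        def conjunction_rule(x, terms):
            """x: feature vector; terms: (feature_index, threshold) pairs."""
            return int(all(x[j] >= t for j, t in terms))

        rule = [(0, 3.5), (2, 0.7)]                      # x[0] >= 3.5 AND x[2] >= 0.7
        print(conjunction_rule([4.0, 9.9, 0.8], rule))   # -> 1
        print(conjunction_rule([2.0, 9.9, 0.8], rule))   # -> 0

    A rule-based classifier of the kind evaluated in the paper would combine several such rules, with the SC-bound used to prefer less overfitted ones.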

    The intersection of two halfspaces has high threshold degree

    The threshold degree of a Boolean function f:{0,1}^n->{-1,+1} is the least degree of a real polynomial p such that f(x) = sgn p(x). We construct two halfspaces on {0,1}^n whose intersection has threshold degree Theta(sqrt n), an exponential improvement on previous lower bounds. This solves an open problem due to Klivans (2002) and rules out the use of perceptron-based techniques for PAC learning the intersection of two halfspaces, a central unresolved challenge in computational learning. We also prove that the intersection of two majority functions has threshold degree Omega(log n), which is tight and settles a conjecture of O'Donnell and Servedio (2003). Our proof consists of two parts. First, we show that for any nonconstant Boolean functions f and g, the intersection f(x) ∧ g(y) has threshold degree O(d) if and only if ||f-F||_infty + ||g-G||_infty < 1 for some rational functions F, G of degree O(d). Second, we settle the least degree required for approximating a halfspace and a majority function to any given accuracy by rational functions. Our technique further allows us to make progress on Aaronson's challenge (2008) and contribute strong direct product theorems for polynomial representations of composed Boolean functions of the form F(f_1,...,f_n). In particular, we give an improved lower bound on the approximate degree of the AND-OR tree.
    Comment: Full version of the FOCS'09 paper.
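
    For reference, the definition in play is standard and worth restating (this restatement is mine, not quoted from the paper):

        \[
          \deg_{\pm}(f) \;=\; \min\bigl\{\, \deg p \;:\; f(x) = \operatorname{sgn}\, p(x)
          \ \text{for all } x \in \{0,1\}^n \,\bigr\}.
        \]

    For example, a single halfspace f(x) = sgn(a_1 x_1 + ... + a_n x_n - theta) has threshold degree 1, witnessed by the linear polynomial inside the sign; the paper's result is that the AND of two such functions can already force the degree up to Theta(sqrt n).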

    Understanding Space in Proof Complexity: Separations and Trade-offs via Substitutions

    For current state-of-the-art DPLL SAT-solvers the two main bottlenecks are the amounts of time and memory used. In proof complexity, these resources correspond to the length and space of resolution proofs. There has been a long line of research investigating these proof complexity measures, but while strong results have been established for length, our understanding of space and how it relates to length has remained quite poor. In particular, the question whether resolution proofs can be optimized for length and space simultaneously, or whether there are trade-offs between these two measures, has remained essentially open. In this paper, we remedy this situation by proving a host of length-space trade-off results for resolution. Our collection of trade-offs covers almost the whole range of values for the space complexity of formulas, and most of the trade-offs are superpolynomial or even exponential and essentially tight. Using similar techniques, we show that these trade-offs in fact extend to the exponentially stronger k-DNF resolution proof systems, which operate with formulas in disjunctive normal form with terms of bounded arity k. We also answer the open question whether the k-DNF resolution systems form a strict hierarchy with respect to space in the affirmative. Our key technical contribution is the following, somewhat surprising, theorem: Any CNF formula F can be transformed by simple variable substitution into a new formula F' such that if F has the right properties, F' can be proven in essentially the same length as F, whereas on the other hand the minimal number of lines one needs to keep in memory simultaneously in any proof of F' is lower-bounded by the minimal number of variables needed simultaneously in any proof of F. Applying this theorem to so-called pebbling formulas defined in terms of pebble games on directed acyclic graphs, we obtain our results.
    Comment: This paper is a merged and updated version of the two ECCC technical reports TR09-034 and TR09-047, and it hence subsumes these two reports.
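
    For background, resolution operates on CNF clauses with a single inference rule; proof length counts derived clauses and space counts how many clauses must be held in memory at once. A minimal sketch of the rule itself (standard, not specific to this paper), with literals encoded as (variable, polarity) pairs:

        def resolve(c1, c2, var):
            """From (A or var) and (B or not var), derive (A or B)."""
            assert (var, True) in c1 and (var, False) in c2
            return (c1 - {(var, True)}) | (c2 - {(var, False)})

        # Resolving (x or y) with (not x or z) on x yields (y or z).
        c1 = frozenset({("x", True), ("y", True)})
        c2 = frozenset({("x", False), ("z", True)})
        print(sorted(resolve(c1, c2, "x")))   # [('y', True), ('z', True)]

    A resolution refutation of an unsatisfiable formula ends by deriving the empty clause; the paper's trade-offs concern how short such refutations can stay when the number of clauses kept simultaneously is restricted.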

    Two Combinatorial Optimization Problems at the Interface of Computer Science and Operations Research

    Solving large combinatorial optimization problems is a ubiquitous task across multiple disciplines. Developing efficient procedures for solving these problems has been of great interest to both researchers and practitioners. Over the last half century, vast amounts of research have been devoted to studying various methods in tackling these problems. These methods can be divided into two categories, heuristic methods and exact algorithms. Heuristic methods can often lead to near-optimal solutions in a relatively time-efficient manner, but provide no guarantees on optimality. Exact algorithms guarantee optimality, but are often very time consuming. This dissertation focuses on designing efficient exact algorithms that can solve larger problem instances with faster computational time. A general framework for an exact algorithm, called the Branch, Bound, and Remember algorithm, is proposed in this dissertation. Three variations of single machine scheduling problems are presented and used to evaluate the efficiency of the Branch, Bound, and Remember algorithm. The computational results show that the Branch, Bound, and Remember algorithm outperforms the best known algorithms in the literature. While the Branch, Bound, and Remember algorithm can be used for solving combinatorial optimization problems, it does not address the subject of post-optimality selection after the combinatorial optimization problem is solved. Post-optimality selection is a common problem in multi-objective combinatorial optimization problems where there exists a set of optimal solutions called Pareto optimal (non-dominated) solutions. Post-optimality selection is the process of selecting the best solutions within the Pareto optimal solution set. In many real-world applications, a Pareto solution set (either optimal or near-optimal) can be extremely large, and can be very challenging for a decision maker to evaluate and select the best solution. To address the post-optimality selection problem, this dissertation also proposes a new discrete optimization problem to help the decision maker obtain an optimal preferred subset of Pareto optimal solutions. This discrete optimization problem is proven to be NP-hard. To solve this problem, exact algorithms and heuristic methods are presented. Different multi-objective problems with various numbers of objectives and constraints are used to compare the performances of the proposed algorithms and heuristics.
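
    As a rough illustration of the "remember" idea on a toy single-machine problem (minimizing total tardiness), the sketch below memoizes solved subproblems keyed by the set of jobs still to schedule; on one machine that set determines the start time, so each subproblem is solved once. The job data are hypothetical, and the bounding step of a real Branch, Bound, and Remember algorithm (pruning branches whose lower bound exceeds the incumbent) is elided for brevity.

        from functools import lru_cache

        jobs = [(4, 5), (3, 6), (7, 8)]   # (processing_time, due_date)
        total_time = sum(p for p, _ in jobs)

        @lru_cache(maxsize=None)
        def best_tardiness(remaining):
            """Minimum total tardiness for the frozenset `remaining`, which
            starts at time total_time minus its total processing time."""
            if not remaining:
                return 0
            t = total_time - sum(jobs[j][0] for j in remaining)
            best = float("inf")
            for j in remaining:               # branch: job j runs next
                p, d = jobs[j]
                tardiness = max(0, t + p - d)
                best = min(best, tardiness + best_tardiness(remaining - {j}))
            return best                       # remember: cached per subset

        print(best_tardiness(frozenset(range(len(jobs)))))   # -> 7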

    A Survey on Approximation in Parameterized Complexity: Hardness and Algorithms

    Parameterization and approximation are two popular ways of coping with NP-hard problems. More recently, the two have also been combined to derive many interesting results. We survey developments in the area from both the algorithmic and hardness perspectives, with emphasis on new techniques and potential future research directions.

    Fair Payments for Efficient Allocations in Public Sector Combinatorial Auctions

    Motivated by the increasing use of auctions by government agencies, we consider the problem of fairly pricing public goods in a combinatorial auction. A well-known problem with the incentive-compatible Vickrey-Clarke-Groves (VCG) auction mechanism is that the resulting prices may not be in the core. Loosely speaking, this means the payments of the winners could be so low that there are losing bidders who would have been willing to pay more than the payments of the winning bidders. Clearly, this "unfair" outcome is unacceptable for a public-sector auction. Proxy-based combinatorial auctions, in which each bidder submits several package bids to a proxy, result in efficient outcomes and bidder-Pareto-optimal core payments by winners, thus offering a viable practical alternative to address this problem. This paper confronts two critical issues facing the proxy auction. First, motivated by the goal of minimizing a bidder's ability to benefit through strategic manipulation (whether by collusive agreement or unilateral action), we demonstrate the strength of a mechanism that minimizes total payments among all possible proxy-auction outcomes, narrowing the previously broad solution concept. Second, we address the computational difficulty of achieving these outcomes with a constraint-generation approach, promising to broaden the range of applications for which the proxy auction achieves a comfortably rapid solution.
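
    In the notation commonly used for this setting (mine, not quoted from the paper), the payment-minimizing core outcome can be written as a linear program over the set of winners W, where v_j is bidder j's bid for its winning bundle, N is the set of all bidders, and w(S) is the value of the best allocation using only the bidders in S:

        \[
          \min_{p}\ \sum_{j \in W} p_j
          \quad \text{s.t.} \quad
          \sum_{j \in W \setminus S} p_j \;\ge\; w(S) - \sum_{j \in W \cap S} v_j
          \quad \text{for all } S \subseteq N,
          \qquad 0 \le p_j \le v_j .
        \]

    Since there are exponentially many coalitions S, a constraint-generation scheme solves the program over a small set of coalitions, searches for a most violated constraint (a blocking coalition, itself a winner-determination problem), adds it, and repeats until no coalition blocks.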