8 research outputs found

    Faster FPTASes for counting and random generation of Knapsack solutions

    In the #P-complete problem of counting 0/1 Knapsack solutions, the input consists of a sequence of n nonnegative integer weights w_1, ..., w_n and an integer C, and we have to find the number of subsequences (subsets of indices) with total weight at most C. We give faster and simpler fully polynomial-time approximation schemes (FPTASes) for this problem and for its random generation counterpart. Our method is based on dynamic programming and discretization of large numbers through floating-point arithmetic. We improve both deterministic counting FPTASes, from Gopalan et al. (2011) [9] and Štefankovič et al. (2012) [6], as well as the randomized counting and random generation algorithms in Dyer (2003) [5]. Our method is general, and it can be applied directly on top of combinatorial decompositions (such as dynamic programming solutions) of various problems. For example, we also improve the complexity of the problem of counting 0/1 Knapsack solutions in an arc-weighted DAG.
    Peer reviewed
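    For intuition, the counting problem itself admits a textbook pseudo-polynomial dynamic program. The sketch below (ours, for illustration; it is not the paper's FPTAS) computes the count exactly in O(nC) time, which is exponential in the bit length of C; the paper's scheme replaces these exact, potentially exponentially large counts with short floating-point approximations to get a running time polynomial in n and 1/epsilon.
```python
# Exact pseudo-polynomial baseline for counting 0/1 Knapsack solutions.
# NOT the paper's FPTAS: runtime O(nC) is exponential in the bit length of C.

def count_knapsack_exact(weights, C):
    """Return the number of subsets of `weights` with total weight <= C."""
    dp = [0] * (C + 1)   # dp[c] = number of subsets of weight exactly c
    dp[0] = 1            # the empty subset
    for w in weights:
        # iterate capacities downwards so each item is used at most once
        for c in range(C, w - 1, -1):
            dp[c] += dp[c - w]
    return sum(dp)       # all subsets of weight at most C

# Example: weights 1, 2, 3 with capacity 3 admit 5 solutions:
# {}, {1}, {2}, {3}, {1,2}
assert count_knapsack_exact([1, 2, 3], 3) == 5
```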

    A Deterministic Fully Polynomial Time Approximation Scheme For Counting Integer Knapsack Solutions Made Easy

    Given n elements with nonnegative integer weights w = (w_1, ..., w_n), an integer capacity C and positive integer ranges u = (u_1, ..., u_n), we consider the counting version of the classic integer knapsack problem: find the number of distinct multisets, containing each element i at most u_i times, whose total weight is at most C. We give a deterministic algorithm that estimates the number of solutions to within relative error epsilon in time polynomial in n, log U and 1/epsilon, where U = max_i u_i. More precisely, our algorithm runs in O((n^3 log^2 U / epsilon) log((n log U)/epsilon)) time. This is an improvement by factors of n^2 and 1/epsilon (up to log terms) over the best known deterministic algorithm by Gopalan et al. [FOCS, (2011), pp. 817-826]. Our algorithm is relatively simple, and its analysis is rather elementary. Our results are achieved by means of a careful formulation of the problem as a dynamic program, using the notion of binding constraints.
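    To make the setting concrete, here is a brute-force exact counter for this problem (a sketch of ours, for illustration only); its pseudo-polynomial O(n * C * U) running time is precisely what the paper's dynamic program over binding constraints avoids.
```python
# Exact pseudo-polynomial baseline for counting integer knapsack solutions.
# NOT the paper's algorithm: runtime O(n * C * U) depends on C and U, not on
# their bit lengths.

def count_integer_knapsack_exact(weights, bounds, C):
    """Count vectors (k_1, ..., k_n) with 0 <= k_i <= u_i and sum k_i*w_i <= C."""
    dp = [0] * (C + 1)       # dp[c] = number of multisets of weight exactly c
    dp[0] = 1
    for w, u in zip(weights, bounds):
        new = [0] * (C + 1)
        for c in range(C + 1):
            # choose the multiplicity k of the current element
            for k in range(u + 1):
                if k * w > c:
                    break
                new[c] += dp[c - k * w]
        dp = new
    return sum(dp)           # all multisets of weight at most C

# One element of weight 2, usable up to 3 times, capacity 5:
# multiplicities 0, 1, 2 fit; multiplicity 3 does not.
assert count_integer_knapsack_exact([2], [3], 5) == 3
```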

    On Computing Probabilistic Abductive Explanations

    The most widely studied explainable AI (XAI) approaches are unsound. This is the case with well-known model-agnostic explanation approaches, and it is also the case with approaches based on saliency maps. One solution is to consider intrinsic interpretability, which does not exhibit the drawback of unsoundness. Unfortunately, intrinsic interpretability can display unwieldy explanation redundancy. Formal explainability represents the alternative to these non-rigorous approaches, with one example being PI-explanations. Unfortunately, PI-explanations also exhibit important drawbacks, the most visible of which is arguably their size. Recently, it has been observed that the (absolute) rigor of PI-explanations can be traded off for a smaller explanation size by computing so-called relevant sets. Given some positive δ, a set S of features is δ-relevant if, when the features in S are fixed, the probability of getting the target class exceeds δ. However, even for very simple classifiers, the complexity of computing relevant sets of features is prohibitive, with the decision problem being NP^PP-complete for circuit-based classifiers. In contrast with earlier negative results, this paper investigates practical approaches for computing relevant sets for a number of widely used classifiers that include Decision Trees (DTs), Naive Bayes Classifiers (NBCs), and several families of classifiers obtained from propositional languages. Moreover, the paper shows that, in practice, and for these families of classifiers, relevant sets are easy to compute. Furthermore, the experiments confirm that succinct sets of relevant features can be obtained for the families of classifiers considered.
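    The definition of a δ-relevant set can be made concrete with a brute-force check. The sketch below is ours, not the paper's method, and it assumes boolean features and a uniform distribution over the features outside S; its exponential cost in the number of free features is exactly why the general decision problem is hard, while the paper shows the computation is practical for DTs, NBCs and related families.
```python
from itertools import product

def is_delta_relevant(predict, x, S, delta, n):
    """Brute-force test: is S delta-relevant for instance x?

    predict: maps a feature tuple to a class label
    x:       the instance being explained (tuple of 0/1 values)
    S:       set of feature indices kept fixed to their values in x
    delta:   probability threshold in (0, 1]
    n:       total number of features
    Assumes boolean features, uniform over the free features (our assumption).
    """
    target = predict(x)
    free = [i for i in range(n) if i not in S]
    hits = 0
    for bits in product([0, 1], repeat=len(free)):
        y = list(x)
        for i, b in zip(free, bits):
            y[i] = b
        if predict(tuple(y)) == target:
            hits += 1
    return hits / (2 ** len(free)) > delta

# Example: 3-feature majority vote; fixing two agreeing features already
# forces the output, so {0, 1} is delta-relevant for any delta < 1.
majority = lambda v: int(sum(v) >= 2)
assert is_delta_relevant(majority, (1, 1, 0), {0, 1}, 0.9, 3)
```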

    Model Interpretability through the Lens of Computational Complexity

    In spite of several claims stating that some models are more interpretable than others -- e.g., "linear models are more interpretable than deep neural networks" -- we still lack a principled notion of interpretability with which to formally compare different classes of models. We take a step towards such a notion by studying whether folklore interpretability claims have a correlate in computational complexity theory. We focus on local post-hoc explainability queries that, intuitively, attempt to answer why individual inputs are classified in a certain way by a given model. In a nutshell, we say that a class C_1 of models is more interpretable than another class C_2 if the computational complexity of answering post-hoc queries for models in C_2 is higher than for those in C_1. We prove that this notion provides a good theoretical counterpart to current beliefs on the interpretability of models; in particular, we show that under our definition, and assuming standard complexity-theoretic assumptions (such as P ≠ NP), both linear and tree-based models are strictly more interpretable than neural networks. Our complexity analysis, however, does not provide a clear-cut difference between linear and tree-based models, as we obtain different results depending on the particular post-hoc explanations considered. Finally, by applying a finer analysis based on parameterized complexity, we are able to prove a theoretical result suggesting that shallow neural networks are more interpretable than deeper ones.
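    As a minimal sketch of ours (not taken from the paper) of one such local post-hoc query, consider checking whether a set S of features suffices to fix the positive classification of an input under a linear model over boolean features. The adversarial worst case over the free features collapses to a sum of min(0, w_i) terms, so the check runs in linear time; no comparable shortcut is known for neural networks, which is the flavor of separation the paper formalizes.
```python
def is_sufficient_linear(w, b, x, S):
    """Assuming the model classifies x positively (w.x + b > 0), does fixing
    the features in S to their values in x force a positive prediction for
    every 0/1 assignment of the remaining features?"""
    # adversary sets each free feature to minimize the score, contributing
    # min(0, w[i]) per free feature i
    worst = b + sum(w[i] * x[i] for i in S) \
              + sum(min(0, w[i]) for i in range(len(w)) if i not in S)
    return worst > 0

# Weights (3, -1, 1), bias 0, input (1, 0, 0): fixing feature 0 alone
# guarantees a positive score, since 3 - 1 + 0 > 0 in the worst case.
assert is_sufficient_linear((3, -1, 1), 0, (1, 0, 0), {0})
```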

    Faster FPTASes for Counting and Random Generation of Knapsack Solutions

    We give faster and simpler fully polynomial-time approximation schemes (FPTASes) for the #P-complete problem of counting 0/1 Knapsack solutions, and for its random generation counterpart. Our method is based on dynamic programming and discretization of large numbers through floating-point arithmetic. We improve both deterministic counting FPTASes in (Gopalan et al., FOCS 2011) and (Štefankovič et al., SIAM J. Comput. 2012), as well as the randomized counting and random generation algorithms in (Dyer, STOC 2003). We also improve the complexity of the problem of counting 0/1 Knapsack solutions in an arc-weighted DAG.
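    The discretization idea can be illustrated by a single rounding step: the huge solution counts arising in the dynamic program are stored as low-precision floating-point numbers, so each value fits in a short word. The sketch below is ours and the parameters are illustrative (the paper's exact rounding scheme may differ); it rounds an integer down to b significant bits, with relative error below 2^(1-b) per rounding, which is what lets a chain of DP additions keep an overall (1 ± epsilon) guarantee.
```python
def round_to_bits(x, b):
    """Round a nonnegative integer down to b significant bits."""
    if x < (1 << b):
        return x                      # already representable exactly
    shift = x.bit_length() - b
    return (x >> shift) << shift      # drop the low-order bits

# 1234567 rounded to 8 significant bits:
print(round_to_bits(1234567, 8))      # 1228800, relative error ~0.5%
```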

    Seventh Biennial Report: June 2003 - March 2005
