
    Algorithms for Extended Alpha-Equivalence and Complexity

    Equality of expressions in lambda-calculi, higher-order programming languages, higher-order programming calculi, and process calculi is defined as alpha-equivalence. Permutability of bindings in let-constructs and structural congruence axioms extend alpha-equivalence. We analyse these extended alpha-equivalences and show that there are calculi with polynomial-time algorithms, that a multiple-binding “let” may make alpha-equivalence as hard as the graph isomorphism problem, and that the replication operator in the pi-calculus may lead to an EXPSPACE-hard alpha-equivalence problem.
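
    For intuition, the tractable baseline here is plain alpha-equivalence, which can be decided in polynomial time by walking two terms in lockstep and pairing corresponding binders. A minimal Haskell sketch of that baseline — the `Term` type and names below are illustrative assumptions, not taken from the paper:

```haskell
-- Minimal sketch: alpha-equivalence of plain lambda-terms, decided by
-- walking both terms in lockstep; env pairs each binder of the left term
-- with its counterpart on the right, and prepending on entry to Lam makes
-- lookup respect shadowing.
data Term = Var String | Lam String Term | App Term Term

alphaEq :: Term -> Term -> Bool
alphaEq = go []
  where
    go env (Var x) (Var y) = case lookup x env of
      Just y' -> y == y'                            -- bound: partners must match
      Nothing -> x == y && notElem y (map snd env)  -- free: same name, not captured
    go env (Lam x s) (Lam y t)     = go ((x, y) : env) s t
    go env (App s1 s2) (App t1 t2) = go env s1 t1 && go env s2 t2
    go _ _ _ = False

main :: IO ()
main = print (alphaEq (Lam "x" (Var "x")) (Lam "y" (Var "y")))  -- True
```

    The extensions analysed in the paper complicate exactly this lockstep walk: once the bindings of a multiple-binding “let” may be permuted, the checker must also guess a correspondence between the two binding groups, which is where the connection to graph isomorphism enters.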

    Sample Complexity Bounds on Differentially Private Learning via Communication Complexity

    In this work we analyze the sample complexity of classification by differentially private algorithms. Differential privacy is a strong and well-studied notion of privacy, introduced by Dwork et al. (2006), that ensures that the output of an algorithm leaks little information about the data point provided by any participating individual. The sample complexity of private PAC and agnostic learning was studied in a number of prior works starting with Kasiviswanathan et al. (2008), but a number of basic questions remain open, most notably whether learning with privacy requires more samples than learning without privacy. We show that the sample complexity of learning with (pure) differential privacy can be arbitrarily higher than the sample complexity of learning without the privacy constraint, or of learning with approximate differential privacy. Our second contribution, and the main tool, is an equivalence between the sample complexity of (pure) differentially private learning of a concept class $C$, denoted $SCDP(C)$, and the randomized one-way communication complexity of the evaluation problem for concepts from $C$. Using this equivalence we prove the following bounds:
    1. $SCDP(C) = \Omega(LDim(C))$, where $LDim(C)$ is Littlestone's (1987) dimension, which characterizes the number of mistakes in the online mistake-bound learning model. Known bounds on $LDim(C)$ then imply that $SCDP(C)$ can be much higher than the VC dimension of $C$.
    2. For any $t$, there exists a class $C$ such that $LDim(C) = 2$ but $SCDP(C) \geq t$.
    3. For any $t$, there exists a class $C$ such that the sample complexity of (pure) $\alpha$-differentially private PAC learning is $\Omega(t/\alpha)$, while the sample complexity of the relaxed $(\alpha,\beta)$-differentially private PAC learning is $O(\log(1/\beta)/\alpha)$. This resolves an open problem of Beimel et al. (2013b).
    Comment: Extended abstract appears in Conference on Learning Theory (COLT) 201
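
    For a concrete handle on bound 1, the Littlestone dimension of a small finite class can be computed straight from its recursive definition: $LDim(C) = -1$ for the empty class, and otherwise the best value of $1 + \min(LDim(C_{x,0}), LDim(C_{x,1}))$ over instances $x$, where $C_{x,b}$ is the subclass labelling $x$ with $b$. A hypothetical Haskell sketch of that recursion (exponential time, for illustration only; not from the paper):

```haskell
-- Hypothetical sketch: Littlestone dimension of a finite class over a
-- finite domain, by the standard recursion. Dropping the queried point x
-- from the domain is safe: repeating x on a path cannot deepen a
-- shattered mistake tree.
ldim :: [Int] -> [Int -> Bool] -> Int
ldim domain cs
  | null cs   = -1
  | otherwise = maximum (0 : [ 1 + min (sub x False) (sub x True) | x <- domain ])
  where
    sub x b = ldim (filter (/= x) domain) [ c | c <- cs, c x == b ]

main :: IO ()
main =
  -- Thresholds c_t(x) = (x >= t) on {1..7}: the VC dimension is 1, yet
  -- this prints 3 (log2 of the class size), illustrating that LDim, and
  -- hence SCDP, can exceed the VC dimension (bound 1).
  print (ldim [1 .. 7] [ (>= t) | t <- [1 .. 8] ])
```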

    Nominal Unification of Higher Order Expressions with Recursive Let

    A sound and complete algorithm for nominal unification of higher-order expressions with a recursive let is described and shown to run in nondeterministic polynomial time. We also explore specializations, such as nominal letrec-matching for plain expressions and for DAGs, and determine the complexity of the corresponding unification problems.
    Comment: Pre-proceedings paper presented at the 26th International Symposium on Logic-Based Program Synthesis and Transformation (LOPSTR 2016), Edinburgh, Scotland, UK, 6-8 September 2016 (arXiv:1608.02534).
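
    To see where the nondeterminism enters, consider extended alpha-equivalence for a recursive let whose bindings may be reordered: a checker must guess which binding of one term corresponds to which binding of the other. A brute-force Haskell sketch over a deliberately tiny syntax (the `Term` type and names are assumptions, and this is not the paper's unification algorithm):

```haskell
import Data.List (permutations)

-- Tiny illustrative syntax: variables and a multi-binding recursive let.
data Term
  = Var String
  | Letrec [(String, Term)] Term   -- letrec x1 = t1; ...; xn = tn in body

-- Extended alpha-equivalence: letrec-bound variables may be renamed and
-- the bindings permuted; env pairs corresponding binders of the two terms.
alphaEq :: [(String, String)] -> Term -> Term -> Bool
alphaEq env (Var x) (Var y) = case lookup x env of
  Just y' -> y == y'
  Nothing -> x == y && notElem y (map snd env)
alphaEq env (Letrec bs body) (Letrec cs body') =
  length bs == length cs &&
  or [ let env' = zip (map fst bs) (map fst cs') ++ env
       in alphaEq env' body body' &&
          and [ alphaEq env' t u | ((_, t), (_, u)) <- zip bs cs' ]
     | cs' <- permutations cs ]   -- the nondeterministic guess, made exhaustive
alphaEq _ _ _ = False

main :: IO ()
main = print $ alphaEq []
  (Letrec [("x", Var "y"), ("y", Var "x")] (Var "x"))
  (Letrec [("b", Var "a"), ("a", Var "b")] (Var "a"))  -- True, via the swap
```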