
    Black Box White Arrow

    The present paper proposes a new and systematic approach to the so-called black box group methods in computational group theory. Instead of a single black box, we consider categories of black boxes and their morphisms. This makes new classes of black box problems accessible. For example, we can enrich black box groups by actions of outer automorphisms. As an example of an application of this technique, we construct Frobenius maps on black box groups of untwisted Lie type in odd characteristic (Section 6) and inverse-transpose automorphisms on black box groups encrypting ${\rm (P)SL}_n(\mathbb{F}_q)$. One of the advantages of our approach is that it allows us to work in black box groups over finite fields of big characteristic. Another advantage is the explanatory power of our methods; as an example, we explain Kantor's and Kassabov's construction of an involution in black box groups encrypting ${\rm SL}_2(2^n)$. Due to the nature of our work we also have to discuss a few methodological issues of black box group theory. The paper is a further development of our text "Fifty shades of black" [arXiv:1308.2487], and repeats parts of it, but under weaker axioms for black box groups.
    Comment: arXiv admin note: substantial text overlap with arXiv:1308.2487
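
    For orientation, the two maps named above have standard matrix-group descriptions; the LaTeX sketch below records only these textbook definitions and is not the paper's black box construction, which works without matrix representations.

    % Frobenius endomorphism: raise every matrix entry to the q-th power;
    % its fixed points recover the finite group of Lie type.
    \[
      \phi_q\colon {\rm SL}_n(\overline{\mathbb{F}}_q) \to {\rm SL}_n(\overline{\mathbb{F}}_q),
      \qquad \phi_q\bigl((a_{ij})\bigr) = (a_{ij}^{\,q}),
      \qquad {\rm SL}_n(\mathbb{F}_q) = \{\, x : \phi_q(x) = x \,\}.
    \]
    % Inverse-transpose map: a homomorphism, and an outer automorphism of (P)SL_n for n >= 3.
    \[
      \tau\colon x \mapsto (x^{\mathsf T})^{-1},
      \qquad \tau(xy) = \bigl((xy)^{\mathsf T}\bigr)^{-1} = (x^{\mathsf T})^{-1}(y^{\mathsf T})^{-1} = \tau(x)\,\tau(y).
    \]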

    Black-Box Complexity: Breaking the $O(n \log n)$ Barrier of LeadingOnes

    We show that the unrestricted black-box complexity of the $n$-dimensional XOR- and permutation-invariant LeadingOnes function class is $O(n \log(n) / \log\log n)$. This shows that the recent natural-looking $O(n \log n)$ bound is not tight. The black-box optimization algorithm leading to this bound can be implemented in a way that uses only 3-ary unbiased variation operators. Hence our bound is also valid for the unbiased black-box complexity recently introduced by Lehre and Witt (GECCO 2010). The bound also remains valid if we impose the additional restriction that the black-box algorithm does not have access to the objective values but only to their relative order (ranking-based black-box complexity).
    Comment: 12 pages, to appear in the Proc. of Artificial Evolution 2011, LNCS 7401, Springer, 2012. For the unrestricted black-box complexity of LeadingOnes there is now a tight $\Theta(n \log\log n)$ bound, cf. http://eccc.hpi-web.de/report/2012/087
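
    For a concrete reference point, the function class and query model discussed above can be written down in a few lines. This is a minimal Python sketch under my own naming (leading_ones and BlackBox are illustrative, not from the paper): the generalized LeadingOnes value counts agreements with a hidden target string z along a hidden bit order sigma, and the optimizer only sees objective values through queries.

    import random

    def leading_ones(x, z, sigma):
        # Length of the longest prefix, in the order given by the permutation
        # sigma, on which the candidate bit string x agrees with the target z.
        count = 0
        for i in sigma:
            if x[i] == z[i]:
                count += 1
            else:
                break
        return count

    class BlackBox:
        # Black-box access: the optimizer may only submit bit strings and
        # observe objective values; z and sigma stay hidden.
        def __init__(self, n):
            self.z = [random.randint(0, 1) for _ in range(n)]
            self.sigma = list(range(n))
            random.shuffle(self.sigma)
            self.queries = 0

        def query(self, x):
            self.queries += 1
            return leading_ones(x, self.z, self.sigma)

    # Example: a random instance on 8 bits; the score is 8 only if the query
    # agrees with z on every position (in particular, for x = z itself).
    box = BlackBox(8)
    print(box.query([1] * 8), "objective value after", box.queries, "query")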

    Distill-and-Compare: Auditing Black-Box Models Using Transparent Model Distillation

    Black-box risk scoring models permeate our lives, yet are typically proprietary or opaque. We propose Distill-and-Compare, a model distillation and comparison approach to audit such models. To gain insight into black-box models, we treat them as teachers, training transparent student models to mimic the risk scores assigned by black-box models. We compare the student model trained with distillation to a second un-distilled transparent model trained on ground-truth outcomes, and use differences between the two models to gain insight into the black-box model. Our approach can be applied in a realistic setting, without probing the black-box model API. We demonstrate the approach on four public data sets: COMPAS, Stop-and-Frisk, Chicago Police, and Lending Club. We also propose a statistical test to determine if a data set is missing key features used to train the black-box model. Our test finds that the ProPublica data is likely missing key feature(s) used in COMPAS.
    Comment: Camera-ready version for AAAI/ACM AIES 2018. Data and pseudocode at https://github.com/shftan/auditblackbox. Previously titled "Detecting Bias in Black-Box Models Using Transparent Model Distillation". A short version was presented at the NIPS 2017 Symposium on Interpretable Machine Learning.
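
    A minimal Python sketch of the distill-and-compare idea, assuming scikit-learn and a synthetic data set (the scoring function, model class, and variable names are illustrative stand-ins, not the paper's exact setup): one transparent model is distilled from the black-box risk scores, a second is trained on ground-truth outcomes, and their differences are the auditing signal.

    import numpy as np
    from sklearn.tree import DecisionTreeRegressor

    rng = np.random.default_rng(0)
    X = rng.normal(size=(1000, 5))                          # audit data set
    y_true = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(float)    # ground-truth outcomes

    def black_box_score(X):
        # Stand-in for the proprietary risk scorer being audited.
        return 1.0 / (1.0 + np.exp(-(1.5 * X[:, 0] + X[:, 2])))

    # Student: transparent model distilled from the black-box risk scores.
    student = DecisionTreeRegressor(max_depth=3).fit(X, black_box_score(X))

    # Baseline: the same transparent model class trained on ground truth.
    baseline = DecisionTreeRegressor(max_depth=3).fit(X, y_true)

    # Where the two transparent models disagree, the black box weights the
    # features differently than the observed outcomes would suggest.
    disagreement = np.abs(student.predict(X) - baseline.predict(X))
    print("mean disagreement:", disagreement.mean())
    print("distilled feature importances:", student.feature_importances_.round(2))
    print("outcome feature importances:  ", baseline.feature_importances_.round(2))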