
    Note on the integer geometry of bitwise XOR

    We consider the set N of non-negative integers together with a distance d defined as follows: given two integers x,y∈N, d(x,y) is the integer whose binary notation is the result of performing, digit by digit, the “XOR” operation on the binary notations of x and y. Dawson, in Combinatorial Mathematics VIII, Geelong, 1980, Lecture Notes in Mathematics, 884 (1981) 136, considers this geometry and suggests the following construction: given k distinct integers x1,…,xk∈N, let Vi be the set of integers closer to xi than to any xj with j≠i, for i,j=1,…,k. Let V=(V1,…,Vk) and X=(x1,…,xk). Then V is a partition of {0,1,…,2^n−1} which, in general, does not determine X.

    In this paper, we characterize the convex sets of this geometry: they are exactly the line segments. Given X and the partition V determined by X, we also characterize in simple terms the ordered tuples Y=(y1,…,yk) that determine the same partition V. This, in particular, extends one of the main results of Dawson's paper cited above.
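    The distance and the partition construction are easy to state computationally. The following Python sketch is a toy illustration only; the names xor_distance and xor_partition are invented here, not taken from the paper. It relies on the fact that d(p,x)=d(p,y) forces x=y, so a point is never equidistant from two distinct sites and the cells genuinely partition {0,1,…,2^n−1}.

        def xor_distance(x: int, y: int) -> int:
            """d(x, y): the bitwise XOR of x and y, read as an integer."""
            return x ^ y

        def xor_partition(sites, n_bits):
            """Split {0, ..., 2**n_bits - 1} into cells: each point goes to
            the site nearest in XOR distance (ties are impossible, since
            equal distances to two sites would force the sites to coincide)."""
            cells = {x: [] for x in sites}
            for p in range(2 ** n_bits):
                nearest = min(sites, key=lambda x: xor_distance(p, x))
                cells[nearest].append(p)
            return cells

        # Example: the partition of {0, ..., 15} induced by three sites.
        for site, cell in xor_partition([0, 5, 12], n_bits=4).items():
            print(site, cell)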

    Opening the AI black box: program synthesis via mechanistic interpretability

    We present MIPS, a novel method for program synthesis based on automated mechanistic interpretability of neural networks trained to perform the desired task, auto-distilling the learned algorithm into Python code. We test MIPS on a benchmark of 62 algorithmic tasks that can be learned by an RNN and find it highly complementary to GPT-4: MIPS solves 32 of them, including 13 that are not solved by GPT-4 (which solves 30). MIPS uses an integer autoencoder to convert the RNN into a finite state machine, then applies Boolean or integer symbolic regression to capture the learned algorithm. Unlike large language models, this program synthesis technique makes no use of (and is therefore not limited by) human training data such as algorithms and code from GitHub. We discuss opportunities and challenges for scaling up this approach to make machine-learned models more interpretable and trustworthy.
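    As a loose illustration of the finite-state-machine step (my sketch under stated assumptions, not the authors' code): once an integer autoencoder has mapped the RNN's hidden states to small integers, the learned algorithm can be read off as a transition table (state, input) -> (next_state, output). The helper names extract_fsm and run_fsm below are hypothetical.

        def extract_fsm(traces):
            """Build a transition table from (state, input, next_state, output)
            tuples, as produced by running the discretized RNN on many inputs."""
            table = {}
            for state, sym, nxt, out in traces:
                key = (state, sym)
                if key in table and table[key] != (nxt, out):
                    raise ValueError(f"non-deterministic transition at {key}")
                table[key] = (nxt, out)
            return table

        def run_fsm(table, start_state, inputs):
            """Replay the extracted machine on a fresh input sequence."""
            state, outputs = start_state, []
            for sym in inputs:
                state, out = table[(state, sym)]
                outputs.append(out)
            return outputs

        # Example: an RNN that learned running parity of a bit stream; its
        # discretized state is the parity so far, and the output echoes it.
        traces = [(s, b, s ^ b, s ^ b) for s in (0, 1) for b in (0, 1)]
        print(run_fsm(extract_fsm(traces), 0, [1, 0, 1, 1]))  # [1, 1, 0, 1]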