
    Certified lattice reduction

    Quadratic form reduction and lattice reduction are fundamental tools in computational number theory and in computer science, especially in cryptography. The celebrated Lenstra-Lenstra-Lovász reduction algorithm (so-called LLL) has been improved in many ways through the past decades and remains one of the central methods used for reducing integral lattice bases. In particular, its floating-point variants, where the rational arithmetic required by Gram-Schmidt orthogonalization is replaced by floating-point arithmetic, are now the fastest known. However, the systematic study of the reduction theory of real quadratic forms or, more generally, of real lattices is not widely represented in the literature. When the problem arises, the lattice is usually replaced by an integral approximation of (a multiple of) the original lattice, which is then reduced. While practically useful and proven correct in some special cases, this method offers no guarantee of success in general. In this work, we present an adaptive-precision version of a generalized LLL algorithm that covers this case in all generality. In particular, we replace floating-point arithmetic by interval arithmetic to certify the behavior of the algorithm. We conclude by giving a typical application of the result in algebraic number theory: the reduction of ideal lattices in number fields.
    Comment: 23 pages
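    The certification idea here is that interval arithmetic turns each numerical test inside LLL into a three-way decision: certainly true, certainly false, or undecided (in which case an adaptive-precision algorithm raises the precision and retries). Below is a minimal sketch of one such certified test, the Lovász condition on Gram-Schmidt data, using mpmath's interval context; the function name and the example inputs are assumptions of this sketch, not the paper's algorithm.

        from mpmath import iv

        iv.prec = 53  # starting precision; an adaptive version raises this on failure

        def certified_lovasz_check(norm_sq_prev, norm_sq_curr, mu, delta=0.99):
            """Decide the Lovasz condition
                delta * ||b*_{k-1}||^2 <= ||b*_k||^2 + mu^2 * ||b*_{k-1}||^2
            on interval inputs. Returns True/False only when the intervals
            make the answer certain, and None when it is undecided."""
            lhs = iv.mpf(delta) * norm_sq_prev
            rhs = norm_sq_curr + mu ** 2 * norm_sq_prev
            if lhs.b <= rhs.a:   # every point of lhs lies below every point of rhs
                return True
            if lhs.a > rhs.b:    # every point of lhs lies above every point of rhs
                return False
            return None          # intervals overlap: raise precision and retry

        # Example: a certified check on uncertain (interval) Gram-Schmidt data.
        print(certified_lovasz_check(iv.mpf([4.0, 4.001]),
                                     iv.mpf([3.9, 3.95]),
                                     iv.mpf([0.49, 0.51])))   # -> True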

    A static cost analysis for a higher-order language

    We develop a static complexity analysis for a higher-order functional language with structural list recursion. The complexity of an expression is a pair consisting of a cost and a potential. The former is defined to be the size of the expression's evaluation derivation in a standard big-step operational semantics. The latter is a measure of the "future" cost of using the value of that expression. A translation function tr maps target expressions to complexities. Our main result is the following Soundness Theorem: if t is a term in the target language, then the cost component of tr(t) is an upper bound on the cost of evaluating t. The proof of the Soundness Theorem is formalized in Coq, providing certified upper bounds on the cost of any expression in the target language.
    Comment: Final version
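    To make the cost component concrete, the sketch below evaluates a toy list language in big-step style and returns the size of the evaluation derivation alongside the value. The mini-language, its tuple encoding, and the counting scheme are simplifying assumptions for illustration; they stand in for the paper's target language and semantics, not reproduce them.

        def eval_cost(expr, env):
            """Return (value, cost), where cost counts the rules used in
            the big-step derivation of expr."""
            tag = expr[0]
            if tag == "var":                      # variable lookup: one rule
                return env[expr[1]], 1
            if tag == "nil":                      # empty-list literal: one rule
                return [], 1
            if tag == "cons":                     # one rule plus both subderivations
                head, c1 = eval_cost(expr[1], env)
                tail, c2 = eval_cost(expr[2], env)
                return [head] + tail, 1 + c1 + c2
            if tag == "rec":                      # structural recursion rec(l, base, step)
                lst, c_list = eval_cost(expr[1], env)
                acc, c_base = eval_cost(expr[2], env)
                cost = 1 + c_list + c_base
                for x in reversed(lst):           # one application rule per element
                    acc = expr[3](x, acc)
                    cost += 1
                return acc, cost
            raise ValueError(f"unknown expression tag: {tag}")

        # Example: copying a three-element list by structural recursion.
        prog = ("rec", ("var", "xs"), ("nil",), lambda x, acc: [x] + acc)
        print(eval_cost(prog, {"xs": [1, 2, 3]}))   # -> ([1, 2, 3], 6)

    The cost returned for prog grows linearly with the length of xs, which is exactly the kind of bound the translation tr is meant to certify statically.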

    Towards Fast Computation of Certified Robustness for ReLU Networks

    Verifying the robustness property of a general Rectified Linear Unit (ReLU) network is an NP-complete problem [Katz, Barrett, Dill, Julian and Kochenderfer CAV17]. Although finding the exact minimum adversarial distortion is hard, giving a certified lower bound on the minimum distortion is possible. Currently available methods for computing such a bound are either time-consuming or deliver bounds too loose to be useful. In this paper, we exploit the special structure of ReLU networks and provide two computationally efficient algorithms, Fast-Lin and Fast-Lip, that are able to certify non-trivial lower bounds on minimum distortions, by bounding the ReLU units with appropriate linear functions (Fast-Lin) or by bounding the local Lipschitz constant (Fast-Lip). Experiments show that (1) our proposed methods deliver bounds close to (the gap is 2-3X) the exact minimum distortion found by Reluplex on small MNIST networks, while our algorithms are more than 10,000 times faster; (2) our methods deliver bounds of similar quality (the gap is within 35% and usually around 10%; sometimes our bounds are even better) on larger networks compared to methods based on solving linear programming problems, while our algorithms are 33-14,000 times faster; (3) our method is capable of handling large MNIST and CIFAR networks of up to 7 layers and more than 10,000 neurons within tens of seconds on a single CPU core. In addition, we show that, in fact, there is no polynomial-time algorithm that can approximately find the minimum $\ell_1$ adversarial distortion of a ReLU network with a $0.99\ln n$ approximation ratio unless $\mathsf{NP}=\mathsf{P}$, where $n$ is the number of neurons in the network.
    Comment: Tsui-Wei Weng and Huan Zhang contributed equally
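    The linear-bounding step behind Fast-Lin can be sketched as follows: a neuron with certified pre-activation bounds l <= z <= u is relaxed to two parallel lines of slope u/(u-l) when it can be both active and inactive, and kept exact (identity or zero) when its sign is fixed. The NumPy helper below is a minimal sketch of this single relaxation step under those assumptions; it does not perform the full layer-by-layer certification.

        import numpy as np

        def relu_linear_bounds(l, u):
            """Given elementwise pre-activation bounds l <= z <= u, return a
            slope a and intercepts (b_lo, b_hi) such that
                a*z + b_lo <= relu(z) <= a*z + b_hi   for all z in [l, u].
            Stable neurons (u <= 0 or l >= 0) get exact zero/identity bounds;
            unstable neurons get two parallel lines of slope u/(u-l)."""
            # Guard the denominator so the unused branch never divides by zero.
            slope = u / np.maximum(u - l, 1e-12)
            a = np.where(u <= 0, 0.0, np.where(l >= 0, 1.0, slope))
            unstable = (l < 0) & (u > 0)
            b_hi = np.where(unstable, -a * l, 0.0)  # upper line through (l, 0) and (u, u)
            b_lo = np.zeros_like(b_hi)              # lower line through the origin
            return a, b_lo, b_hi

        # Example: one stable-off, one unstable, one stable-on neuron.
        l = np.array([-2.0, -1.0, 0.5])
        u = np.array([-0.5, 3.0, 2.0])
        print(relu_linear_bounds(l, u))

    For the unstable neuron (l = -1, u = 3) the upper line is the chord of the ReLU over [l, u], which bounds it from above by convexity, and the parallel line through the origin bounds it from below.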