
    On the relative proof complexity of deep inference via atomic flows

    We consider the proof complexity of the minimal complete fragment, KS, of standard deep inference systems for propositional logic. To examine the size of proofs we employ atomic flows, diagrams that trace structural changes through a proof but ignore logical information. As results we obtain polynomial simulations of versions of Resolution, along with some extensions. We also show that these systems, as well as bounded-depth Frege systems, cannot polynomially simulate KS, by giving polynomial-size proofs of certain variants of the propositional pigeonhole principle in KS. (Comment: 27 pages, 2 figures; full version of a conference paper.)
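
    For reference, the standard propositional pigeonhole principle is the following tautology, stating that n+1 pigeons cannot be mapped injectively into n holes; this is only the baseline formulation, and the exact variants proved in KS are as defined in the paper:

    $$\mathrm{PHP}^{n+1}_{n}:\qquad \bigwedge_{i=0}^{n}\,\bigvee_{j=1}^{n} p_{i,j}\;\longrightarrow\;\bigvee_{j=1}^{n}\;\bigvee_{0\le i<i'\le n}\bigl(p_{i,j}\wedge p_{i',j}\bigr)$$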

    On the proof complexity of deep inference

    We obtain two results about the proof complexity of deep inference: (1) deep-inference proof systems are as powerful as Frege ones, even when both are extended with the Tseitin extension rule or with the substitution rule; (2) there are analytic deep-inference proof systems that exhibit an exponential speedup over the analytic Gentzen proof systems that they polynomially simulate.
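
    The two rules mentioned are the standard ones (the schematic definitions below are the usual textbook forms, not anything specific to this paper): the Tseitin extension rule lets a proof introduce a fresh variable as an abbreviation for an arbitrary formula, and the substitution rule lets a proof instantiate the variables of an already-derived formula with formulas.

    $$\frac{\;}{\;x \leftrightarrow A\;}\ (\text{ext: } x \text{ fresh, not occurring in } A)\qquad\qquad \frac{F}{\;F\sigma\;}\ (\text{sub: } \sigma \text{ maps variables to formulas})$$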

    Accelerating Training of Deep Neural Networks via Sparse Edge Processing

    We propose a reconfigurable hardware architecture for deep neural networks (DNNs) capable of online training and inference, which uses algorithmically pre-determined, structured sparsity to significantly lower memory and computational requirements. This novel architecture introduces the notion of edge-processing to provide flexibility, and combines junction pipelining and operational parallelization to speed up training. The overall effect is to reduce network complexity by factors of up to 30x and training time by up to 35x relative to GPUs, while maintaining high fidelity of inference results. This has the potential to enable extensive parameter searches and development of the largely unexplored theoretical foundation of DNNs. The architecture automatically adapts itself to different network sizes given available hardware resources. As proof of concept, we show results obtained for different bit widths. (Comment: Presented at the 26th International Conference on Artificial Neural Networks (ICANN) 2017 in Alghero, Italy.)
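
    The idea of algorithmically pre-determined, structured sparsity can be illustrated with a minimal software sketch. This is an illustration under stated assumptions, not the paper's hardware design; all names, the connectivity rule, and the parameters below are hypothetical. The point is that connectivity is fixed in advance by a deterministic rule, so only the masked weights are ever stored, multiplied, or updated.

```python
import numpy as np

def structured_mask(n_out, n_in, fan_in):
    """Fixed, algorithmically pre-determined connectivity: each output unit
    connects to fan_in evenly spaced inputs (one simple deterministic rule)."""
    mask = np.zeros((n_out, n_in), dtype=bool)
    step = max(n_in // fan_in, 1)
    for o in range(n_out):
        cols = (o + np.arange(fan_in) * step) % n_in
        mask[o, cols] = True
    return mask

class SparseLayer:
    """ReLU layer with a fixed sparse connection pattern.
    Dense arrays are used here for clarity; a sparse or hardware backend
    would store only the ~fan_in/n_in fraction of weights kept by the mask."""
    def __init__(self, n_in, n_out, fan_in, lr=0.01, seed=0):
        rng = np.random.default_rng(seed)
        self.mask = structured_mask(n_out, n_in, fan_in)
        self.W = rng.standard_normal((n_out, n_in)) * 0.1 * self.mask
        self.lr = lr

    def forward(self, x):
        self.x = x
        self.z = self.W @ x                              # only masked weights contribute
        return np.maximum(self.z, 0.0)

    def backward(self, grad_out):
        grad_z = grad_out * (self.z > 0.0)               # ReLU gradient
        grad_W = np.outer(grad_z, self.x) * self.mask    # update existing edges only
        self.W -= self.lr * grad_W
        return self.W.T @ grad_z                         # gradient for the previous layer
```

    With fan_in much smaller than n_in, both the stored parameters and the multiply-accumulate work per example shrink by roughly a factor of n_in / fan_in, which is the kind of reduction the abstract attributes to the hardware implementation.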
