
    Learning and Stabilization of Dynamic Binary Neural Networks

    A dynamic binary neural network is a simple two-layer network with delayed feedback that can generate various binary periodic orbits. The network is characterized by the signum activation function, ternary connection parameters, and integer threshold parameters. The ternary connection brings benefits to network hardware and to computation costs in numerical analysis. The dynamics is simplified into a digital return map on a set of lattice points. We investigate the relation between sparsity of the network connection and stability of a target periodic orbit. In order to stabilize a desired binary periodic orbit, we introduce a sparsification algorithm in which each individual is evaluated by feature quantities that characterize the stability of the periodic orbit. Applying the algorithm to a class of periodic orbits that are applicable to control signals of switching power converters, the usefulness of sparsification in stabilizing the desired periodic orbit is confirmed. Key Words: Dynamic binary neural networks, Stabilization, Feature quantities
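    The update rule described by the abstract (signum activation, ternary connections, integer thresholds, delayed feedback) can be illustrated in a few lines. This is a minimal sketch under assumed parameter ranges, not the authors' exact model:

        import numpy as np

        def dbnn_step(x, W, theta):
            # One update of a dynamic binary neural network: signum activation,
            # ternary connections W in {-1, 0, +1}, integer thresholds theta;
            # the binary output is fed back as the next state (delayed feedback).
            return np.where(W @ x - theta >= 0, 1, -1)

        rng = np.random.default_rng(0)
        N = 6
        W = rng.integers(-1, 2, size=(N, N))   # ternary connection parameters
        theta = rng.integers(-2, 3, size=N)    # integer threshold parameters
        x = rng.choice([-1, 1], size=N)        # initial binary state
        orbit = [x.copy()]
        for _ in range(10):                    # iterate the feedback map
            x = dbnn_step(x, W, theta)
            orbit.append(x.copy())

    Stabilizing a target orbit then amounts to choosing a sparse W (many zero entries) for which that orbit is an attractor of this return map.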

    Dynamic Spatial Sparsification for Efficient Vision Transformers and Convolutional Neural Networks

    In this paper, we present a new approach for model acceleration by exploiting spatial sparsity in visual data. We observe that the final prediction in vision Transformers is only based on a subset of the most informative tokens, which is sufficient for accurate image recognition. Based on this observation, we propose a dynamic token sparsification framework to prune redundant tokens progressively and dynamically based on the input to accelerate vision Transformers. Specifically, we devise a lightweight prediction module to estimate the importance score of each token given the current features. The module is added to different layers to prune redundant tokens hierarchically. While the framework is inspired by our observation of the sparse attention in vision Transformers, we find the idea of adaptive and asymmetric computation can be a general solution for accelerating various architectures. We extend our method to hierarchical models including CNNs and hierarchical vision Transformers as well as more complex dense prediction tasks that require structured feature maps by formulating a more generic dynamic spatial sparsification framework with progressive sparsification and asymmetric computation for different spatial locations. By applying lightweight fast paths to less informative features and using more expressive slow paths to more important locations, we can maintain the structure of feature maps while significantly reducing the overall computations. Extensive experiments demonstrate the effectiveness of our framework on various modern architectures and different visual recognition tasks. Our results clearly demonstrate that dynamic spatial sparsification offers a new and more effective dimension for model acceleration. Code is available at https://github.com/raoyongming/DynamicViT. Comment: Accepted to T-PAMI. Journal version of our NeurIPS 2021 work: arXiv:2106.02034. Code is available at https://github.com/raoyongming/DynamicViT.
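    A minimal sketch of the token-scoring idea from the abstract is given below; the module name, layer sizes, and hard top-k selection are illustrative assumptions rather than the authors' exact design:

        import torch
        import torch.nn as nn

        class TokenScorer(nn.Module):
            # Lightweight prediction module: score each token from its features,
            # then keep only the top-k most informative tokens.
            def __init__(self, dim, keep_ratio=0.7):
                super().__init__()
                self.mlp = nn.Sequential(
                    nn.LayerNorm(dim),
                    nn.Linear(dim, dim // 4),
                    nn.GELU(),
                    nn.Linear(dim // 4, 1),
                )
                self.keep_ratio = keep_ratio

            def forward(self, tokens):                      # tokens: (B, N, dim)
                scores = self.mlp(tokens).squeeze(-1)       # importance per token
                k = max(1, int(tokens.shape[1] * self.keep_ratio))
                idx = scores.topk(k, dim=1).indices         # indices of kept tokens
                idx = idx.unsqueeze(-1).expand(-1, -1, tokens.shape[-1])
                return torch.gather(tokens, 1, idx)         # pruned token set

    In the paper's framework such a module is inserted at several layers, so the token set shrinks progressively as the network gets deeper.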

    Sparsity in deep learning: Pruning and growth for efficient inference and training in neural networks

    The growing energy and performance costs of deep learning have driven the community to reduce the size of neural networks by selectively pruning components. Similarly to their biological counterparts, sparse networks generalize just as well as, and sometimes even better than, the original dense networks. Sparsity promises to reduce the memory footprint of regular networks to fit mobile devices, as well as shorten training time for ever-growing networks. In this paper, we survey prior work on sparsity in deep learning and provide an extensive tutorial of sparsification for both inference and training. We describe approaches to remove and add elements of neural networks, different training strategies to achieve model sparsity, and mechanisms to exploit sparsity in practice. Our work distills ideas from more than 300 research papers and provides guidance to practitioners who wish to utilize sparsity today, as well as to researchers whose goal is to push the frontier forward. We include the necessary background on mathematical methods in sparsification, describe phenomena such as early structure adaptation, the intricate relations between sparsity and the training process, and show techniques for achieving acceleration on real hardware. We also define a metric of pruned parameter efficiency that could serve as a baseline for comparison of different sparse networks. We close by speculating on how sparsity can improve future workloads and outline major open problems in the field.
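    As a concrete example of the removal schemes such a survey covers, unstructured magnitude pruning can be sketched as follows (an illustrative sketch, not a method proposed by the survey itself):

        import torch

        def magnitude_prune(weight, sparsity=0.9):
            # Zero out the smallest-magnitude entries so that a fraction
            # `sparsity` of the weights is removed; returns the pruned
            # weights and the binary mask of surviving entries.
            k = int(weight.numel() * sparsity)
            if k == 0:
                return weight, torch.ones_like(weight, dtype=torch.bool)
            threshold = weight.abs().flatten().kthvalue(k).values
            mask = weight.abs() > threshold
            return weight * mask, mask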

    Verification of Neural Network Behaviour: Formal Guarantees for Power System Applications

    This paper presents, to our knowledge, the first framework for verifying neural network behavior in power system applications. Until now, neural networks have been applied in power systems as a black box; this has presented a major barrier for their adoption in practice. Developing a rigorous framework based on mixed-integer linear programming, our methods can determine the range of inputs that neural networks classify as safe or unsafe, and are able to systematically identify adversarial examples. Such methods have the potential to build the missing trust of power system operators in neural networks, and unlock a series of new applications in power systems. This paper presents the framework, methods to assess and improve neural network robustness in power systems, and addresses concerns related to scalability and accuracy. We demonstrate our methods on the IEEE 9-bus, 14-bus, and 162-bus systems, treating both N-1 security and small-signal stability. Comment: published in IEEE Transactions on Smart Grid (https://ieeexplore.ieee.org/abstract/document/9141308).
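    The core of such mixed-integer formulations is an exact big-M encoding of each ReLU unit. The toy sketch below encodes a single unit y = max(0, w*x + b) and maximises its output over a bounded input; PuLP and the specific numbers are assumptions for illustration, not the paper's tooling:

        from pulp import LpBinary, LpMaximize, LpProblem, LpVariable

        prob = LpProblem("relu_verification_sketch", LpMaximize)
        x = LpVariable("x", lowBound=-1.0, upBound=1.0)   # bounded input
        y = LpVariable("y", lowBound=0.0)                 # ReLU output
        z = LpVariable("z", cat=LpBinary)                 # active/inactive indicator
        w, b, M = 2.0, -0.5, 10.0                         # toy weight, bias, big-M bound

        prob += y >= w * x + b                            # y is at least the pre-activation
        prob += y <= w * x + b + M * (1 - z)              # tight when the unit is active (z = 1)
        prob += y <= M * z                                # forces y = 0 when inactive (z = 0)
        prob += y                                         # objective: maximise the output
        prob.solve()

    A full verifier applies this encoding to every neuron of the trained network and then reasons over the resulting MILP about which inputs are classified as safe or unsafe.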

    Deep spatial and tonal data optimisation for homogeneous diffusion inpainting

    Diffusion-based inpainting can reconstruct missing image areas with high quality from sparse data, provided that their location and their values are well optimised. This is particularly useful for applications such as image compression, where the original image is known. Selecting the known data constitutes a challenging optimisation problem that has so far only been investigated with model-based approaches. These methods require a choice between either high quality or high speed, since qualitatively convincing algorithms rely on many time-consuming inpaintings. We propose the first neural network architecture that allows fast optimisation of pixel positions and pixel values for homogeneous diffusion inpainting. During training, we combine two optimisation networks with a neural network-based surrogate solver for diffusion inpainting. This novel concept allows us to perform backpropagation based on inpainting results that approximate the solution of the inpainting equation. Without the need for a single inpainting during test time, our deep optimisation accelerates data selection by more than four orders of magnitude compared to common model-based approaches. This provides real-time performance with high quality results.
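    For context, the kind of model-based solver that the surrogate network replaces can be sketched as a simple Jacobi iteration for homogeneous diffusion (the iteration scheme and boundary handling below are illustrative assumptions, not the paper's solver):

        import numpy as np

        def homogeneous_diffusion_inpainting(image, mask, iters=5000):
            # Fill unknown pixels (mask == 0) by iterating the discrete Laplace
            # equation; known pixels (mask == 1) are re-imposed after each step.
            # Periodic boundaries via np.roll are used only to keep the sketch short.
            u = np.where(mask, image, image[mask.astype(bool)].mean())
            for _ in range(iters):
                avg = 0.25 * (np.roll(u, 1, 0) + np.roll(u, -1, 0)
                              + np.roll(u, 1, 1) + np.roll(u, -1, 1))
                u = np.where(mask, image, avg)
            return u

    The cost of running many such iterative inpaintings inside a data-selection loop is exactly what the learned surrogate solver avoids.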