
    High precision variational Bayesian inference of sparse linear networks

    Sparse networks can be found in a wide range of applications, such as biological and communication networks. Inference of such networks from data has been receiving considerable attention lately, mainly driven by the need to understand and control internal working mechanisms. However, while most available methods have been successful at predicting many correct links, they also tend to infer many incorrect links. Precision is the ratio between the number of correctly inferred links and all inferred links, and should ideally be close to 100%. For example, 50% precision means that half of the inferred links are incorrect, and there is only a 50% chance of picking a correct one. In contrast, this paper infers links of discrete-time linear networks with very high precision, based on variational Bayesian inference and Gaussian processes. Our method can handle limited datasets, does not require full-state measurements, and effectively promotes both system stability and network sparsity. On several examples, Monte Carlo simulations illustrate that our method consistently achieves 100% or nearly 100% precision, even in the presence of noise and hidden nodes, outperforming several state-of-the-art methods. The method should be applicable to a wide range of network inference contexts, including biological networks and power systems.
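    The precision metric defined in the abstract can be sketched in a few lines. This is a minimal illustration; the link sets in the usage example are hypothetical, not taken from the paper:

    ```python
    # Precision of inferred network links, as defined in the abstract:
    # (number of correctly inferred links) / (number of all inferred links).
    def link_precision(inferred, true_links):
        """Return |inferred ∩ true_links| / |inferred| (0.0 if nothing inferred)."""
        inferred = set(inferred)
        if not inferred:
            return 0.0
        correct = inferred & set(true_links)
        return len(correct) / len(inferred)

    # Hypothetical example: 4 inferred links, 2 of them correct -> 50% precision,
    # i.e. half the inferred links are wrong, matching the abstract's example.
    true_links = {(1, 2), (2, 3), (3, 4)}
    inferred = {(1, 2), (2, 3), (1, 4), (4, 2)}
    print(link_precision(inferred, true_links))  # 0.5
    ```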

    Bayesian Compression for Deep Learning

    Compression and computational efficiency in deep learning have become a problem of great significance. In this work, we argue that the most principled and effective way to attack this problem is by adopting a Bayesian point of view, where through sparsity-inducing priors we prune large parts of the network. We introduce two novelties in this paper: 1) we use hierarchical priors to prune nodes instead of individual weights, and 2) we use the posterior uncertainties to determine the optimal fixed-point precision to encode the weights. Both factors significantly contribute to achieving the state of the art in terms of compression rates, while still staying competitive with methods designed to optimize for speed or energy efficiency. Comment: Published as a conference paper at NIPS 2017.
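    The abstract's two ideas can be sketched roughly: pruning whole nodes via a per-node scale, and choosing a fixed-point bit width from posterior uncertainty. The threshold and the bit-width formula below are illustrative assumptions, not the paper's actual procedure:

    ```python
    import math

    # Idea 1 (assumed form): prune an entire node (a whole row of weights)
    # when its learned per-node scale is negligible, rather than pruning
    # individual weights. The threshold here is an arbitrary illustration.
    def prune_nodes(weights, node_scales, threshold=1e-2):
        return [row if scale > threshold else [0.0] * len(row)
                for row, scale in zip(weights, node_scales)]

    # Idea 2 (assumed form): a more certain posterior (smaller std) means a
    # weight carries more resolvable information, so keep more bits; a noisy
    # posterior needs fewer bits to represent it without losing anything.
    def bit_width(weight_range, posterior_std):
        return max(1, math.ceil(math.log2(weight_range / posterior_std)))

    # Hypothetical usage: second node is pruned; uncertain weights get few bits.
    pruned = prune_nodes([[0.5, -0.3], [0.02, 0.01]], node_scales=[1.0, 0.001])
    print(pruned)               # [[0.5, -0.3], [0.0, 0.0]]
    print(bit_width(1.0, 0.25))  # 2
    ```

    The point of the sketch is only the direction of each dependency, not the exact quantization rule, which in the paper follows from the variational posterior itself.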