
    Neural network approximations to posterior densities: an analytical approach

    In Hoogerheide, Kaashoek and Van Dijk (2002) the class of neural network sampling methods is introduced to sample from a target (posterior) distribution that may be multi-modal or skew, or exhibit strong correlation among the parameters. In these methods the neural network is used as an importance function in importance sampling (IS) or as a candidate density in the Metropolis-Hastings (MH) algorithm. In this note we suggest an analytical approach to estimate the moments of a certain (target) distribution, where 'analytical' refers to the fact that no sampling algorithm like MH or IS is needed. We show an example in which our analytical approach is feasible, even in a case where a 'standard' Gibbs approach would fail or be extremely slow.
    Keywords: Markov chain Monte Carlo; Bayesian inference; importance sampling; neural networks
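
    As background to the sampling methods the note contrasts with, here is a minimal, self-contained sketch of self-normalized importance sampling for posterior moments. The bimodal target and the wide Gaussian candidate (standing in for the paper's neural-network candidate density) are illustrative assumptions, not the authors' setup.

```python
# Minimal sketch of self-normalized importance sampling (IS) for posterior
# moments -- the sampling approach the note's analytical method avoids.
# Target and candidate below are toy assumptions for illustration.
import numpy as np

rng = np.random.default_rng(0)

def target_logpdf(theta):
    # Toy bimodal (unnormalized) target: mixture of N(2, 1) and N(-2, 1).
    return np.logaddexp(-0.5 * (theta - 2.0) ** 2, -0.5 * (theta + 2.0) ** 2)

def candidate_logpdf(theta):
    # Wide Gaussian N(0, 3^2) covering both modes
    # (a stand-in for the paper's neural-network candidate).
    return -0.5 * (theta / 3.0) ** 2 - np.log(3.0 * np.sqrt(2.0 * np.pi))

theta = rng.normal(0.0, 3.0, size=50_000)          # draws from the candidate
log_w = target_logpdf(theta) - candidate_logpdf(theta)
w = np.exp(log_w - log_w.max())                    # stabilize before exponentiating
w /= w.sum()                                       # self-normalized IS weights

mean_est = np.sum(w * theta)                       # IS estimate of E[theta]
var_est = np.sum(w * theta ** 2) - mean_est ** 2   # IS estimate of Var[theta]
print(f"posterior mean ~ {mean_est:.3f}, variance ~ {var_est:.3f}")
```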

    Detecting latent communities in network formation models

    This paper proposes a logistic undirected network formation model which allows for assortative matching on observed individual characteristics and the presence of edge-wise fixed effects. We model the coefficients of the observed characteristics as having a latent community structure and the edge-wise fixed effects as being of low rank. We propose a multi-step estimation procedure involving nuclear norm regularization, sample splitting, iterative logistic regression and spectral clustering to detect the latent communities. We show that the latent communities can be exactly recovered when the expected degree of the network is of order log n or higher, where n is the number of nodes in the network. The finite sample performance of the new estimation and inference methods is illustrated through both simulated and real datasets.
    Comment: 63 pages
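
    As an illustration of the spectral-clustering step in a multi-step procedure of this kind, the sketch below embeds nodes via the eigenvectors of a normalized Laplacian built from per-node coefficient estimates and runs k-means on the embedding. The similarity construction, the function names and the toy data are assumptions for illustration, not the paper's estimator.

```python
# Illustrative sketch of a spectral-clustering step only: given node-level
# coefficient estimates (e.g. from iterative logistic regressions), recover
# K latent communities. All choices here are assumptions, not the paper's.
import numpy as np
from scipy.cluster.vq import kmeans2

def detect_communities(beta_hat, K, seed=0):
    """beta_hat: (n, p) estimated coefficients per node; K: # of communities."""
    # Similarity via a Gaussian kernel on pairwise coefficient distances
    # (an assumed construction, chosen for simplicity).
    d2 = ((beta_hat[:, None, :] - beta_hat[None, :, :]) ** 2).sum(-1)
    S = np.exp(-d2 / np.median(d2[d2 > 0]))
    # Symmetric normalized Laplacian: L = I - D^{-1/2} S D^{-1/2}.
    d_inv_sqrt = 1.0 / np.sqrt(S.sum(axis=1))
    L = np.eye(len(S)) - d_inv_sqrt[:, None] * S * d_inv_sqrt[None, :]
    # Embed nodes using the eigenvectors of the K smallest eigenvalues,
    # then cluster the embedding with k-means.
    _, eigvecs = np.linalg.eigh(L)      # eigh returns ascending eigenvalues
    embedding = eigvecs[:, :K]
    _, labels = kmeans2(embedding, K, minit="++", seed=seed)
    return labels

# Toy usage: two well-separated coefficient clusters should be recovered.
rng = np.random.default_rng(0)
beta = np.vstack([rng.normal(0, 0.1, (20, 2)), rng.normal(3, 0.1, (20, 2))])
print(detect_communities(beta, K=2))
```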

    Learning to Weight Samples for Dynamic Early-exiting Networks

    Early exiting is an effective paradigm for improving the inference efficiency of deep networks. By constructing classifiers with varying resource demands (the exits), such networks allow easy samples to be output at early exits, removing the need for executing deeper layers. While existing works mainly focus on the architectural design of multi-exit networks, the training strategies for such models are largely left unexplored. Current state-of-the-art models treat all samples the same during training; the early-exiting behavior during testing is ignored, leading to a gap between training and testing. In this paper, we propose to bridge this gap by sample weighting. Intuitively, easy samples, which generally exit early in the network during inference, should contribute more to training the early classifiers, whereas the training of hard samples, which mostly exit from the deeper layers, should be emphasized by the late classifiers. Our work proposes to adopt a weight prediction network to weight the loss of different training samples at each exit. This weight prediction network and the backbone model are jointly optimized under a meta-learning framework with a novel optimization objective. By bringing the adaptive behavior during inference into the training phase, we show that the proposed weighting mechanism consistently improves the trade-off between classification accuracy and inference efficiency. Code is available at https://github.com/LeapLabTHU/L2W-DEN.
    Comment: ECCV 2022
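
    The following is a hedged PyTorch sketch of the per-sample, per-exit loss-weighting idea described above; it is not the authors' L2W-DEN implementation. The two-exit backbone, the weight-prediction head and all names are illustrative assumptions, and the meta-learning outer loop is omitted for brevity.

```python
# Sketch of per-sample, per-exit loss weighting for a multi-exit network.
# A small weight-prediction net maps each sample's features to one weight
# per exit; each exit's cross-entropy is scaled by that weight.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TwoExitNet(nn.Module):
    # Hypothetical two-exit backbone, standing in for a deep multi-exit model.
    def __init__(self, in_dim=32, hidden=64, n_classes=10):
        super().__init__()
        self.block1 = nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU())
        self.exit1 = nn.Linear(hidden, n_classes)        # early exit
        self.block2 = nn.Sequential(nn.Linear(hidden, hidden), nn.ReLU())
        self.exit2 = nn.Linear(hidden, n_classes)        # final exit
        # Assumed weight-prediction head: features -> positive weight per exit.
        self.weight_net = nn.Sequential(nn.Linear(hidden, 2), nn.Softplus())

    def forward(self, x):
        h1 = self.block1(x)
        h2 = self.block2(h1)
        logits = [self.exit1(h1), self.exit2(h2)]
        # Detach so weight prediction does not backprop through the backbone
        # (a simplification; the paper uses a meta-learning objective instead).
        weights = self.weight_net(h1.detach())           # (batch, n_exits)
        return logits, weights

def weighted_exit_loss(logits, weights, targets):
    # Per-sample cross-entropy at each exit, scaled by the predicted weights.
    losses = torch.stack(
        [F.cross_entropy(l, targets, reduction="none") for l in logits], dim=1)
    return (weights * losses).mean()

# Toy usage on random data.
x, y = torch.randn(8, 32), torch.randint(0, 10, (8,))
model = TwoExitNet()
logits, weights = model(x)
print(weighted_exit_loss(logits, weights, y))
```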