
    Tensor Monte Carlo: particle methods for the GPU era

    Multi-sample, importance-weighted variational autoencoders (IWAE) give tighter bounds and more accurate uncertainty estimates than variational autoencoders (VAE) trained with a standard single-sample objective. However, IWAEs scale poorly: as the latent dimensionality grows, they require exponentially many samples to retain the benefits of importance weighting. While sequential Monte Carlo (SMC) can address this problem, it is prohibitively slow because the resampling step imposes sequential structure that cannot be parallelised, and moreover, resampling is non-differentiable, which is problematic when learning approximate posteriors. To address these issues, we developed tensor Monte Carlo (TMC), which gives exponentially many importance samples by separately drawing K samples for each of the n latent variables, then averaging over all K^n possible combinations. While the sum over exponentially many terms might seem intractable, in many cases it can be computed efficiently as a series of tensor inner products. We show that TMC is superior to IWAE on a generative model with multiple stochastic layers trained on the MNIST handwritten digit database, and that TMC can be combined with standard variance reduction techniques.
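    To make the tensor inner-product idea concrete, the sketch below evaluates the average over all K^n sample combinations for a chain-structured model by passing a length-K message through a sequence of K-by-K factor matrices in log space. It is a minimal illustration under that chain assumption, not the authors' implementation; the factor arrays (log_f0, log_fs, log_fx) and the function name are hypothetical.

```python
import numpy as np
from scipy.special import logsumexp

def tmc_chain_log_evidence(log_f0, log_fs, log_fx):
    """Log of a TMC-style estimate: the average of importance weights over all
    K^n sample combinations of a chain z_1 -> z_2 -> ... -> z_n -> x,
    computed as a sequence of K x K tensor (matrix) inner products.

    log_f0 : (K,)  log p(z_1^k) - log q(z_1^k) for the K samples of z_1
    log_fs : list of (K, K) arrays; entry [j, k] is
             log p(z_{i+1}^k | z_i^j) - log q(z_{i+1}^k)
    log_fx : (K,)  log p(x | z_n^k) for the K samples of the last latent
    """
    K = log_f0.shape[0]
    log_m = log_f0 - np.log(K)        # message over z_1, averaged over its K samples
    for log_f in log_fs:
        # contract out the previous latent (sum over its K samples),
        # then average over the K samples of the newly introduced latent
        log_m = logsumexp(log_m[:, None] + log_f, axis=0) - np.log(K)
    return logsumexp(log_m + log_fx)  # final contraction with the likelihood factor

# Tiny usage example with random factors (K = 5 samples, 3 latent variables)
rng = np.random.default_rng(0)
K = 5
log_f0 = rng.normal(size=K)
log_fs = [rng.normal(size=(K, K)) for _ in range(2)]
log_fx = rng.normal(size=K)
print(tmc_chain_log_evidence(log_f0, log_fs, log_fx))
```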

    Cross-Section Bead Image Prediction in Laser Keyhole Welding of AISI 1020 Steel Using Deep Learning Architectures

    A deep learning model was applied to predict a cross-sectional bead image from laser welding process parameters. The proposed model consists of two successive generators. The first generator produces a weld bead segmentation map from laser intensity and interaction time, which is subsequently translated into an optical microscopic (OM) image by the second generator. Both generators have an encoder–decoder structure based on a convolutional neural network (CNN). In the second generator, a conditional generative adversarial network (cGAN) was additionally employed with multiscale discriminators and residual blocks, considering the size of the OM image. For the training dataset, laser welding experiments with AISI 1020 steel were conducted over a large process window using a 2 kW fiber laser, and a total of 39 process conditions were used for training. High-resolution OM images were successfully generated, and the predicted bead shapes were reasonably accurate (R-squared: 89.0% for penetration depth, 93.6% for weld bead area).
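    As a structural illustration of the two-stage pipeline described above, the sketch below chains a parameter-to-segmentation generator with a segmentation-to-image generator in PyTorch. It is a minimal sketch under assumed layer sizes and a 64x64 output resolution; the class names and channel counts are hypothetical, and the second stage's cGAN discriminators and residual blocks are omitted.

```python
import torch
import torch.nn as nn

class ParamToSegGenerator(nn.Module):
    """First stage: maps (laser intensity, interaction time) to a bead segmentation map."""
    def __init__(self):
        super().__init__()
        self.fc = nn.Linear(2, 128 * 8 * 8)              # lift the 2 process parameters
        self.decoder = nn.Sequential(                    # upsample 8x8 -> 64x64
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 1, 4, stride=2, padding=1), nn.Sigmoid(),
        )

    def forward(self, params):                           # params: (B, 2)
        x = self.fc(params).view(-1, 128, 8, 8)
        return self.decoder(x)                           # (B, 1, 64, 64) segmentation map

class SegToOMGenerator(nn.Module):
    """Second stage: translates the segmentation map into an OM-style image."""
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1), nn.Tanh(),
        )

    def forward(self, seg_map):
        return self.decoder(self.encoder(seg_map))       # (B, 3, 64, 64) OM image

# Chained prediction: process parameters -> bead segmentation -> OM image
g1, g2 = ParamToSegGenerator(), SegToOMGenerator()
params = torch.tensor([[1.5, 0.02]])                     # hypothetical intensity / interaction time
om_image = g2(g1(params))
```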

    Model selection and hypothesis testing for large-scale network models with overlapping groups

    The effort to understand network systems in increasing detail has resulted in a diversity of methods designed to extract their large-scale structure from data. Unfortunately, many of these methods yield diverging descriptions of the same network, making both the comparison and the understanding of their results a difficult challenge. A possible solution to this outstanding issue is to shift the focus away from ad hoc methods and towards more principled approaches based on statistical inference of generative models. We then face the more well-defined task of selecting between competing generative processes, which can be done under a unified probabilistic framework. Here, we consider the comparison between a variety of generative models, including features such as degree correction, where nodes with arbitrary degrees can belong to the same group, and community overlap, where nodes are allowed to belong to more than one group. Because such model variants possess an increasing number of parameters, they become prone to overfitting. In this work, we present a method of model selection based on the minimum description length criterion and posterior odds ratios that is capable of fully accounting for the increased degrees of freedom of the larger models, and selects the best one according to the statistical evidence available in the data. Applying this method to many empirical unweighted networks from different fields, we observe that community overlap is very often not supported by statistical evidence and is selected as the better model only for a minority of them. On the other hand, we find that degree correction tends to be almost universally favored by the available data, implying that intrinsic node properties (as opposed to group properties) are often an essential ingredient of network formation.
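    In the minimum-description-length framing described above, and assuming equal prior odds between two candidate fits, the posterior odds ratio reduces to the exponential of their description-length difference. The sketch below shows that selection rule; the function name and the numerical description lengths are hypothetical.

```python
import math

def posterior_odds(dl_a, dl_b):
    """Posterior odds ratio between two fitted network models, given their
    minimum description lengths (negative joint log-probabilities, in nats),
    assuming equal prior odds. Values >> 1 favour model A; << 1 favour model B."""
    return math.exp(dl_b - dl_a)

# Hypothetical description lengths (nats) for a degree-corrected model
# with and without community overlap, as might be reported by an
# inference library (e.g. graph-tool's BlockState.entropy()).
dl_non_overlap = 12_345.6
dl_overlap = 12_402.3

odds = posterior_odds(dl_non_overlap, dl_overlap)
print(f"Posterior odds in favour of the non-overlapping model: {odds:.3g}")
```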