
    Model Selection for Gaussian Mixture Models for Uncertainty Quantification

    Clustering is the task of assigning objects into groups so that objects within a group are more similar to each other than to objects in other groups. The Gaussian mixture model fitted with the Expectation-Maximization (EM) method is one of the most common ways to cluster a large data set. However, this method needs the number of Gaussian modes (clusters) as input in order to approximate the original data set, so a method that automatically determines the number of component distributions would make the approach applicable in much broader contexts. In the original algorithm, each cluster carries a weight variable that expresses how strongly the cluster influences the data set, and more precisely each data point. The idea is therefore to start with a deliberately large number of clusters and apply a penalized-likelihood method to update the weights while the other parameters are being updated; a cluster is deleted whenever its weight falls below a preset threshold. Once the iterations finish, the number of clusters is produced along with the other parameters of the Gaussian model. Results from a MATLAB simulation show that the modified method recovers the number of clusters and that the final clustering represents the original data set well. Although the modified algorithm can carry out the whole clustering process automatically, its accuracy needs further investigation and its speed needs improvement.
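    A minimal runnable sketch of this idea follows (in Python rather than the authors' MATLAB, and with the penalized-likelihood weight update simplified to a hard weight floor). The function name, the 0.02 floor, and the 1-D setting are illustrative assumptions, not the paper's implementation.

```python
# Sketch: EM for a 1-D Gaussian mixture that starts with more components
# than needed and prunes any component whose weight falls below a floor.
import numpy as np
from scipy.stats import norm

def em_with_pruning(x, k_init=10, weight_floor=0.02, n_iter=200, seed=0):
    rng = np.random.default_rng(seed)
    n = len(x)
    mu = rng.choice(x, size=k_init, replace=False)   # initial means from data
    sigma = np.full(k_init, x.std())
    w = np.full(k_init, 1.0 / k_init)
    for _ in range(n_iter):
        # E-step: responsibilities r[i, k] = P(cluster k | x_i)
        dens = w * norm.pdf(x[:, None], mu, sigma)   # shape (n, k)
        r = dens / dens.sum(axis=1, keepdims=True)
        # M-step: update weights, means, and standard deviations
        nk = r.sum(axis=0)
        w = nk / n
        mu = (r * x[:, None]).sum(axis=0) / nk
        sigma = np.sqrt((r * (x[:, None] - mu) ** 2).sum(axis=0) / nk)
        # Prune components whose weight dropped below the floor,
        # then renormalize the surviving weights.
        keep = w > weight_floor
        if not keep.all():
            mu, sigma, w = mu[keep], sigma[keep], w[keep]
            w /= w.sum()
    return w, mu, sigma

# Example: two well-separated clusters recovered from a k_init=10 start.
rng = np.random.default_rng(0)
x = np.concatenate([rng.normal(-3, 1, 500), rng.normal(4, 1, 500)])
w, mu, sigma = em_with_pruning(x)
print(f"{len(w)} components survive; means ~ {np.sort(mu).round(2)}")
```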

    Experimental and numerical observation of dark and bright breathers in the band gap of a diatomic electrical lattice

    We observe dark and bright intrinsic localized modes (ILMs), also known as discrete breathers, experimentally and numerically in a diatomic-like electrical lattice. The experimental generation of dark ILMs by driving a dissipative lattice with a spatially homogeneous amplitude is, to our knowledge, unprecedented. In addition, the experimental manifestation of bright breathers within the band gap is also novel in this system. In experimental measurements the dark modes appear just below the bottom of the top branch in frequency. As the frequency is lowered further into the band gap, the dark ILMs persist until the nonlinear localization pattern reverses and bright ILMs appear on top of the finite background. Deep into the band gap, only a single bright structure survives in a lattice of 32 nodes. The vicinity of the bottom band also features bright and dark self-localized excitations. These results pave the way for a more systematic study of dark breathers and their bifurcations in diatomic-like chains.
    Funding: VI Plan Propio of the University of Seville, Spain (VI PPITUS); AEI/FEDER, UE MAT2016-79866-

    Distributed Training Large-Scale Deep Architectures

    Scale of data and scale of computation infrastructures together enable the current deep learning renaissance. However, training large-scale deep architectures demands both algorithmic improvement and careful system configuration. In this paper, we focus on employing a system approach to speed up large-scale training. Via lessons learned from our routine benchmarking effort, we first identify the bottlenecks and overheads that hinder data parallelism. We then devise guidelines that help practitioners configure an effective system and fine-tune parameters to achieve the desired speedup. Specifically, we develop a procedure for setting minibatch size and choosing computation algorithms. We also derive lemmas for determining the quantity of key components, such as the number of GPUs and parameter servers. Experiments and examples show that these guidelines effectively speed up large-scale deep learning training.
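    The paper's lemmas are not reproduced in this abstract, so the sketch below only illustrates the style of reasoning involved: a generic bandwidth-balance estimate of how many parameter servers are needed before any single server saturates. The function name, the traffic model, and the 10 Gbps default are assumptions for illustration, not the paper's derivation.

```python
# Back-of-envelope sizing sketch (assumed, not the paper's actual lemmas):
# shard the model across enough parameter servers that the workers'
# combined gradient pushes and model pulls fit within per-server bandwidth.
import math

def min_parameter_servers(model_size_gb, n_workers, updates_per_sec,
                          server_bandwidth_gbps=10.0):
    """Smallest number of parameter servers whose combined network
    capacity can absorb the workers' push/pull traffic."""
    # Each update moves the gradients up and the refreshed model back down,
    # so every worker generates roughly 2x the model size per update.
    # The factor of 8 converts gigabytes to gigabits.
    total_traffic_gbps = 2 * model_size_gb * 8 * n_workers * updates_per_sec
    # Even sharding spreads that traffic uniformly across the servers.
    return max(1, math.ceil(total_traffic_gbps / server_bandwidth_gbps))

# Example: a 1 GB model, 8 workers, one update per second per worker.
print(min_parameter_servers(1.0, 8, 1.0))  # -> 13 with 10 Gbps servers
```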