7,402 research outputs found

    Loop-corrected belief propagation for lattice spin models

    Belief propagation (BP) is a message-passing method for solving probabilistic graphical models. It is very successful in treating disordered models (such as spin glasses) on random graphs. On the other hand, finite-dimensional lattice models have an abundant number of short loops, and the BP method is still far from being satisfactory in treating the complicated loop-induced correlations in these systems. Here we propose a loop-corrected BP method to take into account the effect of short loops in lattice spin models. We demonstrate, through an application to the square-lattice Ising model, that loop-corrected BP improves over the naive BP method significantly. We also implement loop-corrected BP at the coarse-grained region graph level to further boost its performance.
    Comment: 11 pages, minor changes with new references added. Final version as published in EPJ
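
    As a point of reference for the abstract above, here is a minimal sketch of plain (naive) belief propagation for the ferromagnetic Ising model on a small periodic square lattice, written as cavity-field updates. It is not the authors' loop-corrected method; the lattice size, couplings, field, and variable names are illustrative assumptions.

```python
# Minimal sketch of naive belief propagation for the ferromagnetic Ising
# model on a periodic square lattice (cavity-field form). Illustrative
# only: parameters and names below are assumptions, not the paper's code.
import numpy as np

L = 8        # linear lattice size (assumption for the demo)
beta = 0.4   # inverse temperature
J = 1.0      # ferromagnetic coupling
H = 0.01     # small external field to break the up/down symmetry

def neighbors(i, j):
    """The 4 nearest neighbours of site (i, j) with periodic boundaries."""
    return [((i + 1) % L, j), ((i - 1) % L, j),
            (i, (j + 1) % L), (i, (j - 1) % L)]

# cavity_field[(site, target)] = effective field on `site` when the edge
# to `target` is removed; initialise with small random values.
rng = np.random.default_rng(0)
cavity_field = {}
for i in range(L):
    for j in range(L):
        for nb in neighbors(i, j):
            cavity_field[((i, j), nb)] = 0.1 * rng.standard_normal()

def edge_message(h):
    """Field passed along an edge given the sender's cavity field."""
    return np.arctanh(np.tanh(beta * J) * np.tanh(beta * h)) / beta

# Iterate the BP fixed-point equations until the messages converge.
for sweep in range(200):
    max_diff = 0.0
    for (site, target), h_old in list(cavity_field.items()):
        incoming = sum(edge_message(cavity_field[(nb, site)])
                       for nb in neighbors(*site) if nb != target)
        h_new = H + incoming
        max_diff = max(max_diff, abs(h_new - h_old))
        cavity_field[(site, target)] = h_new
    if max_diff < 1e-10:
        break

# Local magnetisation at one site from the converged cavity fields.
site = (0, 0)
total_field = H + sum(edge_message(cavity_field[(nb, site)])
                      for nb in neighbors(*site))
print("BP magnetisation at", site, "=", np.tanh(beta * total_field))
```

    Because these equations ignore the short loops of the square lattice, the resulting magnetisations are only approximate; correcting for those loops is precisely what the paper addresses.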

    BayesNAS: A Bayesian Approach for Neural Architecture Search

    One-Shot Neural Architecture Search (NAS) is a promising method to significantly reduce search time without any separate training. It can be treated as a network compression problem on the architecture parameters of an over-parameterized network. However, most one-shot NAS methods have two issues. First, the dependencies between a node and its predecessors and successors are often disregarded, which results in improper treatment of zero operations. Second, pruning architecture parameters based on their magnitude is questionable. In this paper, we employ the classic Bayesian learning approach to alleviate these two issues by modeling architecture parameters using hierarchical automatic relevance determination (HARD) priors. Unlike other NAS methods, we train the over-parameterized network for only one epoch and then update the architecture. Impressively, this enabled us to find the architecture on CIFAR-10 within only 0.2 GPU days using a single GPU. Competitive performance can also be achieved by transferring to ImageNet. As a byproduct, our approach can be applied directly to compressing convolutional neural networks by enforcing structural sparsity, which achieves extremely sparse networks without accuracy deterioration.
    Comment: International Conference on Machine Learning 201
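
    For readers unfamiliar with automatic relevance determination (ARD), the following toy sketch shows the mechanism such hierarchical priors build on, using a simple linear model rather than an over-parameterized network. It is an illustrative analogue, not the paper's implementation; the data, threshold, and variable names are assumptions. Each candidate parameter gets its own prior precision, and the evidence updates switch off irrelevant parameters, which is the basis for pruning by posterior relevance rather than by raw magnitude.

```python
# Toy sketch of automatic relevance determination (ARD) for a linear
# model: each weight has its own precision alpha_i, and weights whose
# evidence-driven precision blows up are pruned. Illustrative analogue
# of relevance-based pruning; not the BayesNAS implementation.
import numpy as np

rng = np.random.default_rng(0)
N, D = 200, 10                      # samples, candidate parameters
Phi = rng.standard_normal((N, D))
w_true = np.zeros(D)
w_true[[1, 4]] = [2.0, -3.0]        # only two parameters truly matter
y = Phi @ w_true + 0.1 * rng.standard_normal(N)

alpha = np.ones(D)                  # per-parameter ARD precisions
beta = 1.0                          # noise precision

for _ in range(100):
    # Gaussian posterior over weights given the current hyperparameters.
    Sigma = np.linalg.inv(np.diag(alpha) + beta * Phi.T @ Phi)
    mu = beta * Sigma @ Phi.T @ y
    # Evidence (type-II maximum likelihood) updates for alpha and beta.
    gamma = 1.0 - alpha * np.diag(Sigma)
    alpha = gamma / (mu ** 2 + 1e-12)
    beta = (N - gamma.sum()) / np.sum((y - Phi @ mu) ** 2)

keep = alpha < 1e3                  # illustrative pruning threshold
print("kept parameters:", np.where(keep)[0])    # typically [1 4] here
print("posterior means:", np.round(mu[keep], 2))
```

    The point of the sketch is that pruning is driven by the learned precisions (how strongly the evidence supports each parameter), not by comparing raw weight magnitudes.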

    (Bis{2-[3-(2,4,6-trimethylbenzyl)imidazolin-2-yliden-1-yl-κC2]-4-methylphenyl}amido-κN)chloridopalladium(II)

    The coordination geometry about the Pd centre in the title compound, [Pd(C40H42N5)Cl], is approximately square-planar. The CNC pincer-type N-heterocyclic carbene ligand binds to the Pd atom in a tridentate fashion through the amido N atom and the two carbene C atoms, generating two six-membered chelate rings; the chloride ligand completes the coordination sphere.