
    Sparse Hopfield network reconstruction with $\ell_1$ regularization

    We propose an efficient strategy to infer sparse Hopfield networks based on magnetizations and pairwise correlations measured through Glauber samplings. This strategy incorporates the $\ell_1$ regularization into the Bethe approximation via a quadratic approximation to the log-likelihood, and further reduces the inference error below that of the unregularized Bethe approximation. The optimal regularization parameter is observed to be of the order of $M^{-\nu}$, where $M$ is the number of independent samples. The value of the scaling exponent depends on the performance measure: $\nu \simeq 0.5001$ for the root mean squared error and $\nu \simeq 0.2743$ for the misclassification rate. The efficiency of this strategy is demonstrated for the sparse Hopfield model, but the method is generally applicable to other diluted mean-field models. In particular, it is simple to implement and carries no heavy computational cost.
    Comment: 9 pages, 3 figures, Eur. Phys. J. B (in press)
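    To illustrate how an $\ell_1$ penalty with an $M$-dependent strength enters this kind of network reconstruction, here is a minimal Python sketch using per-spin $\ell_1$-regularized pseudolikelihood. This is a simpler relative of the paper's Bethe-based quadratic scheme, not the scheme itself; the exponent choice, the coupling convention, and the scaling of the penalty into scikit-learn's `C` parameter are all illustrative assumptions.

```python
# Sketch: l1-regularized pseudolikelihood inference of Ising/Hopfield couplings
# from +/-1 spin samples. NOT the paper's Bethe-based quadratic scheme; shown
# only to illustrate the l1 penalty and the lambda ~ M^{-nu} scaling.
import numpy as np
from sklearn.linear_model import LogisticRegression

def infer_couplings(samples, nu=0.5):
    """samples: (M, N) array of +/-1 spins from Glauber dynamics."""
    M, N = samples.shape
    lam = M ** (-nu)                 # regularization strength ~ M^{-nu}
    J = np.zeros((N, N))
    for i in range(N):               # one l1-penalized logistic fit per spin
        y = (samples[:, i] + 1) // 2             # map {-1,+1} -> {0,1}
        X = np.delete(samples, i, axis=1)
        # sklearn minimizes ||w||_1 + C * sum(loss), so C = 1/(lam*M) puts a
        # per-sample penalty of strength lam on the couplings
        clf = LogisticRegression(penalty="l1", C=1.0 / (lam * M),
                                 solver="liblinear")
        clf.fit(X, y)
        # P(s_i=+1 | rest) = sigmoid(2 * sum_j J_ij s_j + 2 h_i),
        # so the logistic coefficients are twice the couplings
        J[i, np.arange(N) != i] = clf.coef_[0] / 2.0
    return (J + J.T) / 2.0           # symmetrize the two estimates of J_ij
```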

    Mean-field neural networks: learning mappings on Wasserstein space

    We study the machine learning task for models with operators mapping between the Wasserstein space of probability measures and a space of functions, as arise, e.g., in mean-field games and control problems. Two classes of neural networks, based on bin density and on cylindrical approximation, are proposed to learn these so-called mean-field functions, and are theoretically supported by universal approximation theorems. We perform several numerical experiments for training these two mean-field neural networks and show their accuracy and efficiency, measuring the generalization error on various test distributions. Finally, we present different algorithms relying on mean-field neural networks for solving time-dependent mean-field problems, and illustrate our results with numerical tests for the example of a semilinear partial differential equation in the Wasserstein space of probability measures.
    Comment: 25 pages, 14 figures
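    A minimal sketch of the bin-density idea: a probability measure on an interval is represented by its histogram over a fixed grid of bins, and an ordinary MLP then learns the mean-field function from that finite-dimensional input. The bin count, network widths, and the toy target functional (the second moment) below are illustrative assumptions, not details taken from the paper.

```python
# Bin-density sketch: represent a measure mu on [lo, hi] by K bin
# probabilities and learn a functional F(mu) with a plain MLP.
import torch
import torch.nn as nn

K = 50                                   # number of density bins (assumption)

class BinDensityNet(nn.Module):
    def __init__(self, k=K, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(k, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )
    def forward(self, hist):             # hist: (batch, K) bin probabilities
        return self.net(hist)

def to_histogram(x, k=K, lo=-3.0, hi=3.0):
    """Turn samples x ~ mu, shape (batch, n_samples), into bin probabilities."""
    edges = torch.linspace(lo, hi, k + 1)
    idx = torch.bucketize(x.clamp(lo, hi - 1e-6), edges) - 1
    hist = torch.zeros(x.shape[0], k)
    hist.scatter_add_(1, idx.clamp(0, k - 1), torch.ones_like(x))
    return hist / x.shape[1]

# Toy target: F(mu) = second moment, estimated from samples of each measure.
x = torch.randn(256, 1000) * torch.rand(256, 1)   # 256 centered Gaussians
y = (x ** 2).mean(dim=1, keepdim=True)
model = BinDensityNet()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for _ in range(200):                              # short training loop
    opt.zero_grad()
    loss = nn.functional.mse_loss(model(to_histogram(x)), y)
    loss.backward()
    opt.step()
```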

    Pion Superfluidity and Meson Properties at Finite Isospin Density

    We investigate pion superfluidity and its effect on meson properties and the equation of state at finite temperature and finite isospin and baryon densities in the framework of the standard flavor SU(2) NJL model. In the mean-field approximation for quarks and the random phase approximation for mesons, the critical isospin chemical potential for pion superfluidity is exactly the pion mass in the vacuum, and, corresponding to the spontaneous breaking of isospin symmetry, the pion superfluidity phase contains a Goldstone mode which is a linear combination of the normal sigma and charged pion modes. We solve numerically the gap equations for the chiral and pion condensates and calculate the phase diagrams, the meson spectra, and the equation of state, comparing them with those obtained in other effective models. The competition between pion superfluidity and color superconductivity at finite baryon density, and between pion and kaon superfluidity at finite strangeness density in the flavor SU(3) NJL model, are briefly discussed.
    Comment: Updated version: (1) typos corrected; (2) an algebra error in Eq. (87) corrected; (3) Fig. (17) renewed according to Eq. (87). We thank Prof. Masayuki Matsuzaki for pointing out the error in Eq. (87).
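    Schematically, the mean-field structure described above can be written as follows; the notation is illustrative rather than the paper's, and only the onset condition is taken directly from the abstract.

```latex
% Mean-field conditions (schematic): the chiral condensate \sigma and the
% charged-pion condensate \pi jointly minimize the thermodynamic potential
% \Omega of the NJL model at given T, \mu_B, \mu_I,
\begin{align}
  \frac{\partial \Omega(\sigma,\pi;T,\mu_B,\mu_I)}{\partial \sigma} &= 0, &
  \frac{\partial \Omega(\sigma,\pi;T,\mu_B,\mu_I)}{\partial \pi} &= 0.
\end{align}
% The pion superfluid is the region where the \pi \neq 0 solution is favored,
% with onset at the critical isospin chemical potential stated above:
\begin{equation}
  \mu_I^{c} = m_\pi \quad \text{(the pion mass in the vacuum)}.
\end{equation}
```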

    A Digital Neuromorphic Realization of the 2-D Wilson Neuron Model

    This brief presents a piecewise-linear approximation of the nonlinear Wilson (NW) neuron model for the realization of an efficient digital circuit implementation. The accuracy of the proposed piecewise Wilson (PW) model is examined by calculating time-domain signal-shaping errors. Furthermore, bifurcation analyses demonstrate that the approximation follows the same bifurcation pattern as the NW model. As a proof of concept, both models are hardware synthesized and implemented on field-programmable gate arrays, demonstrating that the PW model exhibits a range of neuronal behaviors similar to the NW model with considerably higher computational performance and lower hardware overhead. This approach can be used in hardware-based large-scale biological neural network simulations and behavioral studies. The mean normalized root mean square error and maximum absolute error of the PW model are 6.32% and 0.31%, respectively, compared to the NW model.
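    The core trick behind such PW models is replacing a smooth nonlinearity with a few linear segments, which map onto cheap shift-and-add hardware. Here is a minimal Python sketch of that idea applied to a stand-in quadratic nonlinearity (of the kind appearing in the Wilson equations), together with the two error metrics named above; the breakpoint count and the test function are illustrative assumptions, not the paper's design.

```python
# Piecewise-linear (PWL) approximation sketch: sample f on fixed breakpoints
# and interpolate linearly between them, then measure NRMSE and max abs error.
import numpy as np

def pwl(f, breakpoints):
    """Return a piecewise-linear approximation of f with given breakpoints."""
    ys = f(breakpoints)
    return lambda x: np.interp(x, breakpoints, ys)

f = lambda v: v ** 2                      # stand-in quadratic nonlinearity
bps = np.linspace(-1.0, 1.0, 9)           # 8 segments (illustrative choice)
f_pwl = pwl(f, bps)

v = np.linspace(-1.0, 1.0, 10_001)
err = f(v) - f_pwl(v)
nrmse = np.sqrt(np.mean(err ** 2)) / (f(v).max() - f(v).min())
mae = np.abs(err).max()
print(f"NRMSE = {nrmse:.4%}, max abs error = {mae:.4%}")
```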