
    The randomly driven Ising ferromagnet, Part II: One and two dimensions

    We consider the behavior of an Ising ferromagnet obeying Glauber dynamics under the influence of a fast-switching, random external field. In Part I, we introduced a general formalism for describing such systems and presented the mean-field theory. In this article we derive results for the one-dimensional case, which can be only partially solved. Monte Carlo simulations performed on a square lattice indicate that the main features of the mean-field theory survive the presence of strong fluctuations.
    Comment: 10 pages in REVTeX/LaTeX format, 17 eps/ps figures. Submitted to Journal of Physics
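
    As a rough illustration of the kind of simulation described above (a sketch under assumed parameters, not the authors' code), the following runs Glauber dynamics on an L x L square lattice while an external field of fixed amplitude h picks a random sign every tau sweeps; the values of L, J, h, beta and tau are illustrative.

    import numpy as np

    rng = np.random.default_rng(0)
    # Illustrative parameters: lattice size, coupling, field amplitude,
    # inverse temperature, and sweeps between field switches.
    L, J, h, beta, tau = 32, 1.0, 0.5, 1.0, 1

    spins = rng.choice([-1, 1], size=(L, L))

    def sweep(spins, field):
        """One Glauber sweep: L*L single-spin updates with rate 1/(1+exp(beta*dE))."""
        for _ in range(L * L):
            i, j = rng.integers(L, size=2)
            nn = (spins[(i + 1) % L, j] + spins[(i - 1) % L, j]
                  + spins[i, (j + 1) % L] + spins[i, (j - 1) % L])
            dE = 2.0 * spins[i, j] * (J * nn + field)  # energy cost of flipping (i, j)
            if rng.random() < 1.0 / (1.0 + np.exp(beta * dE)):
                spins[i, j] *= -1

    m_trace = []
    for t in range(2000):
        field = h if rng.random() < 0.5 else -h  # field resampled to +h or -h
        for _ in range(tau):
            sweep(spins, field)
        m_trace.append(spins.mean())             # magnetization per spin

    print("mean |m| after the transient:", np.abs(np.array(m_trace[500:])).mean())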

    The randomly driven Ising ferromagnet, Part I: General formalism and mean field theory

    We consider the behavior of an Ising ferromagnet obeying Glauber dynamics under the influence of a fast-switching, random external field. After introducing a general formalism for describing such systems, we consider here the mean-field theory. A novel type of first-order phase transition related to spontaneous symmetry breaking and dynamic freezing is found. The non-equilibrium stationary state has a complex structure, which changes as a function of the parameters from a singular-continuous distribution with Euclidean or fractal support to an absolutely continuous one.
    Comment: 12 pages in REVTeX/LaTeX format, 12 eps/ps figures. Submitted to Journal of Physics
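
    Read literally, the fast-switching mean-field dynamics above suggests iterating a stochastic map for the magnetization; the sketch below assumes the standard mean-field Glauber form m -> tanh(beta * (m + h_t)) with a field h_t = +h or -h drawn at random (an assumption consistent with textbook mean-field theory, not code from the paper). Histogramming the iterates gives a crude look at the stationary distribution whose support the abstract describes as Euclidean or fractal, depending on parameters.

    import numpy as np

    rng = np.random.default_rng(1)
    beta, h = 1.5, 0.4              # illustrative inverse temperature and field amplitude

    m = 0.1
    samples = []
    for t in range(100_000):
        h_t = h if rng.random() < 0.5 else -h
        m = np.tanh(beta * (m + h_t))   # assumed mean-field update under the current field
        if t > 1_000:                   # discard the transient
            samples.append(m)

    hist, _ = np.histogram(samples, bins=200, range=(-1, 1), density=True)
    # A large fraction of empty bins hints at a stationary distribution that
    # does not fill an interval, i.e. possibly fractal support.
    print("fraction of empty bins:", (hist == 0).mean())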

    Stationary Properties of a Randomly Driven Ising Ferromagnet

    We consider the behavior of an Ising ferromagnet obeying Glauber dynamics under the influence of a fast-switching, random external field. Analytic results for the stationary state are presented in the mean-field approximation, exhibiting a novel type of first-order phase transition related to dynamic freezing. Monte Carlo simulations performed on a square lattice indicate that many features of the mean-field theory may survive the presence of fluctuations.
    Comment: 5 pages in RevTeX format, 7 eps/ps figures; send comments to [email protected]; submitted to PR

    A fast method for calculating the perceptron with maximal stability

    For the class of linearly separable two-class (Boolean) functions, the Perceptron with maximal stability defines, in the space of all possible input configurations, the direction along which the gap between the two classes is maximal. This solution has several advantages: it is unique, it is robust, and it has the best generalization probability among all known linear discriminants. I present here an active set approach to the dual problem: finding the minimal connector between two disjoint convex hulls. If N is the number of input units and M is the number of examples, this algorithm runs on average in O(M N^2) steps and requires the storage of a symmetric (N + 3) × (N + 3) matrix.
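
    As a hedged stand-in for the active set method of the abstract, the dual problem it names, finding the minimal connector between two disjoint convex hulls, can also be approximated with the classic Gilbert algorithm applied to the Minkowski difference of the two point sets. The sketch below uses that simpler scheme; all names and parameter values are illustrative, not taken from the paper.

    import numpy as np

    def minimal_connector(X_pos, X_neg, iters=2000):
        """Approximate the shortest vector w between conv(X_pos) and conv(X_neg);
        w is also the direction of the maximal-stability perceptron."""
        # Points of the Minkowski difference conv(X_pos) - conv(X_neg).
        D = (X_pos[:, None, :] - X_neg[None, :, :]).reshape(-1, X_pos.shape[1])
        w = D[0].copy()                      # start from any difference vector
        for _ in range(iters):
            d = D[np.argmin(D @ w)]          # support point minimizing <w, d>
            u = w - d
            if u @ u == 0.0:
                break
            lam = np.clip((w @ u) / (u @ u), 0.0, 1.0)  # exact line search on [w, d]
            w = (1.0 - lam) * w + lam * d
        return w

    # Toy usage: two linearly separable clouds in the plane.
    rng = np.random.default_rng(2)
    X_pos = rng.normal([2.0, 2.0], 0.5, size=(30, 2))
    X_neg = rng.normal([-2.0, -2.0], 0.5, size=(30, 2))
    w = minimal_connector(X_pos, X_neg)
    print("gap width:", np.linalg.norm(w))
    print("separated:", (X_pos @ w > 0).all() and (X_neg @ w < 0).all())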

    A Fast Method for Calculating the Perceptron with Maximal Stability

    For the class of linearly separable two-class (Boolean) functions, the Perceptron with maximal stability defines, in the space of all possible input configurations, the direction along which the gap between the two classes is maximal. This solution has several advantages: it is unique, it is robust, and it has the best generalization probability among all known linear discriminants. I present here an active set approach to the dual problem: finding the minimal connector between two disjoint convex hulls. If N is the number of input units and M is the number of examples, this algorithm runs in O(M N^2) steps and requires the storage of a symmetric (N + 3) × (N + 3) matrix.
    1 Introduction
    R. Rammal was a physicist with many faces. He was interested in problems, which he solved with whatever methods he found useful, analytical or numerical. For example, he was not afraid to learn from computer scientists how to effectively compute the ground state of two-dimensional spin glasses…

    Computing the Bayes Kernel Classifier

    Introduction
    Support Vector Machines try to achieve good generalization by computing the maximum margin separating hyperplane in a high-dimensional feature space. This approach effectively combines two very good ideas. The first idea is to map the space of input vectors into a very high-dimensional feature space in such a way that nonlinear decision functions on the input space can be constructed using only separating hyperplanes on the feature space. By making use of kernels, we can implicitly perform such mappings without explicitly using high-dimensional separating vectors (Boser et al., 1992). Since it is very likely that the training examples will be linearly separable in the high-dimensional feature space, this method offers an elegant alternative to network growth algorithms as in (Rujan and Marchand, 1989; Marchand et al., 1990), which try to construct nonlinear decision surfaces by combining perceptrons. The second idea is to construct the separating hyperplane on t…
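
    To make the kernel idea above concrete, here is a minimal kernel perceptron (an illustration of the Boser et al. construction, not the Bayes kernel classifier of the paper): it learns a nonlinear decision function on XOR-like data while only ever evaluating a Gaussian kernel, never the high-dimensional feature map itself.

    import numpy as np

    def rbf(x, y, gamma=1.0):
        """Gaussian kernel: an inner product in an implicit feature space."""
        return np.exp(-gamma * np.sum((x - y) ** 2))

    def kernel_perceptron(X, y, epochs=20):
        """Dual perceptron: decision f(x) = sum_i alpha_i * y_i * k(x_i, x)."""
        n = len(X)
        K = np.array([[rbf(a, b) for b in X] for a in X])   # Gram matrix
        alpha = np.zeros(n)
        for _ in range(epochs):
            for i in range(n):
                if y[i] * np.sign((alpha * y) @ K[:, i]) <= 0:  # mistake (or tie)
                    alpha[i] += 1.0                             # dual update
        return alpha

    # Toy usage: XOR labels, which no hyperplane in the input space separates.
    X = np.array([[0.0, 0.0], [0.0, 1.0], [1.0, 0.0], [1.0, 1.0]])
    y = np.array([-1, 1, 1, -1])
    alpha = kernel_perceptron(X, y)
    preds = [int(np.sign(sum(alpha[i] * y[i] * rbf(X[i], x) for i in range(4)))) for x in X]
    print("predictions:", preds, "targets:", y.tolist())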