
    The Random-Diluted Triangular Plaquette Model: study of phase transitions in a Kinetically Constrained Model

    We study how the thermodynamic properties of the Triangular Plaquette Model (TPM) are influenced by the addition of extra interactions. The thermodynamics of the original TPM is trivial, while its dynamics is glassy, as is usual in Kinetically Constrained Models. As soon as we generalize the model to include additional interactions, a thermodynamic phase transition appears in the system. The additional interactions we consider are either short ranged, forming a regular lattice in the plane, or long ranged of the small-world kind. In the long-range case we call the new model the Random-Diluted TPM. We provide arguments that the model so modified should undergo a thermodynamic phase transition, and that in the long-range case this is a glass transition of the "Random First-Order" kind. Finally, we support our conjectures by studying the finite-temperature phase diagram of the Random-Diluted TPM in the Bethe approximation. This corresponds to the exact calculation on the random regular graph, where the free energy and the configurational entropy can be computed by means of the cavity equations. (Comment: 20 pages, 7 figures; final version to appear on PR)
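
    For reference (not spelled out in the abstract), the standard TPM places Ising spins $\sigma_i=\pm 1$ on a triangular lattice with a three-spin energy summed over the downward-pointing plaquettes; schematically, the extra interactions discussed above add a pairwise term of strength $\epsilon$ over the chosen (regular or small-world) bonds; the exact couplings used in the paper may differ:

    $$ E_{\mathrm{TPM}} = -J \sum_{\nabla(ijk)} \sigma_i \sigma_j \sigma_k, \qquad E = E_{\mathrm{TPM}} - \epsilon \sum_{(lm)} \sigma_l \sigma_m. $$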

    Asymptotic learning curves of kernel methods: empirical data vs. Teacher-Student paradigm

    How many training data are needed to learn a supervised task? It is often observed that the generalization error decreases as $n^{-\beta}$, where $n$ is the number of training examples and $\beta$ an exponent that depends on both data and algorithm. In this work we measure $\beta$ when applying kernel methods to real datasets. For MNIST we find $\beta \approx 0.4$ and for CIFAR10 $\beta \approx 0.1$, for both regression and classification tasks, and for Gaussian or Laplace kernels. To rationalize the existence of non-trivial exponents that can be independent of the specific kernel used, we study the Teacher-Student framework for kernels. In this scheme, a Teacher generates data according to a Gaussian random field, and a Student learns them via kernel regression. With a simplifying assumption -- namely that the data are sampled from a regular lattice -- we derive $\beta$ analytically for translation-invariant kernels, using previous results from the kriging literature. Provided that the Student is not too sensitive to high frequencies, $\beta$ depends only on the smoothness and dimension of the training data. We confirm numerically that these predictions hold when the training points are sampled at random on a hypersphere. Overall, the test error is found to be controlled by the magnitude of the projection of the true function on the kernel eigenvectors whose rank is larger than $n$. Using this idea we relate the exponent $\beta$ to an exponent $a$ describing how the coefficients of the true function in the eigenbasis of the kernel decay with rank. We extract $a$ from real data by performing kernel PCA, leading to $\beta \approx 0.36$ for MNIST and $\beta \approx 0.07$ for CIFAR10, in good agreement with observations. We argue that these rather large exponents are possible due to the small effective dimension of the data. (Comment: We added (i) the prediction of the exponent $\beta$ for real data using kernel PCA; (ii) the generalization of our results to non-Gaussian data from reference [11] (Bordelon et al., "Spectrum Dependent Learning Curves in Kernel Regression and Wide Neural Networks").)
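
    As a minimal illustration of how such an exponent can be measured (this is not the authors' code; the kernel, ridge value, and the stand-in "Teacher" function below are arbitrary choices), one can fit the slope of log test error against log $n$ for kernel ridge regression:

```python
# Minimal sketch (not the authors' code): estimate the learning-curve exponent beta
# by fitting test error ~ n^{-beta} for kernel ridge regression on synthetic data.
import numpy as np
from sklearn.kernel_ridge import KernelRidge

rng = np.random.default_rng(0)
d = 5                                    # input dimension (illustrative choice)
w = rng.standard_normal(d)
teacher = lambda X: np.sin(X @ w)        # stand-in target; the paper uses a Gaussian random field

X_test = rng.standard_normal((2000, d))
y_test = teacher(X_test)

sizes, errors = [200, 400, 800, 1600, 3200], []
for n in sizes:
    X_train = rng.standard_normal((n, d))
    model = KernelRidge(kernel="laplacian", alpha=1e-6)   # small ridge, near-interpolation
    model.fit(X_train, teacher(X_train))
    errors.append(np.mean((model.predict(X_test) - y_test) ** 2))

# Slope of log(error) vs log(n) gives -beta.
beta = -np.polyfit(np.log(sizes), np.log(errors), 1)[0]
print(f"estimated beta ~ {beta:.2f}")
```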

    An application of group theory to matrices and to ordinary differential equations

    A result concerning groups is proved, from which several applications can be derived. For example, we estimate the number of distinct eigenvalues of the Kronecker product and of the Kronecker sum of two given matrices A and B, when both A and B have distinct eigenvalues. We also discuss the order of the linear ODE whose solutions are the products of solutions of two given linear ODEs, when such ODEs belong to certain classes.
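
    A small numerical illustration of the kind of count involved (the matrices here are arbitrary examples, not taken from the paper): the eigenvalues of the Kronecker product are the pairwise products of the eigenvalues of A and B, and those of the Kronecker sum are the pairwise sums, so the number of distinct eigenvalues is governed by coincidences among these values.

```python
# Illustration (not from the paper): eigenvalues of a Kronecker product are the
# products lambda_i * mu_j, and those of a Kronecker sum are the sums lambda_i + mu_j;
# counting distinct eigenvalues reduces to counting coincidences among these values.
import numpy as np

A = np.diag([1.0, 2.0, 4.0])          # distinct eigenvalues 1, 2, 4
B = np.diag([1.0, 2.0])               # distinct eigenvalues 1, 2

kron_prod = np.kron(A, B)
kron_sum = np.kron(A, np.eye(2)) + np.kron(np.eye(3), B)

print(sorted(np.linalg.eigvals(kron_prod).real))   # 1, 2, 2, 4, 4, 8 -> 4 distinct values
print(sorted(np.linalg.eigvals(kron_sum).real))    # 2, 3, 3, 4, 5, 6 -> 5 distinct values
```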

    Canonical forms and discrete Liouville–Green asymptotics for second-order linear difference equations

    Liouville–Green (WKB) asymptotic approximations are constructed for some classes of linear second-order difference equations. This is done starting from certain "canonical forms" for the three-term linear recurrence. Rigorous explicit bounds are established for the error terms in the asymptotic approximations of recessive as well as dominant solutions. The asymptotics with respect to parameters affecting the equation are also discussed. Several illustrative examples are given.
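
    Schematically (this is the standard discrete Liouville–Green heuristic, not the paper's precise canonical forms or error bounds), for a three-term recurrence with slowly varying coefficients the dominant and recessive solutions are approximated by products of the characteristic roots:

    $$ y_{n+1} + a_n y_n + b_n y_{n-1} = 0, \qquad y_n^{\pm} \approx C_{\pm} \prod_{k \le n} t_k^{\pm}, \qquad (t_k^{\pm})^2 + a_k t_k^{\pm} + b_k = 0. $$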

    Familiarization: A theory of repetition suppression predicts interference between overlapping cortical representations

    Repetition suppression refers to a reduction in the cortical response to a novel stimulus that results from repeated presentation of the stimulus. We demonstrate repetition suppression in a well-established computational model of cortical plasticity, according to which the relative strengths of lateral inhibitory interactions are modified by Hebbian learning. We present the model as an extension to the traditional account of repetition suppression offered by sharpening theory, which emphasises the contribution of afferent plasticity, by instead attributing the effect primarily to plasticity of intra-cortical circuitry. In support, repetition suppression is shown to emerge in simulations with plasticity enabled only in intra-cortical connections. We show in simulation how an extended ‘inhibitory sharpening theory’ can explain the disruption of repetition suppression reported in studies that include an intermediate phase of exposure to additional novel stimuli composed of features similar to those of the original stimulus. The model suggests a re-interpretation of repetition suppression as a manifestation of the process by which an initially distributed representation of a novel object becomes a more localist representation. Thus, inhibitory sharpening may constitute a more general process by which representation emerges from cortical re-organisation.
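
    A toy sketch of the mechanism (not the authors' model; the network size, learning rate, and rectified-linear response rule are illustrative assumptions): Hebbian growth of lateral inhibitory weights between co-active units progressively reduces the summed response to a repeated stimulus.

```python
# Toy sketch (not the paper's model): Hebbian strengthening of lateral inhibition
# reduces the network's response to a repeatedly presented stimulus.
import numpy as np

rng = np.random.default_rng(1)
n_units, eta = 20, 0.002
W_in = rng.random((n_units, 10))        # fixed afferent weights
W_inh = np.zeros((n_units, n_units))    # plastic lateral inhibitory weights

stimulus = rng.random(10)
for trial in range(5):
    drive = W_in @ stimulus                              # afferent drive
    response = np.maximum(drive - W_inh @ drive, 0.0)    # drive minus learned lateral inhibition
    # Hebbian update of inhibition between co-active units (no self-inhibition)
    W_inh += eta * np.outer(response, response)
    np.fill_diagonal(W_inh, 0.0)
    print(f"trial {trial + 1}: total response = {response.sum():.2f}")  # declines across trials
```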

    Semi-implicit discretization of abstract evolution equations


    Geometric effects in the design of catalytic converters in car exhaust pipes

    Introduction: We solve the gas dynamics (Euler) equations, augmented by a fourth equation governing the fraction of unburnt gas, in a number of cylindrically symmetric configurations of the pipe system. Case description: The purpose is to test several duct profiles to see which one favors a higher reduction of the residual noxious gases at the end of a car's exhaust pipe. Discussion and evaluation: It is found that this purely geometric factor does play a role in the environmental purification accomplished by the catalytic converter. This is possibly due to the longer residence time of the noxious gases inside the device for certain profiles, though at the price of a slightly higher temperature. Conclusions: Geometric factors appear to play a role in reducing cars' noxious gases by means of catalytic converters. A more precise analysis should be formulated as a mathematical inverse problem.
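
    Schematically, one common way to augment the Euler equations with the fraction $z$ of unburnt gas is the reactive system below, written here in one space dimension with duct-area terms omitted; the authors' cylindrically symmetric formulation and reaction rate may differ:

    $$ \partial_t \rho + \partial_x(\rho u) = 0, \quad \partial_t(\rho u) + \partial_x(\rho u^2 + p) = 0, \quad \partial_t E + \partial_x\big[(E+p)u\big] = 0, \quad \partial_t(\rho z) + \partial_x(\rho u z) = -\rho\, z\, K(T), $$

    with $K(T)$ an Arrhenius-type reaction rate.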

    Abstract Versions of L'Hôpital's Rule for Holomorphic Functions in the Framework of Complex B-Modules

    Abstract versions of L'Hôpital's rule are proved for the "ratio" f(z)(g(z))^{-1}, where f : S → X and g : S → A are vector-valued holomorphic functions defined in a region of the complex plane containing S, A being a complex unital Banach algebra and X a complex Banach module over A. Both cases, (i) (g(z))^{-1} → 0, and (ii) f(z) → 0, g(z) → 0, as z → α, with α either finite or infinite, are considered when f'(z)(g'(z))^{-1} has a finite limit. Applications are given to the asymptotics of linear second-order differential equations in Banach algebras.
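
    Schematically, the 0/0-type case (ii) has the familiar shape below; the technical hypotheses on invertibility and on the region S are omitted here, and the precise assumptions are those of the paper:

    $$ f(z) \to 0,\ g(z) \to 0 \ \text{as}\ z \to \alpha, \quad \lim_{z\to\alpha} f'(z)\big(g'(z)\big)^{-1} = L \ \Longrightarrow\ \lim_{z\to\alpha} f(z)\big(g(z)\big)^{-1} = L. $$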