    Generalization Properties and Implicit Regularization for Multiple Passes SGM

    We study the generalization properties of stochastic gradient methods for learning with convex loss functions and linearly parameterized functions. We show that, in the absence of penalizations or constraints, the stability and approximation properties of the algorithm can be controlled by tuning either the step-size or the number of passes over the data. In this view, these parameters can be seen to control a form of implicit regularization. Numerical results complement the theoretical findings. Comment: 26 pages, 4 figures. To appear in ICML 2016.
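    As a rough illustration of the step-size/passes viewpoint (not the paper's exact algorithm or notation; function and variable names here are ours), a minimal sketch of multiple-pass SGD for linear least squares, with no explicit penalty, might look like this:

    ```python
    import numpy as np

    def sgd_least_squares(X, y, step_size=0.01, n_passes=5, seed=0):
        """Plain SGD for linear least squares: no penalty, no constraints.

        The only knobs are the step size and the number of passes over
        the data; in the implicit-regularization view, few passes act
        like strong regularization and many passes like weak.
        """
        rng = np.random.default_rng(seed)
        n, d = X.shape
        w = np.zeros(d)
        for _ in range(n_passes):
            for i in rng.permutation(n):          # one pass = one shuffled epoch
                grad = (X[i] @ w - y[i]) * X[i]   # squared-loss gradient at one point
                w -= step_size * grad
        return w
    ```

    Sweeping n_passes while tracking validation error then plays the role usually played by sweeping an explicit regularization parameter.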

    Gradual Generalization of Nautical Chart Contours with a Cubic B-Spline Snake Model

    B-spline snake methods have been used in cartographic generalization over the past decade, particularly for navigational charts, where they yield good results with respect to the shoal-bias rules for generalizing chart contours. However, previous studies only show results at particular generalization (or scale) levels, so the user sees only two states, before and after generalization, with nothing in between. This paper presents an improved B-spline snake method for nautical chart generalization in which the process is performed gradually, letting the user see the complete course of the generalization.
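    The abstract does not reproduce the paper's cubic B-spline snake; as a hedged toy stand-in for the "gradual" idea only, one can record every intermediate contour of an iterative smoothing (internal-energy) step, so the display can animate from source to target scale. A real nautical implementation must also enforce shoal bias (contours may only move toward deeper water), which this sketch omits; all names are ours:

    ```python
    import numpy as np

    def gradual_generalization(contour, alpha=0.25, n_steps=50):
        """Toy stand-in for gradual snake-based contour generalization.

        contour: (N, 2) array of vertex coordinates of a closed contour.
        Each step nudges every vertex toward the midpoint of its two
        neighbours (a discrete internal-energy smoothing step) and keeps
        the whole history, so the display can animate from the source
        contour to the generalized one instead of jumping before/after.
        """
        c = np.asarray(contour, dtype=float)
        states = [c.copy()]
        for _ in range(n_steps):
            left = np.roll(c, 1, axis=0)    # previous vertex (contour is closed)
            right = np.roll(c, -1, axis=0)  # next vertex
            c = c + alpha * ((left + right) / 2.0 - c)
            states.append(c.copy())
        return states                       # one contour per generalization level
    ```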

    Robustness and Generalization

    We derive generalization bounds for learning algorithms based on their robustness: the property that if a testing sample is "similar" to a training sample, then the testing error is close to the training error. This provides a novel approach, distinct from complexity or stability arguments, to studying the generalization of learning algorithms. We further show that a weak notion of robustness is both sufficient and necessary for generalizability, which implies that robustness is a fundamental property for learning algorithms to work.
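    The abstract states no definitions; as a hedged reconstruction of the usual formalization in this line of work (written from memory, not quoted from the paper), robustness partitions the sample space into finitely many cells and asks that the learned loss vary little within each cell:

    ```latex
    % Hedged reconstruction: an algorithm \mathcal{A} trained on a sample
    % set \mathbf{s} of size n is (K, \epsilon(\mathbf{s}))-robust if
    % \mathcal{Z} can be partitioned into K disjoint sets C_1, ..., C_K
    % such that for all s \in \mathbf{s} and z \in \mathcal{Z}:
    s, z \in C_i \;\Longrightarrow\;
      \bigl|\ell(\mathcal{A}_{\mathbf{s}}, s) - \ell(\mathcal{A}_{\mathbf{s}}, z)\bigr|
      \le \epsilon(\mathbf{s}).
    % A bound of the advertised form then reads: with probability at
    % least 1 - \delta, for a loss bounded by M,
    \bigl|L(\mathcal{A}_{\mathbf{s}}) - L_{\mathrm{emp}}(\mathcal{A}_{\mathbf{s}})\bigr|
      \le \epsilon(\mathbf{s}) + M \sqrt{\frac{2K \ln 2 + 2 \ln(1/\delta)}{n}}.
    ```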

    E-Generalization Using Grammars

    We extend the notion of anti-unification to cover equational theories and present a method based on regular tree grammars to compute a finite representation of E-generalization sets. We present a framework to combine Inductive Logic Programming and E-generalization that includes an extension of Plotkin's lgg theorem to the equational case. We demonstrate the potential power of E-generalization with three example applications: computing suggestions for auxiliary lemmas in equational inductive proofs, computing construction laws for given term sequences, and learning screen-editor command sequences. Comment: 49 pages, 16 figures; the author address given in the header is meanwhile outdated. Full version of an article in the Artificial Intelligence Journal; appeared as a technical report in 2003. An open-source C implementation and some examples are provided as ancillary files.
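    E-generalization itself needs the tree-grammar machinery, but the syntactic base case the paper extends, Plotkin's least general generalization (lgg), is short enough to sketch; the term representation and names below are ours:

    ```python
    def lgg(t1, t2, table=None):
        """Plotkin's least general generalization of two first-order terms.

        Terms are tuples ('f', arg1, ..., argn) or atoms (constants).
        This is the purely syntactic case; the equational
        (E-generalization) case additionally needs regular tree grammars.
        """
        if table is None:
            table = {}
        if t1 == t2:
            return t1
        # same function symbol and arity: generalize argument-wise
        if (isinstance(t1, tuple) and isinstance(t2, tuple)
                and t1[0] == t2[0] and len(t1) == len(t2)):
            return (t1[0],) + tuple(lgg(a, b, table)
                                    for a, b in zip(t1[1:], t2[1:]))
        # otherwise introduce one variable per distinct pair of subterms,
        # reusing it on repeated occurrences so shared structure is kept
        if (t1, t2) not in table:
            table[(t1, t2)] = f"X{len(table)}"
        return table[(t1, t2)]

    # lgg of f(a, a) and f(b, b) is f(X0, X0): the repeated variable
    # records that both arguments were equal within each input term.
    print(lgg(('f', 'a', 'a'), ('f', 'b', 'b')))   # ('f', 'X0', 'X0')
    ```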

    Projective simulation with generalization

    The ability to generalize is an important feature of any intelligent agent, not only because it may allow the agent to cope with large amounts of data, but also because in some environments an agent without generalization capabilities cannot learn at all. In this work we outline several criteria for generalization and present a dynamic, autonomous machinery that enables projective simulation agents to meaningfully generalize. Projective simulation, a novel physical approach to artificial intelligence, was recently shown to perform well in standard reinforcement learning problems, with applications in advanced robotics as well as quantum experiments. Both the basic projective simulation model and the presented generalization machinery are based on very simple principles, which allows us to provide a full analytical analysis of the agent's performance and to illustrate the benefit the agent gains by generalizing. Specifically, we show that learning without generalization can be impossible even in basic (but extreme) environments, and demonstrate how the presented machinery enables the projective simulation agent to learn. Comment: 14 pages, 9 figures.
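    As a hedged minimal sketch of the basic two-layer projective simulation model the paper builds on (class and parameter names are ours; the paper's generalization machinery, which adds intermediate wildcard clips, is not shown): h-values on percept-to-action edges define hopping probabilities for the random walk, rewards reinforce traversed edges, and a damping term forgets.

    ```python
    import numpy as np

    class TwoLayerPS:
        """Minimal two-layer projective simulation agent (no generalization).

        One clip per percept and per action; a single hop over the
        percept->action edges, with probabilities proportional to the
        h-values, selects the action.
        """
        def __init__(self, n_percepts, n_actions, gamma=0.02, seed=0):
            self.h = np.ones((n_percepts, n_actions))  # initial uniform h-values
            self.gamma = gamma                         # damping / forgetting rate
            self.rng = np.random.default_rng(seed)

        def act(self, percept):
            p = self.h[percept] / self.h[percept].sum()   # hopping probabilities
            return self.rng.choice(len(p), p=p)

        def learn(self, percept, action, reward):
            # damp all h-values toward 1, then reinforce the traversed edge
            self.h += -self.gamma * (self.h - 1.0)
            self.h[percept, action] += reward
    ```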

    Generalization of Gutzwiller Approximation

    We derive expressions required in generalizing the Gutzwiller approximation to models comprising arbitrarily degenerate localized orbitals. Comment: 6 pages, 1 figure, to appear in J.Phys.Soc.Jpn. vol.6
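    The abstract carries no formulas; for orientation only, here is a hedged reminder (quoted from memory, not from this paper) of the textbook single-band, paramagnetic special case that such multi-orbital generalizations reduce to: the kinetic energy is renormalized by a factor q depending on the band filling n and the double-occupancy fraction d.

    ```latex
    % Single-band paramagnetic Gutzwiller hopping renormalization
    % (textbook special case, not taken from this paper):
    q(n, d) =
      \frac{\Bigl[\sqrt{(1 - n + d)\bigl(\tfrac{n}{2} - d\bigr)}
            + \sqrt{d\bigl(\tfrac{n}{2} - d\bigr)}\Bigr]^{2}}
           {\tfrac{n}{2}\bigl(1 - \tfrac{n}{2}\bigr)}
    % At half filling (n = 1) this reduces to q = 8d(1 - 2d), the
    % Brinkman--Rice form, which vanishes as d -> 0.
    ```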