132 research outputs found

    Analytical learning and term-rewriting systems

    Analytical learning is a set of machine learning techniques for revising the representation of a theory based on a small set of examples of that theory. When the representation of the theory is correct and complete but perhaps inefficient, an important objective of such analysis is to improve the computational efficiency of the representation. Several algorithms with this purpose have been suggested, most of which are closely tied to a first-order logical language and are variants of goal regression, such as the familiar explanation-based generalization (EBG) procedure. But because predicate calculus is a poor representation for some domains, these learning algorithms are extended to apply to other computational models. It is shown that the goal regression technique applies to a large family of programming languages, all based on a kind of term-rewriting system. Included in this family are three language families of importance to artificial intelligence: logic programming, such as Prolog; lambda calculus, such as LISP; and combinator-based languages, such as FP. A new analytical learning algorithm, AL-2, is exhibited that learns from success but is otherwise quite different from EBG. These results suggest that term-rewriting systems are a good framework for analytical learning research in general, and that further research should be directed toward developing new techniques.
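    As a rough illustration of the term-rewriting setting these results build on (a minimal sketch, not the AL-2 algorithm itself; the pattern syntax and the Peano-addition rules below are invented for illustration):

    ```python
    # Minimal term-rewriting sketch: terms are nested tuples, rules are
    # (pattern, replacement) pairs, and variables are strings starting
    # with '?'. Illustrative only; not taken from the paper.

    def match(pattern, term, env=None):
        """Match `pattern` against `term`, binding variables in `env`.
        Returns the binding dict on success, or None on failure."""
        env = dict(env or {})
        if isinstance(pattern, str) and pattern.startswith('?'):
            if pattern in env:
                return env if env[pattern] == term else None
            env[pattern] = term
            return env
        if isinstance(pattern, tuple) and isinstance(term, tuple) \
                and len(pattern) == len(term):
            for p, t in zip(pattern, term):
                env = match(p, t, env)
                if env is None:
                    return None
            return env
        return env if pattern == term else None

    def substitute(template, env):
        """Instantiate `template` by replacing variables from `env`."""
        if isinstance(template, str) and template.startswith('?'):
            return env[template]
        if isinstance(template, tuple):
            return tuple(substitute(t, env) for t in template)
        return template

    def rewrite(term, rules):
        """Rewrite `term` to normal form, innermost subterms first."""
        if isinstance(term, tuple):
            term = tuple(rewrite(t, rules) for t in term)
        for pattern, replacement in rules:
            env = match(pattern, term)
            if env is not None:
                return rewrite(substitute(replacement, env), rules)
        return term

    # Peano-style addition: add(0, y) -> y; add(s(x), y) -> s(add(x, y))
    rules = [
        (('add', '0', '?y'), '?y'),
        (('add', ('s', '?x'), '?y'), ('s', ('add', '?x', '?y'))),
    ]
    two, one = ('s', ('s', '0')), ('s', '0')
    print(rewrite(('add', two, one), rules))  # ('s', ('s', ('s', '0')))
    ```

    Goal regression in this setting works backwards over such rule applications; the sketch only shows the forward rewriting machinery the family of languages shares.
    
    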

    Aspects of the constructive omega rule within automated deduction

    In general, cut elimination holds for arithmetical systems with the ω-rule, but not for systems with ordinary induction. Hence in the latter there is the problem of generalisation, since arbitrary formulae can be cut in. This makes automatic theorem-proving very difficult. An important technique for investigating derivability in formal systems of arithmetic has been to embed such systems into semi-formal systems with the ω-rule. This thesis describes the implementation of such a system. Moreover, an important application is presented in the form of a new method of generalisation by means of "guiding proofs" in the stronger system, which sometimes succeeds in producing proofs in the original system when other methods fail.
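    For reference, the ω-rule referred to here is the standard infinitary inference rule with one premise per numeral (a textbook formulation, not specific to this thesis):

    ```latex
    % From a proof of A(n) for every numeral n, conclude the universal:
    \[
    \frac{\vdash A(\bar{0}) \qquad \vdash A(\bar{1}) \qquad \vdash A(\bar{2}) \qquad \cdots}
         {\vdash \forall x\, A(x)}
    \]
    ```

    The constructive variant additionally requires the infinitely many premise proofs to be given uniformly by an effective (recursive) procedure, which is what makes the rule usable in an implemented system.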

    A stochastic algorithm for probabilistic independent component analysis

    The decomposition of a sample of images on a relevant subspace is a recurrent problem in many different fields, from computer vision to medical image analysis. We propose in this paper a new learning principle and implementation of the generative decomposition model generally known as noisy ICA (independent component analysis), based on the SAEM algorithm, which is a versatile stochastic approximation of the standard EM algorithm. We demonstrate the applicability of the method on a large range of decomposition models and illustrate the developments with experimental results on various data sets. Comment: Published at http://dx.doi.org/10.1214/11-AOAS499 in the Annals of Applied Statistics (http://www.imstat.org/aoas/) by the Institute of Mathematical Statistics (http://www.imstat.org).
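    To give a feel for the SAEM idea (simulate the latent variables, then average sufficient statistics with a decreasing step size before the M-step), here is a toy sketch on a linear Gaussian latent model x = A s + noise, used as a stand-in for the paper's noisy-ICA model; all dimensions, step sizes, and iteration counts are invented:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Toy data from x = A s + noise, Gaussian sources (illustrative only).
    d, k, n = 5, 2, 2000
    A_true = rng.normal(size=(d, k))
    X = A_true @ rng.normal(size=(k, n)) + 0.1 * rng.normal(size=(d, n))

    A = rng.normal(size=(d, k))   # initial mixing-matrix estimate
    sigma2 = 1.0                  # initial noise variance
    Sxz = np.zeros((d, k))        # running averages of sufficient statistics
    Szz = np.zeros((k, k))
    for t in range(1, 201):
        # Simulated E-step: sample sources from their Gaussian posterior.
        P = np.linalg.inv(A.T @ A / sigma2 + np.eye(k))  # posterior covariance
        M = P @ A.T @ X / sigma2                         # posterior means
        Z = M + np.linalg.cholesky(P) @ rng.normal(size=(k, n))
        # Stochastic-approximation step: burn-in with full replacement,
        # then average with a decreasing step size (the SAEM schedule).
        gamma = 1.0 if t <= 50 else 1.0 / (t - 50)
        Sxz = (1 - gamma) * Sxz + gamma * (X @ Z.T / n)
        Szz = (1 - gamma) * Szz + gamma * (Z @ Z.T / n)
        # Exact M-step given the averaged statistics.
        A = Sxz @ np.linalg.inv(Szz)
        sigma2 = np.mean((X - A @ M) ** 2)

    # The span of A's columns should approach that of A_true
    # (identifiable only up to a rotation of the Gaussian sources).
    ```

    The paper's actual model and updates differ (non-Gaussian sources are what makes it ICA); the sketch only shows the simulate-then-average structure that distinguishes SAEM from plain EM.
    
    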

    Super-rough phase of the random-phase sine-Gordon model: Two-loop results

    We consider the two-dimensional random-phase sine-Gordon model and study the vicinity of its glass transition temperature $T_c$, in an expansion in small $\tau = (T_c - T)/T_c$, where $T$ denotes the temperature. We derive renormalization group equations to cubic order in the anharmonicity, and show that they contain two universal invariants. Using them, we obtain that the correlation function in the super-rough phase for temperatures $T < T_c$ grows at large distances as $\mathcal{A}\ln^2(|x|/a) + \mathcal{O}[\ln(|x|/a)]$, where the amplitude $\mathcal{A}$ is a universal function of temperature, $\mathcal{A} = 2\tau^2 - 2\tau^3 + \mathcal{O}(\tau^4)$. This result differs at two-loop order, i.e. at $\mathcal{O}(\tau^3)$, from the prediction based on results from the "nearly conformal" field theory of a related fermion model. We also obtain the correction-to-scaling exponent. Comment: 34 pages.

    Efficient dynamic optimization of logic programs

    A summary is given of the dynamic optimization approach to speeding up learning for logic programs. The problem is to restructure a recursive program into an equivalent program whose expected performance is optimal for an unknown but fixed population of problem instances. We define the term 'optimal' relative to the source of input instances and sketch an algorithm that can come within a logarithmic factor of optimal with high probability. Finally, we show that finding high-utility unfolding operations (such as EBG) can be reduced to clause reordering.
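    To give a feel for the clause-reordering subproblem, here is a minimal sketch (not the paper's algorithm; the costs and probabilities are invented): when independent clause alternatives are tried in order until one succeeds, expected cost is minimised by sorting clauses on cost divided by success probability.

    ```python
    from itertools import permutations

    def expected_cost(order):
        """Expected total cost of trying clauses in `order` until one succeeds.
        Each clause is a (cost, success_probability) pair, independent of the rest."""
        cost, p_reach = 0.0, 1.0
        for c, p in order:
            cost += p_reach * c      # pay c only if all earlier clauses failed
            p_reach *= 1 - p         # probability of still needing the next clause
        return cost

    # Invented example clauses: (execution cost, probability of success).
    clauses = [(5.0, 0.9), (1.0, 0.1), (2.0, 0.5)]

    # Greedy rule: ascending cost / success-probability ratio.
    greedy = sorted(clauses, key=lambda cp: cp[0] / cp[1])

    # Brute force over all orders confirms the greedy order is optimal here.
    best = min(permutations(clauses), key=expected_cost)
    assert expected_cost(greedy) == expected_cost(best)
    print(greedy)  # [(2.0, 0.5), (5.0, 0.9), (1.0, 0.1)]
    ```

    The ratio rule follows from comparing the expected cost of two adjacent clauses in either order: putting clause 1 first is cheaper exactly when c1/p1 < c2/p2.
    
    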

    Learning control knowledge within an explanation-based learning framework


    Reflexive standardization and standardized reflexivity

