
    The current approaches in pattern recognition


    An automatic learning of grammar for syntactic pattern recognition, 1988

    The practical utility of a syntactic pattern recognizer depends on automatic learning of pattern class grammars from a sample of patterns. The basic idea is to devise a learning process based on the induction of repeated substrings. Several techniques based on formal derivatives, k-tails, lattice structures, structural information sequences, inductive inference, and heuristic approaches are widely found in the literature. The purpose of this research is to first devise a minimal finite-state automaton which recognizes all patterns. The automaton is then manipulated so that the induction of repetition is captured by cycles or loops. The final phase consists of converting the reduced automaton into a context-free grammar. An automatic parser for this grammar can then recognize the patterns belonging to the respective class.
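
    As a minimal illustration of the final phase described above (a sketch, not the paper's algorithm; the state names, symbols, and example automaton are hypothetical), a finite automaton can be read off as a right-linear, hence context-free, grammar: each state becomes a nonterminal, each transition δ(q, a) = q′ becomes a production q → a q′, and each accepting state gets q → ε. A cycle in the automaton then shows up as recursion in the grammar:

```python
def automaton_to_grammar(states, alphabet, delta, start, accepting):
    """Convert a finite automaton to a right-linear (context-free) grammar.

    Each state is reused as a nonterminal; each transition
    delta[(q, a)] = q2 yields a production q -> a q2, and every
    accepting state q yields q -> epsilon.
    """
    productions = []
    for (q, a), q2 in delta.items():
        productions.append((q, [a, q2]))
    for q in accepting:
        productions.append((q, []))          # q -> epsilon
    return start, productions

# Hypothetical example: a loop on state Q1 captures induced repetition,
# so the automaton accepts patterns of the form a b* c.
delta = {("Q0", "a"): "Q1",
         ("Q1", "b"): "Q1",                  # cycle -> recursion in grammar
         ("Q1", "c"): "Q2"}
start, prods = automaton_to_grammar({"Q0", "Q1", "Q2"}, {"a", "b", "c"},
                                    delta, "Q0", {"Q2"})
for lhs, rhs in prods:
    print(f"{lhs} -> {' '.join(rhs) or 'ε'}")
```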

    Using natural language for database queries


    Parameter Learning of Logic Programs for Symbolic-Statistical Modeling

    We propose a logical/mathematical framework for statistical parameter learning of parameterized logic programs, i.e. definite clause programs containing probabilistic facts with a parameterized distribution. It extends the traditional least Herbrand model semantics in logic programming to distribution semantics, a possible-world semantics with a probability distribution which is unconditionally applicable to arbitrary logic programs, including ones for HMMs, PCFGs, and Bayesian networks. We also propose a new EM algorithm, the graphical EM algorithm, that runs for a class of parameterized logic programs representing sequential decision processes where each decision is exclusive and independent. It runs on a new data structure called support graphs, describing the logical relationship between observations and their explanations, and learns parameters by computing inside and outside probabilities generalized for logic programs. The complexity analysis shows that, when combined with OLDT search for all explanations for observations, the graphical EM algorithm, despite its generality, has the same time complexity as existing EM algorithms, i.e. the Baum-Welch algorithm for HMMs, the Inside-Outside algorithm for PCFGs, and the one for singly connected Bayesian networks, which have been developed independently in each research field. Learning experiments with PCFGs using two corpora of moderate size indicate that the graphical EM algorithm can significantly outperform the Inside-Outside algorithm.
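
    A minimal sketch of the inside computation over a support graph, under the exclusiveness and independence conditions stated above (the node and edge representation here is hypothetical, not the paper's data structure): the probability of a goal is the sum, over its mutually exclusive explanations, of the product of the probabilities of the facts and subgoals in each explanation, with shared subgoals memoized so each is computed only once. That sharing is what gives the generalized inside-outside computation its Baum-Welch-like time complexity:

```python
def inside(graph, fact_prob, goal, memo=None):
    """Compute the inside probability of `goal` bottom-up over a support graph.

    graph: dict mapping each goal to its list of explanations; each
    explanation is a list of items, where an item is either a probabilistic
    fact name (a key of fact_prob) or another goal (a key of graph).
    Explanations are assumed exclusive (sum) and their items independent
    (product), per the conditions in the abstract.
    """
    if memo is None:
        memo = {}
    if goal in memo:                          # shared subgoal: computed once
        return memo[goal]
    total = 0.0
    for explanation in graph[goal]:           # exclusive explanations: sum
        p = 1.0
        for item in explanation:              # independent items: product
            if item in fact_prob:
                p *= fact_prob[item]
            else:
                p *= inside(graph, fact_prob, item, memo)
        total += p
    memo[goal] = total
    return total

# Hypothetical toy example: two exclusive explanations sharing subgoal "s".
graph = {"top": [["f1", "s"], ["f2", "s"]],
         "s":   [["f3"], ["f4"]]}
fact_prob = {"f1": 0.3, "f2": 0.7, "f3": 0.5, "f4": 0.2}
print(inside(graph, fact_prob, "top"))        # (0.3 + 0.7) * (0.5 + 0.2) = 0.7
```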