14 research outputs found

    Regular Language Induction with Grammar-based Classifier System

    Unsupervised Statistical Learning of Context-free Grammar

    In this paper, we address the problem of inducing a weighted context-free grammar (WCFG) from given data. The induction is performed using a new model of grammatical inference, the weighted Grammar-based Classifier System (wGCS). wGCS derives from learning classifier systems and searches the grammar structure using a genetic algorithm and covering. The weights of the rules are estimated using a novel Inside-Outside Contrastive Estimation algorithm. The proposed method employs direct negative evidence and learns the WCFG from both positive and negative samples. Results of experiments on three synthetic context-free languages show that wGCS is competitive with other statistical methods for unsupervised CFG learning.
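    To make the object of the induction concrete: a weighted CFG attaches a non-negative weight to each production, and a string's score is the sum, over its derivations, of the product of the weights used. The sketch below illustrates this with a weighted CYK/inside pass over a toy grammar in Chomsky normal form; the rule set and function names are illustrative assumptions, not the paper's wGCS implementation.

```python
# Illustrative sketch only: a toy weighted CFG in Chomsky normal form
# and a weighted CYK/inside pass that sums rule weights over all parses.
# The rules and names are assumptions, not the paper's wGCS code.
from collections import defaultdict

# CNF rules: (head, (char,)) for terminals, (head, (left, right)) for pairs.
RULES = {
    ("S", ("A", "B")): 0.9,
    ("A", ("a",)): 1.0,
    ("B", ("b",)): 1.0,
}

def inside_score(word):
    """Sum of products of rule weights over all derivations of `word` from S."""
    n = len(word)
    chart = defaultdict(float)          # (i, j, nonterminal) -> inside weight
    for i, ch in enumerate(word):       # width-1 spans: terminal rules
        for (head, body), w in RULES.items():
            if body == (ch,):
                chart[(i, i + 1, head)] += w
    for width in range(2, n + 1):       # wider spans: binary rules
        for i in range(n - width + 1):
            j = i + width
            for k in range(i + 1, j):   # split point between the two children
                for (head, body), w in RULES.items():
                    if len(body) == 2:
                        l, r = body
                        chart[(i, j, head)] += w * chart[(i, k, l)] * chart[(k, j, r)]
    return chart[(0, n, "S")]

print(inside_score("ab"))   # 0.9: the single derivation S -> A B -> a b
```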

    Anticipatory Classifier System with Average Reward Criterion in Discretized Multi-Step Environments

    Initially, Anticipatory Classifier Systems (ACS) were designed to address both single-step and multi-step decision problems. In the latter case, the objective was to maximize the total discounted reward, usually based on Q-learning algorithms. Studies on other Learning Classifier Systems (LCS) revealed many real-world sequential decision problems where the preferred objective is the maximization of the average of successive rewards. This paper proposes a relevant modification of the learning component that allows such problems to be addressed. The modified system is called AACS2 (Averaged ACS2) and is tested on three multi-step benchmark problems.
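    As background on the average-reward criterion the paper adopts: in place of discounting, R-learning-style methods maintain a running estimate of the average reward per step and subtract it in the temporal-difference error. The sketch below shows the classical update rules under assumed parameter and function names; it is generic background, not the actual AACS2 learning component.

```python
# Background sketch of an average-reward (R-learning-style) update rule;
# names, parameters, and the flat Q-table are assumptions, not AACS2 code.
import random
from collections import defaultdict

ALPHA, BETA, EPSILON = 0.1, 0.01, 0.1  # value step, average-reward step, exploration

Q = defaultdict(float)   # relative action values, keyed by (state, action)
rho = 0.0                # running estimate of the average reward per step

def learn_step(state, actions, env_step):
    """One learning step; `env_step(state, action)` returns (reward, next_state)."""
    global rho
    greedy = max(actions, key=lambda a: Q[(state, a)])
    action = random.choice(actions) if random.random() < EPSILON else greedy
    reward, nxt = env_step(state, action)
    best_next = max(Q[(nxt, a)] for a in actions)
    best_cur = max(Q[(state, a)] for a in actions)
    # Average-reward TD error: the estimate rho replaces the discount factor.
    Q[(state, action)] += ALPHA * (reward - rho + best_next - Q[(state, action)])
    if action == greedy:  # classical R-learning adjusts rho on greedy moves only
        rho += BETA * (reward - rho + best_next - best_cur)
    return nxt
```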

    Improved (Non)fixed TSS methods for promoter prediction

    Recognizing bacterial promoters is an important step towards understanding gene regulation. In this paper, we address the problem of predicting the location of promoters and their transcription start sites (TSSs) in Escherichia coli. Our approaches to TSS prediction are based on fixed and non-fixed TSS algorithms. The introduced improvements significantly increased the efficiency of the algorithms.

    Parsing expression grammars and their induction algorithm

    Grammatical inference (GI), i.e., the task of finding a rule that lies behind given words, can be used in the analysis of amyloidogenic sequence fragments, which are essential in studies of neurodegenerative diseases. In this paper, we develop a new method that generates non-circular parsing expression grammars (PEGs) and compare it with other GI algorithms on sequences from a real dataset. The main contribution of this paper is a genetic programming-based algorithm for the induction of parsing expression grammars from a finite sample. The induction method has been tested on a real bioinformatics dataset, and its classification performance has been compared with that of existing grammatical inference methods. The evaluation of the generated PEG on an amyloidogenic dataset revealed its accuracy when predicting amyloid segments. We show that the new grammatical inference algorithm achieves the best ACC (accuracy), AUC (area under the ROC curve), and MCC (Matthews correlation coefficient) scores in comparison to five other automata or grammar learning methods.
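    For readers less familiar with PEGs: unlike a CFG, a PEG's choice operator is ordered, so the first alternative that succeeds is committed to and the remaining ones are never tried. The toy combinator sketch below illustrates exactly this semantics; the grammar and helper names are hypothetical and unrelated to the grammars induced in the paper.

```python
# Toy illustration of PEG semantics (ordered choice, committed matching);
# the grammar and function names are hypothetical, not the induced PEGs.

def lit(ch):
    """Match a single literal character."""
    def parse(s, i):
        return i + 1 if i < len(s) and s[i] == ch else None
    return parse

def seq(*parsers):
    """Match the given parsers one after another."""
    def parse(s, i):
        for p in parsers:
            i = p(s, i)
            if i is None:
                return None
        return i
    return parse

def choice(*parsers):
    """Ordered choice: commit to the FIRST alternative that succeeds."""
    def parse(s, i):
        for p in parsers:
            j = p(s, i)
            if j is not None:
                return j
        return None
    return parse

# A <- "ab" / "a"   (ordered: "ab" is tried before "a")
A = choice(seq(lit("a"), lit("b")), lit("a"))
print(A("ab", 0))   # 2: the first alternative consumed both characters
print(A("ac", 0))   # 1: falls back to the second alternative
```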

    How to measure the topological quality of protein grammars?

    Motivation: Context-free (CF) and context-sensitive (CS) formal grammars are often regarded as more appropriate for modeling proteins than regular-level models such as finite state automata and Hidden Markov Models (HMMs). In theory, the claim is well founded in the fact that many biologically relevant interactions between residues of protein sequences have the character of nested or crossed dependencies. In practice, there is hardly any evidence that grammars of higher expressiveness have an edge over the good old HMMs in typical applications, including recognition and classification of protein sequences. This is in contrast to RNA modeling, where CFGs power some of the most successful tools. Several explanations of this phenomenon have been proposed. On the biology side, one difficulty is that interactions in proteins are often less specific and more "collective" in comparison to RNA. On the modeling side, a difficulty is the larger alphabet, which, combined with the high complexity of CF and CS grammars, imposes considerable trade-offs consisting in information reduction or learning sub-optimal solutions. Indeed, some studies hinted that the CF level of expressiveness brought an added value in protein modeling when CF and regular grammars were implemented in the same framework (Dyrka, 2007; Dyrka et al., 2013). However, there has been no systematic study of the explanatory power provided by various grammatical models. The first step towards this goal is to define objective criteria for such an evaluation. Intuitively, a decent explanatory grammar should generate a topology, i.e., a parse tree, consistent with the topology of the protein, i.e., its secondary and/or tertiary structure. In this piece of research we build on this intuition and propose a set of measures to compare the topology of the parse tree of a grammar with the topology of the protein structure.
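    One way such a comparison could be operationalized, offered purely as a hypothetical sketch rather than the paper's actual proposal, is to compare leaf-to-leaf distances in the parse tree against a contact map of the structure: residues that are close in the tree should tend to be in spatial contact.

```python
# Hypothetical sketch of one possible topological measure, not the paper's:
# compare leaf-to-leaf parse-tree distances with a structural contact map.
import itertools

def tree_distance(paths, u, v):
    """Edge distance between leaves u and v, where paths[i] lists the
    internal nodes from the root down to the parent of leaf i."""
    common = 0
    for a, b in zip(paths[u], paths[v]):
        if a != b:
            break
        common += 1
    # each leaf hangs one edge below the last internal node on its path
    return (len(paths[u]) - common + 1) + (len(paths[v]) - common + 1)

def contact_agreement(paths, contacts, n_residues, cutoff=2):
    """Fraction of residue pairs where 'close in the parse tree'
    agrees with 'in spatial contact' according to the contact map."""
    hits = total = 0
    for u, v in itertools.combinations(range(n_residues), 2):
        close_in_tree = tree_distance(paths, u, v) <= cutoff
        hits += close_in_tree == ((u, v) in contacts)
        total += 1
    return hits / total

# Toy parse tree S -> X Y with leaves 0,1 under X and 2,3 under Y,
# and an assumed contact map pairing the sibling residues.
paths = [["S", "X"], ["S", "X"], ["S", "Y"], ["S", "Y"]]
print(contact_agreement(paths, {(0, 1), (2, 3)}, 4))  # 1.0: perfect agreement
```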