
    Adaptation-Based Programming in Haskell

    We present an embedded DSL to support adaptation-based programming (ABP) in Haskell. ABP is an abstract model for defining adaptive values, called adaptives, which adapt in response to some associated feedback. We show how our design choices in Haskell motivate higher-level combinators and constructs and help us derive more complicated compositional adaptives. We also show that an important specialization of ABP supports reinforcement-learning constructs, which optimize adaptive values based on a programmer-specified objective function. This permits ABP users to easily define adaptive values that express uncertainty anywhere in their programs. Over repeated executions, these adaptive values adjust toward more efficient ones and enable the user's programs to self-optimize. The design of our DSL depends significantly on the use of type classes, and we illustrate, alongside the DSL itself, how type classes can support the gradual evolution of DSLs. Comment: In Proceedings DSL 2011, arXiv:1109.032
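
    The core idea travels well outside Haskell. As a minimal sketch only (the paper's DSL is a Haskell type-class design; the names Adaptive, suggest, and feedback below are invented for illustration), an adaptive value can be modeled as an object that suggests one of several alternatives, receives feedback, and drifts toward whichever alternative scores best:

    import random

    class Adaptive:
        """One adaptive value: suggests an alternative, then shifts toward
        the alternatives that earn the best feedback. Illustrative only;
        the paper's design is a Haskell type-class DSL."""

        def __init__(self, alternatives, epsilon=0.1):
            self.alternatives = list(alternatives)
            self.epsilon = epsilon                   # exploration rate
            self.totals = {a: 0.0 for a in self.alternatives}
            self.counts = {a: 0 for a in self.alternatives}
            self.last = None

        def suggest(self):
            """Usually return the best alternative so far, occasionally a random one."""
            if random.random() < self.epsilon:
                self.last = random.choice(self.alternatives)
            else:
                self.last = max(self.alternatives,
                                key=lambda a: self.totals[a] / max(self.counts[a], 1))
            return self.last

        def feedback(self, reward):
            """Attach feedback to the most recently suggested value."""
            self.totals[self.last] += reward
            self.counts[self.last] += 1

    # Over repeated executions the program "self-optimizes": the adaptive
    # converges on whichever alternative yields the highest reward.
    buf = Adaptive([256, 1024, 4096])
    for _ in range(1000):
        size = buf.suggest()
        cost = {256: 3.0, 1024: 1.0, 4096: 2.0}[size] + random.random()
        buf.feedback(-cost)                          # lower cost = better feedback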

    Learning From Instruction and Experience: Methods for Incorporating Procedural Domain Theories Into Knowledge-Based Neural Networks

    This thesis defines and evaluates two systems that allow a teacher to provide instructions to a machine learner. My systems, FSkbann and ratle, expand the language that a teacher may use to provide advice to the learner. In particular, my techniques allow a teacher to give partially correct instructions about procedural tasks -- tasks that are solved as sequences of steps. FSkbann and ratle allow a computer to learn both from instruction and from experience. Experiments with these systems on several testbeds demonstrate that they produce learners that successfully use and refine the instructions they are given. In my initial approach, FSkbann, the teacher provides instructions as a set of propositional rules organized around one or more finite-state automata (FSAs). FSkbann maps the knowledge in the rules and FSAs into a recurrent neural network. I used FSkbann to refine the Chou-Fasman algorithm, a method for solving the secondary-structure prediction problem, a difficult task in mol..
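
    The rule-to-network mapping that FSkbann inherits from KBANN is concrete enough to sketch. Under the standard KBANN encoding, each propositional rule becomes a sigmoid unit: antecedents that appear positively get weight +omega, negated antecedents get -omega, and the bias is chosen so the unit fires only when every antecedent holds. The helpers below are hypothetical (and omit the recurrent links that carry FSA state):

    import numpy as np

    def rule_to_unit(n_inputs, positive, negated, omega=4.0):
        """Encode one propositional rule as the weights and bias of a
        sigmoid unit, KBANN-style. `positive` and `negated` are index
        lists of the rule's antecedents among the n_inputs inputs."""
        w = np.zeros(n_inputs)
        w[list(positive)] = omega
        w[list(negated)] = -omega
        bias = -omega * (len(positive) - 0.5)   # fires only if all antecedents hold
        return w, bias

    def unit_output(x, w, bias):
        return 1.0 / (1.0 + np.exp(-(w @ x + bias)))

    # Rule: fire <- b AND c AND (NOT d), over boolean inputs [a, b, c, d].
    w, bias = rule_to_unit(4, positive=[1, 2], negated=[3])
    print(unit_output(np.array([0, 1, 1, 0]), w, bias))   # ~0.88: rule satisfied
    print(unit_output(np.array([0, 1, 0, 0]), w, bias))   # ~0.12: antecedent c missing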

    Popular ensemble methods: an empirical study

    An ensemble consists of a set of individually trained classifiers (such as neural networks or decision trees) whose predictions are combined when classifying novel instances. Previous research has shown that an ensemble is often more accurate than any of the single classifiers in the ensemble. Bagging (Breiman, 1996c) and Boosting (Freund & Schapire, 1996; Schapire, 1990) are two relatively new but popular methods for producing ensembles. In this paper we evaluate these methods on 23 data sets using both neural networks and decision trees as our classification algorithms. Our results support a number of conclusions. First, while Bagging is almost always more accurate than a single classifier, it is sometimes much less accurate than Boosting. On the other hand, Boosting can create ensembles that are less accurate than a single classifier, especially when using neural networks. Analysis indicates that the performance of the Boosting methods depends on the characteristics of the data set being examined. In fact, further results show that Boosting ensembles may overfit noisy data sets, thus decreasing their performance. Finally, consistent with previous studies, our work suggests that most of the gain in an ensemble's performance comes in the first few classifiers combined; however, relatively large gains can be seen up to 25 classifiers when Boosting decision trees.
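
    The comparison is easy to reproduce in miniature with scikit-learn's stock implementations. The sketch below runs a single tree against Bagging and Boosting ensembles of 25 trees on one dataset; it only mirrors the shape of the study (which used 23 data sets and its own learners), so the numbers will differ:

    from sklearn.datasets import load_breast_cancer
    from sklearn.ensemble import AdaBoostClassifier, BaggingClassifier
    from sklearn.model_selection import cross_val_score
    from sklearn.tree import DecisionTreeClassifier

    X, y = load_breast_cancer(return_X_y=True)
    base = DecisionTreeClassifier(max_depth=3, random_state=0)   # depth-limited base learner

    models = {
        "single tree": base,
        "bagging (25 trees)": BaggingClassifier(base, n_estimators=25, random_state=0),
        "boosting (25 trees)": AdaBoostClassifier(base, n_estimators=25, random_state=0),
    }
    for name, model in models.items():
        print(f"{name}: {cross_val_score(model, X, y, cv=10).mean():.3f}")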

    Incorporating Advice Into Agents That Learn From Reinforcements

    Learning from reinforcements is a promising approach for creating intelligent agents. However, reinforcement learning usually requires a large number of training episodes. We present an approach that addresses this shortcoming by allowing a connectionist Q-learner to accept advice given, at any time and in a natural manner, by an external observer. In our approach, the advice-giver watches the learner and occasionally makes suggestions, expressed as instructions in a simple programming language. Based on techniques from knowledge-based neural networks, these programs are inserted directly into the agent's utility function. Subsequent reinforcement learning further integrates and refines the advice. We present empirical evidence showing that our approach leads to statistically significant gains in expected reward. Importantly, the advice improves the expected reward regardless of the stage of training at which it is given.
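
    The mechanism can be rendered in a toy tabular form (the paper's learner is connectionist, and every name and environment detail below is invented): advice of the form "in states like these, prefer this action" becomes an additive bias on the advised action's utility, which subsequent Q-learning refines or overrides.

    import random
    from collections import defaultdict

    Q = defaultdict(float)                  # Q[(state, action)] -> estimated utility
    actions = ["left", "right"]
    alpha, gamma, epsilon = 0.5, 0.9, 0.1

    def give_advice(condition, action, strength=1.0):
        """Install advice as a bias on the advised action's utility in every
        state matching `condition`; later learning refines or overrides it."""
        for s in range(10):
            if condition(s):
                Q[(s, action)] += strength

    def step(s, a):
        """Toy chain environment: reward only for reaching the right end."""
        s2 = min(s + 1, 9) if a == "right" else max(s - 1, 0)
        return s2, (1.0 if s2 == 9 else 0.0)

    give_advice(lambda s: s >= 5, "right")  # "past the midpoint, head right"

    s = 0
    for _ in range(5000):                   # ordinary Q-learning continues afterwards
        if random.random() < epsilon:
            a = random.choice(actions)
        else:
            a = max(actions, key=lambda act: Q[(s, act)])
        s2, r = step(s, a)
        Q[(s, a)] += alpha * (r + gamma * max(Q[(s2, act)] for act in actions) - Q[(s, a)])
        s = s2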

    An Empirical Evaluation of Bagging and Boosting

    An ensemble consists of a set of independently trained classifiers (such as neural networks or decision trees) whose predictions are combined when classifying novel instances. Previous research has shown that an ensemble as a whole is often more accurate than any of the single classifiers in the ensemble. Bagging (Breiman 1996a) and Boosting (Freund & Schapire 1996) are two relatively new but popular methods for producing ensembles. In this paper we evaluate these methods using both neural networks and decision trees as our classification algorithms. Our results clearly show two important facts. The first is that even though Bagging almost always produces a better classifier than any of its individual component classifiers and is relatively impervious to overfitting, it does not generalize any better than a baseline neural-network ensemble method. The second is that Boosting is a powerful technique that can usually produce better ensembles than Bagging; however, it ..
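
    Since both of these studies turn on the same mechanism, it is worth writing the mechanism out once. A from-scratch sketch of Bagging (hypothetical helper names; integer class labels assumed): each component is trained on a bootstrap resample and the ensemble predicts by majority vote.

    import numpy as np
    from sklearn.tree import DecisionTreeClassifier

    def train_bagged(X, y, n_estimators=25, seed=0):
        """Train each component tree on a bootstrap resample of the
        training data (rows drawn with replacement)."""
        rng = np.random.default_rng(seed)
        n = len(X)
        trees = []
        for _ in range(n_estimators):
            idx = rng.integers(0, n, size=n)              # bootstrap sample
            trees.append(DecisionTreeClassifier().fit(X[idx], y[idx]))
        return trees

    def predict_bagged(trees, X):
        """Combine component predictions by majority vote over integer labels."""
        votes = np.stack([t.predict(X) for t in trees])   # (n_trees, n_samples)
        return np.apply_along_axis(lambda v: np.bincount(v).argmax(), 0, votes)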

    Ensembles as a Sequence of Classifiers

    An ensemble is a classifier created by combining the predictions of multiple component classifiers. We present a new method for combining classifiers into an ensemble based on a simple estimation of each classifier's competence. The classifiers are grouped into an ordered list where each classifier has a corresponding threshold. To classify an example, the first classifier on the list is consulted, and if that classifier's confidence for predicting the example is above the classifier's threshold, then that classifier's prediction is used. Otherwise, the next classifier and its threshold are consulted, and so on. If none of the classifiers predicts the example above its confidence threshold, then the class of the example is predicted by averaging all of the component classifier predictions. The key to this method is the selection of the confidence threshold for each classifier. We have implemented this method in a system called Sequel, which has been applied to the task o..
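
    The decision procedure described here is simple enough to sketch directly (an illustration of the scheme, not Sequel's implementation; it assumes classifiers with an sklearn-style predict_proba and a shared class ordering):

    import numpy as np

    def sequential_predict(classifiers, thresholds, x, classes):
        """Consult classifiers in order; the first whose confidence on x clears
        its own threshold decides. If none is confident enough, fall back to
        averaging every component's class probabilities."""
        probs = [clf.predict_proba([x])[0] for clf in classifiers]
        for p, threshold in zip(probs, thresholds):
            if p.max() >= threshold:                 # confident enough: use it
                return classes[int(p.argmax())]
        avg = np.mean(probs, axis=0)                 # no one confident: average
        return classes[int(avg.argmax())]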