1,863 research outputs found

    An application of machine learning to the organization of institutional software repositories

    Software reuse has become a major goal in the development of space systems, as a recent NASA-wide workshop on the subject made clear. The Data Systems Technology Division of Goddard Space Flight Center has been working on tools and techniques for promoting reuse, in particular in the development of satellite ground support software. One of these tools is the Experiment in Libraries via Incremental Schemata and Cobweb (ElvisC). ElvisC applies machine learning to the problem of organizing a reusable software component library for efficient and reliable retrieval. In this paper we describe the background factors that motivated this work, present the design of the system, and evaluate the results of its application.
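
    The abstract gives no implementation detail, but the core idea behind a Cobweb-style organization can be sketched: components are added to the library one at a time, and each is placed wherever it maximizes category utility. Below is a minimal, flattened sketch (no concept hierarchy) over assumed nominal features; the function names and toy components are illustrative, not taken from ElvisC.

```python
from collections import Counter, defaultdict

def category_utility(clusters, n_items):
    """Category utility of a flat partition over nominal features:
    CU = (1/K) * sum_k P(C_k) * sum_{attr,val} [P(val|C_k)^2 - P(val)^2]."""
    global_counts = defaultdict(Counter)
    for cluster in clusters:
        for item in cluster:
            for attr, val in item.items():
                global_counts[attr][val] += 1
    global_sq = sum((c / n_items) ** 2
                    for counts in global_counts.values()
                    for c in counts.values())
    cu = 0.0
    for cluster in clusters:
        p_cluster = len(cluster) / n_items
        within = defaultdict(Counter)
        for item in cluster:
            for attr, val in item.items():
                within[attr][val] += 1
        within_sq = sum((c / len(cluster)) ** 2
                        for counts in within.values()
                        for c in counts.values())
        cu += p_cluster * (within_sq - global_sq)
    return cu / len(clusters)

def incremental_cluster(items):
    """Place each new item where it maximizes category utility."""
    clusters = []
    seen = 0
    for item in items:
        seen += 1
        candidates = []
        for i in range(len(clusters)):
            trial = [list(c) for c in clusters]
            trial[i].append(item)
            candidates.append((category_utility(trial, seen), trial))
        # Also consider starting a new cluster for this item.
        trial = [list(c) for c in clusters] + [[item]]
        candidates.append((category_utility(trial, seen), trial))
        clusters = max(candidates, key=lambda t: t[0])[1]
    return clusters

# Hypothetical reusable components described by nominal facets.
components = [
    {"domain": "telemetry", "lang": "Ada"},
    {"domain": "telemetry", "lang": "C"},
    {"domain": "orbit", "lang": "Fortran"},
    {"domain": "orbit", "lang": "Fortran"},
]
for cluster in incremental_cluster(components):
    print(cluster)
```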

    Learning feed-forward one-shot learners

    One-shot learning is usually tackled by using generative models or discriminative embeddings. Discriminative methods based on deep learning, which are very effective in other learning scenarios, are ill-suited for one-shot learning because they need large amounts of training data. In this paper, we propose a method to learn the parameters of a deep model in one shot. We construct the learner as a second deep network, called a learnet, which predicts the parameters of a pupil network from a single exemplar. In this manner we obtain an efficient feed-forward one-shot learner, trained end-to-end by minimizing a one-shot classification objective in a learning-to-learn formulation. To make the construction feasible, we propose a number of factorizations of the parameters of the pupil network. We demonstrate encouraging results by learning characters from single exemplars in Omniglot, and by tracking visual objects from a single initial exemplar in the Visual Object Tracking benchmark. (Comment: The first three authors contributed equally and are listed in alphabetical order.)
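
    A minimal PyTorch sketch of the central construction, one network emitting the convolution weights that a pupil network then applies to the query, might look as follows. The sizes, layers, and names are assumptions for illustration; the paper's actual contribution includes factorizing the pupil's parameters, which is only gestured at in a comment here.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class Learnet(nn.Module):
    """Maps a single exemplar to the weights of a pupil conv layer."""
    def __init__(self, in_ch=3, pupil_ch=16, k=3):
        super().__init__()
        self.pupil_ch, self.in_ch, self.k = pupil_ch, in_ch, k
        self.encoder = nn.Sequential(
            nn.Conv2d(in_ch, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        # Predicting a full filter bank; the paper instead factorizes
        # the pupil parameters to keep this mapping tractable.
        self.to_weights = nn.Linear(64, pupil_ch * in_ch * k * k)

    def forward(self, exemplar, query):
        w = self.to_weights(self.encoder(exemplar))
        w = w.view(self.pupil_ch, self.in_ch, self.k, self.k)
        # Pupil network: a conv layer whose weights came from the learnet.
        return F.conv2d(query, w, padding=self.k // 2)

net = Learnet()
exemplar = torch.randn(1, 3, 32, 32)  # single example of the new class
query = torch.randn(1, 3, 32, 32)     # image to evaluate against it
out = net(exemplar, query)            # trained end-to-end on such triplets
print(out.shape)                      # torch.Size([1, 16, 32, 32])
```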

    Analyzing and Interpreting Neural Networks for NLP: A Report on the First BlackboxNLP Workshop

    The EMNLP 2018 workshop BlackboxNLP was dedicated to resources and techniques specifically developed for analyzing and understanding the inner workings and representations acquired by neural models of language. Approaches included: systematically manipulating the input to neural networks and investigating the impact on their performance, testing whether interpretable knowledge can be decoded from intermediate representations acquired by neural networks, proposing modifications to neural network architectures to make their knowledge state or generated output more explainable, and examining the performance of networks on simplified or formal languages. Here we review a number of representative studies in each category.
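
    Of the categories listed, decoding knowledge from intermediate representations is the easiest to sketch: a probing classifier is trained on frozen hidden states to predict a linguistic property. A minimal illustration under assumed shapes follows; the hidden states and part-of-speech labels here are random stand-ins, not real model outputs.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Stand-ins for frozen hidden states from some neural language model:
# one 512-d vector per token, plus a POS tag per token (12 assumed classes).
rng = np.random.default_rng(0)
hidden_states = rng.normal(size=(2000, 512))
pos_tags = rng.integers(0, 12, size=2000)

X_train, X_test, y_train, y_test = train_test_split(
    hidden_states, pos_tags, test_size=0.2, random_state=0)

# The probe is deliberately simple (linear), so accuracy above a baseline
# suggests the property is linearly decodable from the representation.
probe = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("probe accuracy:", probe.score(X_test, y_test))
# With these random stand-ins accuracy stays near chance (~1/12);
# representations that actually encode POS would score well above it.
```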

    How training set and prior knowledge affect preschoolers' perception of quantity and early number learning

    This dissertation examines how training on the iPad can improve children’s quantity recognition, and whether different types of training might be warranted for children with different levels of experience. Study 1 tested the effects of multiple-exemplar training (3 cars / 3 apples / 3 ducks, etc.) versus single-exemplar training (3 cars / 3 cars / 3 cars, etc.) on recognizing quantities. For children just learning to recognize quantities (0-2 knowers), training with multiple exemplars was most effective for quantities three and four. For 3-6 knower children, single-exemplar training was most effective for learning quantities five and six. Study 2 tested the effects of using a training set with perceptually distinct dice-like arrangements versus linear arrangements of objects in the quantity recognition task. 0-2 knower children tended to choose the familiar arrangements that were shown in the training session (regardless of quantity), while 3-6 knowers could pick out the correct quantity regardless of arrangement. This result suggests that selecting the right type of training is important for facilitating children’s early number learning.
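
    As a concrete reading of the two training regimes, a sketch of how the trial lists might be constructed; the object pool and trial counts are illustrative assumptions, not the study's actual stimuli.

```python
import itertools
import random

OBJECTS = ["car", "apple", "duck", "ball"]  # assumed stimulus pool

def multiple_exemplar_trials(quantity, n_trials):
    """Same quantity, a different object each trial: 3 cars / 3 apples / ..."""
    pool = itertools.cycle(random.sample(OBJECTS, len(OBJECTS)))
    return [(quantity, next(pool)) for _ in range(n_trials)]

def single_exemplar_trials(quantity, n_trials):
    """Same quantity AND the same object every trial: 3 cars / 3 cars / ..."""
    obj = random.choice(OBJECTS)
    return [(quantity, obj)] * n_trials

print(multiple_exemplar_trials(3, 5))  # e.g. 3 ducks, 3 cars, 3 apples, ...
print(single_exemplar_trials(3, 5))    # e.g. 3 cars, five times over
```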

    Future Directions in Machine Learning


    The role of instructions and intention in learning

    This thesis investigates how manipulating the intention to learn (learning orientation) through verbal instructions affects learning in a range of putatively associative and implicit tasks. Within three different paradigms, learning orientation was manipulated so that learning was either incidental to, or aligned with (i.e. intentional with respect to), the aims of the task. The first series of experiments investigated sequence learning, as measured in the serial reaction time task. Sequence learning occurred reliably under incidental conditions and was selectively improved by instructions promoting discovery of a relational rule describing a set of probabilistic contingencies. The second series of experiments used the prototype distortion task, in which it has been claimed that implicit learning of a category of prototype-centered stimuli can occur automatically as a result of exposure. When a visual search task was used as a means of incidental exposure, the evidence for the implicit status of learning in the prototype distortion task was equivocal, and instructions directing participants to memorize the stimuli produced greater evidence of learning the similarity structure of the category. Finally, the third series of experiments assessed generalization along stimulus dimensions following a difficult discrimination task. Instructions directing attention to a particular stimulus dimension promoted rule-based generalization and produced a dissociation in the pattern of generalization when rule applicability was reduced at test. The results suggest that human learning is highly susceptible to learning orientation, which has implications for how implicit learning should be viewed as a psychological construct. Theories of learning, whether single- or dual-process, need to better account for this seemingly pervasive role of learning orientation.
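
    For the serial reaction time paradigm mentioned first, the probabilistic contingencies can be sketched as a first-order transition structure over response locations; the transition probabilities below are illustrative assumptions, not the thesis's design.

```python
import random

# Four response locations; each probabilistically predicts the next, so the
# sequence is structured but not deterministic (illustrative values only).
TRANSITIONS = {
    0: ([1, 2, 3], [0.8, 0.1, 0.1]),
    1: ([2, 0, 3], [0.8, 0.1, 0.1]),
    2: ([3, 0, 1], [0.8, 0.1, 0.1]),
    3: ([0, 1, 2], [0.8, 0.1, 0.1]),
}

def generate_srt_sequence(length, start=0):
    """Generate a probabilistic target-location sequence for an SRT task."""
    seq = [start]
    for _ in range(length - 1):
        choices, weights = TRANSITIONS[seq[-1]]
        seq.append(random.choices(choices, weights=weights)[0])
    return seq

# Learning shows up as faster responses to the probable (0.8) transitions
# than to the improbable ones, even without reportable knowledge of the rule.
print(generate_srt_sequence(20))
```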