600 research outputs found

    Exploring the Neural Mechanisms of Physics Learning

    This dissertation presents a series of neuroimaging investigations that strive to deepen and broaden our understanding of human problem solving and physics learning. Neuroscience conceives of dynamic relationships between behavior, experience, and brain structure and function, but how neural changes enable human learning through classroom instruction remains an open question. At the same time, physics is a challenging area of study in which introductory students regularly struggle to succeed in university courses. Research and initiatives in neuroeducation promise new understanding of the interactions between biology and education, including the neural mechanisms of learning and development. These insights may be particularly useful in understanding how students learn, which is crucial for helping them succeed. Towards this end, we utilize methods in functional magnetic resonance imaging (fMRI), as informed by education theory, research, and practice, to investigate the neural mechanisms of problem solving and learning in students across semester-long, university-level introductory physics learning environments. In the first study, we review and synthesize the neuroimaging problem solving literature and perform a quantitative coordinate-based meta-analysis of 280 problem solving experiments to characterize the common and dissociable brain networks that underlie human problem solving across different representational contexts. Then, we describe the Understanding the Neural Mechanisms of Physics Learning project, which was designed to study functional brain changes associated with learning and problem solving in undergraduate physics students before and after a semester of introductory physics instruction. We present the development, facilitation, and data acquisition for this longitudinal data collection project.
We then perform a sequence of fMRI analyses of these data and provide the first characterization of the brain networks underlying physics problem solving in students after university physics instruction. We measure sustained and sequential brain activity and functional connectivity during physics problem solving, test brain-behavior relationships between accuracy, difficulty, strategy, and conceptualization of physics ideas, and describe differences in student physics-related brain function linked with dissociations in conceptual approach. The implications of these results for effective instructional practice are discussed. Then, we consider how classroom learning impacts the development of student brain function by examining changes in physics problem solving-related brain activity in students before and after they completed a semester-long Modeling Instruction physics course. Our results provide the first neurobiological evidence that physics learning environments drive the functional reorganization of large-scale brain networks in physics students. Through this collection of work, we demonstrate how neuroscience studies of learning can be grounded in educational theory and pedagogy, and provide deep insights into the neural mechanisms by which students learn physics.

    Solving morphological analogies: from retrieval to generation

    Analogical inference is a remarkable capability of human reasoning and has been used to solve hard reasoning tasks. Analogy-based reasoning (AR) has gained increasing interest from the artificial intelligence community and has shown its potential in multiple machine learning tasks such as classification, decision making, and recommendation, with competitive results. We propose a deep learning (DL) framework to tackle two key tasks in AR: analogy detection and analogy solving. The framework is thoroughly tested on the Siganalogies dataset of morphological analogical proportions (APs) between words, and is shown to outperform symbolic approaches in many languages. Previous work has explored the behavior of the Analogy Neural Network for classification (ANNc) on analogy detection and of the Analogy Neural Network for retrieval (ANNr) on analogy solving by retrieval, as well as the potential of an autoencoder (AE) for analogy solving by generating the solution word. In this article we summarize these findings and extend them by combining ANNr and the AE embedding model, and by checking the performance of ANNc as a retrieval method. The combination of ANNr and AE outperforms the other approaches in almost all cases, and ANNc as a retrieval method achieves competitive or better performance than 3CosMul. We conclude with general guidelines on using our framework to tackle APs with DL. Comment: Preprint submitted to the Springer special issue of Annals of Mathematics and Artificial Intelligence on Mathematical Foundations of Analogical Reasoning and Application.
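    The 3CosMul retrieval baseline mentioned above can be made concrete with a small sketch. This is the generic 3CosMul objective (Levy and Goldberg, 2014), not the paper's ANNc/ANNr models, and the toy embedding table below is hypothetical:

    ```python
    import numpy as np

    def three_cos_mul(emb, a, b, c, eps=1e-3):
        """Rank candidate words d for the analogy 'a : b :: c : d' with the
        3CosMul objective: score(d) = cos(d, b) * cos(d, c) / (cos(d, a) + eps),
        with cosines shifted into [0, 1] as in the original formulation."""
        words = list(emb)
        M = np.stack([emb[w] for w in words])
        M = M / np.linalg.norm(M, axis=1, keepdims=True)   # unit row vectors
        unit = lambda w: emb[w] / np.linalg.norm(emb[w])
        cos = lambda w: (M @ unit(w) + 1.0) / 2.0          # shifted cosine to all words
        scores = cos(b) * cos(c) / (cos(a) + eps)
        for w in (a, b, c):                                # never return a query word
            scores[words.index(w)] = -np.inf
        return words[int(np.argmax(scores))]
    ```

    With offset-structured toy embeddings (walked = walk + tense offset, jumped = jump + tense offset), the top-ranked answer to walk : walked :: jump : ? is jumped, even with a distractor word in the vocabulary.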

    A Neural Approach for Detecting Morphological Analogies

    Analogical proportions are statements of the form "A is to B as C is to D" that are used for several reasoning and classification tasks in artificial intelligence and natural language processing (NLP). For instance, there are analogy-based approaches to semantics as well as to morphology. In fact, symbolic approaches were developed to solve or to detect analogies between character strings, e.g., the axiomatic approach as well as one based on Kolmogorov complexity. In this paper, we propose a deep learning approach to detect morphological analogies involving, for instance, reinflection or conjugation. We present empirical results showing that our framework is competitive with the above-mentioned state-of-the-art symbolic approaches. We also explore empirically its transferability across languages, which highlights interesting similarities between them.
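    As a concrete point of comparison, a minimal character-level baseline (a naive sketch, not the paper's neural detector and not the axiomatic or Kolmogorov-based methods it cites) checks whether the A to B change is a single suffix rewrite that also maps C to D:

    ```python
    def suffix_rule(a, b):
        """Describe a -> b as 'replace suffix s1 by s2' past their longest common prefix."""
        p = 0
        while p < min(len(a), len(b)) and a[p] == b[p]:
            p += 1
        return a[p:], b[p:]

    def is_analogy(a, b, c, d):
        """Crude test of 'a : b :: c : d': the a -> b suffix rewrite must map c to d."""
        s1, s2 = suffix_rule(a, b)
        return c.endswith(s1) and c[: len(c) - len(s1)] + s2 == d
    ```

    This accepts walk : walked :: jump : jumped and sing : sang :: ring : rang, but rejects walk : walked :: jump : jumping; real morphology (infixation, stem changes) quickly defeats such a rule, which is the gap the neural detector targets.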

    Equilibrium statistical mechanics on correlated random graphs

    Biological and social networks have recently attracted enormous attention among physicists. Two main aspects may be stressed: a non-trivial topology of the graph describing the mutual interactions between agents exists, and/or, typically, such interactions are essentially (weighted) imitative. Although these aspects are widely accepted and empirically confirmed, the schemes currently exploited to generate the expected topology are based on a priori assumptions and in most cases still implement constant intensities for links. Here we propose a simple shift in the definition of patterns in a Hopfield model to convert frustration into dilution: by varying the bias of the pattern distribution, the network topology, which is generated by the reciprocal affinities among agents, crosses various well-known regimes (fully connected, linearly diverging connectivity, extreme dilution scenario, no network), coupled with small-world properties, which, in this context, are emergent and no longer imposed a priori. The model is investigated first by focusing on the topological properties of the emergent network; then its thermodynamics is solved analytically (at the replica symmetric level) by extending the double stochastic stability technique, and presented together with its fluctuation theory for a picture of criticality. At least at equilibrium, dilution simply decreases the strength of the coupling felt by the spins, but leaves the paramagnetic/ferromagnetic flavors unchanged. The main difference with respect to previous investigations, and a naive picture, is that within our approach replicas do not appear: instead of (multi-)overlaps as order parameters, we introduce a class of magnetizations on all possible sub-graphs of the main one investigated; as a consequence, a closed self-consistent relation is achieved for these objects. Comment: 30 pages, 4 figures.
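    One plausible reading of the "simple shift" (our sketch, with assumed notation, not the paper's exact model) is that Hopfield couplings built from bipolar patterns take both signs and thus frustrate the system, while couplings built from biased binary patterns are non-negative, so a zero coupling simply deletes a link and frustration is traded for dilution:

    ```latex
    % Hedged sketch: notation assumed, not taken from the paper.
    % Hebbian couplings from P binary patterns with bias p:
    J_{ij} \;=\; \sum_{\mu=1}^{P} \xi_i^{\mu}\,\xi_j^{\mu},
    \qquad \xi_i^{\mu}\in\{0,1\},
    \qquad \Pr\!\bigl[\xi_i^{\mu}=1\bigr]=p .
    % Every J_{ij} \ge 0 (no frustration); a link is absent iff no pattern
    % is shared, which for i.i.d. entries happens with probability
    \Pr\!\bigl[J_{ij}=0\bigr] \;=\; \bigl(1-p^{2}\bigr)^{P},
    ```

    so tuning the bias p (and the pattern load P) sweeps the graph from fully connected to empty, matching the regimes listed in the abstract.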

    Case-Based Translation: First Steps from a Knowledge-Light Approach Based on Analogy to a Knowledge-Intensive One

    This paper deals with case-based machine translation. It is based on previous work using proportional analogy on strings, i.e., a quaternary relation expressing that "string A is to string B as string C is to string D". The first contribution of this paper is the rewording of this work in terms of case-based reasoning: a case is a problem-solution pair (A, A') where A is a sentence in an origin language and A' its translation in the destination language. First, three cases (A, A'), (B, B'), (C, C') such that "A is to B as C is to the target problem D" are retrieved. Then, the analogical equation in the destination language "A' is to B' as C' is to x" is solved, and D' = x is a suggested translation of D. Although it does not involve any linguistic knowledge, this approach was effective and gave competitive results at the time it was proposed. The second contribution examines how this prior knowledge-light case-based machine translation approach could be improved by using additional pieces of knowledge: knowledge associated with cases, domain knowledge, retrieval knowledge, and adaptation knowledge, as well as other principles or techniques from case-based reasoning and natural language processing.
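    The retrieve-then-solve loop described above can be sketched end to end. The tiny case base, the Esperanto-like target strings, and the single-rewrite string solver below are all illustrative assumptions, not the paper's actual resources or algorithm:

    ```python
    # Hypothetical case base of (source, target) translation pairs.
    CASES = [
        ("I run", "mi kuras"),
        ("you run", "vi kuras"),
        ("I sing", "mi kantas"),
    ]

    def solve(a, b, c):
        """Solve the analogical equation 'a : b :: c : x', assuming a -> b is a
        single substring rewrite (longest common prefix/suffix kept as context)."""
        p = 0
        while p < min(len(a), len(b)) and a[p] == b[p]:
            p += 1
        ra, rb = a[p:], b[p:]
        s = 0
        while s < min(len(ra), len(rb)) and ra[-1 - s] == rb[-1 - s]:
            s += 1
        ma = ra[: len(ra) - s]      # middle of a that gets rewritten...
        mb = rb[: len(rb) - s]      # ...into the middle of b
        if not ma or ma not in c:
            return None             # rewrite rule does not apply to c
        return c.replace(ma, mb, 1)

    def translate(d, cases):
        """Retrieve cases (A, A'), (B, B'), (C, C') such that 'A : B :: C : d'
        holds in the source language, then solve 'A' : B' :: C' : x' in the
        destination language and return x as the suggested translation."""
        for a, a_t in cases:
            for b, b_t in cases:
                for c, c_t in cases:
                    if solve(a, b, c) == d:
                        x = solve(a_t, b_t, c_t)
                        if x is not None:
                            return x
        return None
    ```

    For example, translating "you sing" retrieves ("I run", "you run", "I sing"), since the I-to-you rewrite maps "I sing" to "you sing", and then applies the corresponding mi-to-vi rewrite on the target side to yield "vi kantas". The cubic retrieval loop is only for exposition; real systems index cases instead.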

    Artificial general intelligence: Proceedings of the Second Conference on Artificial General Intelligence, AGI 2009, Arlington, Virginia, USA, March 6-9, 2009

    Artificial General Intelligence (AGI) research focuses on the original and ultimate goal of AI – to create broad human-like and transhuman intelligence – by exploring all available paths, including theoretical and experimental computer science, cognitive science, neuroscience, and innovative interdisciplinary methodologies. Due to the difficulty of this task, for the last few decades the majority of AI researchers have focused on what has been called narrow AI – the production of AI systems displaying intelligence regarding specific, highly constrained tasks. In recent years, however, more and more researchers have recognized the necessity – and feasibility – of returning to the original goals of the field. Increasingly, there is a call for a transition back to confronting the more difficult issues of human-level intelligence and, more broadly, artificial general intelligence.

    Fluid intelligence emerges from representing relations

    Based on recent findings in cognitive neuroscience and psychology as well as computational models of working memory and reasoning, I argue that fluid intelligence (fluid reasoning) can amount to representing in the mind the key relation(s) for the task at hand. Effective representation of relations allows for enormous flexibility of thinking but depends on the validity and robustness of the dynamic patterns of argument-object (role-filler) bindings, which encode relations in the brain. Such a reconceptualization of the fluid intelligence construct allows for the simplification and purification of its models, tests, and potential brain mechanisms.

    Significance of neural noise
