Language acquisition and machine learning
In this paper, we review recent progress in the field of machine learning and examine its implications for computational models of language acquisition. As a framework for understanding this research, we propose four component tasks involved in learning from experience: aggregation, clustering, characterization, and storage. We then consider four common problems studied by machine learning researchers - learning from examples, heuristics learning, conceptual clustering, and learning macro-operators - describing each in terms of our framework. After this, we turn to the problem of grammar acquisition, relating it to other learning tasks and reviewing four AI systems that have addressed it. Finally, we note some limitations of the earlier work and propose an alternative approach to modeling the mechanisms underlying language acquisition.
Eye-movements in implicit artificial grammar learning
Artificial grammar learning (AGL) has been probed with forced-choice behavioral tests (active tests). Recent attempts to probe the outcomes of learning (implicitly acquired knowledge) with eye-movement responses (passive tests) have shown null results. However, these latter studies have not tested for sensitivity effects, for example, increased eye movements on a printed violation. In this study, we tested for sensitivity effects in AGL tests with (Experiment 1) and without (Experiment 2) concurrent active tests (preference and grammaticality classification) in an eye-tracking experiment. Eye movements discriminated between sequence types in passive tests and more so in active tests. The eye-movement profile did not differ between preference and grammaticality classification, and it resembled sensitivity effects commonly observed in natural syntax processing. Our findings show that the outcomes of implicit structured sequence learning can be characterized in eye tracking. More specifically, whole-trial measures (dwell time, number of fixations) showed robust AGL effects, whereas first-pass measures (first-fixation duration) did not. Furthermore, our findings strengthen the link between artificial and natural syntax processing, and they shed light on the factors that determine performance differences in preference and grammaticality classification tests.
Funding: Max Planck Institute for Psycholinguistics; Donders Institute for Brain, Cognition and Behavior; Vetenskapsradet; Swedish Dyslexia Foundation.
Prediction-Based Learning and Processing of Event Knowledge.
Knowledge of common events is central to many aspects of cognition. Intuitively, it seems as though events are linear chains of their constituent activities. In line with this intuition, a number of theories of the temporal structure of event knowledge have posited mental representations (data structures) consisting of linear chains of activities. Competing theories focus on the hierarchical nature of event knowledge, with representations comprising ordered scenes and chains of activities within those scenes. We present evidence that the temporal structure of events is typically not well defined, but is much richer and more variable, both within and across events, than has usually been assumed. We also present evidence that prediction-based neural network models can learn these rich and variable event structures and produce behaviors that reflect human performance. We conclude that knowledge of the temporal structure of events in the human mind emerges as a consequence of prediction-based learning.
Algorithmic complexity for psychology: A user-friendly implementation of the coding theorem method
Kolmogorov-Chaitin complexity has long been believed to be impossible to approximate for short sequences (e.g. of length 5-50). However, with the newly developed coding theorem method, the complexity of strings of length 2-11 can now be numerically estimated. We present the theoretical basis of algorithmic complexity for short strings (ACSS) and describe an R package providing functions based on ACSS that cover psychologists' needs and improve upon previous methods in three ways: (1) ACSS is now available not only for binary strings but for strings based on up to 9 different symbols, (2) ACSS no longer requires time-consuming computing, and (3) a new approach based on ACSS gives access to an estimate of the complexity of strings of any length. Finally, three illustrative examples show how these tools can be applied to psychology.
Comment: to appear in "Behavioral Research Methods", 14 pages in journal format; R package at http://cran.r-project.org/web/packages/acss/index.htm
Functional characterization of two enhancers located downstream FOXP2
Background: Mutations in the coding region of FOXP2 are known to cause speech and language impairment. However, it is not clear how dysregulation of the gene contributes to language deficits. Interestingly, microdeletions of the region downstream of the gene have been associated with cognitive deficits. Methods: Here, we investigate changes in FOXP2 expression in the SK-N-MC human neuroblastoma cell line after CRISPR-Cas9 deletion of two enhancers located downstream of the gene. Results: Deletion of either of these two functional enhancers downregulates FOXP2, but also upregulates the closest 3′ gene, MDFIC. Because this effect is not statistically significant in the HEK 293 cell line, derived from human kidney, both enhancers might confer tissue-specific regulation on both genes. We have also found that deletion of either of these enhancers downregulates six well-known FOXP2 target genes in the SK-N-MC cell line. Conclusions: We expect these findings to contribute to a deeper understanding of how FOXP2 and MDFIC are regulated to pace the neuronal development supporting cognition, speech, and language.
Funding: Spanish National Research and Development Plan PI14/01884; Instituto de Salud Carlos III PI14/01884; FEDER PI14/0188