Mental Imagery and Chunks: Empirical and Computational Findings
To investigate experts’ imagery in chess, players were required to recall briefly presented positions in which the pieces were placed on the intersections between squares (intersection positions). Position types ranged from game positions to positions where both the piece distribution and location were randomized. Simulations were run with the CHREST model (Gobet & Simon, 2000). The simulations assumed that pieces had to be centered back one by one to the middle of the squares in the mind’s eye before chunks could be recognized. Consistent with CHREST’s predictions, chess players (N = 36), ranging from weak amateurs to grandmasters, exhibited much poorer recall on intersection positions than on standard positions (pieces placed on centers of squares). On the intersection positions, the skill difference in recall was larger on game positions than on the randomized positions. Participants recalled bishops better than knights, suggesting that Stroop-like interference impairs recall of the latter. The data supported both the time parameter in CHREST for shifting pieces in the mind’s eye (125 ms per piece) and the seriality assumption. In general, the study reinforces the plausibility of CHREST as a model of cognition.
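The seriality assumption described above has a simple quantitative consequence that can be sketched in a few lines. The 125 ms value is the parameter reported in the abstract; the function name and the 25-piece example are our illustration, not CHREST code.

```python
# Illustrative sketch of the seriality assumption (not the CHREST source):
# pieces on intersections are re-centred one at a time in the mind's eye,
# at 125 ms per piece, before chunk recognition can proceed.

SHIFT_MS_PER_PIECE = 125  # parameter from Gobet & Simon (2000)

def normalisation_time_ms(num_pieces: int) -> int:
    """Total time to re-centre all pieces, assuming strictly serial shifts."""
    return num_pieces * SHIFT_MS_PER_PIECE

# A typical middle-game position with 25 pieces:
print(normalisation_time_ms(25))  # 3125 ms before chunk recognition starts
```

Under this assumption, normalisation time grows linearly with the number of displaced pieces, which is why recall of intersection positions suffers within a brief presentation window.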
Modelling the acquisition of syntactic categories
This research represents an attempt to model the child’s acquisition of syntactic categories. A computational model, based on the EPAM theory of perception and learning, is developed. The basic assumptions are that (1) syntactic categories are actively constructed by the child using distributional learning abilities; and (2) cognitive constraints in learning rate and memory capacity limit these learning abilities. We present simulations of the syntax acquisition of a single subject, where the model learns to build up multi-word utterances by scanning a sample of the speech addressed to the subject by his mother.
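The core distributional idea can be sketched as follows: words that recur in the same lexical contexts accumulate evidence for belonging to the same proto-category. This is our minimal illustration of the general principle, not the EPAM-based model itself; the toy utterances are hypothetical.

```python
# Minimal sketch of distributional category formation (our illustration):
# each word is profiled by the (preceding, following) contexts it occurs in;
# large overlap between two profiles suggests a shared proto-category.
from collections import defaultdict

def context_profiles(utterances):
    """Map each word to the set of (preceding, following) contexts it occurs in."""
    profiles = defaultdict(set)
    for utt in utterances:
        words = ["<s>"] + utt.split() + ["</s>"]
        for i in range(1, len(words) - 1):
            profiles[words[i]].add((words[i - 1], words[i + 1]))
    return profiles

def overlap(profiles, w1, w2):
    """Number of shared contexts -- a crude similarity for category building."""
    return len(profiles[w1] & profiles[w2])

utts = ["the dog runs", "the cat runs", "the dog sleeps", "the cat sleeps"]
p = context_profiles(utts)
print(overlap(p, "dog", "cat"))  # 2 shared contexts -> same proto-category
```

A cognitively constrained learner would additionally limit how fast such profiles grow and how much of each utterance is encoded, per assumption (2) above.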
Computational Modelling of Mental Imagery in Chess: A Sensitivity Analysis
An important aim of cognitive science is to build computational models that account for a large number of phenomena but have few free parameters, and to obtain more veridical values for the models’ parameters by successive approximations. A good example of this approach is the CHREST model (Gobet & Simon, 2000), which has simulated numerous phenomena on chess expertise and in other domains. In this paper, we are interested in the parameter the model uses for shifting chess pieces in its mind’s eye (125 ms per piece), a parameter that had been estimated based on relatively sparse experimental evidence. Recently, Waters and Gobet (2008) tested the validity of this parameter in a memory experiment that required players to recall briefly presented positions in which the pieces were placed on the intersections between squares. Position types ranged from game positions to positions where both the piece distribution and location were randomised. CHREST, which assumed that pieces must be centred back to the middle of the squares in the mind’s eye before chunks can be recognized, simulated the data fairly well using the default parameter for shifting pieces. The sensitivity analysis presented in the current paper shows that the fit was nearly optimal for all groups of players except the grandmaster group for which, counterintuitively, a slower shifting time gave a better fit. The implications for theory development are discussed.
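The shape of such a sensitivity analysis can be sketched generically: sweep the shifting parameter over a grid and score each value against observed recall. Everything below (the toy recall model, the 2000 ms window, the data) is hypothetical scaffolding for illustration, not the paper's actual simulations.

```python
# Generic sensitivity-analysis sketch (our illustration, not the CHREST runs):
# score each candidate shifting time by squared error against observed recall.
def mse(predicted, observed):
    return sum((p - o) ** 2 for p, o in zip(predicted, observed)) / len(observed)

def predict_recall(shift_ms, presentation_ms=2000, chunk_yield=0.9):
    """Toy model: pieces are re-centred serially, so a slower shift means
    fewer pieces normalised within the presentation window, hence fewer
    chunks recognised and lower recall."""
    pieces_normalised = min(25, presentation_ms // shift_ms)
    return pieces_normalised * chunk_yield

observed = [predict_recall(125)]   # hypothetical data that happen to match 125 ms
grid = [75, 100, 125, 150, 200]
best = min(grid, key=lambda s: mse([predict_recall(s)], observed))
print(best)  # 125: the grid value minimising the error
```

A real analysis would run the full model per skill group, which is how the counterintuitive grandmaster result above emerges.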
Stochastic methods for solving high-dimensional partial differential equations
We propose algorithms for solving high-dimensional Partial Differential Equations (PDEs) that combine a probabilistic interpretation of PDEs, through the Feynman-Kac representation, with sparse interpolation. Monte Carlo methods and time-integration schemes are used to estimate pointwise evaluations of the solution of a PDE. We use a sequential control variates algorithm, where control variates are constructed based on successive approximations of the solution of the PDE. Two different algorithms are proposed, combining in different ways the sequential control variates algorithm and adaptive sparse interpolation. Numerical examples illustrate the behavior of these algorithms.
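The Feynman-Kac idea underlying these methods can be sketched on the simplest case. For the heat equation u_t + 0.5·u_xx = 0 with terminal condition u(T, x) = g(x), the solution admits the representation u(t, x) = E[g(x + W_{T−t})], which a plain Monte Carlo average estimates pointwise. This is our minimal illustration of the representation only, without the paper's control variates or sparse interpolation; the function names are ours.

```python
# Minimal Feynman-Kac sketch (our illustration, not the paper's algorithm):
# estimate u(t, x) = E[g(x + W_{T-t})] for the heat equation by averaging
# over Gaussian increments of a Brownian motion.
import math
import random

def estimate_u(t, x, T, g, n_samples=200_000, seed=0):
    rng = random.Random(seed)
    s = math.sqrt(T - t)  # std. dev. of W_{T-t}
    return sum(g(x + s * rng.gauss(0.0, 1.0)) for _ in range(n_samples)) / n_samples

g = lambda y: y * y      # terminal condition; exact solution is x^2 + (T - t)
approx = estimate_u(0.0, 1.0, 1.0, g)
print(abs(approx - 2.0) < 0.05)  # close to the exact value u(0, 1) = 2
```

The paper's sequential control variates reduce the variance of exactly this kind of estimator by subtracting successive approximations of the solution.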
Modeling children’s case marking errors with MOSAIC
We present a computational model of early grammatical development which simulates case-marking errors in children’s early multi-word speech as a function of the interaction between a performance-limited distributional analyser and the statistical properties of the input. The model is presented with a corpus of maternal speech from which it constructs a network consisting of nodes which represent words or sequences of words present in the input. It is sensitive to the distributional properties of items occurring in the input and is able to create ‘generative’ links between words which occur frequently in similar contexts, building pseudo-categories. The only information received by the model is that present in the input corpus. After training, the model is able to produce child-like utterances, including case-marking errors, of which a proportion are rote-learned, but the majority are not present in the maternal corpus. The latter are generated by traversing the generative links formed between items in the network.
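The generative-link mechanism can be sketched in miniature: link two words that share a context, then generate novel utterances by substituting one linked word for the other in known frames. This is our simplified illustration of the principle, not MOSAIC itself; the threshold and toy corpus are hypothetical.

```python
# Sketch of 'generative' links (our illustration, not MOSAIC): words occurring
# in sufficiently similar contexts are linked; traversing a link substitutes
# one word for the other, yielding utterances absent from the input corpus.
from collections import defaultdict

def build_links(utterances, min_shared=1):
    contexts = defaultdict(set)
    for utt in utterances:
        w = ["<s>"] + utt.split() + ["</s>"]
        for i in range(1, len(w) - 1):
            contexts[w[i]].add((w[i - 1], w[i + 1]))
    words = list(contexts)
    return {(a, b) for a in words for b in words
            if a < b and len(contexts[a] & contexts[b]) >= min_shared}

def generate(utterances, links):
    """Produce novel utterances by swapping linked words inside known frames."""
    out = set()
    for utt in utterances:
        w = utt.split()
        for i, word in enumerate(w):
            for a, b in links:
                sub = b if word == a else a if word == b else None
                if sub:
                    out.add(" ".join(w[:i] + [sub] + w[i + 1:]))
    return out - set(utterances)

utts = ["the dog runs", "the cat runs", "the dog sleeps"]
print(generate(utts, build_links(utts)))  # includes the unseen 'the cat sleeps'
```

As in the abstract, the generated output mixes rote-learned strings with genuinely novel ones produced by link traversal, and the same route produces child-like errors when the linked items differ in case.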
Modelling children's negation errors using probabilistic learning in MOSAIC.
Cognitive models of language development have often been used to simulate the pattern of errors in children’s speech. One relatively infrequent error in English involves placing inflection to the right of a negative, rather than to the left. The pattern of negation errors in English is explained by Harris & Wexler (1996) in terms of very early knowledge of inflection on the part of the child. We present data from three children which demonstrate that although negation errors are rare, error types predicted not to occur by Harris & Wexler do occur, as well as error types that are predicted to occur. Data from MOSAIC, a model of language acquisition, are also presented. MOSAIC is able to simulate the pattern of negation errors in children’s speech. The phenomenon is modelled more accurately when a probabilistic learning algorithm is used.
Meter-based omission of function words in MOSAIC
MOSAIC (Model of Syntax Acquisition in Children) is augmented with a new mechanism that allows for the omission of unstressed function words based on the prosodic structure of the utterance in which they occur. The mechanism allows MOSAIC to omit elements from multiple locations in a target utterance, which it was previously unable to do. It is shown that, although the new mechanism results in Optional Infinitive (OI) errors when run on children’s input, it is insufficient to simulate the high rate of OI errors in children’s speech unless combined with MOSAIC’s edge-first learning mechanism. It is also shown that the addition of the new mechanism does not adversely affect MOSAIC’s fit to the Optional Infinitive phenomenon. The mechanism does, however, make MOSAIC’s output more child-like, both in terms of the range of utterances it can simulate, and the level and type of determiner omission that the model displays.
Simulating the temporal reference of Dutch and English Root Infinitives.
Hoekstra & Hyams (1998) claim that the overwhelming majority of Dutch children’s Root Infinitives (RIs) are used to refer to modal (not realised) events, whereas in English-speaking children, the temporal reference of RIs is free. Hoekstra & Hyams attribute this difference to qualitative differences in how temporal reference is carried by the Dutch infinitive and the English bare form. Ingram & Thompson (1996) advocate an input-driven account of this difference and suggest that the modal reading of German (and Dutch) RIs is caused by the fact that infinitive forms are predominantly used in modal contexts. This paper investigates whether an input-driven account can explain the differential reading of RIs in Dutch and English. To this end, corpora of English and Dutch Child Directed Speech were fed through MOSAIC, a computational model that has already been used to simulate the basic Optional Infinitive phenomenon. Infinitive forms in the input were tagged for modal or non-modal reference based on the sentential context in which they appeared. The output of the model was compared to the results of corpus studies and recent experimental data which call into question the strict distinction between Dutch and English advocated by Hoekstra & Hyams.
Simulating the Noun-Verb Asymmetry in the Productivity of Children’s Speech
Several authors propose that children may acquire syntactic categories on the basis of co-occurrence statistics of words in the input. This paper assesses the relative merits of two such accounts by comparing the type and amount of productive language that results from computing co-occurrence statistics over conjoint and independent preceding and following contexts. This is achieved through the implementation of these methods in MOSAIC, a computational model of syntax acquisition that produces utterances that can be directly compared to child speech, and has a developmental component (i.e. produces increasingly long utterances). It is shown that the computation of co-occurrence statistics over conjoint contexts or frames results in a pattern of productive speech that more closely resembles that displayed by language learning children. The simulation of the developmental patterning of children’s productive speech furthermore suggests two refinements to this basic mechanism: inclusion of utterance boundaries, and the weighting of frames for their lexical content.
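The conjoint-versus-independent distinction can be sketched concretely: a conjoint scheme counts shared (preceding, following) frames as a unit, while an independent scheme counts shared preceding and shared following contexts separately, so it links more word pairs. This is our illustration of the two statistics only, not the MOSAIC implementation; the toy corpus is hypothetical.

```python
# Sketch of the two co-occurrence schemes (our illustration): conjoint frames
# versus independent preceding/following contexts.
from collections import defaultdict

def profiles(utterances):
    conjoint, before, after = defaultdict(set), defaultdict(set), defaultdict(set)
    for utt in utterances:
        w = ["<s>"] + utt.split() + ["</s>"]
        for i in range(1, len(w) - 1):
            conjoint[w[i]].add((w[i - 1], w[i + 1]))  # frame A_x_B as a unit
            before[w[i]].add(w[i - 1])
            after[w[i]].add(w[i + 1])
    return conjoint, before, after

utts = ["the dog runs", "a dog barks", "the cat barks"]
cj, bf, af = profiles(utts)
# 'dog' and 'cat' never share a full frame, so the conjoint scheme keeps them apart:
print(len(cj["dog"] & cj["cat"]))  # 0 shared frames
# ...but they do share a preceding and a following context taken independently:
print(len(bf["dog"] & bf["cat"]), len(af["dog"] & af["cat"]))  # 1 1
```

The conjoint scheme is thus the stricter criterion; the abstract's finding is that this stricter, frame-based statistic yields productivity closer to children's.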
Subject omission in children's language: The case for performance limitations in learning.
Several theories have been put forward to explain the phenomenon that children who are learning to speak their native language tend to omit the subject of the sentence. According to the pro-drop hypothesis, children represent the wrong grammar. According to the performance limitations view, children represent the full grammar, but omit subjects due to performance limitations in production. This paper proposes a third explanation and presents a model which simulates the data relevant to subject omission. The model consists of a simple learning mechanism that carries out a distributional analysis of naturalistic input. It does not have any overt representation of grammatical categories, and its performance limitations reside mainly in its learning mechanism. The model clearly simulates the data at hand, without the need to assume large amounts of innate knowledge in the child, and can be considered more parsimonious on these grounds alone. Importantly, it employs a unified and objective measure of processing load, namely the length of the utterance, which interacts with frequency in the input. The standard performance limitations view assumes that processing load is dependent on a phrase’s syntactic role, but does not specify a unifying underlying principle.