Results of Experiment 1.
In this language, the absent construction was verb V4 in sentence structure C2. (a) Grammaticality judgments, showing the proportion of times each sentence was judged grammatical for each of the four verbs (V1-V4) in the artificial language, averaged over all judgments for each sentence and over all participants. Black and white bars indicate the strong sampling and weak sampling conditions, respectively; the horizontal axis shows the different sentence constructions (i.e., particular verb orders). The results suggest that participants in both conditions were largely able to learn much of the grammatical structure. In addition, participants in the weak sampling condition rated the exception construction, V4 in C2, as grammatical significantly more often than participants in the strong sampling condition, as our models predict. (b) Production results, showing the proportion of productions made in each sentence structure for each verb; X denotes productions that did not match any of the sentence structures. Again, results are averaged over all judgments for each sentence and over all participants.
Model predictions.
Model predictions for grammaticality judgments under strong sampling and weak sampling assumptions. The exception verb V4 is never shown in C2.
Two different sampling assumptions for language learning.
(a) Under the weak sampling assumption, the learner infers a mapping from sentence constructions (C1, C2, etc.) to grammaticality labels without making assumptions about how the sentences are generated. (b) Under the strong sampling assumption, sentences are assumed to be generated from the distribution that the learner seeks to estimate.
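To make the contrast concrete, here is a minimal Bayesian sketch of the two sampling assumptions (an illustration under simplifying assumptions, not the models reported in the paper; the candidate grammars and sentences below are invented for the example). Under strong sampling, each observed sentence is scored as a draw from the grammar itself, so smaller grammars that exclude the unseen exception are favored (the size principle); under weak sampling, every grammar consistent with the data receives the same likelihood, so the exception construction is never penalized for being absent.

```python
# Minimal sketch of strong vs. weak sampling in a Bayesian concept
# learner (illustrative only; hypotheses and data are invented).

def posterior(hypotheses, priors, examples, strong=True):
    """Posterior over candidate grammars given positive examples.

    Strong sampling: examples are drawn from the grammar, so each has
    likelihood 1/|h| (the size principle). Weak sampling: examples are
    generated independently of the grammar, so the likelihood is the
    same constant for every grammar consistent with the data.
    """
    scores = []
    for h, p in zip(hypotheses, priors):
        if all(x in h for x in examples):  # grammar must license the data
            lik = (1.0 / len(h)) ** len(examples) if strong else 1.0
        else:
            lik = 0.0
        scores.append(p * lik)
    z = sum(scores)
    return [s / z for s in scores]

# A small grammar without the exception construction "V4 C2", and a
# larger grammar that also licenses it.
h_small = {"V1 C1", "V1 C2", "V2 C1", "V2 C2", "V3 C1", "V3 C2", "V4 C1"}
h_large = h_small | {"V4 C2"}
data = ["V1 C1", "V2 C2", "V3 C1", "V4 C1"]

print(posterior([h_small, h_large], [0.5, 0.5], data, strong=True))
# -> smaller grammar favored: the exception is judged ungrammatical
print(posterior([h_small, h_large], [0.5, 0.5], data, strong=False))
# -> both consistent grammars tied: the exception stays plausible
```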
Artificial language used in initial simulations and Experiment 1.
Presentation of linguistic input in Experiment 1.
The strong sampling condition presented (a) positive examples generated by a speaker of the language and (b) negative examples generated by a non-speaker. The weak sampling condition presented (c) positive and (d) negative examples as feedback on a prediction about grammaticality. Note that because verb-action pairings were randomized between subjects, the same verb does not correspond to the same actions in the different conditions.
Results of Experiment 3.
This language involved learning rules governing modifier contractions. All modifiers could appear in both positions, but only M1 was shown to be grammatical when contracted in both positions; M2 and M3 were grammatical only when contracted in one position. Thus, in the weak sampling condition, M2 and M3 were shown to be ungrammatical when contracted in P1 and P2, respectively. The exception modifier-construction was M4 in P2; that is, M4 was never shown contracted in position P2 during training, for either the models or the human participants. (a) Strong sampling and weak sampling model predictions for grammaticality judgments for contraction of each modifier, M1-M4. The vertical axis shows predicted grammaticality, and the horizontal axis shows the two positions, P1 and P2, in which contractions could occur. (b) Human grammaticality judgments, showing the proportion of times each sentence was judged grammatical. (c) Human sentence-completion results, showing the proportion of times that contraction was chosen over no contraction for each modifier in each position.
Results of Experiment 2.
In this language, the absent construction was verb V5 in sentence structure C2. (a) Grammaticality judgments, showing the proportion of times each sentence was judged grammatical for each of the five verbs (V1-V5) in the artificial language. As in Experiment 1, participants in the weak sampling condition rated the exception construction, V5 in C2, as grammatical significantly more often than participants in the strong sampling condition, as our models predict. (b) Human production results, showing the proportion of productions made in each sentence structure for each verb; X denotes productions that did not match any of the sentence structures.
Artificial language used in simulations and Experiment 2.
Rational metareasoning and the plasticity of cognitive control
The human brain has the impressive capacity to adapt how it processes information to high-level goals. While it is known that these cognitive control skills are malleable and can be improved through training, the underlying plasticity mechanisms are not well understood. Here, we develop and evaluate a model of how people learn when to exert cognitive control, which controlled process to use, and how much effort to exert. We derive this model from a general theory according to which the function of cognitive control is to select and configure neural pathways so as to make optimal use of finite time and limited computational resources. The central idea of our Learned Value of Control model is that people use reinforcement learning to predict the value of candidate control signals of different types and intensities based on stimulus features. This model correctly predicts the learning and transfer effects underlying the adaptive control-demanding behavior observed in an experiment on visual attention and four experiments on interference control in Stroop and Flanker paradigms. Moreover, our model explained these findings significantly better than an associative learning model and a Win-Stay Lose-Shift model. Our findings elucidate how learning and experience might shape people’s ability and propensity to adaptively control their minds and behavior. We conclude by predicting under which circumstances these learning mechanisms might lead to self-control failure.
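As a rough sketch of the central idea, the loop below learns a linear value function over stimulus features and candidate control-signal intensities, then selects the signal with the highest predicted value net of effort cost. The feature map, epsilon-greedy choice rule, and delta-rule update are simplifications introduced here for exposition; they are not the LVOC model's actual machinery.

```python
import random

# Schematic "learned value of control" loop (illustrative; the LVOC
# model in the paper is more sophisticated than this sketch).

def features(stimulus, signal):
    """Hypothetical feature map: bias term, stimulus features,
    control-signal intensity, and stimulus-by-signal interactions."""
    return [1.0] + stimulus + [signal] + [s * signal for s in stimulus]

def predicted_value(w, stimulus, signal):
    return sum(wi * fi for wi, fi in zip(w, features(stimulus, signal)))

def choose_signal(w, stimulus, signals, epsilon=0.1):
    """Epsilon-greedy choice among candidate control-signal intensities."""
    if random.random() < epsilon:
        return random.choice(signals)
    return max(signals, key=lambda s: predicted_value(w, stimulus, s))

def update(w, stimulus, signal, reward, effort_cost, lr=0.05):
    """Delta-rule update toward the reward net of the effort cost of
    exerting control at the chosen intensity."""
    target = reward - effort_cost * signal
    error = target - predicted_value(w, stimulus, signal)
    return [wi + lr * error * fi
            for wi, fi in zip(w, features(stimulus, signal))]

# Example trial: two hypothetical stimulus features and three candidate
# control intensities (e.g., how strongly to bias the task-relevant
# pathway on an incongruent Stroop trial).
w = [0.0] * 6
stimulus, signals = [1.0, 0.3], [0.0, 0.5, 1.0]
s = choose_signal(w, stimulus, signals)
w = update(w, stimulus, s, reward=1.0, effort_cost=0.4)
```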
Metacognitive reinforcement learning captures the effect of reward on learning from experienced conflict observed by Braem et al. (2012).
(a) Illustration of the Flanker task used by Braem et al. (2012). (b) Human data from Braem et al. (2012). (c) Fit of the LVOC model. (d) Fit of the Rescorla-Wagner model. (e) Fit of the Win-Stay Lose-Shift model.
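For reference, the associative-learning baseline in panel (d) is the classic Rescorla-Wagner rule; a minimal sketch follows (the shared-prediction-error form is standard, but the exact variant and parameterization fit in the paper may differ).

```python
def rescorla_wagner(v, cues_present, outcome, alpha=0.1):
    """Classic Rescorla-Wagner update: the associative strengths of all
    present cues move toward the outcome in proportion to the shared
    prediction error (illustrative baseline, not the paper's exact fit)."""
    error = outcome - sum(v[c] for c in cues_present)
    for c in cues_present:
        v[c] += alpha * error
    return v

# Example: hypothetical cues on a rewarded Flanker trial.
v = {"target": 0.0, "flankers": 0.0}
v = rescorla_wagner(v, ["target", "flankers"], outcome=1.0)
```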