32 research outputs found

    Reading positional codes with fMRI: Problems and solutions.

    Neural mechanisms which bind items into sequences have been investigated in a large body of research in animal neurophysiology and human neuroimaging. However, a major problem in interpreting these data arises from the fact that several unrelated processes, such as memory load, sensory adaptation, and reward expectation, also change in a consistent manner as the sequence unfolds. In this paper we use computational simulations and data from two fMRI experiments to show that a host of unrelated neural processes can masquerade as sequence representations. We show that dissociating such unrelated processes from a dedicated sequence representation is an especially difficult problem for fMRI data, which is almost exclusively the modality used in human experiments. We suggest that such fMRI results must be treated with caution and that in many cases the assumed neural representation might actually reflect unrelated processes. This study was funded via the UK Medical Research Council intramural grant MCA060-5PR30.
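
    The core caveat here, that a process changing monotonically across sequence positions can be read out as if it were a positional code, is easy to reproduce in simulation. Below is a minimal sketch in Python, not taken from the paper: the voxel count, noise level, and load-like gain are arbitrary assumptions, but a linear classifier still "decodes" position from a signal that contains no dedicated positional representation.

        # Simulated "voxels" whose only position-dependent signal is a monotonic
        # gain (a stand-in for memory load); no positional code is built in.
        import numpy as np
        from sklearn.linear_model import LogisticRegression
        from sklearn.model_selection import cross_val_score

        rng = np.random.default_rng(0)
        n_trials, n_voxels, n_positions = 200, 50, 4

        base_pattern = rng.normal(size=n_voxels)            # one fixed activity pattern
        positions = rng.integers(0, n_positions, n_trials)  # sequence position per trial
        gain = 1.0 + 0.5 * positions                        # "load" rises with position
        X = gain[:, None] * base_pattern + rng.normal(scale=1.0, size=(n_trials, n_voxels))

        # Cross-validated decoding is well above the 25% chance level, so the
        # monotonic confound masquerades as a positional representation.
        acc = cross_val_score(LogisticRegression(max_iter=1000), X, positions, cv=5).mean()
        print(f"decoding accuracy: {acc:.2f} (chance = {1 / n_positions:.2f})")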

    Visual recency bias is explained by a mixture model of internal representations.

    Human bias towards more recent events is a common and well-studied phenomenon. Recent studies in visual perception have shown that this recency bias persists even when past events contain no information about the future. The reasons for this suboptimal behavior are not well understood, and the internal model that leads people to exhibit the recency bias is unknown. Here we use a well-known orientation estimation task to frame the human recency bias in terms of incremental Bayesian inference. We show that the only Bayesian model capable of explaining the recency bias relies on a weighted mixture of past states. Furthermore, we suggest that this mixture model is a consequence of participants' failure to infer a model for data in visual short-term memory, and reflects the nature of the internal representations used in the task.
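
    A toy version of the mixture idea can make the claimed mechanism concrete. The sketch below is not the paper's model: the weights, noise level, and uniform stimulus distribution are assumptions, but an estimator whose internal state mixes in past stimuli shows a recency bias even though past trials carry no information about the current one.

        # Toy incremental estimator whose internal state is a weighted mixture of
        # past stimuli; its errors are pulled toward recently seen orientations.
        import numpy as np

        rng = np.random.default_rng(1)
        n_trials = 10_000
        stimuli = rng.uniform(-45, 45, n_trials)              # orientation re. vertical (deg)
        observations = stimuli + rng.normal(scale=8.0, size=n_trials)

        w_obs, decay = 0.8, 0.6                               # mixture weight, memory decay
        memory, estimates = 0.0, np.zeros(n_trials)
        for t in range(n_trials):
            # The estimate mixes the noisy observation with a running mixture of past states.
            estimates[t] = w_obs * observations[t] + (1 - w_obs) * memory
            memory = decay * memory + (1 - decay) * stimuli[t]

        # Positive slope: estimation errors lean toward the previous trial's orientation.
        errors = estimates[1:] - stimuli[1:]
        print("recency bias slope:", np.polyfit(stimuli[:-1], errors, 1)[0])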

    Classifying complex documents: comparing bespoke solutions to large language models

    Here we search for the best automated classification approach for a set of complex legal documents. Our classification task is not trivial: we aim to classify ca. 30,000 public courthouse records from 12 states and 267 counties at two different levels, using nine sub-categories. Specifically, we investigate whether a fine-tuned large language model (LLM) can achieve the accuracy of a bespoke custom-trained model, and how much fine-tuning is necessary.
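
    For context, the bespoke side of such a comparison is often a conventional supervised text classifier. The sketch below, in Python with scikit-learn, is only an illustration of that kind of baseline and not the paper's pipeline; the texts and labels inputs are hypothetical placeholders for the courthouse records and their category codes.

        # Hypothetical bespoke baseline: TF-IDF features plus a linear classifier,
        # the sort of custom-trained model a fine-tuned LLM would be compared against.
        from sklearn.feature_extraction.text import TfidfVectorizer
        from sklearn.linear_model import LogisticRegression
        from sklearn.metrics import classification_report
        from sklearn.model_selection import train_test_split
        from sklearn.pipeline import make_pipeline

        def train_baseline(texts, labels):
            """texts: list of document strings; labels: e.g. one of the nine sub-categories."""
            X_tr, X_te, y_tr, y_te = train_test_split(
                texts, labels, test_size=0.2, stratify=labels, random_state=0)
            model = make_pipeline(
                TfidfVectorizer(ngram_range=(1, 2), min_df=5, max_features=100_000),
                LogisticRegression(max_iter=2000),
            )
            model.fit(X_tr, y_tr)
            print(classification_report(y_te, model.predict(X_te)))
            return model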

    Sequence learning recodes cortical representations instead of strengthening initial ones.

    We contrast two computational models of sequence learning. The associative learner posits that learning proceeds by strengthening existing association weights. Alternatively, recoding posits that learning creates new and more efficient representations of the learned sequences. Importantly, both models propose that humans act as optimal learners but capture different statistics of the stimuli in their internal models. Furthermore, these models make dissociable predictions as to how learning changes the neural representation of sequences. We tested these predictions by using fMRI to extract neural activity patterns from the dorsal visual processing stream during a sequence recall task. We observed that only the recoding account can explain the similarity of the neural activity patterns, suggesting that participants recode the learned sequences using chunks. We show that associative learning can theoretically store only a very limited number of overlapping sequences, such as those common in ecological working memory tasks, and hence an efficient learner should recode initial sequence representations.
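
    The capacity argument about overlapping sequences is easy to see in a toy example. The sketch below is not the paper's model; the items and sequences are arbitrary, but they show how a pairwise associative learner becomes ambiguous as soon as two stored sequences share a transition point, whereas recoding each sequence as its own chunk avoids the interference.

        # Pairwise associative learner: sequences are stored as item-to-item weights.
        import numpy as np

        items = "ABCD"
        index = {ch: i for i, ch in enumerate(items)}
        W = np.zeros((len(items), len(items)))       # W[i, j]: association strength i -> j

        for seq in ["ABCD", "ABDC"]:                 # two overlapping sequences
            for a, b in zip(seq, seq[1:]):
                W[index[a], index[b]] += 1.0         # strengthen existing associations

        # After "B" the learner cannot tell whether "C" or "D" comes next, so neither
        # sequence can be recalled reliably from the association weights alone.
        successors = W[index["B"]] / W[index["B"]].sum()
        print(dict(zip(items, successors)))          # C and D are equally likely

        # A recoding learner instead stores each sequence as a distinct chunk,
        # e.g. chunks = {"ABCD": 0, "ABDC": 1}, so the overlap causes no interference.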

    Parakeele kasutus uues meedias (The Use of Paralanguage in New Media)

    This thesis presents the hypothesis that a significant part of the paralanguage used in new media can be learned by imitation, and tests it with a connectionist neural network trained on a new media text corpus. The network is trained to detect patterns in the corpus that relate the use of paralanguage to its context. For this purpose, the corpus is divided into training and testing data. Both sets consist of pairs of identical sentences from the corpus, where the latter sentence of each pair has its paralanguage syntax removed. The network is trained to predict the use of paralanguage by comparing the sentences with and without it. No guidelines or rules concerning paralanguage are given to the network: it learns by comparing its prediction to the correct answer in the training set and back-propagating the error until a significant part of the predictions turn out to be correct. By this method, the network starts to "learn" the paralanguage. The thesis investigates how much of new media paralanguage can be learned in this way and whether the learning process can be described as imitational. The network is analysed by examining the global learning error rate and by coding the corpus with different levels of information. The results show that if sentences from new media texts are coded with context information (such as punctuation and the number and position of the sentence in the dialogue), the network is able to predict approximately 30% of paralanguage usage correctly in isolated sentences or speech acts. The thesis concludes that 30% is a significant part of the paralanguage and that the learning process is better described as imitational than as rule-based semantic learning. The results lead to several conclusions: a significant part of the paralanguage in new media can be generated correctly without knowledge of its actual meaning; although the learning process can be described as imitational, learning is not possible until context information about the conversation and dialogue is presented to the network; and this in turn suggests that paralanguage in new media plays a significant role as a facilitator of communication. http://tartu.ester.ee/record=b2114773~S1*es
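
    A minimal sketch of the kind of network described here, written in Python with numpy, is shown below. It is not the thesis's network: the feature set (e.g. punctuation counts and sentence position in the dialogue), layer sizes, and learning rate are assumptions, but it illustrates learning purely by back-propagating the prediction error, with no paralanguage rules supplied.

        # Tiny feed-forward network: context features -> probability that a sentence
        # carries paralanguage. Trained only by back-propagating prediction error.
        import numpy as np

        def sigmoid(z):
            return 1.0 / (1.0 + np.exp(-z))

        def train(X, y, hidden=16, lr=0.1, epochs=500, seed=0):
            """X: (n, d) context features; y: (n,) 1 if the sentence uses paralanguage."""
            rng = np.random.default_rng(seed)
            W1 = rng.normal(scale=0.1, size=(X.shape[1], hidden))
            W2 = rng.normal(scale=0.1, size=(hidden, 1))
            for _ in range(epochs):
                h = sigmoid(X @ W1)                  # hidden-layer activations
                p = sigmoid(h @ W2)                  # predicted probability of paralanguage
                err = p - y[:, None]                 # prediction error
                W2 -= lr * h.T @ err / len(X)        # back-propagate the error ...
                W1 -= lr * X.T @ (err @ W2.T * h * (1 - h)) / len(X)   # ... through both layers
            return W1, W2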

    Recall is not necessary for verbal sequence learning.

    The question of whether overt recall of to-be-remembered material accelerates learning is important in a wide range of real-world learning settings. In the case of verbal sequence learning, previous research has proposed that recall is either necessary for verbal sequence learning (Cohen & Johansson, Journal of Verbal Learning and Verbal Behavior, 6, 139-143, 1967; Cunningham, Healy, & Williams, Journal of Experimental Psychology: Learning, Memory, and Cognition, 10, 575-597, 1984) or at least contributes significantly to it (Glass, Krejci, & Goldman, Journal of Memory and Language, 28, 189-199, 1989; Oberauer & Meyer, Memory, 17, 774-781, 2009). In contrast, here we show that the amount of previous spoken recall does not predict learning and is not necessary for it. We suggest that previous research may have underestimated participants' learning by using suboptimal performance measures, or by using manual or written recall. However, we show that the amount of spoken recall predicted how much interference from other to-be-remembered sequences would be observed. In fact, spoken recall mediated most of the error learning observed in the task. Our data support the view that the learning of overlapping auditory-verbal sequences is driven by learning the phonological representations and not the articulatory motor responses. However, spoken recall seems to reinforce already learned representations, whether they are correct or incorrect, thus contributing to a participant identifying a specific stimulus as either "learned" or "new" during the presentation phase.