    Visual dynamics of cross situational word learning, object perception, and discrimination

    In order to learn a new word, young children must bring together processes of visual attention, visual looking and learning, visual binding of what is where, and processes for coordinating, forming, and updating word-object links across multiple presentations. Recent research has explored the problem of word learning via the cross-situational paradigm because it enables researchers to investigate how children learn words from ambiguous presentations of multiple words and objects extended over time. More recent research has begun to explore the mechanisms that support successful word learning in these experiments. Through manipulations of word and object orders, research has shown that memory and attentional looking are vital components of word learning. The aim of this thesis is to use novel stimuli with tightly controlled visual properties to better understand the process of word learning in a cross-situational paradigm. Overall, the results support two perspectives on cross-situational word learning: one of gradual learning through attentional looking and association building, and one of preferential object-driven (familiar/novel) looking. These findings were replicated across all three of our cross-situational studies regardless of procedure and stimuli. Although our stimuli with tightly controlled visual properties were difficult to discriminate, we do not believe they are the sole cause of our failure to find word learning in children as a group, as our object discrimination experiment found that participants were able to discriminate between the stimuli. Rather, we believe that our failure to find overall learning in the cross-situational word learning task, despite three near replications, suggests that this paradigm may not support robust word learning in infants. Thus, we conclude that there is a need for more investigation into how individual differences in looking dynamics over the course of training may influence later word learning.
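
    The gradual, association-building account mentioned above can be made concrete with a minimal cross-situational learner that simply accumulates word-object co-occurrence counts across individually ambiguous trials. This is a sketch with hypothetical words and objects, not the model tested in the thesis:

```python
from collections import defaultdict

class CrossSituationalLearner:
    """Minimal associative learner: accumulates word-object
    co-occurrence counts across individually ambiguous trials."""

    def __init__(self):
        self.counts = defaultdict(lambda: defaultdict(int))

    def observe(self, words, objects):
        # On each trial several words are heard while several objects are
        # in view; no single trial says which word goes with which object.
        for word in words:
            for obj in objects:
                self.counts[word][obj] += 1

    def referent(self, word):
        # Across trials, the true referent co-occurs with its word most often.
        pairs = self.counts[word]
        return max(pairs, key=pairs.get) if pairs else None

learner = CrossSituationalLearner()
learner.observe(["dax", "blick"], ["ball", "cup"])   # ambiguous trial 1
learner.observe(["dax", "wug"], ["ball", "shoe"])    # ambiguous trial 2
print(learner.referent("dax"))  # -> "ball", disambiguated across trials
```

    Because the true referent co-occurs with its word on every trial while the foils vary, the counts disambiguate across situations even though no single trial is informative on its own.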

    The Impact of Perceived Emotions on Early Word Learning

    Given that young word learners acquire word-world associations in social interactions full of emotional expressions (Clark, 2016; Fernald et al., 1989), and that others’ emotional expressions affect individuals’ attention allocation and memory during learning (Dolan & Vuilleumier, 2003; Kensinger, 2004; Yiend, 2010), the emotions perceived by individuals should influence the learning process and its outcome. Although infants allocate more attention to emotional vocal and facial expressions than to affectively neutral expressions (e.g., Cooper & Aslin, 1990; Grossmann et al., 2011) and to objects associated with negative emotions (e.g., Carver & Vaccaro, 2007), the impact of perceived emotions on early word learning remains unclear. To address this question, the current work comprises three eye tracking experiments measuring the proportion looking time of 24-, 30-, and 36-month-old toddlers and of adults as they learned three novel label-object associations in affectively neutral, positive, and negative contexts in a referent selection learning task, and as they recognised the newly learned label-object and emotion-object associations in retention testing tasks. The first experiment (Chapter 2) examined whether perceived emotions influence adults’ and 30-month-old toddlers’ learning and retention of label-object and emotion-object associations, and compared adults’ and toddlers’ looking behaviours during learning. Results suggested that recognition of a newly learned association reflects both memory ability and the outcome of a competition for attention between top-down and bottom-up processing. Adults demonstrated mature memory ability and top-down control and recognised all the label-object and emotion-object associations, but toddlers’ retention may have been interfered with by the presence of a salient negative distractor. Building on these findings, the second experiment (Chapter 3) investigated the possibility that the salient negative distractor masked 30-month-olds’ retention and explored the implicit impact of negative objects on toddlers’ visual attention. After the negative distractor was removed from half of a retention task, the 30-month-olds successfully recognised all the label-object associations regardless of the emotions the objects were associated with. Regarding the implicit impact of negative objects, toddlers tended to look at the negative object when it was presented, suggesting that it captured their visual attention relative to its neutral and positive counterparts; that is, a negativity bias was found. The third experiment (Chapter 4) measured the word learning outcome and the retention of emotion-object associations in 24-month-old and 36-month-old toddlers to further examine the effect of perceived emotions on early learning. The older toddlers’ word learning was not affected by the perceived emotions, while the younger toddlers’ word learning was promoted by perceived negative affect during learning. Both age groups recognised only the negative emotion-object associations, revealing that the ability to memorise associations between emotional cues and objects is still developing at 36 months. Overall, for toddlers as young as 24 months, perceived negative affect facilitates the learning of label-object associations, but for toddlers of 30 months and older, word learning is not influenced by the perceived emotions. Meanwhile, toddlers’ visual attention is interfered with by distractors associated with negative affect, suggesting a negativity bias in visual processing. Additionally, the finding that a negativity bias affects toddlers’ visual attention raises a methodological issue: the reliability of proportion looking time as an index of retention is undermined when perceptually salient competitors are present. All in all, the current thesis demonstrates not only the impact of perceived emotions on early word learning, but also methodological considerations for eye tracking word learning experiments.
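
    For readers unfamiliar with the measure, proportion looking time is the fraction of on-screen gaze samples that fall on a target area of interest. A minimal sketch, assuming a hypothetical sample format, shows both the index and why a salient competitor can undermine it:

```python
import math

def proportion_looking(samples, target="target"):
    """Fraction of on-screen gaze samples on the target area of interest.

    `samples` uses a hypothetical format: one AOI label per eye-tracker
    sample ("target", "distractor", ...), with None for off-screen gaze.
    """
    on_screen = [s for s in samples if s is not None]
    if not on_screen:
        return math.nan
    return sum(s == target for s in on_screen) / len(on_screen)

# In a two-object display, chance is 0.5, and above-chance target looking
# is read as retention; a perceptually salient competitor that captures
# gaze deflates the index even if the association was in fact learned.
trial = ["target", "target", "distractor", None, "target"]
print(proportion_looking(trial))  # 0.75
```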

    Visual and verbal serial list learning in patients with statistically-determined mild cognitive impairment

    Objective: To compare verbal versus visual serial list learning test performance in mild cognitive impairment (MCI) and to assess relationships between serial list learning and hippocampal volume. Methods: Patients were diagnosed with non-MCI, amnestic MCI (aMCI), or combined mixed/dysexecutive MCI (mixed/dysMCI). Outcome measures included immediate/delayed free recall and delayed recognition performance from the 12-word Philadelphia (repeatable) Verbal Learning Test [P(r)VLT] and the Brief Visuospatial Memory Test-Revised (BVMT-R). Lateral hippocampal volumes were obtained. Results: Non-MCI patients scored better than the other groups on P(r)VLT immediate/delayed free recall. aMCI patients scored lower than the other groups on P(r)VLT delayed recognition. Non-MCI patients were superior to both MCI groups on all BVMT-R parameters. All groups scored lower on the BVMT-R than on the analogous P(r)VLT parameters. Better P(r)VLT immediate/delayed free recall was associated with greater left hippocampal volume. BVMT-R 2-point (full credit) responses were associated with greater right hippocampal volume; memory for object location was associated with left hippocampal volume. Conclusions: Both serial list learning tests identify memory impairment. The association between the BVMT-R and bilateral hippocampal volumes suggests that a wider neurocognitive network may be recruited for visual serial list learning. These data suggest that visual serial list learning may be particularly sensitive to emergent cognitive impairment.

    Automatic Discovery, Association Estimation and Learning of Semantic Attributes for a Thousand Categories

    Attribute-based recognition models, due to their impressive performance and their ability to generalize well to novel categories, have been widely adopted for many computer vision applications. However, usually both the attribute vocabulary and the class-attribute associations have to be provided manually by domain experts or by a large number of annotators. This is very costly, not necessarily optimal for recognition performance, and, most importantly, it limits the applicability of attribute-based models to large-scale data sets. To tackle this problem, we propose an end-to-end unsupervised attribute learning approach. We utilize online text corpora to automatically discover a salient and discriminative vocabulary that correlates well with the human concept of semantic attributes. Moreover, we propose a deep convolutional model that optimizes class-attribute associations with a linguistic prior accounting for noise and missing data in the text. In a thorough evaluation on ImageNet, we demonstrate that our model is able to efficiently discover and learn semantic attributes at a large scale. Furthermore, we demonstrate that our model outperforms the state of the art in zero-shot learning on three data sets: ImageNet, Animals with Attributes, and aPascal/aYahoo. Finally, we enable attribute-based learning on ImageNet and will share the attributes and associations for future research.

    Comment: Accepted as a conference paper at CVPR 2017
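
    The zero-shot step described in this abstract can be sketched in a few lines: given an image's predicted attribute scores and a class-attribute association matrix, unseen classes are ranked by attribute compatibility. The following is a minimal illustration with made-up data, not the paper's deep convolutional model or its linguistic prior:

```python
import numpy as np

def zero_shot_predict(attr_scores, class_attr):
    """Score unseen classes by compatibility between predicted image
    attributes and a class-attribute association matrix.

    attr_scores: (n_attributes,) attribute predictions for one image
                 (e.g., from an attribute classifier).
    class_attr:  (n_classes, n_attributes) association matrix, here
                 assumed to have been mined from text corpora.
    """
    # Cosine similarity between the image's attribute signature and
    # each class's attribute signature; the highest-scoring class wins.
    a = attr_scores / (np.linalg.norm(attr_scores) + 1e-8)
    c = class_attr / (np.linalg.norm(class_attr, axis=1, keepdims=True) + 1e-8)
    return int(np.argmax(c @ a))

# Toy example: 3 unseen classes described by 4 hypothetical attributes.
M = np.array([[1, 0, 1, 0],   # class 0: e.g., "striped", "four-legged"
              [0, 1, 1, 0],   # class 1
              [0, 0, 0, 1]])  # class 2
print(zero_shot_predict(np.array([0.9, 0.1, 0.8, 0.0]), M))  # -> 0
```

    The point of the design is that no image of the unseen classes is needed at training time: only their attribute signatures, which the paper obtains automatically rather than from annotators.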

    Semantic memory

    The Encyclopedia of Human Behavior, Second Edition, is a comprehensive three-volume reference source on human action and reaction, and the thoughts, feelings, and physiological functions behind those actions.