Exploring the neuro-computational mechanisms underlying age-related changes in complex decision-making
Over the last decade, research in decision-making has made remarkable advances in understanding how the relative engagement of model-based and model-free decision-making changes with healthy aging. While we are beginning to understand the factors that affect older adults' shift away from model-based decision-making, the exact mechanisms at play remain poorly understood. This dissertation presents empirical findings as well as a novel theory that together aim to advance our understanding of these neuro-computational mechanisms. Chapter 2 demonstrates that, in contrast to younger adults, older adults do not benefit from more distinct probabilistic transitions between stages in a two-step decision-making task. By examining trial-by-trial neuro-computational dynamics, this first empirical paper provides evidence for age-related deficits in the ability to represent probabilistic transitions and to predict the value of upcoming choice options. Chapter 3 presents a novel theory: the diminished state space theory of human aging. This theoretical contribution proposes that older adults' deficits in model-based learning are due to underlying difficulties in representing state spaces. Chapter 4 examines one of the computational explanations put forward in this theoretical paper: namely, that older adults' diminished state spaces may be explained (at least in part) by difficulties updating their internal task representation. In line with this hypothesis, results demonstrate that, in contrast to younger adults, older adults have difficulty identifying outcomes that signal the need to update their internal model. Together, these findings suggest that older adults' deficits in model-based decision-making can be explained by their diminished state space representations, which in turn may result, at least in part, from difficulty updating their internal model during cognitive tasks. Ultimately, this dissertation provides important insights into older adults' deficits and opens future directions for the study of age-related changes in representational abilities.
The eyes know it: Toddlers' visual scanning of sad faces is predicted by their theory of mind skills
The current research explored toddlers' gaze fixation during a scene showing a person expressing sadness after a ball is stolen from her. The relation between the duration of gaze fixation on different parts of the person's sad face (e.g., eyes, mouth) and theory of mind skills was examined. Eye-tracking data indicated that before the actor experienced the negative event, toddlers divided their fixation equally between the actor's happy face and other distracting objects, but looked longer at the face after the ball was stolen and she expressed sadness. The strongest predictor of increased focus on the sad face versus other elements of the scene was toddlers' ability to predict others' emotional reactions when outcomes fulfilled (happiness) or failed to fulfill (sadness) desires, whereas toddlers' visual perspective-taking skills predicted their more specific focus on the actor's eyes and, for boys only, mouth. Furthermore, gender differences emerged in toddlers' fixation on parts of the scene. Taken together, these findings suggest that top-down processes are involved in toddlers' scanning of emotional facial expressions.
Does mutual exclusivity guide infants’ interpretation of novel labels during categorization?
Labeling objects during categorization tasks has been repeatedly shown to help infants categorize objects by highlighting their commonalities. Although much work supports this label-as-category-marker hypothesis, other findings support a label-as-feature hypothesis. According to this view, labels start as object features and only become category markers later in childhood. Developing in parallel, infants appear to rely on specific word-learning principles based on their linguistic experience. That is, monolingual infants have repeatedly been shown to use a disambiguation heuristic to map novel words onto novel objects. The aim of the current study was therefore to examine how monolingual infants categorize objects in an interactive categorization task when presented with one or two labels. Based on previous work, we hypothesized that 18-month-old monolinguals would perform significantly worse when objects were given two labels than when they were given a single label. We also administered a mutual exclusivity task to examine whether toddlers' expectation of a one-to-one mapping between words and object kinds is related to their performance on the categorization task. Unexpectedly, toddlers' categorization was enhanced both when objects were given one label and when they were given two. We discuss these findings and suggest that future work should use eye tracking to examine how monolingual infants process a second novel label during category formation, and whether this processing relates to their capacity for disambiguation.