
    Modelling the perceptual similarity of facial expressions from image statistics and neural responses

    The ability to perceive facial expressions of emotion is essential for effective social communication. We investigated how the perception of facial expression emerges from the image properties that convey this important social signal, and how neural responses in face-selective brain regions might track these properties. To do this, we measured the perceptual similarity between expressions of basic emotions, and investigated how this is reflected in image measures and in the neural response of different face-selective regions. We show that the perceptual similarity of different facial expressions (fear, anger, disgust, sadness, happiness) can be predicted by both surface and feature shape information in the image. Using block-design fMRI, we found that the perceptual similarity of expressions could also be predicted from the patterns of neural response in the face-selective posterior superior temporal sulcus (STS), but not in the fusiform face area (FFA). These results show that the perception of facial expression is dependent on the shape and surface properties of the image and on the activity of specific face-selective regions.
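As a concrete illustration of the analysis logic summarised above (comparing perceptual similarity with the similarity of multi-voxel response patterns), here is a minimal sketch in Python; the data, shapes, and variable names are placeholders I have assumed, not the authors' pipeline.

```python
# Minimal representational-similarity sketch: correlate the perceptual
# dissimilarity of five expressions with the dissimilarity of multi-voxel
# response patterns (all inputs are random placeholders).
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

expressions = ["fear", "anger", "disgust", "sadness", "happiness"]
rng = np.random.default_rng(0)

# Placeholders for behavioural similarity ratings and ROI response patterns.
perceptual_rdm = pdist(rng.random((len(expressions), 3)))    # condensed 5x5 RDM
roi_patterns = rng.random((len(expressions), 200))           # expression x voxel
neural_rdm = pdist(roi_patterns, metric="correlation")       # 1 - r between patterns

rho, p = spearmanr(perceptual_rdm, neural_rdm)
print(f"perceptual-neural RDM correlation: rho = {rho:.2f}, p = {p:.3f}")
```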

    Change blindness: eradication of gestalt strategies

    Arrays of eight texture-defined rectangles were used as stimuli in a one-shot change blindness (CB) task where there was a 50% chance that one rectangle would change orientation between two successive presentations separated by an interval. CB was eliminated by cueing the target rectangle in the first stimulus, reduced by cueing in the interval, and unaffected by cueing in the second presentation. This supports the idea that a representation was formed that persisted through the interval before being 'overwritten' by the second presentation (Landman et al., 2003, Vision Research, 43, 149–164). Another possibility is that participants used some kind of grouping or Gestalt strategy. To test this, we changed the spatial position of the rectangles in the second presentation by shifting them along imaginary spokes (by ±1 degree) emanating from the central fixation point. There was no significant difference in performance between this and the standard task [F(1,4)=2.565, p=0.185]. This suggests two things: (i) Gestalt grouping is not used as a strategy in these tasks, and (ii) it lends further weight to the argument that objects may be stored in and retrieved from a pre-attentional store during this task.
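For illustration only, here is a small sketch of the trial logic described above: one of eight rectangles changes orientation on 50% of trials, and the control manipulation shifts all rectangles by ±1 degree along imaginary spokes from fixation. The display geometry and parameters are assumed, not taken from the study.

```python
# Illustrative one-shot change-blindness trial: 8 rectangles, 50% chance that
# one changes orientation, optional +/-1 deg radial shift in the second display.
import math
import random

def make_display(n_items=8, radius_deg=5.0):
    """Place rectangles on a ring around fixation; orientation is 0 or 90 deg."""
    items = []
    for i in range(n_items):
        angle = 2 * math.pi * i / n_items
        items.append({"x": radius_deg * math.cos(angle),
                      "y": radius_deg * math.sin(angle),
                      "ori": random.choice([0, 90])})
    return items

def second_display(first, radial_jitter=False):
    second = [dict(item) for item in first]
    changed = random.random() < 0.5            # 50% chance of an orientation change
    if changed:
        target = random.randrange(len(second))
        second[target]["ori"] = 90 - second[target]["ori"]
    if radial_jitter:                          # shift along imaginary spokes by +/-1 deg
        for item in second:
            r = math.hypot(item["x"], item["y"])
            scale = (r + random.choice([-1.0, 1.0])) / r
            item["x"] *= scale
            item["y"] *= scale
    return second, changed

first = make_display()
second, changed = second_display(first, radial_jitter=True)
print("orientation change present:", changed)
```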

    Mapping the development of visual information use for facial expression recognition

    In this thesis, I aimed to map the development of facial expression recognition from early childhood up to adulthood by identifying, for the first time in the literature, the quantity and quality of visual information needed to recognise the six 'basic' emotions. Using behavioural and eye tracking measures, the original contributions of this thesis include: 1. An unbiased fine-grained mapping of the continued development of facial expression recognition for the six basic emotions, with the introduction of a psychophysical measure to the literature; 2. The identification of two main phases in the development of facial expression recognition, ranging from 5 to 12 years old and from 13 years old to adulthood; 3. The quantity of signal and intensity information needed to recognise the six basic emotions across development; 4. The finding that the processing of signal and intensity information becomes more discriminative during development, as less information is needed with age to recognise anger, disgust, surprise and sadness; 5. A novel analysis of response profiles (the sequence of responses across trials), which revealed subtle but important changes in the sequence of responses along a continuum of age: profiles become more similar with age due to fewer random erroneous categorizations; 6. The comparison of two recognition measures across the same cohort, revealing that two types of stimuli commonly used in facial emotion processing studies (expressions at full intensity vs. expressions of varying intensities) cannot be straightforwardly compared during development; 7. Novel eye movement analyses revealing the age at which perceptual strategies for the recognition of facial expressions of emotion become mature. An initial review of the literature revealed several less studied areas of the development of facial expression recognition, which I chose to focus on for my thesis. Firstly, at the outset of this thesis there were no studies of the continued development of facial expression recognition from early childhood up to adulthood. Similarly, there were no studies which examined all six of what are termed the 'basic emotions' and a neutral expression within the same paradigm. Therefore, the objective of the first study was to provide a fine-grained mapping of the continued development for all six basic expressions and neutral from the age of 5 up to adulthood by introducing a novel psychophysical method to the developmental literature. The psychophysical adaptive staircase procedure provided a precise measure of recognition performance across development. Using linear regression, we then charted the developmental trajectories for recognition of each of the 6 basic emotions and neutral.
This mapping of recognition across development revealed expressions that showed a steep improvement with age (disgust, neutral, and anger); expressions that showed a more gradual improvement with age (sadness, surprise); and those that remained stable from early childhood (happiness and fear), indicating that the coding for these expressions is already mature by 5 years of age. Two main phases were identified in the development of facial expression recognition, as recognition thresholds were most similar between the ages of 5 to 12 and 13 to adulthood. In the second study we aimed to take this fine-grained mapping of the development of facial expression recognition further by quantifying how much visual information is needed to recognise an expression across development, comparing two measures of visual information: signal and intensity. Again using a psychophysical approach, this time with a repeated-measures design, the quantity of signal and intensity needed to recognise expressions of sadness, anger, disgust, and surprise decreased with age. Therefore, the processing of both types of visual information becomes more discriminative during development as less information is needed with age to recognise these expressions. Mutual information analysis revealed that intensity and signal processing are similar only during adulthood and, therefore, that expressions at full intensity (as in the signal condition) and expressions of varying intensities (as in the intensity condition) cannot be straightforwardly compared during development. While the first two studies of this thesis addressed how much visual information is needed to recognise an expression across development, the aim of the third study was to investigate which information is used across development to recognise an expression, using eye-tracking. We recorded the eye movements of children from the age of 5 up to adulthood during recognition of the six basic emotions using natural viewing and gaze-contingent conditions. Multivariate statistical analysis of the eye movement data across development revealed the age at which perceptual strategies for the recognition of facial expressions of emotion become mature. The eye movement strategies of the oldest adolescent group, 17- to 18-year-olds, were most similar to those of adults for all expressions. A developmental dip in strategy similarity to adults was found for each emotional expression between 11 and 14 years, and slightly earlier, at 7 to 8 years, for happiness. Finally, recognition accuracy for happy, angry, and sad expressions did not differ across age groups, but eye movement strategies diverged, indicating that diverse approaches are possible for reaching optimal performance. In sum, the studies map the intricate and non-uniform trajectories of the development of facial expression recognition by comparing visual information use from early childhood up to adulthood. The studies chart not only how well recognition of facial expressions develops with age, but also how facial expression recognition is achieved throughout development by establishing whether perceptual strategies are similar across age and at what stage they can be considered mature. The studies aimed to provide the basis of an understanding of the continued development of facial expression recognition which was previously lacking from the literature.
Future work aims to further this understanding by investigating how facial expression recognition develops in relation to other aspects of cognitive and emotional processing, and by investigating the potential neurodevelopmental basis of the developmental dip found in fixation strategy similarity.
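As a rough illustration of how an adaptive psychophysical procedure of the kind described in this abstract converges on a recognition threshold, the sketch below simulates a simple 1-up/2-down staircase with a simulated observer; the staircase rule, step size, and observer model are assumptions, not the thesis's actual procedure.

```python
# Illustrative 1-up/2-down staircase converging on ~71% correct recognition
# (simulated observer; threshold, slope, and step size are assumed values).
import math
import random

def simulated_observer(signal, threshold=0.4, slope=10.0):
    """Probability of a correct response rises with the signal level (logistic)."""
    p_correct = 1.0 / (1.0 + math.exp(-slope * (signal - threshold)))
    return random.random() < p_correct

signal, step = 1.0, 0.05              # start at full signal, fixed step size
correct_in_a_row, reversals, last_direction = 0, [], None

while len(reversals) < 10:
    if simulated_observer(signal):
        correct_in_a_row += 1
        if correct_in_a_row == 2:     # two correct in a row -> make it harder
            correct_in_a_row = 0
            if last_direction == "up":
                reversals.append(signal)
            signal, last_direction = max(0.0, signal - step), "down"
    else:                             # one error -> make it easier
        correct_in_a_row = 0
        if last_direction == "down":
            reversals.append(signal)
        signal, last_direction = min(1.0, signal + step), "up"

# Threshold estimate: mean signal level at the later reversals.
print("estimated recognition threshold:",
      round(sum(reversals[2:]) / len(reversals[2:]), 3))
```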

    The Role of Physical Image Properties in Facial Expression and Identity Perception

    A number of attempts have been made to understand which physical image properties are important for the perception of different facial characteristics. These physical image properties have been broadly split into two categories, namely facial shape and facial surface. Current accounts of face processing suggest that whilst judgements of facial identity rely approximately equally on facial shape and surface properties, judgements of facial expression are heavily shape-dependent. This thesis presents behavioural experiments and fMRI experiments employing multi-voxel pattern analysis (MVPA) to investigate the extent to which facial shape and surface properties underpin identity and expression perception, and how these image properties are represented neurally. The first empirical chapter presents experiments showing that facial expressions are categorised approximately equally well when either facial shape or surface is the varying image cue. The second empirical chapter shows that neural patterns of response to facial expressions in the Occipital Face Area (OFA) and Superior Temporal Sulcus (STS) are reflected by patterns of perceptual similarity of the different expressions; in turn, these patterns of perceptual similarity can be predicted by both facial shape and surface properties. The third empirical chapter demonstrates that distinct patterns of neural response can be found to shape-based but not surface-based cues to facial identity in the OFA and Fusiform Face Area (FFA). The final experimental chapter in this thesis demonstrates that the newly discovered contrast chimera effect is heavily dependent on the eye region and on holistic face representations conveying facial identity. Taken together, these findings show the importance of facial surface as well as facial shape in expression perception. For facial identity, both facial shape and surface cues are important for the contrast chimera effect, although there are more consistent identity-based neural response patterns to facial shape in face-responsive brain regions.
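A condensed sketch of the multi-voxel pattern analysis (MVPA) logic referred to above, i.e., testing whether a facial attribute can be decoded from distributed response patterns with a cross-validated linear classifier. All data are simulated and the classifier choice is an assumption, not the thesis's pipeline.

```python
# Hypothetical MVPA sketch: decode expression category from simulated voxel
# patterns with a cross-validated linear classifier (data and shapes assumed).
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
expressions = ["fear", "anger", "disgust", "sadness", "happiness"]
n_trials_per_class, n_voxels = 40, 150

# Each expression gets a weak but consistent shift added to Gaussian noise.
X = np.vstack([rng.normal(loc=0.3 * i, scale=1.0, size=(n_trials_per_class, n_voxels))
               for i, _ in enumerate(expressions)])
y = np.repeat(expressions, n_trials_per_class)

clf = LogisticRegression(max_iter=1000)
scores = cross_val_score(clf, X, y, cv=5)
print("fold accuracies:", np.round(scores, 2), "| chance:", 1 / len(expressions))
```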

    Representational structure of fMRI/EEG responses to dynamic facial expressions

    Face perception provides an excellent example of how the brain processes nuanced visual differences and transforms them into behaviourally useful representations of identities and emotional expressions. While a body of literature has looked into the spatial and temporal neural processing of facial expressions, few studies have used a dimensionally varying set of stimuli containing subtle perceptual changes. In the current study, we used 48 short videos varying dimensionally in their intensity and category (happy, angry, surprised) of expression. We measured both fMRI and EEG responses to these video clips and compared the neural response patterns to the predictions of models based on image features and models derived from behavioural ratings of the stimuli. In fMRI, the inferior frontal gyrus face area (IFG-FA) carried information related only to the intensity of the expression, independent of image-based models. The superior temporal sulcus (STS), inferior temporal (IT) and lateral occipital (LO) areas contained information about both expression category and intensity. In the EEG, the coding of expression category and low-level image features was most pronounced at around 400 ms. The expression intensity model did not, however, correlate significantly at any EEG timepoint. Our results show a specific role for IFG-FA in the coding of expressions and suggest that it contains image- and category-invariant representations of expression intensity. Peer reviewed.
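A brief sketch of the time-resolved model comparison described above: at each EEG timepoint, the dissimilarity of sensor patterns across the 48 stimuli is correlated with a model representational dissimilarity matrix (e.g., expression intensity). The data, shapes, and correlation measure here are assumed for illustration, not taken from the study.

```python
# Illustrative time-resolved RSA: correlate EEG pattern dissimilarity with a
# model RDM at every timepoint (all data simulated, shapes assumed).
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

rng = np.random.default_rng(2)
n_stimuli, n_sensors, n_timepoints = 48, 64, 120

eeg = rng.normal(size=(n_stimuli, n_sensors, n_timepoints))  # stimulus x sensor x time
intensity = rng.random((n_stimuli, 1))                       # e.g., rated intensity
model_rdm = pdist(intensity)                                 # pairwise intensity differences

time_course = np.empty(n_timepoints)
for t in range(n_timepoints):
    neural_rdm = pdist(eeg[:, :, t], metric="correlation")
    rho, _ = spearmanr(neural_rdm, model_rdm)
    time_course[t] = rho

peak = int(np.argmax(time_course))
print(f"peak model correlation at timepoint {peak}: rho = {time_course[peak]:.3f}")
```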

    Generation of realistic human behaviour

    As the use of computers and robots in our everyday lives increases, so does the need for better interaction with these devices. Human-computer interaction relies on the ability to understand and generate human behavioural signals such as speech, facial expressions and motion. This thesis deals with the synthesis and evaluation of such signals, focusing not only on their intelligibility but also on their realism. Since these signals are often correlated, it is common for methods to drive the generation of one signal using another. The thesis begins by tackling the problem of speech-driven facial animation and proposing models capable of producing realistic animations from a single image and an audio clip. The goal of these models is to produce a video of a target person whose lips move in accordance with the driving audio. Particular focus is also placed on a) generating spontaneous expressions such as blinks, b) achieving audio-visual synchrony and c) transferring or producing natural head motion. The second problem addressed in this thesis is that of video-driven speech reconstruction, which aims at converting a silent video into waveforms containing speech. The method proposed for solving this problem is capable of generating intelligible and accurate speech for both seen and unseen speakers. The spoken content is correctly captured thanks to a perceptual loss, which uses features from pre-trained speech-driven animation models. The ability of the video-to-speech model to run in real time allows its use in hearing assistive devices and telecommunications. The final work proposed in this thesis is a generic domain translation system that can be used for any translation problem, including those mapping across different modalities. The framework is made up of two networks performing translations in opposite directions and can be successfully applied to solve diverse sets of translation problems, including speech-driven animation and video-driven speech reconstruction. Open Access.
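The perceptual loss mentioned above can be sketched generically as a feature-matching loss computed with a frozen pretrained network; the extractor below is a stand-in, not the thesis's pretrained speech-driven animation model, and the waveform shapes are assumed.

```python
# Hedged sketch of a feature-matching ("perceptual") loss: generated and target
# signals are compared in the feature space of a frozen pretrained network
# rather than as raw waveforms. The extractor here is a placeholder.
import torch
import torch.nn as nn

class PerceptualLoss(nn.Module):
    def __init__(self, feature_extractor: nn.Module):
        super().__init__()
        self.features = feature_extractor.eval()
        for p in self.features.parameters():      # freeze the pretrained network
            p.requires_grad_(False)
        self.l1 = nn.L1Loss()

    def forward(self, generated: torch.Tensor, target: torch.Tensor) -> torch.Tensor:
        return self.l1(self.features(generated), self.features(target))

# Usage with a placeholder extractor (assumed 1D-conv features over waveforms):
extractor = nn.Sequential(nn.Conv1d(1, 32, kernel_size=9, stride=4), nn.ReLU(),
                          nn.Conv1d(32, 64, kernel_size=9, stride=4), nn.ReLU())
loss_fn = PerceptualLoss(extractor)
fake = torch.randn(2, 1, 16000, requires_grad=True)   # batch of generated waveforms
real = torch.randn(2, 1, 16000)
loss = loss_fn(fake, real)
loss.backward()
print("perceptual loss:", float(loss))
```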

    Cultural differences in the decoding and representation of facial expression signals

    Summary. In this thesis, I will challenge one of the most fundamental assumptions of psychological science: the universality of facial expressions. I will do so by first reviewing the literature to reveal major flaws in the supporting arguments for universality. I will then present new data demonstrating how culture has shaped the decoding and transmission of facial expression signals. A summary of both sections is presented below.
Review of the Literature. To obtain a clear understanding of how the universality hypothesis developed, I will present the historical course of the emotion literature, reviewing relevant works supporting notions of a 'universal language of emotion.' Specifically, I will examine work on the recognition of facial expressions across cultures, as it constitutes a main component of the evidence for universality. First, I will reveal that a number of 'seminal' works supporting the universality hypothesis are critically flawed, precluding them from further consideration. Secondly, by questioning the validity of the statistical criteria used to demonstrate 'universal recognition,' I will show that long-standing claims of universality are both misleading and unsubstantiated. On a related note, I will detail the creation of the 'universal' facial expression stimulus set of Facial Action Coding System (FACS)-coded facial expressions to reveal that it is in fact a biased, culture-specific representation of Western facial expressions of emotion. The implications for future cross-cultural work are discussed in relation to the limited FACS-coded stimulus set.
Experimental Work. In reviewing the literature, I will reveal a latent phenomenon which has so far remained unexplained: the East Asian (EA) recognition deficit. Specifically, EA observers consistently perform significantly more poorly when categorising certain 'universal' facial expressions compared to Western Caucasian (WC) observers, a surprisingly neglected finding given the importance of emotion communication for human social interaction. To address this neglected issue, I examined both the decoding and transmission of facial expression signals in WC and EA observers.
Experiment 1: Cultural Decoding of 'Universal' Facial Expressions of Emotion. To examine the decoding of 'universal' facial expressions across cultures, I used eye tracking technology to record the eye movements of WC and EA observers while they categorised the 6 'universal' facial expressions of emotion. My behavioural results demonstrate the robustness of the phenomenon by replicating the EA recognition deficit (i.e., EA observers are significantly poorer at recognising facial expressions of 'fear' and 'disgust'). Further inspection of the data also showed that EA observers systematically miscategorise 'fear' as 'surprise' and 'disgust' as 'anger.' Using spatio-temporal analyses of fixations, I will show that WC and EA observers use culture-specific fixation strategies to decode 'universal' facial expressions of emotion. Specifically, while WC observers distribute fixations across the face, sampling the eyes and mouth, EA observers persistently bias fixations towards the eyes and neglect critical features, repetitively sampling information from the eye region particularly for the expressions eliciting significant behavioural confusions (i.e., 'fear,' 'disgust,' and 'anger'). To objectively examine whether the EA culture-specific fixation pattern could give rise to the reported behavioural confusions, I built a model observer that samples information from the face to categorise facial expressions. Using this model observer, I will show that the EA decoding strategy is inadequate to distinguish 'fear' from 'surprise' and 'disgust' from 'anger,' thus giving rise to the reported EA behavioural confusions. For the first time, I will reveal the origins of a latent phenomenon: the EA recognition deficit. I discuss the implications of culture-specific decoding strategies during facial expression categorisation in light of current theories of cross-cultural emotion communication.
Experiment 2: Cultural Internal Representations of Facial Expressions of Emotion. In the previous two experiments, I presented data that question the universality of facial expressions. As replicated in Experiment 1, WC and EA observers differ significantly in their recognition performance for certain 'universal' facial expressions. In Experiment 1, I showed culture-specific fixation patterns, demonstrating cultural differences in the predicted locations of diagnostic information. Together, these data predict cultural specificity in facial expression signals, supporting notions of cultural 'accents' and/or 'dialects.' To examine whether facial expression signals differ across cultures, I used a powerful reverse correlation (RC) technique to reveal the internal representations of the 6 'basic' facial expressions of emotion in WC and EA observers. Using complementary statistical image processing techniques to examine the signal properties of each internal representation, I will directly reveal cultural specificity in the representations of the 6 'basic' facial expressions of emotion. Specifically, I will show that while WC representations of facial expressions predominantly featured the eyebrows and mouth, EA representations were biased towards the eyes, as predicted by my eye movement data in Experiment 1. I will also show gaze avoidance as a unique feature of the EA group. In sum, these data show clear cultural contrasts in facial expression signals, demonstrating that culture shapes the internal representations of emotion.
Future Work. My review of the literature will show that pivotal concepts such as 'recognition' and 'universality' are currently flawed and have misled both the interpretation of empirical work and the direction of theoretical developments. Here, I will examine each concept in turn and propose more accurate criteria with which to demonstrate 'universal recognition' in future studies. In doing so, I will also detail possible future studies designed to address current gaps in knowledge created by the use of inappropriate criteria. On a related note, having questioned the validity of FACS-coded facial expressions as 'universal' facial expressions, I will highlight an area for empirical development, the creation of a culturally valid facial expression stimulus set, and detail the future work required to address this question. Finally, I will discuss broader areas of interest (e.g., the lexical structure of emotion) which could elevate current knowledge of cross-cultural facial expression recognition and emotion communication in the future.
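The reverse correlation (RC) technique used in Experiment 2 can be illustrated with a toy sketch: noisy stimuli are categorised by an observer (here simulated), and averaging the noise fields associated with each response approximates the internal template driving that categorisation. Image size, trial count, and the simulated observer are assumptions.

```python
# Toy reverse-correlation sketch: estimate an observer's internal template by
# averaging the noise fields that triggered a given response (simulated data).
import numpy as np

rng = np.random.default_rng(3)
img_shape, n_trials = (32, 32), 2000

hidden_template = np.zeros(img_shape)     # the simulated observer's internal template
hidden_template[8:12, 6:26] = 1.0         # e.g., an emphasised eye region

sum_yes, n_yes, sum_no, n_no = np.zeros(img_shape), 0, np.zeros(img_shape), 0
for _ in range(n_trials):
    noise = rng.normal(size=img_shape)
    # The simulated observer responds "target expression" when the noise
    # correlates positively with its hidden template.
    if float(np.sum(noise * hidden_template)) > 0:
        sum_yes, n_yes = sum_yes + noise, n_yes + 1
    else:
        sum_no, n_no = sum_no + noise, n_no + 1

# Classification image: mean "yes" noise minus mean "no" noise.
classification_image = sum_yes / max(n_yes, 1) - sum_no / max(n_no, 1)
strongest_rows = np.argsort(classification_image.sum(axis=1))[-4:]
print("rows with the strongest evidence:", sorted(strongest_rows.tolist()))
```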

    Less than meets the eye: the diagnostic information for visual categorization

    Current theories of visual categorization are cast in terms of information processing mechanisms that use mental representations. However, the actual information contents of these representations are rarely characterized, which in turn hinders knowledge of the mechanisms that use them. In this thesis, I identified these contents by extracting the information that supports behavior under given tasks, i.e., the task-specific diagnostic information. In the first study (Chapter 2), I modelled the diagnostic face information for familiar face identification, using a unique generative model of face identity information combined with perceptual judgments and reverse correlation. I then demonstrated the validity of this information using everyday perceptual tasks that generalize face identity and resemblance judgments to new viewpoints, age, and sex with a new group of participants. My results showed that human participants represent only a proportion of the objective identity information available, but what they do represent is both sufficiently detailed and versatile to generalize face identification across diverse tasks successfully. In the second study (Chapter 3), I modelled the diagnostic facial movements for the recognition of facial expressions of emotion. I used models that characterize the mental representations of six facial expressions of emotion (Happy, Surprise, Fear, Anger, Disgust, and Sad) in individual observers and validated them on a new group of participants. With the validated models, I derived the main signal variants for each emotion and their probabilities of occurrence within each emotion. Using these variants and their probabilities, I trained a Bayesian classifier and showed that it closely mimics human observers' categorization performance. My results demonstrated that such emotion variants and their probabilities of occurrence comprise observers' mental representations of facial expressions of emotion. In the third study (Chapter 4), I investigated how the brain reduces high-dimensional visual input into low-dimensional diagnostic representations to support scene categorization. To do so, I used an information-theoretic framework called Contentful Brain and Behavior Imaging (CBBI) to tease apart stimulus information that supports behavior (i.e., diagnostic) from that which does not (i.e., nondiagnostic). I then tracked the dynamic representations of both in magneto-encephalographic (MEG) activity. Using CBBI, I demonstrated that a rapid (~170 ms) reduction of nondiagnostic information occurs in the occipital cortex, and that diagnostic information progresses into the right fusiform gyrus, where it is constructed to support distinct behaviors. My results highlight how CBBI can be used to investigate information processing from brain activity by considering interactions between three variables (stimulus information, brain activity, behavior), rather than just two, as is the current norm in neuroimaging studies. I discussed the task-specific diagnostic information as individuals' dynamic and experience-based representation of the physical world, which provides the much-needed information for probing and understanding the black box of high-dimensional, deep, biological brain networks. I also discussed the practical concerns about using the data-driven approach to uncover diagnostic information.
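A schematic sketch of the Bayesian-classifier idea from the second study: emotion variants are sampled in proportion to assumed occurrence probabilities and a Gaussian naive Bayes model is fit to the resulting "facial movement" features. The feature space, variant structure, and probabilities are invented for illustration and do not reproduce the thesis's models.

```python
# Hypothetical Bayesian-classifier sketch: sample emotion variants according to
# assumed occurrence probabilities, then fit a Gaussian naive Bayes model.
import numpy as np
from sklearn.naive_bayes import GaussianNB

rng = np.random.default_rng(4)
emotions = ["happy", "surprise", "fear", "anger", "disgust", "sad"]
n_features = 12                                   # e.g., action-unit amplitudes (assumed)

# Assumed per-emotion variant means and occurrence probabilities (3 variants each).
variant_means = {e: rng.random((3, n_features)) for e in emotions}
variant_probs = {e: np.array([0.6, 0.3, 0.1]) for e in emotions}

X, y = [], []
for emotion in emotions:
    for _ in range(300):
        variant = rng.choice(3, p=variant_probs[emotion])
        X.append(variant_means[emotion][variant] + 0.1 * rng.normal(size=n_features))
        y.append(emotion)

clf = GaussianNB().fit(np.array(X), np.array(y))
probe = variant_means["fear"][0] + 0.1 * rng.normal(size=n_features)
print("classified as:", clf.predict(probe[None, :])[0])
```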

    Revealing the information contents of memory within the stimulus information representation framework

    The information contents of memory are the cornerstone of the most influential models in cognition. To illustrate, consider that in predictive coding, a prediction implies that specific information is propagated down from memory through the visual hierarchy. Likewise, recognizing the input implies that sequentially accrued sensory evidence is successfully matched with memorized information (categorical knowledge). Although the existing models of prediction, memory, sensory representation and categorical decision are all implicitly cast within an information processing framework, it remains a challenge to precisely specify what this information is, and therefore where, when and how the architecture of the brain dynamically processes it to produce behaviour. Here, we review a framework that addresses these challenges for the study of perception and categorization: stimulus information representation (SIR). We illustrate how SIR can reverse engineer the information contents of memory from behavioural and brain measures in the context of specific cognitive tasks that involve memory. We discuss two specific lessons from this approach that generally apply to memory studies: the importance of task, to constrain what the brain does, and of stimulus variations, to identify the specific information contents that are memorized, predicted, recalled and replayed.
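To make the notion of "information contents" concrete, the toy sketch below quantifies, in bits, how much a binary stimulus feature tells us about an observer's response, i.e., their mutual information. The data and feature are invented, and this is only a minimal instance of the kind of quantification that SIR-style analyses build on.

```python
# Minimal sketch (toy data): mutual information between a binary stimulus
# feature (e.g., a face region being visible) and a categorisation response.
import numpy as np

def mutual_information(x, y):
    """I(X;Y) in bits from two discrete label arrays with values 0..K-1."""
    joint = np.zeros((len(set(x)), len(set(y))))
    for xi, yi in zip(x, y):
        joint[xi, yi] += 1
    joint /= joint.sum()
    px, py = joint.sum(axis=1, keepdims=True), joint.sum(axis=0, keepdims=True)
    nonzero = joint > 0
    return float(np.sum(joint[nonzero] * np.log2(joint[nonzero] / (px @ py)[nonzero])))

rng = np.random.default_rng(5)
feature = rng.integers(0, 2, size=1000)                             # present/absent
response = np.where(rng.random(1000) < 0.8, feature, 1 - feature)   # mostly tracks feature
print("I(feature; response) =", round(mutual_information(feature, response), 3), "bits")
```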