8 research outputs found

    The Emotional Facet of Subjective and Neural Indices of Similarity.

    Emotional similarity refers to the tendency to group stimuli together because they evoke the same feelings in us. The majority of research on similarity perception conducted to date has focused on non-emotional stimuli. Different models have been proposed to explain how we represent semantic concepts and judge the similarity among them. These models are supported by behavioural and neural evidence, often combined using Multivariate Pattern Analyses. By contrast, less is known about the cognitive and neural mechanisms underlying judgements of similarity between real-life emotional experiences. This review summarizes the major findings, debates, and limitations in the semantic similarity literature, which serve as background to the emotional facet of similarity that is the focus of this review. A multi-modal and overarching approach, relating different levels of neuroscientific explanation (i.e., computational, algorithmic, and implementational), would be key to further unveiling what makes emotional experiences similar to each other.
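
    As a concrete illustration of the MVPA-based approach this abstract refers to, below is a minimal representational similarity analysis (RSA) sketch in Python. All inputs are hypothetical placeholders (random patterns and ratings stand in for real fMRI data and behavioural judgements); only the mechanics are meant to carry over: build a neural representational dissimilarity matrix (RDM) from stimulus-evoked patterns and correlate it with a behavioural RDM.

        import numpy as np
        from scipy.spatial.distance import pdist, squareform
        from scipy.stats import spearmanr

        rng = np.random.default_rng(0)
        n_stimuli, n_voxels = 20, 500

        # Hypothetical placeholders: one voxel pattern per stimulus, and a
        # behavioural dissimilarity matrix built from pairwise ratings.
        neural_patterns = rng.normal(size=(n_stimuli, n_voxels))
        behavioural_rdm = squareform(rng.uniform(size=n_stimuli * (n_stimuli - 1) // 2))

        # Neural RDM: correlation distance (1 - Pearson r) between patterns.
        neural_rdm = squareform(pdist(neural_patterns, metric="correlation"))

        # Compare upper triangles only (both matrices are symmetric).
        iu = np.triu_indices(n_stimuli, k=1)
        rho, p = spearmanr(neural_rdm[iu], behavioural_rdm[iu])
        print(f"neural-behavioural RDM correlation: rho = {rho:.3f} (p = {p:.3g})")

    With random inputs the correlation is near zero; with real data, a reliable positive rank correlation is the usual evidence that neural pattern similarity tracks judged similarity.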

    Symmetry in Emotional and Visual Similarity between Neutral and Negative Faces

    Is Mr. Hyde more similar to his alter ego Dr. Jekyll, because of their physical identity, or to Jack the Ripper, because both evoke fear and loathing? The relative weight of emotional and visual dimensions in similarity judgements is still unclear. We expected an asymmetric effect of these dimensions on similarity perception, such that faces expressing the same or a similar feeling would be judged as more similar than different emotional expressions of the same person. We selected 10 male faces with different expressions. Each face posed one neutral expression and one emotional expression (five disgust, five fear). We paired these expressions, resulting in 190 pairs varying in emotional expression, physical identity, or both. Twenty healthy participants rated the similarity of the paired faces on a 7-point scale. We report a symmetric effect of emotional expression and identity on similarity judgements, suggesting that people may perceive Mr. Hyde to be just as similar to Dr. Jekyll (identity) as to Jack the Ripper (emotion). We also observed that emotional mismatch decreased perceived similarity, suggesting that emotions play a prominent role in similarity judgements. From an evolutionary perspective, poor discrimination between emotional stimuli might endanger the individual.
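
    A quick combinatorial check of the design described above: 10 identities x 2 expressions yields 20 face images, and C(20, 2) = 190 unordered pairs. The Python sketch below (hypothetical labels, not the authors' stimulus code) enumerates the pairs and tallies how many share identity, share expression category, or differ in both.

        from itertools import combinations
        from collections import Counter

        # Hypothetical reconstruction of the stimulus set: 10 identities,
        # each posing one neutral and one emotional expression
        # (disgust for identities 0-4, fear for identities 5-9).
        faces = [(i, "neutral") for i in range(10)]
        faces += [(i, "disgust" if i < 5 else "fear") for i in range(10)]

        pairs = list(combinations(faces, 2))
        assert len(pairs) == 190  # C(20, 2) = 190 pairs, as in the paper

        def pair_type(a, b):
            if a[0] == b[0]:
                return "same identity, different expression"
            if a[1] == b[1]:
                return "same expression, different identity"
            return "different identity and expression"

        print(Counter(pair_type(a, b) for a, b in pairs))
        # Counter({'different identity and expression': 115,
        #          'same expression, different identity': 65,
        #          'same identity, different expression': 10})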

    The default network and the combination of cognitive processes that mediate self-generated thought

    Self-generated cognitions, such as recalling personal memories or empathizing with others, are ubiquitous and essential for our lives. Such internal mental processing is ascribed to the default mode network, a large-scale network of the human brain, although the underlying neural and cognitive mechanisms remain poorly understood. Here, we tested the hypothesis that our mental experience is mediated by a combination of activities of multiple cognitive processes. Our study comprised four functional magnetic resonance imaging experiments with the same participants and a wide range of cognitive tasks, together with an analytical approach that afforded the identification of cognitive processes during self-generated cognition. We showed that several cognitive processes functioned simultaneously during self-generated mental activity. The processes had specific and localized neural representations, suggesting that they support different aspects of internal processing. Overall, we demonstrate that internally directed experience may be achieved by pooling over multiple cognitive processes.

    Representation of Affect from fMRI Data as a Function of Stimulus Modality and Judgment Task

    The theory of core affect posits that the neural system processes affective aspects of encountered stimuli quickly and automatically, resulting in a unified affective state described along the dimensions of valence and arousal. Core affect theory posits two functional subsystems that guide affective processing: a sensory integration network and a visceromotor network. The present study investigated how the representation of affective dimensions depends on sensory modality, features of the task, and brain region. A series of behavioral studies was run to develop an experimental stimulus set of silent videos and musical clips that equated valence across stimulus types while holding arousal constant across valence categories. The valence manipulation was successful, with valence categories equated on arousal ratings. The stimulus sets also matched many low-level features between valence categories, so that differences between experimental conditions can most likely be attributed to the valence of the stimuli rather than to arousal levels or low-level features. The fMRI study applied multiple multivariate analysis tools to the imaging data. General valence was successfully decoded from patterns of whole-brain activation within participants, and successful cross-modal classification demonstrated modality-general processing of valence at the whole-brain level. The multidimensional scaling (MDS) results supported these conclusions by revealing a common valence dimension for visual and auditory trials as well as visual- and auditory-specific valence dimensions. The same analyses applied to predefined anatomical ROIs (mPFC, OFC, and STS) also revealed modality-general valence processing, evidenced by cross-modal classification and the MDS solution. Successful within-participant cross-modal classification together with unsuccessful cross-participant cross-modal classification implies that the modality-general representation of valence could be individual-specific, whereas successful within-participant and cross-participant within-modal classifications imply that modality-specific representations of valence might be individual-general. A first searchlight analysis localized the brain regions involved in modality-general valence processing, identifying three significant clusters: the right transverse temporal gyrus, the left superior temporal gyrus, and the right middle temporal gyrus. These searchlight results were validated with cross-modal classification and MDS. The modality-specific regions found by a second searchlight analysis were, as expected, in the occipital region for visual stimuli and the temporal region for auditory stimuli. Within-modality classification confirmed that these modality-congruent areas are involved in valence processing of the corresponding modality. Interestingly, each modality's valence could also be decoded from the modality-incongruent regions; because cross-modal classification was not successful in these regions and MDS did not reveal a general valence dimension in either, these results imply modality-specific valence valuation for both modalities in each region. In sum, neural representations of both modality-general and modality-specific valence were found at the whole-brain level as well as in frontal and temporal regions, consistent with the two-system approach to core affect posited by Barrett and Bliss-Moreau (2009). This conclusion was bolstered by converging methodologies.
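
    To illustrate the cross-modal classification logic at the centre of this abstract, here is a minimal within-participant sketch in Python using scikit-learn. The data are random placeholders (not the study's data), and LinearSVC is a stand-in classifier since the abstract does not name one, so accuracy will sit near chance here; with real patterns, above-chance transfer from visual to auditory trials is what would indicate a modality-general valence code.

        import numpy as np
        from sklearn.svm import LinearSVC

        rng = np.random.default_rng(0)
        n_trials, n_voxels = 120, 300

        # Hypothetical single-participant data: one pattern per trial,
        # a binary valence label, and the modality of each trial.
        patterns = rng.normal(size=(n_trials, n_voxels))
        valence = rng.integers(0, 2, size=n_trials)   # 0 = negative, 1 = positive
        modality = np.repeat(["visual", "auditory"], n_trials // 2)

        # Cross-modal decoding: train on visual trials, test on auditory trials.
        train = modality == "visual"
        test = modality == "auditory"
        clf = LinearSVC().fit(patterns[train], valence[train])
        print(f"cross-modal accuracy: {clf.score(patterns[test], valence[test]):.2f}")

    The usual symmetric variant also trains on auditory and tests on visual trials and averages the two accuracies; the within-modal analyses in the abstract correspond to cross-validating inside a single modality instead of transferring across modalities.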