
    Both Facts and Feelings: Emotion and News Literacy

    News literacy education has long focused on the significance of facts, sourcing, and verifiability. While these are critical aspects of news, rapidly developing emotion analytics technologies intended to respond to and even alter digital news audiences’ emotions also demand that we pay greater attention to the role of emotion in news consumption. This essay explores the role of emotion in the “fake news” phenomenon and the implementation of emotion analytics tools in news distribution. I examine the function of emotion in news consumption and the status of emotion within existing news literacy training programs. Finally, I offer suggestions for addressing emotional responses to news with students, including both mindfulness techniques and psychological research on thinking processes

    Why musical memory can be preserved in advanced Alzheimer's disease

    Musical memory is relatively preserved in Alzheimer's disease and other dementias. In a 7 Tesla functional MRI study employing multi-voxel pattern analysis, Jacobsen et al. identify brain regions encoding long-term musical memory in young healthy controls, and show that these same regions display relatively little atrophy and hypometabolism in patients with Alzheimer's disease. See Clark and Warren (doi:10.1093/brain/awv148) for a scientific commentary on this article.
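The multi-voxel pattern analysis (MVPA) named above can be illustrated with a minimal sketch: a cross-validated classifier is trained on per-trial voxel patterns, and above-chance accuracy implies the patterns encode the condition. All data here are synthetic and the condition labels are hypothetical; this is not the pipeline from the study.

```python
# Minimal MVPA sketch on simulated voxel patterns (not the study's pipeline).
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_trials, n_voxels = 80, 200

# Two hypothetical conditions; a weak signal is added to 20 voxels
# for condition 1 so that the pattern, not any single voxel, is informative.
labels = np.repeat([0, 1], n_trials // 2)
patterns = rng.normal(size=(n_trials, n_voxels))
patterns[labels == 1, :20] += 0.5

# Cross-validated decoding accuracy; chance level is 0.50.
clf = LinearSVC(dual=False)
scores = cross_val_score(clf, patterns, labels, cv=5)
print(f"mean decoding accuracy: {scores.mean():.2f}")
```

Cross-validation is the key design choice: accuracy is always measured on trials the classifier never saw, which is what licenses the inference that a region "encodes" the condition.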

    Decoding the consumer’s brain: Neural representations of consumer experience

    Understanding consumer experience – what consumers think about brands, how they feel about services, whether they like certain products – is crucial to marketing practitioners. ‘Neuromarketing’, as the application of neuroscience in marketing research is called, has generated excitement with the promise of understanding consumers’ minds by probing their brains directly. Recent advances in neuroimaging analysis leverage machine learning and pattern classification techniques to uncover patterns from neuroimaging data that can be associated with thoughts and feelings. In this dissertation, I measure brain responses of consumers by functional magnetic resonance imaging (fMRI) in order to ‘decode’ their minds. In three different studies, I demonstrate how different aspects of consumer experience can be studied with fMRI recordings. First, I study how consumers think about brand image by comparing their brain responses during passive viewing of visual templates (photos depicting various social scenarios) to those during active visualizing of a brand’s image. Second, I use brain responses during viewing of affective pictures to decode emotional responses during the viewing of movie trailers. Lastly, I examine whether marketing videos that evoke s

    Multimodal Content Analysis for Effective Advertisements on YouTube

    The rapid advances in e-commerce and Web 2.0 technologies have greatly increased the impact of commercial advertisements on the general public. As a key enabling technology, a multitude of recommender systems exist that analyze user features and browsing patterns to recommend appealing advertisements to users. In this work, we seek to study the attributes that characterize an effective advertisement and recommend a useful set of features to aid the design and production of commercial advertisements. We analyze the temporal patterns in the multimedia content of advertisement videos, including auditory, visual, and textual components, and study their individual roles and synergies in the success of an advertisement. The objective of this work is then to measure the effectiveness of an advertisement, and to recommend a useful set of features to advertisement designers to make it more successful and approachable to users. Our proposed framework employs the signal processing technique of cross-modality feature learning, where data streams from different components are used to train separate neural network models, which are then fused to learn a shared representation. Subsequently, a neural network model trained on this joint feature embedding is used as a classifier to predict advertisement effectiveness. We validate our approach using subjective ratings from a dedicated user study, the sentiment strength of online viewer comments, and a viewer opinion metric: the ratio of the Likes and Views received by each advertisement on an online platform. Comment: 11 pages, 5 figures, ICDM 201
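The fusion scheme described above — per-modality encoders whose outputs are combined into a joint embedding feeding a classifier — can be sketched as follows. The paper trains separate neural networks per modality; as a simplifying assumption this sketch substitutes PCA for each modality-specific encoder and logistic regression for the final classifier, and all features and effectiveness labels are synthetic.

```python
# Sketch of per-modality encoding + fusion into a joint embedding, then a
# classifier for ad effectiveness. PCA stands in for the paper's per-modality
# neural encoders; all data are simulated.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
n_ads = 120
audio = rng.normal(size=(n_ads, 60))    # hypothetical auditory features
visual = rng.normal(size=(n_ads, 80))   # hypothetical visual features
text = rng.normal(size=(n_ads, 40))     # hypothetical textual features
# Boost one signal dimension per modality so the (unsupervised) encoders keep it.
for X in (audio, visual, text):
    X[:, 0] *= 3.0
effective = (audio[:, 0] + visual[:, 0] + text[:, 0] > 0).astype(int)

# Encode each modality to 10 dimensions, then fuse by concatenation.
embed = lambda X: PCA(n_components=10, random_state=0).fit_transform(X)
joint = np.hstack([embed(audio), embed(visual), embed(text)])

X_tr, X_te, y_tr, y_te = train_test_split(joint, effective, random_state=0)
clf = LogisticRegression().fit(X_tr, y_tr)
print(f"held-out accuracy: {clf.score(X_te, y_te):.2f}")
```

Concatenating encoded modalities is the simplest "shared representation"; the paper's neural variant learns the fusion jointly rather than fixing it to concatenation.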

    Functional Organization of the Human Brain: How We See, Feel, and Decide.

    The human brain is responsible for constructing how we perceive, think, and act in the world around us. The organization of these functions is intricately distributed throughout the brain. Here, I discuss how functional magnetic resonance imaging (fMRI) was employed to understand three broad questions: how do we see, feel, and decide? First, high-resolution fMRI was used to measure the polar angle representation of saccadic eye movements in the superior colliculus. We found that eye movements along the superior-inferior visual field are mapped across the medial-lateral anatomy of a subcortical midbrain structure, the superior colliculus (SC). This result is consistent with the topography in monkey SC. Second, we measured the empathic responses of the brain as people watched a hand get painfully stabbed with a needle. We found that if the hand was labeled as belonging to the same religion as the observer, the empathic neural response was heightened, creating a strong ingroup bias that could not be readily manipulated. Third, we measured brain activity in individuals as they made free decisions (i.e., choosing randomly which of two buttons to press) and found the activity within fronto-thalamic networks to be significantly decreased compared to being instructed (forced) to press a particular button. I also summarize findings from several other projects ranging from addiction therapies to decoding visual imagination to how corporations are represented as people. Together, these approaches illustrate how functional neuroimaging can be used to understand the organization of the human brain

    On the encoding of natural music in computational models and human brains

    This article discusses recent developments and advances in the neuroscience of music to understand the nature of musical emotion. In particular, it highlights how system identification techniques and computational models of music have advanced our understanding of how the human brain processes the textures and structures of music and how the processed information evokes emotions. Musical models relate physical properties of stimuli to internal representations called features, and predictive models relate features to neural or behavioral responses and test their predictions against independent unseen data. The new frameworks do not require orthogonalized stimuli in controlled experiments to establish reproducible knowledge, which has opened up a new wave of naturalistic neuroscience. The current review focuses on how this trend has transformed the domain of the neuroscience of music
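The predictive-model framework sketched above — map stimuli to features, fit a model from features to responses, and test predictions on independent unseen data — can be illustrated with a minimal regression example. Everything here is synthetic (the "stimulus features" and "response" are simulated), and ridge regression is one common choice for such encoding models, not necessarily the one used in any study reviewed.

```python
# Minimal encoding-model sketch: ridge regression from stimulus features
# (e.g. acoustic descriptors of natural music) to a response, scored on
# held-out data. All data are simulated.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(2)
n_timepoints, n_features = 300, 25
features = rng.normal(size=(n_timepoints, n_features))    # stimulus features
true_w = rng.normal(size=n_features)                      # hidden mapping
response = features @ true_w + rng.normal(scale=2.0, size=n_timepoints)

# Fit on one portion of the (naturalistic) data, predict the unseen rest.
X_tr, X_te, y_tr, y_te = train_test_split(features, response, random_state=0)
model = Ridge(alpha=1.0).fit(X_tr, y_tr)
r = np.corrcoef(model.predict(X_te), y_te)[0, 1]
print(f"prediction correlation on unseen data: {r:.2f}")
```

The held-out evaluation is what makes the framework reproducible without orthogonalized stimuli: the model must predict responses to data it never saw, regardless of correlations among the features.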

    Decoding negative affect personality trait from patterns of brain activation to threat stimuli

    INTRODUCTION: Pattern recognition analysis (PRA) applied to functional magnetic resonance imaging (fMRI) has been used to decode cognitive processes and identify possible biomarkers for mental illness. In the present study, we investigated whether the positive affect (PA) or negative affect (NA) personality traits could be decoded from patterns of brain activation in response to a human threat using a healthy sample. METHODS: fMRI data from 34 volunteers (15 women) were acquired during a simple motor task while the volunteers viewed a set of threat stimuli that were directed either toward them or away from them and matched neutral pictures. For each participant, contrast images from a General Linear Model (GLM) between the threat versus neutral stimuli defined the spatial patterns used as input to the regression model. We applied a multiple kernel learning (MKL) regression combining information from different brain regions hierarchically in a whole brain model to decode the NA and PA from patterns of brain activation in response to threat stimuli. RESULTS: The MKL model was able to decode NA but not PA from the contrast images between threat stimuli directed away versus neutral with a significance above chance. The correlation and the mean squared error (MSE) between predicted and actual NA were 0.52 (p-value=0.01) and 24.43 (p-value=0.01), respectively. The MKL pattern regression model identified a network with 37 regions that contributed to the predictions. Some of the regions were related to perception (e.g., occipital and temporal regions) while others were related to emotional evaluation (e.g., caudate and prefrontal regions). CONCLUSION: These results suggest that there was an interaction between the individuals' NA and the brain response to the threat stimuli directed away, which enabled the MKL model to decode NA from the brain patterns. 
To our knowledge, this is the first evidence that PRA can be used to decode a personality trait from patterns of brain activation during emotional contexts
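The region-wise kernel combination behind MKL regression can be sketched in a simplified form: one linear kernel per brain region, summed into a whole-brain kernel used by kernel ridge regression to predict the trait score. This is a deliberate simplification — real MKL learns the per-region kernel weights (here they are fixed and uniform) — and all region patterns and NA scores are synthetic.

```python
# Simplified region-wise kernel combination in the spirit of MKL regression.
# Real MKL learns the per-region weights; here kernels are summed uniformly.
# All data are simulated.
import numpy as np
from sklearn.kernel_ridge import KernelRidge

rng = np.random.default_rng(3)
n_subj = 34
region_names = ["occipital", "caudate", "prefrontal"]   # hypothetical regions
regions = {name: rng.normal(size=(n_subj, 10)) for name in region_names}
# Simulated NA score driven by one prefrontal pattern dimension plus noise.
neg_affect = regions["prefrontal"][:, 0] + 0.3 * rng.normal(size=n_subj)

# Sum of per-region linear kernels (uniform weights).
K = sum(X @ X.T for X in regions.values())

train, test = np.arange(0, 24), np.arange(24, 34)
model = KernelRidge(alpha=1.0, kernel="precomputed")
model.fit(K[np.ix_(train, train)], neg_affect[train])
pred = model.predict(K[np.ix_(test, train)])
r = np.corrcoef(pred, neg_affect[test])[0, 1]
print(f"correlation between predicted and actual NA: {r:.2f}")
```

Because each region contributes its own kernel, the learned (or here, fixed) weights indicate how much each region drives the prediction — the mechanism behind the 37-region contribution map reported above.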

    Spatially generalizable representations of facial expressions: Decoding across partial face samples

    A network of cortical and sub-cortical regions is known to be important in the processing of facial expression. However, to date no study has investigated whether representations of facial expressions present in this network permit generalization across independent samples of face information (e.g. eye region vs. mouth region). We presented participants with partial face samples of five expression categories in a rapid event-related fMRI experiment. We reveal a network of face-sensitive regions that contain information about facial expression categories regardless of which part of the face is presented. We further reveal that the neural information present in a subset of these regions: dorsal prefrontal cortex (dPFC), superior temporal sulcus (STS), lateral occipital and ventral temporal cortex, and even early visual cortex, enables reliable generalization across independent visual inputs (faces depicting the 'eyes only' versus 'eyes removed'). Furthermore, classification performance was correlated with behavioral performance in STS and dPFC. Our results demonstrate that both higher-level (e.g. STS, dPFC) and lower-level cortical regions contain information useful for facial expression decoding that goes beyond the visual information presented, and implicate a key role for contextual mechanisms such as cortical feedback in facial expression perception under challenging conditions of visual occlusion.
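The cross-decoding logic used above — train a classifier on responses to one face sample and test it on responses to the complementary sample — can be sketched with synthetic data. The generative assumption here (a shared expression code plus input-specific noise) is hypothetical and chosen only to make the transfer succeed; above-chance transfer is what licenses the claim that the representation generalizes across visual inputs.

```python
# Cross-decoding sketch: train on 'eyes only' responses, test on
# 'eyes removed' responses. All data are simulated.
import numpy as np
from sklearn.svm import LinearSVC

rng = np.random.default_rng(4)
n_trials, n_voxels, n_expressions = 100, 150, 5
labels = np.tile(np.arange(n_expressions), n_trials // n_expressions)

# Assumed structure: one shared expression code per category, plus
# independent noise for each of the two face samples.
code = rng.normal(size=(n_expressions, n_voxels))
eyes_only = code[labels] + rng.normal(scale=1.5, size=(n_trials, n_voxels))
eyes_removed = code[labels] + rng.normal(scale=1.5, size=(n_trials, n_voxels))

# Chance level for 5 expression categories is 0.20.
clf = LinearSVC(dual=False).fit(eyes_only, labels)
acc = clf.score(eyes_removed, labels)
print(f"cross-decoding accuracy: {acc:.2f}")
```

If the two samples carried only input-specific information (no shared code), the same classifier would transfer at chance, which is exactly the contrast the study's generalization test exploits.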

    Decoding Unattended Fearful Faces with Whole-Brain Correlations: An Approach to Identify Condition-Dependent Large-Scale Functional Connectivity

    Processing of unattended threat-related stimuli, such as fearful faces, has been previously examined using group functional magnetic resonance (fMRI) approaches. However, the identification of features of brain activity containing sufficient information to decode, or “brain-read”, unattended (implicit) fear perception remains an active research goal. Here we test the hypothesis that patterns of large-scale functional connectivity (FC) decode the emotional expression of implicitly perceived faces within single individuals using training data from separate subjects. fMRI and a blocked design were used to acquire BOLD signals during implicit (task-unrelated) presentation of fearful and neutral faces. A pattern classifier (linear kernel Support Vector Machine, or SVM) with linear filter feature selection used pair-wise FC as features to predict the emotional expression of implicitly presented faces. We plotted classification accuracy vs. number of top N selected features and observed that significantly higher than chance accuracies (between 90–100%) were achieved with 15–40 features. During fearful face presentation, the most informative and positively modulated FC was between angular gyrus and hippocampus, while the greatest overall contributing region was the thalamus, with positively modulated connections to bilateral middle temporal gyrus and insula. Other FCs that predicted fear included superior-occipital and parietal regions, cerebellum and prefrontal cortex. By comparison, patterns of spatial activity (as opposed to interactivity) were relatively uninformative in decoding implicit fear. These findings indicate that whole-brain patterns of interactivity are a sensitive and informative signature of unattended fearful emotion processing. At the same time, we demonstrate and propose a sensitive and exploratory approach for the identification of large-scale, condition-dependent FC. 
In contrast to model-based, group approaches, the current approach does not discount the multivariate, joint responses of multiple functional connections and is not hampered by signal loss and the need for multiple comparisons correction
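The FC-based pipeline described above can be sketched end to end: pairwise functional connectivity (correlations between region time courses) per block is vectorized into features, a filter selects the top-N features, and a linear SVM classifies fearful versus neutral blocks. All time courses here are simulated, the "fearful coupling" between two regions is an assumed effect, and the region count is arbitrary.

```python
# Sketch of decoding from pairwise FC features with filter feature selection
# and a linear SVM. All signals are simulated.
import numpy as np
from sklearn.svm import SVC
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(5)
n_blocks, n_regions, n_tp = 40, 20, 50
labels = np.repeat([0, 1], n_blocks // 2)   # 0 = neutral, 1 = fearful

def block_fc(label):
    """Simulate one block's time courses and return its pairwise FC vector."""
    ts = rng.normal(size=(n_tp, n_regions))
    if label:                                # assumed effect: fearful blocks
        ts[:, 1] = 0.7 * ts[:, 0] + 0.3 * ts[:, 1]   # couple regions 0 and 1
    fc = np.corrcoef(ts.T)
    return fc[np.triu_indices(n_regions, k=1)]       # upper triangle only

X = np.array([block_fc(y) for y in labels])

# Filter selection inside the pipeline keeps it out of the test folds,
# avoiding selection leakage during cross-validation.
pipe = make_pipeline(SelectKBest(f_classif, k=15), SVC(kernel="linear"))
scores = cross_val_score(pipe, X, labels, cv=5)
print(f"mean decoding accuracy: {scores.mean():.2f}")
```

Placing the filter inside the cross-validation pipeline mirrors the paper's top-N analysis while guarding against the optimistic bias of selecting features on the full dataset.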