11 research outputs found

    How We Know It Hurts: Item Analysis of Written Narratives Reveals Distinct Neural Responses to Others' Physical Pain and Emotional Suffering

    Get PDF
    People are often called upon to witness, and to empathize with, the pain and suffering of others. In the current study, we directly compared neural responses to others' physical pain and emotional suffering by presenting participants (n = 41) with 96 verbal stories, each describing a protagonist's physical and/or emotional experience, ranging from neutral to extremely negative. A separate group of participants rated “how much physical pain” and “how much emotional suffering” the protagonist experienced in each story, as well as how “vivid and movie-like” the story was. Although ratings of Pain, Suffering and Vividness were positively correlated with each other across stories, item analyses revealed that each scale was correlated with activity in distinct brain regions. Even within regions of the “Shared Pain network” identified using a separate data set, responses to others' physical pain and emotional suffering were distinct. More broadly, item analyses with continuous predictors provided a high-powered method for identifying brain regions associated with specific aspects of complex stimuli – like verbal descriptions of physical and emotional events. (United States Air Force Office of Scientific Research; Office of Naval Research, grant number N000140910845)

    Imagery-mediated verbal learning depends on vividness–familiarity interactions: The possible role of dualistic resting state network activity interference

    Get PDF
    Using secondary database analysis, we tested whether the (implicit) familiarity of eliciting noun-cues and the (explicit) vividness of corresponding imagery exerted additive or interactive influences on verbal learning, as measured by the probability of incidental noun recall and image latency times (RTs). Noun-cues with incongruent levels of vividness and familiarity (high/low; low/high, respectively) at encoding were subsequently associated at retrieval with the lowest recall probabilities, while noun-cues with congruent levels (high/high; low/low) were associated with higher recall probabilities. RTs in the high-vividness, high-familiarity grouping were significantly faster than in all other subsets (low/low, low/high, high/low), which did not differ significantly from one another. The findings contradict: (1) associative theories predicting positive monotonic relationships between memory strength and learning; and (2) the non-monotonic plasticity hypothesis (NMPH), which aims to generalize the non-monotonic relationship between a neuron’s excitation level and its synaptic strength to broad neural networks. We propose a dualistic neuropsychological model of memory consolidation that mimics the global activity in two large resting-state networks (RSNs), the default mode network (DMN) and the task-positive network (TPN). Based on this model, we suggest that incongruence and congruence between vividness and familiarity reflect, respectively, competition and synergy between DMN and TPN activity. We argue that competition or synergy between these RSNs at the time of stimulus encoding disproportionately influences long-term semantic memory consolidation in healthy controls. These findings could assist in developing neurophenomenological markers of core memory deficits currently hypothesized to be shared across multiple psychopathological conditions

    Functional organization of the human amygdala in appetitive learning

    Full text link

    Item-Wise Interindividual Brain-Behavior Correlation in Task Neuroimaging Analysis

    Get PDF
    Brain-behavior correlations are commonly used to explore associations between the brain and human behavior in cognitive neuroscience studies. The correlation approach has drawn many criticisms, however, most of which originate in the weak statistical power of the traditional procedure (i.e., the mean-wise interindividual brain-behavior correlation). This paper proposes a new procedure, the item-wise interindividual brain-behavior correlation, which enhances statistical power by testing the significance of small correlation coefficients from individual trials against zero rather than simply pursuing the highest correlation coefficient. The item-wise and mean-wise correlations were compared in simulations and in an fMRI experiment on mathematical problem-solving. Simulations show that the item-wise correlation yields higher t-values than the mean-wise correlation when the signal-to-noise ratio is equal to or larger than 6%. In the fMRI experiment, the item-wise correlation revealed more voxels with significant brain-behavior correlations than the mean-wise correlation did, reaching significance at the threshold of p < 0.05 corrected. Cross-validation showed that odd- and even-ordered trials have greater stability under the item-wise correlation (r = 0.918) than under the mean-wise correlation (r = 0.686). Together, the simulations and example analyses demonstrate the effectiveness of the proposed procedure for task neuroimaging studies
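    The contrast between the two procedures can be sketched with a small simulation. This is an illustrative toy model, not the paper's analysis pipeline; the subject and item counts, noise levels, and generative structure are all assumptions made for the sketch:

    ```python
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    n_sub, n_item = 40, 60

    # Toy generative model: a latent per-subject trait drives both the
    # behavioural score and the trial-level "brain" response.
    trait = rng.normal(size=n_sub)
    behaviour = trait + rng.normal(scale=0.5, size=n_sub)
    brain = trait[:, None] + rng.normal(scale=2.0, size=(n_sub, n_item))

    # Mean-wise procedure: average the brain response over items first,
    # then compute a single inter-individual correlation.
    r_mean, p_mean = stats.pearsonr(brain.mean(axis=1), behaviour)

    # Item-wise procedure: one inter-individual correlation per item,
    # then test the Fisher-z transformed coefficients against zero.
    z_items = np.arctanh([stats.pearsonr(brain[:, i], behaviour)[0]
                          for i in range(n_item)])
    t_item, p_item = stats.ttest_1samp(z_items, 0.0)

    print(f"mean-wise: r = {r_mean:.2f}, p = {p_mean:.1e}")
    print(f"item-wise: t({n_item - 1}) = {t_item:.1f}, p = {p_item:.1e}")
    ```

    Note that the per-item coefficients are all computed over the same subjects and are therefore not independent samples, which is presumably why the paper validates the procedure with simulations and odd/even cross-validation rather than relying on the t-test alone.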

    Understanding mixed effects models through data simulation

    Get PDF
    Experimental designs that sample both subjects and stimuli from a larger population need to account for random effects of both subjects and stimuli using mixed effects models. However, much of this research is analysed using ANOVA on aggregated responses, because researchers are not confident specifying and interpreting mixed effects models. This tutorial explains how to simulate data with a random effects structure and analyse the data using linear mixed effects regression (with the lme4 R package), with a focus on interpreting the output in light of the simulated parameters. Data simulation not only enhances understanding of how these models work, but also enables researchers to perform power calculations for complex designs. All materials associated with this article can be accessed at https://osf.io/3cz2e/
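    The tutorial's own materials use R and lme4; as a rough Python analogue, the data-generating process for a design with crossed random intercepts of subjects and stimuli might be sketched as follows. All parameter values (counts, means, SDs) are invented for illustration:

    ```python
    import numpy as np

    rng = np.random.default_rng(42)
    n_subj, n_item = 30, 40

    # Assumed population parameters for a reaction-time design.
    beta_0, beta_1 = 800.0, 50.0                     # grand mean, condition effect (ms)
    tau_subj, tau_item, sigma = 100.0, 40.0, 200.0   # random-intercept and residual SDs

    subj_int = rng.normal(0.0, tau_subj, n_subj)     # by-subject random intercepts
    item_int = rng.normal(0.0, tau_item, n_item)     # by-item random intercepts

    rows = []
    for s in range(n_subj):
        for i in range(n_item):
            x = (i % 2) - 0.5                        # deviation-coded condition (-0.5 / +0.5)
            rt = (beta_0 + subj_int[s] + item_int[i]
                  + beta_1 * x + rng.normal(0.0, sigma))
            rows.append((s, i, x, rt))

    # A naive estimate from cell means should recover beta_1 up to noise
    # contributed by the residuals and the by-item intercepts.
    rts = np.array([r[3] for r in rows])
    xs = np.array([r[2] for r in rows])
    est_effect = rts[xs > 0].mean() - rts[xs < 0].mean()
    print(f"true effect = {beta_1}, naive estimate = {est_effect:.1f}")
    ```

    In lme4 terms these parameters correspond to a model like `rt ~ x + (1 | subj) + (1 | item)`; the tutorial's core pedagogical move is checking that the fitted variance components recover the simulated taus and sigma.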

    Art for reward's sake: Visual art recruits the ventral striatum

    Get PDF
    A recent study showed that people evaluate products more positively when they are physically associated with art images than similar non-art images. Neuroimaging studies of visual art have investigated artistic style and esthetic preference but not brain responses attributable specifically to the artistic status of images. Here we tested the hypothesis that the artistic status of images engages reward circuitry, using event-related functional magnetic resonance imaging (fMRI) during viewing of art and non-art images matched for content. Subjects made animacy judgments in response to each image. Relative to non-art images, art images activated, on both subject- and item-wise analyses, reward-related regions: the ventral striatum, hypothalamus and orbitofrontal cortex. Neither response times nor ratings of familiarity or esthetic preference for art images correlated significantly with activity that was selective for art images, suggesting that these variables were not responsible for the art-selective activations. Investigation of effective connectivity, using time-varying, wavelet-based, correlation-purged Granger causality analyses, further showed that the ventral striatum was driven by visual cortical regions when viewing art images but not non-art images, and was not driven by regions that correlated with esthetic preference for either art or non-art images. These findings are consistent with our hypothesis, leading us to propose that the appeal of visual art involves activation of reward circuitry based on artistic status alone and independently of its hedonic value

    Neural correlates of visualizations of concrete and abstract words in preschool children: A developmental embodied approach

    Get PDF
    The neural correlates of visualization underlying word comprehension were examined in preschool children. On each trial, a concrete or abstract word was delivered binaurally (part 1: post-auditory visualization), followed by a four-picture array (a target plus three distractors; part 2: matching visualization). Children were to select the picture matching the word they heard in part 1. Event-related potentials (ERPs) locked to each stimulus presentation and task interval were averaged over sets of trials of increasing word abstractness. ERP time-course during both parts of the task showed that early activity (i.e., <300 ms) was predominant in response to concrete words, while activity in response to abstract words became evident only at intermediate (i.e., 300-699 ms) and late (i.e., 700-1000 ms) ERP intervals. Specifically, ERP topography showed that while early activity during post-auditory visualization was linked to left temporo-parietal areas for concrete words, early activity during matching visualization occurred mostly in occipito-parietal areas for concrete words, but more anteriorly in centro-parietal areas for abstract words. In intermediate ERPs, post-auditory visualization coincided with parieto-occipital and parieto-frontal activity in response to both concrete and abstract words, while in matching visualization a parieto-central activity was common to both types of words. In the late ERPs for both types of words, post-auditory visualization involved right-hemispheric activity following a posterior-to-anterior pathway sequence: occipital, parietal, and temporal areas; conversely, matching visualization involved left-hemispheric activity following an anterior-to-posterior pathway sequence: frontal, temporal, parietal, and occipital areas. These results suggest that, similarly for concrete and abstract words, meaning in young children depends on variably complex visualization processes integrating visuo-auditory experiences and supramodal embodied representations

    Object Rivalry: Competition Between Incompatible Representations of the Same Object

    Get PDF
    To understand that an object has changed state during an event, we must represent the 'before' and 'after' states of that object. Because a physical object cannot be in multiple states at any one moment in time, these 'before' and 'after' object states are mutually exclusive. In the same way that alternative states of a physical object are mutually exclusive, are cognitive representations of alternative object states also incompatible? If so, comprehension of an object state-change involves interference between the constituent object states. Through a series of functional magnetic resonance imaging experiments, we test the hypothesis that comprehension of object state-change requires the cognitive system to resolve conflict between representationally distinct brain states. We discover that (1) comprehension of an object state-change evokes a neural response in prefrontal cortex that is the same as that found for known forms of conflict, (2) the degree to which an object is described as changing in state predicts the strength of the prefrontal cortex conflict response, (3) the dissimilarity of object states predicts the pattern dissimilarity of visual cortex brain states, and (4) visual cortex pattern dissimilarity predicts the strength of the prefrontal cortex conflict response. Results from these experiments suggest that distinct and incompatible representations of an object compete when representing object state-change. The greater the dissimilarity between described object states, the greater the dissimilarity between rival brain states, and the greater the conflict

    Cognitive Mechanisms Underlying Second Language Listening Comprehension

    Get PDF
    This dissertation research investigates the cognitive mechanisms underlying second language (L2) listening comprehension. I use three types of sentential contexts, congruent, neutral and incongruent, to look at how L2 learners construct meaning in spoken sentence comprehension. The three types of contexts differ in their predictability. The last word in a congruent context is highly predictable (e.g., Children are more affected by the disease than adults), the last word in a neutral context is likely but not highly predictable (e.g., Children are more affected by the disease than nurses), and the last word in an incongruent context is semantically anomalous (e.g., Children are more affected by the disease than chairs). The study shows that, for both native speakers and L2 learners, a congruent context facilitates word recognition. In contrast, an incongruent context inhibits native speakers’ word recognition but not that of L2 learners. I refer to this new finding as the facilitation-without-inhibition phenomenon in L2 listening comprehension. Results from follow-up experiments show that the facilitation-without-inhibition phenomenon results from insufficient suppression by L2 learners

    Neural Mechanisms for Combinatorial Semantics in Language and Vision: Evidence From fMRI, Patients, and Brain Stimulation

    Get PDF
    Throughout our daily experience, humans make nearly constant use of semantic knowledge. Over the last 20-30 years, the majority of work on the neural basis of semantic memory has examined the representation of semantic categories (e.g., animate versus inanimate). However, a defining aspect of human cognition is the ability to integrate this stored semantic information to form complex combinations of concepts. For example, humans can comprehend “plaid” and “jacket” as separate concepts, but can also effortlessly integrate this information to create the idea of a “plaid jacket.” This process is essential to human cognition, but little work has examined the neural regions that underlie conceptual combination. Many models of semantic memory have proposed that convergence zones, or neural hubs, help to integrate the semantic features of word meaning to form coherent representations from stored semantic knowledge. However, few studies have specifically examined the integrative semantic functions that these high-level hub regions carry out. This thesis presents three experiments that examine lexical-semantic combinatorial processing (as in the “plaid jacket” example above): 1) a study in healthy adults using fMRI, 2) a study in healthy adults using brain stimulation, and 3) a study examining impairments of lexical-semantic integration in patients with neurodegenerative disease. The fourth and final experiment of this thesis examines semantic aspects of combinatorial codes for visual-object representation. This study identifies neural regions that encode the feature combinations that define an object’s meaning. The findings from these four experiments elucidate specific cortical hubs for semantic-feature integration during language comprehension and visual-object processing, and they advance our understanding of the role of heteromodal brain regions in semantic memory