EMPATH: A Neural Network that Categorizes Facial Expressions
There are two competing theories of facial expression recognition. Some researchers have suggested that it is an example of "categorical perception." In this view, expression categories are considered to be discrete entities with sharp boundaries, and discrimination of nearby pairs of expressive faces is enhanced near those boundaries. Other researchers, however, suggest that facial expression perception is more graded and that facial expressions are best thought of as points in a continuous, low-dimensional space, where, for instance, "surprise" expressions lie between "happiness" and "fear" expressions due to their perceptual similarity. In this article, we show that a simple yet biologically plausible neural network model, trained to classify facial expressions into six basic emotions, predicts data used to support both of these theories. Without any parameter tuning, the model matches a variety of psychological data on categorization, similarity, reaction times, discrimination, and recognition difficulty, both qualitatively and quantitatively. We thus explain many of the seemingly complex psychological phenomena related to facial expression perception as natural consequences of the task's implementation in the brain.
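The core of such a model can be illustrated with a minimal sketch: a single-layer softmax network trained by gradient descent to label six basic emotions. Everything here (synthetic 40-dimensional features standing in for filtered face images, the class prototypes, the learning rate) is an assumption for illustration, not the EMPATH implementation itself.

```python
# Minimal sketch (not the authors' code): a linear softmax classifier over
# face-image features, trained to label six basic emotions.
import numpy as np

rng = np.random.default_rng(0)
EMOTIONS = ["happiness", "sadness", "fear", "anger", "surprise", "disgust"]

# Synthetic stand-in data: 6 class prototypes plus small noise, in place of
# real filtered face images (40-dimensional feature vectors, 30 per class).
prototypes = rng.normal(size=(6, 40))
X = np.vstack([p + 0.1 * rng.normal(size=(30, 40)) for p in prototypes])
y = np.repeat(np.arange(6), 30)

def softmax(z):
    # Numerically stable row-wise softmax: outputs sum to 1 per sample.
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

# One-layer network trained by gradient descent on the cross-entropy loss.
W = np.zeros((40, 6))
b = np.zeros(6)
onehot = np.eye(6)[y]
for _ in range(200):
    p = softmax(X @ W + b)
    grad = p - onehot          # gradient of cross-entropy w.r.t. the logits
    W -= 0.01 * X.T @ grad / len(X)
    b -= 0.01 * grad.mean(axis=0)

pred = np.argmax(softmax(X @ W + b), axis=1)
accuracy = (pred == y).mean()
```

Because the softmax output is a graded probability distribution over the six categories, a model of this kind can simultaneously produce sharp category boundaries (via the argmax) and continuous similarity structure (via the output probabilities), which is what lets one network speak to both theories.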
The primate amygdala in social perception – insights from electrophysiological recordings and stimulation
The role of the amygdala in emotion and social perception has been intensively investigated, primarily through studies using functional magnetic resonance imaging (fMRI). Recently, this topic has been examined using single-unit recordings in both humans and monkeys, with a focus on face processing. The findings provide novel insights, including several surprises: amygdala neurons have very long response latencies, show highly nonlinear responses to whole faces, and can be exquisitely selective for very specific parts of faces such as the eyes. In humans, the responses of amygdala neurons correlate with internal states evoked by faces, rather than with their objective features. Current and future studies extend these investigations to psychiatric illnesses such as autism, in which atypical face processing is a hallmark of social dysfunction.
Encoding of target detection during visual search by single neurons in the human brain
Neurons in the primate medial temporal lobe (MTL) respond selectively to visual categories such as faces, contributing to how the brain represents stimulus meaning. However, it remains unknown whether MTL neurons continue to encode stimulus meaning when it changes flexibly as a function of variable task demands imposed by goal-directed behavior. While classically associated with long-term memory, recent lesion and neuroimaging studies show that the MTL also contributes critically to the online guidance of goal-directed behaviors such as visual search. Do such tasks modulate responses of neurons in the MTL, and if so, do their responses mirror bottom-up input from visual cortices or do they reflect more abstract goal-directed properties? To answer these questions, we performed concurrent recordings of eye movements and single neurons in the MTL and medial frontal cortex (MFC) in human neurosurgical patients performing a memory-guided visual search task. We identified a distinct population of target-selective neurons in both the MTL and MFC whose response signaled whether the currently fixated stimulus was a target or distractor. This target-selective response was invariant to visual category and predicted whether a target was detected or missed behaviorally during a given fixation. The response latencies, relative to fixation onset, of MFC target-selective neurons preceded those in the MTL by ∼200 ms, suggesting a frontal origin for the target signal. The human MTL thus represents not only fixed stimulus identity, but also task-specified stimulus relevance arising from top-down goals.
Evaluation of potential damage to the regenerate during callus molding after distraction osteogenesis of the mandible: an experimental study in an animal model
Summary: Aim: When correcting three-dimensional deformities of the facial skeleton with distraction osteogenesis, the regenerate is reshaped either as a planned part of treatment or after control over the distraction vector has been lost. The present study aimed to assess the limits of callus manipulation. To this end, the effects of compressive and tensile forces on the same regenerate were examined. Materials and methods: In 15 beagle dogs, custom-made bidirectional distractors were used to perform a linear distraction of 10 mm bilaterally at the mandibular angle. The newly formed callus was angulated by 20° in a single step, which in this model corresponds to a shortening/lengthening of approximately 35% of the regenerate's initial length. The position of the center of rotation made it possible to compress and stretch the regenerate simultaneously. The effects of these mechanical forces on ossification of the regenerate were assessed after 6 and 13 weeks, respectively, and compared with a control group in which only a linear distraction had been performed. Results: Radiological and histological examinations revealed no statistically significant difference between the compressed and the stretched regenerate. However, zones of incomplete ossification were observed in the stretched sector of the callus after a 6-week consolidation period. Under stable conditions, this delayed bone healing was compensated over time, and complete ossification was achieved after 13 weeks. Conclusion: Under stable conditions, a fresh regenerate formed by distraction can be reshaped to a considerable extent without permanently compromising bony healing.
Stretching the callus, however, can delay or prevent the ossification process and should be avoided. This can be achieved by overcorrecting the length of the regenerate or by gradual angulation during the distraction procedure.
Neurons in the human amygdala selective for perceived emotion
The human amygdala plays a key role in recognizing facial emotions, and neurons in the monkey and human amygdala respond to the emotional expression of faces. However, it remains unknown whether these responses are driven primarily by properties of the stimulus or by the perceptual judgments of the perceiver. We investigated these questions by recording from over 200 single neurons in the amygdalae of 7 neurosurgical patients with implanted depth electrodes. We presented degraded fear and happy faces and asked subjects to discriminate their emotion by button press. During trials where subjects responded correctly, we found neurons that distinguished fear vs. happy emotions as expressed by the displayed faces. During incorrect trials, these neurons indicated the patients' subjective judgment. Additional analysis revealed that, on average, all neuronal responses were modulated most by increases or decreases in response to happy faces, and driven predominantly by judgments about the eye region of the face stimuli. Applying the same analyses, we showed that hippocampal neurons, unlike amygdala neurons, encoded only the displayed emotion, not the subjective judgment. Our results suggest that the amygdala specifically encodes the subjective judgment of emotional faces, but that it plays less of a role in simply encoding aspects of the image array. The conscious percept of the emotion shown in a face may thus arise from interactions between the amygdala and its connections within a distributed cortical network, a scheme also consistent with the long response latencies observed in human amygdala recordings.
Single-Neuron Correlates of Error Monitoring and Post-Error Adjustments in Human Medial Frontal Cortex
Humans can self-monitor errors without explicit feedback, resulting in behavioral adjustments on subsequent trials such as post-error slowing (PES). The error-related negativity (ERN) is a well-established macroscopic scalp EEG correlate of error self-monitoring, but its neural origins and relationship to PES remain unknown. We recorded in the frontal cortex of patients performing a Stroop task and found neurons that track self-monitored errors and error history in dorsal anterior cingulate cortex (dACC) and pre-supplementary motor area (pre-SMA). Both the intracranial ERN (iERN) and error neuron responses appeared first in pre-SMA, and ∼50 ms later in dACC. Error neuron responses were correlated with iERN amplitude on individual trials. In dACC, such error neuron-iERN synchrony and responses of error-history neurons predicted the magnitude of PES. These data reveal a human single-neuron correlate of the ERN and suggest that dACC synthesizes error information to recruit behavioral control through coordinated neural activity.
The gray matter volume of the amygdala is correlated with the perception of melodic intervals: a voxel-based morphometry study
Music is not simply a series of organized pitches, rhythms, and timbres; it is also capable of evoking emotions. In the present study, voxel-based morphometry (VBM) was employed to explore the neural basis that may link music to emotion. To do this, we identified the neuroanatomical correlates of the ability to extract pitch interval size in a music segment (i.e., interval perception) in a large population of healthy young adults (N = 264). Behaviorally, we found that interval perception was correlated with daily emotional experiences, indicating an intrinsic link between music and emotion. Neurally, and as expected, we found that interval perception was positively correlated with the gray matter volume (GMV) of the bilateral temporal cortex. More importantly, a larger GMV of the bilateral amygdala was associated with better interval perception, suggesting that the amygdala, a neural substrate of emotional processing, is also involved in music processing. In sum, our study provides some of the first neuroanatomical evidence of an association between the amygdala and music, which contributes to our understanding of exactly how music evokes emotional responses.
Single-Unit Responses Selective for Whole Faces in the Human Amygdala
The human amygdala is critical for social cognition from faces, as borne out by impairments in recognizing facial emotion following amygdala lesions and differential activation of the amygdala by faces. Single-unit recordings in the primate amygdala have documented responses selective for faces, their identity, or emotional expression, yet how the amygdala represents face information remains unknown. Does it encode specific features of faces that are particularly critical for recognizing emotions (such as the eyes), or does it encode the whole face, a level of representation that might be the proximal substrate for subsequent social cognition? We investigated this question by recording from over 200 single neurons in the amygdalae of seven neurosurgical patients with implanted depth electrodes. We found that approximately half of all neurons responded to faces or parts of faces. Approximately 20% of all neurons responded selectively only to the whole face. Although responding most to whole faces, these neurons paradoxically responded more when only a small part of the face was shown compared to when almost the entire face was shown. We suggest that the human amygdala plays a predominant role in representing global information about faces, possibly achieved through inhibition between individual facial features.
Flexible recruitment of memory-based choice representations by human medial-frontal cortex
Flexibly switching between different tasks is a fundamental human cognitive ability that allows us to make selective use of only the information needed for a given decision. Minxha et al. used single-neuron recordings from patients to understand how the human brain retrieves memories on demand when needed for making a decision and how retrieved memories are dynamically routed in the brain from the temporal to the frontal lobe. When memory was not needed, only medial frontal cortex neural activity was correlated with the task. However, when outcome choices required memory retrieval, frontal cortex neurons were phase-locked to field potentials recorded in the medial temporal lobe. Therefore, depending on the demands of the task, neurons in different regions can flexibly engage and disengage their activity patterns.