
    Decoding face categories in diagnostic subregions of primary visual cortex

    Higher visual areas in the occipitotemporal cortex contain discrete regions for face processing, but it remains unclear whether V1 is modulated by top-down influences during face discrimination, and whether any such modulation is widespread throughout V1 or localized to retinotopic regions processing task-relevant facial features. Employing functional magnetic resonance imaging (fMRI), we mapped the cortical representation of two feature locations that modulate higher visual areas during categorical judgements: the eyes and the mouth. Subjects were presented with happy and fearful faces, and we measured the fMRI signal of V1 regions processing the eyes and mouth whilst subjects engaged in gender and expression categorization tasks. In a univariate analysis, we used a region-of-interest-based general linear model approach to reveal changes in activation within these regions as a function of task. We then trained a linear pattern classifier to classify facial expression or gender on the basis of V1 data from ‘eye’ and ‘mouth’ regions, and from the remaining non-diagnostic V1 region. Using multivariate techniques, we show that V1 activity discriminates face categories both in local ‘diagnostic’ and widespread ‘non-diagnostic’ cortical subregions. This indicates that V1 might receive the processed outcome of complex facial feature analysis from other cortical (e.g. fusiform face area, occipital face area) or subcortical areas (amygdala).
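
    A minimal sketch of the ROI-based decoding step described above, assuming trial-wise voxel patterns have already been extracted for each V1 subregion. The function name decode_roi, the use of scikit-learn's LinearSVC, and the placeholder data are illustrative assumptions, not the authors' actual pipeline.

```python
# Minimal sketch of ROI-based multivariate decoding, assuming beta patterns
# (trials x voxels) have already been extracted for each V1 subregion.
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

def decode_roi(patterns, labels, n_folds=5):
    """Cross-validated classification accuracy for one ROI.

    patterns : (n_trials, n_voxels) array of trial-wise responses
    labels   : (n_trials,) array of category labels (e.g. 'happy' / 'fearful')
    """
    clf = make_pipeline(StandardScaler(), LinearSVC())
    return cross_val_score(clf, patterns, labels, cv=n_folds).mean()

# Hypothetical example: compare 'eye', 'mouth' and non-diagnostic V1 regions.
rng = np.random.default_rng(0)
labels = np.repeat(['happy', 'fearful'], 40)
for roi in ('eye', 'mouth', 'non-diagnostic'):
    patterns = rng.standard_normal((80, 200))   # placeholder voxel data
    print(roi, decode_roi(patterns, labels))
```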

    Cracking the code of oscillatory activity

    Neural oscillations are ubiquitous signatures of cognitive processing and of the dynamic routing and gating of information. A fundamental and so far unresolved problem for neuroscience is to understand how oscillatory activity in the brain codes information for human cognition. In a biologically relevant cognitive task, we instructed six human observers to categorize facial expressions of emotion while we measured the observers' EEG. We combined state-of-the-art stimulus control with statistical information theory analysis to quantify how the three parameters of oscillations (i.e., power, phase, and frequency) code the visual information relevant for behavior in a cognitive task. We make three points. First, we demonstrate that phase codes considerably more information (2.4 times) relating to the cognitive task than power. Second, we show that the conjunction of power and phase coding reflects detailed visual features relevant for the behavioral response, that is, features of facial expressions predicted by behavior. Third, we demonstrate, in analogy to communication technology, that oscillatory frequencies in the brain multiplex the coding of visual features, increasing coding capacity. Together, our findings about the fundamental coding properties of neural oscillations will redirect the research agenda in neuroscience by establishing the differential role of frequency, phase, and amplitude in coding behaviorally relevant information in the brain.
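
    A minimal sketch of the kind of information-theoretic comparison described above, assuming single-trial EEG has been recorded per category. The binned (plug-in) mutual information estimator, the alpha-band filter settings, and all data here are illustrative assumptions; the study's actual estimator and parameters may differ.

```python
# Minimal sketch: mutual information between a stimulus category and the
# phase or power of a band-limited EEG signal, using binned (plug-in) MI.
import numpy as np
from scipy.signal import butter, filtfilt, hilbert
from sklearn.metrics import mutual_info_score

def band_phase_power(signal, fs, lo, hi, order=4):
    """Band-pass filter and return instantaneous phase and power (Hilbert)."""
    b, a = butter(order, [lo / (fs / 2), hi / (fs / 2)], btype='band')
    analytic = hilbert(filtfilt(b, a, signal))
    return np.angle(analytic), np.abs(analytic) ** 2

def binned_mi(x, labels, n_bins=8):
    """Plug-in mutual information (nats) between a binned variable and labels."""
    edges = np.quantile(x, np.linspace(0, 1, n_bins + 1)[1:-1])
    return mutual_info_score(labels, np.digitize(x, edges))

# Hypothetical single-channel example at one time point across trials.
rng = np.random.default_rng(1)
fs, n_trials = 250, 200
labels = rng.integers(0, 2, n_trials)            # two expression categories
trials = rng.standard_normal((n_trials, fs))     # 1 s of placeholder EEG per trial
phase, power = zip(*(band_phase_power(tr, fs, 8, 12) for tr in trials))
t = fs // 2                                      # inspect the mid-trial sample
print('MI(phase):', binned_mi(np.array(phase)[:, t], labels))
print('MI(power):', binned_mi(np.array(power)[:, t], labels))
```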

    Transmission of facial expressions of emotion co-evolved with their efficient decoding in the brain: behavioral and brain evidence

    Competent social organisms will read the social signals of their peers. In primates, the face has evolved to transmit the organism's internal emotional state. Adaptive action suggests that the brain of the receiver has co-evolved to efficiently decode expression signals. Here, we review and integrate the evidence for this hypothesis. With a computational approach, we co-examined facial expressions as signals for data transmission and the brain as receiver and decoder of these signals. First, we show in a model observer that facial expressions form a weakly correlated signal set. Second, using time-resolved EEG data, we show how the brain uses spatial frequency information impinging on the retina to decorrelate expression categories. Between 140 and 200 ms following stimulus onset, independently in the left and right hemispheres, an information processing mechanism starts locally with encoding the eye, irrespective of expression, followed by a zooming out to processing the entire face, followed by a zooming back in to diagnostic features (e.g. the opened eyes in "fear", the mouth in "happy"). A model categorizer demonstrates that at 200 ms, the left and right brain have represented enough information to predict behavioral categorization performance.
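
    A minimal sketch of the "weakly correlated signal set" idea from the model-observer analysis above: compute pairwise Pearson correlations between one template image per expression category. The image arrays, category names and sizes are placeholders, not the study's stimuli.

```python
# Pairwise correlations between expression templates: a low mean off-diagonal
# value indicates a weakly correlated (easily discriminable) signal set.
import numpy as np

def pairwise_signal_correlations(templates):
    """templates : dict mapping expression name -> 2-D image array."""
    names = list(templates)
    flat = np.stack([templates[n].ravel() for n in names])
    return names, np.corrcoef(flat)          # expressions x expressions matrix

# Hypothetical templates (e.g. averaged face images per category).
rng = np.random.default_rng(2)
templates = {name: rng.standard_normal((128, 128))
             for name in ('happy', 'fear', 'disgust', 'anger', 'sad', 'surprise')}
names, corr = pairwise_signal_correlations(templates)
off_diag = corr[~np.eye(len(names), dtype=bool)]
print('mean off-diagonal correlation:', off_diag.mean())
```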

    The use of 3D printing in the development of gaseous radiation detectors

    Fused Deposition Modelling has been used to produce a small, single-wire, Iarocci-style drift tube to demonstrate the feasibility of using this Additive Manufacturing technique to produce cheap detectors quickly. Recent technological developments have extended the scope of Additive Manufacturing, or 3D printing, to the possibility of fabricating Gaseous Radiation Detectors, such as Single Wire Proportional Counters and Time Projection Chambers. 3D printing could allow for the production of customisable, modular detectors that can be easily created and replaced, and opens up the possibility of printing detectors on-site in remote locations, and even for outreach within schools. The 3D printed drift tube was printed using polylactic acid to produce a gas volume in the shape of an inverted triangular prism, with a base length of 28 mm, a height of 24.25 mm and a tube length of 145 mm. A stainless steel anode wire was placed in the centre of the tube, mid-print. P5 gas (95% argon, 5% methane) was used as the drift gas, and a circuit was built to capacitively decouple signals from the high voltage. The signal rate and average pulse height of cosmic ray muons were measured over a range of bias voltages to characterise the printed detector and prove its correct operation.
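
    A minimal sketch of the characterisation step described in the last sentence above: for each bias voltage, count pulses above a threshold and compute the average pulse height from digitised waveforms. The data structures, threshold, voltages and toy gain scaling are illustrative assumptions, not values from the paper.

```python
# Toy characterisation: signal rate and mean pulse height vs. bias voltage.
import numpy as np

def characterise(waveforms, live_time_s, threshold_mv=5.0):
    """waveforms : (n_triggers, n_samples) array in mV, baseline-subtracted."""
    heights = waveforms.max(axis=1)
    accepted = heights[heights > threshold_mv]
    rate_hz = len(accepted) / live_time_s
    return rate_hz, accepted.mean() if len(accepted) else 0.0

# Hypothetical voltage scan (real waveforms would come from the DAQ).
rng = np.random.default_rng(3)
for bias_v in (1600, 1700, 1800, 1900):
    gain = np.exp((bias_v - 1600) / 120)                 # toy gas-gain scaling
    waveforms = gain * rng.exponential(2.0, (500, 256))  # placeholder pulses, mV
    rate, mean_height = characterise(waveforms, live_time_s=600)
    print(f'{bias_v} V: rate = {rate:.2f} Hz, mean pulse height = {mean_height:.1f} mV')
```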

    Developmental changes in the critical information used for facial expression processing

    Facial expression recognition skills are known to improve across childhood and adolescence, but the mechanisms driving the development of these important social abilities remain unclear. This study investigates directly whether there are qualitative differences between child and adult processing strategies for these emotional stimuli. With a novel adaptation of the Bubbles reverse-correlation paradigm (Gosselin & Schyns, 2001), we added noise to expressive face stimuli and presented subsets of randomly sampled information from each image at different locations and spatial frequency bands across experimental trials. Results from our large developmental sample of 71 young children (6-9 years), 69 older children (10-13 years) and 54 adults uniquely reveal flexible profiles of strategic information use for categorisations of fear, sadness, happiness and anger at all ages. All three groups relied upon a distinct set of key facial features for each of these expressions, with fine-tuning of this diagnostic information (features and spatial frequency) observed across developmental time. The reported variability in the developmental trajectories for different emotional expressions is consistent with the notion of functional links between the refinement of information use and processing ability.
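
    A minimal sketch of a Bubbles-style trial (Gosselin & Schyns, 2001): reveal the image through randomly placed Gaussian apertures, with larger apertures for coarser spatial-frequency bands. The aperture sizes, counts and placeholder image are illustrative assumptions, not the study's parameters; in the full paradigm each spatial-frequency band of the face would receive its own mask before recombination.

```python
# Generate a Bubbles mask: a sum of randomly located Gaussian apertures.
import numpy as np

def gaussian_bubble(shape, center, sigma):
    y, x = np.mgrid[:shape[0], :shape[1]]
    return np.exp(-((x - center[1]) ** 2 + (y - center[0]) ** 2) / (2 * sigma ** 2))

def bubbles_mask(shape, n_bubbles, sigma, rng):
    """Sum of randomly placed Gaussian apertures, clipped to [0, 1]."""
    mask = np.zeros(shape)
    for _ in range(n_bubbles):
        center = (rng.integers(0, shape[0]), rng.integers(0, shape[1]))
        mask += gaussian_bubble(shape, center, sigma)
    return np.clip(mask, 0, 1)

# Hypothetical trial: coarser bands get larger (and fewer) apertures.
rng = np.random.default_rng(4)
face = rng.standard_normal((256, 256))           # placeholder face image / band
for sigma, n in ((5, 20), (10, 10), (20, 5)):    # finer -> coarser bands
    revealed = face * bubbles_mask(face.shape, n, sigma, rng)
    print(f'sigma={sigma}: proportion revealed = {bubbles_mask(face.shape, n, sigma, rng).mean():.2f}')
```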