Space-by-time manifold representation of dynamic facial expressions for emotion categorization
Visual categorization is the brain computation that reduces high-dimensional information in the visual environment into a smaller set of meaningful categories. An important problem in visual neuroscience is to identify the visual information that the brain must represent and then use to categorize visual inputs. Here we introduce a new mathematical formalism, termed space-by-time manifold decomposition, that describes this information as a low-dimensional manifold separable in space and time. We use this decomposition to characterize the representations used by observers to categorize the six classic facial expressions of emotion (happy, surprise, fear, disgust, anger, and sad). By means of a Generative Face Grammar, we presented random dynamic facial movements on each experimental trial and used subjective human perception to identify the facial movements that correlate with each emotion category. When the random movements projected onto the categorization manifold region corresponding to one of the emotion categories, observers categorized the stimulus accordingly; otherwise they selected “other.” Using this information, we determined both the Action Unit and temporal components whose linear combinations lead to reliable categorization of each emotion. In a validation experiment, we confirmed the psychological validity of the resulting space-by-time manifold representation. Finally, we demonstrated the importance of temporal sequencing for accurate emotion categorization and identified the temporal dynamics of Action Unit components that cause typical confusions between specific emotions (e.g., fear and surprise), as well as those that resolve these confusions.
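A decomposition "separable in space and time" as described in this abstract can be read as a tri-factorization of movement data into temporal modules, spatial (Action Unit) modules, and combination coefficients. The sketch below is a generic multiplicative-update non-negative tri-factorization on a single time-by-AU matrix, not the authors' exact algorithm; the module counts `p` and `n` and the toy data are assumptions for illustration.

```python
import numpy as np

def space_by_time_nmf(X, p, n, n_iter=500, eps=1e-9, seed=0):
    """Tri-factor NMF sketch: X (time x AUs) ~= T @ H @ S, with
    T (time x p) temporal modules, S (n x AUs) spatial modules,
    and H (p x n) non-negative combination coefficients."""
    rng = np.random.default_rng(seed)
    t, a = X.shape
    T, H, S = rng.random((t, p)), rng.random((p, n)), rng.random((n, a))
    for _ in range(n_iter):
        HS = H @ S                                    # (p, AUs)
        T *= (X @ HS.T) / (T @ HS @ HS.T + eps)       # update temporal modules
        H *= (T.T @ X @ S.T) / (T.T @ T @ H @ S @ S.T + eps)
        TH = T @ H                                    # (time, n)
        S *= (TH.T @ X) / (TH.T @ TH @ S + eps)       # update spatial modules
    return T, H, S

# toy data: 50 time samples x 12 hypothetical Action Units (assumed sizes)
X = np.random.default_rng(1).random((50, 12))
T, H, S = space_by_time_nmf(X, p=3, n=4)
print("relative error:", np.linalg.norm(X - T @ H @ S) / np.linalg.norm(X))
```

In a multi-trial setting, T and S would typically be shared across stimuli while a separate coefficient matrix H is fit per trial, so that the per-trial coefficients carry the category information.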
Perceptions of Facial Expressions of Emotion in Autism Spectrum Disorders: Reading the “mind's eye” Using Reverse Correlation
One of the “primary social deficits” of Autism Spectrum Disorders (ASDs) is understanding the emotions of others, yet the current literature is inconclusive as to whether individuals with ASD perceive basic facial expressions of emotion differently from typically developed (TD) individuals [Simmons et al. 2009, Vision Research, 49, 2705-2739] and, if so, which specific emotions are confused.
Cracking the code of oscillatory activity
Neural oscillations are ubiquitous measurements of cognitive processes and dynamic routing and gating of information. The fundamental and so far unresolved problem for neuroscience remains to understand how oscillatory activity in the brain codes information for human cognition. In a biologically relevant cognitive task, we instructed six human observers to categorize facial expressions of emotion while we measured the observers' EEG. We combined state-of-the-art stimulus control with statistical information theory analysis to quantify how the three parameters of oscillations (i.e., power, phase, and frequency) code the visual information relevant for behavior in a cognitive task. We make three points: First, we demonstrate that phase codes considerably more information (2.4 times) relating to the cognitive task than power. Second, we show that the conjunction of power and phase coding reflects detailed visual features relevant for behavioral response; that is, features of facial expressions predicted by behavior. Third, we demonstrate, in analogy to communication technology, that oscillatory frequencies in the brain multiplex the coding of visual features, increasing coding capacity. Together, our findings about the fundamental coding properties of neural oscillations will redirect the research agenda in neuroscience by establishing the differential role of frequency, phase, and amplitude in coding behaviorally relevant information in the brain.
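The comparison of how much task information phase carries versus power rests on information-theoretic analysis of single-trial oscillatory parameters. A minimal, generic version of that kind of analysis (not the authors' pipeline) is sketched below: extract per-trial phase and power in one band via a Hilbert transform, discretize, and compute plug-in mutual information with the stimulus category. The band choice, bin counts, and the uncorrected plug-in estimator (real analyses need sampling-bias correction) are all assumptions.

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def mutual_information(x_disc, y_disc):
    """Plug-in mutual information (bits) between two integer-coded arrays."""
    joint = np.zeros((x_disc.max() + 1, y_disc.max() + 1))
    for x, y in zip(x_disc, y_disc):
        joint[x, y] += 1
    joint /= joint.sum()
    px, py = joint.sum(1, keepdims=True), joint.sum(0, keepdims=True)
    nz = joint > 0
    return float((joint[nz] * np.log2(joint[nz] / (px @ py)[nz])).sum())

def phase_power_mi(trials, labels, fs, band, t_idx, n_bins=4):
    """MI between stimulus category and the phase / power of one
    frequency band at one time point, across trials (n_trials x n_samples)."""
    labels = np.asarray(labels)
    b, a = butter(4, np.array(band) / (fs / 2), btype="band")
    analytic = hilbert(filtfilt(b, a, trials, axis=1), axis=1)
    phase = np.angle(analytic[:, t_idx])            # instantaneous phase
    power = np.abs(analytic[:, t_idx]) ** 2         # instantaneous power
    phase_bins = np.digitize(phase, np.linspace(-np.pi, np.pi, n_bins + 1)[1:-1])
    power_bins = np.digitize(power, np.quantile(power, np.linspace(0, 1, n_bins + 1)[1:-1]))
    return (mutual_information(phase_bins, labels),
            mutual_information(power_bins, labels))

# toy usage: 200 trials of 1 s at 250 Hz, 2 categories, alpha band (assumed)
rng = np.random.default_rng(0)
trials = rng.standard_normal((200, 250))
labels = rng.integers(0, 2, 200)
print(phase_power_mi(trials, labels, fs=250, band=(8, 12), t_idx=125))
```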
The use of 3D printing in the development of gaseous radiation detectors
Fused Deposition Modelling has been used to produce a small, single-wire, Iarocci-style drift tube, demonstrating the feasibility of using this Additive Manufacturing technique to produce cheap detectors quickly. Recent technological developments have extended the scope of Additive Manufacturing, or 3D printing, to the fabrication of Gaseous Radiation Detectors such as Single Wire Proportional Counters and Time Projection Chambers. 3D printing could allow for the production of customisable, modular detectors that can be easily created and replaced, for printing detectors on-site in remote locations, and even for outreach within schools.
The 3D printed drift tube was printed in Polylactic acid to produce a gas volume in the shape of an inverted triangular prism, with a base length of 28 mm, a height of 24.25 mm, and a tube length of 145 mm. A stainless steel anode wire was placed in the centre of the tube mid-print. P5 gas (95% Argon, 5% Methane) was used as the drift gas, and a circuit was built to capacitively decouple signals from the high voltage. The signal rate and average pulse height of cosmic ray muons were measured over a range of bias voltages to characterise the printed detector and verify its correct operation.
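Capacitive decoupling of the kind mentioned here is, in its simplest form, a series HV-rated capacitor feeding the readout load: fast anode pulses pass through while the DC bias is blocked, and the network acts as a high-pass filter with cutoff f_c = 1/(2πRC). The component values in this worked example are illustrative assumptions, not values from the paper.

```python
import math

# Hedged, illustrative values only: the paper does not give component values.
C = 1e-9   # 1 nF HV-rated decoupling capacitor (assumed)
R = 50.0   # 50 ohm readout/termination resistance (assumed)
f_c = 1.0 / (2 * math.pi * R * C)
print(f"high-pass cutoff: {f_c / 1e6:.1f} MHz")   # ~3.2 MHz: passes fast pulses, blocks DC bias
```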
Parametric study of EEG sensitivity to phase noise during face processing
Background:
The present paper examines the visual processing speed of complex objects, here faces, by mapping the relationship between object physical properties and single-trial brain responses. Measuring visual processing speed is challenging because uncontrolled physical differences that co-vary with object categories might affect brain measurements, thus biasing our speed estimates. Recently, we demonstrated that early event-related potential (ERP) differences between faces and objects are preserved even when images differ only in phase information, and amplitude spectra are equated across image categories. Here, we use a parametric design to study how early ERPs to faces are shaped by phase information. Subjects performed a two-alternative forced-choice discrimination between two faces (Experiment 1) or textures (two control experiments). All stimuli had the same amplitude spectrum and were presented at 11 phase noise levels, varying from 0% to 100% in 10% increments, using a linear phase interpolation technique (sketched below). Single-trial ERP data from each subject were analysed using a multiple linear regression model.
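Linear phase interpolation of the kind described here can be approximated as a weighted mix of an image's Fourier phase with a random, Hermitian-symmetric phase field, holding the amplitude spectrum constant. The sketch below is a simplified version that ignores phase-wrapping subtleties at ±π; the toy image is an assumption standing in for an actual face stimulus.

```python
import numpy as np

def phase_noise_image(img, noise_level, seed=0):
    """Mix an image's Fourier phase with random phase while keeping the
    amplitude spectrum fixed. noise_level in [0, 1]: 0 = original image,
    1 = fully randomized phase."""
    rng = np.random.default_rng(seed)
    F = np.fft.fft2(img)
    amp, phase = np.abs(F), np.angle(F)
    # FFT of real noise gives a Hermitian-symmetric (antisymmetric-phase)
    # field, so the weighted mix stays consistent with a real image
    rand = np.angle(np.fft.fft2(rng.random(img.shape)))
    mixed = (1 - noise_level) * phase + noise_level * rand
    return np.real(np.fft.ifft2(amp * np.exp(1j * mixed)))

# 11 noise levels from 0% to 100% in 10% steps, as in the experiment
face = np.random.default_rng(2).random((128, 128))  # stand-in for a face image
stimuli = [phase_noise_image(face, w) for w in np.linspace(0.0, 1.0, 11)]
```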
Results:
Our results show that sensitivity to phase noise in faces emerges progressively in a short time window between the P1 and the N170 ERP visual components. The sensitivity to phase noise starts at about 120–130 ms after stimulus onset and continues for another 25–40 ms. This result was robust both within and across subjects. A control experiment using pink noise textures, which had the same second-order statistics as the faces used in Experiment 1, demonstrated that the sensitivity to phase noise observed for faces cannot be explained by the presence of global image structure alone. A second control experiment used wavelet textures that were matched to the face stimuli in terms of second- and higher-order image statistics. Results from this experiment suggest that higher-order statistics of faces are necessary but not sufficient to obtain the phase-noise sensitivity function observed in response to faces.
Conclusion:
Our results constitute the first quantitative assessment of the time course of phase information processing by the human visual brain. We interpret our results within a framework that focuses on image statistics and single-trial analyses.