
    Understanding and Comparing Deep Neural Networks for Age and Gender Classification

    Recently, deep neural networks have demonstrated excellent performance in recognizing age and gender from human face images. However, these models were applied in a black-box manner, with no information provided about which facial features are actually used for prediction and how these features depend on image preprocessing, model initialization and architecture choice. We present a study investigating these different effects. In detail, our work compares four popular neural network architectures, studies the effect of pretraining, evaluates the robustness of the considered alignment preprocessing methods via cross-method test set swapping, and intuitively visualizes the model's prediction strategies under the given preprocessing conditions using the recent Layer-wise Relevance Propagation (LRP) algorithm. Our evaluations on the challenging Adience benchmark show that suitable parameter initialization leads to a holistic perception of the input, compensating for artefactual data representations. With a combination of simple preprocessing steps, we reach state-of-the-art performance in gender recognition. Comment: 8 pages, 5 figures, 5 tables. Presented at ICCV 2017 Workshop: 7th IEEE International Workshop on Analysis and Modeling of Faces and Gestures
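The Layer-wise Relevance Propagation algorithm named above backpropagates a prediction score through the network, layer by layer, as "relevance", so that each input pixel receives a share of the output. A minimal sketch of the LRP-ε rule for a single dense layer, assuming NumPy; the function name and the toy weights are illustrative and not taken from the paper:

```python
import numpy as np

def lrp_epsilon_dense(a, W, b, R_out, eps=1e-6):
    """Redistribute the relevance R_out of a dense layer's outputs back to
    its inputs (activations a, weights W, bias b) using the LRP-epsilon rule."""
    z = a @ W + b                  # forward pre-activations
    z = z + eps * np.sign(z)       # epsilon term stabilises division by small z
    s = R_out / z                  # relevance per unit of pre-activation
    return a * (s @ W.T)           # redistribute to inputs in proportion to a_i * w_ij

# toy example: relevance of two inputs for a single output neuron
a = np.array([1.0, 2.0])
W = np.array([[0.5], [0.25]])
b = np.array([0.0])
R_in = lrp_epsilon_dense(a, W, b, R_out=np.array([1.0]))
```

For small ε the rule approximately conserves relevance: the entries of `R_in` sum to the propagated `R_out`, which is the property that makes the resulting heatmaps interpretable as a decomposition of the prediction.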

    Methods for Interpreting and Understanding Deep Neural Networks

    This paper provides an entry point to the problem of interpreting a deep neural network model and explaining its predictions. It is based on a tutorial given at ICASSP 2017. It introduces some recently proposed techniques of interpretation, along with theory, tricks and recommendations, to make the most efficient use of these techniques on real data. It also discusses a number of practical applications. Comment: 14 pages, 10 figures
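Among the simplest interpretation techniques covered in tutorials of this kind is sensitivity analysis, which scores each input feature by the squared gradient of the model output with respect to that feature. A hedged sketch using central finite differences on a toy linear scorer (the function names and values are illustrative, not from the paper):

```python
import numpy as np

def sensitivity_map(f, x, eps=1e-4):
    """Sensitivity analysis: squared finite-difference gradient of the
    scalar model output f(x) with respect to each input feature."""
    grad = np.zeros_like(x)
    for i in range(x.size):
        xp, xm = x.copy(), x.copy()
        xp[i] += eps
        xm[i] -= eps
        grad[i] = (f(xp) - f(xm)) / (2 * eps)  # central difference
    return grad ** 2

# toy "model": a fixed linear scorer over three input features
w = np.array([3.0, -1.0, 0.0])
x = np.array([1.0, 1.0, 1.0])
S = sensitivity_map(lambda v: float(v @ w), x)  # ≈ [9, 1, 0]
```

For a linear model the map simply recovers the squared weights; for a real network one would use automatic differentiation rather than finite differences, but the interpretation of the scores is the same.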

    Modelling, Classification and Synthesis of Facial Expressions

    The field of computer vision endeavours to develop automatic approaches to the interpretation of images from the real world. Over the past several decades, researchers in this field have created systems specifically for the automatic analysis of facial expression. The most successful of these approaches draw on tools from behavioural science. In this chapter we examine facial expression analysis from both a behavioural science and a computer vision perspective. First, we provide details of the principal approach used in behavioural science to analyze facial expressions. This includes an overview of the evolution of facial expression analysis, where we introduce the field with Darwin’s initial findings (Darwin, 1872). We then show how his findings were confirmed nearly 100 years later by Ekman et al. (Ekman et al., 1969). Following on from this, we provide details of recent work investigating the appearance and dynamics of facial expressions.

    Seeing Seeing

    I argue that we can visually perceive others as seeing agents. I start by characterizing perceptual processes as those that are causally controlled by proximal stimuli. I then distinguish between various forms of visual perspective-taking, before presenting evidence that most of them come in perceptual varieties. In doing so, I clarify and defend the view that some forms of visual perspective-taking are “automatic”, a view that has been marshalled in support of dual-process accounts of mindreading.

    Machine Learning and Notions of the Image


    Physiology and neuroanatomy of emotional reactivity in frontotemporal dementia

    ABSTRACT AND SUMMARY OF EXPERIMENTAL FINDINGS
    The frontotemporal dementias (FTD) are a heterogeneous group of neurodegenerative diseases that cause variable profiles of fronto-insulo-temporal network disintegration. Loss of empathy and dysfunctional social interaction are leading features of FTD and major determinants of care burden, but remain poorly understood and difficult to measure with conventional neuropsychological instruments. Building on a large body of work in the healthy brain showing that embodied responses are important components of emotional responses and empathy, I performed a series of experiments to examine the extent to which the induction and decoding of somatic physiological responses to the emotions of others are degraded in FTD, and to define the underlying neuroanatomical changes responsible for these deficits. I systematically studied a range of modalities across the entire syndromic spectrum of FTD, including daily-life emotional sensitivity, the cognitive categorisation of emotions, interoceptive accuracy, automatic facial mimicry, autonomic responses, and structural and functional neuroanatomy, to deconstruct aberrant emotional reactivity in these diseases. My results provide proof of principle for the utility of physiological measures in deconstructing complex socioemotional symptoms and suggest that these warrant further investigation as clinical biomarkers in FTD.

    Chapter 3: Using a heartbeat counting task, I found that interoceptive accuracy is impaired in semantic variant primary progressive aphasia, but correlates with sensitivity to the emotions of others across FTD syndromes. Voxel-based morphometry demonstrated that impaired interoceptive accuracy correlates with grey matter volume in the anterior cingulate, insula and amygdala.

    Chapter 4: Using facial electromyography to index automatic imitation, I showed that mimicry of emotional facial expressions is impaired in the behavioural and right temporal variants of FTD. Automatic imitation predicted correct identification of facial emotions in healthy controls and in syndromes focussed on the frontal lobes and insula, but not in syndromes focussed on the temporal lobes, suggesting that automatic imitation aids emotion recognition only when social concepts and semantic stores are intact. Voxel-based morphometry replicated previously identified neuroanatomical correlates of emotion identification ability, while automatic imitation was associated with grey matter volume in a visuomotor network including the primary visual and motor cortices, visual motion area (MT/V5) and supplementary motor cortex.

    Chapter 5: By recording heart rate during viewing of facial emotions, I showed that normal cardiac reactivity to emotion is impaired in FTD syndromes with fronto-insular atrophy (behavioural variant FTD and nonfluent variant primary progressive aphasia) but not in syndromes focussed on the temporal lobes (right temporal variant FTD and semantic variant primary progressive aphasia). Unlike automatic imitation, cardiac reactivity dissociated from emotion identification ability. Voxel-based morphometry revealed grey matter correlates of cardiac reactivity in the anterior cingulate, insula and orbitofrontal cortex.

    Chapter 6: Subjects viewed videos of facial emotions during fMRI scanning, with concomitant recording of heart rate and pupil size. I identified syndromic profiles of reduced activity in posterior face-responsive regions, including the posterior superior temporal sulcus and fusiform face area. Emotion identification ability was predicted by activity in more anterior areas, including the anterior cingulate, insula, inferior frontal gyrus and temporal pole. Autonomic reactivity was related to activity both in components of the central autonomic control network and in regions responsible for processing the sensory properties of the stimuli.
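The heartbeat counting task used to measure interoceptive accuracy is conventionally scored in the interoception literature with a Schandry-style formula, comparing counted to recorded beats per trial. The abstract does not state its scoring method, so the sketch below is only a plausible illustration of how such a score is computed:

```python
def interoceptive_accuracy(counted, recorded):
    """Heartbeat-counting accuracy in the style of Schandry (1981):
    mean over trials of 1 - |recorded - counted| / recorded."""
    scores = [1 - abs(r - c) / r for c, r in zip(counted, recorded)]
    return sum(scores) / len(scores)

# toy example: three counting trials (counted by the subject vs. ECG-recorded)
acc = interoceptive_accuracy(counted=[28, 40, 55], recorded=[30, 42, 60])
```

The score lies in (-inf, 1], with 1 indicating perfect counting; values near 1 are typically read as high interoceptive accuracy.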