
    Affective games: a multimodal classification system

    Affective gaming is a relatively new field of research that exploits human emotions to influence gameplay for an enhanced player experience. Changes in a player's psychology are reflected in their behaviour and physiology, so recognition of such variation is a core element in affective games. Complementary sources of affect offer more reliable recognition, especially in contexts where one modality is partial or unavailable. As multimodal recognition systems, affect-aware games are subject to the practical difficulties met by traditional trained classifiers. In addition, inherent game-related challenges in data collection and performance arise while attempting to sustain an acceptable level of immersion. Most existing scenarios employ sensors that restrict freedom of movement, resulting in less realistic experiences. Recent advances offer technology that allows players to communicate more freely and naturally with the game and, furthermore, to control it without input devices. However, the affective game industry is still in its infancy and needs to catch up with the life-like level of adaptation already provided by graphics and animation.
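
    A toy illustration of the multimodal point: with simple late fusion, per-modality predictions are averaged, and a missing modality drops out of the fusion rather than breaking recognition. The modality names, emotion classes, and averaging rule below are assumptions for illustration, not the system described in the paper.

```python
# Hedged sketch of late fusion across affect modalities: a missing
# modality (None) is simply skipped, so recognition degrades gracefully.
import numpy as np

def fuse_affect_predictions(per_modality_probs):
    """Average class-probability vectors, skipping missing modalities (None)."""
    available = [p for p in per_modality_probs.values() if p is not None]
    if not available:
        raise ValueError("no modality available")
    return np.mean(available, axis=0)

# Probabilities over (e.g.) [neutral, frustrated, excited] per modality;
# here the physiology sensor has dropped out for this frame.
probs = {
    "facial": np.array([0.2, 0.5, 0.3]),
    "speech": np.array([0.1, 0.7, 0.2]),
    "physiology": None,
}
print(fuse_affect_predictions(probs))  # [0.15, 0.6, 0.25]
```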

    Risk factors for chest infection in acute stroke: a prospective cohort study

    Background and Purpose: Pneumonia is a major cause of morbidity and mortality after stroke. We aimed to determine key characteristics that would allow prediction of those patients who are at highest risk for poststroke pneumonia.

    Methods: We studied a series of consecutive patients with acute stroke who were admitted to hospital. Detailed evaluation included the modified National Institutes of Health Stroke Scale; the Abbreviated Mental Test; and measures of swallow, respiratory, and oral health status. Pneumonia was diagnosed by set criteria. Patients were followed up at 3 months after stroke.

    Results: We studied 412 patients, 391 (94.9%) with ischemic stroke and 21 (5.1%) with hemorrhagic stroke; 78 (18.9%) met the study criteria for pneumonia. Subjects who developed pneumonia were older (mean±SD age, 75.9±11.4 vs 64.9±13.9 years), had higher modified National Institutes of Health Stroke Scale scores, a history of chronic obstructive pulmonary disease, lower Abbreviated Mental Test scores, and a higher oral cavity score, and a greater proportion tested positive for bacterial cultures from oral swabs. In binary logistic-regression analysis, independent predictors (P<0.05) of pneumonia were age >65 years, dysarthria or no speech due to aphasia, a modified Rankin Scale score ≥4, an Abbreviated Mental Test score <8, and failure on the water swallow test. The presence of 2 or more of these risk factors carried 90.9% sensitivity and 75.6% specificity for the development of pneumonia.

    Conclusions: Pneumonia after stroke is associated with older age, dysarthria/no speech due to aphasia, severity of poststroke disability, cognitive impairment, and an abnormal water swallow test result. Simple assessment of these variables could be used to identify patients at high risk of developing pneumonia after stroke.
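
    The five independent predictors translate into a simple bedside count. The sketch below expresses that rule as code; the field names and record format are hypothetical assumptions, and only the thresholds and the two-or-more cutoff come from the abstract.

```python
# Hypothetical sketch of the five-factor pneumonia risk screen described
# in the abstract. Field names and the input record format are assumptions;
# the thresholds (age > 65, mRS >= 4, AMT < 8, etc.) come from the abstract.

def poststroke_pneumonia_risk_factors(patient: dict) -> int:
    """Count how many of the five independent predictors are present."""
    factors = [
        patient["age"] > 65,
        patient["dysarthria_or_no_speech"],      # dysarthria or no speech due to aphasia
        patient["modified_rankin_scale"] >= 4,   # severe poststroke disability
        patient["abbreviated_mental_test"] < 8,  # cognitive impairment
        patient["failed_water_swallow_test"],
    ]
    return sum(factors)

def high_risk(patient: dict) -> bool:
    # Two or more factors carried 90.9% sensitivity / 75.6% specificity
    # for pneumonia in the study cohort.
    return poststroke_pneumonia_risk_factors(patient) >= 2

# Example usage with a hypothetical patient record:
example = {
    "age": 78,
    "dysarthria_or_no_speech": True,
    "modified_rankin_scale": 3,
    "abbreviated_mental_test": 9,
    "failed_water_swallow_test": False,
}
print(high_risk(example))  # True: two factors present (age, speech)
```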

    Perceptually relevant speech tracking in auditory and motor cortex reflects distinct linguistic features

    During online speech processing, our brain tracks the acoustic fluctuations in speech at different timescales. Previous research has focused on generic timescales (for example, delta or theta bands) that are assumed to map onto linguistic features such as prosody or syllables. However, given the high intersubject variability in speaking patterns, such a generic association between the timescales of brain activity and speech properties can be ambiguous. Here, we analyse speech tracking in source-localised magnetoencephalographic data by directly focusing on timescales extracted from statistical regularities in our speech material. This revealed widespread significant tracking at the timescales of phrases (0.6–1.3 Hz), words (1.8–3 Hz), syllables (2.8–4.8 Hz), and phonemes (8–12.4 Hz). Importantly, when examining its perceptual relevance, we found stronger tracking for correctly comprehended trials in the left premotor (PM) cortex at the phrasal scale as well as in left middle temporal cortex at the word scale. Control analyses using generic bands confirmed that these effects were specific to the speech regularities in our stimuli. Furthermore, we found that the phase at the phrasal timescale coupled to power at beta frequency (13–30 Hz) in motor areas. This cross-frequency coupling presumably reflects top-down temporal prediction in ongoing speech perception. Together, our results reveal specific functional and perceptually relevant roles of distinct tracking and cross-frequency processes along the auditory–motor pathway.
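
    The reported coupling between phrasal-scale phase and beta power is a form of phase-amplitude coupling. The abstract does not specify which coupling metric was used, so the sketch below uses one common choice, the mean vector length, purely as an illustration of the general computation.

```python
# Minimal sketch of phase-amplitude coupling between the phrasal timescale
# (0.6-1.3 Hz phase) and beta power (13-30 Hz amplitude), using the mean
# vector length measure. This is an illustration of the technique, not the
# authors' exact analysis.
import numpy as np
from scipy.signal import butter, sosfiltfilt, hilbert

def bandpass(x, lo, hi, fs, order=4):
    sos = butter(order, [lo, hi], btype="band", fs=fs, output="sos")
    return sosfiltfilt(sos, x)

def phase_amplitude_coupling(signal, fs, phase_band=(0.6, 1.3), amp_band=(13, 30)):
    """Mean vector length: |mean(amplitude * exp(i * phase))|."""
    phase = np.angle(hilbert(bandpass(signal, *phase_band, fs)))
    amp = np.abs(hilbert(bandpass(signal, *amp_band, fs)))
    return np.abs(np.mean(amp * np.exp(1j * phase)))

# Example on synthetic data: beta bursts locked to a 1 Hz phrasal rhythm.
fs = 250
t = np.arange(0, 60, 1 / fs)
slow = np.sin(2 * np.pi * 1.0 * t)
beta = (1 + slow) * np.sin(2 * np.pi * 20 * t)  # beta amplitude follows slow phase
print(phase_amplitude_coupling(slow + 0.5 * beta, fs))
```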

    Statistical Models of Reconstructed Phase Spaces for Signal Classification

    This paper introduces a novel approach to the analysis and classification of time series signals using statistical models of reconstructed phase spaces. With sufficient dimension, such reconstructed phase spaces are, with probability one, guaranteed to be topologically equivalent to the state dynamics of the generating system, and, therefore, may contain information that is absent in analysis and classification methods rooted in linear assumptions. Parametric and nonparametric distributions are introduced as statistical representations over the multidimensional reconstructed phase space, with classification accomplished through methods such as Bayes maximum likelihood and artificial neural networks (ANNs). The technique is demonstrated on heart arrhythmia classification and speech recognition. This new approach is shown to be a viable and effective alternative to traditional signal classification approaches, particularly for signals with strong nonlinear characteristics.
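
    The core construction is time-delay embedding followed by a statistical model over the embedded points. The sketch below illustrates the general recipe with a Gaussian mixture model per class and maximum-likelihood classification; the embedding dimension, lag, and mixture size are illustrative choices, not the paper's settings.

```python
# Hedged sketch of the general technique: embed each 1-D signal into a
# reconstructed phase space via time-delay embedding, fit one Gaussian
# mixture model (GMM) per class over the embedded points, and classify a
# new signal by Bayes maximum likelihood over its trajectory.
import numpy as np
from sklearn.mixture import GaussianMixture

def delay_embed(x, dim=3, lag=5):
    """Map a 1-D signal to points in a dim-dimensional phase space."""
    n = len(x) - (dim - 1) * lag
    return np.column_stack([x[i * lag : i * lag + n] for i in range(dim)])

class PhaseSpaceClassifier:
    def __init__(self, dim=3, lag=5, n_components=8):
        self.dim, self.lag, self.n_components = dim, lag, n_components
        self.models = {}

    def fit(self, signals, labels):
        # One GMM per class, trained on the pooled embedded points.
        for label in set(labels):
            pts = np.vstack([delay_embed(s, self.dim, self.lag)
                             for s, l in zip(signals, labels) if l == label])
            self.models[label] = GaussianMixture(self.n_components).fit(pts)

    def predict(self, signal):
        # Maximum likelihood: sum log-likelihoods over the trajectory
        # and pick the class whose model explains it best.
        pts = delay_embed(signal, self.dim, self.lag)
        scores = {l: m.score_samples(pts).sum() for l, m in self.models.items()}
        return max(scores, key=scores.get)
```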

    Depression-related difficulties disengaging from negative faces are associated with sustained attention to negative feedback during social evaluation and predict stress recovery

    The present study aimed to clarify: 1) the presence of depression-related attention bias related to a social stressor, 2) its association with depression-related attention biases as measured under standard conditions, and 3) their association with impaired stress recovery in depression. A sample of 39 participants reporting a broad range of depression levels completed a standard eye-tracking paradigm in which they had to engage/disengage their gaze with/from emotional faces. Participants then underwent a stress induction (i.e., giving a speech), in which their eye movements to false emotional feedback were measured, and stress reactivity and recovery were assessed. Depression level was associated with longer times to engage/disengage attention with/from negative faces under standard conditions and with sustained attention to negative feedback during the speech. These depression-related biases were associated with each other and mediated the association between depression level and self-reported stress recovery, predicting poorer recovery from stress after giving the speech.

    Speech-based recognition of self-reported and observed emotion in a dimensional space

    The differences between self-reported and observed emotion have only marginally been investigated in the context of speech-based automatic emotion recognition. We address this issue by comparing self-reported emotion ratings to observed emotion ratings and look at how differences between these two types of ratings affect the development and performance of automatic emotion recognizers developed with these ratings. A dimensional approach to emotion modeling is adopted: the ratings are based on continuous arousal and valence scales. We describe the TNO-Gaming Corpus that contains spontaneous vocal and facial expressions elicited via a multiplayer videogame and that includes emotion annotations obtained via self-report and observation by outside observers. Comparisons show that there are discrepancies between self-reported and observed emotion ratings which are also reflected in the performance of the emotion recognizers developed. Using Support Vector Regression in combination with acoustic and textual features, recognizers of arousal and valence are developed that can predict points in a 2-dimensional arousal-valence space. The results of these recognizers show that the self-reported emotion is much harder to recognize than the observed emotion, and that averaging ratings from multiple observers improves performance.
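
    Predicting a point in arousal-valence space amounts to training one regressor per axis. The sketch below illustrates this with scikit-learn's SVR on placeholder features and ratings; feature extraction and the corpus itself are out of scope, so X and the rating vectors are stand-ins, not the paper's data.

```python
# Illustrative sketch of dimensional emotion recognition with Support Vector
# Regression: one regressor per axis predicts a point in the 2-D
# arousal-valence space. X is assumed to hold acoustic/textual features and
# y_arousal / y_valence the (possibly observer-averaged) ratings.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 40))            # placeholder feature matrix
y_arousal = rng.uniform(-1, 1, size=200)  # placeholder ratings in [-1, 1]
y_valence = rng.uniform(-1, 1, size=200)

arousal_model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=1.0))
valence_model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=1.0))
arousal_model.fit(X, y_arousal)
valence_model.fit(X, y_valence)

# Predict a point in arousal-valence space for a new utterance's features.
point = (arousal_model.predict(X[:1])[0], valence_model.predict(X[:1])[0])
print(point)
```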