First impressions: A survey on vision-based apparent personality trait analysis
© 2019 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works.

Personality analysis has been widely studied in psychology, neuropsychology, and signal processing, among other fields. Over the past few years, it has also become an attractive research area in visual computing. From the computational point of view, speech and text have by far been the most considered cues for analyzing personality. Recently, however, there has been increasing interest from the computer vision community in analyzing personality from visual data. Recent computer vision approaches can accurately analyze human faces, body postures and behaviors, and use this information to infer apparent personality traits. Because of the overwhelming research interest in this topic, and of the potential impact such methods could have on society, we present in this paper an up-to-date review of existing vision-based approaches to apparent personality trait recognition. We describe seminal and cutting-edge works on the subject, discussing and comparing their distinctive features and limitations. Future avenues of research in the field are identified and discussed. Furthermore, we review aspects of subjectivity in data labeling/evaluation, as well as current datasets and challenges organized to push research in the field forward.
What does not happen: quantifying embodied engagement using NIMI and self-adaptors
Previous research into the quantification of embodied intellectual and emotional engagement using non-verbal movement parameters has not yielded consistent results across different studies. Our research introduces NIMI (Non-Instrumental Movement Inhibition) as an alternative parameter. We propose that the absence of certain types of possible movements can be a more holistic proxy for cognitive engagement with media (in seated persons) than searching for the presence of other movements. Rather than analyzing total movement as an indicator of engagement, our research team distinguishes between instrumental movements (i.e. physical movement serving a direct purpose in the given situation) and non-instrumental movements, and investigates them in the context of the narrative rhythm of the stimulus. We demonstrate that NIMI occurs by showing viewers' movement levels entrained (i.e. synchronised) to the repeating narrative rhythm of a timed computer-presented quiz. Finally, we discuss the role of objective metrics of engagement in future context-aware analysis of human behaviour in audience research, interactive media and responsive system and interface design.
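The entrainment claim above can be sketched as a correlation between a per-frame movement signal and the periodic rhythm of the stimulus. The function name, the cosine reference signal, and the parameters below are illustrative assumptions, not the authors' actual pipeline:

```python
import numpy as np

def movement_entrainment(movement, period, fps):
    """Correlate a per-frame movement signal with a periodic stimulus rhythm.

    movement : 1-D array of frame-to-frame movement magnitudes
    period   : stimulus rhythm period in seconds (e.g. one quiz-question cycle)
    fps      : sampling rate of the movement signal in frames per second
    """
    t = np.arange(len(movement)) / fps
    # Reference oscillation at the narrative rhythm of the stimulus.
    ref = np.cos(2 * np.pi * t / period)
    m = movement - movement.mean()
    r = ref - ref.mean()
    # Pearson correlation: strongly negative values mean movement is
    # *inhibited* in phase with the rhythm (the NIMI effect).
    return float(np.dot(m, r) / (np.linalg.norm(m) * np.linalg.norm(r)))
```

With a movement signal that dips once per rhythm cycle, the correlation approaches -1, which is the phase-locked inhibition the abstract describes.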
The influence of television stories on narrative abilities in children
This research explores the narrative abilities demonstrated by children aged between 8 and 12 in the production of television stories. The results reveal that not all television stories viewed by children foster the informal education process. One type of story, termed narrativizing, enables children to produce coherent stories which clearly articulate the causal, temporal and motivational relations, as well as the means-end structures, the proximal relations of the intrigue and the distal relations of the plot. Other television stories, denarrativizing stories, tend to induce disarrangements and incoherence at all structural levels of the stories produced by children. This in turn hampers the development of their narrative abilities, which are necessary for the correct development of narrative thought. These results indicate the need to exercise social control over this latter type of fictional television narrative, to which children are exposed throughout their development within the framework of informal education.

University of the Basque Country (UPV/EHU), EHU 13/65
Universidad del País Vasco/Euskal Herriko Unibertsitatea (UPV/EHU), GIU 15/14
Universidad del País Vasco/Euskal Herriko Unibertsitatea (UPV/EHU), UFI 11/04
MINECO. Ministerio de Economía y Competitividad, BES-2015-071923
Fondo Social Europeo, BES-2015-07192
Time-delay neural network for continuous emotional dimension prediction from facial expression sequences
"© 2015 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works."

Automatic continuous affective state prediction from naturalistic facial expressions is a very challenging but very important research topic in human-computer interaction. One of the main challenges is modeling the dynamics that characterize naturalistic expressions. In this paper, a novel two-stage automatic system is proposed to continuously predict affective dimension values from facial expression videos. In the first stage, traditional regression methods are used to classify each individual video frame, while in the second stage, a Time-Delay Neural Network (TDNN) is proposed to model the temporal relationships between consecutive predictions. The two-stage approach separates the modeling of emotional state dynamics from the individual emotional state prediction step based on input features. In doing so, the temporal information used by the TDNN is not biased by the high variability between features of consecutive frames, and the network can more easily exploit the slowly changing dynamics between emotional states. The system was fully tested and evaluated on three different facial expression video datasets. Our experimental results demonstrate that the use of a two-stage approach combined with the TDNN, taking previously classified frames into account, significantly improves the overall performance of continuous emotional state estimation in naturalistic facial expressions. The proposed approach won the affect recognition sub-challenge of the third international Audio/Visual Emotion Recognition Challenge (AVEC 2013).
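The second stage can be illustrated minimally: each smoothed output is a function of a causal window (the "time delays") of stage-1 per-frame predictions. The function name, the edge padding, and the use of a single linear unit in place of the paper's trained nonlinear TDNN are simplifying assumptions for illustration:

```python
import numpy as np

def tdnn_smooth(frame_preds, weights, bias=0.0):
    """Second-stage temporal model: each output is a weighted combination
    of the current and D-1 previous stage-1 predictions (a minimal linear
    time-delay unit; the paper's TDNN is a trained nonlinear network)."""
    d = len(weights)
    # Pad the start so the earliest frames see a full causal window.
    padded = np.concatenate([np.full(d - 1, frame_preds[0]), frame_preds])
    out = np.empty(len(frame_preds))
    for t in range(len(frame_preds)):
        out[t] = np.dot(weights, padded[t:t + d]) + bias
    return out
```

With uniform averaging weights, an isolated spiky frame-level prediction is spread into a slowly varying trajectory, which is the kind of dynamics smoothing the abstract attributes to the TDNN stage.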
Avatar actors
In this text I wish to discuss, and illustrate through pictorial examples, how the Live Visuals of three-dimensional online virtual worlds may be leading us into participatory and collaborative Play states during which we appear to become the creators, as well as the actors, of what may also be described as our own real-time cinematic output.
One of the most compelling of these stages may be the three-dimensional online virtual worlds in which avatars create and enact their own tales and conceptions, effectively bringing forth live, participatory cinema through Play.
Bringing tabletop technologies to kindergarten children
Taking computer technology away from the desktop and into a more physical, manipulative space is known to provide many benefits and is generally considered to result in a system that is easier to learn and more natural to use. This paper describes a design solution that allows kindergarten children to enjoy the new pedagogical possibilities that tangible interaction and tabletop technologies offer for manipulative learning. After analysing children's cognitive and psychomotor skills, we designed and tuned a prototype game that is suitable for children aged 3 to 4. Our prototype uniquely combines low-cost tangible interaction and tabletop technology with tutored learning. The design has been based on the observation of children using the technology, letting them play freely with the application during three play sessions. These observational sessions informed the design decisions for the game while also confirming the children's enjoyment of the prototype.
Spring School on Language, Music, and Cognition: Organizing Events in Time
The interdisciplinary spring school "Language, music, and cognition: Organizing events in time" was held from February 26 to March 2, 2018 at the Institute of Musicology of the University of Cologne. Language, speech, and music as events in time were explored from different perspectives including evolutionary biology, social cognition, developmental psychology, cognitive neuroscience of speech, language, and communication, as well as computational and biological approaches to language and music. There were 10 lectures, 4 workshops, and 1 student poster session.
Overall, the spring school investigated language and music as neurocognitive systems and focused on a mechanistic approach exploring the neural substrates underlying musical, linguistic, social, and emotional processes and behaviors. In particular, researchers approached questions concerning cognitive processes, computational procedures, and neural mechanisms underlying the temporal organization of language and music, mainly from two perspectives: one was concerned with syntax or structural representations of language and music as neurocognitive systems (i.e., an intrapersonal perspective), while the other emphasized social interaction and emotions in their communicative function (i.e., an interpersonal perspective). The spring school not only acted as a platform for knowledge transfer and exchange but also generated a number of important research questions as challenges for future investigations.
Directional adposition use in English, Swedish and Finnish
Directional adpositions such as to the left of describe where a Figure is in relation to a Ground. English and Swedish directional adpositions refer to the location of a Figure in relation to a Ground, whether both are static or in motion. In contrast, the Finnish directional adpositions edellä (in front of) and jäljessä (behind) solely describe the location of a moving Figure in relation to a moving Ground (Nikanne, 2003).
When using directional adpositions, a frame of reference must be assumed for interpreting their meaning. For example, the meaning of to the left of in English can be based on a relative (speaker- or listener-based) reference frame or an intrinsic (object-based) reference frame (Levinson, 1996). When a Figure and a Ground are both in motion, it is possible for a Figure to be described as being behind or in front of the Ground even if neither has intrinsic features. As shown by Walker (in preparation), there are good reasons to assume that in the latter case a motion-based reference frame is involved. This means that if Finnish speakers use edellä (in front of) and jäljessä (behind) more frequently in situations where both the Figure and Ground are in motion, a difference in reference frame use between Finnish on the one hand and English and Swedish on the other could be expected.
We asked native English, Swedish and Finnish speakers to select adpositions from a language-specific list to describe the location of a Figure relative to a Ground when both were shown to be moving on a computer screen. We were interested in any differences between Finnish, English and Swedish speakers.
All languages showed a predominant use of directional spatial adpositions referring to the lexical concepts TO THE LEFT OF, TO THE RIGHT OF, ABOVE and BELOW. There were no differences between the languages in directional adpositions use or reference frame use, including reference frame use based on motion.
We conclude that despite differences in the grammars of the languages involved, and potential differences in reference frame system use, the three languages investigated encode Figure location in relation to Ground location in a similar way when both are in motion.
Levinson, S. C. (1996). Frames of reference and Molyneux's question: Crosslinguistic evidence. In P. Bloom, M. A. Peterson, L. Nadel & M. F. Garrett (Eds.), Language and Space (pp. 109-170). Cambridge, MA: MIT Press.
Nikanne, U. (2003). How Finnish postpositions see the axis system. In E. van der Zee & J. Slack (Eds.), Representing direction in language and space. Oxford, UK: Oxford University Press.
Walker, C. (in preparation). Motion encoding in language: the use of spatial locatives in a motion context. Unpublished doctoral dissertation, University of Lincoln, Lincoln, United Kingdom.