    Change blindness: eradication of gestalt strategies

    Arrays of eight texture-defined rectangles were used as stimuli in a one-shot change blindness (CB) task in which there was a 50% chance that one rectangle would change orientation between two successive presentations separated by an interval. CB was eliminated by cueing the target rectangle in the first stimulus, reduced by cueing in the interval, and unaffected by cueing in the second presentation. This supports the idea that a representation was formed that persisted through the interval before being 'overwritten' by the second presentation (Landman et al., 2003, Vision Research 43, 149–164). Another possibility is that participants used some kind of grouping or Gestalt strategy. To test this, we changed the spatial positions of the rectangles in the second presentation by shifting them along imaginary spokes (by ±1 degree) emanating from the central fixation point. No significant difference in performance was found between this and the standard task [F(1,4) = 2.565, p = 0.185]. This suggests two things: (i) Gestalt grouping is not used as a strategy in these tasks, and (ii) it lends further weight to the argument that objects may be stored in, and retrieved from, a pre-attentional store during this task.
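    The reported statistic can be sanity-checked from the abstract alone: for an F-test, the p-value is the upper-tail probability of the F distribution, with the stated degrees of freedom, at the observed F value. A minimal sketch in Python (SciPy assumed; the numbers are taken from the abstract above):

        from scipy import stats

        # Reported test from the abstract: F(1,4) = 2.565, p = 0.185
        f_value, dfn, dfd = 2.565, 1, 4

        # The p-value is the survival function (upper tail) of the F distribution
        p = stats.f.sf(f_value, dfn, dfd)
        print(f"p = {p:.3f}")  # prints p = 0.185, consistent with the reported value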

    The role of facial movements in emotion recognition

    Most past research on emotion recognition has used photographs of posed expressions intended to depict the apex of the emotional display. Although these studies have provided important insights into how emotions are perceived in the face, they necessarily leave out any role of dynamic information. In this Review, we synthesize evidence from vision science, affective science and neuroscience to ask when, how and why dynamic information contributes to emotion recognition, beyond the information conveyed in static images. Dynamic displays offer distinctive temporal information such as the direction, quality and speed of movement, which recruits higher-level cognitive processes and supports social and emotional inferences that enhance judgements of facial affect. The positive influence of dynamic information on emotion recognition is most evident in suboptimal conditions, when observers are impaired and/or facial expressions are degraded or subtle. Dynamic displays further recruit early attentional and motivational resources in the perceiver, facilitating the prompt detection and prediction of others’ emotional states, with benefits for social interaction. Finally, because emotions can be expressed in various modalities, we examine the multimodal integration of dynamic and static cues across different channels, and conclude with suggestions for future research.

    Differences in configural processing for human versus android dynamic facial expressions

    Humanlike androids can function as social agents in social situations and in experimental research. While some androids can imitate facial emotion expressions, it is unclear whether their expressions tap the same processing mechanisms used for human expressions, for example configural processing. In this study, the effects of global inversion and of asynchrony between facial features, as configuration manipulations, were compared in android and human dynamic emotion expressions. Seventy-five participants provided (1) emotion recognition ratings for angry and happy expressions and (2) arousal and valence ratings for upright or inverted, synchronous or asynchronous, android or human dynamic emotion expressions. Asynchrony in dynamic expressions significantly decreased all ratings (except valence in angry expressions) for all human expressions, but did not affect android expressions. Inversion did not affect any measure, regardless of agent type. These results suggest that dynamic facial expressions are processed in a synchrony-based configural manner for humans, but not for androids.

    Animating observed emotional behaviour: a practice-based investigation comparing three approaches to self-figurative animation

    This research explores different animation approaches to rendering observed emotional behaviour through the creation of an animated artefact. It opens with an introduction to the research and the chosen methodology before progressing to a review of academic and practitioner-based literature associated with observed emotional behaviour. Building upon this foundation of literature, the thesis outlines how the artefact was created with a practice-based approach drawn from Haseman’s cycle of creation, feedback, reflection, and then creation. The main research question is augmented by a series of contributory questions that explore the research through iterations of animation drawn from a base of live-action footage of observed emotional behaviour. These exploratory iterations progress through motion capture, rotoscopy, and finally freeform animation. The completed artefact and its findings are explored first through a perception study and then a production study. This thesis is based on the investigation and discourse of observed emotional behaviour surrounding the use of animation; specifically, the direct study of the observation of emotional behaviour through the application of animation as a research tool. It aims to provide a basis of discussion and a contribution to knowledge for animation practitioners, theorists and practitioner-researchers seeking to use less performative and exaggerated forms.

    Multisensory learning in adaptive interactive systems

    The main purpose of my work is to investigate multisensory perceptual learning and sensory integration in the design and development of adaptive user interfaces for educational purposes. To this aim, starting from recent findings in neuroscience and cognitive science on multisensory perceptual learning and sensory integration, I developed a theoretical computational model for designing multimodal learning technologies that takes these results into account. The main theoretical foundations of my research are multisensory perceptual learning theories and research on sensory processing and integration, embodied cognition theories, computational models of non-verbal and emotion communication in full-body movement, and human-computer interaction models. Finally, the computational model was applied in two case studies, based on two EU ICT-H2020 projects, "weDRAW" and "TELMI", on which I worked during the PhD.

    Lost in projection – Implicit features experience of 3D architectural forms and their projections

    The aim of our study was to investigate whether the experience of objects’ implicit features changes depending on whether we observe them as real 3D objects or as photographs or drawings. In our experiment, 46 participants rated their impressions of 10 objects shown in four different presentations. As stimuli, we used 3D objects, their virtual reality (VR) models, and photographs and drawings from four different viewing directions, created by architecture students. As a measure of implicit features experience we used 12 bipolar adjectives grouped into four factors (attractiveness, regularity, arousal, and calmness) and 3 adjectives forming an aesthetic experience factor. Results show significant differences between types of object presentation on the four factors of implicit features experience, but not on the aesthetic experience factor. Real 3D objects were experienced as more attractive and calm, while the VR presentation elicited lower arousal than the other presentation types. On regularity, VR and real 3D objects were experienced as equally regular, and as more regular than drawings and photographs.

    ICS Materials. Towards a re-interpretation of material qualities through interactive, connected, and smart materials.

    The domain of materials for design is changing under the influence of increasing technological advancement, miniaturization, and democratization. Materials are becoming connected, augmented, computational, interactive, active, responsive, and dynamic. These are ICS Materials, an acronym that stands for Interactive, Connected, and Smart. While labs around the world are experimenting with these new materials, there is a need to reflect on their potential and impact on design. This paper is a first step in that direction: to interpret and describe the qualities of ICS Materials, considering their experiential pattern, their expressive-sensorial dimension, and their aesthetics of interaction. Through case studies, we analyse and classify these emerging ICS Materials, identifying common characteristics and challenges, e.g. the ability to change over time or programmability by designers and users. On that basis, we argue that existing models need to be reframed and redesigned to describe ICS Materials, so that their qualities can emerge.