1,436 research outputs found

    Virtual worlds for education: methodology, interaction and evaluation

    2011 - 2012
    When students arrive in the classroom they expect to be involved in immersive, fun and challenging learning experiences, and there is a high risk that they quickly become bored by traditional instructional methods. Technological evolution offers a great variety of sophisticated interactive devices and applications that can be combined with innovative learning approaches to enhance study efficiency during the learning process. 3D immersive multi-user Virtual Worlds (VWs) are becoming increasingly popular and accessible to the wide public thanks to advances in computational power, graphics and network bandwidth, together with reduced costs. As a consequence, it is possible to offer more engaging user experiences. This is particularly true in the learning sector, where interest is rising worldwide in three-dimensional (3D) VWs and in the new interaction modalities to which young digital natives are accustomed. Research on the educational value of VWs has revealed their potential as learning platforms. However, further studies are needed to assess their effectiveness, satisfaction and social engagement, not only in the general didactic use of the environment but also for each specific learning subject, activity and modality. The main challenge is to exploit VW features well and to determine learning approaches and interaction modalities in which the didactic actions add value with respect to traditional education. Indeed, educational VW activities are evolving from early ones based only on information display towards simulated laboratories and new interaction modalities. The main objective of this thesis is to propose new learning methodologies in Virtual Worlds, also experimenting with new interaction modalities and evaluating the effectiveness of the support provided. To this aim we first investigate how effectively a 3D city-building game supports the learning of waste disposal practices and promotes behavior change. The game is one of the results of a research project funded by Regione Campania and is addressed to primary school children. A deep analysis of the didactic methodologies adopted worldwide has been performed to propose a reputation-based learning approach built on collaborative, competitive and individual activities, and the effectiveness of the proposed approach has been evaluated. The didactic opportunities offered by VWs when considering new interaction approaches are also investigated. While for the last four decades the keyboard and mouse have been the primary means of interacting with computers, the recent availability of greater processing power, larger memories, cameras and sensors makes it possible to introduce new interaction modalities into commonly used software. Gestural interfaces offer interaction modalities that primary school children know well and that may also be accepted by older students. To assess the potential of this interaction approach during learning activities we selected Geography as the subject, since students' interest in this topic is decreasing. To this aim we developed GeoFly, a system supporting Geography learning based on a Virtual Globe and on the interaction modalities offered by Microsoft Kinect. GeoFly is designed for elementary school Geography students. It enables the exploration of the World by flying, adopting the bird (or aeroplane) metaphor. It also enables the teacher to create learning trips by associating images, text and videos with specific places, in order to develop learning activities concerning geographically situated scenarios. The proposed approach has been evaluated through a controlled experiment assessing the effect of GeoFly on both the students' attitude towards learning Geography and their knowledge. [edited by author]
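
    The abstract describes GeoFly's interaction only at the level of its flying metaphor. As a rough illustration of how such a mapping could look, the sketch below converts one Kinect-style skeleton frame into flight commands for a virtual globe camera; the joint names, gains and the CameraCommand structure are assumptions made for this example, not GeoFly's actual implementation.

```python
# Hypothetical bird-metaphor gesture mapping in the spirit of GeoFly.
# Joint names, thresholds and gains are illustrative assumptions.

from dataclasses import dataclass

@dataclass
class Joint:
    x: float  # metres, sensor coordinate frame
    y: float
    z: float

@dataclass
class CameraCommand:
    yaw_rate: float    # degrees per second, positive turns right
    pitch_rate: float  # degrees per second, positive climbs
    speed: float       # normalised forward speed in [0, 1]

def bird_metaphor_mapping(skeleton: dict[str, Joint]) -> CameraCommand:
    """Map an outstretched-arm posture to flight controls over a virtual globe."""
    left, right = skeleton["hand_left"], skeleton["hand_right"]
    torso = skeleton["spine"]

    # Rolling the "wings" (one hand higher than the other) steers left or right.
    yaw_rate = 40.0 * (left.y - right.y)

    # Raising or lowering both hands relative to the torso climbs or dives.
    pitch_rate = 30.0 * (((left.y + right.y) / 2.0) - torso.y)

    # A wider arm span means faster flight, clamped to [0, 1].
    span = abs(left.x - right.x)
    speed = max(0.0, min(1.0, span / 1.4))
    return CameraCommand(yaw_rate, pitch_rate, speed)

if __name__ == "__main__":
    frame = {
        "hand_left": Joint(-0.7, 1.3, 2.0),
        "hand_right": Joint(0.7, 1.1, 2.0),
        "spine": Joint(0.0, 1.0, 2.0),
    }
    print(bird_metaphor_mapping(frame))
```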

    A multimodal framework for interactive sonification and sound-based communication


    Modelling the relationship between gesture motion and meaning

    There are many ways to say “Hello,” be it a wave, a nod, or a bow. We greet others not only with words, but also with our bodies. Embodied communication permeates our interactions. A fist bump, thumbs-up, or pat on the back can be even more meaningful than hearing “good job!” A friend crossing their arms with a scowl, turning away from you, or stiffening up can feel like a harsh rejection. Social communication is not exclusively linguistic, but is a multi-sensory affair. It’s not that communication without these bodily cues is impossible, but it is impoverished. Embodiment is a fundamental human experience. Expressing ourselves through our bodies provides a powerful channel through which we convey a plethora of meta-social information. And integral to communication, expression, and social engagement is our use of conversational gesture. We use gestures to express extra-linguistic information, to emphasize our point, and to embody mental and linguistic metaphors that add depth and color to social interaction. Compared to human-human conversation, the gesture behaviour of virtual humans is limited, depending on the approach taken to automate the performances of these characters. The generation of nonverbal behaviour for virtual humans can be approximately classified as either: 1) data-driven approaches that learn a mapping from aspects of the verbal channel, such as prosody, to gestures; or 2) rule-based approaches that are often tailored by designers for specific applications. This thesis is an interdisciplinary exploration that bridges these two approaches and brings data-driven analyses to observational gesture research. By marrying a rich history of gesture research in behavioral psychology with data-driven techniques, this body of work brings rigorous computational methods to gesture classification, analysis, and generation. It addresses how researchers can exploit computational methods to make virtual humans gesture with the same richness, complexity, and apparent effortlessness as you and I. Throughout this work the central focus is on metaphoric gestures. These gestures are capable of conveying rich, nuanced, multi-dimensional meaning, and raise several challenges in their generation, including establishing and interpreting a gesture’s communicative meaning, and selecting a performance to convey it. As such, effectively utilizing these gestures remains an open challenge in virtual agent research. This thesis explores how metaphoric gestures are interpreted by an observer, how one can generate such rich gestures using a mapping between utterance meaning and gesture, and how one can use data-driven techniques to explore the mapping between utterance and metaphoric gestures. The thesis begins in Chapter 1 by outlining the interdisciplinary space of gesture research in psychology and gesture generation in virtual agents. It then presents several studies that address assumptions about the need for rich, metaphoric gestures and the risk of false implicature when gestural meaning is ignored in gesture generation. In Chapter 2, two studies on metaphoric gestures that embody multiple metaphors argue three critical points that inform the rest of the thesis: that people form rich inferences from metaphoric gestures, that these inferences are informed by cultural context, and, more importantly, that any approach to analyzing the relation between utterance and metaphoric gesture needs to take into account that multiple metaphors may be conveyed by a single gesture. A third study presented in Chapter 3 highlights the risk of false implicature and discusses this in the context of current subjective evaluations of the qualitative influence of gesture on viewers. Chapters 4 and 5 then present a data-driven analysis approach to recovering an interpretable, explicit mapping from utterance to metaphor. The approach, described in detail in Chapter 4, clusters gestural motion and relates those clusters to the semantic analysis of the associated utterances. Chapter 5 then demonstrates how this approach can be used both as a framework for data-driven techniques in the study of gesture and as the basis of a gesture generation approach for virtual humans. The framework used in the last two chapters ties together the main themes of this thesis: how we can use observational behavioral gesture research to inform data-driven analysis methods, how embodied metaphor relates to fine-grained gestural motion, and how to exploit this relationship to generate rich, communicatively nuanced gestures on virtual agents. While gestures show huge variation, the goal of this thesis is to start to characterize and codify that variation using modern data-driven techniques. The final chapter reflects on the many challenges and obstacles the field of gesture generation continues to face. The potential for virtual agents to have broad impacts on our daily lives increases with the growing pervasiveness of digital interfaces, technical breakthroughs, and collaborative interdisciplinary research efforts. The thesis concludes with an optimistic vision of applications for virtual agents with deep models of non-verbal social behaviour and their potential to encourage multi-disciplinary collaboration.
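
    The cluster-then-relate analysis of Chapters 4 and 5 is only named in the abstract. A minimal sketch of that general idea, assuming hand-crafted motion features and hypothetical semantic labels rather than the thesis's actual pipeline, could look like this:

```python
# Cluster gesture motion features, then inspect which utterance-level semantic
# labels co-occur with each cluster. Features, labels and cluster count are
# illustrative assumptions, not the pipeline used in the thesis.

from collections import Counter
import numpy as np
from sklearn.cluster import KMeans

# Toy motion features per gesture: [peak wrist height, stroke velocity, lateral extent]
motion = np.array([
    [0.90, 0.40, 0.20],   # high, slow, narrow
    [0.85, 0.50, 0.25],
    [0.20, 1.20, 0.70],   # low, fast, wide
    [0.25, 1.10, 0.80],
    [0.50, 0.30, 0.10],   # mid, slow, narrow
    [0.55, 0.35, 0.15],
])

# Hypothetical semantic label of the co-occurring utterance for each gesture.
semantics = ["importance", "importance", "range", "range", "container", "container"]

clusters = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(motion)

# Relate each motion cluster to the distribution of utterance semantics it covers.
for c in sorted(set(clusters)):
    labels = [semantics[i] for i, k in enumerate(clusters) if k == c]
    print(f"cluster {c}: {Counter(labels).most_common()}")
```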

    Designing From Listening: Embodied Experience and Sonic Interactions

    An understanding of the richness of people’s sonic experience can lead to the creation of novel methods for informing design practices. One of the challenges in Sonic Interaction Design (SID) is to deal with the complexity of the “sonic”: its phenomenology, the interactions it creates, and its social and cultural contexts. To tackle this challenge, this thesis investigates how we can draw upon people’s everyday sonic experience, particularly listening and remembering sound, to design interactions using body movement, digital sound processing and embodied technologies. Firstly, the research analyses how sound has been studied in its phenomenological, cultural and social aspects in fields such as Sound Studies and Embodied Sound Cognition. Secondly, it involves users in the process of designing sonic interactions, with a user study about gestural-sound relationships during active control of digital sound, and a series of participatory design workshops which draw upon people’s sonic experience for imagining interactions with sound. The thesis provides four main contributions. The first is Retro-Active Listening, a concept which draws attention to sounds heard in the past by remembering listening to them. The second is the Sonic Incident, a technique for SID workshops, which allows designers to explore participants’ past experiences of listening. The third is the Gestural Sound Toolkit, which enables designers to rapidly prototype interactive sound mappings based on human movement. The final contribution is three models for designing embodied sonic interactions. These comprise (1) Substitution, in which users’ movements substitute the cause of the sound, (2) Conduction, where users’ movements have a semantic relationship with the sound, and (3) Manipulation, in which users’ movements manipulate the sound. These contributions help to build a framework for design that addresses lesser-explored matters in SID, such as embodiment and contextual aspects of sound, which are potentially relevant for users.
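
    As a hedged illustration of the kind of movement-to-sound mapping described here, the sketch below implements a simple "Manipulation"-style mapping in which movement energy directly manipulates the pitch and loudness of a synthesised tone. The sample rate, scaling and sine-based synthesis are assumptions made for this example, not the actual design of the Gestural Sound Toolkit.

```python
# Illustrative "Manipulation"-style mapping: a movement feature manipulates
# parameters of a synthesised sound. All constants are example assumptions.

import numpy as np

SR = 16_000  # samples per second

def motion_energy(accel: np.ndarray) -> np.ndarray:
    """Per-frame movement energy from a stream of 3-axis accelerometer frames."""
    return np.linalg.norm(accel, axis=1)

def manipulate(energy: np.ndarray, frame_dur: float = 0.05) -> np.ndarray:
    """Map movement energy to the pitch and loudness of a continuous tone."""
    chunks = []
    phase = 0.0
    for e in energy:
        freq = 220.0 + 440.0 * min(e, 1.0)   # more energy -> higher pitch
        amp = 0.2 + 0.8 * min(e, 1.0)        # more energy -> louder
        t = np.arange(int(SR * frame_dur)) / SR
        chunks.append(amp * np.sin(2 * np.pi * freq * t + phase))
        phase += 2 * np.pi * freq * frame_dur  # keep the phase continuous
    return np.concatenate(chunks)

if __name__ == "__main__":
    accel = np.abs(np.random.default_rng(0).normal(0.0, 0.5, size=(40, 3)))
    audio = manipulate(motion_energy(accel))
    print(audio.shape, float(audio.min()), float(audio.max()))
```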

    Interactive Spaces: Natural interfaces supporting gestures and manipulations in interactive spaces

    This doctoral dissertation focuses on the development of interactive spaces through natural interfaces based on gestures and manipulative actions. In the real world people use their senses to perceive the external environment, and they use manipulations and gestures to explore the world around them, communicate and interact with other individuals. From this perspective, natural interfaces that exploit human sensory and exploratory abilities help to fill the gap between the physical and digital worlds. In the first part of this thesis we describe the work carried out to improve interfaces and devices for tangible, multi-touch and free-hand interaction. The idea is to design devices that also work in uncontrolled environments and in situations where control is mostly physical, so that even less experienced users can express their manipulative exploration and gestural communication abilities. We also analyze how these techniques can be combined to create an interactive space specifically designed for teamwork, in which the natural interfaces are distributed in order to encourage collaboration. We then give some examples of how these interactive scenarios can host various types of applications, facilitating, for instance, the exploration of 3D models, the enjoyment of multimedia content and social interaction. Finally, we discuss our results and put them in a wider context, focusing in particular on how the proposed interfaces actually improve people’s lives and activities, and how the interactive spaces become places of aggregation where we can pursue objectives that are both personal and shared with others.

    Gesture in Automatic Discourse Processing

    Computers cannot fully understand spoken language without access to the wide range of modalities that accompany speech. This thesis addresses the particularly expressive modality of hand gesture, and focuses on building structured statistical models at the intersection of speech, vision, and meaning. My approach is distinguished in two key respects. First, gestural patterns are leveraged to discover parallel structures in the meaning of the associated speech. This differs from prior work that attempted to interpret individual gestures directly, an approach that was prone to a lack of generality across speakers. Second, I present novel, structured statistical models for multimodal language processing, which enable learning about gesture in its linguistic context, rather than in the abstract. These ideas find successful application in a variety of language processing tasks: resolving ambiguous noun phrases, segmenting speech into topics, and producing keyframe summaries of spoken language. In all three cases, the addition of gestural features -- extracted automatically from video -- yields significantly improved performance over a state-of-the-art text-only alternative. This marks the first demonstration that hand gesture improves automatic discourse processing.
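
    The abstract does not detail the models themselves. As a loose illustration of the underlying idea, the sketch below combines invented textual and gestural similarity features when scoring whether two noun phrases corefer, and contrasts the result with a text-only baseline; the features, classifier and data are hypothetical and are not the structured models developed in the thesis.

```python
# Toy comparison of text-only vs. text+gesture features for noun phrase coreference.
# All features and data are invented for illustration.

import numpy as np
from sklearn.linear_model import LogisticRegression

# Each row describes a pair of noun phrases:
# [string match, distance in sentences, gesture-hold similarity, hand-position similarity]
X = np.array([
    [1.0, 1, 0.9, 0.8],
    [0.0, 5, 0.1, 0.2],
    [1.0, 2, 0.7, 0.9],
    [0.0, 1, 0.2, 0.1],
    [0.0, 2, 0.8, 0.9],   # no textual match, but gestures placed in the same space
    [1.0, 6, 0.3, 0.2],
])
y = np.array([1, 0, 1, 0, 1, 0])  # 1 = the pair corefers

text_only = LogisticRegression().fit(X[:, :2], y)
multimodal = LogisticRegression().fit(X, y)

pair = np.array([[0.0, 2, 0.85, 0.9]])  # ambiguous text, similar gesture space
print("text-only coref prob:", round(text_only.predict_proba(pair[:, :2])[0, 1], 2))
print("multimodal coref prob:", round(multimodal.predict_proba(pair)[0, 1], 2))
```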

    Multimodal, Embodied and Location-Aware Interaction

    This work demonstrates the development of mobile, location-aware, eyes-free applications which utilise multiple sensors to provide a continuous, rich and embodied interaction. We bring together ideas from the fields of gesture recognition, continuous multimodal interaction, probability theory and audio interfaces to design and develop location-aware applications and embodied interaction in both a small-scale, egocentric body-based case and a large-scale, exocentric `world-based' case. BodySpace is a gesture-based application which utilises multiple sensors and pattern recognition, enabling the human body to be used as the interface for an application. As an example, we describe the development of a gesture-controlled music player, which functions by placing the device at different parts of the body. We describe a new approach to the segmentation and recognition of gestures for this kind of application and show how simulated physical model-based interaction techniques and the use of real-world constraints can shape the gestural interaction. GpsTunes is a mobile, multimodal navigation system equipped with inertial control that enables users to actively explore and navigate through an area in an augmented physical space, incorporating and displaying uncertainty resulting from inaccurate sensing and unknown user intention. The system propagates uncertainty appropriately via Monte Carlo sampling and output is displayed both visually and in audio, with audio rendered via granular synthesis. We demonstrate the use of uncertain prediction in the real world and show that appropriate display of the full distribution of potential future user positions with respect to sites-of-interest can improve the quality of interaction over a simplistic interpretation of the sensed data. We show that this system enables eyes-free navigation around set trajectories or paths unfamiliar to the user, for varying trajectory width and context. We demonstrate the possibility of creating a simulated model of user behaviour, which may be used to gain insight into the user behaviour observed in our field trials. The extension of this application to provide a general mechanism for highly interactive context-aware applications via density exploration is also presented. AirMessages is an example application enabling users to take an embodied approach to scanning a local area to find messages left in their virtual environment.
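
    A minimal sketch of the Monte Carlo uncertainty propagation described for GpsTunes, assuming a straight-line motion model and illustrative noise levels rather than the system's actual sensor models:

```python
# Sample possible current positions and headings from noisy sensing, roll them
# forward, and measure how much of the predicted distribution falls on a site
# of interest. Noise levels and the time horizon are example assumptions.

import numpy as np

rng = np.random.default_rng(42)

def predicted_hit_probability(gps_fix, heading_deg, speed_mps, site, site_radius,
                              horizon_s=10.0, n_samples=2000):
    """Fraction of Monte Carlo futures that end up inside the site of interest."""
    # Sample plausible current positions (GPS noise) and motion parameters.
    pos = rng.normal(gps_fix, 5.0, size=(n_samples, 2))                # ~5 m position noise
    heading = np.deg2rad(rng.normal(heading_deg, 15.0, n_samples))     # heading uncertainty
    speed = np.clip(rng.normal(speed_mps, 0.3, n_samples), 0.0, None)  # walking speed

    # Propagate each sample forward over the horizon (straight-line motion model).
    step = np.stack([np.sin(heading), np.cos(heading)], axis=1) * (speed * horizon_s)[:, None]
    future = pos + step

    # The probability mass landing within the site's radius can drive the audio display.
    dist = np.linalg.norm(future - np.asarray(site), axis=1)
    return float(np.mean(dist < site_radius))

if __name__ == "__main__":
    p = predicted_hit_probability(gps_fix=(0.0, 0.0), heading_deg=40.0, speed_mps=1.4,
                                  site=(12.0, 10.0), site_radius=6.0)
    print(f"probability of reaching the site within 10 s: {p:.2f}")
```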

    • …