
    Moveable worlds/digital scenographies

    The mixed reality choreographic installation UKIYO explored in this article reflects an interest in scenographic practices that connect physical space to virtual worlds and explore how performers can move between material and immaterial spaces. The spatial design for UKIYO is inspired by Japanese hanamichi and western fashion runways, emphasizing the research production company's commitment to creative crossovers between movement languages, innovative wearable design for interactive performance, acoustic and electronic sound processing, and digital image objects that have a plastic as well as an immaterial/virtual dimension. The work integrates various forms of making art in order to visualize things that are not in themselves visual, or which connect visual and kinaesthetic/tactile/auditory experiences. The ‘Moveable Worlds’ in this essay are also reflections of the narrative spaces, subtexts and auditory relationships in the mutating matrix of an installation-space that invites the audience to move around and follow its sensorial experiences, drawn near to the bodies of the dancers.

    ANGELICA : choice of output modality in an embodied agent

    The ANGELICA project addresses the problem of modality choice in information presentation by embodied, humanlike agents. The output modalities available to such agents include both language and various nonverbal signals such as pointing and gesturing. For each piece of information to be presented by the agent, it must be decided whether it should be expressed using language, a nonverbal signal, or both. In the ANGELICA project a model of the different factors influencing this choice will be developed and integrated in a natural language generation system. The application domain is the presentation of route descriptions by an embodied agent in a 3D environment. Evaluation and testing form an integral part of the project. In particular, we will investigate the effect of different modality choices on the effectiveness and naturalness of the generated presentations and on the user's perception of the agent's personality.
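
    As a rough illustration of the kind of decision such a model has to make, the sketch below scores candidate output modalities for a single information unit in a route description. The factor names, weights and thresholds are hypothetical placeholders for illustration only, not the ANGELICA model itself.

        from dataclasses import dataclass

        # Hypothetical factors for one information unit in a route description.
        @dataclass
        class InfoUnit:
            text: str
            is_spatial: bool        # refers to something the agent could point at
            visual_salience: float  # 0..1, how easy the referent is to spot in the 3D scene
            ambiguity: float        # 0..1, how ambiguous a purely verbal reference would be

        def choose_modality(unit: InfoUnit) -> str:
            """Return 'language', 'gesture', or 'both' for one information unit."""
            # Spatial, highly salient referents are good candidates for pointing.
            gesture_score = (1.0 if unit.is_spatial else 0.0) * unit.visual_salience
            # Ambiguous verbal references benefit from a redundant nonverbal signal.
            redundancy_need = unit.ambiguity

            if gesture_score > 0.7 and redundancy_need < 0.3:
                return "gesture"      # pointing alone suffices
            if gesture_score > 0.4:
                return "both"         # speak and point
            return "language"         # plain verbal description

        step = InfoUnit("Turn left at the fountain", is_spatial=True,
                        visual_salience=0.9, ambiguity=0.5)
        print(choose_modality(step))  # -> "both"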

    Spatially augmented audio delivery: applications of spatial sound awareness in sensor-equipped indoor environments

    Current mainstream audio playback paradigms do not take any account of a user's physical location or orientation in the delivery of audio through headphones or speakers. Audio is thus presented as a static percept, even though the audio environment around us is naturally a dynamic 3D phenomenon, and this fails to take advantage of our innate psycho-acoustic perception of sound source locations. Described in this paper is an operational platform which we have built to augment the sound from a generic set of wireless headphones. We do this in a way that overcomes the spatial awareness limitation of audio playback in indoor 3D environments which are both location-aware and sensor-equipped. This platform provides access to an audio-spatial presentation modality which by its nature lends itself to numerous cross-disciplinary applications. In the paper we present the platform and two demonstration applications.
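
    A minimal sketch of the underlying idea follows: given the listener's tracked position and head orientation, a mono source is panned and attenuated per ear. A real platform would use head-related transfer functions and live sensor input; the coordinate conventions and function names here are assumptions for illustration only.

        import numpy as np

        def spatialise(mono, listener_pos, listener_yaw, source_pos):
            """Crude stereo rendering from listener pose and source position."""
            dx = source_pos[0] - listener_pos[0]
            dy = source_pos[1] - listener_pos[1]
            azimuth = np.arctan2(dy, dx) - listener_yaw   # source angle relative to facing direction
            distance = max(np.hypot(dx, dy), 0.5)

            pan = np.sin(azimuth)                 # +1 = fully left, -1 = fully right
            gain = 1.0 / distance                 # simple inverse-distance rolloff
            left = mono * gain * np.sqrt((1 + pan) / 2)
            right = mono * gain * np.sqrt((1 - pan) / 2)
            return np.stack([left, right], axis=1)

        # Example: a 1 kHz tone placed ahead of and to the left of the listener
        sr = 44100
        t = np.linspace(0, 1, sr, endpoint=False)
        tone = 0.2 * np.sin(2 * np.pi * 1000 * t)
        stereo = spatialise(tone, listener_pos=(0.0, 0.0), listener_yaw=0.0, source_pos=(1.0, 2.0))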

    Refining personal and social presence in virtual meetings

    Virtual worlds show promise for conducting meetings and conferences without the need for physical travel. Current experience suggests the major limitation to the more widespread adoption and acceptance of virtual conferences is the failure of existing environments to provide a sense of immersion and engagement, or of ‘being there’. These limitations are largely related to the appearance and control of avatars, and to the absence of means to convey non-verbal cues of facial expression and body language. This paper reports on a study involving the use of a mass-market motion sensor (Kinect™) and the mapping of participant action in the real world to avatar behaviour in the virtual world. This is coupled with full-motion video representation of participants’ faces on their avatars to resolve both identity and facial expression issues. The outcomes of a small-group trial meeting based on this technology show a very positive reaction from participants, and the potential for further exploration of these concepts.
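
    The core of such a real-to-virtual mapping can be sketched in a few lines: tracked skeleton joints are converted into the virtual world's coordinate frame and written onto the corresponding avatar bones each frame. The joint and bone names, the per-frame data format and the axis convention below are illustrative assumptions, not the system described in the paper.

        from typing import Dict, Tuple

        Vec3 = Tuple[float, float, float]

        # Hypothetical mapping from sensor joint names to avatar bone names.
        JOINT_TO_BONE: Dict[str, str] = {
            "head": "Head",
            "shoulder_left": "LeftShoulder",
            "elbow_left": "LeftForeArm",
            "hand_left": "LeftHand",
            "shoulder_right": "RightShoulder",
            "elbow_right": "RightForeArm",
            "hand_right": "RightHand",
        }

        def sensor_to_world(p: Vec3, scale: float = 1.0) -> Vec3:
            """Convert sensor coordinates (metres, sensor-relative) to the
            virtual world's frame; here just a scale and an axis flip."""
            x, y, z = p
            return (x * scale, y * scale, -z * scale)  # sensor z points toward the user

        def update_avatar(avatar: Dict[str, Vec3], joints: Dict[str, Vec3]) -> None:
            """Drive avatar bone positions from the latest tracked frame."""
            for joint, bone in JOINT_TO_BONE.items():
                if joint in joints:
                    avatar[bone] = sensor_to_world(joints[joint])

        avatar_pose: Dict[str, Vec3] = {}
        frame = {"head": (0.0, 1.6, 2.1), "hand_right": (0.3, 1.1, 1.9)}
        update_avatar(avatar_pose, frame)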

    A Content-Analysis Approach for Exploring Usability Problems in a Collaborative Virtual Environment

    As Virtual Reality (VR) products are becoming more widely available in the consumer market, improving the usability of these devices and environments is crucial. In this paper, we introduce a framework for the usability evaluation of collaborative 3D virtual environments based on a large-scale usability study of a mixed-modality collaborative VR system. We first review previous literature about important usability issues related to collaborative 3D virtual environments, supplemented with our research in which we conducted 122 interviews after participants solved a collaborative virtual reality task. Then, building on the literature review and our results, we extend previous usability frameworks. We identified twelve different usability problems, and based on the causes of the problems, we grouped them into three main categories: VR environment-, device interaction-, and task-specific problems. The framework can be used to guide the usability evaluation of collaborative VR environments.
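
    To illustrate the kind of coding step such a content analysis involves, the sketch below tallies coded interview fragments into the paper's three top-level categories. The individual problem codes are placeholders, since the abstract does not enumerate all twelve.

        from collections import Counter
        from typing import Dict, List

        # The three top-level categories come from the paper; the problem codes
        # below are hypothetical examples standing in for the twelve identified problems.
        CATEGORY_OF: Dict[str, str] = {
            "avatar_occlusion": "VR environment",
            "controller_mapping": "device interaction",
            "unclear_goal": "task-specific",
            # ... remaining codes would each map to one of the three categories
        }

        def tally_by_category(coded_interviews: List[List[str]]) -> Counter:
            """Count how often each top-level category appears across interviews."""
            counts: Counter = Counter()
            for codes in coded_interviews:
                for code in codes:
                    counts[CATEGORY_OF.get(code, "uncategorised")] += 1
            return counts

        interviews = [["controller_mapping", "unclear_goal"], ["avatar_occlusion"]]
        print(tally_by_category(interviews))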

    Auditory Experiences in Game Transfer Phenomena:

    This study investigated gamers’ auditory experiences as after-effects of playing. This was done by classifying, quantifying, and analysing 192 experiences from 155 gamers collected from online videogame forums. The gamers’ experiences were classified as: (i) auditory imagery (e.g., constantly hearing the music from the game), (ii) inner speech (e.g., completing phrases in the mind), (iii) auditory misperceptions (e.g., confusing real-life sounds with videogame sounds), and (iv) multisensorial auditory experiences (e.g., hearing music while involuntarily moving the fingers). Gamers heard auditory cues from the game in their heads, in their ears, but also coming from external sources. Occasionally, the vividness of the sound evoked thoughts and emotions that resulted in behaviours and coping strategies. The psychosocial implications of the gamers’ auditory experiences are discussed. This study contributes to the understanding of the effects of auditory features in videogames, and to the phenomenology of non-volitional experiences (e.g., auditory imagery, auditory hallucinations).

    Mixed Reality Architecture: Concept, Construction, Use

    Mixed Reality Architecture (MRA) dynamically links and overlays physical and virtual spaces. This paper investigates the topology of and the relationships between the components of MRA. As a phenomenon, MRA takes its place in a long history of technologies that have influenced conditions for social interaction as well as the environment we build around us. However, by providing a flexible spatial topology spanning physical and virtual environments, it presents new opportunities for social interaction across electronic media. An experimental MRA is described that allowed us to study some of the emerging issues in this field. It provided material for the development of a framework describing virtual and physical spaces, the links between them, and the types of mixed reality structure that could be designed using these elements. We propose that by re-introducing a level of spatiality into communication across physical and virtual environments, MRA will support everyday social interaction, and may convert digital communication media from being socially conservative to a more generative form familiar from physical space.
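
    One way to picture the spatial topology the paper describes is as a graph of physical and virtual spaces joined by audio/video links, as in the sketch below; the class design and space names are illustrative assumptions only, not the framework itself.

        from dataclasses import dataclass, field
        from typing import Dict, List

        @dataclass
        class Space:
            name: str
            kind: str                      # "physical" or "virtual"
            links: List[str] = field(default_factory=list)

        class MixedRealityTopology:
            def __init__(self) -> None:
                self.spaces: Dict[str, Space] = {}

            def add_space(self, name: str, kind: str) -> None:
                self.spaces[name] = Space(name, kind)

            def link(self, a: str, b: str) -> None:
                """Create a bidirectional audio/video link between two spaces."""
                self.spaces[a].links.append(b)
                self.spaces[b].links.append(a)

            def reachable_from(self, start: str) -> List[str]:
                """Spaces a participant in `start` can reach, directly or indirectly."""
                seen, stack = set(), [start]
                while stack:
                    s = stack.pop()
                    if s not in seen:
                        seen.add(s)
                        stack.extend(self.spaces[s].links)
                return sorted(seen - {start})

        topo = MixedRealityTopology()
        topo.add_space("office_314", "physical")
        topo.add_space("shared_courtyard", "virtual")
        topo.add_space("lab_b", "physical")
        topo.link("office_314", "shared_courtyard")
        topo.link("lab_b", "shared_courtyard")
        print(topo.reachable_from("office_314"))   # -> ['lab_b', 'shared_courtyard']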