
    Investigating sound intensity gradients as feedback for embodied learning

    This paper explores an intensity-based approach to sound feedback in systems for embodied learning. We describe a theoretical framework, design guidelines, and the implementation of and results from an informant workshop. The specific context of embodied activity is considered in light of the challenges of designing meaningful sound feedback, and a design approach is shown to be a generative way of uncovering significant sound design patterns. The exploratory workshop offers preliminary directions and design guidelines for using intensity-based ambient sound display in interactive learning environments. The value of this research is in its contribution towards the development of a cohesive and ecologically valid model for using audio feedback in systems that can guide embodied interaction. The approach presented here suggests ways that multi-modal auditory feedback can support interactive collaborative learning and problem solving
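The intensity-gradient idea described above (ambient sound feedback whose loudness tracks a learner's progress toward a goal) can be illustrated with a minimal sketch. The function name, gain range, and linear falloff below are illustrative assumptions, not the authors' implementation:

```python
def intensity_for_distance(distance, max_distance, min_gain=0.05, max_gain=1.0):
    """Map a learner's distance-to-target onto a sound intensity (gain)
    gradient: the closer to the target, the louder the ambient feedback."""
    # Clamp the normalized distance to [0, 1] so the gain stays in range.
    d = max(0.0, min(distance / max_distance, 1.0))
    # Linear gradient: gain falls off as distance grows.
    return max_gain - d * (max_gain - min_gain)

# A mover approaching the target hears steadily rising intensity.
gains = [intensity_for_distance(d, max_distance=10.0) for d in (10.0, 5.0, 0.0)]
```

In practice such a gradient could drive any ambient parameter (volume, tempo, timbral brightness); a linear ramp is simply the easiest mapping to reason about.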


    Making sense of group interaction in an ambient intelligent environment for physical play

    This paper presents the results of a study on group interaction with a prototype known as socio-ec(h)o. socio-ec(h)o explores the design of sensing and display, user modeling, and interaction in an embedded interaction system utilizing a game structure. Our study involved the playing of our prototype system by thirty-six (36) participants grouped into teams of four (4). Our aim was to determine heuristics that we could use to further design the interaction and user-model approaches for group and embodied interaction systems. We analyzed group interaction and performance based on factors of team cohesion and goal focus. We found that with our system, these factors alone could not explain performance. However, when transitions in the degrees of each factor (i.e., high, medium, or low) are considered, a clearer picture of performance emerges. The significance of the results is that they describe recognizable factors for positive group interaction
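The transition analysis described above (tracking shifts between high, medium, and low degrees of a factor such as team cohesion across a session) might be summarized as follows; the level encoding and function are hypothetical illustrations, not the study's actual analysis code:

```python
from collections import Counter

LEVELS = {"low": 0, "medium": 1, "high": 2}

def transition_counts(ratings):
    """Count upward, downward, and flat transitions in a sequence of
    high/medium/low ratings of a group factor (e.g. team cohesion)."""
    counts = Counter()
    for a, b in zip(ratings, ratings[1:]):
        delta = LEVELS[b] - LEVELS[a]
        counts["up" if delta > 0 else "down" if delta < 0 else "flat"] += 1
    return dict(counts)

# A team whose cohesion rises, dips, then recovers:
summary = transition_counts(["low", "medium", "high", "medium", "high"])
```

Summaries like this make the paper's point concrete: two teams with the same average cohesion can have very different transition profiles.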

    AmbientSonic Map: Towards a new conceptualization of sound design for games

    This paper presents an overview of the main features of sound design for games and argues for a new conceptualization of it, beginning with a closer look at the role of sound as feedback for gameplay. The paper then proposes and details a new approach to sound feedback in games, which provides an ambient, intensity-based sonic display that not only responds to, but also guides, the player towards the solution of the game. A pilot study and its leading outcomes are presented, in the hope of laying a foundation for future investigations into this type of sonic feedback


    Understanding Aural Fluency in Auditory Display Design for Ambient Intelligent Environments

    Presented at the 14th International Conference on Auditory Display (ICAD2008) on June 24-27, 2008 in Paris, France. This paper presents the design and some evaluation results from the auditory display model of an ambient intelligent game named socio-ec(h)o. socio-ec(h)o is played physically by a team of four, and displays information via a responsive environment of light and sound. Based on a study of 56 participants involving both qualitative and preliminary quantitative analysis, we present our findings to date as they relate to the auditory display model, future directions, and implications. Based on our design and evaluation experience, we begin building a theoretical understanding of the unique requirements of informative sonic displays in ambient intelligent and ubiquitous computing systems. We develop and discuss the emerging research concept of aural fluency in ambient intelligent settings

    Embodied Cognition In Auditory Display

    Presented at the 19th International Conference on Auditory Display (ICAD2013) on July 6-9, 2013 in Lodz, Poland. This paper makes a case for the use of an embodied cognition framework, based on embodied schemata and cross-domain mappings, in the design of auditory display. An overview of research that relates auditory display with embodied cognition is provided to support such a framework. The paper then describes research efforts towards the development of this framework. By designing to support human cognitive competencies that are bound up with meaning making, we hope to open the door to the creation of more meaningful and intuitive auditory displays

    Sensing Super-Position: Human Sensing Beyond the Visual Spectrum

    The coming decade of fast, cheap, and miniaturized electronics and sensory devices opens new pathways for the development of sophisticated equipment to overcome the limitations of the human senses. This paper addresses the technical feasibility of augmenting human vision through Sensing Super-position, which mixes augmented input with natural human sensing. The current implementation of the device translates signals from visual and other passive or active sensory instruments into sounds, which become relevant when the visual resolution is insufficient for very difficult and particular sensing tasks. A successful Sensing Super-position meets many human and pilot-vehicle system requirements. The system can be further developed into a cheap, portable, and low-power device, taking into account the limited capabilities of the human user as well as the typical characteristics of the user's dynamic environment. The system operates in real time, giving the desired information for the particular augmented sensing tasks. The Sensing Super-position device increases perceived image resolution, which is obtained via an auditory representation alongside the visual representation. Auditory mapping is performed to distribute an image in time. The three-dimensional spatial brightness and multi-spectral maps of a sensed image are processed using real-time image processing techniques (e.g., histogram normalization) and transformed into a two-dimensional map of an audio signal as a function of frequency and time. This paper details the approach of developing Sensing Super-position systems as a way to augment the human vision system by exploiting the capabilities of the human hearing system as an additional neural input. The human hearing system is capable of learning to process and interpret extremely complicated and rapidly changing auditory patterns. These known capabilities of the human hearing system to learn and understand complicated auditory patterns provided the basic motivation for developing an image-to-sound mapping system.
The human brain is superior to most existing computer systems at rapidly extracting relevant information from blurred, noisy, and redundant images. From a theoretical viewpoint, this means that the available bandwidth is not exploited in an optimal way. While image-processing techniques can manipulate, condense, and focus the information (e.g., via Fourier transforms), keeping the mapping as direct and simple as possible may also reduce the risk of accidentally filtering out important clues. After all, a perfectly non-redundant sound representation is especially prone to losing relevant information in the imperfect human hearing system. Also, a complicated non-redundant image-to-sound mapping may well be far more difficult to learn and comprehend than a straightforward mapping, while the mapping system would increase in complexity and cost. This work will demonstrate some basic information processing for optimal information capture in head-mounted systems
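The column-by-column image-to-sound mapping this abstract describes (an image distributed over time, rows mapped to frequencies, brightness to amplitude) can be sketched directly. Everything below is an illustrative assumption: min-max scaling stands in for the histogram normalization mentioned, and the frequency range and sample rate are arbitrary choices, not values from the paper:

```python
import numpy as np

def image_to_sound(image, duration=1.0, sr=8000, f_lo=200.0, f_hi=2000.0):
    """Direct image-to-sound mapping: columns are scanned left-to-right
    over time, rows map to sinusoid frequencies, and pixel brightness
    sets each sinusoid's amplitude."""
    img = image.astype(np.float64)
    # Simple min-max normalization (a stand-in for histogram normalization).
    img = (img - img.min()) / (np.ptp(img) + 1e-12)
    rows, cols = img.shape
    freqs = np.linspace(f_lo, f_hi, rows)[::-1]     # low pitch = bottom row
    samples_per_col = int(duration * sr / cols)
    t = np.arange(samples_per_col) / sr
    tones = np.sin(2 * np.pi * freqs[:, None] * t)  # one sinusoid per row
    # Each column becomes a brightness-weighted chord of the row sinusoids.
    signal = np.concatenate([img[:, c] @ tones for c in range(cols)])
    return signal / (np.abs(signal).max() + 1e-12)  # normalize to [-1, 1]

# A tiny horizontal gradient becomes a one-second loudness sweep.
demo = np.tile(np.linspace(0.0, 255.0, 8), (4, 1))
wave = image_to_sound(demo)
```

Keeping the mapping this direct matches the abstract's argument: a simple, somewhat redundant mapping is easier to learn and less likely to filter out important clues than a heavily optimized encoding.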