
    The design of sonically-enhanced widgets

    This paper describes the design of user-interface widgets that include non-speech sound. Previous research has shown that adding sound can improve the usability of human-computer interfaces, but there is little research showing where sound is best added. The approach described here is to integrate sound into widgets, the basic components of the human-computer interface, and an overall structure for this integration is presented. Current graphical widgets suffer from many problems, and many of these are difficult to correct with more graphics alone. This paper presents many of the standard graphical widgets, describes their usability problems in detail, and then presents the non-speech sounds, called earcons, designed to overcome them. These sonically-enhanced widgets allow designers who are not sound experts to create interfaces with coherent, consistent sounds that effectively improve usability.
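
    As a concrete illustration of the integration structure described above (not the paper's own code), the sketch below shows one plausible way a widget could carry earcons for its interaction events. The Earcon fields, event names, and parameter values are assumptions for illustration only.

    from dataclasses import dataclass

    @dataclass
    class Earcon:
        """A short structured tone: timbre, pitch (Hz), rhythm (note lengths in s)."""
        timbre: str
        pitch: float
        rhythm: tuple

        def play(self):
            # Placeholder: a real system would hand this to a synthesiser or MIDI out.
            print(f"play {self.timbre} at {self.pitch} Hz, rhythm {self.rhythm}")

    class SonicButton:
        """A button that gives consistent auditory feedback alongside its graphics."""
        def __init__(self, label, press_earcon, slip_earcon):
            self.label = label
            self.press_earcon = press_earcon   # confirms a successful press
            self.slip_earcon = slip_earcon     # warns that the press missed

        def on_mouse_up(self, inside_bounds: bool):
            (self.press_earcon if inside_bounds else self.slip_earcon).play()

    ok = Earcon("marimba", 523.25, (0.1, 0.1))
    miss = Earcon("marimba", 261.63, (0.3,))
    btn = SonicButton("OK", ok, miss)
    btn.on_mouse_up(inside_bounds=False)   # pointer slipped off: warning earcon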

    Correcting menu usability problems with sound

    Future human-computer interfaces will use more than just graphical output to display information. In this paper we suggest that sound and graphics together can improve interaction. We describe an experiment to improve the usability of standard graphical menus by adding sound. One common difficulty is slipping off a menu item by mistake when trying to select it; one cause of this is insufficient feedback. We designed and experimentally evaluated a new set of menus with much more salient audio feedback to solve this problem. The results showed a significant reduction in the subjective effort required to use the new sonically-enhanced menus, along with significantly reduced error-recovery times. Significantly more errors were also corrected when sound was present.
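
    The slip-off problem lends itself to a small sketch: the item armed on mouse-down differs from the item under the pointer on mouse-up, and a salient earcon can flag the mismatch. This is a hypothetical reconstruction, not the menus evaluated in the experiment; play_earcon and the sound names are invented.

    def play_earcon(name):
        print(f"earcon: {name}")  # stand-in for real audio output

    class SonicMenu:
        def __init__(self, items):
            self.items = items
            self.armed = None  # item highlighted on mouse-down

        def mouse_down(self, item):
            self.armed = item
            play_earcon("item-armed")          # quiet confirmation

        def mouse_up(self, item_under_pointer):
            if item_under_pointer == self.armed:
                play_earcon("item-selected")   # selection succeeded
            else:
                play_earcon("slip-off")        # salient warning: selection missed
            self.armed = None

    menu = SonicMenu(["Open", "Save", "Quit"])
    menu.mouse_down("Save")
    menu.mouse_up("Quit")   # slipped onto a neighbouring item -> warning sound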

    Foodservice at events as a strategy for sustainable food consumption. Workshop held as a part of the conference: Joint Actions on Climate Change, Aalborg Congress & Culture Centre, Denmark, June 9th-10th, 2009.

    As part of the Joint Actions on Climate Change conference, held on June 9th and 10th 2009 in the Aalborg Congress & Culture Centre, Denmark, the Danish iPOPY group organized a workshop on “Food service at events as a strategy for sustainable food consumption.” The workshop was part of the conference’s fourth theme, Governance & Climate Mitigation, and was held on June 10th from 11.30 to 13.00 as part of the iPOPY project (innovative Public Organic food Procurement for Youth). The iPOPY project (2007-2010) is one of eight transnational pilot projects funded by the CORE Organic funding body network within the context of the European Research Area. The papers and PowerPoint presentations from the event are presented in this publication along with the conclusions from the workshop. Thanks to Mia Brandhøj and Sofie Husby for coordinating, to Niels H. Kristensen for moderating the workshop, and to the presenters for making their papers and presentations available.

    The design and evaluation of a sonically enhanced tool palette

    This paper describes an experiment investigating the effectiveness of adding sound to tool palettes. Palettes have usability problems because users need to see the information they present, yet palettes often sit outside the area of visual focus. We used non-speech sounds called earcons to indicate the current tool and to signal tool changes, so that users could tell which tool was active wherever they were looking. Results showed a significant reduction in the number of tasks performed with the wrong tool: users knew what the current tool was and did not try to perform tasks with the wrong one. These gains came without making the tool palettes any more annoying to use.

    Using non-speech sounds to provide navigation cues

    This article describes three experiments that investigate the possibility of using structured non-speech audio messages called earcons to provide navigational cues in a menu hierarchy. A hierarchy of 27 nodes and 4 levels was created with an earcon for each node, and rules were defined for creating a hierarchical earcon at each node. Participants had to identify their location in the hierarchy by listening to an earcon. Results of the first experiment showed that participants could identify their location with 81.5% accuracy, indicating that earcons are a powerful method of communicating hierarchy information. One proposed use for such navigation cues is in telephone-based interfaces (TBIs), where navigation is a problem. The first experiment did not address the particular problems of earcons in TBIs, such as: does the lower quality of sound over the telephone reduce recall rates? Can users remember earcons over a period of time? And what effect does training type have on recall? A second experiment showed that telephone sound quality did lower the recall of earcons; however, redesigning the earcons overcame this problem, with 73% recalled correctly, and participants could still recall earcons at this level after a week had passed. Training type also affected recall: with personal training participants recalled 73% of the earcons, but with purely textual training results were significantly lower. These results show that earcons can provide good navigation cues for TBIs. The final experiment used compound, rather than hierarchical, earcons to represent the hierarchy from the first experiment; with sounds constructed in this way, participants could recall 97% of the earcons. These experiments have developed our general understanding of earcons: a hierarchy three times larger than any previously created was tested, and this was also the first test of the recall of earcons over time.
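
    The construction of hierarchical earcons can be sketched as follows: each node's earcon is its parent's earcon plus one new distinguishing motif, so hearing a sound reveals the path from the root. This is a schematic reading of the rules mentioned above, with invented motif names; it is not the article's actual sound design.

    def build_earcons(tree, inherited=(), earcons=None, path="root"):
        """tree: {motif: subtree} nested dicts; returns {node path: motif sequence}."""
        if earcons is None:
            earcons = {path: inherited}
        for motif, subtree in tree.items():
            node_path = f"{path}/{motif}"
            earcons[node_path] = inherited + (motif,)   # parent's sound + one new motif
            build_earcons(subtree, earcons[node_path], earcons, node_path)
        return earcons

    # A tiny 2-level example; the article's hierarchy had 27 nodes over 4 levels.
    hierarchy = {"flute-rising": {"drum-short": {}, "bell-long": {}},
                 "organ-falling": {"drum-short": {}}}
    for node, motifs in build_earcons(hierarchy).items():
        print(node, "->", motifs)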

    A Semantic Web Annotation Tool for a Web-Based Audio Sequencer

    Music and sound have a rich semantic structure that is clear to the composer and the listener but remains mostly hidden from computing machinery. Nevertheless, in recent years the introduction of software tools for music production has enabled new opportunities for migrating this knowledge from humans to machines. A new generation of these tools may exploit the coupling of sound samples and semantic information to create not only a musical but also a "semantic" composition. In this paper we describe an ontology-driven content annotation framework for a web-based audio editing tool. In a supervised approach, the graphical web interface allows the user to annotate any part of the composition during editing with concepts from publicly available ontologies. As a test case, we developed a collaborative web-based audio sequencer that provides users with the functionality to remix audio samples from the Freesound website and subsequently annotate them. The annotation tool can load any ontology and thus gives users the opportunity to augment the work with annotations on the structure of the composition, the musical materials, and the creator's reasoning and intentions. We believe this approach will provide several novel ways to make not only the final audio product, but also the creative process, first-class citizens of the Semantic Web.
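
    To make the annotation idea concrete, here is a minimal sketch, assuming the rdflib library, of attaching an ontology concept to a time region of a remixed sample. The ex: vocabulary, property names, and identifiers are illustrative assumptions, not the tool's actual schema.

    from rdflib import Graph, Literal, Namespace, RDF, URIRef
    from rdflib.namespace import XSD

    EX = Namespace("http://example.org/audio-annotation#")   # hypothetical vocabulary

    g = Graph()
    g.bind("ex", EX)

    sample = URIRef("https://freesound.org/s/12345/")        # illustrative sample URI
    region = URIRef("http://example.org/composition/region/1")

    g.add((region, RDF.type, EX.AudioRegion))
    g.add((region, EX.sourceSample, sample))
    g.add((region, EX.startSeconds, Literal(2.5, datatype=XSD.decimal)))
    g.add((region, EX.endSeconds, Literal(4.0, datatype=XSD.decimal)))
    # The user-chosen concept from a loaded ontology (URI invented for illustration):
    g.add((region, EX.annotatedWith, URIRef("http://example.org/ontology/music#Chorus")))

    print(g.serialize(format="turtle"))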

    Improving the dining experience in schools

    This resource was originally developed and produced by the Health Promotion Agency for Northern Ireland as part of the School food: top marks programme. "This guidance booklet brings together ideas and suggestions for your school to help improve the experience pupils have at lunch time." (Introduction)