    Tactons: structured tactile messages for non-visual information display

    Tactile displays are now becoming available in a form that can easily be used in a user interface. This paper describes a new form of tactile output. Tactons, or tactile icons, are structured, abstract messages that can be used to communicate information non-visually. A range of parameters can be used for Tacton construction, including the frequency, amplitude and duration of a tactile pulse, plus other parameters such as rhythm and location. Tactons have the potential to improve interaction in a range of areas, particularly where the visual display is overloaded, limited in size or not available, such as interfaces for blind people or mobile and wearable devices. This paper describes Tactons, the parameters used to construct them and some possible ways to design them. Examples of where Tactons might prove useful in user interfaces are given.
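    The construction parameters the abstract lists (frequency, amplitude, duration, rhythm, location) can be pictured as a small data structure. The sketch below is purely illustrative and not taken from the paper; the class names, units and example values are assumptions.

    ```python
    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class TactilePulse:
        """A single vibrotactile pulse; units here are illustrative."""
        frequency_hz: float   # vibration frequency of the pulse
        amplitude: float      # normalised intensity, 0.0-1.0
        duration_ms: int      # pulse length in milliseconds

    @dataclass
    class Tacton:
        """A structured tactile message: a rhythm of pulses at a body location."""
        location: str                                   # e.g. "wrist", "back"
        rhythm: List[TactilePulse] = field(default_factory=list)

        def total_duration_ms(self) -> int:
            """Total playing time of the Tacton's rhythmic pattern."""
            return sum(p.duration_ms for p in self.rhythm)

    # A hypothetical "new message" Tacton: two short pulses then a long one.
    new_message = Tacton(
        location="wrist",
        rhythm=[
            TactilePulse(frequency_hz=250, amplitude=0.6, duration_ms=100),
            TactilePulse(frequency_hz=250, amplitude=0.6, duration_ms=100),
            TactilePulse(frequency_hz=250, amplitude=0.9, duration_ms=400),
        ],
    )
    # new_message.total_duration_ms() -> 600
    ```

    Varying any one parameter (e.g. rhythm or location) while holding the others fixed is one way such abstract messages could be distinguished from each other.
    
    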

    Music and Speech in Auditory Interfaces: When is One Mode More Appropriate Than the Other?

    A number of experiments carried out using non-speech auditory interfaces are reviewed, and the advantages and disadvantages of each are discussed. The possible advantages of using non-speech audio media such as music are discussed: the richness of the representations possible, their aesthetic appeal, and the potential of such interfaces to handle abstraction and consistency across the interface.

    Impact of haptic 'touching' technology on cultural applications

    No abstract available

    Communicating graphical information to blind users using music: the role of context

    We describe the design and use of AUDIOGRAPH, a tool for investigating the use of music in the communication of graphical information to blind and partially sighted users. This paper examines the use of the system to communicate complex diagrams and gives some examples of user output. Performance is not as good as expected, and it is postulated that context plays an important part in the perception of diagrams communicated using music. A set of experiments is reported which indicates that context does indeed seem to play an important role in assisting meaningful understanding of the communicated diagrams. The implications for using music in auditory interface design are discussed.
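    The abstract does not specify AUDIOGRAPH's actual encoding, but a common approach in auditory-graph work is to map a diagram's vertical coordinate to pitch and its horizontal coordinate to time. The sketch below is an illustrative assumption, not the AUDIOGRAPH implementation; the function name, pitch range and timing constant are invented for the example.

    ```python
    # Illustrative sketch: sonifying 2-D diagram points as a note sequence.
    # Higher y -> higher pitch; x determines when the note sounds.

    def point_to_note(x: int, y: int, base_pitch: int = 48, pitch_range: int = 24,
                      height: int = 100) -> tuple:
        """Map a diagram point to (MIDI pitch, onset time in ms).

        base_pitch/pitch_range span two octaves from C3 (assumed values);
        10 ms per horizontal unit is likewise an assumption.
        """
        pitch = base_pitch + round(y / height * pitch_range)
        onset_ms = x * 10
        return (pitch, onset_ms)

    # Trace a rising diagonal line: the listener hears a steadily climbing melody.
    line = [(x, x) for x in range(0, 101, 25)]
    melody = [point_to_note(x, y) for x, y in line]
    # melody[0]  -> (48, 0)      start of the line, lowest pitch
    # melody[-1] -> (72, 1000)   end of the line, two octaves higher
    ```

    The paper's finding that context matters suggests such a mapping alone is not enough: the same rising contour could be a line, a triangle edge, or part of a curve, so listeners benefit from knowing what kind of diagram they are hearing.
    
    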

    The Use of Audio in Minimal Access Surgery

    In minimal access surgery (MAS), also known as minimally invasive surgery, operations are carried out by making small incisions in the skin and inserting special apparatus into potential body cavities through those incisions. Laparoscopic MAS procedures are conducted in the patient’s abdomen. The aims of MAS are faster recovery, shorter hospitalisation and fewer major post-operative complications, all resulting in lower societal cost and better patient acceptability. The technique is markedly dependent on supporting technologies for vision, instrumentation, energy delivery, anaesthesia and monitoring. In practice, however, much MAS continues to take longer and to be associated with an undesirable frequency of unwanted minor (or occasionally major) mishaps. Many of these difficulties result precisely from the complexity and mal-adaptation of the additional technology and from lack of familiarity with it. A survey of surgeons in South East England showed the two main stress factors on surgeons to be the technical difficulty of the procedure and the time pressures placed on the surgeon by third parties. Many of the problems associated with MAS operations are linked to the control and monitoring of the equipment. This paper describes work begun to explore ergonomic enhancements to laparoscopic operating technology that could result in faster and safer laparoscopic operations, less surgeon stress and reduced dependence on ancillary staff. Auditory displays have been used to communicate complex information to users in a modality that is complementary to the visual channel. This paper proposes the development of a control and feedback system that will use auditory displays to increase the amount of information that can be communicated to the surgeon and assistant without overloading the visual channel. Control of the system would be enhanced by the addition of voice input to allow the surgeon direct control.

    Embodied Musical Interaction

    Music is a natural partner to human-computer interaction, offering tasks and use cases for novel forms of interaction. The richness of the relationship between a performer and their instrument in expressive musical performance can provide valuable insight to human-computer interaction (HCI) researchers interested in applying these forms of deep interaction to other fields. Despite the longstanding connection between music and HCI, it is not an automatic one, and its history arguably points to as many differences as it does overlaps. Music research and HCI research both encompass broad issues and utilize a wide range of methods. In this chapter I discuss how the concept of embodied interaction can be one way to think about music interaction. I propose that the three “paradigms” of HCI and three design accounts from the interaction design literature can serve as a lens through which to consider types of music HCI. I use this conceptual framework to discuss three different musical projects: Haptic Wave, Form Follows Sound, and BioMuse.

    Haptic Wave

    We present the Haptic Wave, a device that allows cross-modal mapping of digital audio to the haptic domain, intended for use by audio producers/engineers with visual impairments. We describe a series of participatory design activities adapted to non-sighted users, in which the act of prototyping facilitates dialogue. A series of workshops scoping user needs and testing a technology mock-up and low-fidelity prototype fed into the design of a final high-specification prototype. The Haptic Wave was tested in the laboratory, then deployed in real-world settings in recording studios and audio production facilities. The cross-modal mapping is kinesthetic and allows the direct manipulation of sound without the translation of an existing visual interface. The research gleans insight into working with users with visual impairments, transforming our perspective to regard them as experts in non-visual interfaces for all users. This work received the Best Paper Award at CHI 2016, the most prestigious human-computer interaction conference and one of the top-ranked conferences in computer science.

    Analysis on Using Synthesized Singing Techniques in Assistive Interfaces for Visually Impaired to Study Music

    Tactile and auditory senses are the principal means by which visually impaired people sense the world, and their interaction with assistive technologies likewise focuses mainly on tactile and auditory interfaces. This paper discusses the validity of using the most appropriate singing-synthesis techniques as a mediator in assistive technologies built specifically to address their music-learning needs involving music scores and lyrics. Music scores with notation and lyrics are considered the main mediators in the musical communication channel that lies between a composer and a performer. Visually impaired music lovers have little opportunity to access this mediator, since most scores exist only in visual format. In a music score, the vocal performer’s melody is realised as singing. Singing is best suited to a temporal format, compared with a tactile format in the spatial domain. Therefore, converting the existing visual format to a singing output would be the most appropriate lossless transition, as suggested by the initial research on an adaptive music score trainer for the visually impaired [1]. To extend that initial research, this study examines existing singing-synthesis techniques and research on auditory interfaces.