101 research outputs found

    Tactons: structured tactile messages for non-visual information display

    Tactile displays are now becoming available in a form that can be easily used in a user interface. This paper describes a new form of tactile output. Tactons, or tactile icons, are structured, abstract messages that can be used to communicate information non-visually. A range of parameters can be used for Tacton construction, including the frequency, amplitude and duration of a tactile pulse, plus other parameters such as rhythm and location. Tactons have the potential to improve interaction in a range of different areas, particularly where the visual display is overloaded, limited in size or not available, such as interfaces for blind people or in mobile and wearable devices. This paper describes Tactons, the parameters used to construct them and some possible ways to design them. Examples of where Tactons might prove useful in user interfaces are given.
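
    The construction parameters listed above lend themselves to a simple encoding. The following Python sketch is illustrative only; the class names, example values, and the "new message" pattern are assumptions, not taken from the paper. It represents a Tacton as a rhythmic sequence of tactile pulses delivered at a body location:

    ```python
    from dataclasses import dataclass
    from typing import List

    @dataclass
    class TactilePulse:
        frequency_hz: float  # vibration frequency of the pulse
        amplitude: float     # normalized intensity, 0.0-1.0
        duration_ms: int     # length of the pulse

    @dataclass
    class Tacton:
        """A structured tactile message built from the parameters the paper lists:
        per-pulse frequency, amplitude and duration, plus rhythm (the pulse
        sequence itself) and body location."""
        pulses: List[TactilePulse]
        body_location: str  # e.g. "wrist", "waist"

        def total_duration_ms(self) -> int:
            return sum(p.duration_ms for p in self.pulses)

    # A hypothetical "new message" Tacton: two short pulses, then one long pulse
    new_message = Tacton(
        pulses=[TactilePulse(250, 0.8, 100),
                TactilePulse(250, 0.8, 100),
                TactilePulse(250, 1.0, 400)],
        body_location="wrist",
    )
    print(new_message.total_duration_ms())  # 600
    ```

    Varying rhythm and location while holding frequency constant is one way such abstract messages could be kept distinguishable on low-fidelity actuators.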

    The perceived hazard of earcons in information technology exception messages: The effect of musical dissonance: Working paper series--10-03

    Users of information technology (IT) commonly encounter exception messages during their interactions with application programs, signalling a computing problem or error. Exception messages are often accompanied by earcons, which are aural messages of a musical nature used in the human-computer interface to provide information and feedback about some computer object, operation, or interaction. Utilizing the notion of musical dissonance, earcons were designed that vary in their degree of aural disagreeableness along a rank-order scale. It was hypothesized that, in the context of IT exception messages, earcons with a higher degree of musical dissonance (aural disagreeableness) would be perceived as communicating a higher degree of hazard associated with the underlying computing problem signaled by an exception message. Participants rated the degree of hazard of each earcon, presented in a random order, in an experiment. Results of the data analysis indicate partial support for the hypothesis. The implication is that it may be possible to increase the degree of hazard matching in IT environments by designing earcons that accompany exception messages to communicate different levels of perceived hazard of an underlying computing problem.
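
    The hypothesis above amounts to expecting a monotonic relationship between an earcon's dissonance rank and its mean hazard rating, which a Spearman rank correlation can quantify. The sketch below is a minimal pure-Python check; the rating values are invented for illustration and are not the working paper's data:

    ```python
    from statistics import mean

    def rankdata(values):
        """Assign 1-based ranks; tied values get the mean of their positions."""
        order = sorted(range(len(values)), key=lambda i: values[i])
        ranks = [0.0] * len(values)
        i = 0
        while i < len(order):
            j = i
            while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
                j += 1
            avg = (i + j) / 2 + 1
            for k in range(i, j + 1):
                ranks[order[k]] = avg
            i = j + 1
        return ranks

    def spearman(x, y):
        """Spearman correlation = Pearson correlation of the ranks.
        (Constant inputs are not handled; variance must be nonzero.)"""
        rx, ry = rankdata(x), rankdata(y)
        mx, my = mean(rx), mean(ry)
        cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
        sx = sum((a - mx) ** 2 for a in rx) ** 0.5
        sy = sum((b - my) ** 2 for b in ry) ** 0.5
        return cov / (sx * sy)

    # Hypothetical data: dissonance rank 1 (consonant) .. 5 (most dissonant)
    dissonance_rank = [1, 2, 3, 4, 5]
    mean_hazard = [2.1, 2.8, 3.0, 4.2, 4.0]  # one inversion -> partial support
    print(round(spearman(dissonance_rank, mean_hazard), 3))  # 0.9
    ```

    A coefficient near but below 1, as here, is the kind of pattern a "partial support" result suggests: the ordering largely, but not perfectly, tracks dissonance.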

    Mobile Service Awareness via Auditory Notifications

    Placed within the realms of Human Computer Interaction, this thesis contributes towards the goals of Ubiquitous Computing, where mobile devices can provide anywhere, anytime support to people’s everyday activities. With interconnected computing devices distributed in our habitat, services relevant to any situation may always be available to address our needs. However, despite the enhanced capabilities of mobile phones, users had been reluctant to adopt any services other than calls and messaging. This has been changing more recently, especially since the launch of the iPhone, with users gaining access to hundreds of services. The original question motivating the research presented in this thesis, “How can we improve mobile service usage?”, is in the interest of enthusiasts of mobile services as well as slow adopters. We propose the concept of ‘mobile service awareness’ and operationalise it through the more focused research question: “How can we design for non-intrusive yet informative auditory mobile service notifications?” We design and conduct a series of surveys, laboratory experiments and longitudinal field studies to address this question. Our results, also informed by literature on context-aware computing, awareness, notification systems and auditory interface design, produce two distinct major contributions. First, we provide a set of conclusions on the relative efficiency of auditory icons and earcons as auditory notifications. Second, we produce a set of design guidelines for the two types of notifications, based on the critical evaluation of the methodologies we develop and adapt from the literature. Although these contributions were made with mobile service notification in mind, they are arguably useful for designers of any auditory interface that conveys complex concepts (such as mobile services) and is used in attention-demanding contexts. (EThOS - Electronic Theses Online Service, United Kingdom)

    A Study of User Preferences for In-Vehicle Auditory Signals in the Context of Autonomous Driving

    Master's thesis -- Seoul National University Graduate School, Department of Industrial Engineering, August 2022. Advisor: Myung Hwan Yun. The rise of autonomous technology incorporated into vehicles has allowed the autonomous vehicle to shift its functionality toward that of an interactive system, in which providing interaction and feedback between the user and the system is essential.
    In addition, auditory user interfaces have been used in vehicle technology to reduce cognitive workload and provide information to drivers. However, the autonomous vehicle is still a new technology domain, and it is necessary to investigate what types of in-vehicle signal feedback should be designed for the passenger depending on the context of use and the scenarios involved. In this thesis, the three main research aims are: (1) to present a design proposal for in-vehicle signal feedback for autonomous vehicles based on the passenger’s perspective, (2) to explore the passenger’s sound preferences for in-vehicle signal feedback used in autonomous vehicles, and (3) to suggest a fully derived set of scenarios for designing in-vehicle signal feedback for autonomous vehicles based on a user-centered design process. To achieve these aims, this thesis investigates whether the design of in-vehicle signal types, such as earcons and auditory icons, and the temporal pattern of information signal types affect passengers’ preferences, measured by perceivability, intuitiveness and consistency or appropriateness as an in-vehicle signal. The thesis includes two experiments: a pilot test and a large-scale online sound evaluation study. Prior to the sound-set evaluation, a pilot test was conducted with a total of 13 participants with an average age of 27.23 (±7.53) to investigate whether the auditory sound samples created for the evaluation matched the intended information (confirmatory, error, detection, in progress, alert and warning), and to further develop the scenarios for passengers in the autonomous vehicle context. Two measures were used in the pilot test, perceivability and intuitiveness, to determine whether the designed sound samples with temporal patterns matched the intended information.
    The pilot test was conducted in an acoustic chamber; participants rated the perceivability and intuitiveness of the sound samples on a 7-point Likert scale and completed a multiple-choice survey to select the appropriate scenarios for each sound. The perceivability and intuitiveness data were analyzed using analysis of variance (ANOVA) with Bonferroni-corrected post-hoc tests for multiple comparisons. The results showed that all sound samples were perceivably and intuitively designed with the intended information, except for the in-progress signal, which therefore had to be re-created for this study. Also, of the 27 scenarios developed prior to the pilot study, the study narrowed down 15 essential scenarios in which in-vehicle signal feedback is imperative for autonomous vehicles, based on the passenger’s context. The sound-set evaluation was then conducted online with a total of 125 participants with an average age of 37.15 (±11.4) to investigate which type of sounds they prefer (a mixture of earcons and auditory icons, or a consistent set of earcons or auditory icons), measured by consistency/appropriateness ratings on a 7-point Likert scale. The in-progress sounds were re-created with four parameters (ascending, descending, variated and simple tone) and evaluated with satisfaction measures. The consistency/appropriateness data were analyzed using pairwise t-test comparisons for each sound set, and the in-progress sounds were analyzed using analysis of variance (ANOVA). Lastly, participants’ opinions were collected for qualitative analysis and visualized with text network analysis. Results from the independent-samples t-tests for each scenario showed that users prefer a consistent ‘family’ of sounds rather than a mixture of earcons and auditory icons within a scenario.
    The results for the in-progress sounds also showed that a descending, simple-tone melody had a high satisfaction level. The discussion considers whether the research aims were fulfilled based on the results obtained and adds implications for the sound design. The conclusion also discusses the limitations of this study and future research directions.
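
    The Bonferroni correction used for the post-hoc comparisons above is simple to reproduce: each raw pairwise p-value is multiplied by the number of comparisons and capped at 1.0. The sketch below uses hypothetical comparison names and p-values, not the thesis data:

    ```python
    def bonferroni(p_values):
        """Bonferroni correction: scale each raw p-value by the number of
        comparisons, capping the result at 1.0."""
        m = len(p_values)
        return [min(1.0, p * m) for p in p_values]

    # Hypothetical raw p-values from three pairwise sound-set comparisons
    raw = {
        "earcon_vs_auditory_icon": 0.012,
        "earcon_vs_mixed": 0.004,
        "auditory_icon_vs_mixed": 0.150,
    }
    adjusted = dict(zip(raw, bonferroni(list(raw.values()))))
    significant = {name for name, p in adjusted.items() if p < 0.05}
    print(sorted(significant))  # ['earcon_vs_auditory_icon', 'earcon_vs_mixed']
    ```

    The correction controls the family-wise error rate: a comparison that looks significant at p = 0.012 survives three comparisons (0.036), while one at p = 0.150 does not (0.45).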

    An empirical investigation to examine the usability issues of using adaptive, adaptable, and mixed-initiative approaches in interactive systems

    The combination of a graphical user interface (GUI) and usability evaluation offers an advantage in mastering a piece of software and ensuring the quality of work. The increasing demand for online learning is becoming more important, both individually and academically. This thesis introduces and describes an empirical study that investigates and compares how vocabulary can be learned using different interactive approaches; specifically, a static learning website (with straightforward words and meanings), an adaptable learning website (allowing the user to choose a learning method), an adaptive learning website (a system-chosen way of learning), and a mixed-initiative website (mixing approaches and techniques). The purpose of this study is to explore and determine the effects of these approaches on vocabulary-learning achievement, in order to enhance vocabulary learning for non-English speakers. The participants were Arabic speakers. The vocabulary-learning activities were categorised into three levels: easy, medium, and hard. The independent variables (IVs) were controlled during the experiment to ensure consistency and were as follows: tasks, learning effects, and time. The dependent variables (DVs) were vocabulary-learning achievements and scores. Two aims were explored in relation to the effects of these approaches on achievement. The first related to learning vocabulary for non-English speakers tackling the difficulties of the English language, and the second related to studying the usability of a system for learning English vocabulary in terms of usability measures (efficiency, frequency of error occurrence, effectiveness, and satisfaction). For this purpose, a vocabulary-learning language website was designed, implemented, and tested empirically.
    To fulfill these requirements, it was first necessary to measure two usability components (efficiency and effectiveness) with a within-subject design (n = 24 subjects recruited) and, for users’ satisfaction, a between-subject design (n = 99 subjects recruited), investigating satisfaction with a System Usability Scale (SUS) survey. The results and data analysis are described; overall, the results were satisfactory.
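
    The SUS survey mentioned above has a standard scoring rule: across ten 1-5 Likert items, odd-numbered (positively worded) items contribute (response − 1), even-numbered (negatively worded) items contribute (5 − response), and the sum is scaled by 2.5 onto a 0-100 range. A short Python sketch of that rule; the example responses are hypothetical:

    ```python
    def sus_score(responses):
        """System Usability Scale score (0-100) from ten 1-5 Likert responses.
        Odd-numbered items are positively worded, even-numbered negatively."""
        if len(responses) != 10 or not all(1 <= r <= 5 for r in responses):
            raise ValueError("SUS needs ten responses on a 1-5 scale")
        odd = sum(r - 1 for r in responses[0::2])   # items 1, 3, 5, 7, 9
        even = sum(5 - r for r in responses[1::2])  # items 2, 4, 6, 8, 10
        return (odd + even) * 2.5

    # One hypothetical respondent's answers to the ten SUS items
    print(sus_score([4, 2, 4, 1, 5, 2, 4, 1, 4, 2]))  # 82.5
    ```

    Averaging such per-respondent scores over the between-subject sample gives the satisfaction figure a SUS study typically reports.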

    DESIGN FOUNDATIONS FOR CONTENT-RICH ACOUSTIC INTERFACES: INVESTIGATING AUDEMES AS REFERENTIAL NON-SPEECH AUDIO CUES

    Indiana University-Purdue University Indianapolis (IUPUI). To access interactive systems, blind and visually impaired users can leverage their auditory senses by using non-speech sounds. The current structure of non-speech sounds, however, is geared toward conveying user interface operations (e.g., opening a file) rather than large theme-based information (e.g., a history passage) and, thus, is ill-suited to signify the complex meanings of primary learning material (e.g., books and websites). In order to address this problem, this dissertation introduces audemes, a new category of non-speech sounds, whose semiotic structure and flexibility open new horizons for facilitating the education of blind and visually impaired students. An experiment with 21 students from the Indiana School for the Blind and Visually Impaired (ISBVI) supports the hypothesis that audemes increase the retention of theme-based information. By acting as memory catalysts, audemes can play an important role in enhancing aural interaction and navigation in future sound-based user interfaces. For this dissertation, I designed an Acoustic EDutainment INterface (AEDIN) that integrates audemes as a way to vividly anticipate text-to-speech theme-based information and, thus, to act as innovative aural covers. The results of two iterative usability evaluations with a total of 20 blind and visually impaired participants showed that AEDIN is a highly usable and enjoyable acoustic interface. Yet, designing well-formed audemes remains an ad hoc process because audeme creators can only rely on their intuition to generate meaningful and memorable sounds. In order to address this problem, this dissertation presents three experiments, each with 10 blind and visually impaired participants. The goal was to examine the optimal combination of audeme attributes that can be used to facilitate accurate recognition of audeme meanings.
    This work led to the creation of seven basic guidelines for designing well-formed audemes. An interactive application tool (ASCOLTA: Advanced Support and Creation-Oriented Library Tool for Audemes) operationalized these guidelines to support individuals without an audio background in designing well-formed audemes. An informal evaluation conducted with three teachers from the ISBVI supports the hypothesis that ASCOLTA is a useful tool for facilitating the integration of audemes into the teaching environment.

    Usability engineering of interactive voice response (IVR) systems in oral users of Southern Africa

    Includes bibliographical references (p. 96-109). This research study focuses on the feasibility of using the telephone as a tool for information access in the oral communities of Southern Africa. The OpenPhone and BGR systems are used as case studies, and their designs were informed by field studies with the targeted users. The OpenPhone project aims to design an Interactive Voice Response (IVR) health information system that enables caregivers of children infected with HIV/AIDS to access relevant care-giving information by telephone in their native language of Setswana in Botswana, Southern Africa. The BGR system allows soccer fans to access the results of recently played matches in the Premier Soccer League (PSL) of South Africa.

    The role of edutainment in e-learning: An empirical study.

    Impersonal, non-face-to-face contact and text-based interfaces in the e-Learning segment present major problems for learners, since they miss out on vital personal interactions and useful feedback messages, as well as on real-time information about their learning performance. This research programme suggests a multimodal approach combined with edutainment, which is expected to improve communication between users and e-Learning systems. This thesis empirically investigates users’ effectiveness, efficiency and satisfaction in order to determine the influence of edutainment (e.g. amusing speech and facial expressions) combined with multimodal metaphors (e.g. speech, earcons, avatars, etc.) within e-Learning environments. Besides text, speech, visual and earcon modalities, avatars are incorporated to offer a visual and listening realm in online learning. The methodology used for this research project comprises a literature review and three experimental platforms. The initial experiment serves as a first step towards investigating the feasibility of completing all the tasks and objectives of the research project outlined above. The remaining two experiments explore further the role of edutainment in enhancing e-Learning user interfaces. The overall challenge is to enhance user-interface usability; to improve the presentation of learning in e-Learning systems; to improve user enjoyment; to enhance interactivity and learning performance; and to contribute to developing guidelines for multimodal involvement in the context of edutainment. The results of the experiments presented in this thesis show an improvement in user enjoyment, through satisfaction measurements. In the first experiment, the enjoyment level increased by 11% in the Edutainment (E) platform compared to the Non-edutainment (NE) interface.
    In the second experiment, the Game-Based Learning (GBL) interface obtained a 14% greater enhancement than the Virtual Class (VC) interface and 20.85% more than the Storytelling (ST) interface; in the third experiment, the percentage obtained by the game incorporating avatars increased by an extra 3% compared with the other platforms. In addition, improvements in both user performance and learning retention were detected through effectiveness and efficiency measurements. In the first experiment, there was no significant difference between the mean time values for conditions (E) and (NE) when tested using a t-test. In the second experiment, the time spent in condition (GBL) was higher by 7-10 seconds than in the other conditions. In the third experiment, the mean times taken by the users in all conditions were comparable, with an average of 22.8%. With regard to effectiveness, the findings of the first experiment showed that the mean number of correct answers for condition (E) was generally higher by 20% than the mean for condition (NE). Users in condition (GBL) performed better than users in the other conditions in the second experiment; the percentage of correct answers was higher by 20% and by 34.7% in condition (GBL) than in (VC) and (ST), respectively. Finally, a set of empirically derived guidelines was produced for the design of usable multimodal e-Learning and edutainment interfaces. (Libyan Embassy)

    StateLens: A Reverse Engineering Solution for Making Existing Dynamic Touchscreens Accessible

    Blind people frequently encounter inaccessible dynamic touchscreens in their everyday lives that are difficult, frustrating, and often impossible to use independently. Touchscreens are often the only way to control everything from coffee machines and payment terminals, to subway ticket machines and in-flight entertainment systems. Interacting with dynamic touchscreens is difficult non-visually because the visual user interfaces change, interactions often occur over multiple different screens, and it is easy to accidentally trigger interface actions while exploring the screen. To solve these problems, we introduce StateLens, a three-part reverse-engineering solution that makes existing dynamic touchscreens accessible. First, StateLens reverse engineers the underlying state diagrams of existing interfaces from point-of-view videos found online or taken by users, using a hybrid crowd-computer vision pipeline. Second, using the state diagrams, StateLens automatically generates conversational agents to guide blind users through specifying the tasks that the interface can perform, allowing the StateLens iOS application to provide interactive guidance and feedback so that blind users can access the interface. Finally, a set of 3D-printed accessories enables blind people to explore capacitive touchscreens without the risk of triggering accidental touches on the interface. Our technical evaluation shows that StateLens can accurately reconstruct interfaces from stationary, hand-held, and web videos; and a user study of the complete system demonstrates that StateLens successfully enables blind users to access otherwise inaccessible dynamic touchscreens. Comment: ACM UIST 201
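
    Once an interface's state diagram has been reconstructed, guiding a user to a goal screen reduces to a shortest-path search over states and touch actions. The sketch below is not StateLens code; the coffee-machine states and action names are invented for illustration, and a plain breadth-first search stands in for whatever guidance logic the system actually uses:

    ```python
    from collections import deque

    # Hypothetical state diagram for a coffee-machine touchscreen:
    # state -> {touch action: next state}
    states = {
        "home":        {"tap_drinks": "drink_menu"},
        "drink_menu":  {"tap_latte": "size_select", "tap_back": "home"},
        "size_select": {"tap_large": "confirm", "tap_back": "drink_menu"},
        "confirm":     {"tap_ok": "brewing"},
        "brewing":     {},
    }

    def guide(diagram, start, goal):
        """Breadth-first search for the shortest sequence of touch actions
        leading from the current screen to the goal screen; None if unreachable."""
        queue = deque([(start, [])])
        seen = {start}
        while queue:
            state, actions = queue.popleft()
            if state == goal:
                return actions
            for action, nxt in diagram[state].items():
                if nxt not in seen:
                    seen.add(nxt)
                    queue.append((nxt, actions + [action]))
        return None

    print(guide(states, "home", "brewing"))
    # ['tap_drinks', 'tap_latte', 'tap_large', 'tap_ok']
    ```

    A conversational agent built on such a diagram can announce each action in turn and, after recognizing the resulting screen, advance to the next step or re-plan from the state it actually observes.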