
    Making touch-based kiosks accessible to blind users through simple gestures

    Touch-based interaction is becoming increasingly popular and is commonly used as the main interaction paradigm for self-service kiosks in public spaces. Touch-based interaction is known to be visually intensive, and current non-haptic touch-display technologies are often criticized for excluding blind users. This study set out to demonstrate that touch-based kiosks can be designed to include blind users without compromising the user experience for non-blind users. Most touch-based kiosks are based on absolutely positioned virtual buttons, which are difficult to locate without any tactile, audible or visual cues. Simple stroke gestures, however, rely on relative movements, so the user does not need to hit a target at a specific location on the display. In this study, a touch-based train ticket sales kiosk based on simple stroke gestures was developed and tested on a panel of blind and visually impaired users, a panel of blindfolded non-visually impaired users and a control group of non-visually impaired users. The tests demonstrate that all the participants managed to discover, learn and use the touch-based self-service terminal and complete a ticket purchasing task. The majority of the participants completed the task in less than 4 minutes on the first attempt.
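
The core idea of this abstract, classifying relative movement rather than requiring the user to hit an absolutely positioned button, can be sketched in a few lines. This is a minimal illustration, not the kiosk's implementation: the four-direction gesture set, the screen coordinate convention (y grows downward, as on most touchscreens) and the 30-pixel tap threshold are all assumptions.

```python
import math

def classify_stroke(x0, y0, x1, y1, min_length=30):
    """Classify a touch stroke by its dominant direction.

    Only the relative movement between the start and end points matters,
    so a blind user never needs to locate a target at a specific position.
    Strokes shorter than min_length pixels are treated as taps.
    """
    dx, dy = x1 - x0, y1 - y0
    if math.hypot(dx, dy) < min_length:
        return "tap"
    if abs(dx) >= abs(dy):
        return "right" if dx > 0 else "left"
    return "down" if dy > 0 else "up"
```

A small vocabulary like this could drive an audio-guided dialogue (e.g., up/down strokes to cycle options, a rightward stroke to confirm) performed anywhere on the display.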

    Factors Affecting the Accessibility of IT Artifacts: A Systematic Review

    Accessibility awareness and development have improved in the past two decades, but many users still encounter accessibility barriers when using information technology (IT) artifacts (e.g., user interfaces and websites). Current research in information systems and human-computer interaction disciplines explores methods, techniques, and factors affecting the accessibility of IT artifacts for a particular population and provides solutions to address these barriers. However, design realized in one solution should be used to provide accessibility to the widest range of users, which requires an integration of solutions. To identify the factors that cause accessibility barriers and the solutions for users with different needs, a systematic literature review was conducted. This paper contributes to the existing body of knowledge by revealing (1) management- and development-level factors, and (2) user perspective factors affecting accessibility that address different accessibility barriers to different groups of population (based on the International Classification of Functioning by the World Health Organization). Based on these findings, we synthesize and illustrate the factors and solutions that need to be addressed when creating an accessible IT artifact.

    Sweep-Shake: Finding Digital Resources in Physical Environments

    In this article we describe the Sweep-Shake system, a novel, low interaction cost approach to supporting the spontaneous discovery of geo-located information. By sweeping a mobile device around their environment, users browse for interesting information related to points of interest. We built a mobile haptic prototype which encourages the user to explore their surroundings to search for location information, helping them discover this by providing directional vibrotactile feedback. Once potential targets are selected, the interaction is extended to offer a hierarchy of information levels with a simple method for filtering and selecting desired types of data for each geo-tagged location. We describe and motivate our approach and present a short field trial to situate our design in a real environment, followed by a more detailed user study that compares it against an equivalent visual-based system.
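
The directional vibrotactile feedback described above can be approximated with a simple angular model: vibrate harder the more directly the swept device points at a geo-tagged target. This is a hypothetical sketch, not the Sweep-Shake implementation; the 30-degree beam width and the linear falloff are assumptions.

```python
def angular_error(heading_deg, target_deg):
    """Smallest signed angle (degrees) from the device heading to the target bearing."""
    return (target_deg - heading_deg + 180) % 360 - 180

def vibration_level(heading_deg, target_deg, beam_width=30.0):
    """Vibrotactile intensity in [0, 1]: strongest when the device points
    straight at a point of interest, fading linearly to zero once the
    pointing error exceeds beam_width degrees."""
    error = abs(angular_error(heading_deg, target_deg))
    return max(0.0, 1.0 - error / beam_width)
```

On each compass update, the current intensity would be fed to the handset's vibration motor, so a sweep of the environment feels like "brushing past" nearby targets.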

    An Investigation of Target Acquisition with Visually Expanding Targets in Constant Motor-space

    Target acquisition is a core part of modern computer use. Fitts’ law has frequently been shown to predict performance of target acquisition tasks, even with targets that change size as the cursor approaches. Research into expanding targets has focussed on targets that expand in both visual- and motor-space. We investigate whether a visual expansion with no change in motor-space offers any performance benefit. We investigate constant motor-space visual expansion in both abstract pointing tasks (based on the ISO 9241-9 standard) and in a realistic deployment of the technique within fisheye menus. Our fisheye menu system eliminates the ‘hunting effect’ of target acquisition observed in Bederson’s initial proposal of fisheye menus, and in an evaluation we show that it allows faster selection times and is subjectively preferred to Bederson’s menus. We also show that visually expanding targets can improve selection times in target acquisition tasks, particularly with small targets.
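
The Fitts' law model underlying such pointing studies can be written down directly. The Shannon formulation and the regression constants a and b below are illustrative placeholders; in practice they are fit per device and task.

```python
import math

def fitts_mt(distance, width, a=0.1, b=0.15):
    """Predicted movement time in seconds from Fitts' law (Shannon form):
    MT = a + b * log2(D / W + 1), where D is the distance to the target
    and W its width in motor-space. a and b are placeholder constants.
    """
    index_of_difficulty = math.log2(distance / width + 1)  # in bits
    return a + b * index_of_difficulty
```

Under a purely visual expansion, W here is the motor-space width and stays constant, so the model predicts no change in movement time; any faster selection with visual expansion is a benefit beyond what Fitts' law alone accounts for.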

    Eyes-Off Physically Grounded Mobile Interaction

    This thesis explores the possibilities, challenges and future scope for eyes-off, physically grounded mobile interaction. We argue that for interactions with digital content in physical spaces, our focus should not be constantly and solely on the device we are using, but fused with an experience of the places themselves, and the people who inhabit them. Through the design, development and evaluation of a series of novel prototypes we show the benefits of a more eyes-off mobile interaction style. Consequently, we are able to outline several important design recommendations for future devices in this area. The four key contributing chapters of this thesis each investigate separate elements within this design space. We begin by evaluating the need for screen-primary feedback during content discovery, showing how a more exploratory experience can be supported via a less-visual interaction style. We then demonstrate how tactile feedback can improve the experience and the accuracy of the approach. In our novel tactile hierarchy design we add a further layer of haptic interaction, and show how people can be supported in finding and filtering content types, eyes-off. We then turn to explore interactions that shape the ways people interact with a physical space. Our novel group and solo navigation prototypes use haptic feedback for a new approach to pedestrian navigation. We demonstrate how variations in this feedback can support exploration, giving users autonomy in their navigation behaviour, but with an underlying reassurance that they will reach the goal. Our final contributing chapter turns to consider how these advanced interactions might be provided for people who do not have the expensive mobile devices that are usually required. We extend an existing telephone-based information service to support remote back-of-device inputs on low-end mobiles. We conclude by establishing the current boundaries of these techniques, and suggesting where their usage could lead in the future.

    Accessible On-Body Interaction for People With Visual Impairments

    While mobile devices offer new opportunities to gain independence in everyday activities for people with disabilities, modern touchscreen-based interfaces can present accessibility challenges for low vision and blind users. Even with state-of-the-art screen readers, it can be difficult or time-consuming to select specific items without visual feedback. The smooth surface of the touchscreen provides little tactile feedback compared to physical button-based phones. Furthermore, in a mobile context, hand-held devices present additional accessibility issues when both of the users’ hands are not available for interaction (e.g., one hand may be holding a cane or a dog leash). To improve mobile accessibility for people with visual impairments, I investigate on-body interaction, which employs the user’s own skin surface as the input space. On-body interaction may offer an alternative or complementary means of mobile interaction for people with visual impairments by enabling non-visual interaction with extra tactile and proprioceptive feedback compared to a touchscreen. In addition, on-body input may free users’ hands and offer efficient interaction as it can eliminate the need to pull out or hold the device. Despite this potential, little work has investigated the accessibility of on-body interaction for people with visual impairments. Thus, I begin by identifying needs and preferences of accessible on-body interaction. From there, I evaluate user performance in target acquisition and shape drawing tasks on the hand compared to on a touchscreen. Building on these studies, I focus on the design, implementation, and evaluation of an accessible on-body interaction system for visually impaired users. 
The contributions of this dissertation are: (1) identification of perceived advantages and limitations of on-body input compared to a touchscreen phone, (2) empirical evidence of the performance benefits of on-body input over touchscreen input in terms of speed and accuracy, (3) implementation and evaluation of an on-body gesture recognizer using finger- and wrist-mounted sensors, and (4) design implications for accessible non-visual on-body interaction for people with visual impairments.

    Program Comprehension Through Sonification

    Background: Comprehension of computer programs is daunting, thanks in part to clutter in the software developer's visual environment and the need for frequent visual context changes. Non-speech sound has been shown to be useful in understanding the behavior of a program as it is running. Aims: This thesis explores whether using sound to help understand the static structure of programs is viable and advantageous. Method: A novel concept for program sonification is introduced. Non-speech sounds indicate characteristics of and relationships among a Java program's classes, interfaces, and methods. A sound mapping is incorporated into a prototype tool consisting of an extension to the Eclipse integrated development environment communicating with the sound engine Csound. Developers examining source code can aurally explore entities outside of the visual context. A rich body of sound techniques provides expanded representational possibilities. Two studies were conducted. In the first, software professionals participated in exploratory sessions to informally validate the sound mapping concept. The second study was a human-subjects experiment to discover whether using the tool and sound mapping improve performance of software comprehension tasks. Twenty-four software professionals and students performed maintenance-oriented tasks on two Java programs with and without sound. Results: Viability is strong for differentiation and characterization of software entities, less so for identification. The results show no overall advantage of using sound in terms of task duration at a 5% level of significance. The results do, however, suggest that sonification can be advantageous under certain conditions. Conclusions: The use of sound in program comprehension shows sufficient promise for continued research. 
Limitations of the present research include restriction to particular types of comprehension tasks, a single sound mapping, a single programming language, and limited training time. Future work includes experiments and case studies employing a wider set of comprehension tasks, sound mappings in domains other than software, and adding navigational capability for use by the visually impaired.
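
A sound mapping of the kind described can be sketched as a pure function from program entities to tone parameters. The register ratios and size scaling below are invented for illustration and are not the thesis's actual Csound mapping.

```python
def entity_pitch(entity_kind, member_count, base_hz=220.0):
    """Map a Java program entity to a tone frequency: the entity kind
    selects a register, and larger entities sound higher within it.
    The capped scaling keeps very large classes audible rather than shrill."""
    registers = {"class": 1.0, "interface": 1.5, "method": 2.0}
    return base_hz * registers[entity_kind] * (1.0 + 0.05 * min(member_count, 20))
```

Parameters like these would then be handed to a sound engine (Csound, in the thesis's prototype) to render the tone, letting a developer hear an entity's kind and size without shifting visual context.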

    Impact of universal design ballot interfaces on voting performance and satisfaction of people with and without vision loss

    Since the Help America Vote Act (HAVA) in 2002 that addressed improvements to voting systems and voter access through the use of electronic technologies, electronic voting systems have improved in U.S. elections. However, voters with disabilities have been disappointed and frustrated because they have not been able to vote privately and independently (Runyan, 2007). Voting accessibility for individuals with disabilities has generally been accomplished through specialized designs, providing the addition of alternative inputs (e.g., headphones with tactile keypad for audio output, sip-and-puff) and outputs (e.g., audio output) to existing hardware and/or software architecture. However, while the add-on features may technically be accessible, they are often complex and difficult for poll workers to set up, and they require more time for the targeted voters with disabilities to use compared to the direct touch that enables voters without disabilities to select any candidate in a particular contest at any time. To address the complexities and inequities with the accessible alternatives, a universal design (UD) approach was used to design two experimental ballot interfaces, namely EZ Ballot and QUICK Ballot, that seamlessly integrate accessible features (e.g., audio output) based on the goal of designing one voting system for all. EZ Ballot presents information linearly (i.e., one candidate’s name at a time) and voters choose Yes or No inputs, which does not require search (i.e., finding a particular name). QUICK Ballot presents multiple names and allows users to choose a name using direct-touch or gesture-touch interactions (e.g., the drag and lift gesture). Despite the same goal of providing one type of voting system for all voters, each ballot has a unique selection and navigation process designed to facilitate access and participation in voting. 
Thus, my proposed research plan was to examine the effectiveness of the two UD ballots primarily with respect to their different ballot structures in facilitating voting performance and satisfaction for people with a range of visual abilities, including those with blindness or vision loss. The findings from this work show that voters with a range of visual abilities were able to use both ballots independently. However, as expected, voter performance and preferences for each ballot interface differed across the range of visual abilities. While non-sighted voters made fewer errors on the linear ballot (EZ Ballot), partially-sighted and sighted voters completed the random access ballot (QUICK Ballot) in less time. In addition, a higher percentage of non-sighted participants preferred the linear ballot, and a higher percentage of sighted participants preferred the random ballot. The main contributions of this work are in: 1) utilizing UD principles to design ballot interfaces that can be differentially usable by voters with a range of abilities; 2) demonstrating the feasibility of two UD ballot interfaces for voters with a range of visual abilities; 3) providing implications for other applications serving people with a range of visual abilities. The study suggests that the two ballots, both designed according to UD principles but with different weighting of principles, can be differentially usable by individuals with a range of visual abilities. This approach clearly distinguishes this work from previous efforts, which have focused on developing one UD solution for everyone, because UD does not dictate a single solution for everyone (e.g., a one-size-fits-all approach), but rather supports flexibility in use that provides a new perspective on human-computer interaction (Stephanidis, 2001).