7 research outputs found

    Towards intelligent, adaptive input devices for users with physical disabilities

    This thesis presents a novel application of user modelling, the domain of interest being the physical abilities of the user of a computer input device. Specifically, it describes a model which identifies aspects of keyboard use with which the user has difficulty. The model is based on data gathered in an empirical study of keyboard and mouse use by people with and without motor disabilities. In this study, many common input errors due to physical inaccuracies in using keyboards and mice were observed. For the majority of these errors, there exist keyboard or mouse configuration facilities intended to reduce or eliminate them. While such facilities are now integrated into the majority of modern operating systems, there is little published data describing their effect on keyboard or mouse usability. This thesis offers evidence that they can be extremely useful, even essential, but that further research and interface development are required.

    This thesis presents a user model which focuses on four of the most commonly observed keyboard difficulties. The model also makes recommendations for settings for three keyboard configuration facilities, each of which tackles one of these specific difficulties. As a user modelling task, this application presents a number of interesting challenges. Different users will have very different configuration requirements, and the requirements of individual users may also change over long or short periods of time. Some users will have cognitive impairments. Users may have very limited time and energy to devote to computer use. In response, this research has investigated the extent to which it is possible to model users without interrupting the task for which they are using a computer in the first place. This approach is appealing because it does not require users to spend time participating in model instantiation. This focus on inference rather than explicit testing or questioning also allows the model to dynamically track an individual user's changing requirements.

    This thesis shows that, within the context of the keyboard difficulties studied, such an approach is feasible. The implemented model records users' keyboard input unintrusively as they perform their own input tasks. This input is examined for evidence of certain types of input error or indications of difficulties in using the keyboard. In the model presented, conclusions are based on the assumption that the user is typing English text in a word processing application; however, the design of the model allows any other textual language to be used.

    A second empirical study, evaluating the model, is described. The model is shown to be very accurate in identifying users having difficulties in each of the areas tackled, the only exception being users who find a given operation awkward but are able to perform it accurately. Where it is also possible to evaluate the configuration recommendations made by the model, the chosen settings are effective in reducing input errors and increasing user satisfaction with the keyboard. The model is also able to draw conclusions quickly for users with higher error rates, and shows good overall stability.

    In the light of this successful identification of keyboard difficulties, potential applications of the model are suggested. It could be used to help occupational therapists and assistive technologists assess the keyboard configuration requirements of a new user. It could also be made available to users themselves; many people are currently unaware of facilities they may find useful, or of how to activate them. The model could be extended to other areas of keyboard use, and to other input devices. This would allow systems to provide automatic, dynamic support for configuration, which would go some way towards improving the accessibility of computer systems for people with motor disabilities.
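    The abstract does not spell out how the model detects difficulties or derives its recommendations, so the following is only a minimal sketch of the general approach it describes: monitoring timestamped keystrokes unobtrusively, looking for one error pattern (here, key bounce, an unintended repeated press of the same key), and suggesting a setting for a corresponding configuration facility (here, a BounceKeys-style debounce delay). The event format, thresholds, and function names are illustrative assumptions, not the thesis's actual model.

    # Hypothetical sketch: infer a key-bounce problem from a passively collected
    # keystroke log and suggest a debounce delay. Not the thesis's algorithm.
    from typing import List, Optional, Tuple

    KeyEvent = Tuple[str, float]  # (key, press time in seconds)

    def detect_bounce_gaps(events: List[KeyEvent],
                           bounce_window: float = 0.2) -> List[float]:
        """Inter-press gaps (seconds) that look like accidental repeated presses."""
        gaps = []
        for (prev_key, prev_t), (key, t) in zip(events, events[1:]):
            if key == prev_key and 0 < t - prev_t < bounce_window:
                gaps.append(round(t - prev_t, 3))
        return gaps

    def recommend_debounce_delay(events: List[KeyEvent],
                                 min_evidence: int = 5) -> Optional[float]:
        """Suggest a delay a little above the longest observed bounce gap,
        or None if there is not yet enough evidence to draw a conclusion."""
        gaps = detect_bounce_gaps(events)
        if len(gaps) < min_evidence:
            return None
        return round(1.5 * max(gaps), 2)

    # Example: a log in which the 'e' and 'o' keys each register twice by accident,
    # while the deliberately repeated 'l' (longer gap) is not flagged.
    log = [("h", 0.00), ("e", 0.31), ("e", 0.36), ("l", 0.80),
           ("l", 1.10), ("o", 1.52), ("o", 1.58)]
    print(detect_bounce_gaps(log))                         # [0.05, 0.06]
    print(recommend_debounce_delay(log, min_evidence=2))   # 0.09

    Because the log is collected while the user works on their own tasks, a model like this can keep re-evaluating its recommendation as the user's abilities change, which is the dynamic tracking the abstract emphasises.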

    Using handhelds to help people with motor impairments


    Circling interface: an alternative interaction method for on-screen object manipulation

    An alternative interaction method, called the circling interface, was developed and evaluated for individuals with disabilities who find it difficult or impossible to consistently and efficiently perform pointing operations involving the left and right mouse buttons. The circling interface is a gesture-based interaction technique. To specify a target of interest, the user makes a circling motion around the target. Each edge of the screen is used to specify a desired pointing command; the user selects a command before circling the target. Empirical evaluations were conducted with human subjects from three groups (individuals without disability, individuals with spinal cord injury, and individuals with cerebral palsy), comparing each group's performance on pointing tasks with the circling interface to performance on the same tasks using a mouse button or dwell-clicking software. Across all three groups, the circling interface was faster than the dwelling interface, although the difference was not statistically significant. For the single-click operation, the circling interface was slower than dwell selection, but for both double-click and drag-and-drop operations it was faster. In terms of accuracy, the results were mixed: for able-bodied subjects circling was more accurate than dwelling, for subjects with SCI dwelling was more accurate than circling, and for subjects with CP there was no difference. However, if the circling interface automatically corrected errors caused by circling an area containing no target, and ignored circles that were too small or drawn too quickly, its accuracy would significantly outperform dwell selection. This suggests that the circling interface can be used in conjunction with existing pointing techniques, and that this combined approach may provide more effective mouse use for people with pointing problems. Consequently, the circling interface can improve clinical practice by providing an alternative pointing method that does not require physically activating mouse buttons and is more efficient than dwell-clicking. It is also expected to be useful for both computer access and augmentative communication software.
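    The abstract does not describe how a circling gesture is recognised or matched to a target, so the sketch below shows one plausible formulation under stated assumptions: the pointer trajectory and the target's centre are available, enclosure is tested with a winding-number calculation, and circles that are too small or drawn too fast are ignored, in line with the filtering the evaluation above suggests. The thresholds and function names are hypothetical, not part of the published interface.

    # Hypothetical sketch: decide whether a pointer stroke circles a target,
    # rejecting strokes that are too small or too fast to be deliberate circles.
    import math
    from typing import List, Tuple

    Point = Tuple[float, float]

    def winding_turns(stroke: List[Point], centre: Point) -> float:
        """Signed number of turns the stroke makes around the centre point."""
        total = 0.0
        for (x0, y0), (x1, y1) in zip(stroke, stroke[1:]):
            a0 = math.atan2(y0 - centre[1], x0 - centre[0])
            a1 = math.atan2(y1 - centre[1], x1 - centre[0])
            d = a1 - a0
            if d > math.pi:        # wrap the angle difference into (-pi, pi]
                d -= 2 * math.pi
            elif d <= -math.pi:
                d += 2 * math.pi
            total += d
        return total / (2 * math.pi)

    def circles_target(stroke: List[Point], times: List[float], centre: Point,
                       min_radius: float = 15.0, min_duration: float = 0.15) -> bool:
        """True if the gesture winds roughly once around the target centre."""
        if times[-1] - times[0] < min_duration:
            return False           # drawn too fast: likely accidental
        if max(math.hypot(x - centre[0], y - centre[1]) for x, y in stroke) < min_radius:
            return False           # too small to be a deliberate circle
        return abs(winding_turns(stroke, centre)) >= 0.8   # about one full turn

    # Example: a rough circle of radius 40 px around an icon centred at (100, 100).
    pts = [(100 + 40 * math.cos(2 * math.pi * i / 20),
            100 + 40 * math.sin(2 * math.pi * i / 20)) for i in range(21)]
    ts = [0.02 * i for i in range(21)]
    print(circles_target(pts, ts, (100, 100)))   # True: target is enclosed
    print(circles_target(pts, ts, (300, 300)))   # False: target lies outside

    A winding-number test of this kind tolerates sloppy, non-closed circles, which matters for users with limited fine motor control; the size and speed filters are what allow accidental strokes to be discarded rather than counted as errors.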

    Accessibility in context: understanding the truly mobile experience of users with motor impairments

    Touchscreen smartphones are becoming broadly adopted by the US population. Ensuring that these devices are accessible for people with disabilities is critical for equal access. For people with motor impairments, the vast majority of studies on touchscreen mobile accessibility have taken place in the laboratory. These studies show that while touchscreen input offers advantages, such as requiring less strength than physical buttons, it also presents accessibility challenges, such as the difficulty of tapping on small targets or making multitouch gestures. However, because of the focus on controlled lab settings, past work does not provide an understanding of the contextual factors that affect smartphone use in everyday life, or of the activities these devices enable for people with motor impairments. To investigate these issues, this thesis research includes two studies: first, an in-person study with four participants with motor impairments, comprising diary entries and an observational session, and second, an online survey with nine respondents. Using case study analysis for the in-person participants, we found that mobile devices have the potential to help motor-impaired users reduce the physical effort required for everyday tasks (e.g., turning on a TV, checking transit accessibility in advance), that challenges in touchscreen input still exist, and that the impact of situational impairments on this population can be substantial. The online survey results confirm these findings, for example, highlighting the difficulty of text input, particularly when users are out and mobile rather than at home. Based on these findings, future research should focus on enhancing current touchscreen input, exploring the potential of wearable devices for mobile accessibility, and designing more applications and services to improve access to the physical world.

    The design and evaluation of non-visual information systems for blind users

    This research was motivated by the sudden increase of hypermedia information (such as that found on CD-ROMs and on the World Wide Web), which was not initially accessible to blind people although it offered significant advantages over traditional braille and audiotape information. Existing non-visual information systems for blind people had very different designs and functionality, but none of them provided what was required according to user requirements studies: an easy-to-use non-visual interface to hypermedia material with a range of input devices for blind students. Furthermore, there was no single suitable design and evaluation methodology which could be used for the development of non-visual information systems. The aims of this research were therefore: (1) to develop a generic, iterative design and evaluation methodology consisting of a number of techniques suitable for formative evaluation of non-visual interfaces; (2) to explore non-visual interaction possibilities for a multimodal hypermedia browser for blind students based on user requirements; and (3) to apply the evaluation methodology to non-visual information systems at different stages of their development. The methodology developed and recommended consists of a range of complementary design and evaluation techniques, and it successfully allowed the systematic development of prototype non-visual interfaces for blind users by identifying usability problems and developing solutions. Three prototype interfaces are described: two versions of a hypermedia browser, whose design and evaluation are presented, and a digital talking book, which is evaluated. Recommendations made from the evaluations for an effective non-visual interface include the provision of a consistent multimodal interface, non-speech sounds for information and feedback, a range of simple and consistent commands for reading, navigation, orientation and output control, and support features. This research will inform developers of similar systems for blind users; in addition, the methodology and design ideas are considered sufficiently generic, but also sufficiently detailed, that the findings could be applied successfully to the development of non-visual interfaces of any type.

    A model of keyboard configuration requirements
