
    Computer interfaces for the visually impaired

    Information access via computer terminals extends to blind and low-vision persons employed in many technical and nontechnical disciplines. Two aspects of providing computer technology to persons with a vision-related handicap are detailed. The first is research into the most effective means of integrating existing adaptive technologies into information systems, conducted to combine off-the-shelf products with adaptive equipment into cohesive, integrated information-processing systems. Details describe the type of functionality required in software to facilitate its incorporation into a speech and/or braille system. The second is research into providing audible and tactile interfaces to graphics-based interfaces. Parameters are included for the design and development of the Mercator Project, which will develop a prototype system for audible access to graphics-based interfaces. The system is being built within the public-domain architecture of the X Window System to show that it is possible to provide access to text-based applications within a graphical environment. This information will be valuable to suppliers of ADP equipment, since new legislation requires manufacturers to provide electronic access for the visually impaired.

    Tactons: structured tactile messages for non-visual information display

    Tactile displays are now becoming available in a form that can be easily used in a user interface. This paper describes a new form of tactile output. Tactons, or tactile icons, are structured, abstract messages that can be used to communicate information non-visually. A range of different parameters can be used for Tacton construction, including the frequency, amplitude and duration of a tactile pulse, plus other parameters such as rhythm and location. Tactons have the potential to improve interaction in a range of different areas, particularly where the visual display is overloaded, limited in size or not available, such as interfaces for blind people or in mobile and wearable devices. This paper describes Tactons, the parameters used to construct them and some possible ways to design them. Examples of where Tactons might prove useful in user interfaces are given.
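    As a rough illustration of how the parameters above might combine, the following Python sketch models a Tacton as a rhythm of pulses plus a body location. The `Pulse` and `Tacton` names and the example "new message" rhythm are illustrative assumptions, not part of the paper.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Pulse:
    frequency_hz: float   # vibration frequency of this pulse
    amplitude: float      # normalised intensity, 0.0 to 1.0
    duration_ms: int      # length of this pulse

@dataclass
class Tacton:
    """A structured tactile message built from the parameters listed in
    the abstract: per-pulse frequency/amplitude/duration, a rhythm
    (the pulse sequence itself) and a body location."""
    rhythm: List[Pulse]
    location: str         # e.g. which actuator on a wearable device

    def total_duration_ms(self) -> int:
        return sum(p.duration_ms for p in self.rhythm)

# A hypothetical "new message" Tacton: short-short-long rhythm.
new_message = Tacton(
    rhythm=[Pulse(250, 0.8, 100), Pulse(250, 0.8, 100), Pulse(250, 1.0, 300)],
    location="left_wrist",
)
print(new_message.total_duration_ms())  # 500
```

    A real system would hand such a structure to a vibrotactile driver; the point here is only that a Tacton is data, not a hard-coded vibration.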

    Accessibility requirements for human-robot interaction for socially assistive robots

    International Mention in the doctoral degree. Doctoral Programme in Computer Science and Technology, Universidad Carlos III de Madrid. President: María Ángeles Malfaz Vázquez. Secretary: Diego Martín de Andrés. Committee member: Mike Wal

    The Graphical Access Challenge for People with Visual Impairments: Positions and Pathways Forward

    Graphical access is one of the most pressing challenges for individuals who are blind or visually impaired. This chapter discusses some of the factors underlying the graphics access challenge, reviews prior approaches to addressing this long-standing information access barrier, and describes some promising new solutions. We specifically focus on touchscreen-based smart devices, a relatively new class of information access technologies, which our group believes represent an exemplary model of user-centered, needs-based design. We highlight both the challenges and the vast potential of these technologies for alleviating the graphics accessibility gap and share the latest results in this line of research. We close with recommendations on ideological shifts in mindset about how we approach solving this vexing access problem, which will complement both technological and perceptual advancements that are rapidly being uncovered through a growing research community in this domain.

    Redesign of Johar: a framework for developing accessible applications

    As the population of disabled people continues to grow, designing accessible applications remains a challenge, since most applications are incompatible with the assistive technologies disabled people use to interact with the computer. This accessibility issue is usually caused by the reluctance of software engineers or developers to include complete accessibility features in their applications, which in turn is often due to the extra cost and development effort required to dynamically adapt applications to a wide range of disabilities. Our aim of resolving accessibility issues led to the design and implementation of the Johar framework, which facilitates the development of applications accessible to both disabled and non-disabled users. In the Johar architectural model, the ability-based front-end user interfaces are called interface interpreters, while the application-specific logic or functionality implemented by application developers is called an application, or app. The seamless interaction between each interface interpreter and app is made possible by Johar. In this thesis, we assure the quality of Johar by detecting and resolving many inconsistencies, omissions, irrelevancies, and other anomalies that could trigger unexpected or abnormal behaviour in Johar, and/or alter the smooth operation of interface interpreters and apps. Our approach to quality assurance involved reviewing the two components of Johar, johar.gem and johar.idf, by critically examining the functionality of the classes in each component, including how the classes interrelate and how functions are allocated or distributed among them. We also performed an exhaustive comparative review of four documents - the IDF Format Specification document, the XML Schema Document (XSD), the Interface Interpreter Specification document, and the johar.idf package - which are vital to the smooth running of all interface interpreters and apps.
    We also developed an automated testing tool to determine whether all errors or violations in an IDF (Interface Description File) are detected and reported. As part of this thesis, we designed and implemented an interface interpreter, called Star, which presents WIMP (Windows, Icons, Menus, and Pointers) graphical user interfaces to users and is based on the new version of Johar. This new version evolved from the redesign activities carried out on the Johar components and the various modifications made during the quality assurance process. We also demonstrated the usage of Star on two apps to prove Johar's ability to guarantee smooth interaction between interface interpreters and apps. Finally, in this thesis, we designed two other interface interpreters which will be implemented in the near future.
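    The abstract does not reproduce the IDF grammar, so the Python sketch below only illustrates the kind of check such an automated testing tool performs: parsing an Interface Description File as XML and reporting every rule violation rather than stopping at the first. The element names (`idf`, `app-name`, `commands`) and the `REQUIRED_CHILDREN` rule table are invented for illustration.

```python
import xml.etree.ElementTree as ET

# Hypothetical rule table: which child elements each element must contain.
REQUIRED_CHILDREN = {"idf": ["app-name", "commands"], "commands": ["command"]}

def validate_idf(xml_text):
    """Return a list of violation messages for an IDF document.
    An empty list means the file passed every check."""
    try:
        root = ET.fromstring(xml_text)
    except ET.ParseError as e:
        return [f"not well-formed XML: {e}"]
    errors = []
    def walk(elem):
        for child_name in REQUIRED_CHILDREN.get(elem.tag, []):
            if elem.find(child_name) is None:
                errors.append(f"<{elem.tag}> is missing required <{child_name}>")
        for child in elem:
            walk(child)
    walk(root)
    return errors

good = "<idf><app-name>Demo</app-name><commands><command/></commands></idf>"
bad = "<idf><commands/></idf>"
print(validate_idf(good))  # []
print(validate_idf(bad))   # two violations reported
```

    Collecting all violations in one pass mirrors the thesis goal of checking that every error in an IDF is detected and reported, not just the first one encountered.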

    Dynamically generated multi-modal application interfaces

    This work introduces a new UIMS (User Interface Management System), which aims to solve numerous problems in the field of user-interface development arising from the hard-coded use of user-interface toolkits. The presented solution is a concrete system architecture based on the abstract ARCH model, consisting of an interface abstraction layer, a dialog definition language called GIML (Generalized Interface Markup Language) and pluggable interface rendering modules. These components form an interface toolkit called GITK (Generalized Interface ToolKit). With the aid of GITK, one can build an application without explicitly creating a concrete end-user interface. At runtime, GITK creates these interfaces as needed from the abstract specification and runs them. GITK thereby equips one application with many interfaces, even kinds of interfaces that did not exist when the application was written. It should be noted that this work concentrates on providing the base infrastructure for adaptive/adaptable systems and does not aim to deliver a complete solution. This work shows that the proposed solution is a fundamental concept needed to create interfaces for everyone, usable everywhere and at any time. The text further discusses the impact of such technology for users and on the various aspects of software systems and their development. The main target audience of this work is software developers and people with a strong interest in software development.
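    To make the idea of one abstract specification driving many concrete interfaces tangible, here is a minimal Python sketch in the spirit of GITK: a single abstract description rendered by two pluggable renderer functions, one textual and one speech-oriented. The dictionary schema and function names are assumptions for illustration; GITK's real dialog language is the XML-based GIML.

```python
# The abstract layer describes *what* the interface contains;
# pluggable renderers decide *how* to present it.
abstract_ui = {
    "title": "Volume Control",
    "widgets": [
        {"type": "range", "label": "Volume", "min": 0, "max": 100, "value": 30},
        {"type": "action", "label": "Mute"},
    ],
}

def render_text(spec):
    """A text-console rendering of the abstract specification."""
    lines = [f"== {spec['title']} =="]
    for w in spec["widgets"]:
        if w["type"] == "range":
            lines.append(f"[{w['label']}: {w['value']} ({w['min']}-{w['max']})]")
        else:
            lines.append(f"<button {w['label']}>")
    return "\n".join(lines)

def render_speech(spec):
    """A speech-oriented rendering of the same specification."""
    parts = [f"Dialog {spec['title']}."]
    for w in spec["widgets"]:
        if w["type"] == "range":
            parts.append(f"Slider {w['label']}, currently {w['value']}.")
        else:
            parts.append(f"Button {w['label']}.")
    return " ".join(parts)

print(render_text(abstract_ui))
print(render_speech(abstract_ui))
```

    Because the application only ever touches `abstract_ui`, a renderer written years later (braille, gesture, anything) can present it without changing application code, which is the core claim of the UIMS approach.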

    Collaborative adaptive accessibility and human capabilities

    This thesis discusses the challenges and opportunities facing the field of accessibility, particularly as computing becomes ubiquitous. It is argued that a new approach is needed that centres around adaptations (specific, atomic changes) to user interfaces and content in order to improve their accessibility for a wider range of people than targeted by present Assistive Technologies (ATs). Further, the approach must take into consideration the capabilities of people at the human level and facilitate collaboration, in planned and ad-hoc environments. There are two main areas of focus: (1) helping people experiencing minor-to-moderate, transient and potentially-overlapping impairments, as may be brought about by the ageing process and (2) supporting collaboration between people by reasoning about the consequences, from different users' perspectives, of the adaptations they may require. A theoretical basis for describing these problems and a reasoning process for the semi-automatic application of adaptations is developed. Impairments caused by the environment in which a device is being used are considered. Adaptations are drawn from other research and industry artefacts. Mechanical testing is carried out on key areas of the reasoning process, demonstrating fitness for purpose. Several fundamental techniques to extend the reasoning process in order to take temporal factors (such as fluctuating user and device capabilities) into account are broadly described. These are proposed to be feasible, though inherently bring compromises (which are defined) in interaction stability and the needs of different actors (user, device, target level of accessibility). This technical work forms the basis of the contribution of one work-package of the Sustaining ICT use to promote autonomy (Sus-IT) project, under the New Dynamics of Ageing (NDA) programme of research in the UK. Test designs for larger-scale assessment of the system with real-world participants are given. 
    The wider Sus-IT project provides social motivations and informed design decisions for this work and is carrying out longitudinal acceptance testing of the processes developed here.
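    The semi-automatic reasoning over human-level capabilities described above might be sketched as follows. The capability names, numeric levels, thresholds and adaptation list are all illustrative assumptions rather than the thesis's actual model; the collaborative step simply intersects recommendations so that only adaptations acceptable from every user's perspective survive.

```python
# Each adaptation declares the capability it compensates for and the
# level below which it is recommended (lower level = more impaired).
ADAPTATIONS = [
    {"name": "large-text",    "capability": "vision",    "threshold": 0.7},
    {"name": "high-contrast", "capability": "vision",    "threshold": 0.5},
    {"name": "captions",      "capability": "hearing",   "threshold": 0.6},
    {"name": "sticky-keys",   "capability": "dexterity", "threshold": 0.5},
]

def recommend(user_capabilities):
    """Recommend adaptations for one user's current capability levels.
    Environmental effects (glare, noise) could lower these levels too."""
    return [a["name"] for a in ADAPTATIONS
            if user_capabilities.get(a["capability"], 1.0) < a["threshold"]]

def shared(users):
    """Adaptations recommended for *every* collaborating user."""
    sets = [set(recommend(u)) for u in users]
    return set.intersection(*sets) if sets else set()

# A user with a moderate, possibly transient, visual impairment.
print(recommend({"vision": 0.6, "hearing": 0.9, "dexterity": 1.0}))
# ['large-text']
```

    A usage note: `shared([{"vision": 0.6}, {"vision": 0.4}])` keeps only `large-text`, since `high-contrast` is needed by one collaborator but not the other; the thesis's reasoning additionally weighs the consequences of each adaptation, which this sketch omits.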

    Investigation of dynamic three-dimensional tangible touchscreens: Usability and feasibility

    Touchscreen controls may soon be able to move from two physical dimensions into three. Though solutions exist for enhanced tactile touchscreen interaction using vibrotactile devices, no definitive commercial solution yet exists for giving real, physical shape to the virtual buttons on a touchscreen display. Of the many next steps in interface technology, this paper concentrates on the path leading to tangible, dynamic touchscreen surfaces. An experiment was performed that explores the usage differences between a flat-surface touchscreen and one augmented with raised surface controls. The results were mixed. The combination of tactile-visual modalities had a negative effect on task completion time when visual attention was focused on a single task (single-target task time increased by 8% and serial-target task time increased by 6%). On the other hand, the dual modality had a positive effect on error rate when visual attention was divided between two tasks (the serial-target error rate decreased by 50%). In addition to the experiment, this study also investigated the feasibility of creating a dynamic, three-dimensional, tangible touchscreen. A new interface solution may be possible by inverting the traditional touchscreen architecture and integrating emerging technologies such as organic light-emitting diode (OLED) displays and electrorheological-fluid-based tactile pins.

    Crossmodal audio and tactile interaction with mobile touchscreens

    Touchscreen mobile devices often use cut-down versions of desktop user interfaces, placing high demands on the visual sense that may prove awkward in mobile settings. The research in this thesis addresses the problems encountered by situationally impaired mobile users by using crossmodal interaction to exploit the abundant similarities between the audio and tactile modalities. By making information available to both senses, users can receive the information in the most suitable way, without having to abandon their primary task to look at the device. This thesis begins with a literature review of related work, followed by a definition of crossmodal icons: two icons may be considered crossmodal if and only if they provide a common representation of data which is accessible interchangeably via different modalities. Two experiments investigated possible parameters for use in crossmodal icons, with results showing that rhythm, texture and spatial location are effective. A third experiment focused on learning multi-dimensional crossmodal icons and the extent to which this learning transfers between modalities. The results showed identification rates of 92% for three-dimensional audio crossmodal icons when trained on the tactile equivalents, and identification rates of 89% for tactile crossmodal icons when trained on the audio equivalents. Crossmodal icons were then incorporated into a mobile touchscreen QWERTY keyboard. Experiments showed that keyboards with audio or tactile feedback produce fewer errors and greater text-entry speeds than standard touchscreen keyboards. The next study examined how environmental variables affect user performance with the same keyboard. The data showed that each modality performs differently under varying levels of background noise or vibration, and the exact levels at which these performance decreases occur were established. 
    The final study involved a longitudinal evaluation of a touchscreen application, CrossTrainer, focusing on longitudinal effects on performance with audio and tactile feedback, the impact of context on performance, and personal modality preference. The results show that crossmodal audio and tactile icons are a valid method of presenting information to situationally impaired mobile touchscreen users, with recognition rates of 100% over time. This thesis concludes with a set of guidelines on the design and application of crossmodal audio and tactile feedback to enable application and interface designers to employ such feedback in all systems.
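    A crossmodal icon, as defined above, keeps one common representation and renders it interchangeably in either modality. The Python sketch below assumes simple concrete mappings (texture to timbre or amplitude modulation, spatial location to stereo panning or actuator choice); the parameter names follow the thesis, but the mappings themselves are invented for illustration.

```python
# One abstract crossmodal icon: a shared representation of data
# built from the parameters the thesis found effective.
icon = {"rhythm": [1, 1, 2], "texture": "rough", "location": "left"}

def to_audio(icon):
    # Illustrative mapping: texture -> timbre, location -> stereo pan.
    timbre = {"rough": "sawtooth", "smooth": "sine"}[icon["texture"]]
    return {"note_lengths": icon["rhythm"], "timbre": timbre,
            "pan": icon["location"]}

def to_tactile(icon):
    # Illustrative mapping: texture -> amplitude modulation (Hz),
    # location -> which actuator fires.
    modulation = {"rough": 50, "smooth": 0}[icon["texture"]]
    return {"pulse_lengths": icon["rhythm"], "am_hz": modulation,
            "actuator": icon["location"]}

audio, tactile = to_audio(icon), to_tactile(icon)
# The shared structure survives in both modalities, which is what
# lets training in one modality transfer to the other.
assert audio["note_lengths"] == tactile["pulse_lengths"]
```

    The rhythm is identical in both renderings by construction; this shared structure is a plausible account of why the thesis observed such high cross-modality transfer in identification rates.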