    Tangible Web Layout Design For Blind And Visually Impaired People

    Although past research has enabled blind and visually impaired (BVI) developers to access information better and code more efficiently, they still lack accessible ways to create visual layouts. This research highlights the potential of using a tangible user interface (TUI) to enable BVI people to design web layouts, and presents Sparsha, a novel TUI for layout design. I conducted a semi-structured interview and a co-design session with a blind participant. Based on the elicited insights and designs, I implemented Sparsha using 3D-printed tactile elements, an Arduino-powered sensing circuit, and a web server that renders the final HTML layout. Users place tactile beads on a base to represent HTML elements on the screen. The Arduino senses the type and location of these beads and sends them to the web server, which renders the corresponding HTML element in the corresponding location in the client browser and provides audio feedback.
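
    To make the pipeline concrete, here is a minimal Python sketch of the server-side idea, assuming (hypothetically) a Flask endpoint, a bead-type code table and a grid-based layout; Sparsha's actual protocol and element set are not specified in the abstract.

        # Hypothetical sketch of the Sparsha server side: the Arduino posts a
        # bead's sensed type and grid position, and the server maps the
        # reading to an HTML element placed on a CSS grid. All names here
        # (BEAD_TYPES, the /bead endpoint) are illustrative assumptions.
        from flask import Flask, request

        app = Flask(__name__)

        BEAD_TYPES = {1: "header", 2: "nav", 3: "section", 4: "p", 5: "button"}
        placed = {}  # (row, col) -> tag

        @app.post("/bead")
        def place_bead():
            reading = request.get_json()  # e.g. {"type": 2, "row": 1, "col": 3}
            tag = BEAD_TYPES[reading["type"]]
            placed[(reading["row"], reading["col"])] = tag
            # The returned text could be passed to a speech synthesizer.
            return {"speak": f"{tag} placed at row {reading['row']}, column {reading['col']}"}

        @app.get("/layout")
        def render_layout():
            # Render each placed bead as an element positioned on a CSS grid.
            cells = "".join(
                f'<{tag} style="grid-row:{r};grid-column:{c}">{tag}</{tag}>'
                for (r, c), tag in placed.items()
            )
            return f'<div style="display: grid">{cells}</div>'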

    Understanding and Supporting Cross-modal Collaborative Information Seeking

    Most previous studies of web access by users with visual impairments (VI) have focused solely on single-user human-web interaction. This thesis explores the under-investigated area of cross-modal collaborative information seeking (CCIS): the challenges and opportunities in supporting visually impaired users to take an effective part in collaborative web search tasks with sighted peers. The thesis examines the overall question of what happens currently when people perform CCIS and how the CCIS process might be improved. To motivate the work, we conducted a survey, the results of which showed that a significant amount of CCIS activity takes place. An exploratory study was then conducted to investigate the challenges faced and the behaviour patterns that occur when people perform CCIS. We observed 14 pairs of VI and sighted users in both co-located and distributed settings. In this study participants used their tools of choice, that is, their web browser, note taker and preferred communications system. The study examines how concepts from the “mainstream” collaborative information seeking (CIS) literature play out in the context of cross-modality. Based on the findings of this study, we produced design recommendations for features that can better support cross-modal collaborative search. Following this, we surveyed mainstream CIS systems and selected the most accessible software package that satisfied the design recommendations from the initial study. Because the software was not built with accessibility in mind, we developed JAWS scripts and employed other JAWS features to improve its accessibility and the VI user experience. We then performed a second study, using the same participants undertaking search tasks of similar complexity to before, but this time using the CIS system. The aim of this study was to explore the impact on the CCIS process of introducing a mainstream CIS system enhanced for accessibility. In this study we examined CCIS from two perspectives: the collaboration and the individual interaction with the interface. The findings from this study provide an understanding of the process of CCIS when using a system that supports it, and assisted us in formulating a set of guidelines for supporting collaborative search in a cross-modal context.

    Designing Search User Interfaces for Visually Impaired Searchers: A User-centred Approach

    The Web has been a blessing for visually impaired users: with the help of assistive technologies such as screen readers, they can access previously inaccessible information independently. However, for screen reader users, web-based information seeking can still be challenging, as web pages are mainly designed for visual interaction. This affects visually impaired users’ perception of the Web as an information space as well as their experience of search interfaces. The aim of this thesis is therefore to consider visually impaired users’ information seeking behaviour, abilities and interactions via screen readers in the design of a search interface that supports complex information seeking. We first conduct a review of how visually impaired users navigate the Web using screen readers, highlighting the strategies employed, the challenges encountered and the solutions proposed to enhance web navigation through screen readers. We then investigate the information seeking behaviour of visually impaired users on the Web through an observational study, and we compare this behaviour to that of sighted users to examine the impact of screen reader interaction on the information seeking process. To engage visually impaired users in the design process, we propose and evaluate a novel participatory approach based on a narrative scenario and a dialogue-led interaction to verify user requirements and to brainstorm design ideas. The development of the search interface is informed by the requirements gathered from the observational study and is supported by the inclusion of visually impaired users in the design process. We implement and evaluate the proposed search interface with novel features to support visually impaired users in complex information seeking. This thesis shows that considering information seeking behaviour and users’ abilities and mode of interaction contributes significantly to the design of search user interfaces, ensuring that interface components are accessible as well as usable.

    Investigating retrospective interoperability between the accessible and mobile webs with regard to user input

    The World Wide Web (Web) has become a key technology for providing access to online information. Mobile Web users, who access the Web using small devices such as mobile phones and personal digital assistants (PDAs), make errors when entering text and controlling cursors. These errors are caused both by the characteristics of a device and by the environment in which it is used, and are called situational impairments. Disabled Web users, on the other hand, have difficulties accessing the Web due to impairments of their visual, hearing or motor abilities. We assert that errors experienced by Mobile Web users are similar in scope to those hindering motor-impaired Web users with dexterity issues, and that existing solutions from the motor-impaired users' domain can be migrated to the Mobile Web domain to address the common errors. The results of a systematic literature survey revealed 12 error types that affect both Mobile Web users and disabled Web users, ranging from being unable to locate a key to being unable to pin-point a cursor. User experiments confirmed that Mobile Web users and motor-impaired Web users share errors in scope: both miss key presses, press additional keys, unintentionally press a key more than once, or press a key too long. In addition, both small-device users and motor-impaired desktop users have difficulties in performing clicking, multiple clicking and drag selecting. Furthermore, when small-device users are moving, both the scope and the magnitude of the errors are shared. To address these errors, we migrated existing solutions from the disabled Web users' domain into the Mobile Web users' domain and developed a typing error correction system for Mobile Web users. The results of the user evaluation indicate that the proposed system can significantly reduce the error rates of Mobile Web users. This work makes an important contribution to both the Web accessibility field and the Mobile Web field: by leveraging research from the Web accessibility field in the Mobile Web field, we have linked two disjoint domains, migrated solutions from one to the other, and thus improved the usability and accessibility of the Mobile Web.
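
    As a minimal illustration of the timing-based part of such a correction system (the thesis system itself is not described here in enough detail to reproduce), the following Python sketch filters unintentional key repeats and over-long holds from a keystroke stream; the thresholds are assumed tuning parameters, not values from the thesis.

        # Sketch: drop suspected accidental repeats and cap long key holds.
        from dataclasses import dataclass

        @dataclass
        class KeyPress:
            key: str
            time: float      # seconds since input started
            duration: float  # how long the key was held

        def filter_keystrokes(events, repeat_window=0.3, max_hold=0.5):
            cleaned, last = [], {}
            for e in events:
                # A second press of the same key within repeat_window is
                # treated as an unintentional repeat and dropped.
                if e.key in last and e.time - last[e.key] < repeat_window:
                    continue
                # A hold longer than max_hold is capped so key auto-repeat
                # does not insert extra characters.
                cleaned.append(KeyPress(e.key, e.time, min(e.duration, max_hold)))
                last[e.key] = e.time
            return cleaned

        # "heello" typed with an accidental double 'e' becomes "hello".
        events = [KeyPress("h", 0.0, 0.1), KeyPress("e", 0.2, 0.1),
                  KeyPress("e", 0.35, 0.1), KeyPress("l", 0.8, 0.1),
                  KeyPress("l", 1.2, 0.1), KeyPress("o", 1.6, 0.1)]
        print("".join(e.key for e in filter_keystrokes(events)))

    A deployed system would combine such timing rules with dictionary-based checks, since legitimate double letters typed quickly would otherwise be lost.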

    Examining Techniques for Equivalent Access of Web User Interfaces for Blind and Low Vision People

    Making the Web equivalently accessible to blind and low vision (BLV) users remains a major challenge. While assistive technologies such as screen readers have enabled users to interact better with websites over the years, desktop web users still receive information about the contents of a page in a linear manner, which makes it hard to understand visual paradigms such as layout. In this work, I explore the benefits and drawbacks of incorporating spatial interactions into desktop screen readers, such as navigating directionally (rather than semantically) and hearing content via spatial audio, with a focus on both consuming and producing web content. I discuss how spatial interactions can be used effectively, based on observations of user preferences with our system. This research also reveals opportunities that BLV users envision for incorporating spatial interaction into applications beyond the web, including mapping services (e.g., Apple Maps), STEM education, and live presentations. I conclude this body of work by proposing a new phase of research for improving BLV users’ access to web user interface concepts when collaborating with sighted users. The goal of this future research is to improve sighted users’ communication tactics when collaborating with BLV users on web applications by educating sighted users about screen reader semantics.
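
    The contrast between semantic and directional navigation can be sketched in a few lines of Python; the geometry and tie-breaking below are assumptions for illustration, not the paper's algorithm.

        # Sketch: move to the spatially nearest element in a requested
        # direction, using element bounding-box centres, instead of moving
        # to the next element in DOM (semantic) order.
        from dataclasses import dataclass

        @dataclass
        class Element:
            name: str
            x: float  # centre of the element's bounding box
            y: float

        def navigate(current, elements, direction):
            checks = {
                "right": lambda e: e.x > current.x,
                "left":  lambda e: e.x < current.x,
                "down":  lambda e: e.y > current.y,
                "up":    lambda e: e.y < current.y,
            }
            candidates = [e for e in elements
                          if e is not current and checks[direction](e)]
            if not candidates:
                return None  # e.g. speak "nothing to the right"
            return min(candidates,
                       key=lambda e: (e.x - current.x) ** 2 + (e.y - current.y) ** 2)

        page = [Element("logo", 0, 0), Element("search", 200, 0), Element("nav", 0, 50)]
        print(navigate(page[0], page, "right").name)  # search
        print(navigate(page[0], page, "down").name)   # nav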

    GenAssist: Making Image Generation Accessible

    Blind and low vision (BLV) creators use images to communicate with sighted audiences. However, creating or retrieving images is challenging for BLV creators, as it is difficult to use authoring tools or assess image search results. Creators therefore limit the types of images they create or recruit sighted collaborators. While text-to-image generation models let creators generate high-fidelity images from a text description (i.e., a prompt), it is difficult to assess the content and quality of the generated images. We present GenAssist, a system that makes text-to-image generation accessible. Using our interface, creators can verify whether generated image candidates followed the prompt, access additional details in the image not specified in the prompt, and skim a summary of similarities and differences between image candidates. To power the interface, GenAssist uses a large language model to generate visual questions, vision-language models to extract answers, and a large language model to summarize the results. Our study with 12 BLV creators demonstrated that GenAssist enables and simplifies the process of image selection and generation, making visual authoring more accessible to all.
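
    The described three-stage pipeline can be sketched as follows in Python; ask_llm and ask_vlm are hypothetical stand-ins for real model APIs, and the prompts are illustrative rather than those used by GenAssist.

        # Sketch of the pipeline shape: LLM -> visual questions,
        # VLM -> answers per image, LLM -> comparative summary.

        def ask_llm(prompt: str) -> str:
            raise NotImplementedError("call a language model here")

        def ask_vlm(image_path: str, question: str) -> str:
            raise NotImplementedError("call a vision-language model here")

        def describe_candidates(prompt: str, image_paths: list[str]) -> str:
            # 1. Turn the generation prompt into concrete visual questions.
            questions = ask_llm(
                "List short questions that would verify whether an image "
                f"follows this prompt: {prompt!r}"
            ).splitlines()
            # 2. Answer each question for every candidate image.
            answers = {path: {q: ask_vlm(path, q) for q in questions}
                       for path in image_paths}
            # 3. Condense the answer table into a skimmable comparison.
            return ask_llm(
                "Summarize the similarities and differences between these "
                f"image candidates based on their answers: {answers}"
            )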

    Making Stock Market Charts Accessible through Provision of Textual Information in a Common Interface

    While sophisticated interactive charts provide a host of advantages to the majority of stock market investors, they also create a significant barrier for individuals with visual impairments. This paper describes the exploration and usability testing of three proposed alternative accessibility solutions aimed at improving the accessibility and usability of stock market charts for visually impaired screen reader users. The findings revealed that although a dropdown-menu solution was favoured over the auditory and text-input solutions, users prefer having as many options as possible: they would rather choose the appropriate solution according to their personal preferences and the task they wish to accomplish. In conclusion, a one-size-fits-all model is not ideal for meeting diverse users’ needs in the widest possible context. Providing options while enabling users to personalize the interface through flexible configurations is the ultimate goal of such a design.
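
    The underlying idea, deriving equivalent textual information from the same data that drives the chart, can be sketched in Python; the data and wording below are illustrative, not the paper's implementation.

        # Sketch: a screen-reader-friendly summary of a closing-price series.

        def summarize_prices(symbol, closes):
            start, end = closes[0], closes[-1]
            change = (end - start) / start * 100
            trend = "rose" if end > start else "fell" if end < start else "was flat"
            return (f"{symbol} {trend} from {start:.2f} to {end:.2f} "
                    f"({change:+.1f}%) over {len(closes)} sessions; "
                    f"high {max(closes):.2f}, low {min(closes):.2f}.")

        print(summarize_prices("ACME", [101.2, 103.5, 99.8, 104.1, 106.0]))
        # ACME rose from 101.20 to 106.00 (+4.7%) over 5 sessions;
        # high 106.00, low 99.80.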

    Dynamically generated multi-modal application interfaces

    This work introduces a new UIMS (User Interface Management System), which aims to solve numerous problems in user-interface development that arise from the hard-coded use of user interface toolkits. The presented solution is a concrete system architecture based on the abstract ARCH model, consisting of an interface abstraction layer, a dialog definition language called GIML (Generalized Interface Markup Language) and pluggable interface rendering modules. These components form an interface toolkit called GITK (Generalized Interface ToolKit). With the aid of GITK, one can build an application without explicitly creating a concrete end-user interface; at runtime, GITK creates these interfaces as needed from the abstract specification and runs them. GITK thereby equips one application with many interfaces, even kinds of interfaces that did not exist when the application was written. It should be noted that this work concentrates on providing the base infrastructure for adaptive/adaptable systems, and does not aim to deliver a complete solution. This work shows that the proposed solution is a fundamental concept needed to create interfaces for everyone, usable everywhere and at any time. The text further discusses the impact of such technology on users and on the various aspects of software systems and their development. The main target audience of this work is software developers and people with a strong interest in software development.
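
    The architecture's central idea, one abstract interface description rendered by pluggable modules, can be sketched in Python; GIML itself is an XML dialect, and the dict-based spec and renderers below are simplifications rather than GITK's actual API.

        # Sketch: the application declares its interface abstractly, and
        # interchangeable renderers turn that one description into different
        # concrete interfaces (here, a mock GUI and a spoken dialog).

        interface_spec = {
            "title": "Volume Control",
            "widgets": [
                {"kind": "range", "label": "Volume", "min": 0, "max": 100},
                {"kind": "action", "label": "Mute"},
            ],
        }

        def render_gui(spec):
            lines = [f"[Window] {spec['title']}"]
            for w in spec["widgets"]:
                widget = "Slider" if w["kind"] == "range" else "Button"
                lines.append(f"  [{widget}] {w['label']}")
            return "\n".join(lines)

        def render_speech(spec):
            parts = [f"Dialog {spec['title']}."]
            for w in spec["widgets"]:
                if w["kind"] == "range":
                    parts.append(f"Say a number from {w['min']} to {w['max']} for {w['label']}.")
                else:
                    parts.append(f"Say {w['label']} to activate it.")
            return " ".join(parts)

        print(render_gui(interface_spec))     # graphical rendering
        print(render_speech(interface_spec))  # spoken rendering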

    Software Usability

    This volume delivers a collection of high-quality contributions to help broaden the minds of developers and non-developers alike when it comes to considering software usability. It presents novel research and experiences and disseminates new ideas that are accessible to people who might not be software makers but who are undoubtedly software users.