    Designing and Evaluating Accessible E-Learning for Students with Visual Impairments in K-12 Computing Education

    This dissertation explores pathways for making K-12 computing education more accessible to blind or visually impaired (BVI) learners. As computer science (CS) expands into K-12 education, more concerted efforts are required to ensure all students have equitable access to opportunities to pursue a career in computing. To assess the current accessibility of CS curricula, materials, and learning environments for BVI learners, I conducted three studies: interviews with visually impaired developers, interviews with K-12 teachers of visually impaired students, and a remote observation of a computer science course. This exploration revealed that most CS education lacks the accommodations necessary for BVI students to learn at an equitable pace with sighted students. However, electronic learning (e-learning) emerged as the theme offering the most accessible learning experience for BVI students, although usability and accessibility challenges were still present in online learning platforms. My dissertation then took a human-centered approach across three further studies to design, develop, and evaluate an online learning management system (LMS) with the critical design elements needed to improve navigation and interaction for BVI users. Study one was a survey exploring sighted and visually impaired students' perceived readiness for taking online courses. The survey findings fueled study two, which employed participatory design with storytelling with K-12 teachers and BVI students to learn more about their experiences using LMSs and how they imagine such systems could be more accessible. These findings led to the development of the accessible learning content management system (ALCMS), a web-based platform for managing courses, course content, and course rosters, which was evaluated in study three with both sighted and visually impaired high school students to determine its usability and accessibility. This research contributes recommendations for features and design elements that improve accessibility in existing LMSs and inform the building of new ones.

    The Role of Sonification as a Code Navigation Aid: Improving Programming Structure Readability and Understandability For Non-Visual Users

    Integrated development environments (IDEs) play an important role in the workflow of many software developers, e.g., providing syntax highlighting and other navigation aids to support the creation of lengthy codebases. Unfortunately, such complex visual information is difficult to convey with current screen-reader technologies, creating barriers for blind programmers who nevertheless use IDEs. This dissertation focuses on utilizing audio-based techniques to assist non-visual programmers when navigating through large amounts of code. Recently, audio generation techniques have seen major improvements in their capability to convey visually based information to both sighted and non-visual users, making them a potential candidate for providing useful information, especially where information is visually structured. However, little is known about the usability of such techniques in software development. We therefore investigated whether audio-based techniques are capable of providing useful information about code structure to assist non-visual programmers. The contributions of this dissertation are split into two parts. The first part presents our prior work investigating the major challenges non-visual programmers face in software development, specifically code navigation difficulties. It also discusses areas where additional features could be developed to make the programming environment more accessible to non-visual programmers. The second part focuses on studies evaluating the usability and efficacy of audio-based techniques for conveying the structure of a codebase, as suggested by the stakeholders in Part I. Specifically, we investigated various sound effects, audio parameters, and interaction techniques to determine whether they could adequately support non-visual programmers navigating lengthy codebases. In Part II we discuss the methodological aspects of evaluating these techniques with the stakeholders and examine them using an audio-based prototype designed to control audio timing, locations, and methods of interaction. Based on this evaluation, a set of design guidelines is provided for including auditory feedback in the programming environment to improve code structure readability and understandability for non-visual programmers.
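
    As a concrete illustration of the kind of structural sonification discussed above, the sketch below maps code constructs to short tones, with node type choosing a base pitch and nesting depth raising it. The mapping, function names, and use of Python's ast and wave modules are illustrative assumptions, not the dissertation's actual prototype.

    # Illustrative sketch: map code structure to short tones ("earcons").
    # Node type picks a base pitch; nesting depth shifts it upward, so deeper
    # structure sounds higher. The cue sequence is written out as a WAV file.
    import ast
    import math
    import struct
    import wave

    RATE = 44100
    BASE_FREQ = {"ClassDef": 220.0, "FunctionDef": 330.0, "For": 440.0, "If": 523.0}

    def tone(freq, seconds=0.18, volume=0.4):
        n = int(RATE * seconds)
        return [volume * math.sin(2 * math.pi * freq * i / RATE) for i in range(n)]

    def sonify(source, out_path="structure.wav"):
        samples = []
        def visit(node, depth):
            kind = type(node).__name__
            if kind in BASE_FREQ:
                samples.extend(tone(BASE_FREQ[kind] * 2 ** (depth * 0.25)))
                samples.extend([0.0] * int(RATE * 0.05))  # short gap between cues
                depth += 1
            for child in ast.iter_child_nodes(node):
                visit(child, depth)
        visit(ast.parse(source), 0)
        with wave.open(out_path, "w") as f:
            f.setnchannels(1)
            f.setsampwidth(2)
            f.setframerate(RATE)
            f.writeframes(b"".join(struct.pack("<h", int(s * 32767)) for s in samples))

    sonify("class A:\n    def run(self):\n        for i in range(3):\n            if i:\n                pass\n")

    Playing back such a cue sequence while moving through a file would give a rough aural overview of how deeply nested the surrounding code is; the dissertation's studies evaluate richer variations of this idea (different sound effects, audio parameters, and interaction techniques).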

    Convo: What does conversational programming need? An exploration of machine learning interface design

    Vast improvements in natural language understanding and speech recognition have paved the way for conversational interaction with computers. While conversational agents have often been used for short goal-oriented dialog, we know little about agents for developing computer programs. To explore the utility of natural language for programming, we conducted a study (n=45) comparing different input methods to a conversational programming system we developed. Participants completed novice and advanced tasks using voice-based, text-based, and voice-or-text-based systems. We found that users appreciated aspects of each system (e.g., voice-input efficiency, text-input precision) and that novice users were more optimistic about programming using voice input than advanced users. Our results show that future conversational programming tools should be tailored to users' programming experience and allow users to choose their preferred input mode. To reduce cognitive load, future interfaces can incorporate visualizations and custom natural language understanding and speech recognition models for programming. Comment: 9 pages, 7 figures, submitted to VL/HCC 2020; associated user study video: https://youtu.be/TC5P3OO5ex
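
    To make the idea of conversational program construction concrete, the sketch below maps a few spoken or typed commands onto lines of Python using pattern matching. The command patterns and function name are hypothetical illustrations, not Convo's actual grammar or architecture.

    # Minimal sketch of intent-to-code mapping in a conversational
    # programming agent (hypothetical patterns, not Convo's actual grammar).
    import re

    PATTERNS = [
        (r"create a variable (\w+) (?:equal|set) to (\S+)", r"\1 = \2"),
        (r"print (\w+)",                                    r"print(\1)"),
        (r"add (\S+) to (\w+)",                             r"\2 += \1"),
    ]

    def utterance_to_code(utterance):
        """Translate one spoken or typed command into a line of Python, if recognized."""
        text = utterance.lower().strip()
        for pattern, template in PATTERNS:
            match = re.fullmatch(pattern, text)
            if match:
                return match.expand(template)
        return "# unrecognized command: " + utterance

    for cmd in ["Create a variable total equal to 0",
                "Add 5 to total",
                "Print total"]:
        print(utterance_to_code(cmd))

    A real system would pair such mappings with speech recognition and a far richer language model, which is where the input-mode and cognitive-load findings above come into play.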

    MAIDR: Making Statistical Visualizations Accessible with Multimodal Data Representation

    This paper investigates new data exploration experiences that enable blind users to interact with statistical data visualizations (bar plots, heat maps, box plots, and scatter plots) by leveraging multimodal data representations. In addition to the sonification and textual descriptions commonly employed by existing accessible visualizations, our MAIDR (multimodal access and interactive data representation) system incorporates two additional modalities (braille and review) that offer complementary benefits. It also provides blind users with the autonomy and control to interactively access and understand data visualizations. In a user study involving 11 blind participants, we found that the MAIDR system facilitated accurate interpretation of statistical visualizations. Participants exhibited a range of strategies for combining multiple modalities, influenced by their past interactions and experiences with data visualizations. This work accentuates the overlooked potential of combining refreshable tactile representation with other modalities and elevates the discussion on the importance of user autonomy when designing accessible data visualizations. Comment: Accepted to CHI 2024. Source code is available at https://github.com/xability/maid
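
    For a sense of how two of these modalities can complement each other, the sketch below pairs each bar of a bar plot with a textual description and a sonification pitch derived from its value. The linear value-to-frequency mapping and the function name are assumptions made for illustration, not MAIDR's actual design.

    # Sketch of multimodal access to a bar plot: each bar yields a text
    # description plus a tone frequency (value mapped linearly into a
    # pitch range), which a reader could step through one bar at a time.
    def describe_bars(labels, values, low_hz=220.0, high_hz=880.0):
        vmin, vmax = min(values), max(values)
        span = (vmax - vmin) or 1.0
        for i, (label, value) in enumerate(zip(labels, values), start=1):
            pitch = low_hz + (value - vmin) / span * (high_hz - low_hz)
            text = "Bar {} of {}: {}, value {}".format(i, len(values), label, value)
            yield text, round(pitch, 1)

    for text, pitch in describe_bars(["A", "B", "C"], [4, 9, 2]):
        print(text, "-> tone at", pitch, "Hz")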

    Value-focused investigation into programming languages affinity

    The search for better techniques for teaching computer programming is paramount to improving students' learning experiences. Several approaches have been proposed over the years, usually through technical solutions such as evaluation systems, digital classrooms, interactive lessons, and so on. Personal factors, such as affinity, have remained largely unexplored because of their qualitative and abstract nature. A preliminary survey on how and why affinity develops between programmers and their favorite languages, conducted with a master's degree class at Universidade do Minho, showed unexpected results as to which languages became favorites and the possible reasons for the students' choices. To explore this topic further and continue this research, the Value-Focused Thinking method was applied to construct a more complex, in-depth survey. This value-oriented method kept the focus of the study under control and raised several opportunities to improve the research as a whole. This paper describes the Value-Focused Thinking method and how it was applied to construct a new and deeper computer programming education survey for understanding affinity with languages.

    Designing multimodal interaction for the visually impaired

    Although multimodal computer input is believed to have advantages over unimodal input, little has been done to understand how to design a multimodal input mechanism to facilitate visually impaired users' information access. This research investigates sighted and visually impaired users' multimodal interaction choices when given an interaction grammar that supports speech and touch input modalities. It investigates whether task type, working memory load, or the prevalence of errors in a given modality affects a user's choice. Theories of human memory and attention are used to explain users' coordination of speech and touch input. Among the abundant findings from this research, the following are the most important for guiding system design: (1) Multimodal input is likely to be used when it is available. (2) Users select input modalities based on the type of task undertaken; they prefer touch input for navigation operations but speech input for non-navigation operations. (3) When errors occur, users prefer to stay in the failing modality rather than switch to another modality for error correction. (4) Despite these common multimodal usage patterns, there are still considerable individual differences in modality choices. Additional findings include: (1) Modality switching becomes more prevalent when lower working memory and attentional resources are required for the performance of other concurrent tasks. (2) Higher error rates increase modality switching, but only under duress. (3) Training order affects modality usage: teaching a modality first rather than second increases its use in users' task performance. In addition to uncovering the multimodal interaction patterns above, this research contributes to human-computer interaction design by (1) presenting the design of an eyes-free multimodal information browser and (2) presenting a Wizard of Oz method for working with visually impaired users in order to observe their multimodal interaction. The overall contribution of this work is one of the early investigations into how speech and touch might be combined into a non-visual multimodal system that can be used effectively for eyes-free tasks.

    Program Comprehension Through Sonification

    Background: Comprehension of computer programs is daunting, thanks in part to clutter in the software developer's visual environment and the need for frequent visual context changes. Non-speech sound has been shown to be useful in understanding the behavior of a program as it is running. Aims: This thesis explores whether using sound to help understand the static structure of programs is viable and advantageous. Method: A novel concept for program sonification is introduced. Non-speech sounds indicate characteristics of and relationships among a Java program's classes, interfaces, and methods. A sound mapping is incorporated into a prototype tool consisting of an extension to the Eclipse integrated development environment communicating with the sound engine Csound. Developers examining source code can aurally explore entities outside of the visual context. A rich body of sound techniques provides expanded representational possibilities. Two studies were conducted. In the first, software professionals participated in exploratory sessions to informally validate the sound mapping concept. The second study was a human-subjects experiment to discover whether using the tool and sound mapping improves performance of software comprehension tasks. Twenty-four software professionals and students performed maintenance-oriented tasks on two Java programs with and without sound. Results: Viability is strong for differentiation and characterization of software entities, less so for identification. The results show no overall advantage of using sound in terms of task duration at a 5% level of significance. The results do, however, suggest that sonification can be advantageous under certain conditions. Conclusions: The use of sound in program comprehension shows sufficient promise for continued research. Limitations of the present research include restriction to particular types of comprehension tasks, a single sound mapping, a single programming language, and limited training time. Future work includes experiments and case studies employing a wider set of comprehension tasks, sound mappings in domains other than software, and adding navigational capability for use by the visually impaired.
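
    As a rough illustration of the kind of sound mapping described here, the sketch below turns a summary of Java entities into a small Csound score, with entity kind choosing the instrument and member count stretching the note duration. The instrument numbers, parameter layout, and pitch values are assumptions made for illustration, not the mapping used by the thesis prototype.

    # Sketch: emit a Csound score from a summary of Java entities. Entity kind
    # selects the instrument (timbre) and member count lengthens the note.
    # Assumes an orchestra whose instruments 1-3 read amplitude as p4 and
    # frequency as p5; this is illustrative, not the thesis's actual mapping.
    INSTRUMENT = {"class": 1, "interface": 2, "method": 3}
    PITCH = {"class": 220.0, "interface": 262.0, "method": 330.0}

    def entity_note(kind, start, members=0):
        dur = 0.3 + 0.1 * members        # larger entities sound longer
        return "i{} {:.2f} {:.2f} 0.5 {:.1f}".format(INSTRUMENT[kind], start, dur, PITCH[kind])

    entities = [("class", 2), ("method", 0), ("method", 0), ("interface", 1)]
    score, start = [], 0.0
    for kind, members in entities:
        score.append(entity_note(kind, start, members))
        start += 0.5
    print("\n".join(score))   # paste into the score section of a .csd file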