
    The Role of Sonification as a Code Navigation Aid: Improving Programming Structure Readability and Understandability For Non-Visual Users

    Integrated Development Environments (IDEs) play an important role in the workflow of many software developers, e.g. providing syntax highlighting and other navigation aids that support work on lengthy codebases. Unfortunately, such complex visual information is difficult to convey with current screen-reader technologies, creating barriers for blind programmers who nevertheless use IDEs. This dissertation focuses on audio-based techniques that assist non-visual programmers in navigating large amounts of code. Audio generation techniques have recently seen major improvements in their capability to convey visually structured information to both sighted and non-visual users, making them a promising candidate for presenting information wherever it is organized visually. However, little is known about the usability of such techniques in software development. We therefore investigated whether audio-based techniques are capable of providing useful information about code structure to assist non-visual programmers. The contributions of this dissertation fall into two major parts. The first part describes our prior work investigating the major challenges non-visual programmers face in software development, specifically difficulties with code navigation, and discusses areas where additional features could make the programming environment more accessible to them. The second part presents studies that evaluate the usability and efficacy of audio-based techniques for conveying the structure of a codebase, as suggested by the stakeholders in Part I. Specifically, we investigated various sound effects, audio parameters, and interaction techniques to determine whether they could adequately support non-visual programmers navigating lengthy codebases. Part II also discusses the methodology for evaluating these techniques with stakeholders and examines them using an audio-based prototype designed to control audio timing, location, and method of interaction. Based on these evaluations, we provide a set of design guidelines for incorporating auditory feedback into the programming environment to improve code structure readability and understandability for non-visual programmers.
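
    As a hedged illustration of the kind of auditory structure cue discussed above (not the dissertation's actual design), the following Python sketch maps a line's block-nesting depth to a tone pitch and writes each cue as a short WAV file using only the standard library; the base frequency, semitone step, and indent width are illustrative assumptions.

        import math
        import struct
        import wave

        # Illustrative assumptions, not the dissertation's actual mapping.
        BASE_HZ = 220.0          # nesting depth 0 sounds as A3
        SEMITONES_PER_LEVEL = 4  # each level raises pitch by a major third

        def depth_to_frequency(depth: int) -> float:
            """Map nesting depth to a frequency on the equal-tempered scale."""
            return BASE_HZ * 2 ** (depth * SEMITONES_PER_LEVEL / 12)

        def nesting_depth(line: str, indent_width: int = 4) -> int:
            """Approximate a line's nesting depth from its leading spaces."""
            return (len(line) - len(line.lstrip(" "))) // indent_width

        def write_cue(path: str, freq: float, ms: int = 120, rate: int = 44100) -> None:
            """Write a short sine-tone cue as a 16-bit mono WAV file."""
            n = rate * ms // 1000
            frames = b"".join(
                struct.pack("<h", int(32767 * 0.4 * math.sin(2 * math.pi * freq * i / rate)))
                for i in range(n)
            )
            with wave.open(path, "wb") as w:
                w.setnchannels(1)
                w.setsampwidth(2)
                w.setframerate(rate)
                w.writeframes(frames)

        if __name__ == "__main__":
            # Emit one cue per source line; a navigation aid could play the
            # cue for whichever line the cursor lands on.
            source_lines = [
                "def f(x):",
                "    if x:",
                "        return 1",
                "    return 0",
            ]
            for i, line in enumerate(source_lines):
                write_cue(f"cue_{i}.wav", depth_to_frequency(nesting_depth(line)))

    Playing such cues as the cursor moves would let a listener hear entry into a deeper block as a rising pitch, which is one concrete form the structure cues evaluated above could take.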

    Experimental study of acoustic displays of flight parameters in a simulated aerospace vehicle

    Evaluating acoustic displays of target location in target detection and of flight parameters in a simulated aerospace vehicle.

    A Study on the Use of Listening Strategies and Listening Barriers of Chinese Training Institution Learners

    This study analyzes the listening challenges faced by 90 English language learners enrolled in a Chinese training institution. It compares high-level and low-level listeners and investigates strategies students can use to improve their listening skills. Data were gathered through a questionnaire and a stimulated-recall interview: the former assesses students' attention abilities, while the latter is self-reflective. In the listening phase, participants used sensory recall strategies and oral expressiveness to stimulate internal cognitive processes. The researcher also interviewed an instructor to gain insight into Chinese students' listening difficulties and strategies from an educational standpoint. The mixed-methods design enhanced the reliability of the research: quantitative data were analyzed using SPSS, and the qualitative data were transcribed, decoded, and analyzed. The study reveals that high- and low-level Chinese listeners employ distinct strategies and that their listening problems vary considerably. It also found that poor listening skills can hinder language proficiency, including the ability to articulate sounds, modulate pitch, and control tension. The primary objective of this study is to assess the incidence of listening comprehension strategies through a combination of qualitative and quantitative approaches, examining the listening comprehension of L2 learners from the viewpoints of both the learners and the instructor.

    Program Comprehension Through Sonification

    Background: Comprehension of computer programs is daunting, thanks in part to clutter in the software developer's visual environment and the need for frequent visual context changes. Non-speech sound has been shown to be useful in understanding the behavior of a program as it is running. Aims: This thesis explores whether using sound to help understand the static structure of programs is viable and advantageous. Method: A novel concept for program sonification is introduced. Non-speech sounds indicate characteristics of and relationships among a Java program's classes, interfaces, and methods. The sound mapping is incorporated into a prototype tool consisting of an extension to the Eclipse integrated development environment communicating with the sound engine Csound. Developers examining source code can aurally explore entities outside the visual context, and a rich body of sound techniques provides expanded representational possibilities. Two studies were conducted. In the first, software professionals participated in exploratory sessions to informally validate the sound mapping concept. The second study was a human-subjects experiment to discover whether using the tool and sound mapping improves performance on software comprehension tasks. Twenty-four software professionals and students performed maintenance-oriented tasks on two Java programs, with and without sound. Results: Viability is strong for differentiation and characterization of software entities, less so for identification. The results show no overall advantage of using sound in terms of task duration at the 5% level of significance. They do, however, suggest that sonification can be advantageous under certain conditions. Conclusions: The use of sound in program comprehension shows sufficient promise for continued research. Limitations of the present research include restriction to particular types of comprehension tasks, a single sound mapping, a single programming language, and limited training time. Future work includes experiments and case studies employing a wider set of comprehension tasks, sound mappings in domains other than software, and navigational capability for use by the visually impaired.
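
    As a rough sketch of the mapping concept described above (the entity attributes, parameter ranges, and timbre names are assumptions for illustration, not the thesis's actual mapping or its Csound integration), sound parameters could be derived from characteristics of a Java entity like so:

        from dataclasses import dataclass

        # Hypothetical sketch: entity kind picks the timbre, size controls
        # pitch and duration, visibility controls loudness. All values are
        # illustrative, not the thesis's actual mapping.

        @dataclass
        class JavaEntity:
            kind: str          # "class", "interface", or "method"
            name: str
            member_count: int  # methods/fields for a type, parameters for a method
            is_public: bool

        TIMBRE = {"class": "marimba", "interface": "strings", "method": "flute"}

        def sonify(e: JavaEntity) -> dict:
            """Return the sound parameters a synthesis engine would render."""
            return {
                "timbre": TIMBRE[e.kind],
                "pitch_hz": 220.0 + 20.0 * min(e.member_count, 20),   # bigger = higher
                "duration_s": 0.2 + 0.05 * min(e.member_count, 10),   # bigger = longer
                "amplitude": 0.8 if e.is_public else 0.4,             # public = louder
            }

        print(sonify(JavaEntity("class", "Parser", member_count=12, is_public=True)))

    A tool along these lines would hand such parameter sets to a synthesis engine (Csound, in the thesis's prototype), letting a listener distinguish, say, a large public class from a small private method by timbre, pitch, and loudness.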

    Central auditory processing disorder: a literature review on inter-disciplinary management, intervention, and implications for educators

    Clinical Questions: What top-down and bottom-up interventions across the psychology, audiology, education, and speech-language pathology domains are most effective for children and adolescents with Central Auditory Processing Disorder (CAPD)? What considerations for planning research and intervention might be offered to a classroom teacher to further support students diagnosed with CAPD, especially in relation to the Multi-Tiered System of Supports (MTSS), formerly known as Response to Intervention (RTI)? Method: Inter-disciplinary literature review. Study Sources: PsycInfo, Linguistics and Language Behavior Abstracts, ProQuest, International Journal of Audiology, American Speech-Language-Hearing Association, Journal of Neurotherapy, Medline-EBSCOhost, ERIC EBSCOhost, Professional Development Collection Education, and What Works Clearinghouse. Number of Included Studies: 16. Age Range: 2-13 years. Primary Results: 1) Phonological awareness training was the primary reading educational construct found among the interventions included in this literature review. 2) Most CAPD studies employed a combination of bottom-up and top-down treatments in intervention, which may indicate that, for a CAPD intervention to be more beneficial to the student, both bottom-up and top-down treatments should be considered and incorporated in relation to the student's individualized needs. Conclusions: Results confirmed that very little research, and few intervention implications, exist for CAPD students within the educational research discipline, including special education. Search results primarily included methods to improve listening in the classroom environment but did not specifically address intervention in relation to CAPD and its implications. Results also confirmed that a multi-disciplinary effort is needed to support clinical decisions and effective intervention for the CAPD population.

    Neural correlates of the processing of co-speech gestures

    In communicative situations, speech is often accompanied by gestures. For example, speakers tend to illustrate certain contents of speech by means of iconic gestures, hand movements that bear a formal relationship to the contents of speech. The meaning of an iconic gesture is determined both by its form and by the speech context in which it is performed; thus, gesture and speech interact in comprehension. Using fMRI, the present study investigated which brain areas are involved in this interaction process. Participants watched videos in which sentences containing an ambiguous word (e.g. "She touched the mouse") were accompanied by either a meaningless grooming movement, a gesture supporting the more frequent dominant meaning (e.g. animal), or a gesture supporting the less frequent subordinate meaning (e.g. computer device). We hypothesized that brain areas involved in the interaction of gesture and speech would show greater activation to gesture-supported sentences than to sentences accompanied by a meaningless grooming movement. The main result is that, when contrasted with grooming, both types of gestures (dominant and subordinate) activated an array of brain regions consisting of the left posterior superior temporal sulcus (STS), the inferior parietal lobule bilaterally, and the ventral precentral sulcus bilaterally. Given the crucial role of the STS in audiovisual integration, this activation might reflect the interaction between the meaning of the gesture and the ambiguous sentence. The activations in inferior frontal and inferior parietal regions may reflect a mechanism for determining the goal of co-speech hand movements through an observation-execution matching process.
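
    To make the contrast logic concrete, here is a simplified, hypothetical sketch (simulated data and a plain paired t-test, rather than the full GLM pipeline an fMRI study would actually use) of testing for voxels that respond more strongly to gesture-supported sentences than to grooming movements:

        import numpy as np

        # Simulated per-subject, per-voxel response estimates for the two
        # conditions; the effect planted in the first 50 voxels stands in for
        # a region such as the left posterior STS. Purely illustrative.
        rng = np.random.default_rng(0)
        n_subjects, n_voxels = 20, 1000

        grooming = rng.normal(0.0, 1.0, size=(n_subjects, n_voxels))
        gesture = grooming + rng.normal(0.0, 1.0, size=(n_subjects, n_voxels))
        gesture[:, :50] += 0.8  # simulated gesture-sensitive region

        # Paired t-statistic per voxel for the gesture > grooming contrast.
        diff = gesture - grooming
        t = diff.mean(0) / (diff.std(0, ddof=1) / np.sqrt(n_subjects))

        # Voxels exceeding an (uncorrected) threshold would be reported as
        # showing greater activation for gesture-supported sentences.
        print("suprathreshold voxels:", np.sum(t > 3.0))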

    Research on text comprehension in multimedia environments
