
    Making Spatial Information Accessible on Touchscreens for Users who are Blind and Visually Impaired

    Touchscreens have become the de facto standard of input for mobile devices because they make efficient use of the limited input and output space imposed by their form factor. In recent years, people who are blind and visually impaired have been using smartphones and touchscreens increasingly often. Although basic access is available, many accessibility issues remain to be addressed before this population is fully included. One important challenge lies in accessing and creating spatial information on touchscreens. The work presented here provides three new techniques, using three different modalities, for accessing spatial information on touchscreens. The first system makes geometry and diagram creation accessible on a touchscreen through the use of text-to-speech and gestural input; it is informed by a qualitative study of how people who are blind and visually impaired currently access and create graphs and diagrams. The second system makes directions through maps accessible using multiple vibration sensors, without any sound or visual output. The third system investigates the use of binaural sound on a touchscreen to make various types of applications accessible, such as physics simulations, astronomy software, and video games.
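
    As an illustration of the third system's general approach, the sketch below (not taken from the thesis; the pan law and parameter names are assumptions) maps a target's horizontal offset from the touch point to left/right gains so the target is heard to one side:

```python
# A minimal sketch, not the thesis' implementation, of the general idea
# behind the third system: mapping a target's horizontal offset from the
# user's finger to stereo/binaural gains so the target is heard to one side.
import math

def pan_gains(finger_x: float, target_x: float, screen_width: float):
    """Constant-power left/right gains for a target relative to the touch point."""
    # Normalized pan position in [-1, 1]: negative = target to the left.
    pan = max(-1.0, min(1.0, 2.0 * (target_x - finger_x) / screen_width))
    angle = (pan + 1.0) * math.pi / 4.0       # 0 = hard left, pi/2 = hard right
    return math.cos(angle), math.sin(angle)   # (left_gain, right_gain)

# Target 300 px to the right of the finger on a 1080 px wide screen.
print(pan_gains(finger_x=400, target_x=700, screen_width=1080))
```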

    Using Sonic Enhancement to Augment Non-Visual Tabular Navigation

    More information is now readily available to computer users than at any time in human history; however, much of this information is often inaccessible to people with blindness or low vision, for whom information must be presented non-visually. Currently, screen readers can verbalize on-screen text using text-to-speech (TTS) synthesis; however, much of this vocalization is inadequate for browsing the Internet. An auditory interface that incorporates auditory-spatial orientation was created and tested. For information that can be structured as a two-dimensional table, links can be semantically grouped as cells in a row within an auditory table, which provides a consistent structure for auditory navigation. An auditory display prototype was tested with sixteen legally blind subjects. Results demonstrated that stereo panning was an effective technique for audio-spatially orienting non-visual navigation in a five-row, six-column HTML table as compared to a centered, stationary synthesized voice. These results were based on measuring the time-to-target (TTT), the amount of time elapsed from the first prompt to the selection of each tabular link. Preliminary analysis of the TTT values recorded during the experiment showed that the populations did not conform to the ANOVA requirements of normality and equality of variances; therefore, the data were transformed using the natural logarithm. The repeated-measures two-factor ANOVA results show that the logarithmically transformed TTTs were significantly affected by the tonal variation method, F(1,15) = 6.194, p = 0.025. The results also show that the logarithmically transformed TTTs were marginally affected by the stereo spatialization method, F(1,15) = 4.240, p = 0.057, and were not significantly affected by the interaction of the two methods, F(1,15) = 1.381, p = 0.258. These results suggest that employing both methods simultaneously may confuse the subject. The significant effect of tonal variation corresponds to an increase in the average TTT; in other words, the presence of preceding tones increases task completion time on average. The marginally significant effect of stereo spatialization decreases the average log(TTT) from 2.405 to 2.264.
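
    A minimal sketch of the reported analysis pipeline, under assumed column names and synthetic data (the study's own code and measurements are not reproduced here): log-transform the TTT values and run a two-factor repeated-measures ANOVA with statsmodels:

```python
# Sketch of the reported analysis: log-transform time-to-target (TTT) and
# run a two-factor repeated-measures ANOVA (tonal variation x stereo
# spatialization). The column names and example data are assumptions.
import numpy as np
import pandas as pd
from statsmodels.stats.anova import AnovaRM

rng = np.random.default_rng(0)
subjects = range(1, 17)                      # 16 legally blind participants
conditions = [(tone, pan) for tone in ("tone", "no_tone")
              for pan in ("panned", "centered")]

rows = []
for s in subjects:
    for tone, pan in conditions:
        # Hypothetical mean TTT per cell, in seconds (roughly log-normal).
        ttt = rng.lognormal(mean=2.3, sigma=0.4)
        rows.append({"subject": s, "tone": tone, "pan": pan, "ttt": ttt})
df = pd.DataFrame(rows)

# TTT violated the ANOVA assumptions of normality and equal variances,
# so the analysis is run on log(TTT) instead.
df["log_ttt"] = np.log(df["ttt"])

res = AnovaRM(df, depvar="log_ttt", subject="subject",
              within=["tone", "pan"]).fit()
print(res.anova_table)   # F and p for tone, pan, and the tone:pan interaction
```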

    Accessible charts are part of the equation of accessible papers: a heuristic evaluation of the highest impact LIS Journals

    Purpose: Statistical charts are an essential source of information in academic papers. Charts play an important role in conveying, clarifying and simplifying the research results provided by the authors, but they present accessibility barriers for people with low vision. This article aims to evaluate the accessibility of the statistical charts published in the library and information science (LIS) journals with the greatest impact factor. Design/methodology/approach: A list of heuristic indicators developed by the authors was used to assess the accessibility of statistical charts for people with low vision. The heuristics were applied to a sample of charts from 2019 issues of the ten LIS journals with the highest impact factor according to the JCR ranking. Findings: Current image-submission practices do not follow basic accessibility recommendations such as sufficient color contrast or the use of textual alternatives. In addition, some incongruities between the journals' technical suggestions for image submission and their application in the analyzed charts emerged. The main problems identified are poor text alternatives, insufficient contrast ratios between adjacent colors, and the lack of customization options. Authoring tools do not help authors fulfill these requirements. Research limitations: The sample is not very extensive; nonetheless, it is representative of common practices and of the most frequent accessibility problems in this context. Social implications: The proposed heuristics are a good starting point for generating guidelines for authors when preparing their papers for publication and for guiding journal publishers in creating accessible documents. Low-vision users, whose condition is highly prevalent, will benefit from the improvements. Originality/value: The results of this research provide key insights into low-vision accessibility barriers not considered in previous literature and can be a starting point for their solution. This research has been done in the framework of the PhD Programme in Engineering and Information Technology of the Universitat de Lleida (UdL). This work has been partially supported by the Spanish project PID2019-105093GB-I00 (MINECO/FEDER, UE) and the CERCA Programme/Generalitat de Catalunya.
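
    The contrast problems identified above can be checked with the standard WCAG relative-luminance and contrast-ratio formulas; the sketch below is a generic illustration (the example colors are arbitrary and not taken from the evaluated charts):

```python
# Sketch of the WCAG relative-luminance and contrast-ratio computation that
# the heuristic "insufficient contrast ratio between adjacent colors" refers to.
def relative_luminance(hex_color: str) -> float:
    """Relative luminance of an sRGB color given as '#RRGGBB'."""
    def channel(c: int) -> float:
        c = c / 255.0
        return c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4
    r, g, b = (int(hex_color.lstrip("#")[i:i + 2], 16) for i in (0, 2, 4))
    return 0.2126 * channel(r) + 0.7152 * channel(g) + 0.0722 * channel(b)

def contrast_ratio(color_a: str, color_b: str) -> float:
    """WCAG contrast ratio between two colors, in the range 1..21."""
    la, lb = relative_luminance(color_a), relative_luminance(color_b)
    lighter, darker = max(la, lb), min(la, lb)
    return (lighter + 0.05) / (darker + 0.05)

# Adjacent series colors in a chart: WCAG asks for at least 3:1 for
# graphical objects (and 4.5:1 for normal text such as labels).
print(round(contrast_ratio("#1f77b4", "#ff7f0e"), 2))
```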

    Neuroplasticity, neural reuse, and the language module

    What conception of mental architecture can survive the evidence of neuroplasticity and neural reuse in the human brain? In particular, what sorts of modules are compatible with this evidence? I aim to show how developmental and adult neuroplasticity, as well as evidence of pervasive neural reuse, forces us to revise the standard conception of modularity and spells the end of a hardwired and dedicated language module. I argue from principles of both neural reuse and neural redundancy that language is facilitated by a composite of modules (or module-like entities), few if any of which are likely to be linguistically special, and that neuroplasticity provides evidence that, in key respects and to an appreciable extent, few if any of them ought to be considered developmentally robust, though their development does seem to be constrained by features intrinsic to particular regions of cortex (manifesting as domain-specific predispositions or acquisition biases). In the course of doing so, I articulate a schematically and neurobiologically precise framework for understanding modules and their supramodular interactions.

    Multisensory self-motion processing in humans

    Humans obtain and process sensory information from various modalities to ensure successful navigation through the environment. While visual, vestibular, and auditory self-motion perception have been extensively investigated, studies on tactile self-motion perception are comparably rare. In my thesis, I have investigated tactile self-motion perception and its interaction with the visual modality. In the first of two behavioral studies, I analyzed the influence of a tactile heading stimulus, introduced as a distractor, on visual heading perception. In the second behavioral study, I analyzed visuo-tactile perception of self-motion direction (heading). In both studies, visual self-motion was simulated as forward motion over a 2D ground plane, and tactile self-motion was simulated by airflow towards the subjects' forehead, mimicking the experience of travel wind, e.g., during a bike ride. In the analysis of the subjects' perceptual reports, I focused on possible visuo-tactile interactions and applied different models to describe the integration of visuo-tactile heading stimuli. Lastly, in a functional magnetic resonance imaging (fMRI) study, I investigated neural correlates of visual and tactile perception of traveled distance (path integration) and its modulation by prediction and cognitive task demands. In my first behavioral study, subjects indicated perceived heading from unimodal visual (optic flow) stimuli, unimodal tactile (tactile flow) stimuli, or a combination of stimuli from both modalities simulating either congruent or incongruent heading (bimodal condition). In the bimodal condition, the subjects' task was to indicate visually perceived heading; hence, the tactile stimuli were behaviorally irrelevant. In bimodal trials, I found a significant interaction of stimuli from both modalities: visually perceived heading was biased towards the tactile heading direction for offsets of up to 10° between the two heading directions. The relative weighting of stimuli from both modalities in the visuo-tactile interaction was examined in my second behavioral study. Subjects indicated perceived heading in unimodal visual, unimodal tactile and bimodal trials. Here, in bimodal trials, stimuli from both modalities were presented as behaviorally relevant. By varying eye position relative to head position during stimulus presentation, possible influences of the different reference frames of the visual and tactile modalities were investigated: in each sensory modality, incoming information is encoded relative to the reference system of the receiving sensory organ (e.g., relative to the retina in vision or relative to the skin in somatosensation). In unimodal tactile trials, heading perception was shifted towards eye position. In bimodal trials, varying head and eye position had no significant effect on perceived heading: subjects indicated perceived heading based on both the visual and the tactile stimulus, independently of the behavioral relevance of the tactile stimulus. In sum, the results of both studies suggest that the tactile modality plays a greater role in self-motion perception than previously thought. Besides the perception of travel direction (heading), information about traveled speed and duration is integrated to achieve a measure of the distance traveled (path integration). One previous behavioral study has shown that tactile flow can be used for the reproduction of travel distance (Churan et al., 2017). However, studies on neural correlates of tactile distance encoding in humans are lacking entirely.
    In my third study, subjects solved two path integration tasks from unimodal visual and unimodal tactile self-motion stimuli while brain activity was measured by means of fMRI. The two tasks differed in their cognitive task demands. In the first task, subjects reproduced (Active trial) a previously observed traveled distance (Passive trial) (Reproduction task). In the second task, subjects traveled a self-chosen distance (Active trial), which was then recorded and played back to them (Passive trial) (Self task). Predictive coding theory postulates an internal model that creates predictions about sensory outcomes; mismatches between predictions and sensory input enable the system to sharpen future predictions (Teufel et al., 2018). Recent studies have suggested a synergistic interaction between prediction and cognitive demands, thereby reversing the attenuating effect of prediction. In my study, this hypothesis was tested by manipulating cognitive demands between the two tasks. For both tasks, Active trials compared to Passive trials showed BOLD enhancement in early sensory cortices and suppression in higher-order areas (e.g., the intraparietal lobule, IPL). For both modalities, enhancement of early sensory areas might facilitate the task-solving processes at hand, thereby reversing the hypothesized attenuating effect of prediction. Suppression of the IPL points to this area as an amodal comparator of predictions and incoming self-motion signals. In conclusion, I was able to show that tactile self-motion information, i.e., tactile flow, provides significant information for the processing of two key features of self-motion perception: heading and path integration. Neural correlates of tactile path integration were investigated by means of fMRI, revealing similarities between visual and tactile path integration at early processing stages as well as shared neural substrates in higher-order areas located in the IPL. Future studies should further investigate the perception of different self-motion parameters in the tactile modality to extend the understanding of this less researched but important modality.
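
    For readers unfamiliar with cue-combination modeling, the sketch below illustrates one standard model family, reliability-weighted (maximum-likelihood) integration. It is a generic illustration, not the specific models fitted in the thesis, and the variances used are invented:

```python
# Sketch of reliability-weighted (maximum-likelihood) cue integration for
# visuo-tactile heading: the predicted bimodal percept is a weighted average
# of the unimodal estimates, with weights set by each cue's reliability.
def mle_heading(visual_deg, tactile_deg, sigma_visual, sigma_tactile):
    """Predicted bimodal heading percept and its standard deviation."""
    w_v = (1 / sigma_visual**2) / (1 / sigma_visual**2 + 1 / sigma_tactile**2)
    w_t = 1.0 - w_v
    combined = w_v * visual_deg + w_t * tactile_deg
    combined_sigma = (1 / (1 / sigma_visual**2 + 1 / sigma_tactile**2)) ** 0.5
    return combined, combined_sigma

# Visual heading straight ahead (0 deg), tactile flow offset by 10 deg;
# a more reliable visual cue pulls the percept only slightly toward touch.
print(mle_heading(0.0, 10.0, sigma_visual=3.0, sigma_tactile=6.0))
```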

    Designing multimodal interaction for the visually impaired

    Although multimodal computer input is believed to have advantages over unimodal input, little has been done to understand how to design a multimodal input mechanism that facilitates visually impaired users' information access. This research investigates sighted and visually impaired users' multimodal interaction choices when they are given an interaction grammar that supports speech and touch input modalities. It investigates whether task type, working memory load, or the prevalence of errors in a given modality affects a user's choice. Theories of human memory and attention are used to explain the users' coordination of speech and touch input. Among the abundant findings from this research, the following are the most important in guiding system design: (1) Multimodal input is likely to be used when it is available. (2) Users select input modalities based on the type of task undertaken: they prefer touch input for navigation operations but speech input for non-navigation operations. (3) When errors occur, users prefer to stay in the failing modality instead of switching to another modality for error correction. (4) Despite these common multimodal usage patterns, there is still a high degree of individual difference in modality choices. Additional findings include: (1) Modality switching becomes more prevalent when lower working memory and attentional resources are required for the performance of other concurrent tasks. (2) Higher error rates increase modality switching, but only under duress. (3) Training order affects modality usage: teaching a modality first rather than second increases its use in users' task performance. In addition to discovering the multimodal interaction patterns above, this research contributes to the field of human-computer interaction design by (1) presenting the design of an eyes-free multimodal information browser and (2) presenting a Wizard of Oz method for working with visually impaired users in order to observe their multimodal interaction. The overall contribution of this work is that it is one of the early investigations into how speech and touch might be combined into a non-visual multimodal system that can be used effectively for eyes-free tasks.
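
    Design finding (2) can be pictured as a small interaction-grammar table that routes touch gestures to navigation operations and speech phrases to non-navigation operations. The sketch below is hypothetical: the gesture, phrase and operation names are invented and are not taken from the study's grammar.

```python
# Hypothetical sketch of an eyes-free browser's interaction grammar that
# reflects finding (2): touch gestures for navigation, speech for the rest.
TOUCH_GESTURES = {            # preferred modality for navigation operations
    "swipe_right": "next_item",
    "swipe_left": "previous_item",
    "two_finger_swipe_down": "next_page",
    "two_finger_swipe_up": "previous_page",
}

SPEECH_COMMANDS = {           # preferred modality for non-navigation operations
    "search": "open_search",
    "bookmark": "bookmark_item",
    "read from here": "start_reading",
}

def dispatch(modality: str, token: str) -> str:
    """Map a recognized touch gesture or speech phrase to an operation."""
    table = TOUCH_GESTURES if modality == "touch" else SPEECH_COMMANDS
    return table.get(token, "unrecognized")

print(dispatch("touch", "swipe_right"))   # -> next_item
print(dispatch("speech", "search"))       # -> open_search
```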

    Spatial Auditory Maps for Blind Travellers

    Empirical research shows that blind persons who have the ability and opportunity to access geographic map information tactually benefit in their mobility. Unfortunately, tangible maps are not available in large numbers, and economics is the leading explanation: tangible maps are expensive to build, duplicate and distribute. SAM, short for Spatial Auditory Map, is a prototype created to address the unavailability of tangible maps. SAM presents geographic information to a blind person encoded in sound. A blind person receives maps electronically and accesses them using a small, inexpensive digitizing tablet connected to a PC. The interface provides location-dependent sound as the user manipulates a stylus, plus a schematic visual representation for users with residual vision. The assessment of SAM with a group of blind participants suggests that blind users can learn unknown environments as complex as those represented by tactile maps, in the same amount of reading time. This research opens new avenues in visualization techniques, promotes alternative communication methods, and proposes a human-computer interaction framework for conveying map information to a blind person.
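
    The core interaction SAM describes, playing a location-dependent sound as the stylus moves, can be sketched as a lookup from stylus coordinates to map features. The example below is hypothetical: the feature rectangles and sound file names are invented, not taken from the prototype.

```python
# Hypothetical sketch of location-dependent sound selection: look up the map
# feature under the current stylus (x, y) position and pick its earcon.
FEATURES = [
    # (name, x_min, y_min, x_max, y_max, earcon)
    ("Main Street", 0, 40, 100, 60, "street_tone.wav"),
    ("City Park",  10, 60,  50, 100, "park_ambience.wav"),
    ("Library",    60, 70,  80,  90, "building_chime.wav"),
]

def sound_at(x: float, y: float) -> str:
    """Return the earcon for the map feature under the stylus, if any."""
    for name, x0, y0, x1, y1, earcon in FEATURES:
        if x0 <= x <= x1 and y0 <= y <= y1:
            return earcon
    return "silence.wav"   # open space between features

# Called on every stylus-move event reported by the digitizing tablet.
print(sound_at(30, 75))   # -> park_ambience.wav
```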

    Assessment of Audio Interfaces for use in Smartphone Based Spatial Learning Systems for the Blind

    Recent advancements in the fields of indoor positioning and mobile computing promise the development of smartphone-based indoor navigation systems. Currently, preliminary implementations of such systems only use visual interfaces, which makes them inaccessible to blind and low-vision users. According to the World Health Organization, about 39 million people in the world are blind. This necessitates the development and evaluation of non-visual interfaces for indoor navigation systems that support safe and efficient spatial learning and navigation behavior. This thesis research empirically evaluated several different approaches through which spatial information about the environment can be conveyed through audio. In the first experiment, blindfolded participants standing at an origin in a lab learned the distance and azimuth of target objects that were specified by four audio modes. The first three modes were perceptual interfaces and did not require cognitive mediation on the part of the user. The fourth mode was a non-perceptual mode in which object descriptions were given via spatial language using clockface angles. After learning the targets through the four modes, the participants spatially updated the positions of the targets and localized them by walking to each of them from two indirect waypoints. The results indicated that the hand-motion-triggered mode was better than the head-motion-triggered mode and comparable to the auditory snapshot mode. In the second experiment, blindfolded participants learned target object arrays with two spatial audio modes and a visual mode. In the first mode head tracking was enabled, whereas in the second mode hand tracking was enabled. In the third mode, serving as a control, the participants were allowed to learn the targets visually. We again compared spatial updating performance across these modes and found no significant performance differences between them. These results indicate that we can develop 3D audio interfaces on sensor-rich, off-the-shelf smartphone devices without the need for expensive head-tracking hardware. Finally, a third study evaluated room-layout learning performance by blindfolded participants with an Android smartphone. Three perceptual modes and one non-perceptual mode were tested for cognitive map development. As expected, the perceptual interfaces performed significantly better than the non-perceptual, language-based mode in an allocentric pointing judgment and in overall subjective rating. In sum, the perceptual interfaces led to better spatial learning performance and higher user ratings, and there was no significant difference between cognitive maps developed through spatial audio based on tracking the user's head or hand. These results have important implications, as they support the development of accessible, perceptually driven interfaces for smartphones.
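
    The fourth, non-perceptual mode described above conveys targets via spatial language with clockface angles. A minimal sketch of such a conversion is shown below; the exact phrasing is an assumption rather than the wording used in the study.

```python
# Sketch of converting a target's azimuth and distance into a clockface
# description that could be spoken by TTS in a spatial-language mode.
def clockface_description(azimuth_deg: float, distance_m: float) -> str:
    """Describe a target given its azimuth (0 = straight ahead, clockwise)."""
    hour = round((azimuth_deg % 360) / 30) % 12   # 30 degrees per clock hour
    hour = 12 if hour == 0 else hour
    return f"target at {hour} o'clock, {distance_m:.1f} meters"

print(clockface_description(azimuth_deg=60, distance_m=2.5))
# -> "target at 2 o'clock, 2.5 meters"
```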

    Spatial Displays and Spatial Instruments

    The conference proceedings topics are divided into two main areas: (1) issues of spatial and picture perception raised by graphical electronic displays of spatial information; and (2) design questions raised by the practical experience of designers actually defining new spatial instruments for use in new aircraft and spacecraft. Each topic is considered from both a theoretical and an applied direction. Emphasis is placed on discussion of phenomena and determination of design principles