237 research outputs found

    Tac-tiles: multimodal pie charts for visually impaired users

    Tac-tiles is an accessible interface that allows visually impaired users to browse graphical information using tactile and audio feedback. The system uses a graphics tablet augmented with a tangible overlay tile to guide user exploration. Dynamic feedback is provided by a tactile pin-array at the fingertips and through speech and non-speech audio cues. In designing the system, we seek to preserve the affordances and metaphors of traditional, low-tech teaching media for the blind and to combine these with the benefits of a digital representation. Traditional tangible media allow rapid, non-sequential access to data, promote easy and unambiguous access to resources such as axes and gridlines, allow the use of external memory, and preserve visual conventions, thus promoting collaboration with sighted colleagues. A prototype system was evaluated with visually impaired users, and recommendations for multimodal design were derived.

    Feeling what you hear: tactile feedback for navigation of audio graphs

    Access to digitally stored numerical data is currently very limited for sight-impaired people. Graphs and visualizations are often used to analyze relationships between numerical data, but the current methods of accessing them are highly visually mediated. Representing data using audio feedback is a common way of making data more accessible, but methods of navigating and accessing the data are often serial in nature and laborious. Tactile or haptic displays could be used to provide additional feedback to support a point-and-click style of interaction for visually impaired users. A requirements capture conducted with sight-impaired computer users produced a review of current accessibility technologies, and guidelines were extracted for using tactile feedback to aid navigation. The results of a qualitative evaluation with a prototype interface are also presented. Providing an absolute-position input device and tactile feedback allowed users to explore the graph using tactile and proprioceptive cues in a manner analogous to point-and-click techniques.

    Investigating the generalizability of EEG-based Cognitive Load Estimation Across Visualizations

    We examine whether EEG-based cognitive load (CL) estimation generalizes across character, spatial pattern, bar graph, and pie chart visualizations for the n-back task. CL is estimated via two recent approaches: (a) a deep convolutional neural network, and (b) proximal support vector machines. Experiments reveal that CL estimation suffers across visualizations, motivating the need for effective machine learning techniques to benchmark visual interface usability for a given analytic task.
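
    A minimal sketch of the leave-one-visualization-out protocol such a generalizability study implies: train a cognitive-load classifier on trials from all but one visualization type and test it on the held-out type. It substitutes a plain linear SVM on pre-extracted features for the paper's deep CNN and proximal SVM, and every array below is a synthetic placeholder.

    # Leave-one-visualization-out evaluation of a cognitive-load classifier.
    # X: (trials, features) EEG features; y: low/high load labels;
    # vis_labels: which visualization each trial was recorded under.
    import numpy as np
    from sklearn.metrics import accuracy_score
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.svm import SVC

    def leave_one_visualization_out(X, y, vis_labels):
        scores = {}
        for held_out in np.unique(vis_labels):
            train = vis_labels != held_out
            test = vis_labels == held_out
            clf = make_pipeline(StandardScaler(), SVC(kernel="linear"))
            clf.fit(X[train], y[train])
            scores[held_out] = accuracy_score(y[test], clf.predict(X[test]))
        return scores

    # Synthetic stand-in data: 200 trials, 32 features, 4 visualization conditions.
    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 32))
    y = rng.integers(0, 2, size=200)
    vis = rng.choice(["character", "spatial", "bar", "pie"], size=200)
    print(leave_one_visualization_out(X, y, vis))

    Held-out accuracies falling well below within-visualization accuracy would correspond to the generalization gap the abstract reports.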

    SeeChart: Enabling Accessible Visualizations Through Interactive Natural Language Interface For People with Visual Impairments

    Web-based data visualizations have become very popular for exploring data and communicating insights. Newspapers, journals, and reports regularly publish visualizations to tell compelling stories with data. Unfortunately, most visualizations are inaccessible to readers with visual impairments. For many charts on the web, there are no accompanying alternative (alt) texts, and even when such texts exist they do not adequately describe the important insights in the charts. To address this problem, we first interviewed 15 blind users to understand their challenges and requirements for reading data visualizations. Based on the insights from these interviews, we developed SeeChart, an interactive tool that automatically deconstructs charts from web pages and converts them into accessible visualizations for blind people, enabling them to hear a chart summary and to interact with data points using the keyboard. Our evaluation with 14 blind participants suggests the efficacy of SeeChart in conveying key insights from charts and fulfilling participants' information needs while reducing the time and cognitive burden required. Comment: 28 pages, 13 figures.
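
    As an illustration of the summary step only, the sketch below turns data already deconstructed from a chart (labels and values) into a short spoken-style overview of the kind a screen-reader user could listen to; the function name, wording, and statistics chosen are assumptions rather than SeeChart's actual output, and the real tool extracts this data from web charts automatically.

    def summarize_chart(title, labels, values):
        """Produce a brief textual overview of a deconstructed chart (hypothetical format)."""
        hi = max(range(len(values)), key=values.__getitem__)
        lo = min(range(len(values)), key=values.__getitem__)
        return (
            f"{title}: {len(values)} data points. "
            f"Highest is {labels[hi]} at {values[hi]}; "
            f"lowest is {labels[lo]} at {values[lo]}. "
            f"Total {sum(values)}, average {sum(values) / len(values):.1f}."
        )

    print(summarize_chart("Monthly visitors", ["Jan", "Feb", "Mar"], [120, 95, 160]))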

    Understanding Visualization: A formal approach using category theory and semiotics

    This article combines the vocabulary of semiotics and category theory to provide a formal analysis of visualization. It shows how familiar processes of visualization fit the semiotic frameworks of both Saussure and Peirce, and extends these structures using the tools of category theory to provide a general framework for understanding visualization in practice, including: relationships between systems, data collected from those systems, renderings of those data in the form of representations, the reading of those representations to create visualizations, and the use of those visualizations to create knowledge and understanding of the system under inspection. The resulting framework is validated by demonstrating how familiar information visualization concepts (such as literalness, sensitivity, redundancy, ambiguity, generalizability, and chart junk) arise naturally from it and can be defined formally and precisely. This article generalizes previous work on the formal characterization of visualization by, inter alia, Ziemkiewicz and Kosara, and allows us to formally distinguish properties of the visualization process that previous work does not.
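
    The pipeline the abstract enumerates (system, data, representation, visualization, knowledge) reads naturally as a chain of arrows; the following is only a hedged sketch of how such a chain might be written categorically, with object and arrow names of my own choosing rather than the paper's actual construction.

    \[
      S \xrightarrow{\ \mathrm{measure}\ } D
        \xrightarrow{\ \mathrm{render}\ } R
        \xrightarrow{\ \mathrm{read}\ } V
        \xrightarrow{\ \mathrm{interpret}\ } K,
      \qquad
      \mathrm{understanding} \;=\; \mathrm{interpret} \circ \mathrm{read} \circ \mathrm{render} \circ \mathrm{measure}.
    \]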

    Non-Visual Representation of Complex Documents for Use in Digital Talking Books

    Essential written information such as textbooks, bills, and catalogues needs to be accessible to everyone. However, such access is not always available to vision-impaired people, as they require electronic documents to be provided in specific formats. To address the accessibility issues of electronic documents, this research aims to design an affordable, portable, standalone, and simple-to-use complete reading system that converts and describes the complex components of electronic documents for print-disabled users.

    MAIDR: Making Statistical Visualizations Accessible with Multimodal Data Representation

    This paper investigates new data exploration experiences that enable blind users to interact with statistical data visualizations (bar plots, heat maps, box plots, and scatter plots) by leveraging multimodal data representations. In addition to the sonification and textual descriptions commonly employed by existing accessible visualizations, our MAIDR (multimodal access and interactive data representation) system incorporates two additional modalities (braille and review) that offer complementary benefits. It also provides blind users with the autonomy and control to interactively access and understand data visualizations. In a user study involving 11 blind participants, we found that the MAIDR system facilitated the accurate interpretation of statistical visualizations. Participants exhibited a range of strategies for combining multiple modalities, influenced by their past interactions and experiences with data visualizations. This work accentuates the overlooked potential of combining refreshable tactile representation with other modalities and elevates the discussion of the importance of user autonomy when designing accessible data visualizations. Comment: Accepted to CHI 2024. Source code is available at https://github.com/xability/maid
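
    As an illustration of the sonification modality alone, the sketch below maps each bar value to a tone whose pitch rises with the value and writes the sequence to a WAV file using only the Python standard library; the frequency range, tone duration, and value-to-pitch mapping are illustrative assumptions, not MAIDR's actual parameters.

    import math
    import struct
    import wave

    def sonify_bars(values, path="bars.wav", rate=44100, dur=0.3):
        """Render one short sine tone per bar, pitch rising with the bar's value."""
        lo, hi = min(values), max(values)
        frames = bytearray()
        for v in values:
            # Linear map from data value to a pitch between 220 Hz and 880 Hz.
            frac = (v - lo) / (hi - lo) if hi > lo else 0.5
            freq = 220.0 + (880.0 - 220.0) * frac
            for n in range(int(rate * dur)):
                sample = int(32767 * 0.4 * math.sin(2 * math.pi * freq * n / rate))
                frames += struct.pack("<h", sample)
        with wave.open(path, "wb") as w:
            w.setnchannels(1)
            w.setsampwidth(2)          # 16-bit samples
            w.setframerate(rate)
            w.writeframes(bytes(frames))

    sonify_bars([3, 7, 5, 9, 2])       # five bars -> five tones, pitch tracking the values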

    Android Implementation of a Visualisation, Sonification and AI-Assisted Interpretation of Neonatal EEG

    This project covers the development of deep neural network models for the detection of neonatal seizures and the implementation of the detection system as an Android application. The aim is to implement an Android app that helps healthcare professionals check newborn health status by observing neonatal EEG signals, without requiring extensive training in EEG interpretation. To satisfy that aim, the project is divided into three blocks: AI-assisted neonatal EEG interpretation, EEG sonification, and a graphical user interface. The AI-assisted block detects neonatal seizures with a fully convolutional deep neural network, adapting an existing offline-trained Python model to the Android environment. The sonification work consisted of adapting a previously developed algorithm based on the phase vocoder, which had already been implemented on Android by another UPC student. The graphical user interface presents the information from the sonification and the neural network in an integrated, user-friendly way so that users can interpret it easily, making the application useful to a wide range of users.
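
    A minimal sketch of a fully convolutional 1D network of the kind described for the seizure-detection block, written in PyTorch purely for illustration (the project itself adapts an existing offline-trained model); the channel counts, kernel sizes, and the 8-channel, 256-sample input below are assumptions, not the deployed architecture.

    import torch
    import torch.nn as nn

    class FullyConvSeizureNet(nn.Module):
        """Fully convolutional classifier for EEG windows: seizure vs. non-seizure."""

        def __init__(self, in_channels=8):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv1d(in_channels, 32, kernel_size=7, padding=3),
                nn.BatchNorm1d(32), nn.ReLU(),
                nn.Conv1d(32, 64, kernel_size=5, padding=2),
                nn.BatchNorm1d(64), nn.ReLU(),
                nn.Conv1d(64, 64, kernel_size=3, padding=1),
                nn.BatchNorm1d(64), nn.ReLU(),
            )
            # A 1x1 convolution as the head keeps the model fully convolutional,
            # so it accepts EEG windows of any length.
            self.head = nn.Conv1d(64, 2, kernel_size=1)

        def forward(self, x):              # x: (batch, channels, samples)
            logits = self.head(self.features(x))
            return logits.mean(dim=-1)     # average over time -> (batch, 2)

    model = FullyConvSeizureNet()
    scores = model(torch.randn(4, 8, 256))  # four 1-second windows at 256 Hz
    print(scores.shape)                     # torch.Size([4, 2])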