
    Non-visual overviews of complex data sets

    This paper describes the design and preliminary testing of an interface for obtaining overview information from complex numerical data tables non-visually, something that currently available accessibility tools for blind and visually impaired users cannot do. A sonification technique that hides detail in the data and highlights its main features, without performing any computations on the data, is combined with a graphics tablet for focus+context interactive navigation in an interface called TableVis. Results from its evaluation suggest that this technique outperforms speech in time taken to answer overview questions, correctness of the answers, and subjective workload.
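
    The paper itself does not include code, but the underlying idea, sounding every value in a row at once instead of computing a summary statistic first, can be sketched briefly in Python. The pitch range, durations and file name below are illustrative assumptions, not details of TableVis:

```python
import numpy as np
import wave

SR = 44100  # sample rate in Hz

def row_to_chord(values, vmin, vmax, dur=0.3):
    """Render one table row as a single auditory event: every value in the
    row becomes a sine partial, pitch-mapped over roughly 200-2000 Hz
    (an assumed range). No mean or maximum is computed; all data points
    sound at once, so the listener, not the program, extracts the overview."""
    t = np.linspace(0, dur, int(SR * dur), endpoint=False)
    chord = np.zeros_like(t)
    for v in values:
        frac = (v - vmin) / (vmax - vmin)        # normalise value to [0, 1]
        freq = 200.0 * (2000.0 / 200.0) ** frac  # log-frequency mapping
        chord += np.sin(2 * np.pi * freq * t)
    return chord / max(len(values), 1)           # normalise amplitude

# Example: a small 3x5 numeric table rendered as one chord per row.
table = np.array([[3, 4, 5, 4, 3],
                  [9, 8, 9, 9, 8],
                  [1, 2, 1, 2, 1]], dtype=float)
audio = np.concatenate([row_to_chord(r, table.min(), table.max())
                        for r in table])

with wave.open("overview.wav", "wb") as f:
    f.setnchannels(1)
    f.setsampwidth(2)
    f.setframerate(SR)
    f.writeframes((audio * 32767 * 0.8).astype(np.int16).tobytes())
```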

    Investigating Perceptual Congruence Between Data and Display Dimensions in Sonification

    The relationships between sounds and their perceived meaning and connotations are complex, making auditory perception an important factor to consider when designing sonification systems. Listeners often have a mental model of how a data variable should sound during sonification, and this model is not considered in most data:sound mappings. This can lead to mappings that are difficult to use and cause confusion. To investigate this issue, we conducted a magnitude estimation experiment to map how roughness, noise and pitch relate to the perceived magnitude of stress, error and danger. These parameters were chosen due to previous findings that suggest perceptual congruency between these auditory sensations and conceptual variables. Results from this experiment show that polarity and scaling preference depend on the data:sound mapping. This work provides polarity and scaling values that sonification designers may use directly to improve auditory displays in areas such as accessible and mobile computing, process monitoring and biofeedback.
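
    As a concrete, hypothetical illustration of how a designer might consume such polarity and scaling values, the sketch below applies a Stevens-style power function to a normalised data variable. The exponent of 0.6 and the positive polarity are placeholders, not findings from this paper:

```python
def map_data_to_sound(x, exponent, polarity=+1):
    """Map a normalised data value x in [0, 1] to a normalised sound-parameter
    level using a Stevens-style power function. 'exponent' and 'polarity'
    would come from magnitude-estimation results such as those reported
    here; the values used below are placeholders."""
    if polarity < 0:      # negative polarity: more data means less of the parameter
        x = 1.0 - x
    return x ** exponent  # compressive (<1) or expansive (>1) scaling

# Hypothetical example: a danger level driving signal roughness with
# positive polarity and a compressive exponent of 0.6.
for danger in (0.0, 0.25, 0.5, 0.75, 1.0):
    print(danger, round(map_data_to_sound(danger, exponent=0.6), 3))
```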

    Evaluation of Psychoacoustic Sound Parameters for Sonification

    Sonification designers have little theory or experimental evidence to guide the design of data-to-sound mappings. Many mappings use acoustic representations of data values that do not correspond with the listener's perception of how that data value should sound during sonification. This research evaluates data-to-sound mappings based on psychoacoustic sensations, in an attempt to move towards mappings that are aligned with the listener's perception of a data value's auditory connotations. Multiple psychoacoustic parameters were evaluated over two experiments, designed in the context of a domain-specific problem: detecting the level of focus of an astronomical image through auditory display. Recommendations for designing sonification systems with psychoacoustic sound parameters are presented based on our results.
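
    To make the parameters concrete: the sensation of roughness is commonly induced by amplitude-modulating a carrier at rates around 70 Hz (where modulation is heard as roughness rather than beating or a separate tone), and broadband noise can simply be mixed in. The following sketch synthesises such a signal; all defaults are illustrative and do not reproduce the stimuli used in these experiments:

```python
import numpy as np

SR = 44100  # sample rate in Hz

def rough_noisy_tone(freq=440.0, mod_rate=70.0, mod_depth=1.0,
                     noise_level=0.1, dur=1.0):
    """Synthesise two of the psychoacoustic sensations discussed here:
    roughness, via amplitude modulation of a sine carrier, plus broadband
    (white) noise at a controllable level."""
    t = np.linspace(0, dur, int(SR * dur), endpoint=False)
    carrier = np.sin(2 * np.pi * freq * t)
    am = 1.0 + mod_depth * np.sin(2 * np.pi * mod_rate * t)  # modulator
    noise = noise_level * np.random.uniform(-1, 1, t.size)
    sig = carrier * am / (1.0 + mod_depth) + noise
    return sig / np.max(np.abs(sig))  # normalise to [-1, 1]

tone = rough_noisy_tone()  # can be written to a WAV as in the earlier sketch
```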

    Developing an interactive overview for non-visual exploration of tabular numerical information

    This thesis investigates the problem of obtaining overview information from complex tabular numerical data sets non-visually. Blind and visually impaired people need to access and analyse numerical data, both in education and in professional occupations. Obtaining an overview is a necessary first step in data analysis, for which current non-visual data accessibility methods offer little support. This thesis describes a new interactive parametric sonification technique called High-Density Sonification (HDS), which facilitates the process of extracting overview information from the data easily and efficiently by rendering multiple data points as single auditory events. Beyond obtaining an overview of the data, experimental studies showed that the capabilities of human auditory perception and cognition to extract meaning from HDS representations could be used to reliably estimate relative arithmetic mean values within large tabular data sets. Following a user-centred design methodology, HDS was implemented as the primary form of overview information display in a multimodal interface called TableVis. This interface supports the active process of interactive data exploration non-visually, making use of proprioception to maintain contextual information during exploration (non-visual focus+context), vibrotactile data annotations (EMA-Tactons) that can be used as external memory aids to prevent high mental workload levels, and speech synthesis to access detailed information on demand. A series of empirical studies was conducted to quantify the performance attained in the exploration of tabular data sets for overview information using TableVis, by comparing HDS with the main current non-visual accessibility technique (speech synthesis) and by quantifying the effect of data set size on user performance. These studies showed that HDS resulted in better performance than speech, and that this performance was not heavily dependent on the size of the data set. In addition, levels of subjective workload during exploration tasks using TableVis were investigated, resulting in the proposal of EMA-Tactons, vibrotactile annotations that the user can add to the data to prevent working-memory saturation in the most demanding data exploration scenarios. An experimental evaluation found that EMA-Tactons significantly reduced mental workload in data exploration tasks. Thus, the work described in this thesis provides a basis for the interactive non-visual exploration of numerical data tables across a broad range of sizes, offering techniques to extract overview information quickly, perform perceptual estimations of data descriptors (relative arithmetic mean) and manage demands on mental workload through vibrotactile data annotations, while seamlessly linking with explorations at different levels of detail and preserving spatial data representation metaphors to support collaboration with sighted users.
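
    The thesis pairs HDS with an absolute graphics tablet so that arm position, rather than audio alone, carries the contextual "where am I in the table" information. A minimal sketch of that coordinate-to-cell mapping, with invented dimensions and function names, might look like this:

```python
def cell_under_pen(x, y, width, height, n_rows, n_cols):
    """Map an absolute pen position on a graphics tablet of size
    width x height to a (row, column) index. Because the tablet is
    absolute, the user's arm position itself encodes table location
    (proprioceptive, non-visual focus+context)."""
    row = min(int(y / height * n_rows), n_rows - 1)
    col = min(int(x / width * n_cols), n_cols - 1)
    return row, col

# Hypothetical exploration step: in a rows mode, each pen sample would
# trigger one HDS event for the whole row under the pen, while a tap
# could request speech output of the single cell instead.
row, col = cell_under_pen(x=80.0, y=30.0, width=200.0, height=150.0,
                          n_rows=10, n_cols=24)
print(f"pen over row {row}, column {col}")
```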

    Investigating perceptual congruence between information and sensory parameters in auditory and vibrotactile displays

    A fundamental interaction between a computer and its user(s) is the transmission of information between the two, and there are many situations where this interaction must occur non-visually, for example through sound or vibration. To design successful interactions in these modalities, it is necessary to understand how users perceive mappings between information and acoustic or vibration parameters, so that these parameters can be designed to be perceived as congruent. This thesis investigates several data-sound and data-vibration mappings by using psychophysical scaling to understand how users perceive the mappings. It also investigates the impact that using these methods during design has when they are integrated into an auditory or vibrotactile display. To investigate acoustic parameters that may provide more perceptually congruent data-sound mappings, Experiments 1 and 2 explored several psychoacoustic parameters for use in a mapping. These studies found that applying amplitude modulation (perceived as roughness) to a signal, or adding broadband noise to it, resulted in performance similar to that achieved when conducting the task visually. Experiments 3 and 4 used scaling methods to map how a user perceived a change in an information parameter for a given change in an acoustic or vibrotactile parameter. Experiment 3 showed that increases in acoustic parameters generally considered undesirable in music were perceived as congruent with information parameters with negative valence, such as stress or danger. Experiment 4 found that data-vibration mappings were more generalised: a given increase in a vibrotactile parameter was almost always perceived as an increase in an information parameter, regardless of the valence of the information parameter. Experiments 5 and 6 investigated the impact that using results from the scaling methods of Experiments 3 and 4 had on users' performance with an auditory or vibrotactile display. These experiments also explored the impact that the complexity of the context in which the display was placed had on user performance. These studies found that using mappings based on scaling results did not significantly affect users' performance with a simple auditory display, but it did reduce response times in a more complex use case.
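
    The scaling method referred to here, magnitude estimation, is typically analysed by fitting Stevens' power law P = k * S^a, which is linear in log-log coordinates. The sketch below recovers the exponent from made-up responses; the data are not from this thesis:

```python
import numpy as np

# Hypothetical magnitude-estimation data: physical stimulus intensities and
# the numbers one listener assigned to them (invented for illustration).
stimulus  = np.array([0.1, 0.2, 0.4, 0.8, 1.6])
judgement = np.array([2.0, 3.1, 4.6, 7.2, 10.9])

# Stevens' power law, P = k * S**a, becomes a straight line in log-log
# space: log P = log k + a * log S. A least-squares fit gives the
# exponent a (the scaling value) directly as the slope.
a, log_k = np.polyfit(np.log(stimulus), np.log(judgement), 1)
print(f"exponent a = {a:.2f}, constant k = {np.exp(log_k):.2f}")
```

    A compressive exponent (a < 1) means large changes in the physical parameter are needed for a given perceived change; the sign of the fitted relationship corresponds to the mapping's polarity.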

    Overviews and their effect on interaction in the auditory interface.

    Auditory overviews have the potential to improve the quality of auditory interfaces. However, in order to apply overviews well, we must understand them: specifically, what are they and what is their impact? This thesis presents six characteristics that overviews should have. They should be a structured representation of the detailed information, define the scope of the material, guide the user, show context and patterns in the data, encourage exploration of the detail, and represent the current state of the data. These characteristics are derived from a systematic review of visual overview research, analysis of established visual overviews, and an evaluation of how well current auditory overviews fit them. The second half of the thesis evaluates how the addition of an overview affects user interaction. While the overviews do not improve performance, they do change navigation patterns from data exploration and discovery to guided, directed information seeking. With these two contributions, we gain a better understanding of how overviews work in an auditory interface and how they might be exploited more effectively.

    Actas da 2ª Conferência Nacional em Interacção Pessoa-Máquina

    Proceedings of the 2nd National Conference on Human-Computer Interaction, Braga, 16-18 October 2006. Following the success of the first edition, held in July 2004 at the Faculdade de Ciências da Universidade de Lisboa, Interacção 2006, the 2nd National Conference on Human-Computer Interaction, was organised as a joint initiative of the Grupo Português de Computação Gráfica and the Departamento de Informática/Centro de Ciências da Computação of the Universidade do Minho. As in its first edition, Interacção 2006 aimed to provide a meeting point for the community interested in Human-Computer Interaction in Portugal. Bringing together researchers, lecturers and practitioners, it enabled the dissemination of work and the exchange of experiences between the academic and industrial communities.

    Understanding and Supporting Cross-modal Collaborative Information Seeking

    Most previous studies of web access by users with visual impairments (VI) have focused solely on single-user human-web interaction. This thesis explores the under-investigated area of cross-modal collaborative information seeking (CCIS): the challenges and opportunities in supporting visually impaired users to take an effective part in collaborative web search tasks with sighted peers. The thesis examines the overall question of what happens currently when people perform CCIS and how the CCIS process might be improved. To motivate the work, we conducted a survey, the results of which showed that a significant amount of CCIS activity takes place. An exploratory study was then conducted to investigate the challenges faced and the behaviour patterns that occur when people perform CCIS. We observed 14 pairs of VI and sighted users in both co-located and distributed settings. In this study, participants used their tools of choice: their preferred web browser, note-taking tool and communication system. The study examines how concepts from the "mainstream" collaborative information seeking (CIS) literature play out in the context of cross-modality. Based on the findings of this study, we produced design recommendations for features that can better support cross-modal collaborative search. Following this, we surveyed mainstream CIS systems and selected the most accessible software package that satisfied the design recommendations from the initial study. Because the software was not built with accessibility in mind, we developed JAWS scripts and employed other JAWS features to improve its accessibility and the VI user experience. We then performed a second study with the same participants, undertaking search tasks of similar complexity to those in the first study, but this time using the CIS system. The aim of this study was to explore the impact on the CCIS process of introducing a mainstream CIS system enhanced for accessibility. In this study we examined CCIS from two perspectives: the collaboration and the individual's interaction with the interface. The findings provide an understanding of the process of CCIS when using a system that supports it, and assisted us in formulating a set of guidelines for supporting collaborative search in a cross-modal context.