PhD thesis

The increasingly broad spectrum of interaction contexts has exposed the limitations
of the Graphical User Interface. However, while the benefits of multimodal computing are
increasingly apparent, visually impaired users remain faced with many challenges that
prevent them from fully exploiting the benefits of graphical information. This thesis aims
to contribute to the research area of accessible graphical information, and to propose a
methodological framework for improving multimodal graph interaction.
The experiments described in this thesis employ mobile “tablet” devices, as these are
an already well-established tool within education, and their form factor appears well
suited to tasks undertaken on a surface that is both accessible and large enough to
afford a fair degree of graphical resolution. Three central questions
examined in this thesis are as follows:
1) How accurately can visually impaired users estimate the values of data points
rendered in auditory graphs presented on a mobile device?
2) Are there modes of interaction which can improve the ability of visually impaired
people to perform point estimation tasks presented on a mobile device?
3) What format should the auditory display take to enable accurate understanding and
efficient processing of auditory graphs?
An analysis of point estimation errors and of the correlation between predicted and
actual data points was used to examine the first question. The thesis describes in detail
how RMSEs (root-mean-square errors) and correlation values vary, generally worsening,
as the number of data points in the presented auditory graphs increases.
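The two measures used in this analysis are standard; a minimal sketch of how they could be computed over a participant's estimates (illustrative only, not the thesis's actual analysis code) is:

```python
import math

def rmse(predicted, actual):
    """Root-mean-square error between estimated and true data-point values."""
    return math.sqrt(sum((p - a) ** 2 for p, a in zip(predicted, actual)) / len(actual))

def pearson_r(xs, ys):
    """Pearson correlation between estimated and true data-point values."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)
```

Lower RMSE and higher correlation both indicate more accurate point estimation.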
Multi-touch gestures are then investigated as an alternative to passive listening, with
the aim of making point estimation tasks more active and engaging, which in
turn might lead to improved performance (question 2). The investigation showed that the
additional touch modality enabled visually impaired users to perform point estimation
tasks with higher correlations with actual values and lower point estimation errors. The
analysis reveals that combining audio playback with user interaction offers an advantage
over auditory graph presentation requiring only passive listening. In the final two studies
of the thesis, we examine different approaches to the presentation of y-coordinates in
auditory graphs (question 3), including the representation of negative numbers. These
studies involved both normally sighted and visually impaired users, as there are
applications where normally sighted users might employ auditory graphs, such as the
unseen monitoring of stocks, or fuel consumption in a car.
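An auditory graph typically conveys each y-value through pitch. One simple linear mapping that also accommodates negative values (a sketch with arbitrary parameter values, not the mapping used in these studies) might be:

```python
def value_to_pitch(value, v_min=-10.0, v_max=10.0,
                   f_min=220.0, f_max=880.0):
    """Map a data value (which may be negative) linearly onto a frequency
    range in Hz; the zero line then falls at a fixed intermediate pitch
    that can serve as an audible reference. All parameter values here
    are illustrative assumptions."""
    frac = (value - v_min) / (v_max - v_min)  # 0.0 .. 1.0 across the data range
    return f_min + frac * (f_max - f_min)
```

Under this scheme, a value of 0 in a -10..10 range sounds at the midpoint of the frequency range, so listeners can hear whether a data point lies above or below zero.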
A mixed-methods approach was employed, combining quantitative statistics with
qualitative data from interviews and informal feedback to form a rounded picture of the
results of the studies. The experiments employed tablet-based prototypes, and data were
captured primarily through audio recordings, notes taken on a laptop, and digital timing
data. Participants were recruited from the visually impaired and normally sighted
populations, and were mostly resident in either London or Jakarta.
Multi-reference sonification schemes are investigated as a means of improving the
performance of mobile non-visual point estimation tasks. The results showed that both
populations are able to carry out point estimation tasks with a good level of performance
when presented with auditory graphs using multiple reference tones. Additionally, visually
impaired participants performed better on graphs represented in this format than normally
sighted participants. This work introduces a new multimodal approach, based on the
combination of audio and multi-touch gesture interaction, that supports more accurate
point estimation and graph reproduction and improves the accessibility of tablet and
smartphone user interfaces.
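The multi-reference idea described above can be sketched as a playback sequence in which tones at known reference values precede the data tones, giving listeners anchors against which to judge each point (the reference values and ordering here are assumptions for illustration, not those used in the studies):

```python
def multi_reference_sequence(data, refs=(0.0, 50.0, 100.0)):
    """Build a playback sequence of (kind, value) pairs: reference tones
    at known values first, then the data tones. A synthesiser would map
    each value to pitch and play the pairs in order."""
    return [("ref", v) for v in refs] + [("data", v) for v in data]
```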