
    "Spindex" (speech index) enhances menu navigation user experience of touch screen devices in various input gestures: tapping, wheeling, and flicking

    In a large number of electronic devices, users interact with the system by navigating through various menus. Auditory menus can complement or even replace visual menus, so research on auditory menus has recently increased for mobile devices as well as desktop computers. Despite the potential importance of auditory displays on touch screen devices, little research has attempted to enhance the effectiveness of auditory menus for those devices. In the present study, I investigated how advanced auditory cues enhance auditory menu navigation on a touch screen smartphone, especially for new input gestures such as tapping, wheeling, and flicking for navigating a one-dimensional menu. Moreover, I examined whether advanced auditory cues improve user experience not only in visuals-off situations, but also in visuals-on contexts. To this end, I used a novel auditory menu enhancement called a "spindex" (i.e., speech index), in which brief audio cues inform users of where they are in a long menu. In this study, each item in a menu was preceded by a sound based on the item's initial letter. One hundred and twenty-two undergraduates navigated through an alphabetized list of 150 song titles. The study was a split-plot design that manipulated auditory cue type (text-to-speech (TTS) alone vs. TTS plus spindex), visual mode (on vs. off), and input gesture style (tapping, wheeling, and flicking). Target search time and subjective workload for TTS + spindex were lower than for TTS alone across all input gesture types, regardless of visual mode. On subjective rating scales, participants also rated the TTS + spindex condition higher than plain TTS on being 'effective' and 'functionally helpful'. The interaction between input methods and output modes (i.e., auditory cue types), and its effects on navigation behaviors, was also analyzed based on the two-stage navigation strategy model used in auditory menus. Results are discussed in analogy with visual search theory and in terms of practical applications of spindex cues.
    M.S. Committee Chair: Bruce N. Walker; Committee Members: Frank Durso, Gregory M. Cors
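
    The spindex mechanism itself is simple enough to sketch: before (or instead of) speaking a menu item with full text-to-speech, play a brief pre-recorded cue keyed to the item's initial letter. Below is a minimal Python sketch of that idea, assuming a per-letter bank of short clips; the names CUE_DIR, play_clip, and speak_tts are hypothetical stand-ins, not the study's actual implementation.

        # Minimal spindex sketch. Assumes short clips cues/a.wav ... cues/z.wav.
        # CUE_DIR, play_clip, and speak_tts are hypothetical names.
        from pathlib import Path

        CUE_DIR = Path("cues")

        def spindex_cue(item: str) -> Path | None:
            """Return the brief letter cue for a menu item, if one exists."""
            first = item.strip()[:1].lower()
            clip = CUE_DIR / f"{first}.wav"
            return clip if clip.exists() else None

        def announce(item: str, play_clip, speak_tts, scrolling_fast: bool) -> None:
            """Play the spindex cue; add full TTS only when the user dwells.

            During fast wheeling or flicking only the short cues are heard,
            giving a quick audible index ("a, a, b, c, ...") of list position.
            """
            cue = spindex_cue(item)
            if cue is not None:
                play_clip(cue)
            if not scrolling_fast:
                speak_tts(item)  # e.g. the full song title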

    Making Spatial Information Accessible on Touchscreens for Users who are Blind and Visually Impaired

    Touchscreens have become the de facto standard of input for mobile devices because they make the most of the limited input and output space imposed by their form factor. In recent years, people who are blind and visually impaired have been increasing their use of smartphones and touchscreens. Although basic access is available, many accessibility issues remain before this population is fully included. One of the important challenges lies in accessing and creating spatial information on touchscreens. The work presented here provides three new techniques, using three different modalities, for accessing spatial information on touchscreens. The first system makes geometry and diagram creation accessible on a touchscreen through text-to-speech and gestural input; its design is informed by a qualitative study of how people who are blind and visually impaired currently access and create graphs and diagrams. The second system makes directions through maps accessible using multiple vibration sensors, without any sound or visual output. The third system investigates the use of binaural sound on a touchscreen to make various types of applications accessible, such as physics simulations, astronomy, and video games.
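
    As an illustration of the third system's modality, the sketch below maps a touch position to left/right output gains. True binaural rendering uses head-related transfer functions (HRTFs); this constant-power stereo pan is only a hedged stand-in for how a touch could steer a sound toward an on-screen target, and all names here are hypothetical rather than drawn from the thesis.

        import math

        def pan_gains(touch_x: float, screen_width: float) -> tuple[float, float]:
            """(left, right) gains for a mono source panned to the touch position."""
            pos = min(max(touch_x / screen_width, 0.0), 1.0)  # normalize to [0, 1]
            angle = pos * math.pi / 2                          # constant-power pan law
            return math.cos(angle), math.sin(angle)

        def distance_gain(finger: tuple[float, float],
                          target: tuple[float, float],
                          max_dist: float) -> float:
            """Fade the sound in as the finger approaches an on-screen target."""
            d = math.hypot(finger[0] - target[0], finger[1] - target[1])
            return max(0.0, 1.0 - d / max_dist)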

    The effect of experience on the use of multimodal displays in a multitasking interaction

    Theories and previous work suggest that multitasking performance can benefit from displays that employ multiple modalities. Studies often show benefits of these multimodal displays, but not to the extent that theories of multimodal task-sharing might suggest. However, such studies typically give users at least one type of display that they are not accustomed to, often an auditory display, and compare their performance on these novel displays to a visual display with which most people are familiar. This leaves open the question of the effects of longer-term experience with multimodal displays. The current study investigated the effect of practice with multimodal displays, comparing two multimodal displays to a standard visuals-only display. Over the course of four sessions, participants practiced a list-searching secondary task on one of three display types (two auditory-plus-visual displays and one visual-only display) while performing a visual-manual primary task. Measures of search-task and primary-task performance, along with workload, visual behaviors, and perceived performance, were collected. Results support previous work in that participants using multimodal displays spent more visual time on the primary task, and show that perceived helpfulness increased over time for those using the multimodal displays. However, the results also point to practice effects occurring almost equally across conditions, which suggests that the initial task-sharing behaviors seen with well-designed multimodal displays may not benefit from practice as much as hypothesized, or may require additional time to take hold. The results are discussed with regard to their use in research and in applying multimodal displays in the real world, as well as how they fit with theories of multimodal task-sharing.
    Ph.D.

    The Effect of Enhanced Navigational Affordances on College Students' Comprehension of Informational Auditory Text, and the Role of Metacognitive and Motivational Factors

    A proliferation of natural-speech audio texts, as well as improvements in synthetic text-to-speech technology, has created new opportunities for learners. While many studies have examined factors affecting comprehension of print texts, few have examined factors affecting comprehension of audio texts, and fewer still the effects of specific moderating variables. This study examines the effects of navigational affordance use on comprehension of informational audio texts. Metacomprehension factors, including self-regulation and rehearsal, as well as the motivational factors of interest, effort regulation, and test anxiety, were studied for their relationship to the use of navigational affordances. The study used a mobile application distributed through the iTunes® store to administer the experimental procedure. Students enrolled in an introductory political science course at a large state university were solicited to participate and were randomly assigned to either the experimental or the control group. The experimental group (N = 74) had access to enhanced navigational affordances: pause and continue, forward by sentence, forward by paragraph, backward by sentence, and backward by paragraph. The control group (N = 11) had access only to pause and continue. Results indicate no significant difference in comprehension between the experimental and control groups. However, there was a significant correlation between navigational affordance use and comprehension. The data indicate the relationship may be curvilinear, meaning that affordance use is more frequent for learners with average comprehension and less frequent for high- and low-comprehension learners. Metacomprehension and motivational factors were not significantly correlated with navigational affordance use. Motivational factors did correlate positively with comprehension for both groups (F = 5.49, p = 0.002); beta weights for the three factors were 0.29 for interest, -0.35 for test anxiety, and 0.003 for motivation. Information on distractions during the study was also collected. Some participants demonstrated a pattern of skipping behavior when using navigational affordances, quickly navigating through the audio text. The study platform could be used to administer other kinds of audio-text comprehension experiments.
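
    The enhanced affordances amount to a playback cursor over a text segmented into paragraphs of sentences. The following Python sketch (a hypothetical model, not the study's application) shows one way pause/continue plus sentence- and paragraph-level navigation could be structured; the control condition would expose only toggle_pause.

        from dataclasses import dataclass

        @dataclass
        class AudioTextNavigator:
            """Playback cursor over a text split into paragraphs of sentences."""
            paragraphs: list[list[str]]  # paragraphs[i][j] = sentence j of paragraph i
            par: int = 0
            sent: int = 0
            paused: bool = False

            def toggle_pause(self) -> None:  # the control group's only affordance
                self.paused = not self.paused

            def forward_sentence(self) -> None:
                if self.sent + 1 < len(self.paragraphs[self.par]):
                    self.sent += 1
                elif self.par + 1 < len(self.paragraphs):
                    self.par, self.sent = self.par + 1, 0

            def backward_sentence(self) -> None:
                if self.sent > 0:
                    self.sent -= 1
                elif self.par > 0:
                    self.par -= 1
                    self.sent = len(self.paragraphs[self.par]) - 1

            def forward_paragraph(self) -> None:
                if self.par + 1 < len(self.paragraphs):
                    self.par, self.sent = self.par + 1, 0

            def backward_paragraph(self) -> None:
                self.par, self.sent = max(self.par - 1, 0), 0

            def current(self) -> str:
                return self.paragraphs[self.par][self.sent]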