62 research outputs found

    Oral messages improve visual search

    Get PDF
    Input multimodality combining speech and hand gestures has motivated numerous usability studies. In contrast, issues relating to the design and ergonomic evaluation of multimodal output messages combining speech with visual modalities have not yet been addressed extensively. The experimental study presented here addresses one of these issues: assessing the actual efficiency and usability of oral system messages that include brief spatial information for helping users locate objects on crowded displays rapidly. Target presentation mode, scene spatial structure and task difficulty were chosen as independent variables. Two conditions were defined: visual target presentation (VP condition) and multimodal target presentation (MP condition). Each participant carried out two blocks of visual search tasks (120 tasks per block, one block per condition). Target presentation mode, scene structure and task difficulty were all found to be significant factors. Multimodal target presentation proved more efficient than visual target presentation. In addition, participants expressed very positive judgments of multimodal target presentations, which a majority of participants preferred to visual presentations. Moreover, the contribution of spatial messages to visual search speed and accuracy was influenced by scene spatial structure and task difficulty: (i) messages improved search efficiency to a lesser extent for 2D array layouts than for some other symmetrical layouts, although 2D arrays currently prevail for displaying pictures; (ii) message usefulness increased with task difficulty. Most of these results are statistically significant. Comment: 4 pages.

    How really effective are Multimodal Hints in enhancing Visual Target Spotting? Some evidence from a usability study

    Get PDF
    The main aim of the work presented here is to contribute to computer science advances in the multimodal usability area, inasmuch as it addresses one of the major issues relating to the generation of effective oral system messages: how to design messages which effectively help users locate specific graphical objects in information visualisations. An experimental study was carried out to determine whether oral messages including coarse information on the locations of graphical objects on the current display may facilitate target detection tasks sufficiently to make it worthwhile to integrate such messages into GUIs. The display spatial layout was varied in order to test the influence of visual presentation structure on the contribution of these messages to facilitating visual search on crowded displays. Finally, three levels of task difficulty were defined, based mainly on the target's visual complexity and the number of distractors in the scene. The findings suggest that spatial information messages improve participants' visual search performance significantly; that they are more appropriate to radial structures than to matrix, random and elliptic structures; and that they are particularly useful for performing difficult visual search tasks. Comment: 9 pages.

    A Comparison of Paper Sketch and Interactive Wireframe by Eye Movements Analysis, Survey, and Interview

    Get PDF
    Eye movement-based analyses have been extensively performed on graphical user interface designs, mainly on high-fidelity prototypes such as coded prototypes. However, practitioners usually initiate the development life cycle with low-fidelity prototypes, such as mock-ups or sketches. Since little or no eye movement analysis has been performed on the latter, would eye tracking transpose its benefits from high- to low-fidelity prototypes and produce different results? To bridge this gap, we performed an eye movement-based analysis comparing gaze point indexes, gaze event types and durations, fixation indexes, and saccade indexes produced by N=8 participants between two treatments: a paper prototype vs. a wireframe. The paper also reports a qualitative analysis based on the answers provided by these participants in a semi-directed interview and on a 14-item perceived usability questionnaire. Due to its interactivity, the wireframe seems to foster a more exploratory approach to design (e.g., testing and navigating more extensively) than the paper prototype.

    Tactile Presentation of Network Data: Text, Matrix or Diagram?

    Full text link
    Visualisations are commonly used to understand social, biological and other kinds of networks. Currently, we do not know how to effectively present network data to people who are blind or have low vision (BLV). We ran a controlled study with 8 BLV participants comparing four tactile representations: organic node-link diagram, grid node-link diagram, adjacency matrix and braille list. We found that the node-link representations were preferred and more effective for path following and cluster identification, while the matrix and list were better for adjacency tasks. This is broadly in line with findings for the corresponding visual representations. Comment: To appear in the ACM CHI Conference on Human Factors in Computing Systems (CHI 2020).

    Assistance multimodale Ă  l'exploration de visualisations 2D interactives

    No full text
    This work concerns the design of a new form of human-computer interaction: the combination of speech and visual presentation as an output multimodality. More precisely, it evaluates the potential benefits of speech, as an expression mode complementary to graphics, during visual target detection tasks in interactive 2D layouts. We used an experimental approach to determine the influence of oral messages conveying the spatial location of the target in the layout on users' speed and accuracy, and we assessed the subjective satisfaction of potential users with this form of assistance to visual exploration. Three experimental studies showed, on the one hand, that multimodal presentations of the target facilitate and improve users' performance in both speed and accuracy of target selection. On the other hand, they showed that, in the absence of oral messages, visual exploration strategies depend on the spatial organisation of the information within the graphical display.

    ECOVAL: Ecological Validity of Cues and Representative Design in User Experience Evaluations

    Get PDF
    Egon Brunswik coined and defined the concepts of ecological validity and representative design, which are both essential to achieving external validity. However, research in HCI has used Brunswik's concept of ecological validity inconsistently and incorrectly, which prevents the field from developing cumulative science and from generalizing the findings of user experience (UX) evaluations. In this paper, I present ECOVAL, a framework built on Brunswik's ideas. On the one hand, ECOVAL helps HCI researchers describe and assess the ecological validity of cues in UX evaluations. On the other hand, the ECOVAL guidelines, formulated as a step-by-step procedure, help HCI researchers achieve representative design and, therefore, increase external validity. An industrial case study demonstrates the relevance of ECOVAL for achieving representative design while conducting formative UX testing. In discussing the case study, I describe how ECOVAL can help HCI researchers assess and increase the validity of UX experiments and generalize UX findings. I also illustrate the trade-offs among internal validity, external validity, and UX resources that inevitably arise when one conducts UX experiments. From the results, I sketch avenues for future research and discuss the related challenges that future work should address.


    Do oral messages help visual exploration?

    No full text
    Peer-reviewed international conference paper; international audience. A preliminary experimental study is presented, which aims at eliciting the contribution of oral messages to facilitating visual search tasks in crowded visual displays. Results of quantitative and qualitative analyses suggest that appropriate verbal messages can improve both target selection time and accuracy. In particular, multimodal messages combining a visual presentation of the isolated target with absolute spatial oral information on its location in the displayed scene are the most effective.

    • 

    corecore