1 research output found

    Exploring gaze-adaptive features for interacting with multi-document visualizations

    No full text
    We present a preliminary design study on using eye tracking to support interaction with a multi-document visualization. Complex information-seeking tasks can involve the collection and comparison of multiple documents, resulting in long and sustained search sessions. The sustained and evolving nature of the session also gives the search interface the opportunity to gather information on the user's state and interaction history, which can be used to adapt the information content and representation. We designed a system to evaluate how eye-tracking information can be used to adapt the visual salience of information entities. The interface features documents and related keywords arranged in a radial layout configuration called the intent radar. Reading history and visual attention, as registered by eye-tracking data, are used respectively to trace read items and to provide visual cueing. We evaluated the interface with 16 participants to gather subjective feedback about specific components and features of the interface. Overall, the results show that the interface and the gaze-related appearance of keywords were positively received by the users.
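    The abstract describes two gaze-driven adaptations: tracing read items from reading history and scaling visual cueing from visual attention. The sketch below is not the authors' implementation; it only illustrates, under assumed names and thresholds (GazeSalienceModel, read_threshold_ms, a linear dwell-to-salience mapping), how accumulated fixation time could drive both adaptations.

```python
# Minimal sketch of gaze-adaptive salience: accumulate per-item fixation time,
# mark items as "read" past a dwell threshold, and expose a [0, 1] salience
# weight for visual cueing. All names and values are illustrative assumptions.
from dataclasses import dataclass, field


@dataclass
class GazeSalienceModel:
    read_threshold_ms: float = 1500.0   # dwell after which an item counts as read
    max_dwell_ms: float = 5000.0        # dwell mapped to full salience
    dwell_ms: dict = field(default_factory=dict)

    def record_fixation(self, item_id: str, duration_ms: float) -> None:
        """Accumulate fixation duration reported by the eye tracker for one item."""
        self.dwell_ms[item_id] = self.dwell_ms.get(item_id, 0.0) + duration_ms

    def is_read(self, item_id: str) -> bool:
        """Trace reading history: an item is read once its dwell passes the threshold."""
        return self.dwell_ms.get(item_id, 0.0) >= self.read_threshold_ms

    def salience(self, item_id: str) -> float:
        """Visual cueing weight, e.g. to scale a keyword's highlight intensity."""
        return min(self.dwell_ms.get(item_id, 0.0) / self.max_dwell_ms, 1.0)


if __name__ == "__main__":
    model = GazeSalienceModel()
    model.record_fixation("keyword:eye-tracking", 1800.0)
    print(model.is_read("keyword:eye-tracking"), model.salience("keyword:eye-tracking"))
```

    A renderer for the radial layout could poll these values each frame to update keyword appearance; the paper itself does not specify the mapping, so the linear scaling here is only one plausible choice.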