Analyzing the Information Search Behavior and Intentions in Visual Information Systems
Visual information search systems support different search approaches, such as targeted, exploratory, or analytical search. These systems face the challenge of composing optimal initial sets of result visualizations that match the user's search intention and respond to the user's search behavior. The diversity of these search tasks requires different sets of visual layouts and functionalities, e.g., to filter, drill down, or analyze concrete data properties. This paper describes a new approach for calculating the probability of each of the three mentioned search intentions from users' behavior. The implementation is realized as a web service embedded in a visual environment designed to enable various search strategies over heterogeneous data sources. Given an entered search query, our search intention analysis web service calculates the most probable search task, and our visualization system initially shows the optimal set of result visualizations for solving it. The main contribution of this paper is a probability-based approach for deriving users' search intentions from their search behavior, demonstrated through its application in a visual system.
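The abstract above does not specify how the intention probabilities are computed, so the following is a minimal illustrative sketch of one plausible scheme: scoring each of the three intentions (targeted, exploratory, analytical) from a few behavioral features and normalizing with a softmax. The feature names and weights are invented assumptions, not the paper's actual model.

```python
# Hypothetical sketch: probability over three search intentions from
# simple behavioral features. Weights and features are illustrative
# assumptions, not taken from the paper.
import math

INTENTIONS = ("targeted", "exploratory", "analytical")

# Assumed weights per intention over (query_length, filter_uses, facet_switches)
WEIGHTS = {
    "targeted":    (0.9, 0.1, 0.0),
    "exploratory": (0.2, 0.3, 0.8),
    "analytical":  (0.3, 0.9, 0.4),
}

def intention_probabilities(query_length, filter_uses, facet_switches):
    """Return a softmax-normalized probability for each intention."""
    feats = (query_length, filter_uses, facet_switches)
    scores = {i: sum(w * f for w, f in zip(WEIGHTS[i], feats))
              for i in INTENTIONS}
    z = sum(math.exp(s) for s in scores.values())
    return {i: math.exp(s) / z for i, s in scores.items()}

# A long, specific query with little filtering suggests targeted search.
probs = intention_probabilities(query_length=4, filter_uses=0, facet_switches=1)
most_likely = max(probs, key=probs.get)
```

A system like the one described could then pick the initial visualization set associated with `most_likely`.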
09101 Abstracts Collection -- Interactive Information Retrieval
From March 1 to March 6, 2009, the Dagstuhl Seminar 09101 "Interactive Information Retrieval" was held at Schloss Dagstuhl -- Leibniz Center for Informatics.
During the seminar, several participants presented their current research, and ongoing work and open problems were discussed. Abstracts of the presentations given during the seminar, as well as abstracts of seminar results and ideas, are put together in this paper. The first section describes the seminar topics and goals in general.
Links to extended abstracts or full papers are provided where available.
Inferring Intent from Interaction with Visualization
Today's state-of-the-art analysis tools combine the human visual system and domain knowledge with the machine's computational power. The human performs the reasoning, deduction, hypothesis generation, and judgment. The entire burden of learning from the data usually rests squarely on the human user's shoulders. This model, while successful in simple scenarios, is neither scalable nor generalizable. In this thesis, we propose a system that integrates advancements from artificial intelligence within a visualization system to detect the user's goals. At a high level, we use hidden unobservable states to represent users' goals/intentions. We automatically infer these goals from passive observations of the user's actions (e.g., mouse clicks), thereby allowing accurate predictions of future clicks. We evaluate this technique with a crime map and demonstrate that, depending on the type of task, users' clicks appear in our prediction set 79%–97% of the time. Further analysis shows that we can achieve high prediction accuracy after only a short period (typically after three clicks). Altogether, we show that passive observations of interaction data can reveal valuable information about users' high-level goals, laying the foundation for next-generation visual analytics systems that can automatically learn users' intentions and support the analysis process proactively.
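The "hidden unobservable states" framing above suggests a hidden-Markov-style model: maintain a belief over hidden goals, update it from observed clicks, and predict a set of likely next clicks. The sketch below illustrates that general idea with made-up transition and emission matrices; it is not the thesis's actual model.

```python
# Illustrative hidden-state click prediction (not the thesis's model):
# two hidden goals, three click regions, all probabilities invented.
import numpy as np

T = np.array([[0.9, 0.1],       # P(next goal | current goal)
              [0.2, 0.8]])
E = np.array([[0.7, 0.2, 0.1],  # P(click region | goal)
              [0.1, 0.3, 0.6]])

def predict_next_clicks(clicks, k=2, prior=np.array([0.5, 0.5])):
    """Update the belief over hidden goals from observed clicks,
    then return the k most likely next click regions."""
    belief = prior
    for c in clicks:
        belief = belief * E[:, c]   # condition on the observed click
        belief = belief / belief.sum()
        belief = T.T @ belief       # step the hidden state forward
    next_click_dist = belief @ E    # distribution over next click regions
    return list(np.argsort(next_click_dist)[::-1][:k])

# After clicks in regions 0, 0, 1, predict a set of 2 likely next clicks.
pred = predict_next_clicks([0, 0, 1], k=2)
```

The evaluation in the abstract measures how often the user's actual next click falls inside such a prediction set.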
Visualizing and Interacting with Concept Hierarchies
Concept Hierarchies and Formal Concept Analysis are theoretically well-grounded and extensively tested methods. They rely on line diagrams called Galois lattices for visualizing and analysing object-attribute sets. Galois lattices are visually appealing and conceptually rich for experts. However, they have important drawbacks due to their concept-oriented overall structure: analysing what they show is difficult for non-experts, navigation is cumbersome, interaction is poor, and scalability is a deep bottleneck for visual interpretation even for experts. In this paper we introduce semantic probes as a means to overcome many of these problems and to extend the usability and applicability of traditional FCA visualization methods. Semantic probes are visual, user-centred objects that extract and organize reduced Galois sub-hierarchies. They are simpler and clearer, and they provide better navigation support through a rich set of interaction possibilities. Since probe-driven sub-hierarchies are limited to the user's focus, scalability is kept under control and interpretation is facilitated. After some successful experiments, several applications are being developed, with the remaining problem of finding a compromise between simplicity and conceptual expressivity.
An Open Learner Model Dashboard for Adaptive Learning
This thesis describes the design process of MittFagkart, an independent open learner model (OLM) dashboard that visualizes, for teachers, student activity data across the digital math tools used in Norwegian classrooms. Master's thesis in Information Science (INFO390, MASV-INF).
ARShopping: In-Store Shopping Decision Support Through Augmented Reality and Immersive Visualization
Online shopping gives customers boundless options to choose from, backed by
extensive product details and customer reviews, all from the comfort of home;
yet, no amount of detailed, online information can outweigh the instant
gratification and hands-on understanding of a product that is provided by
physical stores. However, making purchasing decisions in physical stores can be
challenging due to a large number of similar alternatives and limited
accessibility of the relevant product information (e.g., features, ratings, and
reviews). In this work, we present ARShopping: a web-based prototype to
visually communicate detailed product information from an online setting on
portable smart devices (e.g., phones, tablets, glasses), within the physical
space at the point of purchase. This prototype uses augmented reality (AR) to
identify products and display detailed information to help consumers make
purchasing decisions that fulfill their needs while decreasing the
decision-making time. In particular, we use a data fusion algorithm to improve
the precision of the product detection; we then integrate AR visualizations
into the scene to facilitate comparisons across multiple products and features.
We designed our prototype based on interviews with 14 participants to better understand its utility and ease of use. (VIS 2022 Short Paper; 5 pages)