14 research outputs found

    Investigating Proactive Search Support in Conversations

    Get PDF
    Conversations among people involve solving disputes, building common ground, and reinforcing mutual beliefs and assumptions. Conversations often require external information that can support these activities. In this paper, we study how a spoken conversation can be supported by a proactive search agent that listens to the conversation, detects entities mentioned in it, and proactively retrieves and presents related information. A total of 24 participants (12 pairs) held informal conversations using either the proactive search agent or a control condition without conversational analysis or proactive information retrieval. Data comprising transcripts, interaction logs, questionnaires, and interviews indicated that the proactive search agent effectively augmented the conversations, affected their topical structure, and reduced the need for explicit search activity. The findings also revealed key challenges in the design of proactive search systems that assist people in natural conversations. Peer reviewed.
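    The proactive loop described above (listen, detect entities, retrieve related information) can be sketched minimally as follows. This is an illustrative assumption, not the paper's implementation: `detect_entities` stands in for a real named-entity recognizer, and the in-memory dictionary stands in for a search backend.

    ```python
    import re

    # Toy entity detector: capitalized spans stand in for a real NER model.
    def detect_entities(utterance: str) -> list[str]:
        return re.findall(r"\b[A-Z][a-z]+(?: [A-Z][a-z]+)*\b", utterance)

    # Toy proactive retrieval: look up each detected entity in a small "index"
    # and surface results without the speakers issuing an explicit query.
    def proactive_results(utterance: str, index: dict[str, str]) -> dict[str, str]:
        return {e: index[e] for e in detect_entities(utterance) if e in index}

    index = {"Helsinki": "Capital of Finland", "Alan Turing": "British mathematician"}
    print(proactive_results("We visited Helsinki last summer", index))
    ```

    A real agent would run this continuously over a speech-to-text stream; the sketch only shows the per-utterance step.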

    Interaction techniques for wall-sized screens

    Get PDF
    Large screen displays are part of many future visions, such as i-LAND, which describes a possible workspace of the future. Research has shown that wall-sized screens provide clear benefits for data exploration, collaboration, and the organization of work in office environments. With increasing computational power and falling display prices, wall-sized screens are currently making the step out of research labs and specific settings into office environments and private life. Today, there is no standard set of interaction techniques for wall-sized displays, and it is unclear whether a single mode of input is suitable for all potential applications. In this workshop, we will bring together researchers from academia and industry who work on large screens. Together, we will survey current research directions, review promising interaction techniques, and identify the underlying fundamental research challenges.

    Explorable Information Spaces - Designing Entity Affordances for Fluid Information Exploration

    No full text
    The research presented in this dissertation explores interaction techniques designed to overcome the limitations of current conventional search tools as primary access points to information and to better support a wider range of information-seeking behaviors, including exploratory search, serendipity, and orientation. I summarize that goal as making information explorable and define explorability as the quality of a physical space that enables humans to become acquainted with it through movement and exploration. An explorable information space implies situated information, which enables orientation: choosing a direction instead of formulating queries; meaningful overviews instead of narrow looks; and persistent spaces allowing growing familiarity, sense-making, and collaboration, instead of quick, disposable search sessions. The promise of an information space with such properties is carried by the notion of entity-oriented information. I address the state of explorability of the information space through the evaluation of entity affordances and visualization techniques. To that end, I demarcate three properties of explorability, namely Direction, Orientation, and Continuity, which I use as design drivers in the development of various prototypes and user experiments. The research process has yielded eight publications, including seven design cases with user experiments and a position paper. The summarized work consists of an extensive design exploration of the topic at hand and proposes a variety of interaction techniques that have been shown to support information exploration and together demarcate an alternative paradigm for future information practices.

    Querytogether : Enabling entity-centric exploration in multi-device collaborative search

    Get PDF
    Collaborative and co-located information access is becoming increasingly common. However, fairly little attention has been devoted to the design of ubiquitous computing approaches for spontaneous exploration of large information spaces that enable co-located collaboration. We investigate whether an entity-based user interface provides a solution to support co-located search on heterogeneous devices. We present the design and implementation of QueryTogether, a multi-device collaborative search tool through which entities such as people, documents, and keywords can be used to compose queries that can be shared to a public screen or to specific users through easy, touch-enabled interaction. We conducted mixed-methods user experiments with twenty-seven participants (nine groups of three) to compare collaborative search with QueryTogether to a baseline adopting established search and collaboration interfaces. Results show that QueryTogether led to more balanced contributions and search engagement. While overall s-recall in search was similar, participants in the QueryTogether condition found most of the relevant results earlier in the tasks and, for more than half of the queries, avoided text entry by manipulating recommended entities. The video analysis demonstrated a more consistent common ground through increased attention to the common screen and more transitions between collaboration styles, providing a better fit for the spontaneity of ubiquitous scenarios. QueryTogether and the corresponding study demonstrate the importance of entity-based interfaces in improving collaboration by facilitating balanced participation, flexible collaboration styles, and social processing of search entities across conversation and devices. The findings promote a vision of collaborative search support in spontaneous and ubiquitous multi-device settings, and better linking of conversation objects to searchable entities. Peer reviewed.
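    The entity-based query composition described above (building a query from selected entities rather than typed text) can be illustrated with a minimal sketch. The `Entity` type and the whitespace-joined query format are assumptions for illustration, not QueryTogether's actual data model.

    ```python
    from dataclasses import dataclass

    @dataclass(frozen=True)
    class Entity:
        kind: str   # e.g. "person", "document", "keyword" (illustrative kinds)
        label: str

    def compose_query(selected: list[Entity]) -> str:
        # Concatenate the labels of the selected entities into a query string,
        # so no free-text entry is needed to issue a search.
        return " ".join(e.label for e in selected)

    picked = [Entity("person", "Ada Lovelace"), Entity("keyword", "analytical engine")]
    print(compose_query(picked))  # Ada Lovelace analytical engine
    ```

    In a multi-device setting, the same selected-entity list could be shared to a public screen or another user's device and recomposed there, which is what makes entities, rather than query strings, the unit of collaboration.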

    Entity Recommendation for Everyday Digital Tasks

    Get PDF
    Recommender systems can support everyday digital tasks by retrieving and recommending useful information contextually. This is becoming increasingly relevant in services and operating systems. Previous research often focuses on specific recommendation tasks with data captured from interactions with an individual application. The quality of recommendations is also often evaluated using only computational measures of accuracy, without investigating the usefulness of recommendations in realistic tasks. The aim of this work is to synthesize the research in this area through a novel approach by (1) demonstrating comprehensive digital activity monitoring, (2) introducing entity-based computing and interaction, and (3) investigating the previously overlooked usefulness of entity recommendations and their actual impact on user behavior in real tasks. The methodology exploits context from screen frames recorded every 2 seconds to recommend information entities related to the current task. We embodied this methodology in an interactive system and investigated the relevance and influence of the recommended entities in a study with participants resuming their real-world tasks after a 14-day monitoring phase. Results show that the recommendations allowed participants to find more relevant entities than in a control condition without the system. In addition, the recommended entities were also used in the actual tasks. In the discussion, we reflect on a research agenda for entity recommendation in context: revisiting comprehensive monitoring to include the physical world, considering entities as actionable recommendations, capturing drifting intent and routines, and considering the explainability and transparency of recommendations, ethics, and data ownership. Peer reviewed.

    EntityBot

    No full text
    openaire: EC/H2020/826266/EU//CO-ADAPT
    Everyday digital tasks can benefit greatly from systems that recommend the right information to use at the right time. However, existing solutions typically support only specific applications and tasks. In this demo, we showcase EntityBot, a system that captures context across application boundaries and recommends information entities related to the current task. The user's digital activity is continuously monitored by capturing all content on the computer screen using optical character recognition. This covers all applications and services being used, including those specific to an individual's computer usage, such as instant messaging, emailing, web browsing, and word processing. A linear model is then applied to detect the user's task context and retrieve entities such as applications, documents, contact information, and keywords characterizing the task. The system has been evaluated with real-world tasks, demonstrating that the recommendations had an impact on the tasks and led to high user satisfaction. Peer reviewed.
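    The pipeline described above (screen text from OCR, scored by a linear model to rank candidate entities) can be sketched as follows. This is a hedged illustration: the feature (term frequency of the entity label in the captured text) and the ranking are invented stand-ins, not EntityBot's actual model.

    ```python
    from collections import Counter

    def rank_entities(screen_text: str, candidates: list[str], top_k: int = 2) -> list[str]:
        # Linear score with a single illustrative feature: how often the
        # candidate entity's label appears in the OCR'd screen text.
        counts = Counter(screen_text.lower().split())
        scored = sorted(candidates, key=lambda c: -counts[c.lower()])
        return scored[:top_k]

    # "screen_text" stands in for text captured from the screen via OCR.
    text = "draft report report budget meeting"
    print(rank_entities(text, ["report", "budget", "holiday"]))
    ```

    A production system would combine many such features (recency, application source, co-occurrence) in the linear model; the sketch shows only the scoring-and-ranking shape of the approach.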