5 research outputs found

    Comparing meeting browsers using a task-based evaluation method

    Information access within meeting recordings, potentially transcribed and augmented with other media, is facilitated by the use of meeting browsers. To evaluate their performance through a shared benchmark task, users are asked to discriminate between true and false parallel statements about facts in meetings, using different browsers. This paper offers a review of the results obtained so far with five types of meeting browsers, using similar sets of statements over the same meeting recordings. The results indicate that state-of-the-art speed for true/false question answering is 1.5-2 minutes per question, and precision is 70%-80% (vs. 50% for random guessing). The use of ASR instead of manual transcripts, or of audio signals only, leads to a perceptible though not dramatic decrease in performance scores.

    Finding Information in Multimedia Records of Meetings

    This paper overviews the work carried out within two large consortia on improving access to records of human meetings using multimodal interfaces. The design of meeting browsers has emerged as an important goal, with both theoretical interest and practical applications. Meeting browsers are assistance tools that help humans navigate through multimedia records of meetings (audio, video, documents, and metadata), in order to obtain a general idea of what happened in a meeting or to find specific pieces of information, for discovery or verification. To explain the importance that meeting browsers have gained over time, the paper summarizes findings of user studies, discusses features of meeting browser prototypes, and outlines the main evaluation protocol proposed. Reference scores are provided for future benchmarking. These achievements in meeting browsing constitute an iterative software process, from user studies to prototypes and then to products.

    Modeling Users' Information Needs in a Document Recommender for Meetings

    People are surrounded by an unprecedented wealth of information. Access to it depends on the availability of suitable search engines, but even when these are available, people often do not initiate a search, because their current activity does not allow it, or because they are not aware that the information exists. Just-in-time retrieval brings a radical change to the process of query-based retrieval, by proactively retrieving documents relevant to users' current activities, in an easily accessible and non-intrusive manner. This thesis presents a novel set of methods intended to improve the relevance of a just-in-time retrieval system, specifically a document recommender system designed for conversations, in terms of precision and diversity of results. Additionally, we designed an evaluation protocol to compare the methods proposed in this thesis with alternatives, using crowdsourcing. In contrast to previous systems, which model users' information needs by extracting keywords from clean and well-structured texts, this system models them from conversation transcripts, which contain noise from automatic speech recognition (ASR) and have a free structure, often switching between several topics. To deal with these issues, we first propose a novel keyword extraction method which preserves both the relevance and the topical diversity of the conversation, to properly capture users' possible needs with minimal ASR noise. Implicit queries are then built from these keywords. However, the presence of multiple unrelated topics in one query introduces significant noise into the retrieval results. To reduce this effect, we separate users' needs by topically clustering keyword sets into several subsets, or implicit queries. We introduce a merging method which combines the results of the multiple queries prepared from the users' conversation to generate a concise, diverse, and relevant list of documents. This method ensures that the system does not distract its users from their current conversation by frequently recommending a large number of documents. Moreover, we address the problem of explicit queries that may be asked by users during a conversation. We introduce a query refinement method which leverages the conversation context to answer the users' information needs without asking for additional clarifications, thereby again avoiding distracting users during their conversation. Finally, we implemented the end-to-end document recommender system by integrating the ideas proposed in this thesis, and then proposed an evaluation scenario with human users in a brainstorming meeting.
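    The pipeline described above (cluster keywords into implicit queries, retrieve per query, merge into one short list) can be sketched as follows. This is an illustrative sketch, not the thesis's actual implementation: the `topic_of` mapping stands in for the topical clustering step, and reciprocal rank fusion is a standard substitute for the merging method the abstract describes.

    ```python
    from collections import defaultdict

    def cluster_keywords(keywords, topic_of):
        """Group extracted keywords by topic to form implicit queries.

        `topic_of` is a hypothetical keyword -> topic-id mapping; in the
        thesis this grouping comes from topical clustering of the keyword set.
        """
        clusters = defaultdict(list)
        for kw in keywords:
            clusters[topic_of[kw]].append(kw)
        return [" ".join(kws) for kws in clusters.values()]

    def merge_results(ranked_lists, k=60, top_n=3):
        """Merge per-query ranked document lists with reciprocal rank fusion.

        Returning only `top_n` documents keeps the recommendation list short,
        so the system does not distract users with too many suggestions.
        """
        scores = defaultdict(float)
        for ranking in ranked_lists:
            for rank, doc in enumerate(ranking, start=1):
                scores[doc] += 1.0 / (k + rank)
        return sorted(scores, key=scores.get, reverse=True)[:top_n]
    ```

    For example, keywords from a conversation mixing a budget topic and a design topic would yield two implicit queries; each is run separately, and the two ranked lists are fused into a single concise recommendation list.
    
    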

    Experimental Comparison of Multimodal Meeting Browsers

    This paper describes an experimental comparison of three variants of a meeting browser. This browser incorporates innovative, multimodal technologies to enable storage and smart retrieval of captured meetings. Over a hundred subjects worked in design teams that had to prepare and carry out a final meeting, supported by one of the browser variants. In one condition, teams worked without such support. Measures were taken of individual characteristics, the team, the process and outcome of the project, and the usability of the browsers. The results indicate that a multimodal meeting browser can indeed improve meetings. Further analysis of the now available data will provide additional insight into how browsers can contribute to more efficient and satisfactory meetings, improved team performance, and higher quality project outcomes.