30 research outputs found
Examining the effect of task stage and topic knowledge on searcher interaction with a “digital bookstore”
This paper reports results from the 2010 INEX Interactive Track experiment. The experiment was designed to let searchers simulate being at two distinct stages of a work task process. Data were also collected on the test participants' topic knowledge. We performed statistical analysis of the collected data to study differences in relevance judgments and in the use of different types of metadata, across the task stages and between users with high and low topic knowledge.
OUC's Participation in the 2011 INEX Book Track
In this article we describe the Oslo University College's participation in the INEX 2011 Book track. In 2010, the OUC submitted retrieval results for the "Prove It" task, combining traditional relevance detection with some rudimentary detection of confirmation. In line with our belief that proving and refuting facts are distinct, semantically aware speech acts, this year we attempted to incorporate some rudimentary semantic support based on the WordNet database.
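The abstract does not specify how the WordNet-based support works, so the following is only an illustrative sketch of what "rudimentary semantic support" for a confirm/refute task might look like: cue words for confirmation and refutation are expanded with synonyms before being matched against a candidate passage. A small hand-coded synonym table stands in for a real WordNet lookup, and the cue lists are invented for illustration; neither reflects the paper's actual lexicon or method.

```python
# Hypothetical sketch of rudimentary semantic support for a "Prove It"
# style task. A hand-coded table stands in for a WordNet synonym lookup;
# the cue words are illustrative, not taken from the paper.

SYNONYMS = {
    "confirm": {"verify", "corroborate", "affirm", "support"},
    "refute": {"disprove", "contradict", "rebut", "deny"},
}

def expand(word):
    """Return the word together with its (stand-in) synonym set."""
    return {word} | SYNONYMS.get(word, set())

def classify(passage):
    """Label a passage as confirming, refuting, or neutral w.r.t. a fact."""
    tokens = {t.strip(".,;").lower() for t in passage.split()}
    if tokens & expand("refute"):
        return "refute"
    if tokens & expand("confirm"):
        return "confirm"
    return "neutral"

print(classify("Later measurements corroborate the original claim."))  # confirm
print(classify("These results contradict the stated figure."))         # refute
```

A real implementation would query WordNet synsets (e.g. via NLTK) rather than a fixed table, but the matching logic would follow the same shape.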
Search Transition as a Measure of Effort in Information Retrieval Interaction
In this article we introduce the concept of search transitions as a unit for measuring the effort searchers invest in information retrieval interaction. The concept is discussed and compared to traditional measures of effort, such as time. To investigate the usability of the search transition measure, we analysed 149 logs from an IR system indexing a collection of 650,000 Wikipedia articles. Our findings show that search transitions correlate with other, more mechanistic effort measures. Additional experiments are needed to determine whether it is a better measure of effort than, e.g., the number of documents examined.
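The correlation analysis the abstract mentions can be sketched in a few lines. The snippet below is a minimal illustration, not the paper's actual analysis: it computes a Spearman rank correlation (a plausible choice for ordinal effort measures, though the abstract does not name the statistic) between a per-session transition count and a mechanistic measure such as session duration, on invented session data.

```python
# Illustrative sketch (not the paper's analysis): correlating the number
# of search transitions per session with session duration, a mechanistic
# effort measure, using Spearman rank correlation. Data are invented.

def ranks(values):
    """Assign 1-based average ranks to values (ties share the mean rank)."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    r = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1
        for k in range(i, j + 1):
            r[order[k]] = avg
        i = j + 1
    return r

def spearman(x, y):
    """Spearman rank correlation, computed as Pearson on the ranks."""
    rx, ry = ranks(x), ranks(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)

# Hypothetical session logs: (transitions, session duration in seconds)
sessions = [(4, 120), (9, 300), (2, 95), (7, 260), (5, 180)]
transitions = [t for t, _ in sessions]
durations = [d for _, d in sessions]
print(round(spearman(transitions, durations), 3))  # prints 1.0
```

In this toy data the two measures rank the sessions identically, so the coefficient is 1.0; real logs would of course show weaker agreement.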
Using ‘search transitions’ to study searchers' investment of effort: experiences with client- and server-side logging
We are investigating the value of using the concept ‘search transition’ for studying effort invested in information search processes. In this paper we present findings from a comparative study of data collected from client-side and server-side logging. The purpose is to see which factors of effort can be captured by the two logging methods. The data stem from studies of searchers' interaction with an XML information retrieval system. The searchers' interaction was simultaneously logged by screen-capture software and by the IR system's logging facility. In order to identify the advantages and disadvantages, we compared the data gathered from a selection of sessions. We believe there is value in identifying effort investment in a search process, both to evaluate the quality of the search system and, if effort investment can be detected dynamically, to suggest areas for system intervention in the search process.
Seven years of INEX interactive retrieval experiments – lessons and challenges
This paper summarizes a major effort in interactive search investigation, the INEX i-track, a collective effort run over a seven-year period. We present the experimental conditions, report some of the findings of the participating groups, and examine the challenges posed by this kind of collective experimental effort.
Overview of the INEX 2008 Interactive Track
This paper presents the organization of the INEX 2008 interactive track. In this year's iTrack we aimed to explore the value of element retrieval for two different task types, fact-finding and research tasks. Two research groups collected data from 29 test persons, each performing two tasks. We describe the methods used for data collection and the tasks performed by the participants. A general result indicates that test persons were more satisfied when completing research tasks than fact-finding tasks. In our experiment, test persons regarded the research tasks as easier, were more satisfied with the search results, and found more relevant information for the research tasks.
The INEX 2010 Interactive Track: An Overview
In this paper we present the organization of the INEX 2010 interactive track. For the 2010 experiments the iTrack gathered data on user search behavior in a collection of book metadata taken from the online bookstore Amazon and the social cataloguing application LibraryThing. The collected data comprise traditional bibliographic metadata, user-generated tags and reviews, and promotional texts and reviews from publishers and professional reviewers. In this year's experiments we designed two search task categories, set to represent two different stages of work task processes. In addition, we let the users create a task of their own, which was used as a control task. We describe the methods used for data collection and the tasks performed by the participants.
A Standardised Format for Exchanging User Study Instruments
Increasing re-use has been an ongoing aim in Interactive Information Retrieval (IIR) for a significant amount of time; however, progress has been limited and patchy. While re-use of some study aspects can be difficult due to the varied nature of IIR studies, the use of pre- and post-task self-reported measures is widespread and relatively standardised. Nevertheless, re-use of elements in this area is also limited, in part because the systems used to implement them are not able to exchange questions, instruments, or complete study setups. To address this, this paper presents a standardised, but extendable, format for IIR survey instrument exchange.
Letter to the editor – In search of reality
Reply to Eckhart Kühlhor