4,101 research outputs found

    Combining relevance information in a synchronous collaborative information retrieval environment

    Traditionally, information retrieval (IR) research has focussed on a single-user interaction modality, where one user searches to satisfy an information need. Recent advances in both web technologies, such as the sociable web of Web 2.0, and computer hardware, such as tabletop interface devices, have enabled multiple users to collaborate on many computer-related tasks. These advances create an increasing need to support two or more users searching together at the same time to satisfy a shared information need, which we refer to as Synchronous Collaborative Information Retrieval (SCIR). SCIR represents a significant paradigmatic shift from traditional IR systems, and supporting an effective SCIR search requires new techniques to coordinate users' activities. In this chapter we explore the effectiveness of a sharing-of-knowledge policy on a collaborating group. Sharing of knowledge refers to the process of passing relevance information across users: if one user finds items relevant to the search task, the whole group should benefit in the form of improved ranked lists returned to each searcher. To evaluate the proposed techniques we simulate two users searching together through an incremental feedback system. The simulation assumes that users decide on an initial query with which to begin the collaborative search and proceed by providing relevance judgments to the system and receiving a new ranked list. To populate these simulations we extract data from the interaction logs of various experimental IR systems from previous Text REtrieval Conference (TREC) workshops.
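    A minimal sketch of the sharing-of-knowledge idea described above: relevance judgments from every searcher feed one shared Rocchio-style query update, so each user's next ranked list benefits from the whole group. The documents, weights, and user names are invented for illustration; this is not the chapter's exact feedback model.

```python
# Sketch: shared relevance feedback across two collaborating searchers.
from collections import Counter
import math

def vectorize(text):
    return Counter(text.lower().split())

def cosine(a, b):
    num = sum(a[t] * b[t] for t in set(a) & set(b))
    den = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return num / den if den else 0.0

def shared_rocchio(query_vec, judgments, alpha=1.0, beta=0.75):
    """Fold every user's relevant documents into one expanded group query."""
    updated = Counter({t: alpha * w for t, w in query_vec.items()})
    relevant = [doc for user_docs in judgments.values() for doc in user_docs]
    for doc in relevant:
        for t, w in vectorize(doc).items():
            updated[t] += beta * w / len(relevant)
    return updated

docs = {
    "d1": "collaborative search with shared relevance feedback",
    "d2": "image retrieval using colour histograms",
    "d3": "synchronous collaborative information retrieval evaluation",
}
query = vectorize("collaborative retrieval")
# Each simulated user has judged one document relevant so far.
judgments = {"user_a": [docs["d1"]], "user_b": [docs["d3"]]}
expanded = shared_rocchio(query, judgments)
ranking = sorted(docs, key=lambda d: cosine(expanded, vectorize(docs[d])), reverse=True)
print(ranking)
```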

    MultiMWE: building a multi-lingual multi-word expression (MWE) parallel corpora

    Multi-word expressions (MWEs) are a hot topic in natural language processing (NLP) research, covering MWE detection, MWE decomposition, and the exploitation of MWEs in other NLP fields such as machine translation (MT). However, the availability of bilingual or multi-lingual MWE corpora is very limited. The only bilingual MWE corpus that we are aware of is from the PARSEME (PARSing and Multi-word Expressions) EU project, a small collection of only 871 English-German MWE pairs. In this paper, we present multi-lingual and bilingual MWE corpora that we have extracted from root parallel corpora. After filtering, our collections contain 3,159,226 German-English and 143,042 Chinese-English bilingual MWE pairs. We examine the quality of these extracted bilingual MWEs in MT experiments. Our initial experiments applying MWEs in MT show improved translation performance on MWE terms in qualitative analysis and better overall evaluation scores in quantitative analysis, on both German-English and Chinese-English language pairs. We follow a standard experimental pipeline to create our MultiMWE corpora, which are available online. Researchers can use this free corpus for their own models or use the pairs in a knowledge base as model features.
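    As a hedged illustration of how bilingual MWE pairs can be harvested from a word-aligned parallel corpus, the sketch below applies the classic phrase-extraction heuristic: it keeps contiguous multi-word source n-grams whose alignment links cover a contiguous target span, then counts pairs so noisy ones can be filtered by frequency. The sentence pair and alignment are invented; this shows the general pipeline, not MultiMWE's specific filters.

```python
# Sketch: candidate bilingual MWE pairs from one aligned sentence pair.
from collections import Counter

def extract_pairs(src_tokens, tgt_tokens, alignment, max_len=3):
    pairs = []
    for i in range(len(src_tokens)):
        for j in range(i + 1, min(i + max_len, len(src_tokens)) + 1):
            # Target positions linked to the source span [i, j).
            tgt_idx = sorted({t for s, t in alignment if i <= s < j})
            # Keep only multi-word spans that map to a contiguous target span.
            if j - i >= 2 and len(tgt_idx) >= 2 and tgt_idx[-1] - tgt_idx[0] == len(tgt_idx) - 1:
                pairs.append((" ".join(src_tokens[i:j]),
                              " ".join(tgt_tokens[tgt_idx[0]:tgt_idx[-1] + 1])))
    return pairs

src = "er hat den Löffel abgegeben".split()
tgt = "he kicked the bucket".split()
align = [(0, 0), (1, 1), (2, 2), (3, 3), (4, 1)]
counts = Counter(extract_pairs(src, tgt, align))
# Over a real corpus one would threshold these counts to filter noise.
print(counts.most_common(3))
```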

    Investigating Biometric Response for Information Retrieval Applications

    Current information retrieval systems make no measurement of the user’s response to the searching process or the information itself. Existing psychological studies show that subjects exhibit measurable physiological responses when carrying out certain tasks, e.g. viewing images, which generally result in heightened emotional states. We find that users exhibit measurable biometric behaviour, in the form of galvanic skin response, when watching movies and engaging in interactive tasks. We examine how this data might be exploited in the indexing of data for search and within the search process itself.
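    A minimal sketch of how a galvanic skin response (GSR) trace might feed an index: smooth the signal, flag arousal peaks above a threshold, and credit the media segments being watched at those moments. The window size, threshold, sample data, and shot boundaries are all invented for illustration, not taken from the study.

```python
# Sketch: turning a GSR trace into per-shot arousal scores for indexing.
import statistics

def moving_average(signal, window=5):
    return [sum(signal[max(0, i - window + 1):i + 1]) /
            len(signal[max(0, i - window + 1):i + 1])
            for i in range(len(signal))]

def arousal_peaks(signal, k=1.0):
    """Return sample indices whose smoothed value exceeds mean + k * std."""
    smooth = moving_average(signal)
    mu, sigma = statistics.mean(smooth), statistics.pstdev(smooth)
    return [i for i, v in enumerate(smooth) if v > mu + k * sigma]

# One GSR sample per second while a user watches a movie.
gsr = [0.2, 0.21, 0.2, 0.22, 0.8, 0.9, 0.85, 0.3, 0.25, 0.2]
peaks = arousal_peaks(gsr)
# Map peak seconds onto shots so highly arousing shots can rank higher.
shot_boundaries = {"shot_1": range(0, 4), "shot_2": range(4, 8), "shot_3": range(8, 10)}
scores = {shot: sum(1 for p in peaks if p in secs) for shot, secs in shot_boundaries.items()}
print(scores)
```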

    Combination of content analysis and context features for digital photograph retrieval

    In recent years digital cameras have seen an enormous rise in popularity, leading to a huge increase in the quantity of digital photos being taken. This brings with it the challenge of organising these large collections. The MediAssist project uses date/time and GPS location for the organisation of personal collections. However, this context information is not always sufficient to support retrieval when faced with a large, shared archive made up of photos from a number of users. In this paper we present work which retrieves photos of known objects (buildings, monuments) using both location information and content-based retrieval tools from the AceToolbox. We show that for this retrieval scenario, where a user is searching for photos of a known building or monument in a large shared collection, content-based techniques can offer a significant improvement over ranking based on context (specifically location) alone.
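    A hedged sketch of fusing the two evidence sources the paper combines: a context score derived from GPS proximity and a content score from visual similarity. The haversine distance, the 5 km cut-off, and the linear weighting are standard, illustrative choices rather than MediAssist's actual parameters.

```python
# Sketch: weighted fusion of location context and visual content scores.
import math

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two GPS points, in kilometres."""
    r = 6371.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp, dl = math.radians(lat2 - lat1), math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def fused_score(photo, query, w_content=0.7):
    # Turn distance into a [0, 1] proximity score, zero beyond ~5 km.
    dist = haversine_km(photo["lat"], photo["lon"], query["lat"], query["lon"])
    context = max(0.0, 1.0 - dist / 5.0)
    return w_content * photo["visual_sim"] + (1 - w_content) * context

query = {"lat": 48.8584, "lon": 2.2945}  # e.g. searching for the Eiffel Tower
photos = [
    {"id": "p1", "lat": 48.8580, "lon": 2.2950, "visual_sim": 0.9},
    {"id": "p2", "lat": 48.8600, "lon": 2.3400, "visual_sim": 0.4},
]
for p in sorted(photos, key=lambda p: fused_score(p, query), reverse=True):
    print(p["id"], round(fused_score(p, query), 3))
```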

    A query description model based on basic semantic unit composite Petri-Net for soccer video

    Digital video networks are making available increasing amounts of sports video data. The volume of material on offer means that sports fans often rely on prepared summaries of game highlights to follow the progress of their favourite teams. A significant application area for automated video analysis technology is the generation of personalized highlights of sports events. One of the most popular sports around the world is soccer. A soccer game is composed of a range of significant events, such as goal scoring, fouls, and substitutions. Automatically detecting these events in a soccer video can enable users to interactively design their own highlights programmes. From an analysis of broadcast soccer video, we propose a query description model based on Basic Semantic Unit Composite Petri-Nets (BSUCPN) to automatically detect significant events within soccer video. Firstly, we define a Basic Semantic Unit (BSU) set for soccer videos based on identifiable feature elements within a soccer video; secondly, we design Composite Petri-Net (CPN) models for semantic queries and use these to describe BSUCPNs for semantic events in soccer videos. A particular strength of this approach is that users are able to design their own semantic event queries based on BSUCPNs to search interactively within soccer videos. Experimental results based on recorded soccer broadcasts are used to illustrate the potential of this approach.
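    To make the Petri-net query idea concrete, here is a toy net in the spirit of the BSUCPN model: places hold tokens, and each transition fires when a named Basic Semantic Unit is observed while its input places are marked. The BSU names and the "goal" query below are invented for illustration; the paper's actual BSU set and net structures will differ.

```python
# Sketch: a tiny Petri-net query fired by a stream of detected BSUs.
class PetriNet:
    def __init__(self, places, transitions):
        self.marking = {p: 0 for p in places}
        self.transitions = transitions  # bsu -> (input places, output places)

    def observe(self, bsu):
        """Fire the transition labelled with this BSU if it is enabled."""
        if bsu not in self.transitions:
            return
        inputs, outputs = self.transitions[bsu]
        if all(self.marking[p] > 0 for p in inputs):
            for p in inputs:
                self.marking[p] -= 1
            for p in outputs:
                self.marking[p] += 1

# Query: goal-area shot, then crowd cheering, then replay => goal event.
net = PetriNet(
    places=["start", "shot_seen", "cheer_heard", "goal_detected"],
    transitions={
        "goal_area_shot": (["start"], ["shot_seen"]),
        "crowd_cheer": (["shot_seen"], ["cheer_heard"]),
        "replay_logo": (["cheer_heard"], ["goal_detected"]),
    },
)
net.marking["start"] = 1
for bsu in ["goal_area_shot", "crowd_cheer", "replay_logo"]:
    net.observe(bsu)
print("goal event detected:", net.marking["goal_detected"] > 0)
```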

    A semantic content analysis model for sports video based on perception concepts and finite state machines

    In the domain of automatic video content analysis, the key challenges are how to recognize important objects and how to model the spatiotemporal relationships between them. In this paper we propose a semantic content analysis model based on Perception Concepts (PCs) and Finite State Machines (FSMs) to automatically describe and detect significant semantic content within sports video. PCs are defined to represent important semantic patterns for sports videos based on identifiable feature elements, and PC-FSM models are designed to describe the spatiotemporal relationships between PCs. A graph-matching method is then used to detect high-level semantics automatically. A particular strength of this approach is that users are able to design their own highlight queries, transforming the detection problem into a graph-matching problem. Experimental results are used to illustrate the potential of this approach.
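    A minimal sketch of a PC-FSM-style detector: a semantic event is recognised when a stream of detected Perception Concepts drives a finite state machine into its accepting state. The PC labels and the three-step pattern are assumptions made for illustration, not the paper's actual concept set.

```python
# Sketch: recognising a semantic event with a finite state machine
# driven by (timestamp, perception concept) observations.
def make_fsm(pattern):
    """Build an FSM that reports each time the PC sequence `pattern` completes."""
    def run(stream):
        state = 0
        hits = []
        for t, pc in stream:
            if pc == pattern[state]:
                state += 1
                if state == len(pattern):
                    hits.append(t)  # event recognised at time t
                    state = 0
            elif pc == pattern[0]:
                state = 1  # restart on a fresh pattern start
            else:
                state = 0
        return hits
    return run

detect_goal = make_fsm(["close_up", "audience", "replay"])
pc_stream = [(1, "long_view"), (2, "close_up"), (3, "audience"),
             (4, "replay"), (5, "long_view")]
print(detect_goal(pc_stream))  # -> [4]
```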

    Guidelines for the presentation and visualisation of lifelog content

    Lifelogs offer rich, voluminous sources of personal and social data for which visualisation is ideally suited to providing access, overview, and navigation. Through examples of our visualisation work within the domain of lifelogging, we explore the major axes on which lifelogs operate and, therefore, on which their visualisations should be contingent. We also explore the concept of ‘events’ as a way to significantly reduce the complexity of the lifelog for presentation and make it more human-oriented. Finally, we present some guidelines and goals which should be considered when designing presentation modes for lifelog content.
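    A small sketch of the ‘events’ idea: collapsing a dense lifelog stream into a handful of human-oriented segments by cutting wherever the sensed context changes. The location labels and the single-sensor cut rule are invented to keep the example short; real lifelog event segmentation typically fuses several sensors.

```python
# Sketch: segmenting a lifelog record stream into context-based events.
def segment_into_events(log):
    """Group consecutive lifelog records that share the same context label."""
    events = []
    for timestamp, location in log:
        if events and events[-1]["location"] == location:
            events[-1]["end"] = timestamp  # extend the current event
        else:
            events.append({"location": location, "start": timestamp, "end": timestamp})
    return events

log = [("09:00", "home"), ("09:10", "home"), ("09:30", "bus"),
       ("10:00", "office"), ("12:30", "office"), ("13:00", "cafe")]
for e in segment_into_events(log):
    print(f"{e['start']}-{e['end']}: {e['location']}")
```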

    Mobile, ubiquitous information seeking, as a group: the iBingo collaborative video retrieval system

    iBingo features two or more users performing collaborative information seeking tasks using mobile devices, in our case the Apple iPod Touch. The novelty in our work is that the system, called iBingo, mediates the collaborative searches among the users and performs a real-time division of labour among co-searchers, so users are presented with documents which are both unique and tailored to the individual. This enables each user to explore unique subsets of the retrieved information space. We demonstrate iBingo mobile collaborative search on a video collection from TRECVid 2007.
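    A hedged sketch of one way a mediator could enforce such a division of labour: keep a shared record of shots already handed out, so each co-searcher receives a unique slice of every result list. The round-robin split is our illustrative policy, not necessarily iBingo's, which also tailors results to the individual.

```python
# Sketch: a mediator dealing out unique result subsets to co-searchers.
class SearchMediator:
    def __init__(self, users):
        self.users = users
        self.allocated = set()  # shots any user has already been shown

    def distribute(self, ranked_results, per_user=3):
        """Deal out unseen results so no two users review the same shot."""
        fresh = [r for r in ranked_results if r not in self.allocated]
        slates = {u: [] for u in self.users}
        for i, result in enumerate(fresh[:per_user * len(self.users)]):
            slates[self.users[i % len(self.users)]].append(result)
            self.allocated.add(result)
        return slates

mediator = SearchMediator(["alice", "bob"])
results = [f"shot_{i}" for i in range(1, 9)]
print(mediator.distribute(results))
# A later result list re-surfacing shot_2 skips it: already allocated.
print(mediator.distribute(["shot_2", "shot_9", "shot_10"]))
```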

    Visualising Bluetooth interactions: combining the Arc Diagram and DocuBurst techniques

    Within the Bluetooth mobile space, overwhelmingly large sets of interaction and encounter data can be accumulated very quickly. This presents a challenge to gaining an understanding and overview of the dataset as a whole. To overcome this problem, we have designed a visualisation which provides an informative overview of the dataset. The visualisation combines the existing Arc Diagram and DocuBurst techniques into a radial space-filling layout capable of conveying a rich understanding of Bluetooth interaction data, and clearly represents the social networks and relationships established among encountered devices. The end result enables a user to visually interpret the relative importance of individual devices encountered, the relationships established between them, and the usage of Bluetooth 'friendly names' (or device labels) within the data.
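    A rough sketch of the overview idea, much simplified from the combined Arc Diagram/DocuBurst design: encountered devices are placed on a circle with marker size encoding encounter count, and chords link devices seen together. The data, layout, and styling are invented; this only gestures at the radial encoding, using matplotlib.

```python
# Sketch: a radial overview of Bluetooth encounter data.
import math
import matplotlib.pyplot as plt

encounters = {"phone_a": 40, "laptop_b": 25, "phone_c": 10, "headset_d": 5}
links = [("phone_a", "laptop_b"), ("phone_a", "phone_c"), ("laptop_b", "headset_d")]

devices = list(encounters)
angle = {d: 2 * math.pi * i / len(devices) for i, d in enumerate(devices)}
pos = {d: (math.cos(a), math.sin(a)) for d, a in angle.items()}

fig, ax = plt.subplots(figsize=(5, 5))
for d, (x, y) in pos.items():
    ax.scatter(x, y, s=10 * encounters[d])        # size encodes importance
    ax.annotate(d, (x * 1.15, y * 1.15), ha="center")
for a, b in links:                                # chords for co-encounters
    ax.plot([pos[a][0], pos[b][0]], [pos[a][1], pos[b][1]], alpha=0.5)
ax.set_aspect("equal")
ax.axis("off")
plt.savefig("bluetooth_overview.png")
```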