An Efficient Approximate kNN Graph Method for Diffusion on Image Retrieval
The application of the diffusion in many computer vision and artificial
intelligence projects has been shown to give excellent improvements in
performance. One of the main bottlenecks of this technique is the quadratic
growth of the kNN graph size, caused by the large number of new connections
created between nodes in the graph, which leads to long computation times.
Several strategies have been proposed to address this, but none is both
effective and efficient. Our novel technique, based on LSH projections, matches
the performance of the exact kNN graph after diffusion, but in far less time
(approximately 18 times faster on a dataset of a hundred thousand images). The
proposed method was validated and compared with other state-of-the-art
approaches on several public image datasets, including Oxford5k, Paris6k, and
Oxford105k.
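The general idea behind LSH-based approximation can be sketched as follows: hash descriptors with random hyperplanes so that similar vectors tend to share a bucket, then build the kNN graph only within buckets instead of over all pairs. This is a minimal illustrative sketch of the technique class, not the paper's actual algorithm; all names and parameters are assumptions.

```python
import numpy as np

def lsh_knn_graph(X, k=3, n_bits=4, seed=0):
    """Approximate kNN graph: hash vectors with random hyperplanes (LSH),
    then search for neighbours only inside each hash bucket.
    Illustrative sketch, not the paper's method."""
    rng = np.random.default_rng(seed)
    planes = rng.standard_normal((X.shape[1], n_bits))
    codes = X @ planes > 0                     # sign pattern per vector
    keys = codes.dot(1 << np.arange(n_bits))   # pack bits into bucket ids
    graph = {i: [] for i in range(len(X))}
    for key in np.unique(keys):
        idx = np.where(keys == key)[0]         # members of this bucket
        for i in idx:
            d = np.linalg.norm(X[idx] - X[i], axis=1)
            order = idx[np.argsort(d)]
            graph[i] = [j for j in order if j != i][:k]
    return graph

# Toy usage: 100 random 16-d image descriptors
X = np.random.default_rng(1).standard_normal((100, 16))
g = lsh_knn_graph(X, k=3)
```

Because distances are computed only within buckets, the cost drops from quadratic in the dataset size to roughly quadratic in the bucket size, which is the source of the speed-up such methods exploit.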
Context guided retrieval
This paper presents a hierarchical case representation that uses a context-guided retrieval method. The performance of this method is compared to that of a simple flat-file representation using standard nearest-neighbour retrieval. The data presented in this paper are more extensive than those presented in an earlier paper by the same authors. The estimation of the construction costs of light industrial warehouse buildings is used as the test domain. Each case in the system comprises approximately 400 features. These are structured into a hierarchical case representation that holds more general contextual features at its top and specific building elements at its leaves. A modified nearest-neighbour retrieval algorithm is used that is guided by contextual similarity. Problems are decomposed into sub-problems and solutions recomposed into a final solution. The comparative results show that the context-guided retrieval method using the hierarchical case representation is significantly more accurate than the simpler flat-file representation and standard nearest-neighbour retrieval.
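The retrieval scheme described above can be sketched in miniature: filter the case base by similarity on general contextual features first, then run nearest-neighbour ranking over the detailed features of the survivors. The feature names, threshold, and toy case base below are assumptions for illustration, not the paper's 400-feature cases.

```python
def similarity(a, b, features):
    """Fraction of matching nominal features between two cases."""
    return sum(a[f] == b[f] for f in features) / len(features)

def context_guided_retrieve(query, cases, context_feats, detail_feats,
                            threshold=0.5):
    """Prune the case base by contextual similarity first, then run
    nearest-neighbour retrieval over the detailed features."""
    pool = [c for c in cases if similarity(query, c, context_feats) >= threshold]
    if not pool:
        pool = cases  # fall back to the full case base
    return max(pool, key=lambda c: similarity(query, c, detail_feats))

# Hypothetical warehouse-costing cases (illustrative features only)
cases = [
    {"type": "warehouse", "region": "north", "floor": "large", "roof": "steel", "cost": 100},
    {"type": "warehouse", "region": "north", "floor": "small", "roof": "steel", "cost": 60},
    {"type": "office",    "region": "south", "floor": "small", "roof": "flat",  "cost": 80},
]
query = {"type": "warehouse", "region": "north", "floor": "small", "roof": "steel"}
best = context_guided_retrieve(query, cases, ["type", "region"], ["floor", "roof"])
```

Here the office case is pruned by the contextual pass even though it shares a detailed feature with the query, which is the behaviour contextual guidance is meant to enforce.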
COORDINATING COLLECTIVE RESISTANCE THROUGH COMMUNICATION AND REPEATED INTERACTION
This paper presents a laboratory collective resistance (CR) game to study how different forms of repeated interaction, with and without communication, can help coordinate subordinates' collective resistance to a "divide-and-conquer" transgression against their personal interests. In the one-shot CR game, a first-mover (the "leader") decides whether to transgress against two responders. Successful transgression increases the payoff of the leader at the expense of the victim(s) of transgression. The two responders then simultaneously decide whether to challenge the leader. The subordinates face a coordination problem in that their challenge against the leader's transgression will only succeed if both of them incur the cost to do so. The outcome without transgression can occur in equilibrium with standard money-maximizing preferences under repeated interaction, but it is not an equilibrium with standard preferences when non-binding subordinate "cheap talk" communication is added to the one-shot game. Nevertheless, we find that communication (in the one-shot game) is at least as effective as repetition (with no communication) in reducing the transgression rate. Moreover, communication is better than repetition at coordinating resistance, because it makes it easier for subordinates to identify others who have social preferences and are willing to incur the cost to punish a violation of social norms.
Keywords: Communication, Cheap Talk, Collective Resistance, Divide-and-Conquer, Laboratory Experiment, Repeated Games, Social Preferences
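The one-shot game's structure can be sketched with stylised payoff numbers. The base payoff, transgression gain, victim loss, and challenge cost below are assumptions for illustration, not the experiment's actual parameters; the sketch only encodes the rule that a challenge succeeds only when both responders pay its cost.

```python
def cr_payoffs(transgress, challenge1, challenge2,
               base=10, gain=4, loss=4, cost=2):
    """Stylised one-shot collective-resistance payoffs (illustrative
    numbers, not the experiment's parameters). The leader's
    transgression succeeds unless BOTH responders pay the cost."""
    leader = base
    r1 = r2 = base
    if transgress:
        resisted = challenge1 and challenge2
        if not resisted:
            leader += gain   # transgression succeeds
            r1 -= loss       # simplification: both responders bear the loss
            r2 -= loss
        if challenge1:
            r1 -= cost       # challenge cost is paid regardless of success
        if challenge2:
            r2 -= cost
    return leader, r1, r2
```

Note the coordination problem: a lone challenger pays the cost without stopping the transgression, so each responder prefers to resist only if confident the other will too.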
Division of labour and sharing of knowledge for synchronous collaborative information retrieval
Synchronous collaborative information retrieval (SCIR) is concerned with supporting two or more users who search together at the same time in order to satisfy a shared information need. SCIR systems represent a paradigmatic shift in the way we view information retrieval, moving from an individual to a group process, and as such the development of novel IR techniques is needed to support this. In this article we present what we believe are two key concepts for the development of effective SCIR, namely division of labour (DoL) and sharing of knowledge (SoK). Together these concepts enable coordinated SCIR such that redundancy across group members is reduced whilst enabling each group member to benefit from the discoveries of their collaborators. In this article we outline techniques from state-of-the-art SCIR systems which support these two concepts, primarily through the provision of awareness widgets. We then outline some of our own work into system-mediated techniques for division of labour and sharing of knowledge in SCIR. Finally, we conclude with a discussion of some possible future trends for these two coordination techniques.
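The two concepts can be sketched minimally: division of labour as a disjoint split of a ranked result list across collaborators, and sharing of knowledge as a pooled record of relevance judgements visible to the whole group. This is an illustrative sketch of the concepts, not any particular SCIR system's implementation.

```python
def divide_labour(ranked_docs, n_users):
    """Round-robin split of a ranked list so collaborators examine
    disjoint documents (division of labour)."""
    return [ranked_docs[u::n_users] for u in range(n_users)]

class SharedKnowledge:
    """Pool of relevance judgements visible to all group members
    (sharing of knowledge)."""
    def __init__(self):
        self.relevant = set()
        self.seen = set()

    def judge(self, doc, is_relevant):
        self.seen.add(doc)
        if is_relevant:
            self.relevant.add(doc)

    def unseen(self, docs):
        """Filter out documents a collaborator has already examined."""
        return [d for d in docs if d not in self.seen]

# Toy usage: two collaborators split ten results and share judgements
lists = divide_labour([f"doc{i}" for i in range(10)], 2)
sk = SharedKnowledge()
sk.judge("doc3", True)
sk.judge("doc4", False)
```

The round-robin split keeps both collaborators working near the top of the ranking, while the shared pool lets either user skip documents the other has already judged.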
Unlock Multi-Modal Capability of Dense Retrieval via Visual Module Plugin
This paper proposes Multi-modAl Retrieval model via Visual modulE pLugin
(MARVEL) to learn an embedding space for queries and multi-modal documents to
conduct retrieval. MARVEL encodes queries and multi-modal documents with a
unified encoder model, which helps to alleviate the modality gap between images
and texts. Specifically, we enable the image understanding ability of a
well-trained dense retriever, T5-ANCE, by incorporating the image features
encoded by the visual module as its inputs. To facilitate the multi-modal
retrieval tasks, we build the ClueWeb22-MM dataset based on the ClueWeb22
dataset, which regards anchor texts as queries, and extracts the related texts and
image documents from anchor linked web pages. Our experiments show that MARVEL
significantly outperforms the state-of-the-art methods on the multi-modal
retrieval datasets WebQA and ClueWeb22-MM. Our further analyses show that the
visual module plugin method is tailored to enable the image understanding
ability for an existing dense retrieval model. Besides, we also show that the
language model has the ability to extract image semantics from image encoders
and adapt the image features in the input space of language models. All codes
are available at https://github.com/OpenMatch/MARVEL.
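The plugin idea described above can be sketched schematically: project the visual module's features into the text model's embedding space and prepend them to the token embeddings, so one unified encoder sees both modalities. The shapes and the random projection below are illustrative assumptions, not T5-ANCE's or MARVEL's actual architecture.

```python
import numpy as np

def fuse_image_and_text(text_emb, image_feats, proj):
    """Schematic visual-module plugin: map image features into the text
    model's input space and prepend them to the token sequence.
    text_emb:    (n_tokens,  d_model) token embeddings
    image_feats: (n_patches, d_img)   visual-module outputs
    proj:        (d_img,     d_model) projection (learned in practice,
                                      random here for illustration)"""
    image_emb = image_feats @ proj                     # into text space
    return np.concatenate([image_emb, text_emb], axis=0)

# Toy usage: 5 text tokens and 3 image patches in an 8-d model space
rng = np.random.default_rng(0)
fused = fuse_image_and_text(rng.standard_normal((5, 8)),
                            rng.standard_normal((3, 4)),
                            rng.standard_normal((4, 8)))
```

After fusion the encoder processes one homogeneous sequence, which is what lets a text-trained dense retriever attend over image content without a separate image tower.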