Learning Word Relatedness over Time for Temporal Ranking
Queries and rankings with temporal aspects have gained significant attention in the field of Information Retrieval. When searching articles published over time, the relevant documents usually occur in certain temporal patterns. Given a query that is implicitly time-sensitive, we develop a temporal ranking model that uses the important times of the query, drawn from the distribution of query-trend relatedness over time. We also combine this model with the Dual Embedding Space Model (DESM), weighting by document timestamp. We apply our model using three temporal word embedding algorithms to learn the relatedness of words from a news archive in Bahasa Indonesia: (1) QT-W2V-Rank, using Word2Vec; (2) QT-OW2V-Rank, using OrthoTrans-Word2Vec; and (3) QT-DBE-Rank, using Dynamic Bernoulli Embeddings. The highest scores were achieved with static word embeddings learned separately over time (QT-W2V-Rank): 66% in average precision and 68% in early precision. Furthermore, a study of different characteristics of temporal topics showed that QT-W2V-Rank is also more effective than the baselines at capturing temporal patterns such as spikes, periodicity, and seasonality.
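The DESM component the abstract mentions scores a document by the mean cosine similarity between each query word vector and the centroid of the document's word vectors. A minimal sketch of that scoring function, assuming pre-trained embeddings are already available as numpy arrays (the vectors below are toy illustrations, not the paper's trained embeddings):

```python
import numpy as np

def desm_score(query_vecs: np.ndarray, doc_vecs: np.ndarray) -> float:
    """DESM-style score: average cosine similarity between each query
    word vector and the (normalized) centroid of the document vectors."""
    centroid = doc_vecs.mean(axis=0)
    centroid = centroid / np.linalg.norm(centroid)
    total = 0.0
    for q in query_vecs:
        total += float(np.dot(q, centroid) / np.linalg.norm(q))
    return total / len(query_vecs)
```

In the paper's temporal setting, the embeddings fed to such a scorer would come from the time slice matching the document's timestamp; the sketch above only shows the static DESM scoring step.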
Retrieving Multi-Entity Associations: An Evaluation of Combination Modes for Word Embeddings
Word embeddings have gained significant attention as learnable
representations of semantic relations between words, and have been shown to
improve upon the results of traditional word representations. However, little
effort has been devoted to using embeddings for the retrieval of entity
associations beyond pairwise relations. In this paper, we use popular embedding
methods to train vector representations of an entity-annotated news corpus, and
evaluate their performance for the task of predicting entity participation in
news events versus a traditional word cooccurrence network as a baseline. To
support queries for events with multiple participating entities, we test a
number of combination modes for the embedding vectors. While we find that even
the best combination modes for word embeddings do not quite reach the
performance of the full cooccurrence network, especially for rare entities, we
observe that different embedding methods model different types of relations,
thereby indicating the potential for ensemble methods. Comment: 4 pages; Accepted at SIGIR'1
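The "combination modes" tested in the abstract merge several entity vectors into one query vector before ranking events by cosine similarity. A minimal sketch under the assumption that element-wise sum, mean, max, and min are among the modes (the paper does not list them here, so these specific modes and all names below are illustrative):

```python
import numpy as np

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Hypothetical combination modes for merging entity vectors into one query.
COMBINATION_MODES = {
    "sum":  lambda vs: np.sum(vs, axis=0),
    "mean": lambda vs: np.mean(vs, axis=0),
    "max":  lambda vs: np.max(vs, axis=0),  # element-wise maximum
    "min":  lambda vs: np.min(vs, axis=0),  # element-wise minimum
}

def rank_events(entity_vecs: np.ndarray, event_vecs: dict, mode: str = "mean"):
    """Combine the entity vectors with the chosen mode, then rank
    candidate events by cosine similarity to the combined query."""
    query = COMBINATION_MODES[mode](entity_vecs)
    scored = [(name, cosine(query, v)) for name, v in event_vecs.items()]
    return sorted(scored, key=lambda t: t[1], reverse=True)
```

A multi-entity query is then just a stack of entity vectors; swapping the `mode` argument reproduces the kind of comparison the evaluation describes.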