Ask the GRU: Multi-Task Learning for Deep Text Recommendations
In a variety of application domains the content to be recommended to users is
associated with text. This includes research papers, movies with associated
plot summaries, news articles, blog posts, etc. Recommendation approaches based
on latent factor models can be extended naturally to leverage text by employing
an explicit mapping from text to factors. This enables recommendations for new,
unseen content, and may generalize better, since the factors for all items are
produced by a compactly-parametrized model. Previous work has used topic models
or averages of word embeddings for this mapping. In this paper we present a
method leveraging deep recurrent neural networks to encode the text sequence
into a latent vector, specifically gated recurrent units (GRUs) trained
end-to-end on the collaborative filtering task. For the task of scientific
paper recommendation, this yields models with significantly higher accuracy. In
cold-start scenarios, we beat the previous state-of-the-art methods, all of
which ignore word order. Performance is further improved by multi-task learning,
where the text encoder network is trained for a combination of content
recommendation and item metadata prediction. This regularizes the collaborative
filtering model, ameliorating the problem of sparsity of the observed rating
matrix.
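The core idea in the abstract above — encoding an item's text with a GRU and using the resulting latent vector as the item's factors in a dot-product recommender — can be sketched in a few lines. This is a toy, forward-pass-only illustration with randomly initialized weights (the paper trains the encoder end-to-end on the collaborative filtering objective); all class and function names here are made up for illustration:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class GRUEncoder:
    """Toy GRU that encodes a token-ID sequence into a latent item-factor vector."""

    def __init__(self, vocab_size, embed_dim, hidden_dim, seed=0):
        rng = np.random.default_rng(seed)
        s = 0.1
        self.hidden_dim = hidden_dim
        self.emb = rng.normal(0, s, (vocab_size, embed_dim))
        # update-gate, reset-gate, and candidate-state parameters
        self.Wz = rng.normal(0, s, (hidden_dim, embed_dim))
        self.Uz = rng.normal(0, s, (hidden_dim, hidden_dim))
        self.Wr = rng.normal(0, s, (hidden_dim, embed_dim))
        self.Ur = rng.normal(0, s, (hidden_dim, hidden_dim))
        self.Wh = rng.normal(0, s, (hidden_dim, embed_dim))
        self.Uh = rng.normal(0, s, (hidden_dim, hidden_dim))

    def encode(self, token_ids):
        h = np.zeros(self.hidden_dim)
        for t in token_ids:
            x = self.emb[t]
            z = sigmoid(self.Wz @ x + self.Uz @ h)        # update gate
            r = sigmoid(self.Wr @ x + self.Ur @ h)        # reset gate
            h_cand = np.tanh(self.Wh @ x + self.Uh @ (r * h))
            h = (1.0 - z) * h + z * h_cand                # interpolate old/new state
        return h

def predict_score(user_factors, item_token_ids, encoder):
    """Latent-factor prediction: dot product of the user's factors with the
    item factors produced from the item's text by the GRU encoder."""
    return float(user_factors @ encoder.encode(item_token_ids))
```

Because the item factors are a function of the text rather than a free per-item parameter, the same `encode` call handles new, unseen items — the cold-start advantage the abstract claims.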
Content-based Filtering Recommendation Approach to Label Irish Legal Judgements
Machine learning approaches are applied across several domains to simplify or automate tasks, directly saving time or cost. Text document labelling is one such task, requiring immense domain knowledge and effort to review, understand and label the documents. The company Stare Decisis summarises legal judgements and labels them as they are made available on the Irish public legal source www.courts.ie. This research presents a recommendation-based approach to reduce the time spent by solicitors at Stare Decisis, narrowing the large set of available labels down to a concentrated few that likely contain the relevant label for a given judgement. To solve this problem, traditional and state-of-the-art text feature representations are developed and compared, combined with a K-Nearest Neighbour recommender using both cosine similarity and word mover's distance. A series of experiments is designed, starting from TF vectors with a KNN recommender as the baseline; further experiments were designed based on the results of each preceding experiment. Pre-trained word2vec was used as a baseline for the state-of-the-art approaches, and domain-specific embeddings were developed using data scraped from legal text sources.
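The baseline pipeline the abstract describes (TF vectors plus a cosine-similarity KNN recommender over previously labelled judgements) can be sketched as follows. This is a minimal illustration with a made-up miniature corpus and labels, not Stare Decisis data, and the function names are invented for the example:

```python
import numpy as np
from collections import Counter

def tf_vector(tokens, vocab):
    """Raw term-frequency vector over a fixed vocabulary."""
    v = np.zeros(len(vocab))
    for tok, count in Counter(tokens).items():
        if tok in vocab:
            v[vocab[tok]] = count
    return v

def cosine(a, b):
    na, nb = np.linalg.norm(a), np.linalg.norm(b)
    if na == 0.0 or nb == 0.0:
        return 0.0
    return float(a @ b / (na * nb))

def recommend_labels(query_tokens, corpus, labels, k=2):
    """Return the labels of the k judgements most similar to the query,
    by cosine similarity of TF vectors — a concentrated shortlist for the
    solicitor to pick from."""
    all_tokens = {t for doc in corpus for t in doc} | set(query_tokens)
    vocab = {t: i for i, t in enumerate(sorted(all_tokens))}
    q = tf_vector(query_tokens, vocab)
    sims = [cosine(q, tf_vector(doc, vocab)) for doc in corpus]
    top = sorted(range(len(corpus)), key=lambda i: sims[i], reverse=True)[:k]
    return [labels[i] for i in top]

# Toy labelled judgements (tokenised) and a new, unlabelled one:
corpus = [["contract", "breach", "damages"],
          ["murder", "trial", "evidence"],
          ["contract", "sale", "goods"]]
labels = ["Contract Law", "Criminal Law", "Commercial Law"]
shortlist = recommend_labels(["contract", "breach"], corpus, labels, k=2)
# → ["Contract Law", "Commercial Law"]
```

Swapping `cosine` for word mover's distance, or `tf_vector` for word2vec-based document embeddings, gives the other configurations the abstract compares.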
From Evaluating to Forecasting Performance: How to Turn Information Retrieval, Natural Language Processing and Recommender Systems into Predictive Sciences
We describe the state-of-the-art in performance modeling and prediction for Information Retrieval
(IR), Natural Language Processing (NLP) and Recommender Systems (RecSys) along with its
shortcomings and strengths. We present a framework for further research, identifying five major
problem areas: understanding measures, performance analysis, making underlying assumptions
explicit, identifying application features determining performance, and the development of prediction
models describing the relationship between assumptions, features and resulting performance.