TED Talk Recommender Using Speech Transcripts
Nowadays, online video platforms mostly recommend related videos by analyzing
user-driven data such as viewing patterns, rather than the content of the
videos. However, content is more important than any other element when videos
aim to deliver knowledge. Therefore, we have developed a web application which
recommends related TED lecture videos to the users, considering the content of
the videos from the transcripts. TED Talk Recommender constructs a network for
recommending content-similar videos and provides a user interface. (Comment: 3
pages.)
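The abstract above describes recommending videos by transcript content rather than viewing behavior, but does not specify the similarity measure. A minimal sketch of one common choice, TF-IDF vectors with cosine similarity, is shown below; the function names and the toy transcripts are illustrative, not from the paper.

```python
# Sketch of content-based recommendation over transcripts, assuming a
# TF-IDF + cosine-similarity approach (the abstract does not state the
# actual method used by TED Talk Recommender).
import math
from collections import Counter

def tfidf_vectors(docs):
    """Build a TF-IDF weight dict for each tokenized document."""
    n = len(docs)
    df = Counter()                      # document frequency per term
    for doc in docs:
        df.update(set(doc))
    vecs = []
    for doc in docs:
        tf = Counter(doc)
        vecs.append({t: (c / len(doc)) * math.log(n / df[t])
                     for t, c in tf.items()})
    return vecs

def cosine(u, v):
    """Cosine similarity between two sparse weight dicts."""
    dot = sum(u[t] * v[t] for t in u if t in v)
    nu = math.sqrt(sum(x * x for x in u.values()))
    nv = math.sqrt(sum(x * x for x in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

def recommend(query_idx, vecs, k=2):
    """Indices of the k transcripts most similar to transcripts[query_idx]."""
    scores = [(cosine(vecs[query_idx], v), i)
              for i, v in enumerate(vecs) if i != query_idx]
    return [i for _, i in sorted(scores, reverse=True)[:k]]

transcripts = [
    "machine learning changes education".split(),
    "deep learning for education and teaching".split(),
    "cooking pasta with fresh tomatoes".split(),
]
vecs = tfidf_vectors(transcripts)
print(recommend(0, vecs, k=1))  # the education talk, not the cooking one
```

A production system would add stemming, stop-word removal, and a precomputed similarity network, but the core content-wise ranking is as above.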
MatchZoo: A Learning, Practicing, and Developing System for Neural Text Matching
Text matching is the core problem in many natural language processing (NLP)
tasks, such as information retrieval, question answering, and conversation.
Recently, deep learning technology has been widely adopted for text matching,
making neural text matching a new and active research domain. With a large
number of neural matching models emerging rapidly, it becomes more and more
difficult for researchers, especially those newcomers, to learn and understand
these new models. Moreover, it is usually difficult to try these models due to
the tedious data pre-processing, complicated parameter configuration, and
massive optimization tricks, not to mention the unavailability of public codes
sometimes. Finally, for researchers who want to develop new models, it is also
not an easy task to implement a neural text matching model from scratch, and to
compare with a bunch of existing models. In this paper, therefore, we present a
novel system, namely MatchZoo, to facilitate the learning, practicing and
designing of neural text matching models. The system consists of a powerful
matching library and a user-friendly and interactive studio, which can help
researchers: 1) learn state-of-the-art neural text matching models
systematically; 2) train, test, and apply these models with simple
configurable steps; and 3) develop their own models with rich APIs and
assistance.
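The text matching task the abstract describes can be framed as scoring a query against a document via an interaction matrix of token-level similarities. The sketch below uses exact-token match in place of the learned embedding similarities a neural model would compute; it illustrates the problem setup only and is not MatchZoo's API.

```python
# Minimal interaction-based view of text matching. Neural matching models
# replace the 0/1 entries with embedding similarities and the pooling with
# learned layers; this stand-in keeps only the structure of the task.
def interaction_matrix(query_tokens, doc_tokens):
    """1.0 where tokens match exactly, else 0.0."""
    return [[1.0 if q == d else 0.0 for d in doc_tokens] for q in query_tokens]

def match_score(query, doc):
    """Max-pool each query row, then average: the fraction of query tokens
    that find a match somewhere in the document."""
    q, d = query.split(), doc.split()
    m = interaction_matrix(q, d)
    return sum(max(row) for row in m) / len(q)

print(match_score("neural text matching", "matching text with neural networks"))
print(match_score("neural text matching", "cooking pasta"))
```

Even this crude pooling makes the design space visible: the choice of interaction function and pooling scheme is exactly where the "large number of neural matching models" mentioned above differ.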
Critically Examining the "Neural Hype": Weak Baselines and the Additivity of Effectiveness Gains from Neural Ranking Models
Is neural IR mostly hype? In a recent SIGIR Forum article, Lin expressed
skepticism that neural ranking models were actually improving ad hoc retrieval
effectiveness in limited data scenarios. He provided anecdotal evidence that
authors of neural IR papers demonstrate "wins" by comparing against weak
baselines. This paper provides a rigorous evaluation of those claims in two
ways: First, we conducted a meta-analysis of papers that have reported
experimental results on the TREC Robust04 test collection. We do not find
evidence of an upward trend in effectiveness over time. In fact, the best
reported results are from a decade ago and no recent neural approach comes
close. Second, we applied five recent neural models to rerank the strong
baselines that Lin used to make his arguments. A significant improvement was
observed for one of the models, demonstrating additivity in gains. While there
appears to be merit to neural IR approaches, at least some of the gains
reported in the literature appear illusory. (Comment: Published in the
Proceedings of the 42nd Annual International ACM SIGIR Conference on Research
and Development in Information Retrieval, SIGIR 2019.)
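The rerank-the-strong-baseline setup used in the second experiment above can be sketched as a two-stage pipeline: a first-stage run (e.g. a tuned BM25 baseline) produces a candidate list, and a second-stage scorer reorders only those candidates. The toy dictionary scorer below is a hypothetical stand-in for the neural models evaluated in the paper.

```python
# Sketch of second-stage reranking over a baseline run. `rescore` stands in
# for a neural ranking model; only the top_k candidates are reordered, and
# the tail of the baseline ranking is kept unchanged.
def rerank(baseline_run, rescore, top_k=100):
    """baseline_run: list of (doc_id, baseline_score), best first."""
    head, tail = baseline_run[:top_k], baseline_run[top_k:]
    reranked = sorted(((doc, rescore(doc)) for doc, _ in head),
                      key=lambda pair: pair[1], reverse=True)
    return reranked + tail

run = [("d1", 12.3), ("d2", 11.8), ("d3", 10.1)]     # baseline (e.g. BM25) order
neural_score = {"d1": 0.2, "d2": 0.9, "d3": 0.5}     # hypothetical model scores
print(rerank(run, neural_score.get, top_k=3))
```

Additivity, in the paper's sense, asks whether the reranked list beats the baseline list it started from; comparing instead against a weak first stage is exactly the pitfall the meta-analysis targets.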