Explainable Semantic Retrieval Using Dual Encoder Large Language Models
Semantic matching uses large language models (LLMs) to convert text or images into embeddings and scores those embeddings, which can outperform keyword matching by matching on meaning rather than on word equality. However, semantic matching lacks explainability. This disclosure describes dual-encoder LLM techniques to confer explainability on LLM-based semantic matches within information retrieval systems. Semantic meanings are attached to abstract mathematical embeddings to generate gravitational fields that enable dynamic, high-quality information retrieval, as measured by precision/recall, query understanding, concept matching, speed, scalability, etc., while providing justifications and user-visible corroborations of search results. Information retrieval is also improved in diversity, personalization, and efficiency, with high query throughput at low latency.
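The dual-encoder scoring step that the abstract above relies on can be sketched minimally as follows. This is a hypothetical illustration, not the disclosure's actual system: the toy hash-based `embed` function stands in for a real LLM encoder, and all names are invented for the example.

```python
import math

def embed(text, dim=8):
    # Toy hash-based "encoder": a stand-in for a real dual-encoder LLM,
    # which would map text (or an image) to a dense semantic vector.
    vec = [0.0] * dim
    for token in text.lower().split():
        vec[hash(token) % dim] += 1.0
    norm = math.sqrt(sum(v * v for v in vec)) or 1.0
    return [v / norm for v in vec]  # L2-normalized

def score(query, doc):
    # Cosine similarity between the two encoder outputs; with
    # normalized vectors this is just the dot product.
    q, d = embed(query), embed(doc)
    return sum(a * b for a, b in zip(q, d))

# Ranking a small corpus by semantic score against a query.
docs = ["fast cars and engines", "deep learning for search"]
ranked = sorted(docs, key=lambda d: score("neural search", d), reverse=True)
```

In a production dual-encoder setup the query and document encoders are trained jointly, and document embeddings are precomputed and served from an approximate-nearest-neighbor index, which is what makes the high-throughput, low-latency retrieval the abstract mentions feasible.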
Towards explainable evaluation of language models on the semantic similarity of visual concepts
Recent breakthroughs in NLP research, such as the advent of Transformer models, have indisputably contributed to major advancements in several tasks. However, few works investigate the robustness and explainability of their evaluation strategies. In this work, we examine the behavior of high-performing pre-trained language models, focusing on the task of semantic similarity for visual vocabularies. First, we address the need for explainable evaluation metrics, which are necessary for understanding the conceptual quality of retrieved instances. Our proposed metrics provide valuable insights at both the local and global level, exposing the shortcomings of widely used approaches. Second, adversarial interventions on salient query semantics reveal the vulnerabilities of opaque metrics and highlight patterns in the learned linguistic representations.
Explainable Information Retrieval: A Survey
Explainable information retrieval is an emerging research area that aims to make information retrieval systems transparent and trustworthy. Given the increasing use of complex machine learning models in search systems, explainability is essential for building and auditing responsible information retrieval models. This survey fills a vital gap in the otherwise topically diverse literature of explainable information retrieval. It categorizes and discusses recent explainability methods developed for different application domains in information retrieval, providing a common framework and unifying perspectives. In addition, it reflects on the common concern of evaluating explanations and highlights open challenges and opportunities.
Comment: 35 pages, 10 figures. Under review.
Analyzing and Interpreting Neural Networks for NLP: A Report on the First BlackboxNLP Workshop
The EMNLP 2018 workshop BlackboxNLP was dedicated to resources and techniques specifically developed for analyzing and understanding the inner workings of, and representations acquired by, neural models of language. Approaches included: systematically manipulating the input to neural networks and investigating the impact on their performance; testing whether interpretable knowledge can be decoded from intermediate representations acquired by neural networks; proposing modifications to neural network architectures that make their knowledge state or generated output more explainable; and examining the performance of networks on simplified or formal languages. Here we review a number of representative studies in each category.
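One of the approaches listed above, decoding interpretable knowledge from intermediate representations, is commonly realized as a "probing classifier": a deliberately simple model trained on frozen hidden states, where above-chance accuracy suggests the probed property is encoded in the representations. The sketch below is a hypothetical nearest-centroid probe; the function names and toy data are invented for illustration.

```python
def fit_probe(reps, labels):
    # Nearest-centroid probe: average the frozen representations
    # belonging to each label to obtain one centroid per class.
    grouped = {}
    for rep, label in zip(reps, labels):
        grouped.setdefault(label, []).append(rep)
    return {label: [sum(col) / len(rs) for col in zip(*rs)]
            for label, rs in grouped.items()}

def predict(centroids, rep):
    # Classify a representation by its nearest class centroid
    # (squared Euclidean distance).
    def sq_dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(centroids, key=lambda label: sq_dist(centroids[label], rep))
```

In practice the representations would be hidden states extracted from a trained network (e.g. one layer's activations per token), and the probe's simplicity is the point: if even a trivial classifier recovers the property, the network's representations already encode it.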
Attentive Aspect Modeling for Review-aware Recommendation
In recent years, many studies extract aspects from user reviews and integrate them with ratings to improve recommendation performance. The common aspects mentioned in a user's reviews and a product's reviews indicate indirect connections between the user and the product. However, these aspect-based methods suffer from two problems. First, the common aspects are usually very sparse, owing to the sparsity of user-product interactions and the diversity of individual users' vocabularies. Second, a user's interest in an aspect can differ across products, whereas existing methods usually assume it to be static. In this paper, we propose an Attentive Aspect-based Recommendation Model (AARM) to tackle these challenges. For the first problem, to enrich the aspect connections between user and product, AARM models the interactions between synonymous and similar aspects in addition to common aspects. For the second problem, a neural attention network that simultaneously considers user, product, and aspect information is constructed to capture a user's attention towards aspects when examining different products. Extensive quantitative and qualitative experiments show that AARM effectively alleviates the two aforementioned problems and significantly outperforms several state-of-the-art recommendation methods on the top-N recommendation task.
Comment: Camera-ready manuscript for TOI
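The user-and-product-conditioned aspect attention described in this abstract can be sketched roughly as follows. This is a toy illustration, not AARM's actual architecture: the real model uses learned embeddings and a trained attention network, whereas here the attention query is simply the sum of fixed user and product vectors, and all names are invented for the example.

```python
import math

def softmax(xs):
    # Numerically stable softmax over a list of scores.
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

def aspect_attention(user_vec, product_vec, aspect_vecs):
    # Condition the attention query on BOTH the user and the product,
    # so the same user can weight aspects differently per product.
    query = [u + p for u, p in zip(user_vec, product_vec)]
    scores = [sum(q * a for q, a in zip(query, asp)) for asp in aspect_vecs]
    weights = softmax(scores)
    # Attended aspect representation: weighted sum of aspect vectors.
    dim = len(aspect_vecs[0])
    attended = [sum(w * asp[i] for w, asp in zip(weights, aspect_vecs))
                for i in range(dim)]
    return attended, weights
```

Because the product vector enters the query, changing the product shifts the attention weights over the same set of user aspects, which is exactly the non-static user interest the abstract argues existing methods fail to capture.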