Reliable and Interpretable Drift Detection in Streams of Short Texts
Data drift is a change in a model's input data and one of the key factors
leading to the degradation of machine learning model performance over time.
Monitoring drift helps detect these issues and prevent their harmful
consequences. Meaningful drift interpretation is a fundamental step towards
effective re-training of the model. In this study we propose an end-to-end
framework for reliable model-agnostic change-point detection and interpretation
in large task-oriented dialog systems, proven effective in multiple customer
deployments. We evaluate our approach and demonstrate its benefits with a novel
variant of an intent classification training dataset that simulates customer requests
to a dialog system. We make the data publicly available.
Comment: ACL 2023 industry track (9 pages)
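
The abstract does not spell out the detection algorithm, so the following is only a minimal sketch of model-agnostic drift detection over an embedded text stream, comparing a reference window against sliding windows with a two-sample test; the window sizes and the per-dimension KS test with Bonferroni correction are illustrative assumptions, not the authors' method.

import numpy as np
from scipy.stats import ks_2samp

def detect_change_point(stream_emb, ref_size=500, window=200, alpha=0.01):
    """Slide a window over the embedded stream (n_texts x dim) and flag
    the first offset where any embedding dimension's distribution differs
    from the reference window under a Bonferroni-corrected KS test.
    (Illustrative sketch -- not the paper's actual detector.)"""
    ref, dim = stream_emb[:ref_size], stream_emb.shape[1]
    for start in range(ref_size, len(stream_emb) - window + 1, window):
        cur = stream_emb[start:start + window]
        pvals = [ks_2samp(ref[:, d], cur[:, d]).pvalue for d in range(dim)]
        if min(pvals) * dim < alpha:  # Bonferroni-corrected significance
            return start              # candidate change point
    return None                       # no drift detected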
Classifier Data Quality: A Geometric Complexity Based Method for Automated Baseline And Insights Generation
Testing Machine Learning (ML) models and AI-Infused Applications (AIIAs), or
systems that contain ML models, is highly challenging. In addition to the
challenges of testing classical software, it is acceptable and expected that
statistical ML models sometimes output incorrect results. A major challenge is
to determine when the level of incorrectness, e.g., model accuracy or F1 score
for classifiers, is acceptable and when it is not. In addition to business
requirements that should provide a threshold, it is a best practice to require
any proposed ML solution to outperform simple baseline models, such as a
decision tree.
We have developed complexity measures, which quantify how difficult given
observations are to assign to their true class label; these measures can then
be used to automatically determine a baseline performance threshold. These
measures are superior to the best-practice baseline in that, for a linear
computation cost, they also quantify each observation's classification
complexity in an explainable form, regardless of the classifier model used. Our
experiments with both numeric synthetic data and real natural language chatbot
data demonstrate that the complexity measures effectively highlight data
regions and observations that are likely to be misclassified.
Comment: Accepted to the EDSMLS workshop at the AAAI conference
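
The paper's geometric complexity measures are not defined in this abstract; as a hedged stand-in, a common geometric proxy scores each observation by how many of its nearest neighbors carry a different label, which is likewise classifier-agnostic and computed per observation.

import numpy as np
from sklearn.neighbors import NearestNeighbors

def knn_disagreement(X, y, k=5):
    """Fraction of each observation's k nearest neighbors that carry a
    different label: points in mixed-class regions score high and are
    likely to be misclassified by any model. (Illustrative proxy only,
    not the paper's actual measures.)"""
    y = np.asarray(y)
    nn = NearestNeighbors(n_neighbors=k + 1).fit(X)
    _, idx = nn.kneighbors(X)               # idx[:, 0] is the point itself
    neighbor_labels = y[idx[:, 1:]]
    return (neighbor_labels != y[:, None]).mean(axis=1)

Aggregating such per-observation scores over a dataset is one way to derive an automatic baseline performance threshold of the kind the abstract describes.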
Predicting Question-Answering Performance of Large Language Models through Semantic Consistency
Semantic consistency of a language model is broadly defined as the model's
ability to produce semantically-equivalent outputs, given
semantically-equivalent inputs. We address the task of assessing
question-answering (QA) semantic consistency of contemporary large language
models (LLMs) by manually creating a benchmark dataset with high-quality
paraphrases for factual questions, and release the dataset to the community.
We further combine the semantic consistency metric with additional
measurements suggested in prior work as correlating with LLM QA accuracy, for
building and evaluating a framework for factual QA reference-less performance
prediction -- predicting how likely a language model is to answer a given
question accurately. Evaluating the framework on five contemporary LLMs, we
demonstrate encouraging results that significantly outperform the baselines.
Comment: EMNLP 2023 GEM workshop, 17 pages
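
As a rough illustration of the consistency signal (the benchmark is the paper's; the sentence encoder and the mean-pairwise-cosine aggregation below are assumptions), one could score how much a model's answers to paraphrases of the same question agree:

import numpy as np
from sentence_transformers import SentenceTransformer

encoder = SentenceTransformer("all-MiniLM-L6-v2")  # assumed encoder choice

def consistency_score(answers):
    """Mean pairwise cosine similarity among the answers an LLM returns
    for semantically-equivalent paraphrases of one factual question;
    higher scores indicate more semantically consistent behavior."""
    emb = encoder.encode(answers, normalize_embeddings=True)
    sims = emb @ emb.T                        # cosine, rows are unit-norm
    n = len(answers)
    return (sims.sum() - n) / (n * (n - 1))   # mean of off-diagonal entries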
Understanding the Properties of Generated Corpora
Models for text generation have become central to many research tasks,
especially the generation of sentence corpora. However, understanding the
properties of an automatically generated text corpus remains challenging. We
propose a set of tools that examine the properties of generated text corpora.
Applying these tools to various generated corpora allowed us to gain new
insights into the properties of the generative models. As part of our
characterization process, we found remarkable differences in the corpora
generated by two leading generative technologies.
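
The abstract leaves the tool set unspecified; a minimal sketch of corpus characterization in this spirit might report basic distributional statistics of a generated corpus (the particular statistics below are illustrative assumptions, not the paper's tools):

from collections import Counter

def corpus_profile(sentences):
    """A few descriptive statistics for a generated sentence corpus:
    size, lexical diversity, and average length. Comparing profiles of
    corpora from different generators surfaces gross differences."""
    tokens = [tok for s in sentences for tok in s.split()]
    vocab = Counter(tokens)
    return {
        "sentences": len(sentences),
        "tokens": len(tokens),
        "vocabulary": len(vocab),
        "type_token_ratio": len(vocab) / len(tokens),
        "avg_sentence_len": len(tokens) / len(sentences),
    }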
Self-organizing maps for multi-objective Pareto frontiers
Decision makers often need to take into account multiple conflicting objectives when selecting a solution for their problem. This can result in a potentially large number of candidate solutions to be considered. Visualizing a Pareto Frontier, the optimal set of solutions to a multi-objective problem, is considered a difficult task when the problem at hand spans more than three objective functions. We introduce a novel visual-interactive approach to facilitate coping with multi-objective problems. We propose a characterization of the Pareto Frontier data and the tasks decision makers face as they reach their decisions. Following a comprehensive analysis of the design alternatives, we show how a semantically-enhanced Self-Organizing Map can be utilized to meet the identified tasks. We argue that our newly proposed design provides both consistent orientation of the 2D mapping and an appropriate visual representation of individual solutions. We then demonstrate its applicability with two real-world multi-objective case studies. We conclude with a preliminary empirical evaluation and a qualitative usefulness assessment.
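
A hedged sketch of the core idea: map Pareto-optimal solutions (rows of objective values) onto a 2D grid with a SOM so that nearby cells hold similar trade-offs. The minisom library, the grid size, and the min-max normalization are illustrative assumptions; the paper's semantically-enhanced map adds orientation constraints this sketch omits.

import numpy as np
from minisom import MiniSom

def map_pareto_frontier(solutions, grid=10, iterations=1000):
    """Project a Pareto Frontier (n_solutions x n_objectives) onto a
    grid x grid SOM, returning the 2D cell of each solution; nearby
    cells hold solutions with similar objective trade-offs."""
    data = np.asarray(solutions, dtype=float)
    data = (data - data.min(axis=0)) / (np.ptp(data, axis=0) + 1e-12)
    som = MiniSom(grid, grid, data.shape[1], sigma=1.0, learning_rate=0.5)
    som.train_random(data, iterations)
    return [som.winner(row) for row in data]  # (i, j) cell per solution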