468 research outputs found
Sustainability Via Servicing: From Individual Action To Institutional Action
The servicizing of products constitutes a powerful tool for reducing the environmental footprint across the stages of a product's physical-resource life cycle, ultimately yielding a more sustainable solution. It can be achieved via the co-creation of various clean services (CleanServs) by individuals. But achieving the goal of sustainable consumption will require accelerating the development of organized, mass-use frameworks such as the shareconomy and eco-labeling. In this frame, the notion of the product-service system (PSS), which offers access to a solution rather than ownership of the goods or assets needed for that solution, also promotes greater responsibility and higher levels of obligation on the part of both provider and customer.
Adaptive Data Depth via Multi-Armed Bandits
Data depth, introduced by Tukey (1975), is an important tool in data science,
robust statistics, and computational geometry. One chief barrier to its broader
practical utility is that many common measures of depth are computationally
intensive, requiring on the order of $n^d$ operations to exactly compute the
depth of a single point within a data set of $n$ points in $d$-dimensional
space. Often however, we are not directly interested in the absolute depths of
the points, but rather in their relative ordering. For example, we may want to
find the most central point in a data set (a generalized median), or to
identify and remove all outliers (points on the fringe of the data set with low
depth). With this observation, we develop a novel and instance-adaptive
algorithm for adaptive data depth computation by reducing the problem of
exactly computing depths to an $n$-armed stochastic multi-armed bandit
problem which we can efficiently solve. We focus our exposition on simplicial
depth, developed by Liu (1990), which has emerged as a promising notion of
depth due to its interpretability and asymptotic properties. We provide general
instance-dependent theoretical guarantees for our proposed algorithms, which
readily extend to many other common measures of data depth including majority
depth, Oja depth, and likelihood depth. When specialized to the case where the
gaps in the data follow a power law distribution with parameter $\alpha < 2$, we
show that we can reduce the complexity of identifying the deepest point in the
data set (the simplicial median) from $O(n^d)$ to
$\tilde{O}(n^{d-(d-1)\alpha/2})$, where $\tilde{O}$ suppresses logarithmic
factors. We corroborate our theoretical results with numerical experiments on
synthetic data, showing the practical utility of our proposed methods. Comment: Keywords: multi-armed bandits, data depth, adaptivity, large-scale computation, simplicial depth
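A minimal sketch of the kind of reduction described above, under stated assumptions: each data point is one bandit arm, a single pull of arm i samples a random simplex from the other points and checks whether point i lies inside it, and successive elimination with Hoeffding confidence bounds discards points that cannot be the deepest. The example is restricted to d = 2, where simplices are triangles; all names are illustrative, not taken from the paper.

import numpy as np

def in_triangle(p, a, b, c):
    # Sign test: p lies in triangle (a, b, c) iff the three cross
    # products around the boundary share a sign.
    def cross(o, u, v):
        return (u[0] - o[0]) * (v[1] - o[1]) - (u[1] - o[1]) * (v[0] - o[0])
    s1, s2, s3 = cross(a, b, p), cross(b, c, p), cross(c, a, p)
    return (s1 >= 0 and s2 >= 0 and s3 >= 0) or (s1 <= 0 and s2 <= 0 and s3 <= 0)

def bandit_simplicial_median(X, delta=0.05, batch=64, max_rounds=200):
    n = len(X)
    active = np.arange(n)   # arms (points) still in contention
    pulls = np.zeros(n)     # simplices sampled per arm
    hits = np.zeros(n)      # sampled simplices containing the arm's point
    rng = np.random.default_rng(0)
    for _ in range(max_rounds):
        for i in active:
            others = np.delete(np.arange(n), i)
            for _ in range(batch):  # one pull = one random triangle
                j, k, l = rng.choice(others, size=3, replace=False)
                hits[i] += in_triangle(X[i], X[j], X[k], X[l])
            pulls[i] += batch
        mean = hits[active] / pulls[active]
        rad = np.sqrt(np.log(2 * n * pulls[active] ** 2 / delta) / (2 * pulls[active]))
        active = active[mean + rad >= np.max(mean - rad)]  # drop dominated arms
        if len(active) == 1:
            break
    return active[np.argmax(hits[active] / pulls[active])]

The elimination step is what makes the approach instance-adaptive: when depth gaps are large, most arms are discarded after few pulls, so far fewer than all possible simplices are ever examined.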
Reliable and Interpretable Drift Detection in Streams of Short Texts
Data drift, i.e., change in a model's input data, is one of the key factors
leading to machine learning model performance degradation over time.
Monitoring drift helps detect these issues and prevent their harmful
consequences. Meaningful drift interpretation is a fundamental step towards
effective re-training of the model. In this study we propose an end-to-end
framework for reliable, model-agnostic change-point detection and interpretation
in large task-oriented dialog systems, proven effective in multiple customer
deployments. We evaluate our approach and demonstrate its benefits with a novel
variant of an intent-classification training dataset, simulating customer requests
to a dialog system. We make the data publicly available. Comment: ACL2023 industry track (9 pages)
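As a rough illustration of model-agnostic change-point detection over a stream of short texts (a sketch under assumptions, not the paper's framework), one can embed each text with an arbitrary sentence encoder, freeze a reference window, and flag a change point whenever a sliding window's mean embedding drifts past a cosine-distance threshold. Here embed is a hypothetical encoder mapping a string to a vector.

import numpy as np

def detect_change_points(texts, embed, ref_size=500, win_size=200, thresh=0.15):
    vecs = np.stack([embed(t) for t in texts])
    vecs /= np.linalg.norm(vecs, axis=1, keepdims=True)  # cosine geometry
    ref = vecs[:ref_size].mean(axis=0)
    change_points = []
    for end in range(ref_size + win_size, len(vecs) + 1):
        win = vecs[end - win_size:end].mean(axis=0)
        dist = 1.0 - float(ref @ win) / (np.linalg.norm(ref) * np.linalg.norm(win))
        if dist > thresh:                   # distribution has drifted
            change_points.append(end - win_size)
            ref = win                       # reset reference after the change
    return change_points

Interpretation can then proceed by inspecting which texts (or which intents) dominate the post-change window relative to the reference.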
Predicting Question-Answering Performance of Large Language Models through Semantic Consistency
Semantic consistency of a language model is broadly defined as the model's
ability to produce semantically-equivalent outputs, given
semantically-equivalent inputs. We address the task of assessing
question-answering (QA) semantic consistency of contemporary large language
models (LLMs) by manually creating a benchmark dataset with high-quality
paraphrases for factual questions, and release the dataset to the community.
We further combine the semantic consistency metric with additional
measurements suggested in prior work as correlating with LLM QA accuracy, for
building and evaluating a framework for factual QA reference-less performance
prediction -- predicting the likelihood of a language model to accurately
answer a question. Evaluating the framework on five contemporary LLMs, we
demonstrate encouraging results that significantly outperform the baselines. Comment: EMNLP2023 GEM workshop, 17 pages
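One plausible way to instantiate such a consistency metric (an illustrative assumption, not necessarily the paper's definition) is mean pairwise agreement of normalized answers across a question and its paraphrases; ask_model below is a hypothetical stand-in for any LLM call.

from itertools import combinations
import re

def normalize(ans):
    # Lowercase and strip punctuation/articles so trivially different
    # surface forms of the same answer compare equal.
    ans = re.sub(r"[^\w\s]", "", ans.lower())
    return " ".join(w for w in ans.split() if w not in {"a", "an", "the"})

def consistency(question, paraphrases, ask_model):
    # Assumes at least one paraphrase, so there is at least one pair.
    answers = [normalize(ask_model(q)) for q in [question] + paraphrases]
    pairs = list(combinations(answers, 2))
    return sum(a == b for a, b in pairs) / len(pairs)  # score in [0, 1]

A low score on a question predicts that the model's answer is unstable and therefore more likely to be wrong, which is the intuition behind using consistency as a reference-less performance signal.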
Classifier Data Quality: A Geometric Complexity Based Method for Automated Baseline And Insights Generation
Testing Machine Learning (ML) models and AI-Infused Applications (AIIAs), or
systems that contain ML models, is highly challenging. In addition to the
challenges of testing classical software, it is acceptable and expected that
statistical ML models sometimes output incorrect results. A major challenge is
to determine when the level of incorrectness, e.g., model accuracy or F1 score
for classifiers, is acceptable and when it is not. In addition to business
requirements that should provide a threshold, it is a best practice to require
any proposed ML solution to out-perform simple baseline models, such as a
decision tree.
We have developed complexity measures, which quantify how difficult given
observations are to assign to their true class label; these measures can then
be used to automatically determine a baseline performance threshold. These
measures are superior to the best practice baseline in that, for a linear
computation cost, they also quantify each observation's classification
complexity in an explainable form, regardless of the classifier model used. Our
experiments with both numeric synthetic data and real natural language chatbot
data demonstrate that the complexity measures effectively highlight data
regions and observations that are likely to be misclassified. Comment: Accepted to EDSMLS workshop at AAAI conference
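For intuition, one simple classifier-agnostic complexity score in this spirit (a sketch, not the authors' exact measures) is the fraction of an observation's k nearest neighbors that carry a different label: per-observation scores are directly explainable, and their average suggests a baseline performance threshold.

import numpy as np
from sklearn.neighbors import NearestNeighbors

def knn_disagreement(X, y, k=5):
    # For each observation, the share of its k nearest neighbors whose
    # label differs from its own; high values flag hard-to-classify points.
    nn = NearestNeighbors(n_neighbors=k + 1).fit(X)
    idx = nn.kneighbors(X, return_distance=False)[:, 1:]  # drop self-match
    return (y[idx] != y[:, None]).mean(axis=1)            # scores in [0, 1]

# Usage sketch:
# scores = knn_disagreement(X_train, y_train)
# baseline = 1.0 - scores.mean()   # rough accuracy floor to beat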