Neuron Sensitivity Guided Test Case Selection for Deep Learning Testing
Deep Neural Networks (DNNs) have been widely deployed in software to address
various tasks (e.g., autonomous driving, medical diagnosis). However, they
can also produce incorrect behaviors that result in financial losses and even
threaten human safety. To reveal and repair incorrect behaviors in DNNs,
developers often collect rich unlabeled datasets from the natural world and
label them to test the DNN models. However, properly labeling such large
unlabeled datasets is a highly expensive and time-consuming task.
To address this problem, we propose NSS, Neuron Sensitivity guided test case
Selection, which reduces labeling time by selecting valuable test cases from
unlabeled datasets. NSS leverages the internal neuron information induced by
test cases to select valuable test cases, i.e., those most likely to cause
the model to behave incorrectly. We evaluate NSS on four widely used datasets
and four well-designed DNN models against SOTA baseline methods. The results
show that NSS performs well in assessing the test cases' probability of fault
triggering and their model improvement capabilities. Specifically, compared
with baseline approaches, NSS obtains a higher fault detection rate (e.g.,
when selecting 5% of test cases from the unlabeled dataset in the MNIST &
LeNet1 experiment, NSS obtains an 81.8% fault detection rate, 20% higher than
the baselines).
Chain-of-Thought Hub: A Continuous Effort to Measure Large Language Models' Reasoning Performance
As large language models (LLMs) are continuously being developed, their
evaluation becomes increasingly important yet challenging. This work proposes
Chain-of-Thought Hub, an open-source evaluation suite for the multi-step
reasoning capabilities of large language models. We are interested in this
setting for two reasons: (1) from the behavior of the GPT and PaLM model
families, we observe that complex reasoning is likely to be a key
differentiator between weaker and stronger LLMs; (2) we envisage large
language models becoming the next-generation computational platform and
fostering an ecosystem of new LLM-based applications; this naturally requires
the foundation models to perform complex tasks that often involve the
composition of linguistic and logical operations. Our approach is to compile
a suite of challenging reasoning benchmarks to track the progress of LLMs.
Our current results show that: (1) model scale clearly correlates with
reasoning capabilities; (2) as of May 2023, Claude-v1.3 and PaLM-2 are the
only two models comparable with GPT-4, while open-source models still lag
behind; (3) LLaMA-65B performs close to code-davinci-002, indicating that
with successful further development, such as reinforcement learning from
human feedback (RLHF), it has great potential to come close to GPT-3.5-Turbo.
Our results also suggest that for the open-source efforts to catch up, the
community may focus more on building better base models and exploring RLHF.

Comment: Preprint. Code at https://github.com/FranxYao/chain-of-thought-hub
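
For concreteness, a minimal accuracy harness in the spirit of such a suite
might look like the following Python sketch; the generate callable stands in
for any LLM completion function, and the prompt and answer-extraction
heuristic are assumptions, not the Chain-of-Thought Hub code.

    import re

    # Hypothetical harness, NOT the Chain-of-Thought Hub implementation.
    COT_PROMPT = "Q: {question}\nA: Let's think step by step."

    def extract_final_number(text):
        # Take the last number in the completion as the model's answer.
        nums = re.findall(r"-?\d+(?:\.\d+)?", text.replace(",", ""))
        return nums[-1] if nums else None

    def cot_accuracy(generate, dataset):
        # dataset: list of (question, gold_answer) pairs, e.g. GSM8K-style.
        correct = sum(
            extract_final_number(
                generate(COT_PROMPT.format(question=q))) == str(gold)
            for q, gold in dataset
        )
        return correct / len(dataset)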