BizBench: A Quantitative Reasoning Benchmark for Business and Finance
As large language models (LLMs) impact a growing number of complex domains,
it is becoming increasingly important to have fair, accurate, and rigorous
evaluation benchmarks. Evaluating the reasoning skills required for business
and financial NLP stands out as a particularly difficult challenge. We
introduce BizBench, a new benchmark for evaluating models' ability to reason
about realistic financial problems. BizBench comprises 8 quantitative reasoning
tasks. Notably, BizBench targets the complex task of question answering (QA)
over structured and unstructured financial data via program synthesis (i.e.,
code generation). We introduce three diverse financially-themed code-generation
tasks from newly collected and augmented QA data. Additionally, we isolate
two distinct financial reasoning capabilities required to solve these QA
tasks: reading comprehension of financial text and tables, needed to extract
correct intermediate values; and domain knowledge (e.g., financial formulas),
needed to compute complex solutions. Collectively, these
tasks evaluate a model's financial background knowledge, ability to extract
numeric entities from financial documents, and capacity to solve problems with
code. We conduct an in-depth evaluation of open-source and commercial LLMs,
illustrating that BizBench is a challenging benchmark for quantitative
reasoning in the finance and business domain.
Comment: Work in progress.
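
To make the program-synthesis setting concrete, the sketch below shows the kind of question BizBench targets; the question, figures, and function name are illustrative inventions, not items from the benchmark. The model must combine extracted intermediate values with a financial formula and return the answer as executable Python.

def answer() -> float:
    """Q: Revenue grew from $1.2M to $1.8M over 3 years. What was the CAGR?"""
    start_value = 1_200_000  # intermediate value extracted from the filing
    end_value = 1_800_000
    years = 3
    # Domain knowledge: CAGR = (end / start) ** (1 / years) - 1
    return (end_value / start_value) ** (1 / years) - 1

print(f"{answer():.4f}")  # 0.1447, i.e., roughly 14.5% annual growth

One appeal of this format is that generated programs can be graded by executing them and comparing the printed output to the gold answer, rather than by matching free-form text.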
DocFinQA: A Long-Context Financial Reasoning Dataset
For large language models (LLMs) to be effective in the financial domain --
where each decision can have a significant impact -- it is necessary to
investigate realistic tasks and data. Financial professionals often interact
with documents that are hundreds of pages long, but most financial research
datasets only deal with short excerpts from these documents. To address this,
we introduce a long-document financial QA task. We augment 7,437 questions from
the existing FinQA dataset with the full-document context, extending the
average context length from under 700 words in FinQA to 123k words in DocFinQA.
We conduct extensive experiments over retrieval-based QA pipelines and
long-context language models. DocFinQA poses a significant challenge even for
state-of-the-art systems. We also provide a case study on the longest
documents in DocFinQA and find that models particularly struggle with them.
Addressing these challenges may have a wide-reaching impact across
applications where specificity and long-range context are critical, such as
gene sequences and legal contract analysis.
Comment: 13 pages.
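
The retrieval-based pipelines referred to above reduce to a short sketch: split the long filing into overlapping chunks, score each chunk against the question, and pack the top-scoring chunks into the model's prompt. The chunk sizes, the crude lexical scorer, and the prompt format below are assumptions for illustration, not the paper's exact setup; real pipelines typically use dense embedding retrieval instead.

def chunk(document: str, size: int = 200, stride: int = 100) -> list[str]:
    """Split a long document into overlapping word windows."""
    words = document.split()
    return [" ".join(words[i:i + size]) for i in range(0, len(words), stride)]

def score(question: str, passage: str) -> int:
    """Crude lexical relevance: count question words present in the passage."""
    passage_words = set(passage.lower().split())
    return sum(w.lower() in passage_words for w in question.split())

def build_prompt(question: str, document: str, k: int = 4) -> str:
    """Retrieve the top-k chunks and pack them into a QA prompt for an LLM."""
    top = sorted(chunk(document), key=lambda c: score(question, c), reverse=True)[:k]
    return "Context:\n" + "\n---\n".join(top) + f"\n\nQuestion: {question}\nAnswer:"

The failure mode DocFinQA exposes is visible in this sketch: when the evidence for a question is scattered across a 123k-word filing, no small set of retrieved chunks may contain all of it.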
APRICOT: A Dataset of Physical Adversarial Attacks on Object Detection
Physical adversarial attacks threaten to fool object detection systems, but
reproducible research on the real-world effectiveness of physical patches and
how to defend against them requires a publicly available benchmark dataset. We
present APRICOT, a collection of over 1,000 annotated photographs of printed
adversarial patches in public locations. The patches target several object
categories for three COCO-trained detection models, and the photos represent
natural variation in position, distance, lighting conditions, and viewing
angle. Our analysis suggests that maintaining adversarial robustness in
uncontrolled settings is highly challenging, but it is still possible to
produce targeted detections under white-box and sometimes black-box settings.
We establish baselines for defending against adversarial patches through
several methods, including a detector supervised with synthetic data and
unsupervised methods such as kernel density estimation, Bayesian uncertainty,
and reconstruction error. Our results suggest that adversarial patches can be
effectively flagged, both in a high-knowledge, attack-specific scenario and in
an unsupervised setting where patches are detected as anomalies in natural
images. This dataset and the described experiments provide a benchmark for
future research on the effectiveness of and defenses against physical
adversarial objects in the wild.
Comment: 23 pages, 14 figures, 3 tables. Updated version as accepted to ECCV 2020.
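
As a concrete illustration of the unsupervised defenses above, the sketch below scores inputs by their density under a kernel density estimate fit to clean-image features; low-density inputs are flagged as potential patches. The feature extractor is stubbed with random data here, and this shows the generic KDE idea rather than the paper's exact configuration; a real defense would use detector activations or image statistics.

import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(0)

# Stand-in for features of clean, patch-free images: shape (dims, samples).
clean_features = rng.normal(size=(8, 500))
kde = gaussian_kde(clean_features)

def anomaly_score(features: np.ndarray) -> float:
    """Negative log-density under the clean model; higher = more anomalous."""
    density = kde(features.reshape(-1, 1))[0]
    return -float(np.log(density + 1e-12))

print(anomaly_score(rng.normal(size=8)))        # near the clean distribution: low
print(anomaly_score(rng.normal(size=8) + 6.0))  # far from it: high, flagged

Thresholding this score separates the two cases; choosing the threshold trades false alarms on unusual-but-clean images against missed patches.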