Smooth and Strong PCPs
Probabilistically checkable proofs (PCPs) can be verified using only a constant number of random queries, such that any correct claim has a proof that is always accepted, and incorrect claims are rejected with high probability (regardless of the alleged proof). We consider two possible features of PCPs:
- A PCP is strong if it rejects an alleged proof of a correct claim with probability proportional to its distance from some correct proof of that claim.
- A PCP is smooth if each location in a proof is queried with equal probability.
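In symbols, a hedged sketch of these two properties (the notation, including the constant c and the proof alphabet, is ours and not quoted from the paper):

```latex
% Sketch only; notation ours. V is a q-query PCP verifier for a set S,
% \pi \in \Sigma^{\ell} is an alleged proof for an input x \in S, and
% \Pi(x) denotes the set of correct proofs for x.
\[
  \textbf{Strong:}\quad \Pr\bigl[\,V^{\pi}(x)\ \text{rejects}\,\bigr] \;\ge\; c \cdot \delta\bigl(\pi, \Pi(x)\bigr)
  \qquad \text{for some constant } c > 0,
\]
\[
  \textbf{Smooth:}\quad \Pr\bigl[\,V\ \text{queries location } i\,\bigr] \;=\; \Pr\bigl[\,V\ \text{queries location } j\,\bigr]
  \qquad \text{for all } i, j \in [\ell],
\]
% where \delta denotes relative Hamming distance.
```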
We prove that all sets in NP have PCPs that are both smooth and strong, are of polynomial length, and can be verified based on a constant number of queries. This is achieved by following the proof of the PCP theorem of Arora, Lund, Motwani, Sudan and Szegedy (JACM, 1998), providing a stronger analysis of the Hadamard- and Reed-Muller-based PCPs and a refined PCP composition theorem. In fact, we show that any set in NP has a smooth, strong, canonical PCP of Proximity (PCPP), meaning that there is an efficiently computable bijection of NP witnesses to correct proofs. This improves on the recent construction of Dinur, Goldreich and Gur (ITCS, 2019) of PCPPs that are strong canonical but inherently non-smooth.
Our result implies the hardness of approximating the satisfiability of "stable" 3CNF formulae with bounded variable occurrence, where stable means that the number of clauses violated by an assignment is proportional to its distance from a satisfying assignment (in the relative Hamming metric). This proves a hypothesis used in the work of Friggstad, Khodamoradi and Salavatipour (SODA, 2019), suggesting a connection between the hardness of these instances and other stable optimization problems.
Pseudointelligence: A Unifying Framework for Language Model Evaluation
With large language models surpassing human performance on an increasing
number of benchmarks, we must take a principled approach for targeted
evaluation of model capabilities. Inspired by pseudorandomness, we propose
pseudointelligence, which captures the maxim that "(perceived) intelligence
lies in the eye of the beholder". That is, claims of intelligence are
meaningful only when their evaluator is taken into account. Concretely, we
propose a complexity-theoretic framework of model evaluation cast as a dynamic
interaction between a model and a learned evaluator. We demonstrate that this
framework can be used to reason about two case studies in language model
evaluation, as well as to analyze existing evaluation methods. Comment: EMNLP 2023 Findings
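As a rough analogue of the pseudorandomness definition that inspires the framework (our paraphrase; the quantifiers and naming are assumptions, not the paper's exact statement):

```latex
% Paraphrase only; not the paper's exact definition. A model M is pseudointelligent
% with respect to a class \mathcal{E} of (learned) evaluators and a target capability I
% if no evaluator in the class can tell an interaction with M apart from an interaction
% with an agent that genuinely has capability I:
\[
  \Bigl|\, \Pr\bigl[\,E \text{ accepts after interacting with } M\,\bigr]
        \;-\; \Pr\bigl[\,E \text{ accepts after interacting with } I\,\bigr] \,\Bigr| \;\le\; \varepsilon
  \qquad \text{for every } E \in \mathcal{E}.
\]
```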
Rigid Matrices From Rectangular PCPs
We introduce a variant of PCPs that we refer to as rectangular PCPs, wherein
proofs are thought of as square matrices, and the random coins used by the
verifier can be partitioned into two disjoint sets, one determining the row of
each query and the other determining the column.
We construct PCPs that are efficient, short, smooth and (almost-)rectangular.
As a key application, we show that proofs for hard languages in NTIME(2^n),
when viewed as matrices, are rigid infinitely often. This strengthens and
simplifies a recent result of Alman and Chen [FOCS, 2019] constructing explicit
rigid matrices in FNP. Namely, we prove the following theorem:
- There is a constant δ ∈ (0,1) such that there is an FNP-machine
that, for infinitely many N, on input 1^N outputs N × N matrices
with entries in {0,1} that are δN^2-far (in Hamming distance)
from matrices of rank at most 2^(log N / Ω(log log N)).
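For reference, the standard notion of matrix rigidity that the theorem invokes (our notation, not quoted from the abstract): a matrix is far from low rank if many entries must change before its rank drops.

```latex
% Standard rigidity definition (notation ours). A matrix M is s-far (in Hamming distance)
% from rank r if every matrix of rank at most r differs from M in more than s entries:
\[
  \mathcal{R}_M(r) \;=\; \min_{\operatorname{rank}(A) \le r}
  \bigl|\{\, (i,j) : M_{ij} \neq A_{ij} \,\}\bigr| \;>\; s .
\]
% In the theorem above, s corresponds to \delta N^2 and r to the stated rank bound.
```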
Our construction of rectangular PCPs starts with an analysis of how
randomness yields queries in the Reed-Muller-based outer PCP of Ben-Sasson,
Goldreich, Harsha, Sudan and Vadhan [SICOMP, 2006; CCC, 2005]. We then show how
to preserve rectangularity under PCP composition and a smoothness-inducing
transformation. This warrants refined and stronger notions of rectangularity,
which we prove for the outer PCP and its transforms. Comment: 36 pages, 3 figures
A Theory of Unsupervised Translation Motivated by Understanding Animal Communication
Recent years have seen breakthroughs in neural language models that capture
nuances of language, culture, and knowledge. Neural networks are capable of
translating between languages -- in some cases even between two languages where
there is little or no access to parallel translations, in what is known as
Unsupervised Machine Translation (UMT). Given this progress, it is intriguing
to ask whether machine learning tools can ultimately enable understanding
animal communication, particularly that of highly intelligent animals. Our work
is motivated by an ambitious interdisciplinary initiative, Project CETI, which
is collecting a large corpus of sperm whale communications for machine
analysis.
We propose a theoretical framework for analyzing UMT when no parallel data
are available and when it cannot be assumed that the source and target corpora
address related subject domains or possess similar linguistic structure. The
framework requires access to a prior probability distribution that should
assign non-zero probability to possible translations. We instantiate our
framework with two models of language. Our analysis suggests that accuracy of
translation depends on the complexity of the source language and the amount of
``common ground'' between the source language and target prior.
We also prove upper bounds on the amount of data required from the source
language in the unsupervised setting as a function of the amount of data
required in a hypothetical supervised setting. Surprisingly, our bounds suggest
that the amount of source data required for unsupervised translation is
comparable to the supervised setting. For one of the language models that we
analyze, we also prove a nearly matching lower bound.
Our analysis is purely information-theoretic and as such can inform how much
source data needs to be collected, but does not yield a computationally
efficient procedure.
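One way to picture the framework, as a schematic sketch under our own notation (the paper's actual formalization may differ): the translator is chosen to make the translated source corpus as plausible as possible under the target-language prior.

```latex
% Schematic only; notation ours. Given source samples x_1, ..., x_n, a class \mathcal{F}
% of candidate translators, and a prior P over target-language text, select
\[
  \hat{f} \;=\; \arg\max_{f \in \mathcal{F}} \; \prod_{i=1}^{n} P\bigl(f(x_i)\bigr).
\]
% The abstract's bounds then compare the amount of source data needed here with the
% amount needed when parallel (supervised) examples are available.
```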
On the Communication Complexity of Secure Multi-Party Computation With Aborts
A central goal of cryptography is Secure Multi-party Computation (MPC), where
parties desire to compute a function of their joint inputs without letting
any party learn about the inputs of its peers. Unfortunately, it is well-known
that MPC guaranteeing output delivery to every party is infeasible when a
majority of the parties are malicious. In fact, parties operating over a
point-to-point network (i.e. without access to a broadcast channel) cannot even
reach an agreement on the output when more than one third of the parties are
malicious (Pease, Shostak, and Lamport, JACM 1980).
Motivated by this infeasibility in the point-to-point model, Goldwasser and
Lindell (J. Cryptol 2005) introduced a definition of MPC that does not require
agreement, referred to as MPC with selective abort. Under this definition, any
party may abort the protocol if they detect malicious behavior. They showed
that MPC with selective abort is feasible for any number of malicious parties
by implementing a broadcast functionality with abort.
While the model of MPC with abort has attracted much attention over the
years, little is known about its communication complexity over point-to-point
networks. In this work, we study the communication complexity of MPC with abort
and devise nearly optimal, communication-efficient protocols in this model.
Namely, we prove trade-offs between the number of honest parties, the
communication complexity, and the locality of the protocols. Here, locality is
a bound on the number of peers with which each party must communicate. Comment: 13 pages, abstract shortened. PODC 202
A High School Camp on Algorithms and Coding in Jamaica
This is a report on JamCoders, a four-week long computer-science camp for
high school students in Jamaica. The camp teaches college-level coding and
algorithms, and targets academically excellent students in grades 9--11 (ages
14--17). Qualitative assessment shows that the camp was, in general terms, a
success. We reflect on the background and academic structure of the camp and
share key takeaways on designing and operating a successful camp. We analyze
data collected before, during and after the camp and map the effects of
demographic differences on student performance in camp. We conclude with a
discussion of possible improvements to our approach. Comment: To appear in Proceedings of the 55th ACM Technical Symposium on
Computer Science Education (SIGCSE), 2024.
UniMASK: Unified Inference in Sequential Decision Problems
Randomly masking and predicting word tokens has been a successful approach in
pre-training language models for a variety of downstream tasks. In this work,
we observe that the same idea also applies naturally to sequential
decision-making, where many well-studied tasks like behavior cloning, offline
reinforcement learning, inverse dynamics, and waypoint conditioning correspond
to different sequence maskings over a sequence of states, actions, and returns.
We introduce the UniMASK framework, which provides a unified way to specify
models which can be trained on many different sequential decision-making tasks.
We show that a single UniMASK model is often capable of carrying out many tasks
with performance similar to or better than single-task models. Additionally,
after fine-tuning, our UniMASK models consistently outperform comparable
single-task models. Our code is publicly available at
https://github.com/micahcarroll/uniMASK. Comment: NeurIPS 2022 (Oral). A prior version was published at an ICML
Workshop, available at arXiv:2204.1332
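To make the masking correspondence concrete, here is a minimal sketch of how different tasks become different masks over a trajectory (the shapes, the helper name, and the mask conventions are our assumptions, not the UniMASK API):

```python
import numpy as np

T = 4  # trajectory length (timesteps)

def make_masks(observed_states, observed_actions, T=T):
    """Build boolean masks over a (state, action) trajectory.
    True = token is given to the model as input; False = token must be predicted.
    Returns (state_mask, action_mask), each of shape (T,).
    Hypothetical helper for illustration only."""
    state_mask = np.zeros(T, dtype=bool)
    action_mask = np.zeros(T, dtype=bool)
    state_mask[list(observed_states)] = True
    action_mask[list(observed_actions)] = True
    return state_mask, action_mask

# Behavior cloning: states are observed, actions are predicted.
bc = make_masks(observed_states=range(T), observed_actions=[])

# Inverse dynamics: two consecutive states observed, the action between them predicted.
inverse_dynamics = make_masks(observed_states=[0, 1], observed_actions=[])

# Waypoint conditioning: current state and a future "waypoint" state observed,
# the actions in between predicted.
waypoint = make_masks(observed_states=[0, T - 1], observed_actions=[])

for name, (s_mask, a_mask) in [("behavior cloning", bc),
                               ("inverse dynamics", inverse_dynamics),
                               ("waypoint conditioning", waypoint)]:
    print(f"{name:20s} states={s_mask.astype(int)} actions={a_mask.astype(int)}")
```

A single model trained over varied masks of this kind can then be queried with any of these patterns at inference time, which is the unification the abstract describes.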
Humanity's Last Exam
Benchmarks are important tools for tracking the rapid advancements in large language model (LLM) capabilities. However, benchmarks are not keeping pace in difficulty: LLMs now achieve over 90% accuracy on popular benchmarks like MMLU, limiting informed measurement of state-of-the-art LLM capabilities. In response, we introduce Humanity's Last Exam (HLE), a multi-modal benchmark at the frontier of human knowledge, designed to be the final closed-ended academic benchmark of its kind with broad subject coverage. HLE consists of 3,000 questions across dozens of subjects, including mathematics, humanities, and the natural sciences. HLE is developed globally by subject-matter experts and consists of multiple-choice and short-answer questions suitable for automated grading. Each question has a known solution that is unambiguous and easily verifiable, but cannot be quickly answered via internet retrieval. State-of-the-art LLMs demonstrate low accuracy and calibration on HLE, highlighting a significant gap between current LLM capabilities and the expert human frontier on closed-ended academic questions. To inform research and policymaking upon a clear understanding of model capabilities, we publicly release HLE at https://lastexam.ai.