Parallel Repetition From Fortification
The Parallel Repetition Theorem upper-bounds the value of a repeated (tensored) two-prover game in terms of the value of the base game and the number of repetitions. In this work we give a simple transformation on games, "fortification", and show that for fortified games, the value of the repeated game decreases perfectly exponentially with the number of repetitions, up to an arbitrarily small additive error. Our proof is combinatorial and short. As corollaries, we obtain: (1) Starting from a PCP Theorem with soundness error bounded away from 1, we get a PCP with arbitrarily small constant soundness error. In particular, starting with the combinatorial PCP of Dinur, we get a combinatorial PCP with low error. The latter can be used for hardness of approximation as in the work of Håstad. (2) Starting from the work of the author and Raz, we get a projection PCP theorem with the smallest soundness error known today. The theorem yields nearly a quadratic improvement in the size compared to previous work. We then discuss the problem of derandomizing parallel repetition, and the limitations of the fortification idea in this setting. We point out a connection between the problem of derandomizing parallel repetition and the problem of composition. This connection could shed light on the so-called Projection Games Conjecture, which asks for a projection PCP with minimal error.
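In symbols, the headline guarantee has the following shape (a schematic transcription of the sentence above, with $\mathrm{val}$ denoting game value, $k$ the number of repetitions, and $\varepsilon$ the arbitrarily small additive error; the paper's precise statement specifies how fortification controls $\varepsilon$):
\[
\mathrm{val}\!\left(G^{\otimes k}\right) \;\le\; \mathrm{val}(G)^{k} + \varepsilon .
\]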
ETH-Hardness of Approximating 2-CSPs and Directed Steiner Network
We study the 2-ary constraint satisfaction problems (2-CSPs), which can be stated as follows: given a constraint graph $G = (V, E)$, an alphabet set $\Sigma$ and, for each $\{u, v\} \in E$, a constraint $C_{uv} \subseteq \Sigma \times \Sigma$, the goal is to find an assignment $\sigma : V \to \Sigma$ that satisfies as many constraints as possible, where a constraint $C_{uv}$ is satisfied if $(\sigma(u), \sigma(v)) \in C_{uv}$.
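As a concrete reading of this definition, here is a minimal sketch (the names and the toy instance are illustrative) of the objective being maximized:

```python
# Minimal sketch of the 2-CSP objective defined above (names are illustrative).
# A constraint graph is a dict mapping an edge (u, v) to the set of allowed
# label pairs C_uv ⊆ Sigma × Sigma; an assignment maps vertices to labels.

def satisfied_constraints(constraints, assignment):
    """Count edges (u, v) whose constraint admits (assignment[u], assignment[v])."""
    return sum(
        1
        for (u, v), allowed in constraints.items()
        if (assignment[u], assignment[v]) in allowed
    )

# Toy instance: two vertices, alphabet {0, 1}, one "equality" constraint.
constraints = {("a", "b"): {(0, 0), (1, 1)}}
assert satisfied_constraints(constraints, {"a": 1, "b": 1}) == 1
assert satisfied_constraints(constraints, {"a": 0, "b": 1}) == 0
```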
While the approximability of 2-CSPs is quite well understood when $|\Sigma|$ is constant, many problems are still open when $|\Sigma|$ becomes super-constant. One such problem is whether it is hard to approximate 2-CSPs to within a polynomial factor of $|\Sigma| |V|$. Bellare et al. (1993) suggested that the answer to this question might be positive. Alas, despite efforts to resolve this conjecture, it remains open to this day.
In this work, we separate $|V|$ and $|\Sigma|$ and ask a related but weaker question: is it hard to approximate 2-CSPs to within a polynomial factor of $|V|$ (while $|\Sigma|$ may be super-polynomial in $|V|$)? Assuming the exponential time hypothesis (ETH), we answer this question positively by showing that no polynomial time algorithm can approximate 2-CSPs to within a factor of $|V|^{1 - o(1)}$. Note that our ratio is almost linear, which is almost optimal, as a trivial algorithm gives a $|V|$-approximation for 2-CSPs.
Thanks to a known reduction, our result implies an ETH-hardness of approximating Directed Steiner Network with ratio $k^{1/4 - o(1)}$ where $k$ is the number of demand pairs. The ratio is roughly the square root of the best known ratio achieved by polynomial time algorithms (Chekuri et al., 2011; Feldman et al., 2012).
Additionally, under Gap-ETH, our reduction for 2-CSPs not only rules out polynomial time algorithms, but also FPT algorithms parameterized by $|V|$. A similar statement applies for DSN parameterized by $k$.
Comment: 36 pages. A preliminary version appeared in ITCS'18.
A Birthday Repetition Theorem and Complexity of Approximating Dense CSPs
A $(k \times l)$-birthday repetition $G^{k \times l}$ of a two-prover game $G$ is a game in which the two provers are sent random sets of questions from $G$ of sizes $k$ and $l$ respectively. These two sets are sampled independently and uniformly among all sets of questions of those particular sizes. We prove the following birthday repetition theorem: when $G$ satisfies some mild conditions, $\mathrm{val}(G^{k \times l})$ decreases exponentially in $\Omega(kl/n)$, where $n$ is the total number of questions. Our result positively resolves an open question posed by Aaronson, Impagliazzo and Moshkovitz (CCC 2014).
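In symbols, the theorem says that under mild conditions on $G$,
\[
\mathrm{val}\!\left(G^{k \times l}\right) \;\le\; 2^{-\Omega(kl/n)} ,
\]
where $n$ is the total number of questions. The birthday-paradox flavor is that two independently drawn question sets of sizes $k$ and $l$ already intersect with noticeable probability once $kl$ is comparable to $n$.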
As an application of our birthday repetition theorem, we obtain new fine-grained hardness of approximation results for dense CSPs. Specifically, we establish a tight trade-off between running time and approximation ratio for dense CSPs by showing conditional lower bounds, integrality gaps and approximation algorithms. In particular, for any sufficiently large $i$ and for every $\varepsilon > 0$, we show the following results (a worked instance of the trade-off follows the list):
- We exhibit an $O(q^{1/i})$-approximation algorithm for dense Max $k$-CSPs with alphabet size $q$ via $O_k(i)$-level of Sherali-Adams relaxation.
- Through our birthday repetition theorem, we obtain an integrality gap of $q^{1/i}$ for $\tilde{\Omega}_k(i)$-level Lasserre relaxation for fully-dense Max $k$-CSP.
- Assuming that there is a constant $\varepsilon > 0$ such that Max 3SAT cannot be approximated to within a $(1 - \varepsilon)$ factor of the optimum in sub-exponential time, our birthday repetition theorem implies that any algorithm that approximates fully-dense Max $k$-CSP to within a $q^{1/i}$ factor takes $(nq)^{\Omega_k(i)}$ time, almost tightly matching the algorithmic result based on Sherali-Adams relaxation.
Comment: 45 pages.
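For intuition, here is a worked instance of this trade-off (treating the hidden $O_k(\cdot)$ and $\Omega_k(\cdot)$ constants as benign): taking $i = \log_2 q$ yields a constant approximation factor at quasi-polynomial cost, since
\[
q^{1/i} = q^{1/\log_2 q} = 2 \qquad \text{while the running time is} \qquad (nq)^{\Omega_k(\log q)} .
\]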
IOPs with Inverse Polynomial Soundness Error
We show that every language in NP has an Interactive Oracle Proof (IOP) with inverse polynomial soundness error and small query complexity. This achieves parameters that surpass all previously known PCPs and IOPs. Specifically, we construct an IOP with perfect completeness, soundness error $1/\mathrm{poly}(n)$, round complexity $O(\log\log n)$, proof length $\mathrm{poly}(n)$ over an alphabet of size $\mathrm{poly}(n)$, and query complexity $O(\log\log n)$. This is a step forward in the quest to establish the sliding-scale conjecture for IOPs (which would additionally require query complexity $O(1)$).
Our main technical contribution is a high-soundness small-query proximity test for the Reed-Solomon code. We construct an IOP of proximity for Reed-Solomon codes, over a field $\mathbb{F}$ with evaluation domain $L$ and degree $d$, with perfect completeness, soundness error (roughly) $\max\{1-\delta,\ \mathrm{poly}(\rho)\}$ for $\delta$-far functions, round complexity $O(\log\log d)$, proof length linear in $|L|$ over $\mathbb{F}$, and query complexity $O(\log\log d)$; here $\rho = (d+1)/|L|$ is the code rate. En route, we obtain a new high-soundness proximity test for bivariate Reed-Muller codes.
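To unpack the objects in this statement, here is a toy, brute-force illustration (my own, with illustrative parameters; exponential time over a tiny field, and emphatically not the paper's proximity test) of what it means for a function on the evaluation domain to be $\delta$-far from the Reed-Solomon code:

```python
# Toy illustration (not the paper's test): what "delta-far from Reed-Solomon"
# means. Brute-forces all polynomials of degree <= d over a tiny prime field.
from itertools import product

P = 7                          # field F_7 (illustrative)
L = [1, 2, 3, 4, 5, 6]         # evaluation domain, a subset of the field
d = 1                          # degree bound; rate rho = (d + 1) / |L|

def rs_distance(f):
    """Relative Hamming distance of f: L -> F_P from the degree-<=d RS code."""
    best = 1.0
    for coeffs in product(range(P), repeat=d + 1):  # all degree-<=d polynomials
        codeword = [sum(c * pow(x, j, P) for j, c in enumerate(coeffs)) % P
                    for x in L]
        dist = sum(a != b for a, b in zip(f, codeword)) / len(L)
        best = min(best, dist)
    return best

line = [(3 * x + 2) % P for x in L]      # exactly a degree-1 polynomial
noisy = line[:]
noisy[0] = (noisy[0] + 1) % P            # corrupt one evaluation
print(rs_distance(line))                  # 0.0: in the code
print(rs_distance(noisy))                 # 1/6: delta-far with delta = 1/6
```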
The IOP for NP is then obtained via a high-soundness reduction from NP to Reed-Solomon proximity testing with rate $\rho = 1/\mathrm{poly}(n)$ and distance $\delta = 1 - 1/\mathrm{poly}(n)$ (and applying our proximity test). Our constructions are direct and efficient, and hold the potential for practical realizations that would improve the state-of-the-art in real-world applications of IOPs.
Tracking Visible Features of Speech for Computer-Based Speech Therapy for Childhood Apraxia of Speech
At present, there are few, if any, effective computer-based speech therapy systems (CBSTs) that support the at-home component of clinical interventions for Childhood Apraxia of Speech (CAS). PROMPT, an established speech therapy intervention for CAS, has the potential to be supported via a CBST, which could increase engagement and provide valuable feedback to the child. However, the necessary computational techniques have not yet been developed and evaluated. In this thesis, I describe the development of some of the key underlying computational components required for such a system: camera-based tracking of visible features of speech related to jaw kinematics. These components would also be necessary for the serious game that we have envisioned.
Micromechanics of fatigue in woven and stitched composites
The goals of this research program were to: (1) determine how microstructural factors, especially the architecture of reinforcing fibers, control stiffness, strength, and fatigue life in 3D woven composites; (2) identify mechanisms of failure; (3) model composite stiffness; (4) model notched and unnotched strength; and (5) model fatigue life. We have examined a total of eleven different angle and orthogonal interlock woven composites. Extensive testing has revealed that these 3D woven composites possess an extraordinary combination of strength, damage tolerance, and notch insensitivity in compression and tension and in monotonic and cyclic loading. In many important regards, 3D woven composites far outstrip conventional 2D laminates or stitched laminates. Detailed microscopic analysis of damage has led to a comprehensive picture of the essential mechanisms of failure and how they are related to the reinforcement geometry. The critical characteristics of the weave architecture that promote favorable properties have been identified. Key parameters are tow size and the distributions in space and strength of geometrical flaws. The geometrical flaws should be regarded as controllable characteristics of the weave in design and manufacture. In addressing our goals, the simplest possible models of properties were always sought, in a blend of old and new modeling concepts. Nevertheless, certain properties, especially regarding damage tolerance, ultimate failure, and the detailed effects of weave architecture, require computationally intensive stochastic modeling. We have developed a new model, the 'binary model,' to carry out such tasks in the most efficient manner and with faithful representation of crucial mechanisms. This is the final report for contract NAS1-18840. It covers all work from April 1989 up to the conclusion of the program in January 1993.
Derandomized Parallel Repetition via Structured PCPs
A PCP is a proof system for NP in which the proof can be checked by a
probabilistic verifier. The verifier is only allowed to read a very small
portion of the proof, and in return is allowed to err with some bounded
probability. The probability that the verifier accepts a false proof is called
the soundness error, and is an important parameter of a PCP system that one
seeks to minimize. Constructing PCPs with sub-constant soundness error and, at
the same time, a minimal number of queries into the proof (namely two) is
especially important due to applications for inapproximability.
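To make the soundness-error bookkeeping concrete, here is a hedged toy sketch (illustrative names and instance, not the paper's construction) of a two-query projection-style check together with a Monte Carlo estimate of how often a fixed proof is accepted:

```python
# Toy two-query verifier sketch (illustrative, not the paper's construction).
# The verifier samples a constraint, reads two proof symbols, and checks a
# projection relation; the soundness error is the maximum acceptance
# probability over proofs of false statements.
import random

# Hypothetical instance: constraint (i, j, pi) requires proof[j] == pi[proof[i]].
constraints = [
    (0, 1, {0: 1, 1: 0}),   # proof[1] must equal NOT proof[0]
    (0, 2, {0: 0, 1: 1}),   # proof[2] must equal proof[0]
]

def verifier_accepts(proof):
    """One run: two queries into the proof, one projection check."""
    i, j, projection = random.choice(constraints)
    return proof[j] == projection[proof[i]]

def acceptance_probability(proof, trials=10_000):
    """Monte Carlo estimate of Pr[verifier accepts this fixed proof]."""
    return sum(verifier_accepts(proof) for _ in range(trials)) / trials

print(acceptance_probability([0, 1, 0]))  # satisfies both checks: ~1.0
print(acceptance_probability([0, 1, 1]))  # violates one of two checks: ~0.5
```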
In this work we construct such PCP verifiers, i.e., PCPs that make only two
queries and have sub-constant soundness error. Our construction can be viewed
as a combinatorial alternative to the "manifold vs. point" construction, which
is the only construction in the literature for this parameter range. The
"manifold vs. point" PCP is based on a low degree test, while our construction
is based on a direct product test. We also extend our construction to yield a decodable PCP (dPCP) with the same parameters. By plugging this dPCP into the scheme of Dinur and Harsha (FOCS 2009), one gets an alternative construction of the result of Moshkovitz and Raz (FOCS 2008), namely a construction of two-query PCPs with small soundness error and small alphabet size.
Our construction of a PCP is based on extending the derandomized direct product test of Impagliazzo, Kabanets and Wigderson (STOC 2009) to a derandomized parallel repetition theorem. More accurately, our PCP construction is obtained in two steps. We first prove a derandomized parallel repetition theorem for specially structured PCPs. Then, we show that any PCP can be transformed into one that has the required structure, by embedding it on a de Bruijn graph.
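For readers unfamiliar with the embedding target, here is a minimal sketch of a de Bruijn graph's edge structure (standard definition; the parameters are illustrative). The fixed out-degree and shift-based routing structure are what make such graphs convenient hosts for embeddings of this kind.

```python
# Minimal de Bruijn graph sketch (standard definition; parameters illustrative).
# Vertices are length-m strings over an alphabet; each vertex (s_1, ..., s_m)
# has an edge to every (s_2, ..., s_m, c), i.e., shift left and append a symbol.
from itertools import product

def de_bruijn_edges(alphabet, m):
    """Yield all directed edges of the de Bruijn graph on alphabet^m."""
    for node in product(alphabet, repeat=m):
        for c in alphabet:
            yield node, node[1:] + (c,)

edges = list(de_bruijn_edges("01", 2))
print(len(edges))   # 8: each of the 4 vertices has out-degree 2
print(edges[0])     # (('0', '0'), ('0', '0')) -- the self-loop on 00
```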
Variable autonomy assignment algorithms for human-robot interactions.
As robotic agents become increasingly present in human environments, task completion rates during human-robot interaction have grown into an increasingly important topic of research. Safe collaborative robots executing tasks under human supervision often augment their perception and planning capabilities through traded or shared control schemes. However, such systems are often prescribed only at the most abstract level, with the meticulous details of implementation left to the designer's prerogative. Without a rigorous structure for implementing controls, the work of design is frequently left to ad hoc mechanisms with only bespoke guarantees of systematic efficacy, if any such proof is forthcoming at all.

Herein, I present two quantitatively defined models for implementing sliding-scale variable autonomy, in which levels of autonomy are determined by the relative efficacy of autonomous subroutines. I experimentally test the resulting Variable Autonomy Planning (VAP) algorithm against a traditional traded control scheme in a pick-and-place task, and apply the Variable Autonomy Tasking algorithm to the implementation of a robot performing a complex sanitation task in real-world environs. Results show that prioritizing autonomy levels with higher success rates, as encoded into VAP, allows users to effectively and intuitively select optimal autonomy levels for efficient task completion. Further, the Pareto optimal design structure of the VAP+ algorithm allows for significant performance improvements to be made through intervention planning based on systematic input determining failure probabilities through sensorized measurements.

This thesis describes the design, analysis, and implementation of these two algorithms, with a particular focus on the VAP+ algorithm. The core conceit is that they are methods for rigorously defining locally optimal plans for traded control shared between a human and one or more autonomous processes. The VAP+ algorithm is derived from an earlier algorithmic model, the VAP algorithm, developed to address the issue of rigorous, repeatable assignment of autonomy levels based on system data, which provides guarantees on the basis of the failure-rate sorting of paired autonomous and manual subtask achievement systems. Using only probability ranking to define levels of autonomy, the VAP algorithm is able to sort modules into optimizable ordered sets, but is limited to solving only sequential task assignments. By constructing a joint cost metric for the entire plan, and by implementing a back-to-front calculation scheme for this metric, the VAP+ algorithm can generate optimal planning solutions that minimize the expected cost, as amortized over time, funds, accuracy, or any combination of such metrics. The algorithm is additionally very efficient, and able to perform on-line assessments of environmental changes to the conditional probabilities associated with plan choices, should a suitable model for determining these probabilities be present. This system, as a paired set of two algorithms and a design augmentation, forms the VAP+ algorithm in full.
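Here is a hedged sketch of the back-to-front expected-cost calculation described above (my illustration of the general dynamic-programming idea, not the dissertation's implementation; all costs and success probabilities are hypothetical):

```python
# Hedged illustration of a back-to-front expected-cost plan over sequential
# subtasks with selectable autonomy levels. This is a sketch of the general
# idea, not the dissertation's implementation; all numbers are hypothetical.

# Each subtask offers (label, cost, success_probability) options; on failure,
# a human takes over at a fixed recovery cost (assumed to always succeed).
subtasks = [
    [("auto", 1.0, 0.90), ("manual", 4.0, 1.00)],
    [("auto", 2.0, 0.60), ("manual", 5.0, 1.00)],
]
RECOVERY_COST = 6.0

def plan(subtasks):
    """Back-to-front DP: choose, per subtask, the mode minimizing expected cost."""
    value = 0.0                      # expected cost-to-go after the last subtask
    choices = []
    for options in reversed(subtasks):
        best_label, best_cost = min(
            ((label, cost + (1.0 - p) * RECOVERY_COST + value)
             for label, cost, p in options),
            key=lambda x: x[1],
        )
        value = best_cost
        choices.append(best_label)
    return list(reversed(choices)), value

print(plan(subtasks))  # (['auto', 'auto'], 6.0) with these toy numbers
```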
Algorithm Development for Hyperspectral Anomaly Detection
This dissertation proposes and evaluates a novel anomaly detection algorithm suite for ground-to-ground, or air-to-ground, applications requiring automatic target detection using hyperspectral (HS) data. Targets are manmade objects in natural background clutter under unknown illumination and atmospheric conditions. The use of statistical models herein is purely for motivation of particular formulas for calculating anomaly output surfaces. In particular, formulas from semiparametrics are utilized to obtain novel forms for output surfaces, and alternative scoring algorithms are proposed to calculate output surfaces that are comparable to those of semiparametrics. Evaluation uses both simulated data and real HS data from a joint data collection effort between the Army Research Laboratory and the Army Armament Research Development & Engineering Center.

A data transformation method is presented for use by the two-sample data structure univariate semiparametric and nonparametric scoring algorithms, such that the two-sample data are mapped from their original multivariate space to a univariate domain, where the statistical power of the univariate scoring algorithms is shown to be improved relative to existing multivariate scoring algorithms testing the same two-sample data. An exhaustive simulation experimental study is conducted to assess the performance of different HS anomaly detection techniques, where the null and alternative hypotheses are completely specified, including all parameters, using multivariate normal and mixtures of multivariate normal distributions.

Finally, for ground-to-ground anomaly detection applications, where the unknown scales of targets add to the problem complexity, a novel global anomaly detection algorithm suite is introduced, featuring autonomous partial random sampling (PRS) of the data cube. The PRS method is proposed to automatically sample the unknown background clutter in the test HS imagery, and by repeating this process multiple times, one can achieve a desirably low cumulative probability of taking target samples by chance and using them as background samples. This probability is modeled by the binomial distribution family, where the only target-related parameter (the proportion of target pixels potentially covering the imagery) is shown to be robust. PRS requires a suitable scoring algorithm to compare samples, and applying PRS with the new two-step univariate detectors is shown to outperform existing multivariate detectors.
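The binomial bookkeeping behind PRS can be made concrete with a short hedged sketch (all numbers are hypothetical; the only target-related input is the assumed proportion of target pixels):

```python
# Hedged sketch of the binomial model behind partial random sampling (PRS).
# Hypothetical numbers: the only target-related parameter is the assumed
# proportion of target pixels covering the imagery.
from math import comb

t = 0.005     # assumed proportion of target pixels in the imagery
s = 25        # pixels drawn per random background sample
m = 11        # number of independent repetitions of the sampling

# Probability a single sample of s pixels contains at least one target pixel
# (sampling modeled as s independent Bernoulli(t) draws).
p_contaminated = 1.0 - (1.0 - t) ** s

# Binomial tail: probability that a majority of the m samples are contaminated,
# i.e., that chance target pickups overwhelm the background estimate.
p_majority_bad = sum(
    comb(m, k) * p_contaminated**k * (1.0 - p_contaminated) ** (m - k)
    for k in range(m // 2 + 1, m + 1)
)
print(f"P(one sample contaminated) = {p_contaminated:.3f}")   # ~0.118
print(f"P(majority contaminated)   = {p_majority_bad:.4f}")   # ~0.0007
```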