Immunity and Pseudorandomness of Context-Free Languages
We discuss the computational complexity of context-free languages,
concentrating on two well-known structural properties---immunity and
pseudorandomness. An infinite language is REG-immune (resp., CFL-immune) if it
contains no infinite subset that is a regular (resp., context-free) language.
We prove that (i) there is a context-free REG-immune language outside REG/n and
(ii) there is a REG-bi-immune language that can be computed deterministically
using logarithmic space. We also show that (iii) there is a CFL-simple set,
where a CFL-simple language is an infinite context-free language whose
complement is CFL-immune. Analogously to REG-immunity, a REG-primeimmune
language has no polynomially dense subset that is also regular. We further
prove that (iv) there is a context-free language that is REG/n-bi-primeimmune.
Concerning pseudorandomness of context-free languages, we show that (v) CFL
contains REG/n-pseudorandom languages. Finally, we prove that (vi) against
REG/n, there exists an almost 1-1 pseudorandom generator computable in
nondeterministic pushdown automata equipped with a write-only output tape and
(vii) against REG, there is no almost 1-1 weakly pseudorandom generator
computable deterministically in linear time by a single-tape Turing machine.
Comment: A4, 23 pages, 10 pt. A complete revision of the initial version that
was posted in February 200
The complexity of parameters for probabilistic and quantum computation
In this dissertation we study some effects of allowing computational models to use parameters whose own computational complexity strongly affects the computational complexity of the languages computable in the model. We show that in both the probabilistic and quantum models there are parameter sets that yield noncomputable outcomes.

In Chapter 3 we define BPβP, the BPP class based on a coin with bias β. We then show that if β is BPP-computable then BPβP = BPP. We also show that each language L in P/CLog is in BPβP for some β; hence there are some β from which we can compute noncomputable languages. We also examine the robustness of the class BPP with respect to small deviations of the coin from fairness.

In Chapter 4 we consider measures based on polynomial-time computable sequences of biased coins in which the biases are bounded away from both zero and one (strongly positive P-sequences). We show that such a sequence β⃗ generates a measure μ_β⃗ equivalent to the uniform measure in the following sense: if C is a class of languages closed under positive, polynomial-time, truth-table reductions with queries of linear length, then C has μ_β⃗-measure zero if and only if it has measure zero relative to the uniform measure μ. The classes P, NP, BPP, P/Poly, PH, and PSPACE are among those to which this result applies. Thus the measures of these much-studied classes are robust with respect to changes of this type in the underlying probability measure.

In Chapter 5 we introduce the quantum computation model and the quantum complexity class BQP. We argue that the computational complexity of the amplitudes is a critical factor in determining the languages computable using the quantum model. Using results from Chapter 3, we show that the quantum model can also compute noncomputable languages from some amplitude sets.
Finally, we determine a restriction on the amplitude set that limits the model to the range of languages implicit in others' typical meaning of the class BQP.
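The robustness question for BPP under small deviations from coin fairness is closely related to the classical von Neumann trick, which extracts exactly unbiased bits from a coin of unknown bias β. The following is a minimal illustrative sketch, not the dissertation's construction; the bias value and function names are chosen for exposition:

```python
import random

def biased_coin(beta, rng):
    """Return 1 with probability beta, else 0."""
    return 1 if rng.random() < beta else 0

def von_neumann_bit(beta, rng):
    """Extract one unbiased bit from a beta-biased coin.

    Flip twice; accept the pair only if the flips differ.
    Both accepted outcomes (1,0) and (0,1) have probability
    beta * (1 - beta), so the returned bit is fair for any beta.
    """
    while True:
        a, b = biased_coin(beta, rng), biased_coin(beta, rng)
        if a != b:
            return a

rng = random.Random(0)
bits = [von_neumann_bit(0.7, rng) for _ in range(10000)]
print(sum(bits) / len(bits))  # close to 0.5 despite the 0.7 bias
```

The price of exact fairness is a random number of flips per output bit (expected 1 / (2β(1 − β)) pairs), which is one reason complexity-theoretic treatments instead ask when a biased-coin class like BPβP collapses to BPP.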
Resource-Bounded Balanced Genericity, Stochasticity and Weak Randomness
We introduce balanced t(n)-genericity, a refinement of the genericity concept of Ambos-Spies, Fleischhack and Huwig [2] which in addition controls the frequency with which a condition is met. We show that this concept coincides with the resource-bounded version of Church's stochasticity [6]. By uniformly describing these concepts and the weaker notions of stochasticity introduced by Wilber [19] and Ko [11] in terms of prediction functions, we clarify the relations among these resource-bounded stochasticity concepts. Moreover, we describe these concepts in the framework of Lutz's resource-bounded measure theory [13] based on martingales: we show that t(n)-stochasticity coincides with a weak notion of t(n)-randomness based on so-called simple martingales, but that it is strictly weaker than t(n)-randomness in the sense of Lutz.

1 Introduction

Over the last few years, resource-bounded versions of Baire category and Lebesgue measure have been introduced in complexity theory...
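The martingale framework mentioned above can be made concrete with a toy betting strategy. The payoff rule below satisfies the averaging law d(w) = (d(w0) + d(w1)) / 2 that defines martingales in resource-bounded measure; the strategy and test sequences are illustrative only:

```python
def run_martingale(bet, sequence, capital=1.0):
    """Play a betting strategy along a 0/1 sequence.

    bet(prefix) returns a fraction in [-1, 1] of the current capital
    wagered on the next bit: positive bets on 1, negative bets on 0.
    Fair payoff: capital gains the stake on a win and loses it on a
    loss, so d(w) = (d(w0) + d(w1)) / 2 holds at every prefix w.
    """
    prefix = []
    for b in sequence:
        f = bet(prefix)
        stake = abs(f) * capital
        guess = 1 if f > 0 else 0
        capital += stake if b == guess else -stake
        prefix.append(b)
    return capital

# A simple predictor: always wager half the capital on the next bit being 0.
always_zero = lambda prefix: -0.5

# It "succeeds" (capital grows exponentially) on the all-zeros sequence...
print(run_martingale(always_zero, [0] * 20))      # 1.5**20, large
# ...but loses most of its capital on an alternating sequence.
print(run_martingale(always_zero, [0, 1] * 10))   # 0.75**10, small
```

A language is covered by such a strategy when the capital is unbounded along its characteristic sequence; the notions of stochasticity in the abstract correspond to restricted (e.g. "simple") forms of these betting functions.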
Uncertainty Quantification in Data-Driven Simulation and Optimization: Statistical and Computational Efficiency
Models governing stochasticity in various systems are typically calibrated from data and are therefore subject to statistical errors and uncertainties that can lead to inferior decision making. This thesis develops statistically and computationally efficient data-driven methods for problems in stochastic simulation and optimization to quantify and hedge the impacts of these uncertainties.
The first half of the thesis focuses on efficient methods for tackling input uncertainty, which refers to the simulation output variability arising from the statistical noise in specifying the input models. Because the simulation noise is convolved with the input noise, existing bootstrap approaches use a two-layer sampling scheme and typically require substantial simulation effort. Chapter 2 investigates a subsampling framework that reduces the required effort by leveraging the form of the variance and its estimation error in terms of the data size and the sampling requirement in each layer. We show how the total required effort is reduced, and we explicitly identify the procedural specifications in our framework that guarantee relative consistency in the estimation, together with the corresponding optimal simulation budget allocations.

In Chapter 3 we study an optimization-based approach to constructing confidence intervals for simulation outputs under input uncertainty. This approach computes confidence bounds from simulation runs driven by probability weights defined on the data, obtained by solving optimization problems under suitably posited averaged divergence constraints. We illustrate how this approach offers benefits in computational efficiency and finite-sample performance compared to the bootstrap and the delta method. While the formulation resembles distributionally robust optimization, we explain the procedural design and develop tight statistical guarantees via a generalization of the empirical likelihood method.
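The two-layer structure that Chapter 2 seeks to economize can be sketched on a toy simulator: an outer layer bootstraps the input data (input noise) and an inner layer replicates the simulation under each bootstrapped input model (simulation noise). The simulator, data, and budget parameters below are illustrative stand-ins, not the thesis's actual procedure:

```python
import random
import statistics

def simulate(input_data, rng):
    """Toy stochastic simulation driven by an empirical input model:
    draw a value from the data and add independent simulation noise."""
    return rng.choice(input_data) + rng.gauss(0.0, 0.1)

def two_layer_bootstrap(data, outer_B=50, inner_R=20, seed=0):
    """Plain two-layer sampling for input uncertainty.

    Outer layer: resample the input data with replacement (input noise).
    Inner layer: run inner_R simulation replications under each
    bootstrapped input model (simulation noise). The spread of the
    outer means quantifies the output variability; the total budget
    is outer_B * inner_R runs, which subsampling schemes aim to cut.
    """
    rng = random.Random(seed)
    outer_means = []
    for _ in range(outer_B):
        boot = [rng.choice(data) for _ in data]          # input-level resample
        reps = [simulate(boot, rng) for _ in range(inner_R)]
        outer_means.append(statistics.fmean(reps))
    return statistics.fmean(outer_means), statistics.stdev(outer_means)

data = [1.2, 0.8, 1.5, 1.1, 0.9, 1.3]   # hypothetical observed input data
est, spread = two_layer_bootstrap(data)
print(est, spread)
```

Even this toy version makes the cost structure visible: the budget grows multiplicatively in the two layers, which is exactly the effort that a subsampling framework trades against estimation error.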
The second half develops uncertainty quantification techniques for certifying solution feasibility and optimality in data-driven optimization. Regarding optimality, Chapter 4 proposes a statistical method to estimate the optimality gap of a given solution to a stochastic optimization problem as an assessment of solution quality. Our approach is based on bootstrap aggregating (bagging) of resampled sample average approximation (SAA). We show how this approach leads to valid statistical confidence bounds for non-smooth optimization, and we demonstrate its statistical efficiency and stability, which are especially desirable in limited-data situations. Our theory views SAA as a kernel in an infinite-order symmetric statistic.

Regarding feasibility, Chapter 5 considers data-driven optimization under uncertain constraints, where solution feasibility is often ensured through a "safe" reformulation of the constraints such that an obtained solution is guaranteed feasible for the oracle formulation with high confidence. Such approaches generally involve an implicit estimation of the whole feasible set, which can scale rapidly with the problem dimension and in turn lead to over-conservative solutions. We investigate validation-based strategies that avoid set estimation by exploiting the intrinsic low dimensionality of the set of all possible solutions output by a given reformulation. We demonstrate how the obtained solutions satisfy statistical feasibility guarantees with light dimension dependence, and how they are asymptotically optimal and thus the least conservative with respect to the considered reformulation classes.
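The bagging-of-SAA idea from Chapter 4 can be illustrated on a toy problem, minimizing E[(x − ξ)²] over x, whose SAA optimum is simply the sample mean. Everything here (the problem, the candidate solution, the resample size) is an illustrative sketch rather than the thesis's method or theory:

```python
import random
import statistics

def saa_gap(sample, x_hat):
    """SAA estimate of the optimality gap of x_hat for
    min_x E[(x - xi)^2]; on a sample, the SAA optimum is the mean."""
    x_star = statistics.fmean(sample)
    obj = lambda x: statistics.fmean((x - xi) ** 2 for xi in sample)
    return obj(x_hat) - obj(x_star)          # always >= 0

def bagged_gap(data, x_hat, bags=200, k=None, seed=0):
    """Bagging resampled SAA: average the gap estimate over many
    size-k resamples (k < n), in the spirit of bootstrap aggregating."""
    rng = random.Random(seed)
    k = k or max(2, len(data) // 2)
    gaps = [saa_gap([rng.choice(data) for _ in range(k)], x_hat)
            for _ in range(bags)]
    return statistics.fmean(gaps), statistics.stdev(gaps)

rng0 = random.Random(1)
data = [rng0.gauss(0.0, 1.0) for _ in range(100)]   # synthetic toy data
gap_mean, gap_sd = bagged_gap(data, x_hat=0.5)      # true gap is 0.25
print(gap_mean, gap_sd)
```

The aggregation step smooths the non-smoothness of individual SAA solves, which is the intuition behind viewing SAA as a kernel of a symmetric statistic; the actual confidence-bound construction in the thesis is more involved than this averaging.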