Two Sides of the Coin Problem
In the coin problem, one is given n independent flips of a coin that has bias b > 0 towards either Head or Tail. The goal is to decide which side the coin is biased towards, with high confidence. An optimal strategy for solving the coin problem is to apply the majority function on the n samples. This simple strategy works as long as b > c/sqrt(n) for some constant c. However, computing majority is an impossible task for several natural computational models, such as bounded width read once branching programs and AC^0 circuits.
Brody and Verbin proved that a length n, width w read once branching program cannot solve the coin problem for b < O(1/(log n)^w). This result was tightened by Steinberger to O(1/(log n)^(w-2)). The coin problem in the model of AC^0 circuits was first studied by Shaltiel and Viola, and later by Aaronson who proved that a depth d size s Boolean circuit cannot solve the coin problem for b < O(1/(log s)^(d+2)).
This work has two contributions:
1. We strengthen Steinberger's result and show that any Santha-Vazirani source with bias b < O(1/(log n)^(w-2)) fools length n, width w read once branching programs. In other words, as long as the bias remains small, the strong independence assumption in the coin problem is completely redundant in the model of read once branching programs: the exact same result holds for a much more general class of sources.
2. We tighten Aaronson's result and show that a depth d, size s Boolean circuit cannot solve the coin problem for b < O(1/(log s)^(d-1)). Moreover, our proof technique is different, and we believe that it is simpler and more natural.
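The majority strategy described above is easy to simulate empirically. The sketch below (a toy experiment, not from the paper) estimates how often taking the majority of n flips of a coin with P[Head] = 1/2 + b correctly identifies the biased side, illustrating the b ~ 1/sqrt(n) threshold:

```python
import random

def coin_problem_majority(n, b, trials=2000, seed=0):
    """Estimate the success probability of the majority strategy on n
    flips of a coin with P[Head] = 1/2 + b (bias b towards Heads)."""
    rng = random.Random(seed)
    correct = 0
    for _ in range(trials):
        heads = sum(rng.random() < 0.5 + b for _ in range(n))
        if heads > n - heads:  # strict majority reports "Heads"
            correct += 1
    return correct / trials

# For b well above 1/sqrt(n), the majority is almost always right;
# for b well below 1/sqrt(n), it is barely better than a coin toss.
```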
Better Pseudorandom Generators from Milder Pseudorandom Restrictions
We present an iterative approach to constructing pseudorandom generators,
based on the repeated application of mild pseudorandom restrictions. We use
this template to construct pseudorandom generators for combinatorial rectangles
and read-once CNFs and a hitting set generator for width-3 branching programs,
all of which achieve near-optimal seed-length even in the low-error regime: We
get seed-length O(log (n/epsilon)) for error epsilon. Previously, only
constructions with seed-length O(log^{3/2} n) or O(log^2 n) were known for
these classes with polynomially small error.
The (pseudo)random restrictions we use are milder than those typically used
for proving circuit lower bounds in that we only set a constant fraction of the
bits at a time. While such restrictions do not simplify the functions
drastically, we show that they can be derandomized using small-bias spaces.
Comment: To appear in FOCS 201
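The notion of a "mild" restriction is easy to make concrete. The toy sketch below (an illustration under stated assumptions, not the paper's construction) samples a restriction that keeps each bit free with constant probability and fixes the rest; true randomness here stands in for the small-bias spaces used in the actual derandomization:

```python
import random

def mild_restriction(n, keep_fraction=0.5, rng=None):
    """Sample a restriction on n bits: each position stays free ('*')
    with probability keep_fraction, otherwise it is fixed to a random
    bit.  (Truly random bits here stand in for the small-bias spaces
    used to derandomize this step in the paper.)"""
    rng = rng or random.Random(0)
    return ['*' if rng.random() < keep_fraction else rng.randint(0, 1)
            for _ in range(n)]

def apply_restriction(f, rho):
    """Return the restricted function on rho's free positions, plus
    the number of free positions."""
    free = [i for i, v in enumerate(rho) if v == '*']
    def g(bits):
        x = list(rho)
        for i, b in zip(free, bits):
            x[i] = b
        return f(x)
    return g, len(free)

# Example: restrict a small read-once CNF (an AND of disjoint ORs).
f = lambda x: all(x[2 * i] or x[2 * i + 1] for i in range(len(x) // 2))
g, n_free = apply_restriction(f, mild_restriction(8))
```

Because only a constant fraction of bits is set, the restricted function keeps a constant fraction of its inputs alive, which is why repeated applications are needed before the function simplifies.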
Predictability, complexity and learning
We define {\em predictive information} I_pred(T) as the mutual
information between the past and the future of a time series. Three
qualitatively different behaviors are found in the limit of large observation
times T: I_pred(T) can remain finite, grow logarithmically, or grow
as a fractional power law. If the time series allows us to learn a model with a
finite number of parameters, then I_pred(T) grows logarithmically with
a coefficient that counts the dimensionality of the model space. In contrast,
power--law growth is associated, for example, with the learning of infinite
parameter (or nonparametric) models such as continuous functions with
smoothness constraints. There are connections between the predictive
information and measures of complexity that have been defined both in learning
theory and in the analysis of physical systems through statistical mechanics
and dynamical systems theory. Further, in the same way that entropy provides
the unique measure of available information consistent with some simple and
plausible conditions, we argue that the divergent part of I_pred(T)
provides the unique measure for the complexity of dynamics underlying a time
series. Finally, we discuss how these ideas may be useful in different problems
in physics, statistics, and biology.
Comment: 53 pages, 3 figures, 98 references, LaTeX2
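For a discrete time series, the mutual information between past and future can be estimated directly with a plug-in estimator over finite windows. The sketch below (a minimal illustration, with window length k as a stand-in for the observation time T) computes I(past; future) = H(past) + H(future) - H(past, future) from empirical frequencies:

```python
from collections import Counter
from math import log2

def predictive_information(series, k=3):
    """Plug-in estimate of I(past; future) using length-k windows of a
    discrete time series: H(past) + H(future) - H(past, future)."""
    def entropy(samples):
        counts = Counter(samples)
        total = sum(counts.values())
        return -sum(c / total * log2(c / total) for c in counts.values())
    pasts, futures, joints = [], [], []
    for t in range(k, len(series) - k + 1):
        past = tuple(series[t - k:t])
        future = tuple(series[t:t + k])
        pasts.append(past)
        futures.append(future)
        joints.append((past, future))
    return entropy(pasts) + entropy(futures) - entropy(joints)
```

On an i.i.d. sequence the estimate is near zero (the past tells us nothing), while on a periodic sequence the past determines the future and the estimate equals the entropy of the past window.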
Advice coins for classical and quantum computation
We study the power of classical and quantum algorithms equipped with nonuniform advice, in the form of a coin whose bias encodes useful information. This question takes on particular importance in the quantum case, due to a surprising result that we prove: a quantum finite automaton with just two states can be sensitive to arbitrarily small changes in a coin’s bias. This contrasts with classical probabilistic finite automata, whose sensitivity to changes in a coin’s bias is bounded by a classic 1970 result of Hellman and Cover.
Despite this finding, we are able to bound the power of advice coins for space-bounded classical and quantum computation. We define the classes BPPSPACE/coin and BQPSPACE/coin, of languages decidable by classical and quantum polynomial-space machines with advice coins. Our main theorem is that both classes coincide with PSPACE/poly. Proving this result turns out to require substantial machinery. We use an algorithm due to Neff for finding roots of polynomials in NC; a result from algebraic geometry that lower-bounds the separation of a polynomial’s roots; and a result on fixed-points of superoperators due to Aaronson and Watrous, originally proved in the context of quantum computing with closed timelike curves.
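The sensitivity phenomenon has a simple classical caricature: a two-state "rotation" automaton whose state angle drifts in proportion to the coin's bias. The sketch below (an illustrative simulation under stated assumptions, not the paper's exact automaton) rotates by +phi on Heads and -phi on Tails, so the expected drift after n flips is 2*n*b*phi, making arbitrarily small biases visible for large enough n:

```python
import random
from math import cos

def rotation_automaton_output(n, b, phi=0.01, trials=500, seed=1):
    """Simulate a toy 2-state rotation automaton on n coin flips with
    P[Head] = 1/2 + b: rotate the state angle by +phi on Heads and
    -phi on Tails, then 'measure'; the probability of reading the
    initial state is cos(angle)^2, averaged over trials."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        angle = 0.0
        for _ in range(n):
            angle += phi if rng.random() < 0.5 + b else -phi
        total += cos(angle) ** 2
    return total / trials
```

Even a bias of 0.05 shifts the measured statistic substantially after 1000 flips, whereas an unbiased coin leaves the state concentrated near its starting point.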
Decomposition Enhances Reasoning via Self-Evaluation Guided Decoding
We endow Large Language Models (LLMs) with fine-grained self-evaluation to
refine multi-step reasoning inference. We propose an effective prompting
approach that integrates self-evaluation guidance through stochastic beam
search. Our approach explores the reasoning search space using a
well-calibrated automatic criterion. This enables an efficient search to
produce higher-quality final predictions. With the self-evaluation guided
stochastic beam search, we also balance the quality-diversity trade-off in the
generation of reasoning chains. This allows our approach to adapt well with
majority voting and surpass the corresponding Codex-backboned baselines by
, , and on the GSM8K, AQuA, and StrategyQA benchmarks,
respectively, in few-shot accuracy. Analysis of our decompositional reasoning
finds it pinpoints logic failures and leads to higher consistency and
robustness. Our code is publicly available at
https://github.com/YuxiXie/SelfEval-Guided-Decoding.
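The decoding loop described above can be sketched abstractly. In the toy implementation below, `expand` and `self_eval` are hypothetical stand-ins for the generator's step proposals and the LLM's self-evaluation score; candidates are ranked by a weighted combination of generation log-probability and self-evaluation, and the next beam is chosen by Gumbel-perturbed top-k rather than greedily, which is what keeps the reasoning chains diverse:

```python
import math
import random

def self_eval_beam_search(expand, self_eval, init, beam_width=3, steps=4,
                          alpha=0.5, temperature=1.0, seed=0):
    """Stochastic beam search guided by self-evaluation.
    expand(chain) yields (next_step, logprob) candidates;
    self_eval(chain) returns a confidence in [0, 1] for the chain."""
    rng = random.Random(seed)
    beam = [([init], 0.0)]
    for _ in range(steps):
        candidates = []
        for chain, score in beam:
            for step, logp in expand(chain):
                new_chain = chain + [step]
                combined = score + (1 - alpha) * logp + \
                    alpha * math.log(max(self_eval(new_chain), 1e-9))
                candidates.append((new_chain, combined))
        # Gumbel perturbation gives a stochastic top-k
        # (sampling without replacement) instead of a greedy cut.
        def perturbed(cand):
            u = max(rng.random(), 1e-12)
            return cand[1] + temperature * (-math.log(-math.log(u)))
        candidates.sort(key=perturbed, reverse=True)
        beam = candidates[:beam_width]
    return max(beam, key=lambda c: c[1])
```

Setting `temperature` to zero recovers ordinary deterministic beam search; raising it trades per-chain quality for the diversity that majority voting benefits from.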
AutoTrading with reinforcement learning
Final Degree Project (Treballs Finals de Grau) in Computer Engineering, Facultat de Matemàtiques, Universitat de Barcelona, Year: 2021, Advisor: Eloi Puertas i Prats. Trading is the act of studying a financial market and making money from it through the buying and selling of assets. In this project, I try to automate the actions performed by a trader without requiring thorough knowledge of the financial market or of trading techniques. I use algorithms based on reinforcement learning techniques from other fields, such as robotics, that need no human interaction during the algorithm's execution. The main objective of this project is to investigate the feasibility of adapting these techniques to Deep Learning and their ability to cope with the volatility of cryptocurrencies. Furthermore, to show the results of these algorithms, the cryptocurrencies Bitcoin and ADA are used as a market study, obtaining their historical data and performing market analysis.
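The core loop of such an agent can be sketched with tabular Q-learning on a toy market. The example below (a minimal stand-in for the deep RL agents the project investigates; the state encoding and reward are illustrative assumptions) learns buy/sell/hold values from a price series:

```python
import random

def train_q_trader(prices, episodes=200, alpha=0.1, gamma=0.95,
                   epsilon=0.1, seed=0):
    """Tabular Q-learning on a toy trading task.
    State: (did the price just go up?, are we holding?).
    Actions: 0 = hold, 1 = buy, 2 = sell.
    Reward: the realized price change when a position is closed."""
    rng = random.Random(seed)
    Q = {}
    def q(s, a):
        return Q.get((s, a), 0.0)
    for _ in range(episodes):
        holding, entry = False, 0.0
        for t in range(1, len(prices) - 1):
            state = (prices[t] > prices[t - 1], holding)
            if rng.random() < epsilon:          # epsilon-greedy exploration
                action = rng.randrange(3)
            else:
                action = max(range(3), key=lambda a: q(state, a))
            reward = 0.0
            if action == 1 and not holding:
                holding, entry = True, prices[t]
            elif action == 2 and holding:
                holding, reward = False, prices[t] - entry
            next_state = (prices[t + 1] > prices[t], holding)
            best_next = max(q(next_state, a) for a in range(3))
            Q[(state, action)] = q(state, action) + alpha * (
                reward + gamma * best_next - q(state, action))
    return Q
```

A deep variant would replace the Q table with a neural network over a richer market state, which is where the volatility of cryptocurrencies becomes the central difficulty.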