928 research outputs found
Pseudorandom Generators for Width-3 Branching Programs
We construct pseudorandom generators of improved seed length that fool ordered read-once branching programs (ROBPs) of width 3. For unordered ROBPs, we also construct pseudorandom generators of improved seed length. This is the first improvement for pseudorandom generators fooling width-3 ROBPs since the work of Nisan [Combinatorica, 1992].
Our constructions are based on the `iterated milder restrictions' approach of
Gopalan et al. [FOCS, 2012] (which further extends the Ajtai-Wigderson
framework [FOCS, 1985]), combined with the INW-generator [STOC, 1994] at the
last step (as analyzed by Braverman et al. [SICOMP, 2014]). For the unordered
case, we combine iterated milder restrictions with the generator of
Chattopadhyay et al. [CCC, 2018].
Two conceptual ideas that play an important role in our analysis are: (1) A
relabeling technique allowing us to analyze a relabeled version of the given
branching program, which turns out to be much easier. (2) Treating the number
of colliding layers in a branching program as a progress measure and showing
that it reduces significantly under pseudorandom restrictions.
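As context for the objects being fooled, here is a minimal sketch (ours, not the authors' construction) of an ordered ROBP represented as a per-layer transition table; `make_robp` and `evaluate` are hypothetical helper names:

```python
import random

# Illustrative only: a width-w ordered read-once branching program (ROBP) of
# length n. Layer i reads input bit i and applies one of two state maps.
def make_robp(n, w, rng):
    # transitions[i][b][s] = state reached from state s when layer i reads bit b
    return [[[rng.randrange(w) for s in range(w)] for b in (0, 1)]
            for i in range(n)]

def evaluate(robp, x, accept_state=0):
    # Start in state 0, read the bits in order, accept iff we end in accept_state.
    s = 0
    for layer, bit in zip(robp, x):
        s = layer[bit][s]
    return int(s == accept_state)

# A hand-built width-2 program computing parity: a 1-bit flips the state.
parity_robp = [[[0, 1], [1, 0]] for _ in range(4)]
print(evaluate(parity_robp, [1, 0, 1, 1], accept_state=1))  # -> 1
```

A pseudorandom restriction fixes a subset of the input bits; fixing bit i collapses layer i to a single state map, which is how layers can "collide" and the program can simplify.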
In addition, we achieve nearly optimal seed length for the classes of: (1) read-once polynomials, (2) locally-monotone ROBPs (generalizing read-once CNFs and DNFs), and (3) constant-width ROBPs that have a narrow layer within every block of consecutive layers.
Comment: 51 pages
A Mathematical Approach to the Study of the United States Code
The United States Code (Code) is a document containing over 22 million words
that represents a large and important source of Federal statutory law. Scholars
and policy advocates often discuss the direction and magnitude of changes in
various aspects of the Code. However, few have mathematically formalized the
notions behind these discussions or directly measured the resulting
representations. This paper addresses the current state of the literature in
two ways. First, we formalize a representation of the United States Code as the
union of a hierarchical network and a citation network over vertices containing
the language of the Code. This representation reflects the fact that the Code
is a hierarchically organized document containing language and explicit
citations between provisions. Second, we use this formalization to measure
aspects of the Code as codified in October 2008, November 2009, and March 2010.
These measurements allow for a characterization of the actual changes in the
Code over time. Our findings indicate that in the recent past, the Code has
grown in its amount of structure, interdependence, and language.
Comment: 5 pages, 6 figures, 2 tables
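The paper's representation can be illustrated with a toy sketch (our own, with invented section names): vertices carry the text of provisions, and a hierarchy edge set plus a citation edge set are laid over the same vertex set:

```python
# Toy model of the representation: the Code as the union of a hierarchical
# network and a citation network over vertices containing language.
# The section names and text below are invented for illustration.
text = {
    "Title 1": "",
    "Sec. 1-101": "Definitions ...",
    "Sec. 1-102": "As provided in Sec. 1-101 ...",
}
hierarchy = {("Title 1", "Sec. 1-101"), ("Title 1", "Sec. 1-102")}
citations = {("Sec. 1-102", "Sec. 1-101")}

def measures(text, hierarchy, citations):
    # The three kinds of growth tracked over time: structure (hierarchy
    # edges), interdependence (citation edges), and amount of language.
    return {
        "structure": len(hierarchy),
        "interdependence": len(citations),
        "language": sum(len(t.split()) for t in text.values()),
    }

print(measures(text, hierarchy, citations))
# -> {'structure': 2, 'interdependence': 1, 'language': 8}
```

Computing the same measures on successive snapshots (e.g. October 2008 vs. March 2010) is what allows changes in the Code to be characterized quantitatively.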
Visual search in ecological and non-ecological displays: Evidence for a non-monotonic effect of complexity on performance
Copyright © 2013 PLoS. This article has been made available through the Brunel Open Access Publishing Fund.
Considerable research has been carried out on visual search, with single or multiple targets. However, most studies have used artificial stimuli with low ecological validity. In addition, little is known about the effects of target complexity and expertise in visual search. Here, we investigate visual search in three conditions of complexity (detecting a king, detecting a check, and detecting a checkmate) with chess players of two levels of expertise (novices and club players). Results show that the influence of target complexity depends on the level of structure of the visual display. Different functional relationships were found between artificial (random chess positions) and ecologically valid (game positions) stimuli: with artificial, but not with ecologically valid stimuli, a 'pop out' effect was present when a target was visually more complex than the distractors but could be captured by a memory chunk. This suggests that caution should be exercised when generalising from experiments using artificial stimuli with low ecological validity to real-life stimuli.
This study is funded by Brunel University and the article is made available through the Brunel Open Access Publishing Fund.
Pseudorandomness for Regular Branching Programs via Fourier Analysis
We present an explicit pseudorandom generator for oblivious, read-once,
permutation branching programs of constant width that can read their input bits
in any order. The seed length is polylogarithmic in the length of the branching program. The previous best seed length known for this model followed as a special case of a generator due to Impagliazzo, Meka, and Zuckerman (FOCS 2012) for arbitrary branching programs of a given size. Our techniques also give non-trivial seed length for general oblivious, read-once branching programs of bounded width, which is incomparable to the results of Impagliazzo et al.
Our pseudorandom generator is similar to the one used by Gopalan et al. (FOCS 2012) for read-once CNFs, but the analysis is quite different; ours is based on Fourier analysis of branching programs. In particular, we show that, for an oblivious, read-once, regular branching program of bounded width, the Fourier mass at each level is bounded independently of the length of the program.
Comment: RANDOM 201
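To make the notion of level-wise Fourier mass concrete, here is a brute-force sketch (ours, not the paper's method) for small Boolean functions, using the ±1 convention:

```python
from itertools import combinations, product

def fourier_mass(f, n, k):
    # Level-k Fourier mass of f: {0,1}^n -> {-1,+1}, i.e. the sum of the
    # squared Fourier coefficients over all subsets S of size exactly k,
    # computed by brute force over all 2^n inputs.
    mass = 0.0
    for S in combinations(range(n), k):
        coeff = sum(f(x) * (-1) ** sum(x[i] for i in S)
                    for x in product((0, 1), repeat=n)) / 2 ** n
        mass += coeff ** 2
    return mass

# Parity of n bits concentrates all of its Fourier mass at the top level.
parity = lambda x: (-1) ** sum(x)
print(fourier_mass(parity, 4, 4))  # -> 1.0
print(fourier_mass(parity, 4, 2))  # -> 0.0
```

By Parseval's identity the masses over all levels sum to 1 for any ±1-valued function; the result described in the abstract is that for regular bounded-width ROBPs the mass at each level stays bounded no matter how long the program is.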
A general lower bound for collaborative tree exploration
We consider collaborative graph exploration with a team of agents. All agents start at a common vertex of an initially unknown graph and need to collectively visit all other vertices. We assume agents are deterministic, vertices are distinguishable, moves are simultaneous, and we allow agents to communicate globally. For this setting, we give the first non-trivial lower bounds that bridge the gap between small and large teams of agents. Remarkably, our bounds tightly connect to existing results in both domains.
First, we significantly extend a lower bound by Dynia et al. on the competitive ratio of collaborative tree exploration strategies to a much wider range of team sizes. Second, we provide a tight lower bound on the number of agents needed for any competitive exploration algorithm. In particular, we show a lower bound on the competitive ratio of any collaborative tree exploration algorithm in terms of the number of agents, while Dereniowski et al. gave a competitive algorithm for sufficiently large teams, with the bounds stated in terms of the diameter of the graph. Lastly, we show that, for any exploration algorithm using a bounded number of agents, there exist trees of arbitrarily large height that require a correspondingly large number of rounds, and we provide a simple algorithm that matches this bound for all trees.
Antibiotic Resistance Patterns of Bacterial Isolates from Blood in San Francisco County, California, 1996-1999
Countywide antibiotic resistance patterns may provide additional information beyond that obtained from national sampling or individual hospitals. We reviewed susceptibility patterns of selected bacterial strains isolated from blood in San Francisco County from January 1996 to March 1999. We found substantial hospital-to-hospital variability in proportional resistance to antibiotics in multiple organisms. This variability was not correlated with hospital indices such as number of intensive care unit or total beds, annual admissions, or average length of stay. We also found a significant increase in methicillin-resistant Staphylococcus aureus, vancomycin-resistant Enterococcus, and proportional resistance to multiple antipseudomonal antibiotics. We describe the utility, difficulties, and limitations of countywide surveillance.
Modulus Computational Entropy
The so-called {\em leakage-chain rule} is a very important tool used in many security proofs. It gives an upper bound on the entropy loss of a random variable when an adversary, having already learned some random variables correlated with it, obtains some further information about it. Analogously to the information-theoretic case, one might expect that for the \emph{computational} variants of entropy the loss likewise depends only on the length of the actual leakage. Surprisingly, Krenn et al.\ have shown recently that for the most commonly used definitions of computational entropy this holds only if the computational quality of the entropy deteriorates exponentially in the amount of information obtained in the past. This means that the current standard definitions of computational entropy do not allow one to fully capture leakage that occurred "in the past", which severely limits the applicability of this notion.
As a remedy for this problem we propose a slightly stronger definition of computational entropy, which we call the \emph{modulus computational entropy}, and use it as a technical tool that allows us to prove the desired chain rule, which depends only on the actual leakage and not on its history. Moreover, we show that the modulus computational entropy unifies other, sometimes seemingly unrelated, notions already studied in the literature in the context of information leakage and chain rules. Our results indicate that the modulus entropy is, up to now, the weakest restriction that guarantees that the chain rule for computational entropy works. As an example of application we demonstrate a few interesting cases where our restricted definition is fulfilled and the chain rule holds.
Comment: Accepted at ICTS 201
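The information-theoretic benchmark the abstract compares against can be stated explicitly; a standard formulation for average min-entropy (with our own choice of symbols: secret $X$, prior side information $Z$, leakage $A$) is:

```latex
% Information-theoretic leakage chain rule (standard form): if the leakage A
% takes at most 2^{\lambda} values, then conditioning on A reduces the average
% min-entropy of X by at most \lambda bits, regardless of the prior side
% information Z already held by the adversary.
\widetilde{H}_\infty(X \mid Z, A) \;\ge\; \widetilde{H}_\infty(X \mid Z) - \lambda
```

The surprise discussed in the abstract is that the computational analogues of this inequality fail unless the computational quality of the entropy is allowed to degrade with the amount of past leakage.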