Theory of spike timing based neural classifiers
We study the computational capacity of a model neuron, the Tempotron, which
classifies sequences of spikes by linear-threshold operations. We use
statistical mechanics and extreme value theory to derive the capacity of the
system in random classification tasks. In contrast to its static analog, the
Perceptron, the Tempotron's solution space consists of a large number of small
clusters of weight vectors. The capacity of the system per synapse is finite in
the large size limit and weakly diverges with the stimulus duration relative to
the membrane and synaptic time constants.
Comment: 4 pages, 4 figures. Accepted to Physical Review Letters on 19th Oct. 201
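The Tempotron's decision rule can be sketched as follows: each input spike is filtered through a postsynaptic potential kernel, the filtered traces are weighted and summed, and the neuron classifies the pattern as positive iff the resulting membrane potential crosses a threshold at any time. This is a minimal sketch; the time constants, stimulus duration, and discretization step below are illustrative choices, not the paper's parameters.

```python
import numpy as np

def psp_kernel(t, tau_m=15.0, tau_s=3.75):
    """Postsynaptic potential kernel: difference of exponentials, zero for t <= 0."""
    t = np.asarray(t, dtype=float)
    k = np.exp(-t / tau_m) - np.exp(-t / tau_s)
    return np.where(t > 0, k, 0.0)

def tempotron_classify(spike_times, weights, threshold=1.0, T=100.0, dt=1.0):
    """Return True iff the membrane potential crosses the threshold in [0, T).

    spike_times: one array of input spike times per synapse.
    weights: synaptic efficacies, one per synapse.
    """
    ts = np.arange(0.0, T, dt)
    v = np.zeros_like(ts)
    for w, spikes in zip(weights, spike_times):
        for s in spikes:
            v += w * psp_kernel(ts - s)  # linear summation of weighted PSPs
    return bool(np.max(v) >= threshold)  # linear-threshold decision
```

A stronger synapse can push the same spike train over the threshold while a weaker one cannot, which is exactly the linear-threshold operation on spike sequences the abstract refers to.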
Zero-Reachability in Probabilistic Multi-Counter Automata
We study the qualitative and quantitative zero-reachability problem in
probabilistic multi-counter systems. We identify the undecidable variants of
the problems, and then we concentrate on the remaining two cases. In the first
case, when we are interested in the probability of all runs that visit zero in
some counter, we show that the qualitative zero-reachability is decidable in
time which is polynomial in the size of a given pMC and doubly exponential in
the number of counters. Further, we show that the probability of all
zero-reaching runs can be effectively approximated up to an arbitrarily small
given error epsilon > 0 in time which is polynomial in log(epsilon),
exponential in the size of a given pMC, and doubly exponential in the number of
counters. In the second case, we are interested in the probability of all runs
that visit zero in some counter different from the last counter. Here we show
that the qualitative zero-reachability is decidable and SquareRootSum-hard, and
the probability of all zero-reaching runs can be effectively approximated up to
an arbitrarily small given error epsilon > 0 (these results apply to pMC
satisfying a suitable technical condition that can be verified in polynomial
time). The proof techniques invented in the second case allow us to construct
counterexamples for some classical results about ergodicity in stochastic Petri
nets.
Comment: 20 pages
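The quantitative question above (the probability of runs that visit zero in some counter) can be illustrated with a naive Monte Carlo estimator on a toy single-state pMC; real pMC also carry control states, and the paper's algorithm gives guaranteed error bounds rather than a sampled lower bound. All names and parameters here are illustrative.

```python
import random

def estimate_zero_reachability(step_dist, start, trials=20000, horizon=200, seed=1):
    """Monte Carlo lower bound on the probability that some counter hits zero.

    step_dist: list of (probability, increment-vector) pairs, one vector entry
    per counter (a toy single-state pMC). Truncating runs at `horizon` steps
    only under-counts zero-reaching runs, so the estimate approaches a lower
    bound on the true probability.
    """
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        c = list(start)
        for _ in range(horizon):
            r, acc = rng.random(), 0.0
            for p, delta in step_dist:  # sample one probabilistic step
                acc += p
                if r < acc:
                    c = [x + d for x, d in zip(c, delta)]
                    break
            if min(c) == 0:  # zero reached in some counter
                hits += 1
                break
    return hits / trials
```

With a downward drift the walk almost surely reaches zero, while with an upward drift from a high start the estimate stays near zero, matching the qualitative/quantitative split the abstract studies.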
Time lower bounds for nonadaptive turnstile streaming algorithms
We say a turnstile streaming algorithm is "non-adaptive" if, during updates,
the memory cells written and read depend only on the index being updated and
random coins tossed at the beginning of the stream (and not on the memory
contents of the algorithm). Memory cells read during queries may be decided
upon adaptively. All known turnstile streaming algorithms in the literature are
non-adaptive.
We prove the first non-trivial update time lower bounds for both randomized
and deterministic turnstile streaming algorithms, which hold when the
algorithms are non-adaptive. While there has been abundant success in proving
space lower bounds, there have been no non-trivial update time lower bounds in
the turnstile model. Our lower bounds hold against classically studied problems
such as heavy hitters, point query, entropy estimation, and moment estimation.
In some cases of deterministic algorithms, our lower bounds nearly match known
upper bounds.
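A Count-Min sketch is a standard example of a non-adaptive turnstile algorithm in the sense defined above: the cells touched by an update depend only on the updated index and the random seeds drawn before the stream begins, never on the memory contents. The width, depth, and seed below are illustrative.

```python
import random

class CountMinSketch:
    """Non-adaptive turnstile sketch: update(i, delta) touches cells fixed
    by the index i and the seeds drawn up front."""
    def __init__(self, width=256, depth=4, seed=7):
        rng = random.Random(seed)
        self.seeds = [rng.randrange(1 << 30) for _ in range(depth)]
        self.width = width
        self.table = [[0] * width for _ in range(depth)]

    def _cells(self, i):
        # One cell per row; depends only on i and the pre-drawn seeds.
        return [hash((s, i)) % self.width for s in self.seeds]

    def update(self, i, delta):
        # Turnstile model: delta may be negative.
        for row, col in enumerate(self._cells(i)):
            self.table[row][col] += delta

    def query(self, i):
        # With nonnegative cell contents this never underestimates.
        return min(self.table[row][col] for row, col in enumerate(self._cells(i)))
```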
Probabilistic Timed Automata with Clock-Dependent Probabilities
Probabilistic timed automata are classical timed automata extended with
discrete probability distributions over edges. We introduce clock-dependent
probabilistic timed automata, a variant of probabilistic timed automata in
which transition probabilities can depend linearly on clock values.
Clock-dependent probabilistic timed automata allow the modelling of a
continuous relationship between time passage and the likelihood of system
events. We show that the problem of deciding whether the maximum probability of
reaching a certain location is above a threshold is undecidable for
clock-dependent probabilistic timed automata. On the other hand, we show that
the maximum and minimum probability of reaching a certain location in
clock-dependent probabilistic timed automata can be approximated using a
region-graph-based approach.
Comment: Full version of a paper published at RP 201
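A clock-dependent edge can be pictured as a discrete distribution whose weights are linear in the current clock valuation; the guard bound, target names, and coefficient below are hypothetical, chosen only to illustrate the linear dependence.

```python
def edge_distribution(x, x_max=10.0):
    """Toy clock-dependent edge: with clock value x in [0, x_max], move to the
    'fast' target with probability x / x_max and to the 'slow' target
    otherwise. The dependence on x is linear, as in clock-dependent PTA."""
    if not 0.0 <= x <= x_max:
        raise ValueError("clock value outside the edge's guard")
    p_fast = x / x_max
    return {"fast": p_fast, "slow": 1.0 - p_fast}
```

The longer the automaton waits before taking the edge, the likelier the "fast" target becomes, which is the continuous relationship between time passage and event likelihood the abstract describes.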
Is This a Joke? Detecting Humor in Spanish Tweets
While humor has been historically studied from a psychological, cognitive and
linguistic standpoint, its study from a computational perspective is an area
yet to be explored in Computational Linguistics. There exist some previous
works, but a characterization of humor that allows its automatic recognition
and generation is far from being specified. In this work we build a
crowdsourced corpus of labeled tweets, annotated according to their humor
value, letting the annotators subjectively decide which are humorous. A humor
classifier for Spanish tweets is assembled based on supervised learning,
reaching a precision of 84% and a recall of 69%.
Comment: Preprint version, without referra
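The reported figures are the standard precision and recall, assuming the humorous class is the positive label:

```python
def precision_recall(y_true, y_pred):
    """Precision and recall for the positive (humorous) class, with labels in {0, 1}."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    precision = tp / (tp + fp) if tp + fp else 0.0  # fraction of predicted humor that is humor
    recall = tp / (tp + fn) if tp + fn else 0.0     # fraction of actual humor that is found
    return precision, recall
```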
Scalability and Total Recall with Fast CoveringLSH
Locality-sensitive hashing (LSH) has emerged as the dominant algorithmic
technique for similarity search with strong performance guarantees in
high-dimensional spaces. A drawback of traditional LSH schemes is that they may
have \emph{false negatives}, i.e., the recall is less than 100\%. This limits
the applicability of LSH in settings requiring precise performance guarantees.
Building on the recent theoretical "CoveringLSH" construction that eliminates
false negatives, we propose a fast and practical covering LSH scheme for
Hamming space called \emph{Fast CoveringLSH (fcLSH)}. Inheriting the design
benefits of CoveringLSH our method avoids false negatives and always reports
all near neighbors. Compared to CoveringLSH we achieve an asymptotic
improvement in the hash function computation time, as a function of the
dimensionality of the data and the number of hash tables. Our experiments on
synthetic and real-world data
the number of hash tables. Our experiments on synthetic and real-world data
sets demonstrate that \emph{fcLSH} is comparable (and often superior) to
traditional hashing-based approaches for search radius up to 20 in
high-dimensional Hamming space.
Comment: Short version appears in Proceedings of CIKM 201
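For contrast with CoveringLSH's no-false-negative guarantee, here is classical bit-sampling LSH for Hamming space, which can miss near neighbors because a near pair may disagree on the sampled bits in every table. The table count and bits per table are illustrative; this is not the fcLSH construction.

```python
import random

def hamming_lsh(point_bits, mask):
    """Project a binary vector onto the sampled bit positions in mask."""
    return tuple(point_bits[i] for i in mask)

def build_tables(points, dim, n_tables=8, bits_per_table=6, seed=3):
    """Hash every point into n_tables buckets, one random bit mask per table."""
    rng = random.Random(seed)
    masks = [sorted(rng.sample(range(dim), bits_per_table)) for _ in range(n_tables)]
    tables = [{} for _ in masks]
    for idx, p in enumerate(points):
        for t, mask in zip(tables, masks):
            t.setdefault(hamming_lsh(p, mask), []).append(idx)
    return masks, tables

def candidates(query, masks, tables):
    """Union of the query's buckets; may miss near neighbors (false negatives)."""
    out = set()
    for mask, t in zip(masks, tables):
        out.update(t.get(hamming_lsh(query, mask), []))
    return out
```

CoveringLSH instead chooses the masks so that every pair within the search radius is guaranteed to collide in at least one table.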
Magic number 7 ± 2 in networks of threshold dynamics
Information processing by random feed-forward networks consisting of units
with sigmoidal input-output response is studied by focusing on the dependence
of its outputs on the number of parallel paths M. It is found that the system
leads to a combination of on/off outputs when M is below a critical number,
while for larger M chaotic dynamics arises, resulting in a continuous
distribution of outputs. This universality of the critical number is explained
by combinatorial explosion, i.e., dominance of factorial over exponential
increase. Relevance of the result to the psychological magic number 7 ± 2 is
briefly discussed.
Comment: 6 pages, 5 figures
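The setting can be sketched as a random feed-forward network whose output pools M parallel paths of near-threshold sigmoidal units; the depth, gain, and weight range below are illustrative choices, not the paper's.

```python
import math
import random

def sigmoid(x, beta=40.0):
    """Steep sigmoidal response; large beta approaches a hard threshold unit."""
    return 1.0 / (1.0 + math.exp(-beta * x))

def random_path(depth, rng):
    """One feed-forward path: a chain of `depth` sigmoidal units with random weights."""
    weights = [rng.uniform(-1.0, 1.0) for _ in range(depth)]
    def run(x):
        for w in weights:
            x = sigmoid(w * (x - 0.5))  # each unit thresholds around 0.5
        return x
    return run

def network_output(x, paths):
    """Pool the M parallel paths by averaging their outputs."""
    return sum(run(x) for run in paths) / len(paths)
```

Because each steep unit pushes its output toward 0 or 1, a small number of pooled paths yields a few discrete output levels, while many paths can blur these into a broad distribution, which is the qualitative transition the abstract analyzes.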
Interprocedural Reachability for Flat Integer Programs
We study programs with integer data, procedure calls and arbitrary call
graphs. We show that, whenever the guards and updates are given by octagonal
relations, the reachability problem along control flow paths within some
language w1* ... wd* over program statements is decidable in Nexptime. To
achieve this upper bound, we combine a program transformation into the same
class of programs but without procedures, with an Np-completeness result for
the reachability problem of procedure-less programs. Besides the program, the
expression w1* ... wd* is also mapped onto an expression of a similar form but
this time over the transformed program statements. Several arguments involving
context-free grammars and their generative process enable us to give tight
bounds on the size of the resulting expression. The currently existing gap
between Np-hard and Nexptime can be closed to Np-complete when a certain
parameter of the analysis is assumed to be constant.
Comment: 38 pages, 1 figure
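Octagonal relations constrain pairs of variables by inequalities of the form ±x ± y ≤ c. A minimal membership check (the constraint representation is illustrative):

```python
def satisfies_octagon(valuation, constraints):
    """Check a valuation against octagonal constraints a*x + b*y <= c,
    where the coefficients a and b range over {-1, 0, 1}.

    constraints: iterable of (x, a, y, b, c) tuples; valuation maps
    variable names to integers.
    """
    for x, a, y, b, c in constraints:
        if a * valuation[x] + b * valuation[y] > c:
            return False
    return True
```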
Sources of Financial Fragility: Financial Factors in the Economics of Capitalism
Originally, a paper prepared for the conference "Coping with Financial Fragility: A Global Perspective," 7-9 September 1994, Maasdricht (sic).
Also included are handwritten and word-processed pages showing original drafts of the paper.
A Scalable Byzantine Grid
Modern networks assemble an ever growing number of nodes. However, it remains
difficult to increase the number of channels per node, thus the maximal degree
of the network may be bounded. This is typically the case in grid topology
networks, where each node has at most four neighbors. In this paper, we address
the following issue: if each node is likely to fail in an unpredictable manner,
how can we preserve some global reliability guarantees when the number of nodes
keeps increasing unboundedly? To be more specific, we consider the problem of
reliably broadcasting information on an asynchronous grid in the presence of
Byzantine failures -- that is, some nodes may have an arbitrary and potentially
malicious behavior. Our requirement is that a constant fraction of correct
nodes remain able to achieve reliable communication. Existing solutions can
only tolerate a fixed number of Byzantine failures if they adopt a worst-case
placement scheme. Besides, if we assume a constant Byzantine ratio (each node
has the same probability to be Byzantine), the probability to have a fatal
placement approaches 1 when the number of nodes increases, and reliability
guarantees collapse. In this paper, we propose the first broadcast protocol
that overcomes these difficulties. First, the number of Byzantine failures that
can be tolerated (if they adopt the worst-case placement) now increases with
the number of nodes. Second, we are able to tolerate a constant Byzantine
ratio, however large the grid may be. In other words, the grid becomes
scalable. This result has important security applications in ultra-large
networks, where each node has a given probability to misbehave.
Comment: 17 pages
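One classical ingredient behind reliable broadcast under Byzantine failures is majority voting over node-disjoint routes: a receiver accepts a value only when a strict majority of its routes report it identically, so up to floor((n-1)/2) corrupted routes out of n cannot force a wrong value. This is a minimal sketch of that acceptance rule, not the paper's full protocol.

```python
from collections import Counter

def accept_value(reports):
    """Accept the value reported by a strict majority of node-disjoint routes;
    return None when no value reaches a strict majority."""
    if not reports:
        return None
    value, count = Counter(reports).most_common(1)[0]
    return value if count > len(reports) / 2 else None
```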