2,896 research outputs found
An implicit algorithm for validated enclosures of the solutions to variational equations for ODEs
We propose a new algorithm for computing validated bounds for the solutions
to the first-order variational equations associated with ODEs. These validated
solutions are at the core of many computer-assisted proofs in the
dynamical-systems literature. The method uses a high-order Taylor method as a predictor
step and an implicit method based on the Hermite-Obreshkov interpolation as a
corrector step. The proposed algorithm is an improvement of the C^1-Lohner
algorithm proposed by Zgliczy\'nski and it provides sharper bounds.
As an application of the algorithm, we give a computer-assisted proof of the
existence of an attractor set in the R\"ossler system, and we show that the
attractor contains an invariant and uniformly hyperbolic subset on which the
dynamics is chaotic, that is, conjugate to a subshift of finite type with
positive topological entropy.
Comment: 33 pages, 11 figures
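The enclosure machinery that abstracts like this one build on evaluates every operation over intervals, so the true solution is provably contained in the computed set. Below is a minimal sketch of the standard first-order rough-enclosure test used in Lohner-type integrators, for the toy ODE x' = -x; the function names and the candidate set are illustrative, and real validated code additionally needs outward (directed) rounding, which plain floats do not provide:

```python
def f_enclosure(x):
    """Interval extension of f(x) = -x for the toy ODE x' = -x."""
    lo, hi = x
    return (-hi, -lo)

def validated_step(x0, h, candidate):
    """First-order rough-enclosure test: if x0 + [0,h] * f(candidate) is
    contained in `candidate`, then every solution starting in the
    interval x0 stays inside `candidate` for all t in [0, h]."""
    flo, fhi = f_enclosure(candidate)
    # [0,h] * [flo, fhi] = [min(0, h*flo), max(0, h*fhi)] for h >= 0
    lo = x0[0] + min(0.0, h * flo)
    hi = x0[1] + max(0.0, h * fhi)
    enclosure = (lo, hi)
    ok = candidate[0] <= lo and hi <= candidate[1]
    return enclosure, ok

# enclose all solutions starting in [0.9, 1.1] over one step h = 0.1
enc, ok = validated_step((0.9, 1.1), 0.1, candidate=(0.7, 1.2))
```

When the containment check succeeds, `enc` is a validated enclosure over the whole step; Lohner-type algorithms then tighten it at the endpoint, which is where a sharper corrector such as the Hermite-Obreshkov step of the paper pays off.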
Complexity transitions in global algorithms for sparse linear systems over finite fields
We study the computational complexity of a very basic problem, namely that of
finding solutions to a very large set of random linear equations over a finite
Galois field GF(q). Using tools from statistical mechanics we are able to
identify phase transitions in the structure of the solution space and to
connect them to changes in performance of a global algorithm, namely Gaussian
elimination. Crossing a phase boundary produces a dramatic increase in the
memory and CPU requirements of the algorithm; in turn, this causes the
running time to saturate its upper bounds. We illustrate the results
on the specific problem of integer factorization, which is of central interest
for deciphering messages encrypted with the RSA cryptosystem.
Comment: 23 pages, 8 figures
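The global algorithm whose performance the abstract tracks across phase boundaries, Gaussian elimination over GF(q), can be sketched in a few lines for prime q. The dense representation and names here are illustrative choices, not the paper's implementation:

```python
def gauss_gf(A, b, q):
    """Solve A x = b over GF(q), q prime, by Gauss-Jordan elimination.

    Assumes A is square and nonsingular over GF(q); a singular system
    would make the pivot search below raise StopIteration.
    """
    n = len(A)
    # augmented matrix with all entries reduced mod q
    M = [[A[i][j] % q for j in range(n)] + [b[i] % q] for i in range(n)]
    for col in range(n):
        # find a row with a nonzero pivot in this column and swap it up
        piv = next(r for r in range(col, n) if M[r][col] != 0)
        M[col], M[piv] = M[piv], M[col]
        inv = pow(M[col][col], -1, q)          # modular inverse (Python 3.8+)
        M[col] = [(x * inv) % q for x in M[col]]
        # eliminate the pivot column from every other row
        for r in range(n):
            if r != col and M[r][col]:
                f = M[r][col]
                M[r] = [(M[r][j] - f * M[col][j]) % q for j in range(n + 1)]
    return [M[i][n] for i in range(n)]

# 2x + 3y = 1 and x + y = 2 over GF(5)
solution = gauss_gf([[2, 3], [1, 1]], [1, 2], 5)
```

The memory blow-up the abstract describes comes from fill-in: for sparse random systems the rows, stored sparsely in practice, densify as elimination proceeds past the phase boundary.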
Neural Distributed Autoassociative Memories: A Survey
Introduction. Neural network models of autoassociative, distributed memory
allow storage and retrieval of many items (vectors) where the number of stored
items can exceed the vector dimension (the number of neurons in the network).
This opens the possibility of a sublinear time search (in the number of stored
items) for approximate nearest neighbors among vectors of high dimension. The
purpose of this paper is to review models of autoassociative, distributed
memory that can be naturally implemented by neural networks (mainly with local
learning rules and iterative dynamics based on information locally available to
neurons). Scope. The survey is focused mainly on the networks of Hopfield,
Willshaw, and Potts, which have connections between pairs of neurons and operate
on sparse binary vectors. We discuss not only autoassociative memory, but also
the generalization properties of these networks. We also consider neural
networks with higher-order connections and networks with a bipartite graph
structure for non-binary data with linear constraints. Conclusions. We
discuss the relations to similarity search, the advantages and
drawbacks of these techniques, and topics for further research. An interesting
and still not completely resolved question is whether neural autoassociative
memories can search for approximate nearest neighbors faster than other index
structures for similarity search, in particular for the case of very
high-dimensional vectors.
Comment: 31 pages
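The two ingredients the survey emphasizes, local learning rules and iterative dynamics, fit in a few lines for the Hopfield case. This is a minimal hypothetical sketch on bipolar (+1/-1) vectors, not any specific model from the survey:

```python
def train_hopfield(patterns):
    """Hebbian rule using only locally available information:
    W[i][j] accumulates the correlation x_i * x_j over stored vectors."""
    n = len(patterns[0])
    W = [[0] * n for _ in range(n)]
    for x in patterns:
        for i in range(n):
            for j in range(n):
                if i != j:
                    W[i][j] += x[i] * x[j]
    return W

def recall(W, probe, sweeps=5):
    """Iterative sign dynamics (sequential neuron updates) from a probe."""
    x = list(probe)
    n = len(x)
    for _ in range(sweeps):
        for i in range(n):
            x[i] = 1 if sum(W[i][j] * x[j] for j in range(n)) >= 0 else -1
    return x

# store one pattern, flip one bit, and let the dynamics repair it
p = [1, 1, -1, -1, 1, -1, 1, -1]
W = train_hopfield([p])
noisy = list(p)
noisy[0] = -noisy[0]
restored = recall(W, noisy)
```

Retrieval here is content-addressed: the noisy probe flows to the stored attractor, which is what makes such networks candidates for the approximate-nearest-neighbor role discussed in the conclusions.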
Characterization for entropy of shifts of finite type on Cayley trees
The notion of tree-shifts constitutes an intermediate class in between
one-sided shift spaces and multidimensional ones. This paper proposes an
algorithm for computing the entropy of a tree-shift of finite type.
Meanwhile, the entropy of a tree-shift of finite type is shown to be, up to a
normalizing factor, the logarithm of a Perron number. This
extends Lind's work on one-dimensional shifts of finite type. As an
application, the entropy minimality problem is investigated, and we obtain a
necessary and sufficient condition for a tree-shift of finite type to be
entropy minimal, under some additional conditions.
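In the one-dimensional setting that this abstract extends, Lind's characterization ties entropy to Perron numbers concretely: the entropy of a shift of finite type is the logarithm of the spectral radius of its transition matrix. A small illustration for the golden-mean shift, using power iteration (the helper name is ours, not the paper's):

```python
import math

def spectral_radius(A, iters=200):
    """Power iteration for the Perron eigenvalue of a nonnegative
    primitive transition matrix (illustrative helper)."""
    n = len(A)
    v = [1.0] * n
    lam = 1.0
    for _ in range(iters):
        w = [sum(A[i][j] * v[j] for j in range(n)) for i in range(n)]
        lam = max(w)                 # converges to the Perron eigenvalue
        v = [x / lam for x in w]
    return lam

# Golden-mean shift: binary sequences with the word "11" forbidden.
A = [[1, 1],
     [1, 0]]
entropy = math.log(spectral_radius(A))   # log of the golden ratio
```

Here the spectral radius is the golden ratio (1 + sqrt(5))/2, a Perron number, so the entropy is its logarithm; the tree-shift result of the abstract says the entropies on Cayley trees are constrained in a comparable way.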