3,630 research outputs found
Slowly synchronizing automata and digraphs
We present several infinite series of synchronizing automata for which the
minimum length of reset words is close to the square of the number of states.
These automata are closely related to primitive digraphs with large exponent.
Comment: 13 pages, 5 figures
Primitive digraphs with large exponents and slowly synchronizing automata
We present several infinite series of synchronizing automata for which the
minimum length of reset words is close to the square of the number of states.
All these automata are tightly related to primitive digraphs with large
exponent.
Comment: 23 pages, 11 figures, 3 tables. This is a translation (with a slightly updated bibliography) of the authors' paper published in Russian in: Zapiski Nauchnyh Seminarov POMI [Kombinatorika i Teorija Grafov. IV], Vol. 402, 9-39 (2012), see ftp://ftp.pdmi.ras.ru/pub/publicat/znsl/v402/p009.pdf
Version 2: a few typos are corrected
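Both entries above concern reset words: input words that take every state of an automaton to the same single state, with minimal length close to the square of the number of states. A minimal sketch of the notion, using the classical Černý automaton C_n as an illustration (it is not one of the series from these papers), whose shortest reset word is known to have length (n-1)^2:

```python
def cerny(n):
    # Cerny automaton C_n on states 0..n-1:
    # letter 'a' merges state 0 into state 1, letter 'b' is a cyclic shift.
    a = tuple(1 if q == 0 else q for q in range(n))
    b = tuple((q + 1) % n for q in range(n))
    return {'a': a, 'b': b}

def apply_word(delta, states, word):
    # Image of a set of states under an input word.
    for letter in word:
        states = frozenset(delta[letter][q] for q in states)
    return states

def shortest_reset_length(delta, n):
    # BFS over images of the full state set: a word is a reset word
    # exactly when that image shrinks to a single state.
    start = frozenset(range(n))
    seen = {start}
    frontier = [start]
    length = 0
    while frontier:
        if any(len(s) == 1 for s in frontier):
            return length
        nxt = []
        for s in frontier:
            for letter in delta:
                t = frozenset(delta[letter][q] for q in s)
                if t not in seen:
                    seen.add(t)
                    nxt.append(t)
        frontier = nxt
        length += 1
    return None  # not synchronizing
```

For n = 4 the search returns 9 = (4-1)^2, matching the quadratic growth the abstracts refer to; an explicit reset word is "abbbabbba".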
Complexity, parallel computation and statistical physics
The intuition that a long history is required for the emergence of complexity
in natural systems is formalized using the notion of depth. The depth of a
system is defined in terms of the number of parallel computational steps needed
to simulate it. Depth provides an objective, irreducible measure of history
applicable to systems of the kind studied in statistical physics. It is argued
that physical complexity cannot occur in the absence of substantial depth and
that depth is a useful proxy for physical complexity. The ideas are illustrated
for a variety of systems in statistical physics.
Comment: 21 pages, 7 figures
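The abstract defines depth as the number of parallel computational steps needed to simulate a system. With unlimited processors this is the longest dependency chain of the computation, which a toy sketch over a hypothetical computation DAG (an illustration, not taken from the paper) makes concrete:

```python
def depth(dag, node, memo=None):
    # Depth of a node in a computation DAG: the number of parallel
    # steps needed to produce it with unlimited processors, i.e. the
    # longest dependency chain (inputs have depth 0).
    if memo is None:
        memo = {}
    if node not in memo:
        memo[node] = 1 + max((depth(dag, d, memo) for d in dag[node]),
                             default=-1)
    return memo[node]

# Hypothetical example: summing 8 inputs x0..x7.
inputs = {f"x{i}": [] for i in range(8)}

# Sequential chain s1 = x0+x1, s2 = s1+x2, ...: depth 7.
seq_sum = dict(inputs)
prev = "x0"
for i in range(1, 8):
    seq_sum[f"s{i}"] = [prev, f"x{i}"]
    prev = f"s{i}"

# Balanced pairwise tree: depth 3 (= log2 of 8).
tree_sum = dict(inputs)
tree_sum.update({
    "t0": ["x0", "x1"], "t1": ["x2", "x3"],
    "t2": ["x4", "x5"], "t3": ["x6", "x7"],
    "u0": ["t0", "t1"], "u1": ["t2", "t3"],
    "root": ["u0", "u1"],
})
```

Both DAGs compute the same sum, but their depths differ: the chain cannot be compressed by adding processors, while the tree can, which is the sense in which depth is an irreducible measure of history.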
A comparative analysis of parallel processing and super-individual methods for improving the computational performance of a large individual-based model
Individual-based modelling approaches are being used to simulate larger complex spatial systems in ecology and in other fields of research. Several novel model development issues now face researchers: in particular, how to simulate large numbers of individuals with high levels of complexity, given finite computing resources. A case study of a spatially-explicit simulation of aphid population dynamics was used to assess two strategies for coping with a large number of individuals: the use of ‘super-individuals’ and parallel computing. Parallelisation of the model maintained the model structure, and thus the simulation results were comparable to the original model. However, the super-individual implementation of the model caused significant changes to the model dynamics, both spatially and temporally. When super-individuals represented more than around 10 individuals, it became evident that aggregate statistics generated from a super-individual model can hide more detailed deviations from an individual-level model. Improvements in memory use and model speed were observed with both approaches. For the parallel approach, significant speed-up was achieved only when more than five processors were used, and memory availability was increased only once five or more processors were used. The super-individual approach has the potential to improve model speed and memory use dramatically; however, this paper cautions against the use of this approach for a density-dependent spatially-explicit model, unless individual variability is better taken into account.
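The super-individual idea above can be sketched as follows: each simulated agent carries a weight, the number of real individuals it represents, so the same population needs far fewer agents. In a deterministic, density-independent toy model (hypothetical; not the paper's aphid model) the two representations agree exactly, which is precisely the agreement that the stochastic, density-dependent spatial processes in the case study break:

```python
def total_after(n_agents, weight, survival, steps):
    # Each agent is a 'super-individual' representing `weight`
    # individuals; a plain individual-based run is the special case
    # weight == 1.  Expected-value survival is applied each step.
    weights = [float(weight)] * n_agents
    for _ in range(steps):
        weights = [w * survival for w in weights]
    return sum(weights)
```

Here a run with 1000 agents of weight 1 and a run with 10 agents of weight 100 yield identical totals while memory scales with the agent count. Once survival depends on local density or position, a super-individual responds to its neighbours' aggregated weight rather than to individuals, which is where the spatial and temporal deviations reported above arise.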
Wavefront Longest Common Subsequence Algorithm on Multicore and GPGPU Platforms
String comparison is a central operation in numerous applications. It plays a critical role in tasks such as data mining, spelling error correction and molecular biology (Tan et al., 2007; Michailidis and Margaritis, 2000).
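The wavefront formulation in the title can be sketched directly: in the standard LCS dynamic program, every cell on one anti-diagonal of the table depends only on the two preceding anti-diagonals, so all cells of a diagonal are independent and could be computed in parallel on multicore or GPGPU hardware. The sketch below (a generic illustration, not the paper's implementation) walks the diagonals sequentially but in wavefront order:

```python
def lcs_wavefront(a, b):
    # Length of the longest common subsequence, computed anti-diagonal
    # by anti-diagonal: cells (i, j) with i + j = d depend only on
    # diagonals d-1 and d-2, so each inner loop is parallelizable.
    n, m = len(a), len(b)
    dp = [[0] * (m + 1) for _ in range(n + 1)]
    for d in range(2, n + m + 1):
        for i in range(max(1, d - m), min(n, d - 1) + 1):
            j = d - i
            if a[i - 1] == b[j - 1]:
                dp[i][j] = dp[i - 1][j - 1] + 1
            else:
                dp[i][j] = max(dp[i - 1][j], dp[i][j - 1])
    return dp[n][m]
```

A row-by-row sweep gives the same table, but rows have internal dependencies; only the diagonal ordering exposes the independent work that a GPU thread block or a pool of cores can consume.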