Internal Diffusion-Limited Aggregation: Parallel Algorithms and Complexity
The computational complexity of internal diffusion-limited aggregation (DLA)
is examined from both a theoretical and a practical point of view. We show that
for two or more dimensions, the problem of predicting the cluster from a given
set of paths is complete for the complexity class CC, the subset of P
characterized by circuits composed of comparator gates. CC-completeness is
believed to imply that, in the worst case, growing a cluster of size n requires
polynomial time in n even on a parallel computer.
A parallel relaxation algorithm is presented that uses the fact that clusters
are nearly spherical to guess the cluster from a given set of paths, and then
corrects defects in the guessed cluster through a non-local annihilation
process. The parallel running time of the relaxation algorithm for
two-dimensional internal DLA is studied by simulating it on a serial computer.
The numerical results are compatible with a running time that is either
polylogarithmic in n or a small power of n. Thus the computational resources
needed to grow large clusters are significantly less on average than the
worst-case analysis would suggest.
For a parallel machine with k processors, we show that random clusters in d
dimensions can be generated in O((n/k + log k) n^{2/d}) steps. This is a
significant speedup over explicit sequential simulation, which takes
O(n^{1+2/d}) time on average.
Finally, we show that in one dimension internal DLA can be predicted in O(log
n) parallel time, and so is in the complexity class NC.
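The growth process the abstract describes can be sketched directly. The following is a minimal serial simulation in Python (function and parameter names are illustrative, not from the paper): each particle random-walks from the origin and settles at the first site not yet in the cluster.

```python
import random

def internal_dla(n, seed=0):
    """Grow an internal-DLA cluster of n sites on the 2-D square lattice.

    Each particle starts at the origin and random-walks until it reaches
    a site not yet occupied by the cluster, where it settles.
    """
    rng = random.Random(seed)
    cluster = {(0, 0)}
    steps = [(1, 0), (-1, 0), (0, 1), (0, -1)]
    for _ in range(n - 1):
        x, y = 0, 0
        while (x, y) in cluster:
            dx, dy = rng.choice(steps)
            x, y = x + dx, y + dy
        cluster.add((x, y))
    return cluster

cluster = internal_dla(200)
print(len(cluster))  # 200 occupied sites
```

This is exactly the explicit sequential simulation the abstract compares against, with average running time O(n^{1+2/d}).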
The identification of cellular automata
Although cellular automata have been widely studied as a class of spatio-temporal systems, very few investigators have studied how to identify the CA rules given observations of the patterns. A solution that describes the CA rule using a polynomial realization, based on the application of an orthogonal least squares algorithm, is reviewed in the present study. Three new neighbourhood detection methods are then reviewed as important preliminary analysis procedures to reduce the complexity of the estimation. The identification of excitable media is discussed using simulation examples and real data sets, and a new method for the identification of
hybrid CA is introduced.
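The paper fits a polynomial realization by orthogonal least squares; as a simpler illustration of the identification problem itself (a sketch, not the authors' method), an elementary CA rule can be recovered by tabulating observed neighbourhood-to-output transitions:

```python
import numpy as np

def evolve(rule, row):
    """One synchronous update of an elementary CA (binary states, radius 1)."""
    l, r = np.roll(row, 1), np.roll(row, -1)  # periodic boundary
    return np.array([(rule >> (4*a + 2*b + c)) & 1 for a, b, c in zip(l, row, r)])

def identify_rule(pattern):
    """Recover the rule number from consecutive rows of a space-time pattern."""
    table = {}
    for row, nxt in zip(pattern, pattern[1:]):
        l, r = np.roll(row, 1), np.roll(row, -1)
        for a, b, c, out in zip(l, row, r, nxt):
            table[(a, b, c)] = int(out)
    # Unobserved neighbourhoods default to 0.
    return sum(table.get((a, b, c), 0) << (4*a + 2*b + c)
               for a in (0, 1) for b in (0, 1) for c in (0, 1))

# A de Bruijn sequence over {0,1}^3, so every neighbourhood occurs once.
row = np.array([0, 0, 0, 1, 0, 1, 1, 1])
pattern = [row, evolve(110, row)]
print(identify_rule(pattern))  # → 110
```

The de Bruijn initial row guarantees all eight neighbourhoods are observed, so two rows of data determine the rule exactly; real identification has to cope with noise and partial observations, which is where least-squares estimation comes in.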
Advantages and challenges of programming the Micron Automata Processor
Non-von Neumann computer architectures are being explored for acceleration of difficult problems. The Automata Processor is a unique non-von Neumann architecture capable of efficient modeling and execution of non-deterministic finite automata. The Automata Processor is shown to excel at string comparison operations, specifically with regard to bioinformatics problems. A greatly accelerated solution for Prosite pattern matching using the Automata Processor, called PROTOMOTA, is presented. Furthermore, a developers' guide detailing the lessons learnt while designing and implementing PROTOMOTA is provided. It is hoped that the developers' guide will help future developers avoid critical pitfalls while exploiting the capabilities of the Automata Processor to the fullest.
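As a rough illustration of what Prosite pattern matching involves, independent of the Automata Processor hardware, a simplified Prosite-to-regex translator can be sketched in Python (the translator and the test pattern are illustrative, not part of PROTOMOTA):

```python
import re

def prosite_to_regex(pattern):
    """Convert a PROSITE pattern to a Python regex (simplified subset).

    Handles: '-' separators, 'x' wildcards, (n)/(n,m) repeats,
    [..] alternatives, {..} exclusions, '<'/'>' anchors.
    """
    out = []
    for elem in pattern.rstrip(".").split("-"):
        elem = elem.replace("<", "^").replace(">", "$")      # anchors
        elem = elem.replace("{", "[^").replace("}", "]")     # exclusions
        elem = re.sub(r"\((\d+(?:,\d+)?)\)", r"{\1}", elem)  # repeat counts
        elem = elem.replace("x", ".")                        # wildcard
        out.append(elem)
    return "".join(out)

# Illustrative motif, not a real PROSITE entry
rx = prosite_to_regex("C-x(2,4)-C-x(3)-[LIVMFYWC]")
print(bool(re.fullmatch(rx, "CAACAAAL")))  # → True
```

Each pattern element becomes one NFA fragment, which is why non-deterministic automata hardware maps so naturally onto this workload.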
A Survey of Cellular Automata: Types, Dynamics, Non-uniformity and Applications
Cellular automata (CAs) are dynamical systems which exhibit complex global
behavior from simple local interaction and computation. Since the inception of
cellular automaton (CA) by von Neumann in the 1950s, it has attracted the
attention of researchers from various backgrounds and fields for modelling
different physical, natural as well as real-life phenomena. Classically, CAs
are uniform. However, non-uniformity has also been introduced in update
pattern, lattice structure, neighborhood dependency and local rule. In this
survey, we tour the various types of CAs introduced to date, the different
characterization tools, and the global behaviors of CAs, such as universality,
reversibility and dynamics. Special attention is given to non-uniformity in
CAs, especially to non-uniform elementary CAs, which have been very useful
in solving several real-life problems.
Comment: 43 pages; under review in Natural Computing
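A non-uniform elementary CA of the kind the survey highlights, where each cell applies its own local rule, can be sketched as follows (the alternating rule assignment is an arbitrary example, not taken from the survey):

```python
import numpy as np

def nonuniform_step(rules, row):
    """One step of a non-uniform elementary CA: cell i applies rule rules[i]."""
    l, r = np.roll(row, 1), np.roll(row, -1)  # periodic boundary
    return np.array([(rule >> (4*a + 2*b + c)) & 1
                     for rule, a, b, c in zip(rules, l, row, r)])

# Hypothetical hybrid: alternate rules 90 and 150 along the lattice.
n = 16
rules = [90 if i % 2 == 0 else 150 for i in range(n)]
row = np.zeros(n, dtype=int)
row[n // 2] = 1  # single seed cell
for _ in range(8):
    row = nonuniform_step(rules, row)
print(row)
```

A uniform CA is the special case where every entry of `rules` is the same; the per-cell rule vector is what makes the automaton "hybrid".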
Sequentializing Parameterized Programs
We exhibit assertion-preserving (reachability-preserving) transformations
from parameterized concurrent shared-memory programs, under a k-round
scheduling of processes, to sequential programs. The salient feature of the
sequential program is that it tracks the local variables of only one thread at
any point, and uses only O(k) copies of shared variables (it does not use extra
counters, not even one counter to keep track of the number of threads).
Sequentialization is achieved using the concept of a linear interface that
captures the effect an unbounded block of processes have on the shared state in
a k-round schedule. Our transformation utilizes linear interfaces to
sequentialize the program, and to ensure the sequential program explores only
reachable states and preserves local invariants.
Comment: In Proceedings FIT 2012, arXiv:1207.348
Custom Integrated Circuits
Contains reports on ten research projects.
Support: Analog Devices, Inc.; IBM Corporation; National Science Foundation/Defense Advanced Research Projects Agency Grant MIP 88-14612; Analog Devices Career Development Assistant Professorship; U.S. Navy - Office of Naval Research Contract N0014-87-K-0825; AT&T; Digital Equipment Corporation; National Science Foundation Grant MIP 88-5876
Modeling and Simulation of Spark Streaming
As more and more devices connect to Internet of Things, unbounded streams of
data will be generated, which have to be processed "on the fly" in order to
trigger automated actions and deliver real-time services. Spark Streaming is a
popular real-time stream processing framework. Making efficient use of Spark
Streaming and achieving stable stream processing requires a careful interplay
between different parameter configurations. Mistakes may lead to significant
resource overprovisioning and bad performance. To alleviate such issues, this
paper develops an executable and configurable model named SSP (short for Spark
Streaming Processing) to model and simulate Spark Streaming. SSP is written in
ABS, which is a formal, executable, and object-oriented language for modeling
distributed systems by means of concurrent object groups. SSP allows users to
rapidly evaluate and compare different parameter configurations without
deploying their applications on a cluster/cloud. The simulation results show
that SSP is able to mimic Spark Streaming in different scenarios.
Comment: 7 pages and 13 figures. This paper is published in the IEEE 32nd
International Conference on Advanced Information Networking and Applications
(AINA 2018).
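The interplay between batch interval and processing capacity that such models capture can be illustrated with a toy micro-batch simulation (a sketch under a linear-processing-cost assumption, not the SSP model itself; all names and rates are illustrative):

```python
def simulate_microbatches(arrival_rate, batch_interval, service_rate, n_batches):
    """Toy micro-batch stream-processing simulator.

    Each batch collects arrival_rate * batch_interval records; processing a
    batch takes records / service_rate seconds. If processing exceeds the
    batch interval, later batches queue up and end-to-end delay grows.
    """
    backlog = 0.0  # accumulated scheduling delay in seconds
    delays = []
    for _ in range(n_batches):
        records = arrival_rate * batch_interval
        proc_time = records / service_rate
        backlog = max(0.0, backlog + proc_time - batch_interval)
        delays.append(backlog + proc_time)
    return delays

# Stable: capacity 1000 rec/s, receiving 800 rec/s -> delay settles at 0.8 s
stable = simulate_microbatches(800, 1.0, 1000, 20)
# Unstable: receiving 1200 rec/s -> delay grows by 0.2 s per batch
unstable = simulate_microbatches(1200, 1.0, 1000, 20)
print(round(stable[-1], 2), round(unstable[-1], 2))  # → 0.8 5.2
```

The qualitative point is the one the abstract makes: whether a configuration is stable or drifts into unbounded delay depends on the joint choice of parameters, which is cheap to explore in a model and expensive to explore on a cluster.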