The Kinetic Basis of Self-Organized Pattern Formation
In his seminal paper on morphogenesis (1952), Alan Turing demonstrated that
different spatio-temporal patterns can arise due to instability of the
homogeneous state in reaction-diffusion systems, but at least two species are
necessary to produce even the simplest stationary patterns. This paper
proposes a novel model of the analog (continuous-state) kinetic automaton and
shows that stationary and dynamic patterns can arise in one-component
networks of kinetic automata. The possible applicability of kinetic networks
to the modeling of real-world phenomena is also discussed.
Comment: 8 pages, submitted to the 14th International Conference on the
Synthesis and Simulation of Living Systems (Alife 14) on 23.03.2014, accepted
09.05.201
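The two-species requirement this abstract refers to can be illustrated with a minimal reaction-diffusion experiment. The sketch below is not the paper's kinetic-automaton model; it is a standard Gray-Scott system in one dimension, with all parameter values chosen as illustrative assumptions, showing how a localized perturbation of the homogeneous state can grow into a spatial pattern when two species are present:

```python
import numpy as np

def gray_scott_1d(n=128, steps=2000, Du=0.16, Dv=0.08, F=0.035, k=0.065, dt=1.0):
    """Explicit-Euler integration of a 1D Gray-Scott reaction-diffusion
    system: the classic two-species setting in which a Turing-type
    instability can destabilise the homogeneous state (u=1, v=0)."""
    u = np.ones(n)
    v = np.zeros(n)
    mid = n // 2
    # localised perturbation of the homogeneous state
    u[mid - 5:mid + 5] = 0.5
    v[mid - 5:mid + 5] = 0.25
    for _ in range(steps):
        # periodic discrete Laplacians
        lap_u = np.roll(u, 1) + np.roll(u, -1) - 2 * u
        lap_v = np.roll(v, 1) + np.roll(v, -1) - 2 * v
        uvv = u * v * v
        u += dt * (Du * lap_u - uvv + F * (1 - u))
        v += dt * (Dv * lap_v + uvv - (F + k) * v)
    return u, v

u, v = gray_scott_1d()
```

With these lattice parameters the explicit scheme is stable (D*dt/dx^2 <= 0.5 for both species); whether the perturbation self-sustains or decays depends on the feed/kill rates F and k.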
Bypass transition and spot nucleation in boundary layers
The spatio-temporal aspects of the transition to turbulence are considered in
the case of a boundary layer flow developing above a flat plate exposed to
free-stream turbulence. Combining results on the receptivity to free-stream
turbulence with the nonlinear concept of a transition threshold, a physically
motivated model suggests a spatial distribution of spot nucleation events. To
describe the evolution of turbulent spots a probabilistic cellular automaton is
introduced, with all parameters directly fitted from numerical simulations of
the boundary layer. The nucleation rates are then combined with the cellular
automaton model, yielding excellent quantitative agreement with the statistical
characteristics for different free-stream turbulence levels. We thus show how
the recent theoretical progress on transitional wall-bounded flows can be
extended to the much wider class of spatially developing boundary-layer flows.
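The probabilistic cellular automaton described above can be caricatured in a few lines. The sketch below is not the authors' fitted model; it is a generic 1D toy with assumed probabilities, in which turbulent sites contaminate neighbours, relaminarise, and nucleate spontaneously from the laminar state:

```python
import random

def spot_ca(n=200, steps=100, p_spread=0.5, p_decay=0.1, p_nucleate=0.01, seed=0):
    """Toy probabilistic cellular automaton for turbulent-spot dynamics:
    1 = turbulent, 0 = laminar. Turbulent sites spread to neighbours with
    probability p_spread and relaminarise with probability p_decay;
    laminar sites nucleate a spot with probability p_nucleate."""
    rng = random.Random(seed)
    state = [0] * n
    for _ in range(steps):
        new = state[:]
        for i, s in enumerate(state):
            if s:
                if rng.random() < p_decay:
                    new[i] = 0
                for j in (i - 1, i + 1):        # contaminate neighbours
                    if 0 <= j < n and rng.random() < p_spread:
                        new[j] = 1
            elif rng.random() < p_nucleate:     # spontaneous nucleation
                new[i] = 1
        state = new
    return state

final = spot_ca()
turb_frac = sum(final) / len(final)
```

In the paper's setting the nucleation rate would come from the receptivity model and the spreading/decay probabilities from the boundary-layer simulations; here they are free parameters.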
A Particular Bit of Universality: Scaling Limits of Some Dependent Percolation Models
We study families of dependent site percolation models on the triangular
lattice and hexagonal lattice that arise by
applying certain cellular automata to independent percolation configurations.
We analyze the scaling limit of such models and show that the distance between
macroscopic portions of cluster boundaries of any two percolation models within
one of our families goes to zero almost surely in the scaling limit. It follows
that each of these cellular automaton generated dependent percolation models
has the same scaling limit (in the sense of Aizenman-Burchard [3]) as
independent site percolation on .
Comment: 25 pages, 7 figures
Lattice Boltzmann in micro- and nano- flow simulations
This paper was presented at the 2nd Micro and Nano Flows Conference (MNF2009), which was held at Brunel University, West London, UK. The conference was organised by Brunel University and supported by the Institution of Mechanical Engineers, IPEM, the Italian Union of Thermofluid dynamics, the Process Intensification Network, HEXAG - the Heat Exchange Action Group and the Institute of Mathematics and its Applications.
One of the fundamental difficulties in micro- and nano-flow simulations is that the validity of the continuum assumption and of the hydrodynamic equations starts to become questionable in this flow regime. The lower-level kinetic/molecular alternatives are often either prohibitively expensive for practical purposes or poorly justified from a fundamental perspective. The lattice Boltzmann (LB) method, which originated from a simplistic Boolean kinetic model, has recently been shown to converge asymptotically to the continuum Boltzmann-BGK equation and therefore offers a theoretically sound and computationally effective approach for micro- and nano-flow simulations. In addition, its kinetic nature allows certain microscopic physics to be modeled at the macroscopic level, leading to a highly efficient model for multiphase flows with phase transitions. With the inherent computational advantages of a lattice model, e.g., algorithmic simplicity and parallelizability, the ease of handling complex geometry, and so on, the LB method has found many applications in various areas of Computational Fluid Dynamics (CFD) and has matured to the extent of commercial application. In this talk, I shall give an introduction to the LB method with emphasis on the theoretical justifications for its applications in micro- and nano-flow simulations. Some recent examples will also be reported.
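The collide-and-stream structure of the LB method can be shown in a minimal form. The sketch below is an assumed illustrative example, not from the talk: a D1Q3 lattice-BGK scheme for pure diffusion, where each step relaxes the populations toward a local equilibrium and then streams them one lattice site:

```python
import numpy as np

def lb_diffusion_1d(n=100, steps=500, tau=1.0):
    """Minimal D1Q3 lattice-BGK scheme for pure diffusion on a periodic
    ring: collide toward the local equilibrium f_i^eq = w_i * rho, then
    stream each population one site. This recovers the diffusion
    equation with D = c_s^2 (tau - 1/2), c_s^2 = 1/3 in lattice units."""
    w = np.array([2/3, 1/6, 1/6])        # weights: rest, +1, -1 directions
    rho = np.zeros(n)
    rho[n // 2] = 1.0                    # initial point mass
    f = w[:, None] * rho[None, :]        # initialise at equilibrium
    for _ in range(steps):
        rho = f.sum(axis=0)              # macroscopic density
        feq = w[:, None] * rho[None, :]
        f += (feq - f) / tau             # BGK collision
        f[1] = np.roll(f[1], 1)          # stream +1 direction
        f[2] = np.roll(f[2], -1)         # stream -1 direction
    return f.sum(axis=0)

rho = lb_diffusion_1d()
```

The locality of collision and the nearest-neighbour streaming are what make the method so easy to parallelize; a full flow solver (e.g., D2Q9) adds a velocity-dependent equilibrium but keeps the same two-step structure.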
Overview of crowd simulation in computer graphics
The use of computer graphics in education, entertainment, games, simulation, and virtual heritage applications has made it an important area of research. In simulation, according to Tecchia et al. (2002), it is important to create an interactive, complex, and realistic virtual world so that the user can have an immersive experience while navigating through it. As the size and complexity of virtual environments increase, it becomes necessary to populate them with people, which is why rendering crowds in real time is crucial. Crowd simulation generally consists of three important areas: behavioral realism (Thompson and Marchant 1995), high-quality visualization (Dobbyn et al. 2005), and the convergence of both. Behavioral realism is mainly pursued in simple 2D visualizations, because most of the attention is concentrated on simulating the behavior of the group. High-quality visualization is regularly used for movie productions and computer games; its emphasis is on producing convincing visuals rather than realistic behavior. The convergence of both areas is mainly used for applications such as training systems, where valid replication of behavior is combined with high-quality visualization to make the training more effective.
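The behavioral side of crowd simulation is often built from simple per-agent steering rules. The sketch below is a generic social-force-style toy, not any of the cited systems; all parameters (goal attraction, repulsion radius and strength) are illustrative assumptions:

```python
import numpy as np

def crowd_step(pos, goal, dt=0.1, v_max=1.5, rep_radius=1.0, rep_strength=2.0):
    """One update of a toy social-force-style crowd model: each agent
    steers toward its goal at up to v_max and is pushed away from
    agents closer than rep_radius."""
    to_goal = goal - pos
    dist = np.linalg.norm(to_goal, axis=1, keepdims=True)
    desired = np.where(dist > 1e-9, to_goal / np.maximum(dist, 1e-9) * v_max, 0.0)
    # pairwise repulsion: diff[i, j] points from agent j toward agent i
    diff = pos[:, None, :] - pos[None, :, :]
    d = np.linalg.norm(diff, axis=2)
    np.fill_diagonal(d, np.inf)          # no self-repulsion
    push = np.where(d[:, :, None] < rep_radius,
                    diff / np.maximum(d[:, :, None], 1e-9) ** 2, 0.0)
    vel = desired + rep_strength * push.sum(axis=1)
    return pos + dt * vel

pos = np.array([[0.0, 0.0], [0.5, 0.0], [4.0, 4.0]])
goal = np.array([[5.0, 0.0]] * 3)
new_pos = crowd_step(pos, goal)
```

Production systems layer path planning and animation on top of such a local rule; the O(n^2) pairwise interaction here would be replaced by a spatial grid for large crowds.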
A way to synchronize models with seismic faults for earthquake forecasting: Insights from a simple stochastic model
Numerical models are starting to be used for determining the future behaviour
of seismic faults and fault networks. Their final goal would be to forecast
future large earthquakes. In order to use them for this task, it is necessary
to synchronize each model with the current status of the actual fault or fault
network it simulates (just as, for example, meteorologists synchronize their
models with the atmosphere by incorporating current atmospheric data in them).
However, lithospheric dynamics is largely unobservable: important parameters
cannot (or can rarely) be measured in Nature. Earthquakes, though, provide
indirect but measurable clues of the stress and strain status in the
lithosphere, which should be helpful for the synchronization of the models. The
rupture area is one of the measurable parameters of earthquakes. Here we
explore how it can be used to at least synchronize fault models with one
another and to forecast synthetic earthquakes in a simple but stochastic
(random) fault model. By
imposing the rupture area of the synthetic earthquakes of this model on other
models, the latter become partially synchronized with the first one. We use
these partially synchronized models to successfully forecast most of the
largest earthquakes generated by the first model. This forecasting strategy
outperforms others that only take into account the earthquake series. Our
results suggest that a good way to synchronize more detailed models
with real faults is to force them to reproduce the sequence of previous
earthquake ruptures on the faults. This hypothesis could be tested in the
future with more detailed models and actual seismic data.
Comment: Revised version. Recommended for publication in Tectonophysics.
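The synchronization idea, imposing the master model's rupture areas on a replica, can be demonstrated with a toy stress-accumulation model. The sketch below is an assumed illustrative construction, not the paper's model; loading rate, thresholds, and rupture lengths are arbitrary:

```python
import random

class ToyFault:
    """Toy stochastic fault: cells accumulate stress at a uniform rate;
    when the most stressed cell reaches the failure threshold, a rupture
    of random extent resets the stress over the cells it covers
    (the 'rupture area')."""
    def __init__(self, n=50, seed=None):
        self.rng = random.Random(seed)
        self.stress = [self.rng.random() for _ in range(n)]

    def load(self):
        self.stress = [s + 0.01 for s in self.stress]   # tectonic loading

    def own_rupture(self):
        i = max(range(len(self.stress)), key=lambda j: self.stress[j])
        if self.stress[i] < 1.0:
            return None                                  # below threshold
        length = self.rng.randint(1, 10)                 # random rupture extent
        return (max(0, i - length // 2), length)

    def apply(self, rupture):
        start, length = rupture
        for j in range(start, min(len(self.stress), start + length)):
            self.stress[j] = 0.0                         # stress drop

# The master evolves freely; the replica is forced to reproduce the
# master's rupture areas, which partially synchronises its stress field.
master, replica = ToyFault(seed=1), ToyFault(seed=2)
for _ in range(2000):
    master.load(); replica.load()
    rup = master.own_rupture()
    if rup is not None:
        master.apply(rup)
        replica.apply(rup)

synced = sum(abs(a - b) < 1e-9 for a, b in zip(master.stress, replica.stress))
```

Once a cell has been inside an imposed rupture, its stress in the replica tracks the master exactly from then on, so the fraction of synchronized cells grows as ruptures accumulate; this is the mechanism behind the partial synchronization and forecasting described in the abstract.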