
    Boolean Delay Equations: A simple way of looking at complex systems

    Boolean Delay Equations (BDEs) are semi-discrete dynamical models with Boolean-valued variables that evolve in continuous time. Systems of BDEs can be classified as conservative or dissipative, in a manner that parallels the classification of ordinary or partial differential equations. Solutions of certain conservative BDEs exhibit growth of complexity in time; they can thereby serve as metaphors for biological evolution or human history. Dissipative BDEs are structurally stable and exhibit multiple equilibria and limit cycles, as well as more complex, fractal solution sets, such as Devil's staircases and "fractal sunbursts". All known solutions of dissipative BDEs have stationary variance. BDE systems of this type, both free and forced, have been used as highly idealized models of climate change on interannual, interdecadal and paleoclimatic time scales. BDEs are also being used as flexible, highly efficient models of colliding cascades in earthquake modeling and prediction, as well as in genetics. In this paper we review the theory of systems of BDEs and illustrate their applications to climatic and solid-earth problems. The former have used small systems of BDEs, while the latter have used large networks of BDEs. We moreover introduce BDEs with an infinite number of variables distributed in space ("partial BDEs") and discuss connections with other types of dynamical systems, including cellular automata and Boolean networks. This research-and-review paper concludes with a set of open questions. Comment: LaTeX, 67 pages with 15 EPS figures. Revised version; in particular, the discussion on partial BDEs is updated and enlarged.
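
    For readers unfamiliar with the formalism, the sketch below simulates a two-variable BDE on a fine time grid and counts Boolean switches as a crude complexity measure. The XOR connective and the delay values are illustrative assumptions, not one of the climate or earthquake models reviewed in the paper.

```python
# A minimal BDE sketch (illustrative connectives and delays, not a model
# from the paper): two Boolean variables evolve in continuous time,
# approximated here on a fine time grid.
import numpy as np

dt = 0.001                      # grid spacing used to approximate continuous time
t_max = 10.0
theta1, theta2 = 0.977, 1.0     # hypothetical delays
n = int(t_max / dt)
lag1, lag2 = round(theta1 / dt), round(theta2 / dt)

x = np.zeros(n, dtype=bool)     # x(t)
y = np.zeros(n, dtype=bool)     # y(t)
x[:max(lag1, lag2)] = True      # initial history on [0, max delay)

for k in range(max(lag1, lag2), n):
    # x(t) = y(t - theta1);  y(t) = x(t - theta2) XOR y(t - theta2)
    x[k] = y[k - lag1]
    y[k] = x[k - lag2] ^ y[k - lag2]

# crude complexity measure: total number of Boolean switches
jumps = np.count_nonzero(np.diff(x.astype(int))) + np.count_nonzero(np.diff(y.astype(int)))
print(f"state switches over {t_max} time units: {jumps}")
```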

    A way to synchronize models with seismic faults for earthquake forecasting: Insights from a simple stochastic model

    Numerical models are starting to be used for determining the future behaviour of seismic faults and fault networks. Their ultimate goal would be to forecast future large earthquakes. To use them for this task, it is necessary to synchronize each model with the current state of the actual fault or fault network it simulates (just as, for example, meteorologists synchronize their models with the atmosphere by incorporating current atmospheric data into them). However, lithospheric dynamics is largely unobservable: important parameters cannot (or can only rarely) be measured in Nature. Earthquakes, though, provide indirect but measurable clues to the stress and strain state of the lithosphere, which should be helpful for synchronizing the models. The rupture area is one of the measurable parameters of earthquakes. Here we explore how it can be used to at least synchronize fault models with one another and forecast synthetic earthquakes. Our purpose here is to forecast synthetic earthquakes in a simple but stochastic (random) fault model. By imposing the rupture areas of the synthetic earthquakes of this model on other models, the latter become partially synchronized with the first one. We use these partially synchronized models to successfully forecast most of the largest earthquakes generated by the first model. This forecasting strategy outperforms others that take into account only the earthquake series. Our results suggest that a promising way to synchronize more detailed models with real faults is to force them to reproduce the sequence of previous earthquake ruptures on those faults. This hypothesis could be tested in the future with more detailed models and actual seismic data. Comment: Revised version. Recommended for publication in Tectonophysics.
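
    The synchronization idea lends itself to a compact sketch: a "master" stochastic fault model generates earthquakes while a "replica" with different random strengths is forced to relax the same rupture area after every master event. Everything below (the strength distribution, loading rate and the 80% rupture-growth criterion) is an assumption for illustration, not the model used in the paper.

```python
# A minimal sketch of partial synchronization by imposed rupture areas
# (all ingredients are illustrative assumptions, not the paper's model).
import numpy as np

rng = np.random.default_rng(0)
N, steps, load = 200, 5000, 0.01           # cells, time steps, loading per step

def new_strengths(size):                   # hypothetical strength distribution
    return rng.uniform(1.0, 2.0, size)

master_s, master_th = np.zeros(N), new_strengths(N)
replica_s, replica_th = np.zeros(N), new_strengths(N)

for _ in range(steps):
    master_s += load
    replica_s += load

    # the replica also produces its own, unforced failures
    own = replica_s >= replica_th
    replica_s[own] = 0.0
    replica_th[own] = new_strengths(own.sum())

    failing = np.where(master_s >= master_th)[0]
    if failing.size == 0:
        continue
    # master rupture area: grow contiguously from the first failing cell
    # while neighbours are close to failure (illustrative 80% criterion)
    lo = hi = failing[0]
    while lo > 0 and master_s[lo - 1] > 0.8 * master_th[lo - 1]:
        lo -= 1
    while hi < N - 1 and master_s[hi + 1] > 0.8 * master_th[hi + 1]:
        hi += 1
    area = slice(lo, hi + 1)
    master_s[area] = 0.0
    master_th[area] = new_strengths(hi + 1 - lo)
    # impose the observed rupture area on the replica
    replica_s[area] = 0.0
    replica_th[area] = new_strengths(hi + 1 - lo)

# crude synchronization measure: correlation of the two hidden stress fields
print("stress-field correlation:", np.corrcoef(master_s, replica_s)[0, 1])
```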

    A probabilistic seismic hazard model based on cellular automata and information theory

    We try to obtain a spatio-temporal model of earthquake occurrence based on information theory and cellular automata (CA). CA provide useful models for many investigations in the natural sciences; here they are used to establish temporal relations between seismic events occurring in neighbouring parts of the crust. The catalogue used is divided into time intervals, and the region into cells, which are declared active or inactive by means of a certain energy-release criterion (four criteria have been tested). This yields a pattern of active and inactive cells that evolves over time. A stochastic CA is constructed from these patterns to simulate their spatio-temporal evolution. The interaction between cells is represented by the neighbourhood (2-D and 3-D models have been tried). The best model is chosen by maximizing the mutual information between past and future states. Finally, a probabilistic seismic hazard map is drawn up for the different energy releases. The method has been applied to the Iberian Peninsula catalogue from 1970 to 2001. In 2-D, the best neighbourhood is the Moore neighbourhood of radius 1; the 3-D von Neumann neighbourhood also yields hazard maps and takes the depth of the events into account. The Gutenberg-Richter law and a Hurst analysis have been computed for the data as a check of the catalogue. Our results are consistent with previous studies of both seismic hazard and stress conditions in the zone, and with the seismicity that occurred after 2001.
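
    The model-selection step can be illustrated as follows. The synthetic activity patterns, the plug-in mutual-information estimator and the neighbourhood radii tried are assumptions for illustration, not the Iberian Peninsula catalogue or the exact procedure of the paper: candidate neighbourhoods are ranked by the mutual information between a cell's past neighbourhood pattern and its state in the next time interval.

```python
# A minimal sketch of choosing a CA neighbourhood by mutual information
# (synthetic data and parameters are illustrative assumptions).
import numpy as np
from collections import Counter

rng = np.random.default_rng(1)
T, NX, NY = 200, 20, 20
grid = rng.random((T, NX, NY)) < 0.15          # active/inactive cells per time interval

def mutual_information(pairs):
    """I(past; future) estimated from (past_pattern, future_state) samples."""
    n = len(pairs)
    pxy = Counter(pairs)
    px = Counter(p for p, _ in pairs)
    py = Counter(f for _, f in pairs)
    return sum(c / n * np.log2((c / n) / ((px[p] / n) * (py[f] / n)))
               for (p, f), c in pxy.items())

def score(radius):
    """Score a square (Moore-type) neighbourhood of the given radius."""
    pairs = []
    for t in range(T - 1):
        for i in range(radius, NX - radius):
            for j in range(radius, NY - radius):
                past = grid[t, i - radius:i + radius + 1, j - radius:j + radius + 1]
                pairs.append((past.tobytes(), bool(grid[t + 1, i, j])))
    return mutual_information(pairs)

for r in (1, 2):
    print(f"radius {r}: I(past; future) = {score(r):.4f} bits")
```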

    Spatial Heterogeneities in a Simple Earthquake Fault Model

    Natural earthquake fault systems are composed of a variety of materials with different spatial configurations, forming a complicated, inhomogeneous fault surface. These inhomogeneities and their physical properties can result in a variety of spatial and temporal behaviours, so understanding the dynamics of seismic activity in an inhomogeneous environment is fundamental to the investigation of the earthquake process. This study presents results from an inhomogeneous earthquake fault model based on the Olami-Feder-Christensen (OFC) and Rundle-Jackson-Brown (RJB) cellular automata models with long-range interactions that incorporates a fixed percentage of stronger sites, or ‘asperity cells’, into the lattice. These asperity cells are significantly stronger than the surrounding lattice sites but eventually rupture when the applied stress reaches their higher threshold stress. The introduction of these spatial heterogeneities results in temporal clustering in the model that mimics that seen in natural fault systems. Sequences of activity that start with a gradually accelerating number of larger events (foreshocks) prior to a mainshock, followed by a tail of decreasing activity (aftershocks), are observed for the first time in simple models of this type. These recurrent large events occur at regular intervals, similar to the characteristic earthquakes frequently observed in historic seismicity, and the time between events and their magnitude are a function of the stress-dissipation parameter. The relative length of the foreshock and aftershock sequences can vary and also depends on the amount of stress dissipation in the system. The magnitude-frequency distribution of events for various amounts of inhomogeneity (asperity sites) in the lattice is investigated in order to provide a better understanding of Gutenberg-Richter (GR) scaling. The spatio-temporal clustering of events in systems with different spatial distributions of asperities, and the behaviour of the Thirumalai and Mountain (TM) metric, an indicator of changes in activity before the main event in a sequence, are also investigated. Accelerating Moment Release (AMR) is observed before the mainshock, and Omori-law behaviour for foreshocks and aftershocks is quantified for the model. Finally, a fixed percentage of randomly distributed asperity sites is aggregated into larger asperity blocks in order to investigate the effect of changing the spatial configuration of the stronger sites. The results show that larger blocks of asperities generally increase the capability of the fault system to generate larger events, although the total percentage of asperities is important as well; the increase in the number of larger events is also associated with an increase in the total number of asperities in the lattice. This work provides further evidence that the spatial and temporal patterns observed in natural seismicity may be controlled by the underlying physical properties and are not solely the result of a simple cascade mechanism and, as a result, may not be inherently unpredictable.
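
    As a rough illustration of this class of models, the sketch below runs an OFC-type automaton in which a fixed fraction of sites has a higher failure threshold. The nearest-neighbour stress transfer, lattice size and parameter values are assumptions chosen for brevity, whereas the study itself uses long-range RJB/OFC interactions.

```python
# A minimal OFC-type sketch with randomly placed "asperity" cells
# (illustrative parameters; not the study's long-range implementation).
import numpy as np

rng = np.random.default_rng(2)
L = 64                        # lattice size
alpha = 0.2                   # stress passed to each of 4 neighbours (< 0.25: dissipative)
asperity_frac = 0.05          # fraction of stronger sites
asperity_factor = 3.0         # asperity threshold relative to normal sites

threshold = np.ones((L, L))
threshold[rng.random((L, L)) < asperity_frac] *= asperity_factor
stress = rng.uniform(0.0, 1.0, (L, L))
event_sizes = []

for _ in range(2000):
    # quasi-static loading: raise all sites until the weakest one just fails
    stress += (threshold - stress).min() + 1e-12
    size = 0
    unstable = np.argwhere(stress >= threshold)
    while unstable.size:
        for i, j in unstable:
            s, stress[i, j] = stress[i, j], 0.0      # topple the failing site
            size += 1
            for ni, nj in ((i + 1, j), (i - 1, j), (i, j + 1), (i, j - 1)):
                if 0 <= ni < L and 0 <= nj < L:
                    stress[ni, nj] += alpha * s       # the rest is dissipated
        unstable = np.argwhere(stress >= threshold)
    event_sizes.append(size)

sizes = np.array(event_sizes)
print("events:", sizes.size, "largest:", sizes.max(), "mean size:", round(sizes.mean(), 2))
```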

    “Space, the Final Frontier”: How Good are Agent-Based Models at Simulating Individuals and Space in Cities?

    Cities are complex systems comprising many interacting parts. How we simulate and understand causality in urban systems is continually evolving. Over the last decade the agent-based modeling (ABM) paradigm has provided a new lens for understanding the effects of interactions between individuals and how, through such interactions, macro structures emerge, both in the social and physical environment of cities. However, this paradigm has been hindered by limited computational power and a lack of large, fine-scale datasets. Within the last few years we have witnessed a massive increase in computational processing power and storage, combined with the onset of Big Data. Today geographers find themselves in a data-rich era. We now have access to a variety of data sources (e.g., social media, mobile phone data, etc.) that tell us how, and when, individuals are using urban spaces. These data raise several questions: can we effectively use them to understand and model cities as complex entities? How well have ABM approaches lent themselves to simulating the dynamics of urban processes? What has been, or will be, the influence of Big Data on increasing our ability to understand and simulate cities? What is the appropriate level of spatial analysis and time frame to model urban phenomena? Within this paper we discuss these questions, using several examples of ABM applied to urban geography, to begin a dialogue about the utility of ABM for urban modeling. The arguments the paper raises are applicable across the wider research environment wherever researchers are considering using this approach.
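
    As a toy illustration of the ABM paradigm (local rules producing emergent macro-structure), the sketch below implements a Schelling-style relocation rule; it is chosen purely for illustration and is not one of the urban models discussed in the paper.

```python
# A toy agent-based model (Schelling-style relocation, purely illustrative):
# agents of two types move to empty cells when too few of their neighbours
# share their type, and segregated macro-structure emerges from a local rule.
import numpy as np

rng = np.random.default_rng(3)
L, tolerance = 40, 0.5
grid = rng.choice([0, 1, 2], size=(L, L), p=[0.1, 0.45, 0.45])   # 0 = empty cell

def unhappy_sites(g):
    sites = []
    for i in range(L):
        for j in range(L):
            if g[i, j] == 0:
                continue
            nbrs = g[max(0, i - 1):i + 2, max(0, j - 1):j + 2]
            same = np.count_nonzero(nbrs == g[i, j]) - 1      # exclude self
            occupied = np.count_nonzero(nbrs) - 1
            if occupied and same / occupied < tolerance:
                sites.append((i, j))
    return sites

for step in range(30):
    movers = unhappy_sites(grid)
    if not movers:
        break
    empties = np.argwhere(grid == 0)
    rng.shuffle(empties)                      # random destinations for this step
    for (i, j), (ei, ej) in zip(movers, empties):
        grid[ei, ej], grid[i, j] = grid[i, j], 0

print("unhappy agents remaining:", len(unhappy_sites(grid)))
```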

    Rupture by damage accumulation in rocks

    The deformation of rocks is associated with microcrack nucleation and propagation, i.e. damage. The accumulation of damage and its spatial localization lead to the creation of a macroscale discontinuity, a so-called "fault" in geological terms, and to the failure of the material, i.e. a dramatic decrease of mechanical properties such as strength and modulus. The damage process can be studied both statically, by direct observation of thin sections, and dynamically, by recording the acoustic waves emitted by crack propagation (acoustic emission). Here we first review such observations for geological objects over scales ranging from the laboratory sample (dm) to seismically active faults (km), including cliffs and rock masses (Dm, hm). These observations reveal complex patterns in the space (fractal properties of damage structures such as roughness and gouge), time (clustering, characteristic trends as failure approaches) and energy domains (power-law distributions of energy-release bursts). We use a numerical model based on progressive damage within an elastic-interaction framework, which allows us to simulate these observations. This study shows that failure in rocks can be the result of damage accumulation.
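
    A compact way to see how progressive damage produces power-law bursts of energy release is a global-load-sharing fibre-bundle sketch; this is a stand-in under stated assumptions, not the elastic-interaction model actually used in the study.

```python
# A minimal sketch of progressive damage under quasi-static loading: a
# global-load-sharing fibre-bundle model (a stand-in for the paper's
# elastic-interaction damage model, not its actual implementation).
# Elements fail in bursts as the weakest survivors shed load onto the rest.
import numpy as np

rng = np.random.default_rng(4)
N = 100_000
strength = np.sort(rng.random(N))            # quenched element strengths
# external force needed to break the i-th weakest element once the
# i weaker ones have already failed: F_i = (N - i) * strength_i
F = (N - np.arange(N)) * strength

bursts = []
i = 0
while i < N:
    f_drive = F[i]                           # load is raised just to this level
    j = i + 1
    while j < N and F[j] <= f_drive:         # load redistribution breaks more elements
        j += 1
    bursts.append(j - i)                     # burst = avalanche of failures
    i = j

bursts = np.array(bursts)
print("bursts:", bursts.size, "largest burst:", bursts.max(),
      "mean burst size:", round(bursts.mean(), 2))
```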