Can geocomputation save urban simulation? Throw some agents into the mixture, simmer and wait ...
There are indications that the current generation of simulation models in practical,
operational uses has reached the limits of its usefulness under existing specifications.
The relative stasis in operational urban modeling contrasts with simulation efforts in
other disciplines, where techniques, theories, and ideas drawn from computation and
complexity studies are revitalizing the ways in which we conceptualize, understand,
and model real-world phenomena. Many of these concepts and methodologies are
applicable to operational urban systems simulation. Indeed, in many cases, ideas from
computation and complexity studies—often clustered under the collective term of
geocomputation, as they apply to geography—are ideally suited to the simulation of
urban dynamics. However, several obstacles stand in the way of their successful use in
operational urban geographic simulation, particularly the limited capacity of these
methodologies to handle top-down dynamics in urban systems.
This paper presents a framework for developing a hybrid model for urban geographic
simulation and discusses some of the imposing barriers against innovation in this
field. The framework infuses approaches derived from geocomputation and
complexity with standard techniques that have been tried and tested in operational
land-use and transport simulation. Macro-scale dynamics that operate from the
top-down are handled by traditional land-use and transport models, while micro-scale
dynamics that work from the bottom-up are delegated to agent-based models and
cellular automata. The two methodologies are fused in a modular fashion using a
system of feedback mechanisms. As a proof-of-concept exercise, a micro-model of
residential location has been developed with a view to hybridization. The model
mixes cellular automata and multi-agent approaches and is formulated so as to
interface with meso-models at a higher scale.
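To make the hybrid architecture concrete, here is a minimal Python sketch of how such a coupling might look: a macro-scale pass rescales cell desirability top-down, and agent relocation then proceeds bottom-up on the adjusted grid. The grid size, desirability rule, and all names are illustrative assumptions, not the paper's actual model.

```python
import random

GRID = 20  # illustrative grid size

class Household:
    """An agent that relocates toward cells with higher desirability."""
    def __init__(self, x, y):
        self.x, self.y = x, y

    def step(self, desirability):
        # Bottom-up dynamics: move to the most desirable cell in the
        # Moore neighbourhood (staying put is allowed via the (0, 0) offset).
        neighbours = [((self.x + dx) % GRID, (self.y + dy) % GRID)
                      for dx in (-1, 0, 1) for dy in (-1, 0, 1)]
        self.x, self.y = max(neighbours, key=lambda c: desirability[c[0]][c[1]])

def macro_update(desirability, regional_demand):
    """Top-down feedback: a meso-scale model rescales cell desirability."""
    return [[d * regional_demand for d in row] for row in desirability]

# One coupled iteration: the macro model constrains the micro dynamics.
desirability = [[random.random() for _ in range(GRID)] for _ in range(GRID)]
agents = [Household(random.randrange(GRID), random.randrange(GRID)) for _ in range(50)]
desirability = macro_update(desirability, regional_demand=1.1)  # top-down pass
for agent in agents:
    agent.step(desirability)  # bottom-up pass
```

The feedback structure is the point of the sketch: the macro pass can be swapped for the output of any meso-model without touching the agent loop.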
Formal and Informal Methods for Multi-Core Design Space Exploration
We propose a tool-supported methodology for design-space exploration for
embedded systems. It provides means to define high-level models of applications
and multi-processor architectures and evaluate the performance of different
deployment (mapping, scheduling) strategies while taking uncertainty into
account. We argue that this extension of the scope of formal verification is
important for the viability of the domain.

Comment: In Proceedings QAPL 2014, arXiv:1406.156
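As a rough illustration of the kind of exploration the methodology supports (the paper's own tool chain is not shown here), the following Python sketch exhaustively enumerates task-to-processor mappings and scores each by a Monte Carlo estimate of makespan under uncertain task durations. The tasks, duration intervals, and load-based cost model are all invented for the example.

```python
import itertools
import random
import statistics

tasks = {"t1": (8, 12), "t2": (4, 6), "t3": (9, 11)}  # duration intervals model uncertainty
processors = ["p0", "p1"]

def makespan(mapping, durations):
    # Total load per processor under a static mapping; no ordering is modelled.
    load = {p: 0.0 for p in processors}
    for task, proc in mapping.items():
        load[proc] += durations[task]
    return max(load.values())

def evaluate(mapping, samples=1000):
    # Monte Carlo estimate of expected makespan under uncertain durations.
    runs = [makespan(mapping, {t: random.uniform(lo, hi)
                               for t, (lo, hi) in tasks.items()})
            for _ in range(samples)]
    return statistics.mean(runs)

# Exhaustive exploration of the (tiny) deployment space.
best = min((dict(zip(tasks, combo))
            for combo in itertools.product(processors, repeat=len(tasks))),
           key=evaluate)
print("best mapping:", best)
```

For realistic design spaces, the exhaustive `min` would be replaced by the guided search strategies the paper's methodology targets.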
Setting Parameters for Biological Models With ANIMO
ANIMO (Analysis of Networks with Interactive MOdeling) is a software tool for
modeling biological networks, such as signaling, metabolic, or gene networks. An
ANIMO model is essentially the sum of a network topology and a number of
interaction parameters. The topology describes the interactions between biological
entities in the form of a graph, while the parameters determine the speed at which
those interactions occur. When a mismatch is observed between the behavior of an
ANIMO model and experimental data, we want to update the model so that it
explains the new data. In general, the topology of a model can be expanded with new
(known or hypothetical) nodes, enabling it to match experimental data. However, the
unrestrained addition of new parts to a model causes two problems: models can
become too complex too fast, to the point of being intractable, and too many parts
marked as "hypothetical" or "not known" make a model unrealistic. Even though
changing the topology is normally the easier task, these problems push us to try a
better parameter fit as a first step and to resort to modifying the model topology only
as a last resort. In this paper we show the support added in ANIMO to ease the task
of expanding knowledge of biological networks, concentrating in particular on
parameter settings.
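ANIMO itself is an interactive tool, so the following is only a generic Python sketch of the workflow the abstract advocates: fit the interaction parameters of a fixed topology to data before resorting to topology changes. The two-node model, the rate constants, and the synthetic "observed" data are all assumptions for illustration, not ANIMO's API.

```python
import numpy as np

def simulate(k, d, steps=50, dt=0.1):
    """Toy two-node network: A activates B at rate k; B decays at rate d."""
    a, b = 1.0, 0.0
    traj = []
    for _ in range(steps):
        b += (k * a - d * b) * dt  # discrete-time update of B's activity
        traj.append(b)
    return np.array(traj)

# Stand-in for experimental data: a noisy trace from known "true" rates.
observed = simulate(0.8, 0.3) + np.random.normal(0, 0.01, 50)

# Parameter fit first: grid-search the rates before touching the topology.
grid = np.linspace(0.1, 1.5, 30)
k_best, d_best = min(((k, d) for k in grid for d in grid),
                     key=lambda p: np.sum((simulate(*p) - observed) ** 2))
print(f"fitted rates: k={k_best:.2f}, d={d_best:.2f}")
```

Only if no parameter setting brings the simulated trace close to the data would one move on to adding (known or hypothetical) nodes to the topology.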
A combined neuro fuzzy-cellular automata based material model for finite element simulation of plane strain compression
This paper presents a modelling strategy that combines Neuro-Fuzzy methods, to define the material model, with Cellular Automata representations of the microstructure, all embedded within a Finite Element solver that can deal with the large deformations of metal processing technology. We use the acronym nf-CAFE as a label for the method. The need for such an approach arises from twin demands: computational speed, so that solutions can be obtained quickly enough for efficient material characterisation, and the incorporation of metallurgical knowledge into material design models and subsequent process control. In this strategy, the cellular automata hold the microstructural features, in terms of sub-grain size and dislocation density, which are modelled by a neuro-fuzzy system that predicts the flow stress. The proposed methodology is validated on a two-dimensional (2D) plane strain compression finite element simulation with an Al-1% Mg alloy. Results from the simulations show the potential of the model for incorporating the effects of the underlying microstructure on the evolving flow stress fields. In doing this, the paper highlights the importance of understanding the local transition rules that affect the global behaviour during deformation.
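A schematic Python sketch of the nf-CAFE idea follows: cellular-automaton cells carry microstructural state (dislocation density and sub-grain size), a predictor maps that state to flow stress, and the result feeds back into the deformation loop. The predictor below is a stand-in closed-form hardening rule with illustrative constants, not the paper's trained neuro-fuzzy system, and the finite element solver is reduced to a toy loading loop.

```python
import numpy as np

# Cellular-automaton state per cell (illustrative initial values).
rho = np.full((10, 10), 1e14)        # dislocation density (m^-2)
subgrain = np.full((10, 10), 2e-6)   # sub-grain size (m)

def flow_stress(rho, subgrain):
    # Stand-in for the trained neuro-fuzzy predictor: a Taylor-type
    # hardening term plus a Hall-Petch-like size term (illustrative constants).
    return 0.3 * 26e9 * 2.86e-10 * np.sqrt(rho) + 0.1e6 / np.sqrt(subgrain)

for increment in range(5):           # stand-in for FE loading increments
    rho *= 1.05                      # CA transition rule: deformation stores dislocations
    sigma = flow_stress(rho, subgrain)  # cell-wise stress fed back to the FE solver
print(f"mean flow stress after 5 increments: {sigma.mean() / 1e6:.1f} MPa")
```

The division of labour mirrors the abstract: the CA update is the local transition rule, while the stress prediction is the learned material model that the FE solver queries each increment.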
From types to type requirements: Genericity for model-driven engineering
The final publication is available at Springer via http://dx.doi.org/10.1007/s10270-011-0221-0

Model-driven engineering (MDE) is a software engineering paradigm that proposes an active use of models during the development process. This paradigm is inherently type-centric, in the sense that models and their manipulation are defined over the types of specific meta-models. This fact hinders the reuse of existing MDE artefacts with other meta-models in new contexts, even if all these meta-models share common characteristics. To increase the reuse opportunities of MDE artefacts, we propose a paradigm shift from type-centric to requirement-centric specifications by bringing genericity into models, meta-models, and model management operations. For this purpose, we introduce so-called concepts, which gather structural and behavioural requirements for models and meta-models. In this way, model management operations are defined over concepts, enabling the application of the operations to any meta-model satisfying the requirements imposed by the concept. Model templates rely on concepts to define suitable interfaces, hence enabling the definition of reusable model components. Finally, similar to mixin layers, templates can be defined at the meta-model level as well, to define languages in a modular way, as well as layers of functionality to be plugged into other meta-models. These ideas have been implemented in MetaDepth, a multi-level meta-modelling tool that integrates action languages from the Epsilon family for model management and code generation.

This work has been sponsored by the Spanish Ministry of Science and Innovation with projects METEORIC (TIN2008-02081) and Go Lite (TIN2011-24139), and by the R&D program of the Community of Madrid with project "e-Madrid" (S2009/TIC-1650).
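MetaDepth's concepts have no direct mainstream-language equivalent, but structural typing gives a flavour of the type-centric to requirement-centric shift. In the hypothetical Python sketch below, a Protocol plays the role of a concept: the generic operation is written against the requirements an element must satisfy rather than against one concrete meta-model.

```python
from typing import Iterable, Protocol

class Node(Protocol):
    """Structural requirements the 'concept' imposes on model elements."""
    name: str
    def children(self) -> Iterable["Node"]: ...

def count_elements(root: Node) -> int:
    """Generic operation defined over the concept, not a concrete meta-model."""
    return 1 + sum(count_elements(c) for c in root.children())

class Package:
    """A meta-model class that satisfies the concept purely structurally."""
    def __init__(self, name: str, contents=()):
        self.name = name
        self._contents = list(contents)

    def children(self):
        return self._contents

print(count_elements(Package("root", [Package("a"), Package("b")])))  # -> 3
```

Any meta-model whose classes expose a `name` and a `children()` traversal can be handled by `count_elements` unchanged, which is the reuse effect the concept mechanism aims for.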