Efficient Parallel Statistical Model Checking of Biochemical Networks
We consider the problem of verifying stochastic models of biochemical
networks against behavioral properties expressed in temporal logic. Exact
probabilistic verification approaches, such as CSL/PCTL model checking, are
undermined by a huge computational demand that rules them out for most real
case studies. Less demanding approaches, such as statistical model checking,
estimate the likelihood that a property is satisfied by sampling executions
of the stochastic model. We propose a methodology for efficiently estimating
the likelihood that an LTL property P holds for a stochastic model of a
biochemical network. As with other statistical verification techniques, the
proposed methodology uses a stochastic simulation algorithm to generate
execution samples; however, three key aspects improve its efficiency. First,
sample generation is driven by on-the-fly verification of P, which results in
optimal overall simulation time. Second, the confidence interval for the
probability that P holds is estimated with an efficient variant of the Wilson
method, which ensures faster convergence. Third, the whole methodology is
designed for parallel execution, and a prototype software tool has been
implemented that performs the sampling/verification process in parallel on an
HPC architecture.
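The Wilson interval mentioned above can be sketched in a few lines. The abstract refers to an efficient variant whose details are not given here; the sketch below implements the standard Wilson score interval, with `wilson_interval` as an illustrative name:

```python
import math

def wilson_interval(successes, n, z=1.96):
    """Wilson score confidence interval for a Bernoulli proportion.

    Returns (low, high) bounds on the probability that the property
    holds, given `successes` satisfying traces out of `n` samples.
    z=1.96 corresponds to a 95% confidence level.
    """
    if n == 0:
        raise ValueError("need at least one sample")
    p_hat = successes / n
    denom = 1 + z * z / n
    center = (p_hat + z * z / (2 * n)) / denom
    half = (z / denom) * math.sqrt(
        p_hat * (1 - p_hat) / n + z * z / (4 * n * n)
    )
    return center - half, center + half

# 90 of 100 sampled executions satisfy P:
low, high = wilson_interval(90, 100)
```

Unlike the normal-approximation (Wald) interval, the Wilson interval stays within [0, 1] and behaves well for small samples or extreme proportions, which is why it is attractive when each sample is an expensive stochastic simulation.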
Category Theory and Model-Driven Engineering: From Formal Semantics to Design Patterns and Beyond
There is a hidden intrigue in the title. CT is one of the most abstract
mathematical disciplines, sometimes nicknamed "abstract nonsense". MDE is a
recent trend in software development, industrially supported by standards,
tools, and the status of a new "silver bullet". Surprisingly, categorical
patterns turn out to be directly applicable to mathematical modeling of
structures appearing in everyday MDE practice. Model merging, transformation,
synchronization, and other important model management scenarios can be seen as
executions of categorical specifications.
Moreover, the paper aims to elucidate the claim that the relationship between
CT and MDE is more complex and richer than is normally assumed for "applied
mathematics". CT provides a toolbox of design patterns and structural
principles of real practical value for MDE. We will present examples of how an
elementary categorical arrangement of a model management scenario reveals
deficiencies in the architecture of modern tools automating the scenario.
Comment: In Proceedings ACCAT 2012, arXiv:1208.430
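As a toy illustration of the claim that model merging can be read categorically, the sketch below glues two models along declared correspondences, a much-simplified stand-in for a pushout over a shared sub-model; the function name and the set-of-element-names representation are illustrative assumptions, not the paper's construction:

```python
def merge_models(a, b, correspondences):
    """Merge two models (represented as sets of element names),
    gluing them along correspondences -- a toy analogue of a
    categorical pushout over a shared interface.

    `correspondences` maps an element of `a` to the element of `b`
    it is identified with. Matched elements of `b` are represented
    by their counterpart in `a`; everything else is kept as-is
    (identically named elements coincide by set union).
    """
    merged = set(a)
    matched = set(correspondences.values())
    merged |= {x for x in b if x not in matched}
    return merged

# Two class models sharing a "name" attribute; "Person" and
# "Employee" are declared to be the same concept:
left = {"Person", "name", "age"}
right = {"Employee", "name", "salary"}
merged = merge_models(left, right, {"Person": "Employee"})
```

A real pushout-based merge would also track the mappings from each source model into the result, which is exactly the bookkeeping the paper argues many tools get wrong.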
Complexity-entropy analysis at different levels of organization in written language
Written language is complex. A written text can be considered an attempt to
convey a meaningful message, one that ends up constrained by language rules,
context dependence, and a highly redundant use of resources. Despite all
these constraints, unpredictability is an essential element of natural
language. Here we present the use of entropic measures to assess the balance
between predictability and surprise in written text. In short, it is possible
to measure innovation and context preservation in a document. We show that
this can also be done at the different levels of organization of a text. The
type of analysis presented is reasonably general and can also be used to
analyze the same balance in other complex messages, such as DNA, where a
hierarchy of organizational levels is known to exist.
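The idea of measuring a text at different levels of organization can be illustrated with the plain Shannon entropy at the character and word levels; this is a minimal sketch, not the authors' specific complexity-entropy measures:

```python
import math
from collections import Counter

def shannon_entropy(symbols):
    """Shannon entropy, in bits per symbol, of a sequence of symbols."""
    counts = Counter(symbols)
    n = len(symbols)
    return -sum(c / n * math.log2(c / n) for c in counts.values())

text = "the quick brown fox jumps over the lazy dog"
h_chars = shannon_entropy(list(text))    # character level
h_words = shannon_entropy(text.split())  # word level
```

Each level has its own symbol alphabet (characters, words, and beyond), so each yields a different entropy value; comparing these against their maxima at each level is one way to quantify the balance between redundancy and surprise.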
Clusters of firms in space and time
The use of K-functions (Ripley, 1977) has recently become popular in the analysis of the spatial pattern of firms. It was first introduced in the economic literature by Arbia and Espa (1996) and then popularized by Marcon and Puech (2003), Quah and Simpson (2003), Duranton and Overman (2005) and Arbia et al. (2008). In particular, in Arbia et al. (2008) we used Ripley's K-functions as instruments to study the inter-sectoral co-agglomeration pattern of firms at a single moment in time. All these studies have followed a static approach, disregarding the time dimension. Temporal dynamics, on the other hand, play a crucial role in understanding economic and social phenomena, particularly when referring to the analysis of the individual choices leading to the observed clusters of economic activities. With respect to the contributions that have previously appeared in the literature, this paper uncovers the process of firm demography by studying the dynamics of localization through space-time K-functions. The empirical part of the paper focuses on the long-run localization of firms in the area of Rome (Italy), concentrating on the ICT sector data collected by the Italian Industrial Union in the period 1920-2005.
Agglomeration; non-parametric measures; space-time K-functions; spatial clusters; spatial econometrics.
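A minimal sketch of the K-function underlying this line of work, assuming the naive estimator with no edge correction (real analyses, including Ripley's, apply edge corrections and compare against the value expected under complete spatial randomness, K(r) = pi * r^2):

```python
import math

def ripley_k(points, r, area):
    """Naive estimator of Ripley's K-function at radius r:
    K(r) = area / n^2 * #{ordered pairs (i, j), i != j,
                          with dist(i, j) <= r}.
    No edge correction is applied, so values near the boundary
    of the study region are underestimated.
    """
    n = len(points)
    pairs = sum(
        1
        for i, (xi, yi) in enumerate(points)
        for j, (xj, yj) in enumerate(points)
        if i != j and math.hypot(xi - xj, yi - yj) <= r
    )
    return area * pairs / (n * n)

# Four firms at the corners of a unit square:
points = [(0.0, 0.0), (0.0, 1.0), (1.0, 0.0), (1.0, 1.0)]
k = ripley_k(points, r=1.0, area=1.0)
```

An observed K(r) above pi * r^2 indicates clustering at scale r, below it dispersion; the space-time extension studied in the paper additionally conditions pairs on their temporal separation.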
Towards Distributed Petascale Computing
In this chapter we will argue that studying such multi-scale multi-science
systems gives rise to inherently hybrid models containing many different
algorithms best serviced by different types of computing environments (ranging
from massively parallel computers, via large-scale special-purpose machines, to
clusters of PCs) whose total integrated computing capacity can easily reach
the PFlop/s scale. Such hybrid models, in combination with the by now
inherently distributed nature of the data on which the models `feed', suggest a
distributed computing model in which parts of the multi-scale multi-science
model are executed on the most suitable computing environment, and/or the
computations are carried out close to the required data (i.e. the computations
are brought to the data instead of the other way around). We present an
estimate of the compute requirements for simulating the Galaxy as a typical
example of a multi-scale multi-physics application, requiring distributed
Petaflop/s computational power.
Comment: To appear in D. Bader (Ed.), Petascale Computing: Algorithms and
Applications, Chapman & Hall / CRC Press, Taylor and Francis Group
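The kind of back-of-envelope compute estimate the chapter refers to can be sketched for a direct O(N^2) N-body code; the numbers below are hypothetical placeholders for illustration, not the chapter's figures:

```python
def sustained_flops(n_bodies, flops_per_pair, steps_per_second):
    """Back-of-envelope sustained compute rate for a direct O(N^2)
    N-body simulation: every step evaluates all N^2 pairwise
    interactions at a fixed floating-point cost each."""
    return n_bodies * n_bodies * flops_per_pair * steps_per_second

# Hypothetical numbers: 1e8 simulated stars, ~30 flops per pairwise
# force evaluation, one integration step per wall-clock second.
rate = sustained_flops(1e8, 30, 1)  # flops/s sustained
```

Even with these modest placeholder numbers the sustained rate lands at 3e17 flops/s, i.e. hundreds of PFlop/s, which is why tree codes, special-purpose hardware, and the distributed hybrid model argued for in the chapter become necessary.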