Analysis of Petri Net Models through Stochastic Differential Equations
It is well known, mainly because of the work of Kurtz, that density dependent
Markov chains can be approximated by sets of ordinary differential equations
(ODEs) when their indexing parameter grows very large. This approximation
cannot capture the stochastic nature of the process and, consequently, it can
provide an erroneous view of the behavior of the Markov chain if the indexing
parameter is not sufficiently high. Important phenomena that cannot be revealed
include non-negligible variance and bi-modal population distributions. A
less-known approximation proposed by Kurtz applies stochastic differential
equations (SDEs) and provides information about the stochastic nature of the
process. In this paper we apply and extend this diffusion approximation to
study stochastic Petri nets. We identify a class of nets whose underlying
stochastic process is a density dependent Markov chain whose indexing parameter
is a multiplicative constant identifying the population level expressed by
the initial marking, and we provide means to automatically construct the
associated set of SDEs. Since the diffusion approximation of Kurtz considers
the process only up to the time when it first exits an open interval, we extend
the approximation with a mechanism that mimics the behavior of the Markov chain
at the boundary, thus allowing the approach to be applied to a wider set of
problems. The resulting process is of the jump-diffusion type. We illustrate by
examples that the jump-diffusion approximation, which extends the diffusion
approximation to bounded domains, can be much more informative than the one
based on ODEs, as it can provide accurate quantity distributions even when they
are multi-modal and even for relatively small population levels. Moreover, we
show that the method is faster than simulating the original Markov chain.
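The contrast between the ODE and diffusion limits can be sketched with a toy density-dependent model (an immigration-death chain, chosen for illustration; it is not an example from the paper). For density x = X/N the ODE limit is dx/dt = lam - mu*x, while the SDE keeps a noise term of order 1/sqrt(N), integrated here by Euler-Maruyama:

```python
import numpy as np

# Sketch of Kurtz's diffusion approximation for a density-dependent
# immigration-death chain: arrivals at rate N*lam, deaths at rate mu per
# individual.  For the density x = X/N:
#   ODE limit:  dx/dt = lam - mu*x
#   SDE limit:  dx = (lam - mu*x) dt + sqrt((lam + mu*x)/N) dW
# (Model and parameters are illustrative, not taken from the paper.)

def simulate_sde(lam=1.0, mu=1.0, N=100, x0=0.0, T=10.0, dt=0.01, rng=None):
    """Euler-Maruyama integration of the diffusion approximation."""
    rng = rng or np.random.default_rng(0)
    x = x0
    for _ in range(int(T / dt)):
        drift = lam - mu * x
        diffusion = np.sqrt(max(lam + mu * x, 0.0) / N)
        x += drift * dt + diffusion * np.sqrt(dt) * rng.normal()
        x = max(x, 0.0)  # crude handling of the boundary at x = 0
    return x

if __name__ == "__main__":
    rng = np.random.default_rng(42)
    finals = [simulate_sde(rng=rng) for _ in range(200)]
    # The ODE predicts x -> lam/mu = 1 with zero variance; the SDE also
    # captures the O(1/sqrt(N)) fluctuations around that equilibrium.
    print(np.mean(finals), np.std(finals))
```

Unlike the ODE, the sample of SDE endpoints has a nonzero spread, which is exactly the stochastic information the abstract says the ODE limit discards.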
Graph Transformation for Domain-Specific Discrete Event Time Simulation
Proceedings of: Fifth International Conference on Graph Transformation (ICGT 2010). Enschede, The Netherlands, September 27–October 2, 2010. Graph transformation is being increasingly used to express the semantics of domain-specific visual languages, since its graphical nature makes rules intuitive. However, many application domains require an explicit handling of time in order to represent accurately the behaviour of the real system and to obtain useful simulation metrics. Inspired by the vast knowledge and experience accumulated by the discrete event simulation community, we propose a novel way of adding explicit time to graph transformation rules. In particular, we take the event scheduling discrete simulation world view and incorporate into the rules the ability to schedule the occurrence of other rules in the future. Hence, our work combines standard, efficient techniques for discrete event simulation (based on the handling of a future event set) and the intuitive, visual nature of graph transformation. Moreover, we show how our formalism can be used to give semantics to other timed approaches. Work partially sponsored by the Spanish Ministry of Science and Innovation, under project METEORIC (TIN2008-02081) and mobility grants JC2009-00015 and PR2009-0019, as well as by the R&D programme of the Community of Madrid, project e-Madrid (S2009/TIC-1650).
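The event-scheduling world view the paper borrows rests on a future event set: a priority queue of timestamped rule occurrences, popped in time order, where firing a rule may schedule further occurrences. A minimal sketch of that mechanism (names and structure are hypothetical, not the paper's implementation):

```python
import heapq

class Scheduler:
    """Toy future-event-set scheduler: rules may schedule other rules."""
    def __init__(self):
        self.fes = []    # heap of (time, seq, rule_name, action)
        self.now = 0.0
        self.seq = 0     # tie-breaker for simultaneous events
        self.log = []

    def schedule(self, delay, rule_name, action):
        heapq.heappush(self.fes, (self.now + delay, self.seq, rule_name, action))
        self.seq += 1

    def run(self, until=float("inf")):
        while self.fes and self.fes[0][0] <= until:
            self.now, _, rule, action = heapq.heappop(self.fes)
            self.log.append((self.now, rule))
            action(self)  # the rule may schedule future occurrences

if __name__ == "__main__":
    sim = Scheduler()
    # "create" fires at t=1 and schedules "consume" 2 time units later
    sim.schedule(1.0, "create", lambda s: s.schedule(2.0, "consume", lambda s: None))
    sim.schedule(2.0, "other", lambda s: None)
    sim.run()
    print(sim.log)  # events processed in timestamp order
```

In the paper's setting the `action` would be the application of a graph transformation rule; the heap-based event set is the standard efficient machinery the abstract refers to.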
A Randomized Real-Valued Negative Selection Algorithm
This paper presents a real-valued negative selection algorithm with good mathematical foundation that solves some of the drawbacks of our previous approach [11]. Specifically, it can produce a good estimate of the optimal number of detectors needed to cover the non-self space, and the maximization of the non-self coverage is done through an optimization algorithm with proven convergence properties. The proposed method is a randomized algorithm based on Monte Carlo methods. Experiments are performed to validate the assumptions made while designing the algorithm and to evaluate its performance. © Springer-Verlag Berlin Heidelberg 2003
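The Monte Carlo idea behind estimating non-self coverage can be sketched with a toy geometry (a spherical self region in the unit square; radii, counts, and the censoring rule are illustrative assumptions, not the paper's algorithm):

```python
import random
import math

# Sketch of Monte Carlo coverage estimation in real-valued negative
# selection: detectors are random points (fixed radius) kept only if they
# do not match self; coverage of non-self space is estimated by sampling.

SELF_RADIUS, DETECTOR_RADIUS = 0.3, 0.15

def in_self(p):
    return math.hypot(p[0] - 0.5, p[1] - 0.5) <= SELF_RADIUS

def generate_detectors(n, rng):
    detectors = []
    while len(detectors) < n:
        d = (rng.random(), rng.random())
        if not in_self(d):  # reject detectors that match self
            detectors.append(d)
    return detectors

def estimate_coverage(detectors, samples, rng):
    covered = total = 0
    for _ in range(samples):
        p = (rng.random(), rng.random())
        if in_self(p):
            continue        # only non-self points count
        total += 1
        if any(math.hypot(p[0] - d[0], p[1] - d[1]) <= DETECTOR_RADIUS
               for d in detectors):
            covered += 1
    return covered / total

if __name__ == "__main__":
    rng = random.Random(1)
    dets = generate_detectors(50, rng)
    print(estimate_coverage(dets, 20000, rng))
```

Running the estimator for increasing detector counts gives exactly the kind of coverage curve from which an optimal number of detectors can be read off.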
Projective re-normalization for improving the behavior of a homogeneous conic linear system
In this paper we study the homogeneous conic system F : Ax = 0, x â C \ {0}. We choose a point ÂŻs â intCâ that serves as a normalizer and consider computational properties of the normalized system FÂŻs : Ax = 0, ÂŻsT x = 1, x â C. We show that the computational complexity of solving F via an interior-point method depends
only on the complexity value Ï of the barrier for C and on the symmetry of the origin in the image set HÂŻs := {Ax :
ÂŻsT x = 1, x â C}, where the symmetry of 0 in HÂŻs is sym(0,HÂŻs) := max{α : y â HÂŻs -->âαy â HÂŻs} .We show that a solution of F can be computed in O(sqrtÏ ln(Ï/sym(0,HÂŻs)) interior-point iterations. In order to improve the theoretical and practical computation of a solution of F, we next present a general theory for projective re-normalization of the feasible region FÂŻs and the image set HÂŻs and prove the existence of a normalizer ÂŻs such that sym(0,HÂŻs) â„ 1/m provided that F has an interior solution. We develop a methodology for constructing a normalizer ÂŻs such that sym(0,HÂŻs) â„ 1/m with high probability, based on sampling on a geometric random walk with associated probabilistic complexity analysis. While such a normalizer is not itself computable in strongly-polynomialtime,
the normalizer will yield a conic system that is solvable in O(sqrtÏ ln(mÏ)) iterations, which is strongly-polynomialtime.
Finally, we implement this methodology on randomly generated homogeneous linear programming feasibility
problems, constructed to be poorly behaved. Our computational results indicate that the projective re-normalization
methodology holds the promise to markedly reduce the overall computation time for conic feasibility problems; for
instance we observe a 46% decrease in average IPM iterations for 100 randomly generated poorly-behaved problem
instances of dimension 1000 Ă 5000.Singapore-MIT Allianc
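The geometric random walk underlying the normalizer construction can be illustrated with a generic hit-and-run sampler on a polytope {x : Ax ≤ b} (a toy polytope and sampler only, not the paper's re-normalization procedure):

```python
import numpy as np

def hit_and_run(A, b, x0, steps, rng):
    """Hit-and-run random walk inside the polytope {x : A x <= b}.

    From the current interior point, pick a uniformly random direction,
    compute the chord of the polytope along that direction, and jump to a
    uniform point on the chord.  (Illustration of the geometric random
    walk; the paper samples a different set to construct its normalizer.)
    """
    x = np.asarray(x0, dtype=float)
    for _ in range(steps):
        d = rng.normal(size=x.size)
        d /= np.linalg.norm(d)
        # A(x + t d) <= b  =>  t * (A d) <= b - A x, solved row by row
        Ad, slack = A @ d, b - A @ x
        lo, hi = -np.inf, np.inf
        for ad, s in zip(Ad, slack):
            if ad > 1e-12:
                hi = min(hi, s / ad)
            elif ad < -1e-12:
                lo = max(lo, s / ad)
        x = x + rng.uniform(lo, hi) * d
    return x

if __name__ == "__main__":
    # Unit square [0,1]^2 written as A x <= b
    A = np.array([[1., 0.], [-1., 0.], [0., 1.], [0., -1.]])
    b = np.array([1., 0., 1., 0.])
    rng = np.random.default_rng(0)
    x = hit_and_run(A, b, np.array([0.5, 0.5]), 1000, rng)
    print(x)  # a point inside the square
```

Hit-and-run mixes toward the uniform distribution on the body, which is what makes a randomly sampled normalizer well-centered with high probability.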
How heavy-tailed distributions affect simulation-generated time averages
For statistical inference based on telecommunications network simulation, we examine the effect of a heavy-tailed file-size distribution whose corresponding density follows an inverse power law with exponent a + 1, where the shape parameter a is strictly between 1 and 2. Representing the session-initiation and file-transmission processes as an infinite-server queueing system with Poisson arrivals, we derive the transient conditional mean and covariance function that describe the number of active sessions, as well as the steady-state counterparts of these moments. Assuming the file size (service time) for each session follows the Lomax distribution, we show that the variance of the sample mean for the time-averaged number of active sessions tends to zero as the (1 - a)th power of the simulation run length. Therefore, impractically large sample-path lengths are required to achieve point estimators with acceptable levels of statistical accuracy. This study compares the accuracy of point estimators based on the Lomax distribution with those for lognormal and Weibull file-size distributions whose parameters are determined by matching their means and a selected extreme quantile with those of the Lomax. Both alternatives require shorter run lengths than the Lomax to achieve a given level of accuracy. Although the lognormal requires longer sample paths than the Weibull, it better approximates the Lomax and leads to practicable run lengths in almost all scenarios.
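The Lomax (Pareto type II) distribution with shape a and scale lam has survival function S(x) = (1 + x/lam)^(-a), so it can be sampled by inverse transform; for 1 < a < 2 its mean lam/(a-1) is finite but its variance is infinite, which is the source of the slow convergence described above. A minimal sampling sketch (parameters illustrative):

```python
import random

def lomax_sample(a, lam, rng):
    """Inverse-transform draw from Lomax(shape a, scale lam):
    survival S(x) = (1 + x/lam)**(-a), so X = lam*((1-U)**(-1/a) - 1)."""
    u = rng.random()
    return lam * ((1.0 - u) ** (-1.0 / a) - 1.0)

if __name__ == "__main__":
    a, lam = 1.5, 1.0  # 1 < a < 2: finite mean lam/(a-1), infinite variance
    rng = random.Random(7)
    xs = sorted(lomax_sample(a, lam, rng) for _ in range(200000))
    median_hat = xs[len(xs) // 2]
    median_true = lam * (2.0 ** (1.0 / a) - 1.0)
    # The median is a robust check; the sample MEAN converges very slowly
    # because the variance is infinite -- the phenomenon the abstract
    # describes for time averages driven by Lomax service times.
    print(median_hat, median_true)
```

The empirical median settles quickly while the sample mean keeps jumping with occasional huge draws, a small-scale analogue of the impractically long run lengths the study reports.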
- …