Time: It's time for a change
Since the 1970s, the scientific field of model-based performance and dependability evaluation has been flourishing. Starting with breakthroughs in the area of closed queueing networks in the 1970s, the 1980s brought new results on state-based methods, such as those for stochastic Petri nets and matrix-geometric methods, whereas the 1990s introduced process-algebra-type models. Since the turn of the century, techniques for stochastic model checking have been introduced, to name just a few major developments. The applicability of all these techniques has been boosted enormously by Moore's law; these days, stochastic models with tens of millions of states can easily be dealt with on a standard desktop or laptop computer. A dozen or so dedicated conferences serve the scientific field, as well as a number of scientific journals. However, for the field as a whole to make progress, it is important to step back and to consider how all these, in themselves important, developments have really changed the way computer and communication systems are being designed and operated. The answer to this question is most probably rather disappointing. I do observe a rather strong discrepancy between what is being published in top conferences and journals, and what is being used in real practice. Blaming industry for this would be too easy a way out. Currently, we do not see model-based performance and dependability evaluation as a key step in the design process for new computer and communication systems. Moreover, in the exceptional cases in which we do see performance and dependability evaluation being part of a design practice, the employed techniques are not the ones referred to above but, depending on the application area, techniques like discrete-event simulation on the basis of hand-crafted simulation programs (communication protocols), or techniques based on (non-stochastic) timed automata or timeless behavioral models (embedded systems). In all these cases, however, the scalability of the employed methods, also for discrete-event simulation, forms a limiting factor. Still, industry is serving the world with ever better, faster and more impressive computing machinery and software! What went wrong? When and why did 'our field' land on a side track? In this presentation I will argue that it is probably time for a change, toward a new way of looking at performance and dependability models and the evaluation of computer and communication systems, a way that is, if you like, closer to the way physicists deal with very large-scale systems, by applying different types of abstractions. In particular, I will argue that computer scientists should 'stop counting things'. Instead, a more fluid way of thinking about system behavior is deemed necessary to evaluate the performance and dependability of the next generation of very large-scale omnipresent systems. First successes of such new approaches have recently been reported. Will we witness a paradigm shift in the years to come?
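A loose illustration of this 'fluid' view (a sketch of my own, with hypothetical rates, not material from the talk): rather than counting the exact number of failed components among N in a continuous-time Markov chain whose state space grows with N, a mean-field model tracks only the failed fraction x(t) through the ODE dx/dt = lam*(1 - x) - mu*x.

    # Forward-Euler solution of the mean-field failure/repair ODE
    # dx/dt = lam*(1 - x) - mu*x; all parameter values are illustrative.
    def fluid_failed_fraction(lam=0.01, mu=0.1, x0=0.0, t_end=100.0, dt=0.01):
        x, t = x0, 0.0
        while t < t_end:
            x += (lam * (1.0 - x) - mu * x) * dt
            t += dt
        return x  # converges to lam / (lam + mu), independent of N

    print(fluid_failed_fraction())  # ~0.0909, the equilibrium failed fraction

The point of the abstraction: the ODE has a single variable, regardless of whether the system has a hundred components or a billion.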
Formal Modeling and Analysis of Timed Systems: Technology Push or Market Pull?
In this short paper I will address the question of whether the methods and techniques we develop are applied well in industrial practice. To address this question, I will make a few observations from the academic field, as well as from industrial practice. This will be followed by a concise analysis of the cause of the perceived gap between the academic state of the art and industrial practice. I will conclude with some opportunities for improvement.
Battery Aging and the Kinetic Battery Model
Batteries are omnipresent, and with the rise of electric vehicles their use will grow even more. However, batteries can deliver their required power only for a limited time span: they slowly degrade with every charge-discharge cycle. This degradation needs to be taken into account when considering the battery in long-lasting applications. Some detailed battery models that describe this degradation exist. However, these are complex models that require detailed knowledge, and they are in general computationally intensive, which does not make them well suited for use in a wider context. A model better suited for this is the Kinetic Battery Model. In this paper, we investigate how this model changes due to battery degradation, based on the results of our experimental degradation analysis. In our analysis we see that the degradation takes place in two phases: after a first phase of slow degradation, the battery suddenly starts to degrade rapidly.
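A minimal simulation sketch may help to make the two-well intuition of the Kinetic Battery Model concrete (all parameter values below are illustrative assumptions, not taken from the paper). KiBaM splits the charge into an available well y1, which directly supplies the load, and a bound well y2, which replenishes y1 through a diffusion term k*(h2 - h1):

    # Kinetic Battery Model (KiBaM), forward-Euler discharge simulation.
    def simulate_kibam(capacity=20000.0, c=0.5, k=1e-3, current=10.0, dt=1.0):
        y1 = c * capacity          # available charge (directly usable)
        y2 = (1.0 - c) * capacity  # bound charge (replenishes y1 slowly)
        t = 0.0
        while y1 > 0.0:
            h1, h2 = y1 / c, y2 / (1.0 - c)  # "heights" of the two wells
            flow = k * (h2 - h1)             # bound -> available diffusion
            y1 += (flow - current) * dt      # load drains the available well
            y2 -= flow * dt
            t += dt
        return current * t  # charge delivered before the battery cuts off

    print(simulate_kibam())  # less than the nominal capacity of 20000

Because the bound charge cannot keep up with a high load, the delivered charge stays below the nominal capacity, which is the rate-capacity effect the model captures; degradation, as studied in the paper, would presumably manifest as changes in such parameters over charge-discharge cycles.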
Product forms for availability
This paper shows and illustrates that product-form expressions for the steady-state distribution, as known for queueing networks, can also be extended to a class of availability models. This class allows the breakdown and repair rates of one component to depend on the status of other components. Common resource capacities and repair priorities, for example, are included. Conditions for the models to have a product form are stated explicitly. This product form is shown to be insensitive to the distributions of the underlying random variables, i.e. to depend only on their means. Further, it is briefly indicated how queueing for repair can be incorporated. Novel product-form examples are presented for a simple series/parallel configuration, a fault-tolerant database system and a multi-stage interconnection network.
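A hedged sketch of the simplest instance of such a product form (the independent-components case only; the paper's class is broader, allowing the dependencies above under explicit conditions): let component i break down with rate \lambda_i and be repaired with rate \mu_i, and let x_i = 1 denote that component i is down. Then

    \pi(x_1, \ldots, x_N) = \frac{1}{G} \prod_{i=1}^{N} \left( \frac{\lambda_i}{\mu_i} \right)^{x_i},
    \qquad
    G = \prod_{i=1}^{N} \left( 1 + \frac{\lambda_i}{\mu_i} \right).

The insensitivity mentioned above is already visible here: the steady-state distribution depends on the up-time and repair-time distributions only through their means, via the ratios \lambda_i/\mu_i.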
Fitting World-Wide Web Request Traces with the EM-Algorithm
In recent years, several studies have shown that network traffic exhibits the property of self-similarity. Traditional (Poissonian) modelling approaches have been shown to be unable to describe this property and generally lead to the underestimation of interesting performance measures. Crovella and Bestavros have shown that network traffic due to World Wide Web transfers shows characteristics of self-similarity, and they argue that this can be explained by the heavy-tailedness of many of the involved distributions. Given these facts, developing methods that are able to handle self-similarity and heavy-tailedness is of great importance for network capacity planning purposes. In this paper we discuss two methods to fit hyper-exponential distributions to data sets that exhibit heavy tails. One method is taken from the literature and shown to fall short. The other, a new method, is shown to perform well in a number of case studies.
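A hedged sketch of what EM fitting of a hyper-exponential can look like (a minimal implementation of my own for the 2-phase case on synthetic data; not the paper's actual algorithm, data, or parameter choices). The density is f(x) = sum_j p_j * l_j * exp(-l_j * x); the E-step computes per-phase responsibilities, and the M-step re-estimates the weights and rates:

    import numpy as np

    def fit_hyperexp(x, n_phases=2, iters=200):
        x = np.asarray(x, dtype=float)
        p = np.full(n_phases, 1.0 / n_phases)                  # mixing weights
        lam = 1.0 / (np.mean(x) * np.arange(1, n_phases + 1))  # spread-out init
        for _ in range(iters):
            # E-step: responsibility of phase j for each observation
            dens = p * lam * np.exp(-np.outer(x, lam))
            gamma = dens / dens.sum(axis=1, keepdims=True)
            # M-step: re-estimate weights and rates from the responsibilities
            nj = gamma.sum(axis=0)
            p = nj / len(x)
            lam = nj / (gamma * x[:, None]).sum(axis=0)
        return p, lam

    rng = np.random.default_rng(0)
    data = np.where(rng.random(5000) < 0.9,       # 90% short transfers,
                    rng.exponential(1.0, 5000),   # 10% very long ones:
                    rng.exponential(50.0, 5000))  # a crude heavy tail
    print(fit_hyperexp(data))

Mixing a few exponential phases with widely separated rates is what lets a hyper-exponential mimic a heavy tail over the range covered by the data.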