253 research outputs found

    Practical applications of performance modelling of security protocols using PEPA

    PhD Thesis. The trade-off between security and performance has become an intriguing area in recent years in both the security and performance communities. As the security aspects of security protocol research are fully fledged, this thesis is devoted to a performance study of these protocols. The long-term objective is to translate formal definitions of security protocols into formal performance models automatically, and then to analyse them with relevant techniques. In this thesis, we take a preliminary step by studying five typical security protocols and exploring a methodology for constructing and analysing their models using the Markovian process algebra PEPA. Through these case studies, an initial framework for the performance analysis of security protocols is established. Firstly, a key distribution centre is investigated. The basic model suffers from the commonly encountered state space explosion problem, so we apply some efficient solution techniques, including model reduction and ordinary differential equation (ODE) based fluid-flow analysis. Finally, we evaluate a utility function for this secure key exchange model. We then explore two non-repudiation protocols. Mean value analysis is applied to a class of PEPA models and compared with an ODE approximation. After that, an optimistic non-repudiation protocol with an off-line trusted third party is studied; its PEPA model is formulated using a concept of multi-threaded servers with functional rates. The final case study is a cross-realm Kerberos protocol, for which a simplified technique of aggregation with an ODE approximation is used to perform efficient analysis. All these modelling and analysis methods are illustrated through numerical examples.
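
    To make the fluid-flow idea concrete, here is a minimal Python sketch (not the thesis's actual model) of an ODE approximation for a population of clients synchronising with a shared key distribution centre. The two-state client cycle, rates and population sizes are illustrative assumptions; the min() stands in for the bounded-capacity style of fluid semantics for shared actions in PEPA.

```python
# A hypothetical fluid-flow approximation: clients cycle between an
# idle state (about to request a key) and a waiting state (being
# served by the KDC). All rates and counts are invented for the sketch.
import numpy as np
from scipy.integrate import odeint

r_req, r_serve = 1.0, 5.0      # per-client request rate, per-server service rate
N_CLIENTS, N_SERVERS = 100, 5

def deriv(y, t):
    idle, waiting = y
    # Shared action: proceeds at the minimum of the clients' and the
    # servers' apparent rates (fluid analogue of PEPA cooperation).
    serve = min(r_serve * N_SERVERS, r_serve * waiting)
    return [serve - r_req * idle,     # d(idle)/dt
            r_req * idle - serve]     # d(waiting)/dt

t = np.linspace(0.0, 10.0, 201)
traj = odeint(deriv, [float(N_CLIENTS), 0.0], t)
print("waiting clients at t=10 ≈", round(traj[-1, 1], 2))
```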

    Proceedings of International Workshop "Global Computing: Programming Environments, Languages, Security and Analysis of Systems"

    According to the IST/FET proactive initiative on GLOBAL COMPUTING, the goal is to obtain techniques (models, frameworks, methods, algorithms) for constructing systems that are flexible, dependable, secure, robust and efficient. The dominant concerns are not those of representing and manipulating data efficiently but rather those of handling the coordination and interaction, security, reliability, robustness, failure modes, and control of risk of the entities in the system, and the overall design, description and performance of the system itself. Completely different paradigms of computer science may have to be developed to tackle these issues effectively. The research should concentrate on systems having the following characteristics:
    • The systems are composed of autonomous computational entities where activity is not centrally controlled, either because global control is impossible or impractical, or because the entities are created or controlled by different owners.
    • The computational entities are mobile, due to the movement of the physical platforms or by movement of the entity from one platform to another.
    • The configuration varies over time. For instance, the system is open to the introduction of new computational entities and likewise their deletion. The behaviour of the entities may vary over time.
    • The systems operate with incomplete information about the environment. For instance, information becomes rapidly out of date and mobility requires information about the environment to be discovered.
    The ultimate goal of the research action is to provide a solid scientific foundation for the design of such systems, and to lay the groundwork for achieving effective principles for building and analysing such systems. This workshop covers the aspects related to languages and programming environments as well as analysis of systems and resources, involving 9 projects (AGILE, DART, DEGAS, MIKADO, MRG, MYTHS, PEPITO, PROFUNDIS, SECURE) out of the 13 funded under the initiative. One year after the start of the projects, the goal of the workshop is to fix the state of the art on the topics covered by the two clusters related to programming environments and analysis of systems, and to devise strategies and new ideas to profitably continue the research effort towards the overall objective of the initiative. We acknowledge the Dipartimento di Informatica e Telecomunicazioni of the University of Trento, the Comune di Rovereto, and the project DEGAS for partially funding the event, and the Events and Meetings Office of the University of Trento for their valuable collaboration.

    Quantitative Verification: Formal Guarantees for Timeliness, Reliability and Performance

    Computerised systems appear in almost all aspects of our daily lives, often in safety-critical scenarios such as embedded control systems in cars and aircraft or medical devices such as pacemakers and sensors. We are thus increasingly reliant on these systems working correctly, despite often operating in unpredictable or unreliable environments. Designers of such devices need ways to guarantee that they will operate in a reliable and efficient manner. Quantitative verification is a technique for analysing quantitative aspects of a system's design, such as timeliness, reliability or performance. It applies formal methods, based on a rigorous analysis of a mathematical model of the system, to automatically prove certain precisely specified properties, e.g. "the airbag will always deploy within 20 milliseconds after a crash" or "the probability of both sensors failing simultaneously is less than 0.001". The ability to formally guarantee quantitative properties of this kind is beneficial across a wide range of application domains. For example, in safety-critical systems, it may be essential to establish credible bounds on the probability with which certain failures or combinations of failures can occur. In embedded control systems, it is often important to comply with strict constraints on timing or resources. More generally, being able to derive guarantees on precisely specified levels of performance or efficiency is a valuable tool in the design of, for example, wireless networking protocols, robotic systems or power management algorithms, to name but a few. This report gives a short introduction to quantitative verification, focusing in particular on a widely used technique called model checking, and its generalisation to the analysis of quantitative aspects of a system such as timing, probabilistic behaviour or resource usage. The intended audience is industrial designers and developers of systems such as those highlighted above who could benefit from the application of quantitative verification, but lack expertise in formal verification or modelling.
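
    As a flavour of the numerical machinery behind such guarantees, the sketch below computes a property of the second kind ("the probability of both sensors failing") for a small, entirely hypothetical discrete-time Markov chain by solving a linear system; conceptually, this is the computation a probabilistic model checker performs once the model has been built.

```python
# Reachability probability in a tiny, invented Markov chain.
import numpy as np

# States: 0 = ok, 1 = one sensor down, 2 = both down (fail), 3 = repaired (safe).
# P[i, j] = one-step transition probability; the numbers are illustrative.
P = np.array([
    [0.98, 0.02, 0.00, 0.00],
    [0.00, 0.90, 0.01, 0.09],
    [0.00, 0.00, 1.00, 0.00],   # fail is absorbing
    [0.00, 0.00, 0.00, 1.00],   # safe is absorbing
])
fail = 2
transient = [0, 1]

# x_i = Pr(reach fail | start in i) satisfies x = A x + b over the
# transient states, so we solve (I - A) x = b.
A = P[np.ix_(transient, transient)]
b = P[np.ix_(transient, [fail])].ravel()
x = np.linalg.solve(np.eye(len(transient)) - A, b)
print("Pr(eventual failure from 'ok') =", x[0])
```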

    Practical applications of probabilistic model checking to communication protocols

    Probabilistic model checking is a formal verification technique for the analysis of systems that exhibit stochastic behaviour. It has been successfully employed in an extremely wide array of application domains including, for example, communication and multimedia protocols, security and power management. In this chapter we focus on the applicability of these techniques to the analysis of communication protocols. An analysis of the performance of such systems must successfully incorporate several crucial aspects, including concurrency between multiple components, real-time constraints and randomisation. Probabilistic model checking, in particular using probabilistic timed automata, is well suited to such an analysis. We provide an overview of this area, with emphasis on an industrially relevant case study: the IEEE 802.3 (CSMA/CD) protocol. We also discuss two contrasting approaches to the implementation of probabilistic model checking, namely those based on numerical computation and those based on discrete-event simulation. Using results from the two tools PRISM and APMC, we summarise the advantages, disadvantages and trade-offs associated with these techniques.
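
    The contrast between the two implementation styles can be sketched in a few lines of Python: the code below estimates, by Monte Carlo sampling in the spirit of APMC, the probability that a simple binary-exponential-backoff fragment (illustrative only, not the full IEEE 802.3 model) resolves a collision within five attempts. A numerical engine such as PRISM would instead compute this value exactly from the underlying Markov model.

```python
# Statistical estimation of a bounded-reachability property for a
# toy backoff scheme; the protocol fragment and bound are invented.
import random

def collision_resolved(rng, max_attempts=5):
    """Two stations retry with binary exponential backoff after a
    collision; True if they pick distinct slots within the bound."""
    for attempt in range(1, max_attempts + 1):
        window = 2 ** min(attempt, 10)          # contention window grows
        if rng.randrange(window) != rng.randrange(window):
            return True                          # distinct slots: resolved
    return False

rng = random.Random(0)
n = 100_000
hits = sum(collision_resolved(rng) for _ in range(n))
print(f"Pr(resolved within 5 attempts) ≈ {hits / n:.4f}")
# A numerical engine would obtain the exact probability by exhaustive
# state-space exploration rather than sampling, trading memory for precision.
```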

    Performance modelling of fairness in IEEE 802.11 wireless LAN protocols

    PhD Thesis. Wireless communication has become a key technology in the modern world, allowing network services to be delivered in almost any environment, without the need for potentially expensive and invasive fixed cable solutions. However, the level of performance experienced by wireless devices varies tremendously with location and time. Understanding the factors which can cause variability of service is therefore of clear practical and theoretical interest. In this thesis we explore the performance of the IEEE 802.11 family of wireless protocols, which have become the de facto standard for Wireless Local Area Networks (WLANs). The specific performance issue which is investigated is the unfairness which can arise due to the spatial position of nodes in the network. In this work we characterise unfairness in terms of the difference in performance (e.g. throughput) experienced by different pairs of communicating nodes within a network. Models are presented using the Markovian process algebra PEPA which depict different scenarios with three of the main protocols: IEEE 802.11b, IEEE 802.11g and IEEE 802.11n. The analysis shows that performance is affected by the presence of other nodes (including in the well-known hidden node case), by the data rate and by the size of the frames being transmitted. The collection of models and analysis in this thesis not only provides an insight into fairness in IEEE 802.11 networks, but also represents a significant use case in modelling network protocols using PEPA. PEPA and other stochastic process algebras are extremely powerful tools for efficiently specifying models which might be very complex to study using conventional simulation approaches. Furthermore, the tool support for PEPA facilitates the rapid solution of models to derive key metrics which enable the modeller to gain an understanding of the network behaviour across a wide range of operating conditions. From the results we can see that short frames promote greater fairness, because the more frequent spaces between frames allow other senders to transmit. An interesting consequence of these findings is the observation that varying frame length can play a role in addressing topological unfairness, which leads to the analysis of a novel model of IEEE 802.11g with variable frame lengths. While varying frame lengths might not always be practically possible, as frames need to be long enough for collisions to be detected, IEEE 802.11n supports a number of mechanisms for frame aggregation, where successive frames may be sent in series with little or no delay between them. We therefore present a novel model of IEEE 802.11n with frame aggregation to explore how this approach affects fairness and, potentially, how it can be used to address unfairness by allowing affected nodes to transmit longer frame bursts. Sponsored by the Kurdistan Regional Government of Iraq (KRG).
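
    As a toy illustration of the frame-length effect described above (and emphatically not one of the thesis's PEPA models), the following Python sketch simulates two hidden senders sharing a receiver in discrete slots: neither can sense the other, so any temporal overlap destroys both frames, and longer frames widen the vulnerable period. All parameters are invented for the example.

```python
# Slotted model of two hidden senders; overlapping frames are both lost.
import random

def per_sender_throughput(frame_len, p_start=0.05, slots=200_000, seed=1):
    rng = random.Random(seed)
    remaining = [0, 0]            # slots left in each sender's current frame
    collided = [False, False]     # has the current frame overlapped the other?
    delivered = [0, 0]            # successfully delivered slots per sender
    for _ in range(slots):
        for i in (0, 1):
            if remaining[i] == 0 and rng.random() < p_start:
                remaining[i], collided[i] = frame_len, False
        if remaining[0] > 0 and remaining[1] > 0:
            collided = [True, True]          # hidden-node collision
        for i in (0, 1):
            if remaining[i] > 0:
                remaining[i] -= 1
                if remaining[i] == 0 and not collided[i]:
                    delivered[i] += frame_len
    return [d / slots for d in delivered]

for L in (4, 16, 64):
    print(f"frame length {L:2d}:", per_sender_throughput(L))
```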

    Performance modelling of network management schemes for mobile wireless networks


    Studying the effects of adding spatiality to a process algebra model

    We use NetLogo to create simulations of two models of disease transmission originally expressed in WSCCS. This allows us to introduce spatiality into the models and explore the consequences of having different contact structures among the agents. In previous work, mean field equations were derived from the WSCCS models, giving a description of the aggregate behaviour of the overall population of agents. These results turned out to differ from results obtained by another team using cellular automata models, which differ from process algebra by being inherently spatial. By using NetLogo we are able to explore whether spatiality, and the resulting differences in the contact structures of the two kinds of models, are the reason for these different results. Our tentative conclusions, based at this point on informal observations of simulation results, are that space does indeed make a big difference. If space is ignored and individuals are allowed to mix randomly, then the simulations yield results that closely match the mean field equations, and consequently also match the associated global transmission terms (explained below). At the opposite extreme, if individuals can only contact their immediate neighbours, the simulation results are very different from the mean field equations (and also do not match the global transmission terms). These results are not surprising, and are consistent with other cellular automata-based approaches. We found that it was easy and convenient to implement and simulate the WSCCS models within NetLogo, and we recommend this approach to anyone wishing to explore the effects of introducing spatiality into a process algebra model.
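
    The following Python sketch (a stand-in for the NetLogo/WSCCS setup, with an invented SIS-style infection process and illustrative rates) reproduces the qualitative comparison: under random mixing the simulated prevalence settles near the mean-field fixed point 1 − γ/β, whereas restricting contact to the four grid neighbours gives visibly different results.

```python
# SIS-style infection on an n-by-n torus. local=True restricts contact
# to the four neighbours; local=False mixes the whole population, as
# the mean-field ODE assumes. All parameters are invented.
import random

def final_prevalence(n=40, steps=200, beta=0.6, gamma=0.2, local=True, seed=2):
    rng = random.Random(seed)
    grid = [[rng.random() < 0.05 for _ in range(n)] for _ in range(n)]
    for _ in range(steps):
        global_pressure = sum(map(sum, grid)) / (n * n)
        nxt = [row[:] for row in grid]
        for i in range(n):
            for j in range(n):
                if grid[i][j]:
                    if rng.random() < gamma:
                        nxt[i][j] = False            # recovery
                else:
                    if local:                        # fraction of infected neighbours
                        pressure = sum(grid[(i + di) % n][(j + dj) % n]
                                       for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1))) / 4
                    else:
                        pressure = global_pressure   # random mixing
                    if rng.random() < beta * pressure:
                        nxt[i][j] = True             # infection
        grid = nxt
    return sum(map(sum, grid)) / (n * n)

print("random mixing:", final_prevalence(local=False))  # ≈ 1 - gamma/beta ≈ 0.67
print("local contact:", final_prevalence(local=True))
```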

    Stochastic Simulation Methods Applied to a Secure Electronic Voting Model

    We demonstrate a novel simulation technique for analysing large stochastic process algebra models, applying it to a secure electronic voting system example. By approximating the discrete state space of a PEPA model by a continuous equivalent, we can draw on rate equation simulation techniques from both chemical and biological modelling to avoid having to directly enumerate the huge state spaces involved. We use stochastic simulation techniques to produce time-series traces representing the number of components in a particular state. Using such a technique we can obtain simulation results for models exceeding 10^10000 states within only a few seconds.
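
    Below is a minimal Gillespie-style sketch in Python of the kind of stochastic simulation borrowed from chemical kinetics; the two-state "voter" population and its rates are invented for illustration and are not the paper's PEPA model. Because only the population counts are tracked, the cost of a run is independent of the size of the underlying explicit state space.

```python
# Gillespie's stochastic simulation algorithm for a hypothetical
# population of voters moving between idle and active states.
import random

def gillespie(n_idle=1000, n_active=0, r_start=1.0, r_finish=2.0,
              t_end=10.0, seed=3):
    rng = random.Random(seed)
    t, trace = 0.0, [(0.0, n_active)]
    while t < t_end:
        a1 = r_start * n_idle        # an idle voter begins a session
        a2 = r_finish * n_active     # an active voter finishes
        total = a1 + a2
        if total == 0:
            break
        t += rng.expovariate(total)  # exponential time to next event
        if rng.random() * total < a1:
            n_idle, n_active = n_idle - 1, n_active + 1
        else:
            n_idle, n_active = n_idle + 1, n_active - 1
        trace.append((t, n_active))
    return trace

trace = gillespie()
print("active voters at t_end ≈", trace[-1][1])
```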