19 research outputs found

    Security properties of self-similar uniformly parameterised systems of cooperations

    Uniform parameterisations of cooperations are defined in terms of formal language theory, such that each pair of partners cooperates in the same manner and the mechanism (schedule) that determines how one partner may be involved in several cooperations is the same for each partner. Generalising the notion of each pair of partners cooperating in the same manner, a kind of self-similarity is formalised for such systems of cooperations. From an abstracting point of view, where only the actions of some selected partners are considered, the complex system of all partners behaves like the smaller subsystem of the selected partners. For verification purposes, so-called uniformly parameterised safety properties are defined. Such properties can be used to express privacy policies as well as security and dependability requirements. It is shown how, by self-similarity, the parameterised problem of verifying such a property is reduced to a finite-state problem. Keywords: cooperations as prefix-closed languages; abstractions of system behaviour; self-similarity in systems of cooperations; privacy policies; uniformly parameterised safety properties
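    The abstract models cooperations as prefix-closed languages and abstraction as an alphabetic homomorphism that erases the actions of non-selected partners. A minimal sketch of these two notions, with invented partner and action names (the concrete encoding of actions as partner/step pairs is an assumption, not taken from the paper):

```python
def is_prefix_closed(language):
    """Check that every prefix of every word is again in the language."""
    return all(word[:i] in language for word in language for i in range(len(word)))

def project(language, alphabet):
    """Abstract a behaviour: erase all actions outside the selected alphabet."""
    return {tuple(a for a in word if a in alphabet) for word in language}

# Toy system of two partners p1 and p2; actions are (partner, step) pairs.
system = {
    (),
    (("p1", "req"),),
    (("p1", "req"), ("p2", "req")),
    (("p1", "req"), ("p2", "req"), ("p1", "ack")),
}

assert is_prefix_closed(system)
# Abstraction onto the selected partner p1 only:
print(project(system, {("p1", "req"), ("p1", "ack")}))
```

    Self-similarity in the paper's sense would mean that such a projection of the full system again looks like the behaviour of the smaller subsystem of selected partners.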

    Security Analysis of System Behaviour - From "Security by Design" to "Security at Runtime"

    The Internet today provides the environment for novel applications and processes which may evolve well beyond their pre-planned scope and purpose. Security analysis is growing in complexity with the increase in functionality, connectivity, and dynamics of current electronic business processes. Technical processes within critical infrastructures also have to cope with these developments. To tackle the complexity of the security analysis, the application of models is becoming standard practice. However, model-based support for security analysis is not only needed in pre-operational phases but also during process execution, in order to provide situational security awareness at runtime. This cumulative thesis provides three major contributions to modelling methodology. Firstly, this thesis provides an approach for model-based analysis and verification of security and safety properties in order to support fault prevention and fault removal in system design or redesign. Furthermore, some construction principles for the design of well-behaved scalable systems are given. The second topic is the analysis of the exposure of vulnerabilities in the software components of networked systems to exploitation by internal or external threats. This kind of fault forecasting allows the security assessment of alternative system configurations and security policies. Validation and deployment of security policies that minimise the attack surface can then improve fault tolerance and mitigate the impact of successful attacks. Thirdly, the approach is extended to runtime applicability. An observing system monitors an event stream from the observed system with the aim of detecting faults - deviations from the specified behaviour or security compliance violations - at runtime. Furthermore, knowledge about the expected behaviour given by an operational model is used to predict faults in the near future. Building on this, a holistic security management strategy is proposed. The architecture of the observing system is described and the applicability of model-based security analysis at runtime is demonstrated utilising processes from several industrial scenarios. The results of this cumulative thesis are provided by 19 selected peer-reviewed papers.

    Pairs of Languages Closed under Shuffle Projection

    Shuffle projection is motivated by the verification of safety properties of special parameterised systems. Basic definitions and properties, especially those related to alphabetic homomorphisms, are presented. The relation between iterated shuffle products and shuffle projections is shown. A special class of multi-counter automata is introduced to formulate shuffle projection in terms of computations of these automata, represented by transductions. This reformulation of shuffle projection leads to construction principles for pairs of languages closed under shuffle projection. Additionally, it is shown that under certain conditions these transductions are rational, which implies decidability of closure under shuffle projection. Decidability of these conditions is proven for regular languages. Finally, without additional conditions, decidability of the question whether a pair of regular languages is closed under shuffle projection is shown. In an appendix, the relation between shuffle projection and the shuffle product of two languages is discussed. Additionally, a kind of shuffle product for computations in S-automata is defined.
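    The shuffle product mentioned in the abstract's appendix is the set of all interleavings of two words. A small recursive sketch of the word-level operation (the paper works with languages and automata; this illustrates only the basic interleaving idea):

```python
def shuffle(u, v):
    """All interleavings of words u and v (the shuffle product of two words)."""
    if not u:
        return {v}
    if not v:
        return {u}
    # Either the first letter of u or the first letter of v comes next.
    return ({u[0] + w for w in shuffle(u[1:], v)} |
            {v[0] + w for w in shuffle(u, v[1:])})

print(sorted(shuffle("ab", "c")))  # ['abc', 'acb', 'cab']
```

    Lifted to languages, the shuffle of two languages is the union of the shuffles of all word pairs; the iterated shuffle closes a language under shuffling with itself, which is the construction the abstract relates to shuffle projection.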

    Flows of virtual land and water through global trade of agricultural products


    Modelling users in networks with path choice: four studies in telecommunications and transit

    Networks of interacting users arise in many important modelling applications. Commuters interact with each other and form traffic jams during peak times. In communication networks, the users are network protocols that control sending rates and server choice. When protocols send at too high rates, network links get overloaded, resulting in lost data and high delays. Although these two example users seem very different, they are similar on a conceptual modelling level. Accurate user models are essential to study complex interactions in networks. The behaviour of a user with access to different paths in a network can be modelled as an optimisation problem. Users who choose paths with the highest utility are common in many different application areas, for example road traffic, Internet protocol modelling, and general societal networks, i.e. networks of humans in everyday life. Optimisation-based user models are also attractive from the perspective of a modeller since they often allow the derivation of insights about the behaviour of the entire system by only describing a user model. The aim of this thesis is to show, in four practical studies from telecommunications and transit networks, where optimisation-based models have limitations when modelling users with path choice. We study users who have access to a limited number of paths in large-scale data centers and investigate how many paths per user are realistically needed in order to get high throughput in the network. In multimedia streaming, we study a protocol that streams data over multiple paths, where path properties matter. We also investigate complex energy models for data interfaces on mobile phones and evaluate how to switch interfaces to save energy. Finally, we analyse a long-term data set from 20,000 transit commuters and give insights into how they change their travel behaviour in response to incentives and targeted offers. We use tools from optimisation, simulation, and statistics to evaluate the four studies and point out problems we faced when modelling and implementing the systems. The findings of this thesis indicate where user models need to be extended in order to be of practical use. The results can serve as a guide towards better user models for future modelling applications.
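    An optimisation-based path-choice user, as described above, picks the path of highest utility given the current network state. A toy sketch with an assumed congestion model (utility as negative load-proportional delay; the function and parameter names are invented for illustration, not taken from the thesis):

```python
def best_path(paths, load, capacity):
    """Pick the path with highest utility.

    paths: dict mapping a path name to its list of links.
    Utility is the negated sum of per-link delays, where delay is
    modelled (an assumption) as load/capacity on each link.
    """
    def utility(links):
        return -sum(load[l] / capacity[l] for l in links)
    return max(paths, key=lambda p: utility(paths[p]))

paths = {"direct": ["a"], "detour": ["b", "c"]}
load = {"a": 8.0, "b": 1.0, "c": 1.0}
capacity = {"a": 10.0, "b": 5.0, "c": 5.0}
print(best_path(paths, load, capacity))  # 'detour': delay 0.4 beats 0.8
```

    In a full model, each user's choice changes the loads seen by the others, which is exactly the interaction (e.g. equilibrium formation) that such studies analyse.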

    Application of policy-based techniques to process-oriented IT Service Management


    Formal Methods for Probabilistic Energy Models

    The energy consumption that arises from the utilisation of information processing systems contributes significantly to environmental pollution and accounts for a large share of operating costs. We therefore need to find ways to reduce the energy consumption of such systems. When trying to save energy it is important to ensure that the utility (e.g., user experience) of a system is not unnecessarily degraded, requiring a careful trade-off analysis between the consumed energy and the resulting utility. Research on energy efficiency has therefore become a very active and important research topic that concerns many different scientific areas and is also of interest to industry. The concept of quantiles is well-known in mathematical statistics, but its benefits for the formal quantitative analysis of probabilistic systems have been noticed only recently. For instance, with the help of quantiles it is possible to reason about the minimal energy that is required to obtain a desired system behaviour in a satisfactory manner, e.g., that a required user experience will be achieved with sufficient probability. Quantiles also allow the determination of the maximal utility that can be achieved with a reasonable probability while staying within a given energy budget. As these examples illustrate measures that are of interest when analysing energy-aware systems, it is clearly beneficial to extend formal analysis methods with possibilities for the calculation of quantiles. In this monograph, we show how to take advantage of such quantiles as an instrument for analysing the trade-off between energy and utility in the field of probabilistic model checking. To this end, we present algorithms for their computation over Markovian models. We further investigate different techniques to improve the computational performance of implementations of those algorithms.
The main feature that enables those improvements takes advantage of the specific characteristics of the linear programs that need to be solved for the computation of quantiles. The improved algorithms have been implemented and integrated into the well-known probabilistic model checker PRISM. The performance of this implementation is then demonstrated by means of different protocols, with an emphasis on the trade-off between the consumed energy and the resulting utility. Since the introduced methods are not restricted to energy-utility analysis, the proposed framework can be used to analyse the interplay of cost and resulting benefit in general.
    Contents:
    1 Introduction
      1.1 Related work
      1.2 Contribution and outline
    2 Preliminaries
    3 Reward-bounded reachability properties and quantiles
      3.1 Essentials
      3.2 Dualities
      3.3 Upper-reward bounded quantiles
        3.3.1 Precomputation
        3.3.2 Computation scheme
        3.3.3 Qualitative quantiles
      3.4 Lower-reward bounded quantiles
        3.4.1 Precomputation
        3.4.2 Computation scheme
      3.5 Energy-utility quantiles
      3.6 Quantiles under side conditions
        3.6.1 Upper reward bounds
        3.6.2 Lower reward bounds
          3.6.2.1 Maximal reachability probabilities
          3.6.2.2 Minimal reachability probabilities
      3.7 Reachability quantiles and continuous time
        3.7.1 Dualities
    4 Expectation Quantiles
      4.1 Computation scheme
      4.2 Arbitrary models
        4.2.1 Existential expectation quantiles
        4.2.2 Universal expectation quantiles
    5 Implementation
      5.1 Computation optimisations
        5.1.1 Back propagation
        5.1.2 Reward window
        5.1.3 Topological sorting of zero-reward sub-MDPs
        5.1.4 Parallel computations
        5.1.5 Multi-thresholds
        5.1.6 Multi-state solution methods
        5.1.7 Storage for integer sets
        5.1.8 Elimination of zero-reward self-loops
      5.2 Integration in Prism
        5.2.1 Computation of reward-bounded reachability probabilities
        5.2.2 Computation of quantiles in CTMCs
    6 Analysed Protocols
      6.1 Prism Benchmark Suite
        6.1.1 Self-Stabilising Protocol
        6.1.2 Leader-Election Protocol
        6.1.3 Randomised Consensus Shared Coin Protocol
      6.2 Energy-Aware Protocols
        6.2.1 Energy-Aware Job-Scheduling Protocol
          6.2.1.1 Energy-Aware Job-Scheduling Protocol with side conditions
          6.2.1.2 Energy-Aware Job-Scheduling Protocol and expectation quantiles
          6.2.1.3 Multiple shared resources
        6.2.2 Energy-Aware Bonding Network Device (eBond)
        6.2.3 HAECubie Demonstrator
          6.2.3.1 Operational behaviour of the protocol
          6.2.3.2 Formal analysis
    7 Conclusion
      7.1 Classification
      7.2 Future prospects
    Bibliography
    List of Figures
    List of Tables
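    An energy quantile, as described above, is the minimal budget b such that the goal is reached with accumulated cost at most b with at least a target probability. A deliberately tiny sketch for a Bernoulli-style system where each unit-cost trial succeeds independently (a stand-in for the reward-bounded reachability probabilities a tool like PRISM would compute on a full Markovian model; all names are invented):

```python
def quantile(p_success, target_prob, max_budget=100):
    """Minimal budget b with P(success within b unit-cost trials) >= target_prob.

    Each trial succeeds independently with probability p_success, so the
    reachability probability within budget b is 1 - (1 - p_success)**b,
    accumulated here step by step as in an iterative computation scheme.
    """
    reach = 0.0
    for b in range(1, max_budget + 1):
        reach += (1 - reach) * p_success  # P(first success by trial b)
        if reach >= target_prob:
            return b
    return None  # target not reachable within max_budget

print(quantile(0.5, 0.9))  # -> 4, since 1 - 0.5**4 = 0.9375 >= 0.9
```

    On genuine MDPs the per-budget probabilities come from solving linear programs or value iteration, which is where the structural optimisations summarised in the contents (back propagation, reward windows, topological sorting) pay off.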

    A Data Protection Architecture for Derived Data Control in Partially Disconnected Networks

    Every organisation needs to exchange and disseminate data constantly amongst its employees, members, customers and partners. Disseminated data is often sensitive or confidential and access to it should be restricted to authorised recipients. Several enterprise rights management (ERM) systems and data protection solutions have been proposed by both academia and industry to enable usage control on disseminated data, i.e. to allow data originators to retain control over who accesses their information, under which circumstances, and how it is used. This is often achieved by means of cryptographic techniques, i.e. by disseminating encrypted data that only trustworthy recipients can decrypt. Most of these solutions assume that data recipients are connected to the network and able to contact remote policy evaluation authorities that can evaluate usage control policies and issue decryption keys. This assumption oversimplifies the problem by neglecting situations where connectivity is not available, as often happens in crisis management scenarios. In such situations, recipients may not be able to access the information they have received. Also, while using data, recipients and their applications can create new derived information, either by aggregating data from several sources or by transforming the original data's content or format. Existing solutions mostly neglect this problem and do not allow originators to retain control over this derived data, despite the fact that it may be more sensitive or valuable than the data originally disseminated. In this thesis we propose an ERM architecture that caters for both derived data control and usage control in partially disconnected networks. We propose the use of a novel policy lattice model based on information flow and mandatory access control. Sets of policies controlling the usage of data can be specified and ordered in a lattice according to the level of protection they provide.
At the same time, their association with specific data objects is mandated by rules (content verification procedures) defined in a data sharing agreement (DSA) stipulated amongst the organisations sharing information. When data is transformed, the new policies associated with it are automatically determined depending on the transformation used and the policies currently associated with the input data. The solution we propose takes into account transformations that can both increase or reduce the sensitivity of information, thus giving originators a flexible means to control their data and its derivations. When data must be disseminated in disconnected environments, the movement of users and the ad hoc connections they establish can be exploited to distribute information. To allow users to decrypt disseminated data without contacting remote evaluation authorities, we integrate our architecture with a mechanism for authority devolution, so that users moving in the disconnected area can be granted the right to evaluate policies and issue decryption keys. This allows recipients to contact any nearby user that is also a policy evaluation authority to obtain decryption keys. The mechanism has been shown to be efficient so that timely access to data is possible despite the lack of connectivity. Prototypes of the proposed solutions that protect XML documents have been developed. A realistic crisis management scenario has been used to show both the flexibility of the presented approach for derived data control and the efficiency of the authority devolution solution when handling data dissemination in simulated partially disconnected networks. While existing systems do not offer any means to control derived data and only offer partial solutions to the problem of lack of connectivity (e.g. 
by caching decryption keys), we have defined a set of solutions that address the shortcomings of current proposals and help data originators control their data in innovative, problem-oriented ways.
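    The policy lattice idea above can be illustrated with a minimal information-flow sketch: derived data is labelled with the join (least upper bound) of its inputs' labels, and some transformations may declassify. The totally ordered levels and the anonymising transformation below are invented examples, not the thesis's actual policy model:

```python
# Assumed totally ordered protection levels, lowest to highest.
LEVELS = ["public", "internal", "confidential", "secret"]

def join(*labels):
    """Least upper bound of policy labels in this (totally ordered) lattice."""
    return max(labels, key=LEVELS.index)

# Aggregating a public map with a confidential incident report yields
# confidential derived data:
derived = join("public", "confidential")
print(derived)  # confidential

def anonymise(label):
    """Hypothetical declassifying transformation: drops the label one level,
    reflecting that transformations may also reduce sensitivity."""
    return LEVELS[max(0, LEVELS.index(label) - 1)]

print(anonymise(derived))  # internal
```

    In the thesis's architecture, which policies attach to which objects is additionally mandated by content verification rules from a data sharing agreement (DSA); the lattice order is what makes the derived label well-defined.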