
    Property-preserving synthesis for unified control- and data-oriented models

    In the software/hardware engineering model-driven design methodology, the preservation of real-time system properties can be guaranteed during model synthesis up to a small time deviation. The methodology is therefore well suited to the design and implementation of control systems in which the execution times of actions are small, so that the resulting time deviations are also small. In systems containing time-intensive computations, however, the time deviations become large and, consequently, the real-time properties are considerably weakened. This chapter proposes an approach for obtaining stronger preservation of the observable properties of the system by abstracting from its internal, unobservable actions. In this way, a unified analysis and synthesis of a broader class of real-time applications is obtained, which enables designers to achieve predictability in the design of many systems.
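    The idea of comparing systems only up to their observable behaviour can be made concrete with a small sketch. The snippet below is my own illustration of hiding internal actions in a labelled transition system, with a hypothetical "tau" label standing in for unobservable steps; it is not the chapter's formal construction.

    ```python
    # Hiding internal actions: two models that differ only in unobservable steps
    # expose the same observable traces.  "tau" is a hypothetical internal label.
    INTERNAL = "tau"

    def observable_traces(transitions, start, depth):
        """Enumerate observable traces up to a bounded number of steps.

        transitions: dict mapping state -> list of (label, next_state).
        """
        traces = set()

        def explore(state, trace, remaining):
            traces.add(tuple(trace))
            if remaining == 0:
                return
            for label, nxt in transitions.get(state, []):
                if label == INTERNAL:
                    explore(nxt, trace, remaining - 1)            # invisible step
                else:
                    explore(nxt, trace + [label], remaining - 1)  # observable step

        explore(start, [], depth)
        return traces

    # The concrete model performs an internal computation before each cycle, the
    # abstract one does not; their observable behaviour coincides.
    concrete = {"s0": [("tau", "s1")], "s1": [("read", "s2")], "s2": [("write", "s0")]}
    abstract = {"a0": [("read", "a1")], "a1": [("write", "a0")]}
    assert observable_traces(concrete, "s0", 6) == observable_traces(abstract, "a0", 4)
    ```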

    When are Stochastic Transition Systems Tameable?

    A decade ago, Abdulla, Ben Henda and Mayr introduced the elegant concept of decisiveness for denumerable Markov chains [1]. Roughly speaking, decisiveness allows one to lift most good properties from finite Markov chains to denumerable ones, and therefore to adapt existing verification algorithms to infinite-state models. Decisive Markov chains, however, do not encompass stochastic real-time systems, and general stochastic transition systems (STSs for short) are needed. In this article, we provide a framework to perform both the qualitative and the quantitative analysis of STSs. First, we define various notions of decisiveness (inherited from [1]), notions of fairness and of attractors for STSs, and make explicit the relationships between them. Then, we define a notion of abstraction, together with natural concepts of soundness and completeness, and we give general transfer properties, which will be central to several verification algorithms on STSs. We further design a generic construction which will be useful for the analysis of ω-regular properties, when a finite attractor exists either in the system (if it is denumerable) or in a sound denumerable abstraction of the system. We next provide algorithms for qualitative model-checking, and generic approximation procedures for quantitative model-checking. Finally, we instantiate our framework with stochastic timed automata (STA), generalized semi-Markov processes (GSMPs) and stochastic time Petri nets (STPNs), three models combining dense time and probabilities. This allows us to derive decidability and approximability results for the verification of these models. Some of these results were known from the literature, but our generic approach allows us to view them in a unified framework and to obtain them with less effort. We also derive interesting new approximability results for STA, GSMPs and STPNs.
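    The finite case that decisiveness lifts to denumerable chains has a compact statement: the probability of reaching a target set in a finite Markov chain solves a linear system over the states that can reach the target. The sketch below is my own illustration of that base case, not an algorithm from the article.

    ```python
    # Quantitative reachability in a finite Markov chain:
    #   p(s) = 1 for target states, p(s) = 0 if the target is unreachable from s,
    #   p(s) = sum_t P(s, t) * p(t) otherwise.
    import numpy as np

    def reachability_probabilities(P, target):
        """P: row-stochastic n x n matrix; target: set of target state indices."""
        n = P.shape[0]
        # Backward graph fixed point: states from which the target is reachable.
        can_reach, changed = set(target), True
        while changed:
            changed = False
            for s in range(n):
                if s not in can_reach and any(P[s, t] > 0 for t in can_reach):
                    can_reach.add(s)
                    changed = True
        unknown = [s for s in sorted(can_reach) if s not in target]
        # Solve (I - A) x = b restricted to the states with unknown probability.
        A = np.array([[P[s, t] for t in unknown] for s in unknown])
        b = np.array([sum(P[s, t] for t in target) for s in unknown])
        x = np.linalg.solve(np.eye(len(unknown)) - A, b)
        probs = np.zeros(n)
        probs[list(target)] = 1.0
        for s, v in zip(unknown, x):
            probs[s] = v
        return probs

    # Three states: from state 0 the chain fails (state 1) with probability 1/3
    # and reaches the target (state 2) with probability 2/3.
    P = np.array([[0.0, 1/3, 2/3], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]])
    print(reachability_probabilities(P, {2}))  # -> [0.666..., 0., 1.]
    ```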

    Optimisation and characterisation of human corneal stromal models

    The native corneal structure is highly organised, with a unified architecture whose structural and functional integration mediates its transparency and mechanical strength. Two of the most demanding challenges in corneal tissue engineering are the replication of the native corneal stromal architecture and the preservation of the stromal cell phenotype, which prevents scar-like tissue formation. A concerted effort in this thesis has been devoted to the generation of a functional human corneal stromal model through the manipulation of chemical, topographical and cellular cues. To achieve this, previously built non-destructive, online, real-time monitoring techniques, micro-indentation and optical coherence tomography (OCT), which allow the mechanical and contraction properties of corneal equivalents to be monitored, have been improved. These macroscopic parameters have been cross-validated by histological, immunohistochemical, morphological and genetic expression analysis.

    Pragmatic approach to the development of robust real-time protocols

    This research is concerned with the development of distributed real-time systems, in which software is used for the control of concurrent physical processes. These distributed control systems are required to periodically coordinate the operation of several autonomous physical processes, with the property of an atomic action. The implementation of this coordination must be fault-tolerant if the integrity of the system is to be maintained in the presence of processor or communication failures. Commit protocols have been widely used to provide this type of atomicity and ensure consistency in distributed computer systems. The objective of this research is the development of a class of robust commit protocols applicable to the coordination of distributed real-time control systems. Extended forms of the standard two-phase commit protocol that provide fault-tolerant and real-time behaviour were developed. Petri nets are used for the design of the distributed controllers and to embed the commit protocol models within these controller designs. This composition of controller and protocol model allows the complete system to be analysed in a unified manner. A common problem for Petri-net-based techniques is state space explosion; a modular approach to both design and analysis helps to cope with this problem. Although extensions to Petri nets that allow module construction exist, the modularisation is generally restricted to the specification, and analysis must be performed on the (flat) detailed net. The Petri net designs for the type of distributed systems considered in this research are both large and complex. The top-down, bottom-up and hybrid synthesis techniques that are used to model large systems in Petri nets are considered, and a hybrid approach to Petri net design for a restricted class of communicating processes is developed. Designs produced using this hybrid approach are modular and allow the re-use of verified modules. In order to use this form of modular analysis, it is necessary to project an equivalent but reduced behaviour onto the modules used. These projections conceal events local to modules that are not essential for the purpose of analysis. To generate the external behaviour, each firing sequence of the subnet is replaced by an atomic transition internal to the module, and the firing of these transitions transforms the input and output markings of the module. Thus local events are concealed through the projection of the external behaviour of modules. This hybrid design approach preserves properties of interest, such as boundedness and liveness, while the systematic concealment of local events keeps the state space manageable. The approach presented in this research is particularly suited to distributed systems, as the underlying communication model is used as the basis for the interconnection of modules in the design procedure. The hybrid approach is applied to the Petri-net-based design and analysis of distributed controllers for two industrial applications that incorporate the robust, real-time commit protocols developed. Temporal Petri nets, which combine Petri nets and temporal logic, are used to capture and verify causal and temporal aspects of the designs in a unified manner.
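    As a point of reference for the coordination problem described above, the following sketch shows the decision rule of a two-phase commit round extended with a deadline, so that a late or missing vote is treated as an abort and every controller still applies the same outcome. The participant interface and names are hypothetical and are not taken from the thesis's Petri-net models.

    ```python
    # Two-phase commit with a real-time bound on the voting phase.
    import time

    COMMIT, ABORT = "COMMIT", "ABORT"

    class Participant:
        """A stand-in for one distributed controller."""
        def __init__(self, will_commit=True):
            self.will_commit = will_commit
            self.outcome = None

        def vote(self, action):
            return self.will_commit          # phase 1: local readiness check

        def decide(self, outcome):
            self.outcome = outcome           # phase 2: apply the global decision

    def coordinate(participants, action, timeout_s=0.05):
        deadline = time.monotonic() + timeout_s
        votes = []
        for p in participants:
            if time.monotonic() > deadline:  # real-time bound exceeded: stop voting
                break
            votes.append(p.vote(action))
        all_yes = len(votes) == len(participants) and all(votes)
        decision = COMMIT if all_yes else ABORT
        for p in participants:
            p.decide(decision)               # every controller sees the same outcome
        return decision

    controllers = [Participant(), Participant(), Participant(will_commit=False)]
    assert coordinate(controllers, "advance-conveyor") == ABORT
    assert all(c.outcome == ABORT for c in controllers)
    ```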

    Towards Applying Cryptographic Security Models to Real-World Systems

    The cryptographic methodology of formal security analysis usually works in three steps: choosing a security model, describing a system and its intended security properties, and creating a formal proof of security. For basic cryptographic primitives and simple protocols this is a well-understood process and is performed regularly. For more complex systems, as they are in use in real-world settings, however, it is rarely applied. In practice, this often leads to missing or incomplete descriptions of the security properties and requirements of such systems, which in turn can lead to insecure implementations and consequent security breaches. One of the main reasons for the lack of application of formal models in practice is that they are particularly difficult to use and to adapt to new use cases. With this work, we therefore aim to investigate how cryptographic security models can be used to argue about the security of real-world systems. To this end, we perform case studies of three important types of real-world systems: data outsourcing, computer networks and electronic payment. First, we give a unified framework to express and analyze the security of data outsourcing schemes. Within this framework, we define three privacy objectives: data privacy, query privacy, and result privacy. We show that data privacy and query privacy are independent concepts, while result privacy follows from them. We then extend our framework to allow the modeling of integrity for the specific use case of file systems. To validate our model, we show that existing security notions can be expressed within our framework, and we prove the security of CryFS, a cryptographic cloud file system. Second, we introduce a model, based on the Universal Composability (UC) framework, in which computer networks and their security properties can be described. We extend it to incorporate time, which cannot be expressed in the basic UC framework, and give formal tools to facilitate its application. For validation, we use this model to argue about the security of architectures of multiple firewalls in the presence of an active adversary. We show that a parallel composition of firewalls exhibits strictly better security properties than other variants. Finally, we introduce a formal model for the security of electronic payment protocols within the UC framework. Using this model, we prove a set of necessary requirements for secure electronic payment. Based on these findings, we discuss the security of current payment protocols and find that most are insecure. We then give a simple payment protocol inspired by chipTAN and photoTAN and prove its security within our model. We conclude that cryptographic security models can indeed be used to describe the security of real-world systems. They are, however, difficult to apply and always need to be adapted to the specific use case.
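    The firewall result has a simple operational reading: in a parallel composition a packet is forwarded only if every firewall accepts it, so a single compromised or misconfigured firewall cannot open the network on its own. The sketch below is my own illustration of that composition rule with made-up rule sets; it is not the thesis's UC formalisation.

    ```python
    # Parallel composition of firewall policies: the composed policy is at least
    # as restrictive as each individual policy.
    from typing import Callable, Iterable

    Packet = dict          # e.g. {"src": ..., "dst": ..., "port": ...}
    Firewall = Callable[[Packet], bool]

    def parallel_composition(firewalls: Iterable[Firewall]) -> Firewall:
        firewalls = list(firewalls)
        return lambda pkt: all(fw(pkt) for fw in firewalls)

    # Two hypothetical rule sets.
    fw1 = lambda p: p.get("port") in {80, 443}          # only web traffic
    fw2 = lambda p: p.get("dst") != "10.0.0.13"         # block one internal host
    combined = parallel_composition([fw1, fw2])

    assert combined({"dst": "10.0.0.7", "port": 443}) is True
    assert combined({"dst": "10.0.0.13", "port": 443}) is False   # fw2 overrules fw1
    ```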

    Statistical signal processing of nonstationary tensor-valued data

    Real-world signals, such as the evolution of three-dimensional vector fields over time, can exhibit highly structured probabilistic interactions across their multiple constitutive dimensions. This calls for analysis tools capable of directly capturing the inherent multi-way couplings present in such data. Yet, current analyses typically employ multivariate matrix models and their associated linear algebras which are agnostic to the global data structure and can only describe local linear pairwise relationships between data entries. To address this issue, this thesis uses the property of linear separability -- a notion intrinsic to multi-dimensional data structures called tensors -- as a linchpin to consider the probabilistic, statistical and spectral separability under one umbrella. This helps to both enhance physical meaning in the analysis and reduce the dimensionality of tensor-valued problems. We first introduce a new identifiable probability distribution which appropriately models the interactions between random tensors, whereby linear relationships are considered between tensor fibres as opposed to between individual entries as in standard matrix analysis. Unlike existing models, the proposed tensor probability distribution formulation is shown to yield a unique maximum likelihood estimator which is demonstrated to be statistically efficient. Both matrices and vectors are lower-order tensors, and this gives us a unique opportunity to consider some matrix signal processing models under the more powerful framework of multilinear tensor algebra. By introducing a model for the joint distribution of multiple random tensors, it is also possible to treat random tensor regression analyses and subspace methods within a unified separability framework. Practical utility of the proposed analysis is demonstrated through case studies over synthetic and real-world tensor-valued data, including the evolution over time of global atmospheric temperatures and international interest rates. Another overarching theme in this thesis is the nonstationarity inherent to real-world signals, which typically consist of both deterministic and stochastic components. This thesis aims to help bridge the gap between formal probabilistic theory of stochastic processes and empirical signal processing methods for deterministic signals by providing a spectral model for a class of nonstationary signals, whereby the deterministic and stochastic time-domain signal properties are designated respectively by the first- and second-order moments of the signal in the frequency domain. By virtue of the assumed probabilistic model, novel tests for nonstationarity detection are devised and demonstrated to be effective in low-SNR environments. The proposed spectral analysis framework, which is intrinsically complex-valued, is facilitated by augmented complex algebra in order to fully capture the joint distribution of the real and imaginary parts of complex random variables, using a compact formulation. Finally, motivated by the need for signal processing algorithms which naturally cater for the nonstationarity inherent to real-world tensors, the above contributions are employed simultaneously to derive a general statistical signal processing framework for nonstationary tensors. This is achieved by introducing a new augmented complex multilinear algebra which allows for a concise description of the multilinear interactions between the real and imaginary parts of complex tensors. 
    These contributions are further supported by new, physically meaningful empirical results on the statistical analysis of nonstationary global atmospheric temperatures.
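    The notion of linear separability used above has a compact second-order illustration: for matrix-valued (order-2 tensor) data, a separable model means the covariance of the vectorised sample factorises as a Kronecker product of a row covariance and a column covariance, sharply reducing the number of parameters relative to an unstructured covariance. The sketch below checks this numerically for a hypothetical two-way case; it is not the thesis's general formulation.

    ```python
    # Separable second-order statistics: cov(vec(X)) = B kron A when
    # X = L_A Z L_B^T with Z an i.i.d. standard normal matrix (column-major vec).
    import numpy as np

    rng = np.random.default_rng(0)
    m, n, samples = 4, 3, 20_000

    # Ground-truth factors (hypothetical values, both positive definite).
    A = np.array([[1.0, 0.3, 0.0, 0.0],
                  [0.3, 1.0, 0.3, 0.0],
                  [0.0, 0.3, 1.0, 0.3],
                  [0.0, 0.0, 0.3, 1.0]])
    B = np.array([[1.0, 0.5, 0.2],
                  [0.5, 1.0, 0.5],
                  [0.2, 0.5, 1.0]])

    LA, LB = np.linalg.cholesky(A), np.linalg.cholesky(B)
    # X[s] = LA @ Z[s] @ LB.T for each sample s.
    X = np.einsum("ij,sjk,lk->sil", LA, rng.standard_normal((samples, m, n)), LB)

    # Column-major vectorisation of each sample, then the empirical covariance.
    vec = X.transpose(0, 2, 1).reshape(samples, m * n)
    empirical = np.cov(vec, rowvar=False)
    assert np.allclose(empirical, np.kron(B, A), atol=0.1)
    ```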

    Full-field and anomaly initialization using a low-order climate model: a comparison and proposals for advanced formulations

    Initialization techniques for seasonal-to-decadal climate predictions fall into two main categories: full-field initialization (FFI) and anomaly initialization (AI). In the FFI case the initial model state is replaced by the best available estimate of the real state. By doing so the initial error is efficiently reduced, but, due to the unavoidable presence of model deficiencies, once the model is left free to run a prediction its trajectory drifts away from the observations no matter how small the initial error is. This problem is partly overcome with AI, where the aim is to forecast future anomalies by assimilating observed anomalies on an estimate of the model climate. The large variety of experimental setups, models and observational networks adopted worldwide makes it difficult to draw firm conclusions on the respective advantages and drawbacks of FFI and AI, or to identify clear directions for improvement. The lack of a unified mathematical framework adds a further difficulty to the design of adequate initialization strategies that fit the desired forecast horizon, observational network and model at hand. Here we compare FFI and AI using a low-order climate model of nine ordinary differential equations and use the notation and concepts of data assimilation theory to highlight their error scaling properties. This analysis suggests better performance with FFI when a good observational network is available and reveals the direct relation of its skill to the observational accuracy. The skill of AI appears, however, mostly related to the model quality, and clear increases in skill can only be expected in coincidence with model upgrades. We have compared FFI and AI in experiments in which either the full system or the atmosphere and ocean were independently initialized. In the former case FFI shows better and longer-lasting improvements, with skillful predictions until month 30. In the initialization of single compartments, the best performance is obtained when the more stable component of the model (the ocean) is initialized, but with FFI it is possible to have some predictive skill even when the most unstable compartment (the extratropical atmosphere) is observed. Two advanced formulations, least-square initialization (LSI) and exploring parameter uncertainty (EPU), are introduced. With LSI the initialization makes use of model statistics to propagate information from observation locations to the entire model domain. Numerical results show that LSI improves the performance of FFI in all situations in which only a portion of the system's state is observed. EPU is an online drift correction method in which the drift caused by the parametric error is estimated using a short-time evolution law and is then removed during the forecast run. Its implementation in conjunction with FFI allows us to improve the prediction skill within the first forecast year. Finally, the application of these results in the context of realistic climate models is discussed.
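    The two initialization rules compared above can be summarised in a few lines: FFI replaces the model state with the observed state, whereas AI adds the observed anomaly onto the model's own climatology, accepting a larger initial error in exchange for a smaller drift. The sketch below uses toy numbers of my own; it is not the paper's nine-variable model.

    ```python
    # FFI vs. AI initial conditions for a forecast started from observations.
    import numpy as np

    def ffi(x_obs):
        """FFI: start the forecast from the best estimate of the real state."""
        return np.asarray(x_obs, dtype=float)

    def ai(x_obs, obs_climatology, model_climatology):
        """AI: start from the model climate plus the observed anomaly."""
        anomaly = np.asarray(x_obs, dtype=float) - np.asarray(obs_climatology, dtype=float)
        return np.asarray(model_climatology, dtype=float) + anomaly

    # Toy numbers for a single temperature variable with a biased model climate.
    x_obs, obs_clim, model_clim = np.array([15.2]), np.array([14.0]), np.array([13.1])
    print(ffi(x_obs))                       # [15.2] -> small initial error, larger drift
    print(ai(x_obs, obs_clim, model_clim))  # [14.3] -> stays closer to the model attractor
    ```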

    Towards a Unified Theory of Timed Automata

    Timed automata are finite-state machines augmented with special clock variables that reflect the advancement of time. Able both to capture real-time behavior and to be verified algorithmically (model-checked), timed automata are used to model real-time systems. These observations have led to the development of several timed-automata verification tools that have been successfully applied to the analysis of a number of different systems; however, the practical utility of timed automata is undermined by the fact that the theories underlying different tools differ in subtle but important ways. Algorithmic results that hold for the variant used by one tool may not apply to another variant, which complicates the application of different tools to different models. The thesis of this dissertation is this: the theory of timed automata can be unified, and a practical unified approach to timed-automata model checking can be built around the paradigm of proof search. First, this dissertation establishes the mutual expressivity of timed automata variants, thereby providing precise characterizations of when theoretical results of one variant apply to other variants. Second, it proves powerful expressive properties about different logics for timed behavior and, as a result, enlarges the set of verifiable properties. Third, it discusses an implementation of a verification tool for an expressive fixpoint-based logic, demonstrating an application of this newly developed theory. The tool is based on a proof-search paradigm: verifying a timed automaton involves constructing proofs using proof rules that translate verification problems into subproblems that must be solved. The tool's performance is optimized by using derived proof rules, thereby providing a theoretically sound basis for faster model checking. Last, this dissertation utilizes the proofs generated during verification to gain additional information about the vacuous satisfaction of certain formulae: whether the automaton satisfied a formula by never satisfying certain premises of that specification. This extra information is often obtained without significantly decreasing the verifier's performance.
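    For readers unfamiliar with the model, the following sketch shows the two kinds of moves a timed automaton makes: time elapses uniformly in all clocks, and a discrete transition fires when its guard holds, resetting a chosen subset of clocks. The encoding is one common variant only, chosen for brevity; it is not the unified semantics developed in the dissertation.

    ```python
    # A minimal timed-automaton configuration with delay and discrete steps.
    from dataclasses import dataclass, field

    @dataclass
    class Transition:
        source: str
        target: str
        guard: callable                   # clock valuation -> bool
        resets: frozenset = frozenset()   # clocks set back to 0 when firing

    @dataclass
    class Config:
        location: str
        clocks: dict = field(default_factory=dict)

    def delay(cfg: Config, d: float) -> Config:
        """Time elapses uniformly in every clock."""
        return Config(cfg.location, {c: v + d for c, v in cfg.clocks.items()})

    def fire(cfg: Config, t: Transition) -> Config:
        """Take a discrete transition if its guard holds, resetting the named clocks."""
        assert cfg.location == t.source and t.guard(cfg.clocks)
        clocks = {c: (0.0 if c in t.resets else v) for c, v in cfg.clocks.items()}
        return Config(t.target, clocks)

    # A request must be acknowledged within 2 time units (guard x <= 2).
    req = Transition("idle", "waiting", lambda cl: True, resets=frozenset({"x"}))
    ack = Transition("waiting", "idle", lambda cl: cl["x"] <= 2.0)
    cfg = Config("idle", {"x": 0.0})
    cfg = fire(delay(cfg, 5.0), req)      # request at time 5, clock x reset
    cfg = fire(delay(cfg, 1.5), ack)      # acknowledged 1.5 later: guard holds
    print(cfg.location)                   # idle
    ```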