
    Methods for checking and enforcing physical quality of linear electrical network models

    Most CAD tools allow system-level simulation for signal integrity by computing models for the various sub-parts and connecting them together. The success of this model derivation depends on the quality of the network parameters. Several kinds of errors may seriously degrade the quality of the frequency characterization: frequency-dependent measurement errors, errors due to numerical simulation and/or discretization, and so on. When these errors are large, model assembly and simulation become difficult and may even fail. This thesis gives an overview of the most significant properties of physically valid network parameters, describes existing methods for checking and enforcing these properties, and presents several new methodologies for checking and enforcing causality. A time-domain methodology based on the vector-fitting approximation is presented, along with frequency-domain methodologies that enforce the Kramers-Kronig relations via numerical integration and the Fast Fourier Transform. A new algorithm is developed for stable recursive convolution after time-domain causality enforcement. In addition, global qualities of data for system simulations are discussed: a study of accurate causal frequency-domain interpolation and a robust technique for extrapolation to DC are included. --Abstract, page iii
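
    The FFT-based causality checks mentioned in this abstract rest on the fact that a causal response has essentially no impulse-response energy at negative times. The sketch below is only a rough illustration of that idea, not the thesis's methodology: it estimates the negative-time energy fraction of uniformly sampled frequency data with NumPy; the grid, the pole time constant and the function name are illustrative assumptions.

```python
import numpy as np

def causality_violation_metric(H):
    """Rough causality indicator for uniformly sampled frequency-response data.

    H : complex samples of the transfer function on a uniform two-sided
        frequency grid arranged in np.fft order (DC, positive, then negative
        frequencies).
    Returns the fraction of impulse-response energy found at negative times,
    which should be close to zero for a causal response.
    """
    h = np.fft.ifft(H)                              # approximate impulse response
    n = len(h)
    neg_energy = np.sum(np.abs(h[n // 2:]) ** 2)    # second half ~ negative time
    tot_energy = np.sum(np.abs(h) ** 2)
    return float(neg_energy / tot_energy)

# Example: a single-pole causal response sampled on a normalized uniform grid
# (grid size and time constant are arbitrary choices for the illustration).
f = np.fft.fftfreq(1024, d=1.0)
tau = 5.0
H_causal = 1.0 / (1.0 + 2j * np.pi * f * tau)
print(causality_violation_metric(H_causal))         # small value expected
```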

    Counterfactual Causality from First Principles?

    In this position paper we discuss three main shortcomings of existing approaches to counterfactual causality from the computer science perspective, and sketch lines of work to try to overcome these issues: (1) causality definitions should be driven by a set of precisely specified requirements rather than by specific examples; (2) causality frameworks should support system dynamics; (3) causality analysis should have a well-understood behavior in the presence of abstraction. Comment: In Proceedings CREST 2017, arXiv:1710.0277

    Causality and Temporal Dependencies in the Design of Fault Management Systems

    Reasoning about causes and effects naturally arises in the engineering of safety-critical systems. A classical example is Fault Tree Analysis, a deductive technique used for system safety assessment, whereby an undesired state is reduced to the set of its immediate causes. The design of fault management systems also requires reasoning about causality relationships. In particular, a fail-operational system needs to ensure timely detection and identification of faults, i.e. to recognize the occurrence of run-time faults through their observable effects on the system. Even more complex scenarios arise when multiple faults are involved and may interact in subtle ways. In this work, we propose a formal approach to fault management for complex systems. We first introduce the notions of fault tree and minimal cut sets. We then present a formal framework for the specification and analysis of diagnosability, and for the design of fault detection and identification (FDI) components. Finally, we review recent advances in fault propagation analysis, based on the Timed Failure Propagation Graphs (TFPG) formalism. Comment: In Proceedings CREST 2017, arXiv:1710.0277
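
    As a toy illustration of the fault-tree notions this abstract refers to (not the paper's formalism, and not the TFPG machinery), the sketch below expands AND/OR gates over basic events into cut sets and keeps only the minimal ones; the tree encoding and event names are assumptions made for the example.

```python
from itertools import product

# Toy fault-tree encoding: a node is either a basic-event name (str)
# or a tuple ("AND" | "OR", [children]).

def cut_sets(node):
    """Return all cut sets (frozensets of basic events) of a fault-tree node."""
    if isinstance(node, str):
        return {frozenset([node])}
    gate, children = node
    child_sets = [cut_sets(c) for c in children]
    if gate == "OR":     # any child's cut set suffices
        return set().union(*child_sets)
    if gate == "AND":    # combine one cut set from every child
        return {frozenset().union(*combo) for combo in product(*child_sets)}
    raise ValueError(f"unknown gate {gate!r}")

def minimal_cut_sets(node):
    """Keep only cut sets that have no strict subset among the others."""
    cs = cut_sets(node)
    return {s for s in cs if not any(t < s for t in cs)}

# Top event: failure of the primary AND (failure of the backup OR of the switch-over).
tree = ("AND", ["primary_fail", ("OR", ["backup_fail", "switch_fail"])])
print(minimal_cut_sets(tree))
# {frozenset({'primary_fail', 'backup_fail'}), frozenset({'primary_fail', 'switch_fail'})}
```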

    Trend-based analysis of a population model of the AKAP scaffold protein

    We formalise a continuous-time Markov chain model, with a multi-dimensional discrete state space, of the AKAP scaffold protein as a crosstalk mediator between two biochemical signalling pathways. Analysing the AKAP model by means of temporal properties requires reasoning about whether the counts of individuals of the same type (species) are increasing or decreasing. For this purpose we propose the concept of stochastic trends, based on formulating the probabilities of transitions that increase (resp. decrease) the counts of individuals of the same type, and expressing these probabilities as formulae such that the state space of the model is not altered. We define a number of stochastic trend formulae (e.g. weakly increasing, strictly increasing, weakly decreasing, etc.) and use them to extend the set of state formulae of Continuous Stochastic Logic. We show how stochastic trends can be implemented in a guarded-command style specification language for transition systems. We illustrate the application of stochastic trends with numerous small examples and then analyse the AKAP model in order to characterise and show causality and pulsating behaviours in this biochemical system.
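
    A minimal sketch of the trend idea, under the assumption that the probability of an increasing (or decreasing) step can be read off the embedded jump chain of the CTMC: sum the rates of the enabled transitions that raise the count of a species and divide by the total exit rate. The data layout and function name are illustrative and not the paper's specification language.

```python
# A transition enabled in the current state is a (rate, update) pair, where
# update maps species names to integer count changes.

def trend_probability(transitions, species, direction="increase"):
    """Probability that the next jump changes `species` in the given direction.

    Returns (sum of matching rates) / (total exit rate), i.e. the embedded
    jump-chain probability of an increasing or decreasing step.
    """
    total = sum(rate for rate, _ in transitions)
    if total == 0:
        return 0.0
    if direction == "increase":
        match = sum(rate for rate, upd in transitions if upd.get(species, 0) > 0)
    else:  # "decrease"
        match = sum(rate for rate, upd in transitions if upd.get(species, 0) < 0)
    return match / total

# Example: two reactions enabled in the current state of a two-species model
# (rates and updates are made up for the illustration).
enabled = [
    (0.3, {"A": +1, "B": -1}),   # reaction producing A while consuming B
    (0.1, {"A": -1}),            # degradation of A
]
print(trend_probability(enabled, "A", "increase"))   # 0.3 / 0.4 = 0.75
```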

    Stability, Causality, and Passivity in Electrical Interconnect Models

    Modern packaging design requires extensive signal integrity simulations in order to assess the electrical performance of the system. The feasibility of such simulations is granted only when accurate and efficient models are available for all system parts and components having a significant influence on the signals. Unfortunately, model derivation is still a challenging task, despite the extensive research that has been devoted to this topic. In fact, it is a common experience that modeling or simulation tasks sometimes fail, often without a clear understanding of the main reason. This paper presents the fundamental properties of causality, stability, and passivity that electrical interconnect models must satisfy in order to be physically consistent. All basic definitions are reviewed in the time domain, Laplace domain, and frequency domain, and all significant interrelations between these properties are outlined. This background material is used to interpret several common situations where either model derivation or model use in a computer-aided design environment fails dramatically. We show that the root cause of these difficulties can always be traced back to the lack of stability, causality, or passivity in the data providing the structure characterization and/or in the model itself.
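
    As a small illustration of one of the properties discussed here (not the paper's algorithms), the sketch below flags frequency samples of tabulated scattering parameters whose largest singular value exceeds one. This is a necessary per-sample passivity condition only; behaviour between samples is not checked. Array shape, tolerance and function name are assumptions.

```python
import numpy as np

def passivity_violations(S, tol=1e-6):
    """Indices of frequency samples where scattering data is not passive.

    S : array of shape (n_freq, n_ports, n_ports) of sampled S-parameters.
    A sample can be passive only if every singular value of S(f_k) is <= 1,
    i.e. no frequency point reflects or transmits more energy than it receives.
    """
    max_sv = np.linalg.norm(S, ord=2, axis=(1, 2))   # largest singular value per sample
    return np.flatnonzero(max_sv > 1.0 + tol)

# Example: a 2-port data set where the second sample is slightly non-passive.
S = np.array([
    [[0.2, 0.5], [0.5, 0.2]],
    [[0.9, 0.6], [0.6, 0.9]],
])
print(passivity_violations(S))   # expected output: [1]
```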

    On interoperability and conformance assessment in service composition

    The process of composing a service from other services typically involves multiple models. These models may represent the service from distinct perspectives, e.g., to model the different roles of systems involved in the service, and at distinct abstraction levels, e.g., to model the service’s capability, interface or the orchestration that implements the service. The consistency among these models needs to be maintained in order to guarantee the correctness of the composition process. Two types of consistency relations are distinguished: interoperability, which concerns the ability of different roles to interoperate, and conformance, which concerns the correct implementation of an abstract model by a more concrete model. This paper discusses the need for and use of techniques to assess interoperability and conformance in a service composition process. The paper shows how these consistency relations can be described and analysed using concepts from the COSMO framework. Examples are presented to illustrate how interoperability and conformance can be assessed.
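
    As a very small illustration of one possible reading of conformance (not the COSMO definitions), the sketch below checks bounded-depth trace inclusion between a concrete orchestration and an abstract service model: every behaviour of the concrete model must be allowed by the abstract one. The labelled-transition-system encoding and the state and action names are assumptions.

```python
def traces(lts, state, depth):
    """All action sequences of length <= depth from `state` in an LTS given as
    {state: [(action, next_state), ...]}."""
    result = {()}
    if depth == 0:
        return result
    for action, nxt in lts.get(state, []):
        result |= {(action,) + t for t in traces(lts, nxt, depth - 1)}
    return result

def conforms(concrete, abstract, init_c, init_a, depth=5):
    """Bounded-depth trace inclusion: concrete behaviours allowed by the abstract model."""
    return traces(concrete, init_c, depth) <= traces(abstract, init_a, depth)

# Abstract service: a request followed by a response. Concrete orchestration:
# the same observable behaviour, modelled with its own state names.
abstract = {"a0": [("request", "a1")], "a1": [("response", "a0")]}
concrete = {"c0": [("request", "c1")], "c1": [("response", "c0")]}
print(conforms(concrete, abstract, "c0", "a0"))   # True
```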
