
    Learning causal models that make correct manipulation predictions with time series data

    One of the fundamental purposes of causal models is using them to predict the effects of manipulating various components of a system. It has been argued by Dash (2005, 2003) that the Do operator will fail when applied to an equilibrium model, unless the underlying dynamic system obeys what he calls Equilibration-Manipulation Commutability (EMC). Unfortunately, this fact renders most existing causal discovery algorithms unreliable for reasoning about manipulations. Motivated by this caveat, in this paper we present a novel approach to causal discovery of dynamic models from time series. The approach uses a representation of dynamic causal models motivated by Iwasaki and Simon (1994), which asserts that all "causation across time" occurs because a variable's derivative has been affected instantaneously. We present an algorithm that exploits this representation within a constraint-based learning framework by numerically calculating derivatives and learning instantaneous relationships. We argue that, due to numerical errors in higher-order derivatives, care must be taken when learning causal structure, but we show that the Iwasaki-Simon representation reduces the search space considerably, allowing us to forgo calculating many high-order derivatives. In order for our algorithm to discover the dynamic model, it is necessary that the time-scale of the data is much finer than any temporal process of the system. Finally, we show that our approach can correctly recover the structure of a fairly complex dynamic system, and can predict the effect of manipulations accurately when a manipulation does not cause an instability. To our knowledge, this is the first causal discovery algorithm that has been demonstrated to correctly predict the effects of manipulations for a system that does not obey the EMC condition.
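
    As a rough illustration of the two ingredients described above, numerical differentiation of the time series and constraint-based testing of instantaneous relationships, the sketch below estimates derivatives by finite differences and uses partial correlation as a conditional-independence test. The function names and the choice of test are illustrative assumptions, not the paper's algorithm.

```python
import numpy as np

def estimate_derivatives(x, dt, order=1):
    """Finite-difference estimates of the time derivatives of a sampled
    signal x, up to the given order. Higher orders amplify sampling noise,
    which is the numerical difficulty the abstract points to."""
    series = [np.asarray(x, dtype=float)]
    for _ in range(order):
        series.append(np.gradient(series[-1], dt))
    return series  # [x, dx/dt, ..., d^order x / dt^order]

def partial_correlation(data, i, j, conditioning=()):
    """Partial correlation of columns i and j of `data` given the columns in
    `conditioning`, computed from linear-regression residuals; used here as
    a simple conditional-independence test."""
    def residual(k):
        y = data[:, k]
        if not conditioning:
            return y - y.mean()
        Z = np.column_stack([data[:, c] for c in conditioning] + [np.ones(len(y))])
        beta, *_ = np.linalg.lstsq(Z, y, rcond=None)
        return y - Z @ beta
    return np.corrcoef(residual(i), residual(j))[0, 1]
```

    In a PC-style search, tests of this kind would be run over a data matrix whose columns are the original variables and their estimated derivatives sampled at the same instants, so that only instantaneous relationships need to be learned.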

    Monitoring for a decidable fragment of MTL-∫

    Temporal logics targeting real-time systems are traditionally undecidable. Based on a restricted fragment of MTL-∫, we propose a new approach for the runtime verification of hard real-time systems. The novelty of our technique is that it is based on incremental evaluation, allowing us to effectively treat duration properties (which play a crucial role in real-time systems). We describe the two levels of operation of our approach: offline simplification by quantifier removal techniques; and online evaluation of a three-valued interpretation for formulas of our fragment. Our experiments show the applicability of this mechanism as well as the validity of the provided complexity results.
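
    A minimal sketch of the two ideas emphasized above, incremental evaluation and a three-valued verdict for a duration property, is given below. The monitor and the property it checks are illustrative assumptions, not the MTL-∫ fragment or algorithm defined in the paper.

```python
from enum import Enum

class Verdict(Enum):
    TRUE = "true"
    FALSE = "false"
    UNKNOWN = "unknown"  # not enough of the trace has been observed yet

class DurationMonitor:
    """Incremental three-valued monitor for a single duration property:
    'the accumulated time in which p holds during the first `horizon`
    seconds of the run is at most `bound`'.  This only illustrates
    incremental, three-valued evaluation of a duration constraint."""

    def __init__(self, horizon, bound):
        self.horizon = horizon
        self.bound = bound
        self.elapsed = 0.0      # total observed time so far
        self.p_duration = 0.0   # accumulated time in which p held

    def step(self, dt, p_holds):
        """Consume the next sample: p kept the value `p_holds` for `dt` seconds."""
        inside = min(dt, max(0.0, self.horizon - self.elapsed))
        self.elapsed += dt
        if p_holds:
            self.p_duration += inside
        if self.p_duration > self.bound:
            return Verdict.FALSE        # violation is already unavoidable
        if self.elapsed >= self.horizon:
            return Verdict.TRUE         # horizon covered without violation
        return Verdict.UNKNOWN          # verdict still depends on the future

monitor = DurationMonitor(horizon=10.0, bound=3.0)
print(monitor.step(2.0, p_holds=True))  # UNKNOWN
print(monitor.step(2.0, p_holds=True))  # FALSE: accumulated duration 4.0 > 3.0
```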

    Brightness of synchrotron radiation from wigglers

    According to the literature, while calculating the brightness of synchrotron radiation from wigglers, one needs to account for the so-called 'depth-of-field' effects. In fact, the particle beam cross section varies along the wiggler. It is usually stated that the effective photon source size increases accordingly, while the brightness is reduced. Here we claim that this is a misconception originating from an analysis of the wiggler source based on geometrical arguments, regarded as almost self-evident. According to electrodynamics, depth-of-field effects do not exist: we demonstrate this statement both theoretically and numerically, using a well-known first-principle computer code. This fact shows that under the usually accepted approximations, the description of the wiggler brightness turns out to be inconsistent even qualitatively. Therefore, there is a need for a well-defined procedure for computing the brightness from a wiggler source. We accomplish this task based on the use of a Wigner function formalism. In the geometrical optics limit, computations can be performed analytically. Within this limit, we restrict ourselves to the case of the beam size-dominated regime, which is typical for synchrotron radiation facilities in the X-ray wavelength range. We give a direct demonstration of the fact that the apparent horizontal source size is broadened in proportion to the beamline opening angle and to the length of the wiggler. While this effect is well understood, a direct proof appears not to have been given elsewhere. We consider the problem of calculating the wiggler source size by means of numerical simulations alone, which play the same role as an experiment. We report a significant numerical disagreement between exact calculations and approximations currently used in the literature.
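
    The broadening statement above can be written schematically as follows; the quadrature combination and the order-one constant k are illustrative assumptions of this sketch, not expressions quoted from the paper.

```latex
% Schematic restatement of the claim: the apparent horizontal source size
% grows linearly with the beamline opening angle \theta_a and the wiggler
% length L; \sigma_x is the electron-beam horizontal size. The quadrature
% combination and the O(1) constant k are assumptions of this sketch.
\Sigma_x \;\simeq\; \sqrt{\sigma_x^{2} + k\,\left(\theta_a L\right)^{2}},
\qquad k = \mathcal{O}(1).
```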

    What use are formal design and analysis methods to telecommunications services?

    Have formal methods failed, or will they fail, to help us solve the problems of detecting and resolving feature interactions in telecommunications software? This paper contains a SWOT (Strengths, Weaknesses, Opportunities and Threats) analysis of the use of formal design and analysis methods in feature interaction analysis and makes some suggestions for future research.

    A Formal Framework for Concrete Reputation Systems

    In a reputation-based trust-management system, agents maintain information about the past behaviour of other agents. This information is used to guide future trust-based decisions about interaction. However, while trust management is a component in security decision-making, many existing reputation-based trust-management systems provide no formal security guarantees. In this extended abstract, we describe a mathematical framework for a class of simple reputation-based systems. In these systems, decisions about interaction are taken based on policies that are exact requirements on agents' past histories. We present a basic declarative language, based on pure-past linear temporal logic, intended for writing simple policies. While the basic language is reasonably expressive (encoding, e.g., Chinese Wall policies), we show how one can extend it with quantification and parameterized events. This allows us to encode other policies known from the literature, e.g., 'one-out-of-k'. The problem of checking a history with respect to a policy is efficient for the basic language, and tractable for the quantified language when policies do not have too many variables.
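
    To make the policy-checking problem concrete, the sketch below evaluates a small pure-past LTL fragment (Yesterday, Once, Historically, Since) over a finite interaction history. The tuple-based syntax and the sample "never defaulted, paid at least once" policy are illustrative assumptions, not the paper's language.

```python
def holds(formula, history, i=None):
    """Evaluate a pure-past LTL formula at position i of a finite history
    (i defaults to the last position).  history[k] is the set of events
    observed at step k; formulas are nested tuples."""
    if i is None:
        i = len(history) - 1
    op = formula[0]
    if op == "ap":                       # atomic event
        return formula[1] in history[i]
    if op == "not":
        return not holds(formula[1], history, i)
    if op == "and":
        return holds(formula[1], history, i) and holds(formula[2], history, i)
    if op == "or":
        return holds(formula[1], history, i) or holds(formula[2], history, i)
    if op == "yesterday":                # Y f: f held at the previous step
        return i > 0 and holds(formula[1], history, i - 1)
    if op == "once":                     # O f: f held at some step so far
        return any(holds(formula[1], history, k) for k in range(i + 1))
    if op == "historically":             # H f: f held at every step so far
        return all(holds(formula[1], history, k) for k in range(i + 1))
    if op == "since":                    # f S g: g held at some past step and
        return any(                      # f has held at every step after it
            holds(formula[2], history, k)
            and all(holds(formula[1], history, m) for m in range(k + 1, i + 1))
            for k in range(i + 1)
        )
    raise ValueError(f"unknown operator {op!r}")

# Hypothetical policy: only interact with agents that have never defaulted
# and have made at least one payment.
policy = ("and", ("historically", ("not", ("ap", "default"))),
                 ("once", ("ap", "payment")))
history = [{"contract"}, {"payment"}, {"payment"}]
assert holds(policy, history)
```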

    Computer-aided verification in mechanism design

    In mechanism design, the gold standard solution concepts are dominant strategy incentive compatibility and Bayesian incentive compatibility. These solution concepts relieve the (possibly unsophisticated) bidders of the need to engage in complicated strategizing. While incentive properties are simple to state, their proofs are specific to the mechanism and can be quite complex. This raises two concerns. From a practical perspective, checking a complex proof can be a tedious process, often requiring experts knowledgeable in mechanism design. Furthermore, from a modeling perspective, if unsophisticated agents are unconvinced of incentive properties, they may strategize in unpredictable ways. To address both concerns, we explore techniques from computer-aided verification to construct formal proofs of incentive properties. Because formal proofs can be automatically checked, agents do not need to manually check the properties, or even understand the proof. To demonstrate, we present the verification of a sophisticated mechanism: the generic reduction from Bayesian incentive compatible mechanism design to algorithm design given by Hartline, Kleinberg, and Malekian. This mechanism presents new challenges for formal verification, including essential use of randomness both from the execution of the mechanism and from the prior type distributions. As an immediate consequence, our work also formalizes Bayesian incentive compatibility for the entire family of mechanisms derived via this reduction. Finally, as an intermediate step in our formalization, we provide the first formal verification of incentive compatibility for the celebrated Vickrey-Clarke-Groves mechanism.
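
    The Vickrey-Clarke-Groves case mentioned last can be made concrete with a small sanity check rather than a formal proof: the sketch below exhaustively verifies, on a finite grid of values and bids, that no unilateral deviation from truthful bidding improves a bidder's utility in a single-item second-price (Vickrey) auction. The grid, tie-breaking rule, and function names are assumptions for illustration; the paper's contribution is machine-checked proofs, not enumeration.

```python
from itertools import product

def second_price_outcome(bids):
    """Single-item Vickrey auction: the highest bidder wins and pays the
    highest of the other bids (ties broken in favour of the lowest index)."""
    winner = max(range(len(bids)), key=lambda i: (bids[i], -i))
    price = max(b for i, b in enumerate(bids) if i != winner)
    return winner, price

def utility(value, bids, bidder):
    winner, price = second_price_outcome(bids)
    return value - price if winner == bidder else 0.0

def truthful_is_dominant(grid, n_bidders=3):
    """Exhaustively check, on a finite grid of values and bids, that no
    unilateral deviation from truthful bidding ever increases bidder 0's
    utility.  A finite sanity check, not a formal proof."""
    for value in grid:
        for others in product(grid, repeat=n_bidders - 1):
            truthful = utility(value, [value, *others], 0)
            for deviation in grid:
                if utility(value, [deviation, *others], 0) > truthful + 1e-12:
                    return False
    return True

assert truthful_is_dominant(grid=[0.0, 0.5, 1.0, 1.5, 2.0])
```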

    NASA space station automation: AI-based technology review

    Research and Development projects in automation for the Space Station are discussed. Artificial Intelligence (AI)-based automation technologies are planned to enhance crew safety through reduced need for EVA, increase crew productivity through the reduction of routine operations, increase space station autonomy, and augment space station capability through the use of teleoperation and robotics. AI technology will also be developed for the servicing of satellites at the Space Station, system monitoring and diagnosis, space manufacturing, and the assembly of large space structures.

    Formal methods for a system of systems analysis framework applied to traffic management

    Formal methods for systems and system of systems engineering (SoSE) can bring precision to architecting and design, and increased trustworthiness in verification; but they require the use of formal languages that are not broadly comprehensible to the various stakeholders. The evolution of Model Based Systems Engineering (MBSE) using the Systems Modeling Language (SysML) lies in a middle ground between legacy document-based SoSE and formal methods. SysML is a graphical language but not a formal language. Initiatives in the Object Management Group (OMG), such as the development of the Foundational Unified Modeling Language (fUML), seek to bring precise semantics to object-oriented modeling languages. Following the philosophy of fUML, we offer a framework for associating precise semantics with Unified Modeling Language (UML) and SysML models essential for SoSE architecting and design. Straightforward methods are prescribed to develop the essential models and to create semantic transformations between them. Matrix representations can be used to perform analyses that are concordant with the system of UML or SysML models that represent the system or SoS. The framework and methods developed in this paper are applied to a Traffic Management system of systems (TMSoS) that has been a subject of research presented at previous IEEE SoSE conferences.
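
    As one hedged illustration of the matrix-based analyses mentioned above, the sketch below builds a Boolean dependency matrix from a handful of hypothetical traffic-management model elements and computes reachability by transitive closure. The element names and the "depends on" relation are assumptions, not the paper's semantic transformations.

```python
import numpy as np

def dependency_matrix(elements, relations):
    """Boolean adjacency matrix for a 'depends on' relation between model
    elements (e.g. SysML blocks); `relations` is a list of (source, target)."""
    index = {name: k for k, name in enumerate(elements)}
    m = np.zeros((len(elements), len(elements)), dtype=bool)
    for src, dst in relations:
        m[index[src], index[dst]] = True
    return m

def reachability(adjacency):
    """Transitive closure by repeated Boolean matrix multiplication:
    reach[i, j] is True when element j is reachable from element i."""
    reach = adjacency.copy()
    for _ in range(adjacency.shape[0]):
        step = (reach.astype(int) @ adjacency.astype(int)) > 0
        updated = reach | step
        if (updated == reach).all():
            break
        reach = updated
    return reach

# Hypothetical fragment of a traffic-management SoS model.
elements = ["RoadsideSensor", "TrafficController", "SignalHead", "OperatorConsole"]
relations = [("TrafficController", "RoadsideSensor"),
             ("TrafficController", "SignalHead"),
             ("OperatorConsole", "TrafficController")]
reach = reachability(dependency_matrix(elements, relations))
# The console transitively depends on the signal head through the controller.
assert reach[elements.index("OperatorConsole"), elements.index("SignalHead")]
```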

    Technology assessment of advanced automation for space missions

    Six general classes of technology requirements derived during the mission definition phase of the study were identified as having maximum importance and urgency, including autonomous world-model-based information systems, learning and hypothesis formation, natural language and other man-machine communication, space manufacturing, teleoperators and robot systems, and computer science and technology.