
    Issues in integrating existing multi-agent systems for power engineering applications

    Multi-agent systems (MAS) have proven to be an effective platform for diagnostic and condition monitoring applications in the power industry. For example, a multi-agent system architecture, entitled condition monitoring multi-agent system (COMMAS) (McArthur et al., 2004), has been applied to the ultra high frequency (UHF) monitoring of partial discharge activity inside transformers. Additionally, a multi-agent system, entitled protection engineering diagnostic agents (PEDA) (Hossack et al., 2003), has demonstrated the use of MAS technology for automated and enhanced post-fault analysis of power system disturbances based on SCADA and digital fault recorder (DFR) data. In this paper, the authors propose the integration of COMMAS and PEDA as a means of offering enhanced decision support to engineers tasked with managing transformer assets. By providing automatically interpreted data related to condition monitoring and power system disturbances, the proposed integrated system offers engineers a more comprehensive picture of the health of a given transformer. Defects and deterioration in performance can be correlated with the operating conditions the transformer experiences. The integration of COMMAS and PEDA has highlighted the issues inherent in the inter-operation of existing multi-agent systems and, in particular, the issues surrounding the use of differing ontologies. The authors believe that these issues need to be addressed if there is to be widespread deployment of MAS technology within the power industry. This paper presents research undertaken to integrate the two multi-agent systems and to address the ontology issues involved.
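    The ontology mismatch described above can be illustrated with a small sketch. The message shape, field names, and mapping table below are hypothetical and are not taken from COMMAS or PEDA; they only show the kind of translation layer that integrating two independently built agent systems tends to require.

        # Hypothetical sketch of an ontology translation layer between two
        # multi-agent systems that name the same concepts differently.
        # None of these field names come from COMMAS or PEDA.

        FIELD_MAP = {            # source field      -> target field
            "pd_activity": "partial_discharge_level",
            "asset_id":    "transformer_id",
            "ts":          "timestamp",
        }

        def translate(message: dict, field_map: dict = FIELD_MAP) -> dict:
            """Rename fields of an agent message so a second MAS can consume it.

            Fields with no mapping are passed through unchanged, which is one of
            the places where silent semantic mismatches can creep in.
            """
            return {field_map.get(k, k): v for k, v in message.items()}

        if __name__ == "__main__":
            monitoring_msg = {"asset_id": "TX-17", "pd_activity": 0.82, "ts": 1718000000}
            print(translate(monitoring_msg))
            # {'transformer_id': 'TX-17', 'partial_discharge_level': 0.82, 'timestamp': 1718000000}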

    UNITY and Büchi automata

    UNITY is a model for concurrent specifications with a complete logic for proving progress properties of the form "P leads to Q". UNITY is generalized to U-specifications by giving more freedom to specify the steps that are to be taken infinitely often. In particular, these steps can correspond to non-total relations. The generalization keeps the logic sound and complete. The paper exploits the generalization in two ways. Firstly, the logic remains sound when the specification is extended with hypotheses of the form "F leads to G". As the paper shows, this can make the logic incomplete. The generalization is used to show that the logic remains complete if the added hypotheses "F leads to G" satisfy "F unless G". The main result extends the applicability and completeness of UNITY logic to proofs that a given concurrent program satisfies any given formula of LTL, linear temporal logic, without the next-operator, which is omitted because it is sensitive to stuttering. For this purpose, the program, written as a UNITY program, is extended with a number of boolean variables. The proof method relies on implementing the LTL formula, i.e., restricting the specification in such a way that only those runs remain that satisfy the formula. This result is a variation of the classical construction of a Büchi automaton for a given LTL formula that accepts precisely those runs that satisfy the formula.
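    For readers unfamiliar with the UNITY operators mentioned above, the following is a brief sketch of the standard Chandy-Misra definitions of unless, ensures, and leads-to; the paper's generalized U-specifications relax the fairness assumption behind "ensures", so this is background rather than the paper's own formulation.

        % Standard UNITY operators (background sketch, not the paper's generalization).
        \begin{align*}
          p~\mathbf{unless}~q  &\;\equiv\; \forall s:\ \{\,p \land \lnot q\,\}\; s\; \{\,p \lor q\,\}\\
          p~\mathbf{ensures}~q &\;\equiv\; (p~\mathbf{unless}~q) \;\land\; \exists s:\ \{\,p \land \lnot q\,\}\; s\; \{\,q\,\}
        \end{align*}
        % "p leads to q" (written p \mapsto q) is the least relation closed under:
        \begin{align*}
          p~\mathbf{ensures}~q &\;\Rightarrow\; p \mapsto q\\
          p \mapsto r \;\land\; r \mapsto q &\;\Rightarrow\; p \mapsto q\\
          \bigl(\forall i \in I:\ p_i \mapsto q\bigr) &\;\Rightarrow\; \bigl(\exists i \in I:\ p_i\bigr) \mapsto q
        \end{align*}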

    A Formal Approach to Combining Prospective and Retrospective Security

    The major goal of this dissertation is to enhance software security by provably correct enforcement of in-depth policies. In-depth security policies refer to heterogeneous specifications of security measures that must be followed before and after sensitive operations. Prospective security is the enforcement of security, or detection of security violations, before the execution of sensitive operations, e.g., in authorization, authentication, and information flow. Retrospective security refers to security checks after the execution of sensitive operations, which is accomplished through accountability and deterrence. Retrospective security frameworks are built upon auditing in order to provide sufficient evidence to hold users accountable for their actions and potentially support other remediation actions. Correctness and efficiency of audit logs play significant roles in reaching the accountability goals that are required by retrospective and, consequently, in-depth security policies. This dissertation addresses correct audit logging in a formal framework. Leveraging retrospective controls alongside existing prospective measures enhances security in numerous applications. This dissertation focuses on two major application spaces for in-depth enforcement. The first is to enhance prospective security through surveillance and accountability. For example, authorization mechanisms could be improved by guaranteed retrospective checks in environments where there is a high cost of access denial, e.g., healthcare systems. The second application space is the amelioration of potentially flawed prospective measures through retrospective checks. For instance, erroneous implementations of input sanitization methods expose vulnerabilities in taint analysis tools that enforce direct flow of data integrity policies. In this regard, we propose an in-depth enforcement framework to mitigate such problems. We also propose a general semantic notion of explicit flow of information integrity in a high-level language with sanitization. This dissertation studies the ways in which prospective and retrospective security can be enforced uniformly in a provably correct manner to handle security challenges in legacy systems. Provable correctness of our results relies on the formal programming-languages-based approach that we have taken in order to provide software security assurance. Moreover, this dissertation includes the implementation of such in-depth enforcement mechanisms for a medical records web application.
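    As a rough illustration of the prospective/retrospective split described above, the sketch below wraps a sensitive operation in an authorization check (prospective) and an audit-log entry (retrospective). The function names, policy, and log format are invented for illustration and are not taken from the dissertation's framework; in a break-glass setting with a high cost of denial, the operation might instead be allowed to proceed and only flagged for later review.

        import json, time

        AUDIT_LOG = "audit.log"          # hypothetical append-only log

        def authorized(user: str, action: str) -> bool:
            """Prospective check: decide before the sensitive operation runs."""
            return action != "delete_record" or user == "admin"

        def audit(event: dict) -> None:
            """Retrospective support: record enough evidence to hold users accountable."""
            with open(AUDIT_LOG, "a") as f:
                f.write(json.dumps({**event, "ts": time.time()}) + "\n")

        def sensitive_operation(user: str, action: str, record_id: int) -> bool:
            if not authorized(user, action):              # prospective enforcement
                audit({"user": user, "action": action, "record": record_id, "allowed": False})
                return False
            # ... perform the operation on record_id ...
            audit({"user": user, "action": action, "record": record_id, "allowed": True})
            return True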

    A Logical Interpretation of Powerdomains

    This paper characterizes the powerdomain constructions which have been used in the semantics of programming languages in terms of formulas of first order logic under a pre-ordering of provable implication. The goal is to reveal the basic logical significance of the powerdomains by casting them in the right setting. Such a treatment may contribute to a better understanding of their potential uses in areas which deal with concepts of sets and partial information, such as databases and artificial intelligence. Extended examples relating powerdomains to databases are provided. A new powerdomain is introduced and discussed in comparison with a similar operator from database theory. The new powerdomain is motivated by the logical characterizations of the three well-known powerdomains and is itself characterized by formulas of first order logic.
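    The logical reading of the three classical powerdomains that this line of work builds on can be sketched informally as follows; this is the standard may/must intuition, not necessarily the paper's exact first-order formulation.

        % Informal sketch of the usual may/must reading of the classical powerdomains;
        % the paper itself works with first-order formulas under provable implication.
        % For a set S of possible outcomes and a property \varphi:
        \begin{align*}
          \text{Hoare (lower):}    &\quad \Diamond\varphi \;\equiv\; \exists x \in S.\ \varphi(x)
              \quad\text{(some outcome may satisfy } \varphi\text{)}\\
          \text{Smyth (upper):}    &\quad \Box\varphi \;\equiv\; \forall x \in S.\ \varphi(x)
              \quad\text{(every outcome must satisfy } \varphi\text{)}\\
          \text{Plotkin (convex):} &\quad \text{descriptions built from both } \Diamond\varphi \text{ and } \Box\varphi
        \end{align*}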

    A Survey of Symbolic Execution Techniques

    Many security and software testing applications require checking whether certain properties of a program hold for any possible usage scenario. For instance, a tool for identifying software vulnerabilities may need to rule out the existence of any backdoor to bypass a program's authentication. One approach would be to test the program using different, possibly random inputs. As the backdoor may only be hit for very specific program workloads, automated exploration of the space of possible inputs is of the essence. Symbolic execution provides an elegant solution to the problem, by systematically exploring many possible execution paths at the same time without necessarily requiring concrete inputs. Rather than taking on fully specified input values, the technique abstractly represents them as symbols, resorting to constraint solvers to construct actual instances that would cause property violations. Symbolic execution has been incubated in dozens of tools developed over the last four decades, leading to major practical breakthroughs in a number of prominent software reliability applications. The goal of this survey is to provide an overview of the main ideas, challenges, and solutions developed in the area, distilling them for a broad audience. The present survey has been accepted for publication at ACM Computing Surveys. This is the authors' pre-print copy; if you are considering citing this survey, the authors ask that you use the BibTeX entry at http://goo.gl/Hf5Fvc.
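    To make the idea concrete, here is a minimal sketch of what a symbolic executor does for one path of a toy program, using the Z3 solver's Python bindings; real tools automate path enumeration and handle loops, memory, and environment modeling, none of which this example does. The toy "backdoor" condition is invented for illustration.

        # Minimal sketch: treat the input as a symbol, follow one branch of a toy
        # program, and ask a constraint solver for a concrete input that reaches
        # the hidden branch. Requires the z3-solver package.
        from z3 import Int, Solver, sat

        def check_path_to_backdoor():
            x = Int("x")                  # symbolic input instead of a concrete value
            s = Solver()
            # Path condition for the branch `if x * 7 - 3 == 46: backdoor()`
            # in a hypothetical program under test.
            s.add(x * 7 - 3 == 46)
            if s.check() == sat:
                model = s.model()
                print("backdoor reachable with input:", model[x])   # -> 7
            else:
                print("branch unreachable on this path")

        check_path_to_backdoor()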

    The Mixed Powerdomain

    This paper characterizes the powerdomain constructions which have been used in the semantics of programming languages in terms of formulas of first order logic under a preordering of provable implication. The goal is to reveal the basic logical significance of the powerdomain elements by casting them in the right setting. Such a treatment may contribute to a better understanding of their potential uses in areas which deal with concepts of sets and partial information, such as databases and computational linguistics. This way of viewing powerdomain elements suggests a new form of powerdomain - called the mixed powerdomain - which expresses data in a different way from the well-known constructions from programming semantics. It is shown that the mixed powerdomain has many of the properties associated with the convex powerdomain, such as the possibility of solving recursive equations and a simple algebraic characterization.
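    One place where the ability to solve recursive equations matters is in modelling nondeterministic processes; the equation below is a generic textbook-style example of the kind of domain equation a powerdomain operator M(-) must support, not one taken from the paper.

        % Generic example of a recursive domain equation solved with a powerdomain M(-)
        % (illustrative only; not from the paper): a nondeterministic process chooses
        % among sets of (action, continuation) pairs.
        \[
          D \;\cong\; M(A \times D)
        \]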

    Computer Aided Verification

    This open access two-volume set LNCS 10980 and 10981 constitutes the refereed proceedings of the 30th International Conference on Computer Aided Verification, CAV 2018, held in Oxford, UK, in July 2018. The 52 full and 13 tool papers presented together with 3 invited papers and 2 tutorials were carefully reviewed and selected from 215 submissions. The papers cover a wide range of topics and techniques, from algorithmic and logical foundations of verification to practical applications in distributed, networked, cyber-physical, and autonomous systems. They are organized in topical sections on model checking, program analysis using polyhedra, synthesis, learning, runtime verification, hybrid and timed systems, tools, probabilistic systems, static analysis, theory and security, SAT, SMT and decision procedures, concurrency, and CPS, hardware, and industrial applications.

    Graph-based broad-coverage semantic parsing

    Many broad-coverage meaning representations can be characterized as directed graphs, where nodes represent semantic concepts and directed edges represent semantic relations among the concepts. The task of semantic parsing is to generate such a meaning representation from a sentence. It is quite natural to adopt a graph-based approach for parsing, where nodes are identified conditioned on the individual words, and edges are labeled conditioned on pairs of nodes. However, there are two issues with applying this simple and interpretable graph-based approach to semantic parsing: first, the anchoring of nodes to words can be implicit and non-injective in several formalisms (Oepen et al., 2019, 2020). This means we do not know which nodes should be generated from which individual word, or how many of them there are, which makes a probabilistic formulation of the training objective problematic. Second, graph-based parsers typically predict edge labels independently of each other. Such an independence assumption, while sensible from an algorithmic point of view, could limit the expressiveness of statistical modeling and might fail to capture the true distribution of semantic graphs. In this thesis, instead of a pipeline approach to obtain the anchoring, we propose to model the implicit anchoring as a latent variable in a probabilistic model. We induce such a latent variable jointly with the graph-based parser in end-to-end differentiable training. In particular, we test our method on Abstract Meaning Representation (AMR) parsing (Banarescu et al., 2013). AMR represents sentence meaning with a directed acyclic graph, where the anchoring of nodes to words is implicit and can be many-to-one. Initially, we propose a rule-based system that circumvents the many-to-one anchoring by combining nodes in some pre-specified subgraphs of AMR and treating the alignment as a latent variable. Next, we remove the need for such a rule-based system by treating both graph segmentation and alignment as latent variables. Still, our graph-based parsers are parameterized by neural modules that require gradient-based optimization, so training them with discrete latent variables can be challenging. By combining deep variational inference and differentiable sampling, our models can be trained end-to-end. To overcome the limitation of graph-based parsing and capture interdependency in the output, we further adopt iterative refinement. Starting with an output whose parts are independently predicted, we iteratively refine it conditioning on the previous prediction. We test this method on semantic role labeling (Gildea and Jurafsky, 2000), the task of predicting predicate-argument structure. In particular, semantic roles between the predicate and its arguments need to be labeled, and those semantic roles are interdependent. Overall, our refinement strategy results in an effective model, outperforming strong factorized baseline models.
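    The iterative-refinement idea in the last part of the abstract can be sketched as follows; the module below is a simplified stand-in with invented layer sizes and names, not the thesis's actual architecture. The first pass scores labels independently (the zero label context), and each later pass conditions on the previous round's predictions.

        import torch
        import torch.nn as nn

        class IterativeRefiner(nn.Module):
            """Toy stand-in for iterative refinement of semantic role labels."""
            def __init__(self, hidden_dim: int, num_labels: int, num_iters: int = 3):
                super().__init__()
                self.num_iters = num_iters
                self.label_emb = nn.Embedding(num_labels, hidden_dim)
                self.scorer = nn.Linear(2 * hidden_dim, num_labels)

            def forward(self, pair_repr: torch.Tensor) -> torch.Tensor:
                # pair_repr: (batch, num_args, hidden_dim) predicate-argument pair encodings
                prev = torch.zeros_like(pair_repr)        # no label information at step 0
                logits = None
                for _ in range(self.num_iters):
                    # score labels conditioning on the previous round's predictions
                    logits = self.scorer(torch.cat([pair_repr, prev], dim=-1))
                    prev = self.label_emb(logits.argmax(dim=-1))
                return logits

        # usage: refiner = IterativeRefiner(128, 20); out = refiner(torch.randn(2, 5, 128))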