
    Verification in ACL2 of a Generic Framework to Synthesize SAT–Provers

    We present in this paper an application of the ACL2 system to reason about propositional satisfiability provers. For that purpose, we present a framework in which we define a generic transformation-based SAT-prover, and we show how this generic framework can be formalized in the ACL2 logic, giving a formal proof of its termination, soundness, and completeness. This generic framework can be instantiated to obtain a number of verified and executable SAT-provers in ACL2, and this can be done in an automated way. Three case studies are considered: the semantic tableaux, sequent, and Davis–Putnam methods. Ministerio de Ciencia y Tecnología TIC2000-1368-C03-0

    Formal verification of a generic framework to synthesize SAT-provers

    We present in this paper an application of the ACL2 system to generate and reason about propositional satisfiability provers. For that purpose, we develop a framework where we define a generic SAT-prover based on transformation rules, and we formalize this generic framework in the ACL2 logic, carrying out a formal proof of its termination, soundness and completeness. This generic framework can be instantiated to obtain a number of verified and executable SAT-provers in ACL2, and this can be done in an automated way. Three instantiations of the generic framework are considered: the semantic tableaux, sequent, and Davis–Putnam–Logemann–Loveland methods. Ministerio de Ciencia y Tecnología TIC2000-1368-C03-0
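    A transformation-based SAT procedure of the kind described above can be sketched in a few lines. The following is an illustrative DPLL-style implementation, not the ACL2 formalization itself: clauses are frozensets of signed integers, unit propagation is the transformation rule, and branching handles the rest.

    ```python
    # Minimal sketch of a transformation-based SAT procedure in the DPLL style.
    # The clause representation (frozensets of signed ints) is my own choice,
    # not taken from the ACL2 formalization described in the abstract.

    def unit_propagate(clauses):
        """Apply the unit-clause transformation rule until a fixpoint."""
        clauses = set(clauses)
        while True:
            units = [next(iter(c)) for c in clauses if len(c) == 1]
            if not units:
                return clauses
            lit = units[0]
            new = set()
            for c in clauses:
                if lit in c:
                    continue            # clause satisfied: drop it
                new.add(c - {-lit})     # remove the falsified literal
            clauses = new

    def dpll(clauses):
        """Return True iff the clause set is satisfiable."""
        clauses = unit_propagate(clauses)
        if not clauses:
            return True                 # no clauses left: satisfiable
        if frozenset() in clauses:
            return False                # empty clause: contradiction
        lit = next(iter(next(iter(clauses))))   # pick a branching literal
        return (dpll(clauses | {frozenset([lit])}) or
                dpll(clauses | {frozenset([-lit])}))
    ```

    Each recursive call only adds a unit clause and re-applies the propagation rule, which is what makes the termination, soundness, and completeness arguments mentioned in the abstract tractable to state generically.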

    Software Model Checking by Program Specialization

    We introduce a general verification framework based on program specialization to prove properties of the runtime behaviour of imperative programs. Given a program P written in a programming language L and a property phi in a logic M, we can verify that phi holds for P by: (i) writing an interpreter I for L and a semantics S for M in a suitable metalanguage, (ii) specializing I and S with respect to P and phi, and (iii) analysing the specialized program by performing a further specialization. We have instantiated our framework to verify safety properties of a simple imperative language, called SIMP, extended with a nondeterministic choice operator. The method is fully automatic and has been implemented using the MAP transformation system.
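    The core mechanism, specialization (partial evaluation), can be illustrated on a toy scale. The sketch below partially evaluates an expression tree against a partially known environment, computing static subterms and residualizing dynamic ones; the SIMP language and MAP system from the abstract are not modelled here, and the representation (nested tuples) is purely illustrative.

    ```python
    # Toy sketch of specialization by partial evaluation. Expressions are
    # nested tuples ('+'|'*', lhs, rhs), ints, or variable-name strings.
    # This is an illustration of the general technique, not the MAP system.

    def specialize(expr, env):
        """Reduce an expression given a partial environment of known values."""
        if isinstance(expr, int):
            return expr
        if isinstance(expr, str):                 # variable
            return env.get(expr, expr)            # residualize if unknown
        op, a, b = expr
        a, b = specialize(a, env), specialize(b, env)
        if isinstance(a, int) and isinstance(b, int):
            return a + b if op == '+' else a * b  # fully static: compute now
        return (op, a, b)                         # dynamic: residual code
    ```

    Specializing an interpreter with respect to a fixed program, as in step (ii), is this same idea applied one level up: the program text is the static input, and the program's runtime data stays dynamic.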

    EU rural policy: proposal and application of an agricultural sustainability index

    In this paper I propose an Agricultural Sustainability Index (ASI), starting from a ‘political’ perspective: European legislation in the rural sector. I try to answer the following questions. How can we measure sustainability in agriculture? How do we measure the enhancement (if any) of the European policy for sustainability in agriculture? Why do some geographical areas perform better than others? Considering these questions, the paper suggests a model for measuring sustainability in agriculture and an approach for comparing performance among different geographical contexts. The model brings together different dimensions of sustainability in agriculture, combining Geographical Information System (GIS) analysis and Multi-Criteria Analysis (MCA). Using eighteen agricultural indicators divided into three dimensions (social, economic, and environmental), the model incorporates the following stages: (i) specification of the indicators and definition of the decisional framework; (ii) normalisation of the indicators by means of transformation functions based on the fuzzy logic approach; (iii) weighting of the indicators by Analytic Hierarchy Process (AHP) techniques; (iv) aggregation of the indicators to obtain the ASI. The model is tested on a specific area: Alta Val d’Agri, a rural area in the southern Basilicata Region. The final results show that the ASI consistently synthesises the evolution of thirty years of rural development policy. Keywords: agricultural sustainability, indicators, GIS-MCA
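    Stages (ii)–(iv) can be sketched as follows. This is a hedged illustration only: the linear membership function, indicator names, bounds, and weights below are my own placeholders, not the paper's transformation functions or AHP-derived weights.

    ```python
    # Illustrative sketch of fuzzy normalisation, weighting, and aggregation
    # into a composite index. All numbers and names are hypothetical.

    def fuzzy_normalise(value, lo, hi):
        """Map a raw indicator onto [0, 1] with a linear membership function."""
        if value <= lo:
            return 0.0
        if value >= hi:
            return 1.0
        return (value - lo) / (hi - lo)

    def asi(indicators, bounds, weights):
        """Weighted sum of normalised indicators (weights assumed to sum to 1)."""
        return sum(w * fuzzy_normalise(indicators[k], *bounds[k])
                   for k, w in weights.items())
    ```

    In the paper's setting the weights would come from pairwise AHP comparisons and the membership functions would be tuned per indicator; the aggregation step itself is just this weighted sum over the three dimensions.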

    Automatic Transformation-Based Model Checking of Multi-agent Systems

    Multi-agent systems (MASs) are highly useful constructs in the context of real-world software applications. Built upon communication and interaction between autonomous agents, these systems are suitable for modelling and implementing intelligent applications. Yet these desirable features are precisely what makes such systems very challenging to design, and their compliance with requirements extremely difficult to verify. This explains the need for techniques and tools to model, understand, and implement interacting MASs. Among the different methods developed, design-time verification techniques for MASs based on model checking offer the advantage of being formal and fully automated. We can distinguish between two approaches to model checking MASs: the direct verification approach and the transformation-based approach. This thesis focuses on the latter, which relies on formal reduction techniques to transform the problem of model checking a source logic into an equivalent problem of model checking a target logic. In this thesis, we propose a new transformation framework leveraging model checking of the computation tree logic (CTL) and its NuSMV model checker to design and implement the process of transformation-based model checking for CTL-extension logics for MASs. The approach provides an integrated system with a rich set of features, designed to support the transformation process while simplifying the most challenging and error-prone tasks. The thesis presents and describes the tool built upon this framework and its different applications. A performance comparison with MCMAS, a model checker for MASs, is also discussed.
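    The target logic in this approach is CTL, whose model checking reduces to fixpoint computations over a Kripke structure. The sketch below shows two representative operators (EX and EF) over an explicit state graph; it is a textbook-style illustration, not the NuSMV-based pipeline the thesis builds.

    ```python
    # Illustrative explicit-state fixpoint computation for the CTL operators
    # EX and EF; symbolic model checkers such as NuSMV do the same thing
    # over BDD-encoded state sets rather than Python sets.

    def ex(states, trans, phi):
        """EX phi: states with at least one successor satisfying phi."""
        return {s for s in states if any(t in phi for t in trans.get(s, ()))}

    def ef(states, trans, phi):
        """EF phi: least fixpoint of  phi ∪ EX(result)."""
        result = set(phi)
        while True:
            nxt = result | ex(states, trans, result)
            if nxt == result:
                return result
            result = nxt
    ```

    A transformation-based approach for a CTL-extension logic would compile each extended operator down to combinations of such base operators (plus model transformations), so that an off-the-shelf CTL checker can do the fixpoint work.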

    Adaptive Neural Subtractive Clustering Fuzzy Inference System for the Detection of High Impedance Fault on Distribution Power System

    A high impedance fault (HIF) is an abnormal event on an electric power distribution feeder which does not draw enough fault current to be detected by conventional protective devices. An algorithm for HIF detection based on the amplitude ratios of the second and odd harmonics to the fundamental is presented. This paper proposes an intelligent algorithm using an adaptive neural Takagi–Sugeno–Kang (TSK) fuzzy modelling approach based on subtractive clustering to detect high impedance faults. It integrates the learning capabilities of a neural network with the robustness of a fuzzy logic system, in the sense that fuzzy logic concepts are embedded in the network structure. It also provides a natural framework for combining, in a uniform fashion, both numerical information in the form of input/output pairs and linguistic information in the form of IF–THEN rules. The fast Fourier transform (FFT) is used to extract the features of the fault signal and other power system events. The effect of capacitor bank switching, non-linear load currents, no-load line switching, and other normal events on distribution feeder harmonics is discussed. HIF and other operating event data were obtained by simulation of a 13.8 kV distribution feeder using PSCAD. The results show that the proposed algorithm can successfully distinguish HIFs from other events on the distribution power system.
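    The feature-extraction step, estimating the ratios of the second and odd harmonics to the fundamental via the FFT, can be sketched directly. The sampling rate, window length, and choice of odd harmonics (3rd, 5th, 7th) below are illustrative assumptions, not the paper's exact setup.

    ```python
    # Sketch of the harmonic-feature step: amplitude ratios of the 2nd and
    # odd harmonics to the fundamental, estimated with an FFT. The choice
    # of harmonics and the sampling setup are illustrative assumptions.
    import numpy as np

    def harmonic_ratios(signal, fs, f0):
        """Return (2nd/fundamental, sum-of-odd/fundamental) amplitude ratios."""
        spectrum = np.abs(np.fft.rfft(signal))
        freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)

        def amp(f):                      # amplitude at the bin nearest f
            return spectrum[np.argmin(np.abs(freqs - f))]

        fund = amp(f0)
        odd = sum(amp(k * f0) for k in (3, 5, 7))
        return amp(2 * f0) / fund, odd / fund
    ```

    These two ratios would then be fed, together with other features, into the neural-TSK fuzzy classifier; the point of the ratio form is that it is insensitive to the absolute magnitude of the (small) fault current.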

    A Term Rewrite System Framework for Code Carrying Theory

    Security is a fundamental aspect of every architecture based on a number of actors that exchange information, and the growing ubiquity of mobile and distributed systems has accentuated the problem. Mobile code is software that is transferred between systems and executed on a local system without explicit installation by the recipient, even if it is delivered through an insecure network or retrieved from an untrusted source. During delivery, the code may be corrupted, or a malicious cracker could change the code, damaging the entire system. Code-Carrying Theory (CCT) is one of the technologies aimed at solving these problems. The key idea of CCT is based on proof-based program synthesis, where a set of axioms defining functions is provided by the code producer together with suitable proofs guaranteeing that the defined functions obey certain requirements. The form of the function-defining axioms is such that it is easy to extract executable code from them. Thus, all that has to be transmitted from the producer to the consumer is a theory (a set of axioms and theorems) and a set of proofs of the theorems; there is no need to transmit code explicitly. Many transformation systems for program optimization, program synthesis, and program specialization are based on fold/unfold transformations. We discuss a fold/unfold-based transformation framework for rewriting logic theories which is based on narrowing. When performing program transformation, we end up with a final program which is semantically equal to the initial one. We consider possibly non-confluent and non-terminating rewriting logic theories, and rules for transforming these theories (definition introduction, definition elimination, folding, unfolding) that preserve the rewriting logic semantics of the original theory.
The process for obtaining a correct and efficient program can be split into two phases, which may be performed by different actors: in the first phase, an initial, possibly inefficient, program is written whose correctness can easily be shown by hand or by automatic tools; in the second phase, the actor transforms the initial program by applying a certificate (an ordered set of instantiated rules of the framework) to derive a more efficient one. The transformation system extends naturally to CCT:
• The code consumer provides the requirements to the code producer in the form of a rewrite theory.
• The code producer uses the fold/unfold-based transformation system to obtain an efficient implementation of the specified functions. Subsequently, the producer sends only a certificate to be used by the code consumer to derive the program.
• Once the certificate is received, the code consumer can apply the transformation sequence described in the certificate to the initial theory, and the final program is obtained automatically.
The strong correctness of the transformation system ensures that the obtained program is correct w.r.t. the initial consumer specifications, so the code consumer does not need to check extra proofs provided by the code producer. If a correct certificate is applied, it is impossible to reach terms that carry malicious code or improper operations. Although the code producer should ensure the correctness of the certificate, and the fold/unfold transformation framework ensures the correctness and completeness of the final program, there is no guarantee that the requirements of the transformation rules are met: during delivery, the certificate might be corrupted, or a malicious hacker might change the code.
Potential problems can be categorized as security problems (i.e., unauthorized access to data or system resources), safety problems (i.e., illegal operations), or functional incorrectness (i.e., the delivered code fails to satisfy a required relation between its input and output). Without automatic support, it is easy for an expert malicious hacker to intercept the certificate on an insecure network, modify it, and resend it to the code consumer. We have shown that a certificate cannot simply be applied regardless of its contents: we need to check that all the operation descriptions carried over to the system are lawful and that all operations are performed in the correct order. We have implemented, in a prototypical system, the transformation framework extended with the infrastructure for certificate checking, which consists of a suite of tools. The implementation is written in Maude (exploiting its reflection capabilities), Python, and some Bash scripts. A code consumer using our framework can therefore receive, check, and apply a certificate to an initial theory; detect and refuse bad certificates with a detailed report; and thus avoid data corruption or attacks from malicious actors.
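    The checked-application discipline, apply each certificate step only after verifying it names a known rule that actually applies, and reject the whole certificate otherwise, can be shown at toy scale. The rule table and the string-rewriting "theory" below are stand-ins of my own; real fold/unfold steps on rewrite theories carry rule instantiations and side conditions not modelled here.

    ```python
    # Minimal sketch of checked certificate application: a certificate is an
    # ordered list of named rule instances, and the consumer refuses any step
    # that is unknown or does not apply. The toy rule set (string rewriting)
    # stands in for fold/unfold on rewriting logic theories.

    RULES = {
        'unfold_double': ('double(x)', 'x + x'),   # definition unfolding
        'fold_double':   ('x + x', 'double(x)'),   # folding, its inverse
    }

    def apply_certificate(theory, certificate):
        """Apply each step, or raise: a bad certificate is rejected, not run."""
        for rule_name in certificate:
            if rule_name not in RULES:
                raise ValueError(f'unknown rule: {rule_name}')
            lhs, rhs = RULES[rule_name]
            if lhs not in theory:
                raise ValueError(f'rule {rule_name} does not apply')
            theory = theory.replace(lhs, rhs, 1)
        return theory
    ```

    Because every step is validated against the consumer's own rule table before being applied, a corrupted or malicious certificate fails closed with a report instead of producing a tampered program.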

    A Process Modelling Framework Based on Point Interval Temporal Logic with an Application to Modelling Patient Flows

    This thesis considers an application of a temporal theory to describe and model the patient journey in a hospital accident and emergency (A&E) department. The aim is to introduce a generic but dynamic method applicable to any setting, including healthcare. Constructing a consistent process model can be instrumental in streamlining healthcare issues. Current process modelling techniques used in healthcare, such as flowcharts, the unified modelling language activity diagram (UML AD), and business process modelling notation (BPMN), are intuitive but imprecise: they cannot fully capture the complexities of the types of activities or the full extent of the temporal constraints to a degree where one could reason about the flows. Formal approaches such as Petri nets have also been reviewed to investigate their applicability to modelling processes in the healthcare domain. Additionally, current modelling standards offer no formal mechanism for scheduling patient flows, so healthcare relies on the critical path method (CPM) and the program evaluation and review technique (PERT), which also have limitations, e.g. the finish–start barrier. It is imperative to specify the temporal constraints between the start and/or end of a process, e.g., the beginning of a process A precedes the start (or end) of a process B; however, these approaches fail to provide a mechanism for handling such temporal situations. A formal representation can assist in effective knowledge representation and quality enhancement concerning a process. It would also help in uncovering the complexities of a system and assist in modelling it in a consistent way, which is not possible with the existing modelling techniques. The above issues are addressed in this thesis by proposing a framework that provides a knowledge base for modelling patient flows accurately, based on point interval temporal logic (PITL), which treats points and intervals as primitives.
    These objects constitute the knowledge base for the formal description of a system. With the aid of the inference mechanism of the temporal theory presented here, exhaustive temporal constraints derived from the components of the proposed axiomatic system serve as a knowledge base. The proposed methodological framework adopts a model-theoretic approach in which a theory is developed and considered as a model, while the corresponding instance is considered as its application. This approach assists in identifying the core components of the system and their precise operation, representing a real-life domain deemed suitable to the process modelling issues specified in this thesis. Thus, I have evaluated the modelling standards for their most-used terminologies and constructs to identify their key components. This also assists in the generalisation of the critical terms of the process modelling standards based on their ontology. The set of generalised terms proposed serves as an enumeration of the theory and subsumes the core modelling elements of the process modelling standards. The catalogue presents a knowledge base for the business and healthcare domains, and its components are formally defined (semantics). Furthermore, resolution theorem proving is used to show the structural features of the theory (model) and to establish that it is sound and complete. After establishing that the theory is sound and complete, the next step is to provide its instantiation. This is achieved by mapping the core components of the theory to their corresponding instances. Additionally, a formal graphical tool termed the point graph (PG) is used to visualise the cases of the proposed axiomatic system. The PG facilitates the modelling and scheduling of patient flows and enables the analysis of existing models for possible inaccuracies and inconsistencies, supported by a reasoning mechanism based on PITL.
    Following that, a transformation is developed to map the core modelling components of the standards into the extended PG (PG*), based on the semantics presented by the axiomatic system. A real-life case, from the trauma patient pathway of the King’s College Hospital accident and emergency (A&E) department, is considered to validate the framework. It is divided into three patient flows depicting the journey of a patient with significant trauma: arriving at A&E, undergoing a procedure, and subsequently being discharged. The department’s staff relied upon UML AD and BPMN to model the patient flows. An evaluation of their representation is presented to show the shortfalls of these modelling standards for modelling patient flows. The last step is to model these patient flows using the developed approach, which is supported by enhanced reasoning and scheduling.
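    The point-graph idea, processes as pairs of start/end points connected by precedence edges, supports a simple consistency check: a set of temporal constraints is satisfiable only if the resulting precedence graph is acyclic. The sketch below is an illustration of that principle with invented node names; it is not the PG* notation or the full PITL inference mechanism of the thesis.

    ```python
    # Sketch of a point-graph consistency check. Each edge (a, b) asserts
    # that point a precedes point b; a constraint set is consistent only if
    # the precedence graph is acyclic (checked with Kahn's algorithm).
    from collections import defaultdict

    def consistent(edges):
        """Return True iff the precedence graph has no cycle."""
        succ, indeg = defaultdict(set), defaultdict(int)
        nodes = set()
        for a, b in edges:
            nodes |= {a, b}
            if b not in succ[a]:
                succ[a].add(b)
                indeg[b] += 1
        ready = [n for n in nodes if indeg[n] == 0]
        seen = 0
        while ready:
            n = ready.pop()
            seen += 1
            for m in succ[n]:
                indeg[m] -= 1
                if indeg[m] == 0:
                    ready.append(m)
        return seen == len(nodes)   # all nodes ordered: no cycle
    ```

    A topological order of a consistent graph also yields a schedule for the points, which is how a point-graph representation can support both the consistency analysis and the scheduling of patient flows mentioned above.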