142 research outputs found

    Spotlight Abstraction of Agents and Areas

    Get PDF
    We present "spotlight abstraction" as a generic abstraction technique for the analysis of systems comprising an unbounded number of communicating agents. The abstraction principle is heterogeneous in the sense that the behaviour of a finite number of agents is preserved while the others are only abstractly represented. The precision of the abstraction can be tuned by an iterative procedure based on the analysis of counterexamples. Going beyond existing work, we show how to use the spotlight principle for analysing systems where the physical position of agents is relevant. To this end, we put the spotlight on areas rather than on fixed sets of agents
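    The following is a minimal, illustrative sketch of the spotlight principle, not the paper's formal construction: a chosen finite set of agents keeps its concrete behaviour, while all remaining agents are collapsed into a single abstract "shadow" whose possible messages over-approximate theirs. The names Agent, ShadowAgent and spotlight_abstraction are hypothetical.

```python
# Toy spotlight abstraction: keep a finite set of agents concrete and
# collapse the rest into one over-approximating "shadow" agent.
from dataclasses import dataclass

@dataclass
class Agent:
    name: str
    state: str = "idle"

    def step(self, msg: str) -> None:
        # Concrete transition: react precisely to an incoming message.
        if msg == "request" and self.state == "idle":
            self.state = "busy"
        elif msg == "release":
            self.state = "idle"

@dataclass
class ShadowAgent:
    """Abstract summary of every agent outside the spotlight: it may emit
    any message of the alphabet at any time (an over-approximation)."""
    alphabet: tuple = ("request", "release")

    def possible_outputs(self) -> set:
        return set(self.alphabet)

def spotlight_abstraction(agents, spotlight_names):
    spotlight = [a for a in agents if a.name in spotlight_names]
    shadow = ShadowAgent()  # all other agents, represented abstractly
    return spotlight, shadow

# Usage: put the spotlight on two agents out of a large population.
agents = [Agent(f"a{i}") for i in range(1000)]
concrete, shadow = spotlight_abstraction(agents, {"a0", "a1"})
for msg in shadow.possible_outputs():  # explore abstract environment inputs
    for agent in concrete:
        agent.step(msg)
print([(a.name, a.state) for a in concrete])
```

    In a counterexample-guided loop, agents (or, per the paper, areas) named by a spurious counterexample would be moved from the shadow into the spotlight, which is the iterative precision tuning described above in miniature.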

    Counterexamples Revisited: Principles, Algorithms, Applications

    Full text link
    Algorithmic counterexample generation is a central feature of model checking which sets the method apart from other approaches such as theorem proving. The practical value of counterexamples to the verification engineer is evident, and for many years, counterexample generation algorithms have been employed in model checking systems, even though they had not been subject to an adequate fundamental investigation. Recent advances in model checking technology such as counterexample-guided abstraction refinement have put strong emphasis on counterexamples, and have led to renewed interest in both fundamental and pragmatic aspects of counterexample generation. In this paper, we survey several key contributions to the subject, including symbolic algorithms, results about the graph-theoretic structure of counterexamples, and applications to automated abstraction as well as software verification. "Irrefutability is not a virtue of a theory (as people often think) but a vice."
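    As a hedged illustration of the simplest case such surveys cover, the sketch below generates a counterexample to an invariant by breadth-first search over an explicit-state transition system, reconstructing a shortest violating trace from parent pointers; the function name and the toy system are our own, not taken from the paper.

```python
# Toy counterexample generation for an invariant property: breadth-first
# reachability over an explicit transition system; the first bad state found
# yields a shortest counterexample trace via parent pointers.
from collections import deque

def find_counterexample(init, successors, is_bad):
    """Return a shortest path from init to a bad state, or None if the
    invariant holds. `successors` maps a state to an iterable of states."""
    parent = {init: None}
    queue = deque([init])
    while queue:
        s = queue.popleft()
        if is_bad(s):
            trace = []
            while s is not None:          # walk parent pointers back to init
                trace.append(s)
                s = parent[s]
            return list(reversed(trace))  # counterexample: init ... bad
        for t in successors(s):
            if t not in parent:
                parent[t] = s
                queue.append(t)
    return None

# Usage: a small counter system where states >= 4 violate the invariant.
succ = lambda n: [n + 1, n * 2] if n < 10 else []
print(find_counterexample(0, succ, lambda n: n >= 4))  # [0, 1, 2, 4]
```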

    A compositional analysis of broadcasting embedded systems

    Get PDF
    This work takes as its starting point D. Kendall's CANdle/bCANdle algebraic framework for the formal modelling and specification of broadcasting embedded systems based on CAN networks. Checking real-time properties of such systems is beset by problems of state-space explosion, so a scheme is given for recasting systems specified in Kendall's framework as parallel compositions of timed automata; a CAN network channel is modelled as an automaton. This recasting is shown to be bisimilar to the original bCANdle model. In the recast framework, "compositionality" theorems allow one to infer that a model of a system is simulated by some abstraction of the model, and hence that properties of the model expressible in ACTL can be inferred from analogous properties of the abstraction. These theorems are reminiscent of "assume-guarantee" reasoning, allowing one to build simulations component-wise, although, unfortunately, components participating in a "broadcast" are required to be abstracted "atomically". Case studies are presented to show how this can be used in practice, and how systems which take impossibly long to model-check can be tackled by compositional methods. The work is also of broader interest, as the models are built as UPPAAL systems and the compositionality theorems apply to any UPPAAL system in which the components do not share local variables. The method could, for instance, extend to systems using some network other than CAN, provided it can be modelled by timed automata. Possibilities also exist for building it into an automated tool, complementing other methods such as counterexample-guided abstraction refinement.
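    As a hedged sketch of the semantic ingredient behind such compositionality results, the code below checks that one finite labelled transition system simulates another by computing the greatest simulation relation; it is an untimed stand-in, not Kendall's framework or UPPAAL's timed-automata representation, and the example systems are invented.

```python
# Illustrative simulation check between two finite labelled transition
# systems, each given as a dict: state -> {label: set of successor states}.
# If the abstraction simulates the model, universal (ACTL-style) properties
# of the abstraction also hold of the model.

def simulates(abs_trans, conc_trans, abs_init, conc_init):
    """True if the abstract LTS simulates the concrete one; computed as the
    greatest simulation relation by iterative refinement."""
    rel = {(c, a) for c in conc_trans for a in abs_trans}
    changed = True
    while changed:
        changed = False
        for (c, a) in list(rel):
            for label, c_succs in conc_trans[c].items():
                a_succs = abs_trans[a].get(label, set())
                for c2 in c_succs:
                    # every concrete move must be matched by an abstract move
                    if not any((c2, a2) in rel for a2 in a_succs):
                        rel.discard((c, a))
                        changed = True
                        break
                if (c, a) not in rel:
                    break
    return (conc_init, abs_init) in rel

# Usage: a two-state sender is simulated by a one-state abstraction that
# can always send or acknowledge.
concrete = {"idle": {"send": {"busy"}}, "busy": {"ack": {"idle"}}}
abstract = {"any": {"send": {"any"}, "ack": {"any"}}}
print(simulates(abstract, concrete, "any", "idle"))  # True
```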

    Computer Aided Verification

    Get PDF
    This open access two-volume set LNCS 10980 and 10981 constitutes the refereed proceedings of the 30th International Conference on Computer Aided Verification, CAV 2018, held in Oxford, UK, in July 2018. The 52 full and 13 tool papers presented together with 3 invited papers and 2 tutorials were carefully reviewed and selected from 215 submissions. The papers cover a wide range of topics and techniques, from algorithmic and logical foundations of verification to practical applications in distributed, networked, cyber-physical, and autonomous systems. They are organized in topical sections on model checking; program analysis using polyhedra; synthesis; learning; runtime verification; hybrid and timed systems; tools; probabilistic systems; static analysis; theory and security; SAT, SMT and decision procedures; concurrency; and CPS, hardware, industrial applications.

    Fundamental Approaches to Software Engineering

    Get PDF
    This open access book constitutes the proceedings of the 25th International Conference on Fundamental Approaches to Software Engineering, FASE 2022, which was held during April 4-5, 2022, in Munich, Germany, as part of the European Joint Conferences on Theory and Practice of Software, ETAPS 2022. The 17 regular papers presented in this volume were carefully reviewed and selected from 64 submissions. The proceedings also contain 3 contributions from the Test-Comp Competition. The papers deal with the foundations on which software engineering is built, including topics like software engineering as an engineering discipline, requirements engineering, software architectures, software quality, model-driven development, software processes, software evolution, AI-based software engineering, and the specification, design, and implementation of particular classes of systems, such as (self-)adaptive, collaborative, AI, embedded, distributed, mobile, pervasive, cyber-physical, or service-oriented applications.

    Theoretical analysis of the philosophy and practice of disciplined inquiry

    Get PDF
    This dissertation theoretically examined the process of disciplined inquiry in the social sciences, from its philosophical foundations to its extensions into practice. Key to the conceptualization of disciplined inquiry were two regulative ideals: the commitment to the concepts that define the possibility of experience, and the commitment to processes for combining the concepts of experience. The paradigm theory of Lincoln, Lynham, and Guba (e.g., Lincoln & Lynham, 2011; Lincoln, Lynham, & Guba, 2011) provided a sophisticated explanation of the possibility of experience that inquirers can commit to when engaging in disciplined inquiries. A review of the literature revealed an inadequacy in the state of theoretical understanding of processes for combining the concepts of experience. To develop a theoretical agenda of research for disciplined inquiry, the literature on paradigm theory and theory building was analyzed. A historical analysis of paradigm theory revealed milestones in more than 40 years of inquiry focused on conceptualization of the theory. A reverse-engineering analysis theoretically examined paradigm theory and the milestones identified from the historical analysis for key features of the theoretical process. A revised conceptualization of disciplined inquiry was presented, and a theoretical agenda for developing the underlying theoretical framework for the processes of combining the concepts of experience was outlined.

    Constructions and justifications of a generalization of Viviani's theorem.

    Get PDF
    This qualitative study actively engaged a group of eight pre-service mathematics teachers (PMTs) in an evolutionary process of generalizing and justifying. It was conducted in a developmental context and underpinned by a strong constructivist framework. Using a set of task-based activities embedded in a dynamic geometry context, this study firstly investigated how the PMTs experienced the reconstruction of Viviani’s theorem via the processes of experimentation, conjecturing, generalizing and justifying. Secondly, it investigated how they generalized Viviani’s result for equilateral triangles across a sequence of higher-order equilateral (convex) polygons such as the rhombus and pentagon, and eventually to ‘any’ convex equi-sided polygon, with appropriate forms of justification. The study also inquired how PMTs experienced counter-examples from a conceptual change perspective, and how they modified their conjecture generalizations and/or justifications as a result of such experiences, particularly in instances where such modifications took place. Apart from constructivism and conceptual change, the design of the activities and the analysis of students’ justifications were underpinned by the distinction between the so-called ‘explanatory’ and ‘discovery’ functions of proof. Analysis of data was grounded in an analytical–inductive method governed by an interpretive paradigm. Results of the study showed that, in order for students to reconstruct Viviani’s generalization for equilateral triangles, all students required: experimental exploration in a dynamic geometry context; experiencing cognitive conflict with their initial conjecture; and further experimental exploration and a reformulation of their initial conjecture to finally achieve cognitive equilibrium. Although most students still required these experiences again as they extended the Viviani generalization for equilateral triangles to equilateral convex polygons of four sides (rhombi) and five sides (pentagons), the need for experimental exploration gradually subsided. All PMTs expressed a need for an explanation as to why their equilateral triangle conjecture generalization was always true, and were only able to construct a logical explanation through scaffolded guidance by means of a worksheet. The majority of the PMTs (i.e. six out of eight) extended the Viviani generalization to the rhombus on empirical grounds using Sketchpad, while two did so on analogical grounds but superficially. However, as the PMTs progressed to the (convex) equilateral pentagon problem, the majority generalized on either inductive or analogical grounds without the use of Sketchpad. Finally, all of them generalized to any convex equi-sided polygon on logical grounds. In so doing, it seems that all the PMTs finally cut off their ontological bonds with their earlier forms or processes of making generalizations. This conceptual growth pattern was also exhibited in the ways the PMTs justified each of their further generalizations, as they were progressively able to see the general proof through particular proofs, and hence justify their deductive generalization of Viviani’s theorem. This study has also shown that the phenomenon of looking back (folding back) at their prior explanations assisted the PMTs to extend their logical explanations to the general equi-sided polygon.
The development of a logical explanation (proof) for the general case, after looking back and carefully analysing the statements and reasons that make up the proof arguments for the prior particular cases (i.e. specific equilateral convex polygons, namely the pentagon, rhombus and equilateral triangle), emulates the ‘discovery’ function of proof. This suggests that the ‘explanatory’ function of proof complements and feeds into the ‘discovery’ function of proof. Experimental exploration in a dynamic geometry context provided students with a heuristic counterexample to their initial conjectures that caused internal cognitive conflict and surprise, to the extent that their cognitive equilibrium became disturbed. This paved the way for conceptual change to occur through the modification of their postulated conjecture generalizations. Furthermore, this study has shown that there exists a close link between generalization and justification. In particular, justifications in the form of logical explanations seemed to have helped the students to understand and make sense of why their generalizations were always true, and by considering their justifications for their earlier generalizations (equilateral triangle, rhombus and pentagon) students were able to make their generalization to any convex equi-sided polygon on deductive grounds. Thus, with ‘deductive’ generalization as shown by the students, especially in the final stage, justification was woven into the generalization itself. In conclusion, from a practitioner perspective, this study has provided a descriptive analysis of a ‘guided approach’ to both the further construction and justification of generalizations via an evolutionary process, which mathematics teachers could use as a model for their own attempts in their mathematics classrooms.
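    For context, the generalization at the centre of these tasks admits a short area argument; the derivation below is our own summary, not the worksheet or justification sequence used in the study. Joining an interior point P of a convex polygon with n sides of equal length s to the vertices cuts the polygon into n triangles, so the sum of the distances from P to the sides depends only on the area and side length:

```latex
% Area argument for the generalized Viviani theorem (equi-sided convex n-gon):
% P an interior point, s the common side length, d_i the distance from P to
% the i-th side, A the area of the polygon.
\[
  A \;=\; \sum_{i=1}^{n} \tfrac{1}{2}\, s \, d_i
    \;=\; \frac{s}{2} \sum_{i=1}^{n} d_i
  \qquad\Longrightarrow\qquad
  \sum_{i=1}^{n} d_i \;=\; \frac{2A}{s},
\]
% a constant independent of P; for an equilateral triangle this constant is
% the altitude, which is Viviani's original statement.
```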