
    The Omnibus language and integrated verification approach

    This thesis describes the Omnibus language and its supporting framework of tools. Omnibus is an object-oriented language which is superficially similar to the Java programming language but uses value semantics for objects and incorporates a behavioural interface specification language. Specifications are defined in terms of a subset of the query functions of the classes, for which a frame-condition logic is provided. The language is well suited to the specification of modelling types and can also be used to write implementations. An overview of the language is presented, and then specific aspects such as subtleties in the frame-condition logic, the implementation of value semantics and the role of equality are discussed. The challenges of reference semantics are also discussed. The Omnibus language is supported by an integrated verification tool which provides support for three assertion-based verification approaches: run-time assertion checking, extended static checking and full formal verification. The different approaches provide different balances between rigour and ease of use. The Omnibus tool allows these approaches to be used together in different parts of the same project, and guidelines are presented to help users avoid conflicts when combining them. The use of the integrated verification approach to meet two key requirements of safe software component reuse, namely clear descriptions and some form of certification, is discussed along with the specialised facilities provided by the Omnibus tool to manage the distribution of components. The principles of the implementation of the tool are described, focussing on the integrated static verifier module that supports both extended static checking and full formal verification through the use of an intermediate logic. The different verification approaches are used to detect and correct a range of errors in a case study carried out using the Omnibus language.
The case study is of a library system where copies of books, CDs and DVDs are loaned out to members. The implementation consists of 2278 lines of Omnibus code spread over 15 classes. To allow direct comparison of the different assertion-based verification approaches considered, run-time assertion checking, extended static checking and then full formal verification are applied to the application in its entirety. This directly illustrates the different balances between error coverage and ease of use which the approaches offer. Finally, the verification policy system is used to allow the approaches to be used together to verify different parts of the application.
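The run-time assertion checking approach described above can be made concrete with a small sketch. The following Python fragment is not Omnibus syntax, and all names are hypothetical; it only illustrates checking a contract at run time, in the spirit of the library case study:

```python
# Illustrative sketch (not Omnibus syntax): run-time assertion checking
# of a pre-/postcondition contract. All names are hypothetical.

def contract(pre, post):
    """Wrap a function so its pre- and postcondition are checked at run time."""
    def wrap(f):
        def checked(*args):
            assert pre(*args), "precondition violated"
            result = f(*args)
            assert post(result, *args), "postcondition violated"
            return result
        return checked
    return wrap

@contract(pre=lambda copies, n: 0 < n <= copies,
          post=lambda result, copies, n: result == copies - n)
def lend(copies, n):
    """Lend n copies out of the available stock."""
    return copies - n

print(lend(5, 2))  # contract holds; prints 3
```

Extended static checking and full formal verification would instead try to discharge the same pre/post pair before the program runs, which is the trade-off the abstract describes.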

    Doctor of Philosophy

    The trusted computing base (TCB) of a computer system comprises the components that must be trusted in order to support its security policy. Research communities have identified the well-known minimal-TCB principle: the TCB of a system should be as small as possible, so that it can be thoroughly examined and verified. This dissertation is an experiment showing how small the TCB for an isolation service can be, based on software fault isolation (SFI), for small multitasking embedded systems. The TCB achieved by this dissertation includes just the formal definitions of isolation properties, instruction semantics, program logic, and a proof assistant, besides hardware. There is no compiler, assembler, verifier, rewriter, or operating system in the TCB. To the best of my knowledge, this is the smallest TCB that has ever been shown to guarantee nontrivial properties of real binary programs on real hardware. This is accomplished by combining SFI techniques with high-confidence formal verification. An SFI implementation inserts dynamic checks before dangerous operations, and these checks provide the invariants needed by the formal verification to prove theorems about the isolation properties of ARM binary programs. The high-confidence assurance of the formal verification comes from two facts. First, the verification is based on an existing realistic semantics of the ARM ISA that was independently developed by Cambridge researchers. Second, the verification is conducted in a higher-order proof assistant, the HOL theorem prover, which mechanically checks every verification step by rigorous logic. In addition, the entire verification process, including both specification generation and verification, is automatic. To support proof automation, a novel program logic has been designed, and an automatic reasoning framework for verifying shallow safety properties has been developed.
The program logic integrates Hoare-style reasoning and Floyd's inductive assertion reasoning in a small set of definitions, which overcomes shortcomings of Hoare logic and facilitates proof automation. All inference rules of the logic are proven based on the instruction semantics and the logic definitions. The framework leverages abstract interpretation to automatically find the function specifications required by the program logic. The results of the abstract interpretation are used to construct the function specifications automatically, and the specifications are proven without human interaction by utilizing intermediate theorems generated during the abstract interpretation. All of these work in concert to create this very small TCB.
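The key SFI mechanism, a dynamic check inserted before a dangerous operation, can be sketched in a few lines. This is an illustrative Python model, not the dissertation's verified ARM-level implementation; the sandbox constants are hypothetical:

```python
# Illustrative sketch of SFI: a dynamic check (here, address masking)
# before a dangerous store confines all writes to a sandbox region.
# Constants are hypothetical.

SANDBOX_BASE = 0x20000000   # start of the data sandbox
SANDBOX_MASK = 0x0000FFFF   # sandbox size: 64 KiB

memory = {}

def masked_store(addr, value):
    """Force the address into the sandbox before writing."""
    safe = SANDBOX_BASE | (addr & SANDBOX_MASK)
    memory[safe] = value
    return safe

# A wild address is redirected into the sandbox rather than corrupting
# other tasks' memory.
print(hex(masked_store(0xDEADBEEF, 42)))  # prints 0x2000beef
```

The masking establishes exactly the kind of invariant ("every store lands inside the sandbox") that the formal verification described above can then prove once and for all about the rewritten binary.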

    Analyses on tech-enhanced and anonymous Peer Discussion as well as anonymous Control Facilities for tech-enhanced Learning

    Over the last decade, an increasing number of university freshmen has been observable, in absolute numbers as well as a percentage of the population. At the same time, however, the drop-out rate has increased significantly. While a drop in attendance could be observed over the same period, statistics show that young professionals consider only roughly thirty percent of their qualification to originate in their university education. Taken together with the above, one conclusion could be that students fail to see the importance of fundamental classes and choose to seek knowledge elsewhere, for example in free online courses. However, the knowledge acquired this way is a non-attributable qualification. One solution to this problem must be to make on-site activities more attractive. A promising approach to raising attractiveness would be to support students in self-regulated learning processes, making them experience the importance and value of their own decisions based on realistic self-assessment and self-evaluation. At the same time, strict ex-cathedra teaching should be replaced by interactive forms of education, ideally activating on a meta-cognitive level. In particular, as many students bring mobile communication devices into classes, this approach could be extended by utilising these mobile devices as second screens, providing enhanced learning experiences. The basic idea is simple, namely to contribute to psychological concepts with the means of computer science. An example of this idea are audience response systems. There has been extensive research into these and related approaches for university lectures, but other forms of education have not been sufficiently considered, for example tutorials. This technological aspect can be combined with recent didactics research and concepts like peer instruction or visible learning.
Therefore, this dissertation presents an experimental approach at providing existing IT solutions for on-site tutorials, specifically tools for audience responses, evaluations, learning demand assessments, peer discussion, and virtual interactive whiteboards. These tools are provided under observation of anonymity and cognisant incidental utilisation. They provide insight into students' motivation to attend classes, their motivation to utilise tools, and into their tool utilisation itself. Experimental findings are combined into an extensible system concept consisting of three major tool classes: anonymous peer discussion means, anonymous control facilities, and learning demand assessment. With the exception of the latter, promising findings in the context of tutorials are presented, for example the reduction of audience response systems to an emergency brake, the versatility of (peer) discussion systems, or a demand for retroactive deanonymisation of contributions. The overall positive impact of tool utilisation on motivation to attend and perceived value of tutorials is discussed and supplemented by a positive impact on the final exams' outcomes.

    Verification of Graph Programs

    This thesis is concerned with verifying the correctness of programs written in GP 2 (for Graph Programs), an experimental, nondeterministic graph manipulation language, in which program states are graphs, and computational steps are applications of graph transformation rules. GP 2 allows for visual programming at a high level of abstraction, with the programmer freed from manipulating low-level data structures and instead solving graph-based problems in a direct, declarative, and rule-based way. To verify that a graph program meets some specification, however, has been -- prior to the work described in this thesis -- an ad hoc task, detracting from the appeal of using GP 2 to reason about graph algorithms, high-level system specifications, pointer structures, and the many other practical problems in software engineering and programming languages that can be modelled as graph problems. This thesis describes some contributions towards the challenge of verifying graph programs, in particular, Hoare logics with which correctness specifications can be proven in a syntax-directed and compositional manner. We contribute calculi of proof rules for GP 2 that allow for rigorous reasoning about both partial correctness and termination of graph programs. These are given in an extensional style, i.e. independent of fixed assertion languages. This approach allows for the re-use of proof rules with different assertion languages for graphs, and moreover, allows for properties of the calculi to be inherited: soundness, completeness for termination, and relative completeness (for sufficiently expressive assertion languages). We propose E-conditions as a graphical, intuitive assertion language for expressing properties of graphs -- both about their structure and labelling -- generalising the nested conditions of Habel, Pennemann, and Rensink. 
We instantiate our calculi with this language, explore the relationship between the decidability of the model checking problem and the existence of effective constructions for the extensional assertions, and fix a subclass of graph programs for which we have both. The calculi are then demonstrated by verifying a number of data- and structure-manipulating programs. We explore the relationship between E-conditions and classical logic, defining translations between the former and a many-sorted predicate logic over graphs, with the logic being a potential front end to an implementation of our work in a proof assistant. Finally, we speculate on several avenues of interesting future work; in particular, a possible extension of E-conditions with transitive closure, for proving specifications involving properties about arbitrary-length paths.
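To give a flavour of the computational model the abstract describes, the following Python sketch (not GP 2 syntax; all names are hypothetical) models program states as graphs and a computational step as a nondeterministic rule application, with a postcondition checked after the rule has been applied as long as possible:

```python
# Illustrative sketch (not GP 2 syntax): program states are graphs, a
# computational step is a rule application. The rule below deletes one
# edge; the postcondition checks that the graph became edge-empty.

def apply_delete_edge(graph):
    """One nondeterministic step: remove some edge, if any exists."""
    nodes, edges = graph
    if not edges:
        return None                      # rule not applicable
    e = next(iter(edges))                # nondeterministic choice
    return (nodes, edges - {e})

def run_to_completion(graph):
    """Apply the rule as long as possible (GP 2-style iteration)."""
    while (nxt := apply_delete_edge(graph)) is not None:
        graph = nxt
    return graph

g = ({1, 2, 3}, {(1, 2), (2, 3)})
nodes, edges = run_to_completion(g)
assert edges == set()                    # postcondition: no edges remain
```

A Hoare-style calculus for such programs relates an assertion on the input graph to an assertion on all possible output graphs, which is what the proof rules contributed by the thesis make compositional.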

    Diagrammatic Languages and Formal Verification: A Tool-Based Approach

    The importance of software correctness has been accentuated as a growing number of safety-critical systems have been developed relying on software operating these systems. One of the more prominent methods targeting the construction of a correct program is formal verification. Formal verification identifies a correct program as a program that satisfies its specification and is free of defects. While in theory formal verification guarantees a correct implementation with respect to the specification, applying formal verification techniques in practice has proven difficult and expensive. In response to these challenges, various support methods and tools have been suggested for all phases from program specification to proving the derived verification conditions. This thesis concerns practical verification methods applied to diagrammatic modeling languages. While diagrammatic languages are widely used in communicating system design (e.g., UML) and behavior (e.g., state charts), most formal verification platforms require the specification to be written in a textual specification language or in the mathematical language of an underlying logical framework. One exception is invariant-based programming, in which programs together with their specifications are drawn as invariant diagrams, a type of state transition diagram annotated with intermediate assertions (preconditions, postconditions, invariants). Even though the allowed program states, called situations, are described diagrammatically, the intermediate assertions defining a situation's meaning in the domain of the program are still written in conventional textual form. To explore the use of diagrams in expressing the intermediate assertions of invariant diagrams, we designed a pictorial language for expressing array properties. We further developed this notation into a diagrammatic domain-specific language (DSL) and implemented it as an extension to the Why3 platform.
The language is based on Reynolds's interval and partition diagrams and includes a construct for mapping array intervals to logic predicates. Automated verification of a program is attained by generating the verification conditions and proving that they are true. In practice, full proof automation is not possible except for trivial programs, and verifying even simple properties can require significant effort in both the specification and proof stages. An animation tool which supports run-time evaluation of the program statements and intermediate assertions, given any user-defined input, can support this process. In particular, an execution trace leading up to a failed assertion constitutes a refutation of a verification condition that requires immediate attention. As an extension to Socos, a verification tool for invariant diagrams built on top of the PVS proof system, we have developed an execution model where program statements and assertions can be evaluated in a given program state. A program is represented by an abstract datatype encoding the program state, together with a small-step state transition function encoding the evaluation of a single statement. This allows the program's runtime behavior to be formally inspected during verification. We also implemented animation and interactive debugging support for Socos. The thesis also explores visualization of system development in the context of model decomposition in Event-B. Decomposing a software system becomes increasingly critical as the system grows larger, since the workload on the theorem provers must be distributed effectively. Decomposition techniques have been suggested in several verification platforms to split models into smaller units, each having fewer verification conditions and therefore imposing a lighter load on automatic theorem provers.
In this work, we have investigated a refinement-based decomposition technique that makes the development process more resilient to changes in the specification and allows parallel development of sub-models by a team. As part of the research, we evaluated the technique on a small case study, a simplified version of the landing gear system verification presented by Boniol and Wiels, within the Event-B specification language.
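The idea of mapping an array interval to a logic predicate can be illustrated with a small sketch. The following Python fragment is not the Why3 DSL described above, and all names are hypothetical:

```python
# Illustrative sketch (hypothetical names, not the Why3 DSL): an
# interval of an array is mapped to a logic predicate, in the spirit of
# Reynolds's interval and partition diagrams.

def interval_holds(pred, a, lo, hi):
    """Check that predicate `pred` holds on the slice a[lo:hi]."""
    return pred(a[lo:hi])

def sorted_segment(xs):
    """Predicate: the segment is sorted in nondecreasing order."""
    return all(xs[i] <= xs[i + 1] for i in range(len(xs) - 1))

a = [3, 1, 2, 4, 8, 0]
# Partition-style claim: a[2..5) is sorted, the rest is unconstrained.
print(interval_holds(sorted_segment, a, 2, 5))  # prints True
```

In the actual DSL such a claim would appear as an annotation in an invariant diagram and be discharged by Why3's provers rather than evaluated; run-time evaluation of this kind is exactly what the Socos animation support adds.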

    Verification, slicing, and visualization of programs with contracts

    Doctoral thesis in Informatics (specialisation in Computer Science). As a specification carries relevant information concerning the behaviour of a program, why not explore this fact to slice the program in a semantic sense, aiming at optimizing it or easing its verification? It was this idea that Comuzzi introduced in 1996 with the notion of postcondition-based slicing: slicing a program using the information contained in the postcondition (the condition Q that is guaranteed to hold at the exit of the program). After him, several advances were made and different extensions were proposed, bridging the two areas of Program Verification and Program Slicing: specifically precondition-based slicing and specification-based slicing. The work reported in this Ph.D. dissertation explores further relations between these two areas, aiming at discovering mutual benefits. A deep study of specification-based slicing has shown that the original algorithm is not efficient and does not produce minimal slices. In this dissertation, traditional specification-based slicing algorithms are revisited and improved (their formalization is proposed under the name of assertion-based slicing), in a new framework that is appropriate for reasoning about imperative programs annotated with contracts and loop invariants. In the same theoretical framework, the semantic slicing algorithms are extended to work at the program level through a new concept called contract-based slicing. Contract-based slicing, constituting another contribution of this work, allows for the study of a program at an interprocedural level, enabling optimizations in the context of code reuse. Motivated by the lack of tools showing that the proposed algorithms work in practice, a tool (GamaSlicer) was also developed. It implements all the existing semantic slicing algorithms, in addition to the ones introduced in this dissertation.
This third contribution is based on generic graph visualization and animation algorithms that were adapted to work with verification and slice graphs, two specific cases of labelled control flow graphs. Supported by Fundação para a Ciência e a Tecnologia (FCT) through doctoral grant SFRH/BD/33231/2007, the RESCUE project (FCT contract PTDC/EIA/65862/2006), and the CROSS project (FCT contract PTDC/EIACCO/108995/2008).
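The core idea behind postcondition-based slicing, as described in the abstract above, can be illustrated with a minimal sketch. The code below is not the dissertation's algorithm (which reasons symbolically over contracts and loop invariants); it is a bounded, testing-based approximation in which a program is a list of assignment statements, and a candidate slice is accepted only if it establishes the postcondition Q on every sampled initial state where the full program does. All names here (`smallest_slice`, `run`) are illustrative.

```python
# Illustrative sketch (NOT the dissertation's algorithm): a bounded,
# testing-based approximation of postcondition-based slicing.
# A program is a list of statements (callables mutating a state dict);
# a candidate slice is kept only if it establishes the postcondition
# on every sampled state where the full program establishes it.

from itertools import combinations

def run(program, state):
    """Execute a list of statements on a copy of the given state."""
    s = dict(state)
    for stmt in program:
        stmt(s)
    return s

def smallest_slice(program, post, states):
    """Return a shortest subsequence of `program` (and its kept indices)
    that preserves `post` on every sampled state where the full program
    establishes it."""
    relevant = [s for s in states if post(run(program, s))]
    for k in range(len(program) + 1):
        for idx in combinations(range(len(program)), k):
            cand = [program[i] for i in idx]
            if all(post(run(cand, s)) for s in relevant):
                return cand, idx
    return program, tuple(range(len(program)))

# Toy program: x := x + 1; y := 7; x := x * 2
p = [
    lambda s: s.__setitem__("x", s["x"] + 1),
    lambda s: s.__setitem__("y", 7),
    lambda s: s.__setitem__("x", s["x"] * 2),
]
post = lambda s: s["y"] == 7          # postcondition Q: y == 7
states = [{"x": i, "y": 0} for i in range(5)]
_, kept = smallest_slice(p, post, states)
# Only the assignment y := 7 is needed to establish Q, so kept == (1,)
```

The exhaustive subset search is exponential and the testing check is only an under-approximation of semantic entailment; the dissertation's contribution is precisely doing this soundly and efficiently via verification conditions.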

    Algebraic Principles for Program Correctness Tools in Isabelle/HOL

    Get PDF
    This thesis puts forward a flexible and principled approach to the development of construction and verification tools for imperative programs, in which the control flow and the data level are cleanly separated. The approach is inspired by algebraic principles and benefits from an algebraic semantics layer. It is programmed in the Isabelle/HOL interactive theorem prover and yields simple lightweight mathematical components as well as program construction and verification tools that are themselves correct by construction. First, a simple tool is implemented using Kleene algebra with tests (KAT) for the control flow of while-programs, which is the most compact verification formalism for imperative programs, and their standard relational semantics for the data level. A reference formalisation of KAT in Isabelle/HOL is then presented, providing three different formalisations of tests. The structured comprehensive libraries for these algebras include an algebraic account of Hoare logic for partial correctness. Verification condition generation and program construction rules are based on equational reasoning and supported by powerful Isabelle tactics and automated theorem proving. Second, the tool is expanded to support different programming features and verification methods. A basic program construction tool is developed by adding an operation for the specification statement and one single axiom. To include recursive procedures, KATs are expanded further to quantales with tests, where iteration and the specification statement can be defined explicitly. Additionally, a nondeterministic extension supports the verification of simple concurrent programs. Finally, the approach is also applied to separation logic, where the control flow is modelled by power series with convolution as separating conjunction. A generic construction lifts resource monoids to assertion and predicate transformer quantales. The data level is captured by concrete store-heap models. 
These are linked to the algebra by soundness proofs. A number of examples show the tools at work.

    Enhancements to jml and its extended static checking technology

    Get PDF
    Formal methods are useful for developing high-quality software, but to make use of them, easy-to-use tools must be available. This thesis presents our work on the Java Modeling Language (JML) and its static verification tools. A main contribution is Offline User-Assisted Extended Static Checking (OUA-ESC), which is positioned between the traditional, fully automatic ESC and interactive Full Static Program Verification (FSPV). With OUA-ESC, automated theorem provers are used to discharge as many Verification Conditions (VCs) as possible, then users are allowed to provide Isabelle/HOL proofs for the sub-VCs that cannot be discharged automatically. Thus, users are able to take advantage of the full power of Isabelle/HOL to manually prove the system correct, if they so choose. Exploring unproven sub-VCs with Isabelle's ProofGeneral has also proven very useful for debugging code and its specifications. We also present syntax and semantics for monotonic non-null references, a common category that has not been previously identified. This monotonic non-null modifier allows some fields previously declared as nullable to be treated like local variables for nullity flow analysis. To support this work, we developed JML4, an Eclipse-based Integrated Verification Environment (IVE) for the Java Modeling Language. JML4 provides integration of JML into all of the phases of the Eclipse JDT's Java compiler, makes use of external API specifications, and provides native error reporting. The verification techniques initially supported include a Non-Null Type System (NNTS), Runtime Assertion Checking (RAC), and Extended Static Checking (ESC); and verification tools to be developed by other researchers can be incorporated. JML4 was adopted by the JML community as the platform for their combined research efforts. ESC4, JML4's ESC component, provides other novel features not found before in ESC tools. 
Multiple provers are used automatically, which provides a greater coverage of language constructs that can be verified. Multi-threaded generation and distributed discharging of VCs, as well as a proof-status caching strategy, greatly speed up this CPU-intensive verification technique. VC caches are known to be fragile, and we developed a simple way to remove some of that fragility. These features combine to form the first IVE for JML, which will hopefully bring the improved quality promised by formal methods to Java developers.
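Runtime Assertion Checking, one of the verification techniques the abstract lists, can be sketched outside of Java. The decorator below is a minimal Python analogue (not JML itself, and `contract` is a hypothetical name) of JML's `requires`/`ensures` clauses: the precondition is checked on entry, the postcondition on exit, and a violation surfaces as an assertion failure at run time, just as JML's RAC instruments compiled bytecode.

```python
# Illustrative sketch (Python, NOT JML): runtime assertion checking in the
# style of JML's `requires`/`ensures` method specification clauses.

import functools

def contract(requires=lambda *a, **k: True,
             ensures=lambda result, *a, **k: True):
    """Wrap a function so its pre- and postconditions are checked at runtime."""
    def wrap(f):
        @functools.wraps(f)
        def checked(*args, **kwargs):
            assert requires(*args, **kwargs), "precondition violated"
            result = f(*args, **kwargs)
            assert ensures(result, *args, **kwargs), "postcondition violated"
            return result
        return checked
    return wrap

@contract(requires=lambda x: x >= 0,
          ensures=lambda r, x: r * r <= x < (r + 1) * (r + 1))
def isqrt(x):
    """Integer square root by simple linear search."""
    r = 0
    while (r + 1) * (r + 1) <= x:
        r += 1
    return r
```

Extended static checking, by contrast, tries to discharge these same conditions before the program ever runs; the OUA-ESC contribution above is about what happens when the automatic provers fail on some sub-VCs.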

    Potential of One-to-One Technology Uses and Pedagogical Practices: Student Agency and Participation in an Economically Disadvantaged Eighth Grade

    Get PDF
    The accelerated growth of 1:1 educational computing initiatives has challenged digital equity with a three-tiered, socioeconomic digital divide: (a) access, (b) higher order uses, and (c) user empowerment and personalization. As the access gap has been closing, the exponential increase of 1:1 devices threatens to widen the second and third digital divides. Using critical theory, specifically critical theory of technology and critical pedagogy, and a qualitative case study design, this research explored the experiences of a middle school categorized under California criteria as "socioeconomically disadvantaged." This study contributes to critical theory on technology within an educational setting, as well as provides voice to the experiences of teachers and students with economic disadvantages experiencing the phenomena of 1:1 computing. Using observational, interview, and school document data, this study asked the question: To what extent do 1:1 technology integration uses and associated pedagogical practices foster Margins of Maneuver in an eighth grade comprised of a student population that is predominantly economically disadvantaged? Probing two key markers of Margins of Maneuver, student agency and participation, the study found: (a) a technology-enhanced learning culture; (b) a teacher shift to facilitator roles; (c) instances of engaged, experiential, and inquiry learning and higher order technology uses; and (d) in-progress efforts to strengthen student voice and self-identity. Accompanying the progress in narrowing economically based digital divides, the data also demonstrated some tension with the knowledge economy. Nevertheless, sufficient margins existed, associated with one-to-one uses and practices, to result in micro-resistances characterized by assertion of student agency and democratization potential.

    A Quality Framework for Software Development (QFSD)

    Get PDF
    INTRODUCTION. This research delivers a new complete and prescriptive software development framework, known as the Quality Framework for Software Development (QFSD), for immediate use by software development practitioners. Whilst a number of existing methodologies and many software development standards are available, they fail to address the complete development lifecycle. A review of current literature supports this assertion. AIMS AND OBJECTIVES. The overall aim of the research is to create a new software development framework, applying it to a substantial number of real-world software projects in two different industrial software development environments and thereby demonstrating its effectiveness. METHODS. Based on a review of the available research approaches and strategies, the researcher selected 'pragmatism' as the most suitable for this research. This selection was driven by two contributory factors. The first was that in order to conduct the research the researcher would have active participation in the majority of the research activities. The second was that the deliverables from the research should be immediately usable for the benefit of software practitioners and hence not be regarded as a theoretical framework. The approach was further refined by adopting Action Research and Case Study strategies. The research was divided into stages, each of which was executed within a separate company. The companies were very different in terms of their business areas, culture, and views on quality, specifically the quality of software deliverables. RESULTS. The research findings provided a strong indication that a holistic software development framework does provide an improvement in the quality of software project deliverables and repeatability in terms of schedules and quality. In the case of Fisher–Rosemount it enabled them to attain ISO 9000/TickIT accreditation. 
    In addition, by providing all processes and tools in a single web-based environment, adoption by software developers, project managers, and senior management was very high.