
    SeSFJava: A Framework for Design and Assertion-Testing of Concurrent Systems

    Many elegant formalisms have been developed for specifying and reasoning about concurrent systems. However, these formalisms have not been widely used by developers and programmers of concurrent systems. One reason is that most formal methods involve techniques and tools not familiar to programmers, for example, a specification language very different from C, C++ or Java. SeSF is a framework for design, verification and testing of concurrent systems that attempts to address these concerns by keeping the theory close to the programmer's world. SeSF considers "layered compositionality". Here, a composite system consists of layers of component systems, and "services" define the allowed sequences of interactions between layers. SeSF uses conventional programming languages to define services. Specifically, SeSF is a markup language that can be integrated with any programming language. We have integrated SeSF into Java, resulting in what we call SeSFJava. We developed a testing harness for SeSFJava, called SeSFJava Harness, in which a (distributed) SeSFJava program can be executed, and the execution checked against its service and any other correctness assertion. A key capability of the SeSFJava Harness is that one can test the final implementation of a concurrent system, rather than just an abstract representation of it. We present two major applications of SeSFJava and the Harness. The first is to the TCP transport layer, where the service specification is cast in SeSFJava and the system is tested under the SeSFJava Harness. The second is to a Gnutella network. We define the intended services of Gnutella -- which, to the best of our knowledge, had not been done before -- and test an open-source implementation, namely Furi, against this service.
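
    The abstract does not reproduce SeSF's markup syntax, so the following is a minimal plain-Java sketch of the underlying idea only: a "service" as a checkable description of the allowed interaction sequences between two layers. All names here are hypothetical, not taken from SeSFJava.

    ```java
    import java.util.ArrayDeque;
    import java.util.Deque;

    /** A "service": the allowed sequences of request/response events between two layers. */
    final class RequestResponseService {
        private final Deque<String> pending = new ArrayDeque<>();

        /** The upper layer sends a request down. */
        synchronized void onRequest(String id) {
            pending.addLast(id);
        }

        /** The lower layer sends a response up; assert that this sequence is allowed. */
        synchronized void onResponse(String id) {
            assert !pending.isEmpty() && id.equals(pending.peekFirst())
                : "service violation: response " + id + " without a matching request";
            pending.removeFirst();
        }
    }
    ```

    A harness in this style executes the real implementation and routes every inter-layer event through such a checker, which is what lets the final implementation, rather than an abstraction, be tested.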

    Enhancing System Realisation in Formal Model Development

    Software for mission-critical systems is sometimes analysed using formal specification to increase the chances of the system behaving as intended. When sufficient insights into the system have been obtained from the formal analysis, the formal specification is realised in the form of a software implementation. One way to realise the system's software is by automatically generating it from the formal specification -- a technique referred to as code generation. However, in general it is difficult to make guarantees about the correctness of the generated code -- especially while requiring automation of the steps involved in realising the formal specification. This PhD dissertation investigates ways to improve the automation of the steps involved in realising and validating a system based on a formal specification. The approach aims to develop properly designed software tools which support the integration of formal methods tools into the software development life cycle, and which leverage the formal specification in the subsequent validation of the system. The tools developed use a new code generation infrastructure that has been built as part of this PhD project and implemented in the Overture tool -- a formal methods tool that supports the Vienna Development Method. The development of the code generation infrastructure has involved the re-design of the software architecture of Overture. The new architecture improves the reusability and extensibility of Overture in order to accommodate the needs and requirements of software extensions targeting it. The tools developed in this PhD project have successfully supported three case studies from externally funded projects. The feedback received from the case study work has further helped improve the code generation infrastructure and the tools built using it.
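
    As a rough illustration of what code generation from a formal specification involves -- not the actual output of Overture's generator, which may differ in structure and naming -- the sketch below pairs a small VDM-SL function with hand-written Java of the kind a generator might emit, turning the pre-condition into a runtime check.

    ```java
    // VDM-SL source (an explicit function with a pre-condition):
    //   divide : nat * nat -> nat
    //   divide(a, b) == a div b
    //   pre b <> 0;
    public final class Maths {
        /** Generator-style method: the VDM pre-condition becomes a runtime check. */
        public static long divide(long a, long b) {
            if (!pre_divide(a, b)) {
                throw new RuntimeException("Pre-condition failure: divide");
            }
            return a / b; // 'div' on naturals maps to Java integer division
        }

        /** The pre-condition as a separately callable predicate. */
        public static boolean pre_divide(long a, long b) {
            return b != 0;
        }
    }
    ```

    Keeping the pre-condition as its own predicate is one way generated code can stay useful for the subsequent validation step the dissertation describes: the same check can be exercised by tests independently of the operation itself.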

    Formal Specification and Runtime Verification of Parallel Systems using Interval Temporal Logic (ITL)

    Runtime Verification (RV) is the discipline of monitoring systems at runtime in order to check the satisfaction or violation of a given correctness property. Parallel systems are more complicated than sequential systems, and their correctness properties differ from those of sequential systems. For instance, the absence of deadlock has to be guaranteed, and a mutual exclusion mechanism has to be applied when a resource is shared between more than one system and the form of parallelism is true concurrency. A sequential runtime verification framework therefore cannot handle systems that run in parallel: such frameworks are built to monitor a single system at a time, whereas for parallel systems a framework has to handle many systems at once. AnaTempura is a runtime verification tool that can handle only a single system at a time. To solve this problem, I extended AnaTempura to handle parallel systems. In this thesis, I propose a Parallel Runtime Verification Framework (PRVF) that can handle systems whose designs use parallel architectures, such as multi-core processors. The proposed model can check system behaviour at runtime in order to either guarantee satisfaction or detect violations of correctness properties. My technique is based on Interval Temporal Logic (ITL) and its executable subset Tempura, using the AnaTempura tool to verify properties at runtime. As a demonstration, I use the case study of the private L2 cache memory of a multi-core processor architecture. My objectives are to i) design an MSI protocol that ensures cache memory coherence and ii) fulfil the main memory consistency model at runtime. I achieve this via a formal Tempura specification of the cache controller, which is then verified at runtime against my objectives for memory consistency and cache coherence using AnaTempura. The presented specifications can be extended not only to capture correctness but also to monitor the performance of a cache memory controller. The case study is then evaluated by integrating AnaTempura with MATLAB in order to check correctness properties such as memory consistency and cache coherence.
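
    The MSI protocol named in the case study is standard: each cache line is Modified, Shared, or Invalid, and coherence requires that a Modified copy be the only valid copy of a block. The sketch below restates those transitions and the coherence invariant as a runtime-checkable model in plain Java; it is illustrative only and does not use the Tempura/AnaTempura notation from the thesis.

    ```java
    import java.util.List;

    enum MsiState { MODIFIED, SHARED, INVALID }

    final class CacheLine {
        MsiState state = MsiState.INVALID;

        /** Local processor read: an INVALID line fetches the block and becomes SHARED. */
        void onProcessorRead() {
            if (state == MsiState.INVALID) state = MsiState.SHARED;
            // MODIFIED and SHARED lines hit locally and keep their state.
        }

        /** Local processor write: the line becomes MODIFIED (other copies are invalidated on the bus). */
        void onProcessorWrite() {
            state = MsiState.MODIFIED;
        }

        /** Another core writes the same block: this copy must be invalidated. */
        void onBusWrite() {
            state = MsiState.INVALID;
        }

        /** Another core reads the block: a MODIFIED copy is written back and downgraded to SHARED. */
        void onBusRead() {
            if (state == MsiState.MODIFIED) state = MsiState.SHARED;
        }

        /** The coherence invariant a runtime monitor can assert after each transition. */
        static void checkCoherence(List<CacheLine> copies) {
            long modified = copies.stream().filter(c -> c.state == MsiState.MODIFIED).count();
            long shared = copies.stream().filter(c -> c.state == MsiState.SHARED).count();
            assert modified <= 1 && (modified == 0 || shared == 0)
                : "MSI violation: a MODIFIED copy must be the only valid copy";
        }
    }
    ```

    A runtime verifier in the thesis's style would evaluate such an invariant over intervals of the execution rather than after single steps, which is what ITL's interval semantics adds over plain assertions.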

    Software engineering perspectives on physiological computing

    Physiological computing is an interesting and promising concept to widen the communication channel between (human) users and computers, thus allowing an increase of software systems' contextual awareness and rendering software systems smarter than they are today. Using physiological inputs in pervasive computing systems allows re-balancing the information asymmetry between the human user and the computer system: while pervasive computing systems are well able to flood the user with information and sensory input (such as sounds, lights, and visual animations), users only have a very narrow input channel to computing systems, most of the time restricted to keyboards, mice, touchscreens, accelerometers and GPS receivers (e.g. through smartphone usage). Interestingly, this information asymmetry often forces the user to submit to the quirks of the computing system to achieve their goals -- for example, users may have to provide information the software system demands through a narrow, time-consuming input mode that the system could sense implicitly from the human body. Physiological computing is a way to circumvent these limitations; however, systematic means for developing and moulding physiological computing applications into software are still unknown. This thesis proposes a methodological approach to the creation of physiological computing applications that makes use of component-based software engineering. Components help impose a clear structure on software systems in general, and can thus be used for physiological computing systems as well. As an additional bonus, using components allows physiological computing systems to leverage reconfigurations as a means to control and adapt their own behaviours. This adaptation can be used to adjust the behaviour both to the human and to the available computing environment in terms of resources and available devices -- an activity that is crucial for complex physiological computing systems. With the help of components and reconfigurations, it is possible to structure the functionality of physiological computing applications in a way that makes them manageable and extensible, thus allowing a stepwise and systematic extension of a system's intelligence. Using reconfigurations entails a larger issue, however. Understanding and fully capturing the behaviour of a system under reconfiguration is challenging, as the system may change its structure in ways that are difficult to fully predict. Therefore, this thesis also introduces a means for formal verification of reconfigurations based on assume-guarantee contracts. With the proposed assume-guarantee contract framework, it is possible to prove that a given system design (including component behaviours and reconfiguration specifications) satisfies real-time properties expressed as assume-guarantee contracts, using a variant of real-time linear temporal logic introduced in this thesis -- metric interval temporal logic for reconfigurable systems. Finally, this thesis embeds both the practical approach to the realisation of physiological computing systems and the formal verification of reconfigurations into Scrum, a modern and agile software development methodology. The surrounding methodological approach is intended to provide a frame for the systematic development of physiological computing systems from first psychological findings to a working software system with both satisfactory functionality and software quality aspects.
    By integrating practical and theoretical aspects of software engineering into a self-contained development methodology, this thesis proposes a roadmap and guidelines for the creation of new physiological computing applications.
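
    As a simplified sketch of the assume-guarantee idea: a trace satisfies a contract if, whenever the assumption about the environment holds, the component's guarantee holds too. The thesis states contracts in a metric interval temporal logic; here both sides are reduced to plain predicates over a recorded trace, and all names are hypothetical.

    ```java
    import java.util.List;
    import java.util.function.Predicate;

    /** An assume-guarantee contract: the trace must satisfy assumption -> guarantee. */
    record Contract<E>(Predicate<List<E>> assumption, Predicate<List<E>> guarantee) {
        boolean satisfiedBy(List<E> trace) {
            // Material implication: the guarantee is only demanded when the assumption holds.
            return !assumption.test(trace) || guarantee.test(trace);
        }
    }
    ```

    Compositional verification then discharges each component's guarantee under its assumption, so a reconfiguration is admissible as long as the newly wired components' assumptions are still covered by their neighbours' guarantees.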

    Towards quality programming in the automated testing of distributed applications

    Software testing is a very time-consuming and tedious activity and accounts for over 25% of the cost of software development. In addition to its high cost, manual testing is unpopular and often inconsistently executed. Software Testing Environments (STEs) overcome the deficiencies of manual testing through automating the test process and integrating testing tools to support a wide range of test capabilities. Most prior work on testing is in single-threaded applications. This thesis is a contribution to the testing of distributed applications, which has not been well explored. To address two crucial issues in testing -- when to stop testing and how good the software is after testing -- a statistics-based integrated test environment is presented, extending the testing concept of Quality Programming to distributed applications. It provides automatic support for test execution by the Test Driver, test development by the SMAD Tree Editor and the Test Data Generator, test failure analysis by the Test Results Validator and the Test Paths Tracer, test measurement by the Quality Analyst, test management by the Test Manager and test planning by the Modeller. These tools are integrated around a public, shared data model describing the data entities and relationships which are manipulable by these tools. It enables the test process to enter the life cycle early, because quality planning and message-flow routings are defined during modelling. After well-prepared modelling and requirements specification are undertaken, the test process and the software design and implementation can proceed concurrently. A simple banking application written using Java Remote Method Invocation (RMI) and Java DataBase Connectivity (JDBC) illustrates how an application fits into the integrated test environment. The concept of automated test execution through mobile agents across multiple platforms is also illustrated on this 3-tier client/server application. Funding: The National Science Council, Taiwan; The Ministry of National Defense, Taiwan.
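
    For context, a 3-tier RMI service of the kind described exposes a remote interface like the hypothetical one below; the thesis's actual banking interface is not given in the abstract, so the operations are illustrative.

    ```java
    import java.rmi.Remote;
    import java.rmi.RemoteException;

    /** Remote interface of a hypothetical banking service; every call may fail remotely. */
    public interface Bank extends Remote {
        void deposit(String account, long amountCents) throws RemoteException;
        void withdraw(String account, long amountCents) throws RemoteException;
        long balance(String account) throws RemoteException;
    }
    ```

    A test driver for such an application must exercise the remote interface across machines and account for partial failures (RemoteException), which is exactly what makes distributed testing harder than the single-threaded case the thesis contrasts it with.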

    Argos container, core and extension framework

    With the emergence of the internet and e-commerce in the 1990s, new common problems arose when developing applications that span the internet. These common problems include, among others, scalability, robustness, networking, database usage and heterogeneity. Software developers creating internet applications found themselves reinventing the wheel repeatedly. This led to the creation of middleware systems that aimed to solve these common problems. This thesis presents Argos, which takes a different approach to building middleware systems. Argos is able to provide tailored, flexible and extensible middleware support using reflection, dependency injection, Java Management Extensions (JMX) notifications and hot deployment. The result is a platform capable of tackling domain-specific challenges. It provides rapid development of feature-rich applications for managing and processing information. Argos has undergone thorough testing, demonstrating production stability.
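
    A brief sketch of the JMX notification mechanism the abstract refers to, using the standard javax.management API; the MBean and its attribute are hypothetical rather than taken from the Argos codebase.

    ```java
    import javax.management.Notification;
    import javax.management.NotificationBroadcasterSupport;
    import java.util.concurrent.atomic.AtomicLong;

    /** Standard-MBean interface: by JMX convention, the class below must be named Worker. */
    interface WorkerMBean {
        int getQueueSize();
    }

    final class Worker extends NotificationBroadcasterSupport implements WorkerMBean {
        private final AtomicLong seq = new AtomicLong();
        private volatile int queueSize;

        public int getQueueSize() { return queueSize; }

        void setQueueSize(int size) {
            queueSize = size;
            // Emit a JMX notification that registered listeners (e.g., the container) can react to.
            sendNotification(new Notification(
                "worker.queue.changed", this, seq.incrementAndGet(),
                "queue size is now " + size));
        }
    }
    ```

    In a container like Argos, such notifications are one plausible hook for reacting to component state changes, e.g. to trigger hot deployment or reconfiguration logic.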

    Monitoring-Oriented Programming: A Tool-Supported Methodology for Higher Quality Object-Oriented Software

    This paper presents a tool-supported methodological paradigm for object-oriented software development, called monitoring-oriented programming and abbreviated MOP, in which runtime monitoring is a basic software design principle. The general idea underlying MOP is that software developers insert specifications in their code via annotations. Actual monitoring code is automatically synthesized from these annotations before compilation and integrated at appropriate places in the program, according to user-defined configuration attributes. This way, the specification is checked at runtime against the implementation. Moreover, violations and/or validations of specifications can trigger user-defined code at any point in the program, in particular recovery code, outputting or sending messages, or raising exceptions. The MOP paradigm does not promote or enforce any specific formalism to specify requirements: it allows users to plug in their favorite or domain-specific specification formalisms via logic plug-in modules. There are two major technical challenges that MOP-supporting tools unavoidably face: monitor synthesis and monitor integration. The former is heavily dependent on the specification formalism and comes as part of the corresponding logic plug-in, while the latter is uniform for all specification formalisms and depends only on the target programming language. An experimental prototype tool, called Java-MOP, is also discussed, which currently supports most but not all of the desired MOP features. MOP aims at reducing the gap between formal specification and implementation, by integrating the two and allowing them together to form a system.
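
    To make the workflow concrete, the sketch below shows a hypothetical specification annotation and the kind of check a synthesized monitor would weave into the code before compilation. The concrete annotation syntax accepted by Java-MOP is not shown in this abstract, so everything here is illustrative.

    ```java
    // Hypothetical specification the developer writes as an annotation:
    //   @Monitor("always (use() implies not previously close())")
    final class Resource {
        private boolean closed; // state the synthesized monitor would track

        void use() {
            // A check of the kind the monitor synthesizer would insert here:
            if (closed) throw new IllegalStateException("specification violated: use after close");
            // ... actual work ...
        }

        void close() {
            closed = true;
        }
    }
    ```

    The division of labour the paper describes maps directly onto this sketch: translating the formula in the annotation into the boolean check is monitor synthesis (formalism-specific), while placing that check inside use() is monitor integration (language-specific).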

    Context-aware multi-factor authentication

    Dissertation submitted as a partial requirement for the degree of Master in Computer Engineering (Mestrado em Engenharia Informática). Authentication systems, as available today, are inappropriate for the requirements of ubiquitous, heterogeneous and large-scale distributed systems. Some important limitations are: (i) the use of weak or rigid authentication factors as principals' identity proofs, (ii) inflexibility in combining different authentication modes for dynamic and context-aware interaction criteria, (iii) models that are not extensible to integrate new or emergent pervasive authentication factors, and (iv) difficulty in managing the coexistence of multi-factor authentication proofs in a unified single sign-on solution. The objective of this dissertation is the design, implementation and experimental evaluation of a platform supporting multi-factor authentication services, as a contribution to overcoming the above limitations. The devised platform provides a uniform and flexible authentication base for multi-factor authentication requirements and context-aware authentication modes for ubiquitous applications and services. The main contribution is the design and implementation of an extensible authentication framework model, integrating classic as well as new pervasive authentication factors that can be composed for different context-aware dynamic requirements. Flexibility criteria are addressed by the establishment of a unified authentication back-end, supporting authentication modes as defined processes and rules expressed in a SAML-based declarative markup language. The authentication base supports an extended single sign-on system that can be dynamically tailored to multi-factor authentication policies, considering large-scale distributed applications and ubiquitous interaction needs.
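
    A hedged sketch of the factor-composition idea in Java. The real platform expresses such policies declaratively in a SAML-based language rather than in code, and every name below is hypothetical.

    ```java
    import java.util.List;

    /** One pluggable authentication factor (password, token, biometric, contextual signal, ...). */
    interface AuthenticationFactor {
        /** Returns true if this factor's proof verifies for the given principal. */
        boolean verify(String principal);
    }

    /** A context-aware policy: require k of n factors; context can raise or lower k. */
    final class MultiFactorPolicy {
        private final List<AuthenticationFactor> factors;
        private final int required;

        MultiFactorPolicy(List<AuthenticationFactor> factors, int required) {
            this.factors = factors;
            this.required = required;
        }

        boolean authenticate(String principal) {
            long passed = factors.stream().filter(f -> f.verify(principal)).count();
            return passed >= required;
        }
    }
    ```

    New pervasive factors then plug in as additional AuthenticationFactor implementations without changing the policy engine, which is the extensibility property the dissertation targets.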