
    Correctness of services and their composition

    We study the correctness of services and their composition and investigate how the design of correct service compositions can be systematically supported. We focus on the communication protocol of a service, approach these questions using formal methods, and contribute to three scenarios of service-oriented computing (SOC).

    Hierarchical Syntactic Models for Human Activity Recognition through Mobility Traces

    Recognizing users’ daily life activities without disrupting their lifestyle is a key functionality for enabling a broad variety of advanced services in a Smart City, from energy-efficient management of urban spaces to mobility optimization. In this paper, we propose a novel method for human activity recognition from a collection of outdoor mobility traces acquired through wearable devices. Our method exploits the regularities naturally present in human mobility patterns to construct syntactic models in the form of finite state automata, using an approach known as grammatical inference. We also introduce a measure of similarity that accounts for the intrinsic hierarchical nature of such models and makes it possible to identify the common traits in the paths induced by different activities at various granularity levels. Our method has been validated on a dataset of real traces representing movements of users in a large metropolitan area. The experimental results show the effectiveness of our similarity measure in correctly identifying a set of common coarse-grained activities, as well as their refinement at a finer level of granularity.
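
    As a rough illustration of the grammatical-inference step mentioned above, the Python sketch below builds a prefix tree acceptor (a simple finite state automaton) from mobility traces. The trace encoding, the state-merging strategy, and the hierarchical similarity measure from the paper are not reproduced; all place names and traces are hypothetical.

    ```python
    # Sketch: build a prefix tree acceptor (PTA) from mobility traces,
    # a common starting point of grammatical-inference algorithms.

    class PTA:
        def __init__(self):
            self.transitions = {}   # (state, symbol) -> successor state
            self.accepting = set()  # states where at least one trace ends
            self.next_state = 1     # state 0 is the initial state

        def add_trace(self, trace):
            """Insert one trace (a sequence of visited places) into the tree."""
            state = 0
            for symbol in trace:
                key = (state, symbol)
                if key not in self.transitions:
                    self.transitions[key] = self.next_state
                    self.next_state += 1
                state = self.transitions[key]
            self.accepting.add(state)

        def accepts(self, trace):
            """Check whether a trace follows a path already present in the model."""
            state = 0
            for symbol in trace:
                state = self.transitions.get((state, symbol))
                if state is None:
                    return False
            return state in self.accepting

    # Hypothetical traces: coarse locations visited by one user.
    pta = PTA()
    pta.add_trace(["home", "bus_stop", "office"])
    pta.add_trace(["home", "bus_stop", "gym"])
    print(pta.accepts(["home", "bus_stop", "office"]))  # True
    print(pta.accepts(["home", "park"]))                # False
    ```

    Grammatical-inference methods typically generalize such a tree by merging compatible states, which is where the regularities in mobility patterns come into play.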

    Merlin: A Language for Provisioning Network Resources

    This paper presents Merlin, a new framework for managing resources in software-defined networks. With Merlin, administrators express high-level policies as programs in a declarative language. The language includes logical predicates to identify sets of packets, regular expressions to encode forwarding paths, and arithmetic formulas to specify bandwidth constraints. The Merlin compiler uses a combination of advanced techniques, including a constraint solver that allocates bandwidth using parameterizable heuristics, to translate these policies into code that can be executed on network elements. To facilitate dynamic adaptation, Merlin provides mechanisms for delegating control of sub-policies and for verifying that modifications made to sub-policies do not violate global constraints. Experiments demonstrate the expressiveness and scalability of Merlin on real-world topologies and applications. Overall, Merlin simplifies network administration by providing high-level abstractions for specifying network policies and a scalable infrastructure for enforcing them.
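
    The sketch below illustrates the three policy ingredients the abstract names, a logical predicate over packets, a regular expression over forwarding paths, and a bandwidth constraint, expressed as plain Python data. It is not actual Merlin syntax; the field names, topology, and policy are hypothetical.

    ```python
    # Illustrative only: the three ingredients of a Merlin-style policy.
    import re
    from dataclasses import dataclass

    @dataclass
    class Policy:
        predicate: object        # logical predicate identifying a packet set
        path_regex: str          # regular expression over network elements
        min_bandwidth_mbps: int  # arithmetic (bandwidth) constraint

    # HTTP traffic must traverse a DPI box between ingress and egress
    # and receive at least 100 Mbps.
    http_policy = Policy(
        predicate=lambda pkt: pkt.get("tcp_dst") == 80,
        path_regex=r"ingress (\w+ )*dpi (\w+ )*egress",
        min_bandwidth_mbps=100,
    )

    def path_complies(policy, path):
        """Check one concrete forwarding path against the policy's regex."""
        return re.fullmatch(policy.path_regex, " ".join(path)) is not None

    print(http_policy.predicate({"tcp_dst": 80}))                                # True
    print(path_complies(http_policy, ["ingress", "s1", "dpi", "s2", "egress"]))  # True
    print(path_complies(http_policy, ["ingress", "s1", "egress"]))               # False
    ```

    A compiler like Merlin's would additionally hand the bandwidth constraints to a solver to compute an allocation over the topology; that step is beyond this sketch.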

    Modeling Software Components Using Behavior Protocols

    This thesis proposes a novel approach for describing a software component's behavior. The behavior is specified using behavior protocols: a notation similar to regular expressions that is easy to read and comprehend.
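
    A minimal sketch of the regular-expression flavor of such specifications follows, checking a component's observed trace of method-call events against a protocol. The thesis's concrete notation (e.g., operators distinguishing accepted from emitted calls) is not reproduced; the protocol and event names below are hypothetical.

    ```python
    # Sketch: match a trace of method-call events against a
    # regular-expression-style behavior protocol.
    import re

    # The component must be opened, then read or written arbitrarily
    # often, then closed.
    protocol = r"open;(read;|write;)*close;"

    def conforms(trace, protocol):
        """Encode the trace as 'event;' tokens and match the full protocol."""
        return re.fullmatch(protocol, "".join(e + ";" for e in trace)) is not None

    print(conforms(["open", "read", "write", "close"], protocol))  # True
    print(conforms(["read", "close"], protocol))                   # False
    ```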

    University of Helsinki Department of Computer Science Annual Report 1998

    Simplicity-oriented lifelong learning of web applications

    Nowadays, web applications are ubiquitous. Entire business models revolve around making their services available over the Internet, anytime, anywhere in the world. Due to today’s rapid development practices, software changes are released faster than ever before, creating the risk of losing control over the quality of the delivered products. To counter this, appropriate testing methodologies must be deeply integrated into each phase of the development cycle to identify potential defects as early as possible and to ensure that the product operates as expected in production. The use of low- and no-code tools and code generation technologies can drastically reduce the implementation effort by using well-tailored (graphical) Domain-Specific Languages (DSLs) to focus on what is important: the product. DSLs and corresponding Integrated Modeling Environments (IMEs) are a key enabler for quality control because many system properties can already be verified at a pre-product level. However, to verify that the product fulfills given functional requirements at runtime, end-to-end testing is still a necessity. This dissertation describes the implementation of a lifelong learning framework for the continuous quality control of web applications. In this framework, models representing user-level behavior are mined from running systems using active automata learning, and system properties are verified using model checking. All this is achieved in a continuous and fully automated manner. Code changes trigger testing, learning, and verification processes which generate feedback that can be used for model refinement or product improvement. The main focus of this framework is simplicity. On the one hand, it allows Quality Assurance (QA) engineers to apply learning-based testing techniques to web applications with minimal effort, even without writing code; on the other hand, it allows automation engineers to easily implement these techniques in modern development workflows driven by Continuous Integration and Continuous Deployment (CI/CD). The effectiveness of this framework is amplified by the Language-Driven Engineering (LDE) approach to web development. Key to this is the text-based DSL iHTML, which enables the instrumentation of user interfaces to make web applications learnable by design, i.e., they adhere to practices that allow the fully automated inference of behavioral models without prior specification of an input alphabet. Because the code generators are designed to emit instrumented web-based products, the effort for quality control in the LDE ecosystem is reduced to formulating runtime properties in temporal logic and verifying them against learned models.
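
    The skeleton below outlines the continuous loop the abstract describes. All interfaces here are hypothetical placeholders: a real setup would plug in an active automata learning library (an L*-style learner posing input sequences to the running application) and a model checker.

    ```python
    # Conceptual sketch of the lifelong-learning loop; not the framework's
    # actual API. Both functions below are placeholders.

    def learn_model(app_url, input_alphabet):
        """Placeholder: actively learn a finite state model of user-level
        behavior by executing input sequences against the running app."""

    def check_properties(model, temporal_properties):
        """Placeholder: model-check the learned model against properties
        formulated in temporal logic and return any counterexamples."""

    def on_code_change(app_url, input_alphabet, temporal_properties):
        """Triggered from CI/CD on every release: learn, verify, feed back."""
        model = learn_model(app_url, input_alphabet)
        counterexamples = check_properties(model, temporal_properties)
        # Non-empty counterexamples drive model refinement or product fixes.
        return counterexamples
    ```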

    Witness-based validation of verification results with applications to software-model checking

    In the scientific world, formal verification is an established engineering technique to ensure the correctness of hardware and software systems. Because formal verification is an arduous and error-prone endeavor, automated solutions are desirable, and researchers continue to develop new algorithms and optimize existing ones to push the boundaries of what can be verified automatically. These efforts do not go unnoticed by industry. Hardware-circuit designs, flight-control systems, and operating-system drivers are just a few examples of systems where formal verification is already part of the quality-assurance repertoire. Nevertheless, the primary fields of application for formal verification remain those where errors carry a high risk of significant damage, either financial or physical, because the costs of formal verification are considered too high for most other projects, despite the vast advancements the research community has made in the effectiveness and efficiency of formal verification techniques over the last decades. We present and address two potential reasons for this discrepancy that we identified in the field of automated formal software verification. (1) Even for experts in the field, it is often difficult to decide which of the multitude of available techniques is the most suitable solution to recommend for a given verification problem. Moreover, even if a suitable solution is found for a given system, there is no guarantee that the solution remains sustainable as the system evolves. Consequently, the cost of finding and maintaining a suitable approach for applying formal software verification to real-world systems is high. (2) Even assuming that a suitable and maintainable solution for applying formal software verification to a given system is found and verification results are obtained, developers of the system still require further guidance towards making practical use of these results, which often differ significantly from the results of classical quality-assurance techniques they are familiar with, such as testing. To mitigate the first issue, using the open-source software-verification framework CPAchecker, we investigate several popular formal software-verification techniques, such as predicate abstraction, Impact, bounded model checking, k-induction, and PDR, and perform an extensive and rigorous experimental study on a large and established benchmark set to identify their comparative strengths and weaknesses in terms of effectiveness and efficiency, providing a basis for choosing the best technique for a given problem. To mitigate the second issue, we propose a concrete standard format for the representation and communication of verification results that raises the bar from plain "yes" or "no" answers to verification witnesses, which are valuable artifacts of the verification process that contain detailed information discovered during the analysis. We then use these verification witnesses for several applications: To increase the trust in verification results, we first develop several independent validators based on violation witnesses, i.e., verification witnesses that represent bugs detected by a verifier. We then extend our validators to also verify the verification results obtained from a successful verification, which are represented by correctness witnesses. Lastly, we develop an interactive web service to store and retrieve these verification witnesses, to provide online validation that quickly de-prioritizes likely wrong results, and to graphically visualize the witnesses, as an example of how verification can be integrated into a development process. Since the introduction of our proposed standard format for verification witnesses, it has been adopted by over thirty different software verifiers, and our witness-based result-validation tools have become a core component in the scoring process of the International Competition on Software Verification.
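
    As a rough illustration of the idea behind witness-based validation, the sketch below models a violation witness as a sequence of branching decisions that restricts which program paths a validator must re-explore. The actual standardized exchange format is not reproduced, and all line numbers and paths are hypothetical.

    ```python
    # Sketch: a violation witness as branching decisions, and a "validator"
    # that keeps only the program paths consistent with the witness. A real
    # validator would then re-analyze those paths to confirm (or refute)
    # the reported specification violation.

    # Hypothetical witness: take the 'else' branch at line 3, then the
    # 'then' branch at line 7.
    violation_witness = [
        {"line": 3, "branch_taken": False},
        {"line": 7, "branch_taken": True},
    ]

    def replay(program_paths, witness):
        """Keep the paths that take every branching decision recorded in
        the witness (order is ignored in this simplification)."""
        def consistent(path):
            return all(step in path for step in witness)
        return [p for p in program_paths if consistent(p)]

    paths = [
        [{"line": 3, "branch_taken": False}, {"line": 7, "branch_taken": True}],
        [{"line": 3, "branch_taken": True},  {"line": 7, "branch_taken": True}],
    ]
    print(replay(paths, violation_witness))  # keeps only the witnessed path
    ```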

    Workflow Similarity Analysis

    As distributed systems have emerged, they have become widely adopted by scientific communities and commercial vendors. Some of them settled into architectural models that are today broadly known as grid computing and service-oriented architecture. Both are based on the service and workflow paradigms as a single shareable unit of work. Unfortunately, the increasing number of published workflows has raised the problem of their distribution and management, e.g., searching repositories to find workflows similar to a given one in order to increase the efficiency of calculations. In this paper, a similar-workflow search algorithm based on semantic type comparison is proposed. To evaluate the algorithm's usability and precision, an experiment was conducted on workflows extracted from the Feta repository. The process involved reasoning based on myGrid ontologies. The results were compared to those obtained from another algorithm, based on an analysis of the names of the workflow components using the TF-IDF weight. The experiment shows that semantics and ontologies play a significant role in service and workflow representation.
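
    To make the name-based baseline concrete, the following sketch compares workflows by their component names using TF-IDF weights and cosine similarity. The semantic, ontology-based comparison proposed in the paper is not reproduced, and the component names below are hypothetical.

    ```python
    # Sketch: TF-IDF weighted comparison of workflows by component names.
    import math
    from collections import Counter

    def tf_idf_vectors(workflows):
        """Each workflow is a list of component names; return one TF-IDF
        weighted vector (term -> weight) per workflow."""
        n = len(workflows)
        df = Counter(term for wf in workflows for term in set(wf))
        vectors = []
        for wf in workflows:
            tf = Counter(wf)
            vectors.append({t: (c / len(wf)) * math.log(n / df[t])
                            for t, c in tf.items()})
        return vectors

    def cosine(u, v):
        """Cosine similarity of two sparse term-weight vectors."""
        dot = sum(u[t] * v.get(t, 0.0) for t in u)
        norm = lambda w: math.sqrt(sum(x * x for x in w.values())) or 1.0
        return dot / (norm(u) * norm(v))

    wfs = [["fetch", "align", "render"],
           ["fetch", "align", "plot"],
           ["train", "eval"]]
    v = tf_idf_vectors(wfs)
    print(cosine(v[0], v[1]))  # higher: two shared components
    print(cosine(v[0], v[2]))  # 0.0: no shared components
    ```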