
    Portability of Process-Aware and Service-Oriented Software: Evidence and Metrics

    Modern software systems are becoming increasingly integrated and are required to operate over organizational boundaries through networks. The development of such distributed software systems has been shaped by the orthogonal trends of service-orientation and process-awareness. These trends put an emphasis on technological neutrality, loose coupling, independence from the execution platform, and location transparency. Execution platforms supporting these trends provide context and cross-cutting functionality to applications and are referred to as engines. Applications and engines interface via language standards: the engine implements a standard, and an application implemented in conformance to this standard can be executed on the engine. A primary motivation for the usage of standards is the portability of applications. Portability, the ability to move software among different execution platforms without the necessity for full or partial reengineering, protects against vendor lock-in and enables application migration to newer engines. The arrival of cloud computing has made it easy to provision new and scalable execution platforms. To enable easy platform changes, existing international standards for implementing service-oriented and process-aware software name the portability of standardized artifacts as an important goal. Moreover, they provide platform-independent serialization formats that enable the portable implementation of applications. Nevertheless, practice shows that service-oriented and process-aware applications today are limited with respect to their portability. The reason is that engines rarely implement a complete standard, but leave out parts or differ in their interpretation of the standard. As a consequence, even applications that claim to be portable by conforming to a standard might not be so. This thesis contributes to the development of portable service-oriented and process-aware software in two ways: Firstly, it provides evidence for the existence of portability issues and the insufficiency of standards for guaranteeing software portability. Secondly, it derives and validates a novel measurement framework for quantifying portability. We present a methodology for benchmarking the conformance of engines to a language standard and implement it in a fully automated benchmarking tool. Several test suites of conformance tests for two different languages, the Web Services Business Process Execution Language 2.0 and the Business Process Model and Notation 2.0, uncover a variety of standard conformance issues in existing engines. This provides evidence that the standard-based portability of applications is a real issue. Based on these results, this thesis derives a measurement framework for portability. The framework is aligned with the ISO/IEC Systems and software Quality Requirements and Evaluation method, the recent revision of the renowned ISO/IEC software quality model and measurement methodology. This quality model separates the software quality characteristic of portability into the subcharacteristics of installability, adaptability, and replaceability. Each of these characteristics forms one part of the measurement framework. This thesis targets each characteristic with a separate analysis, metrics derivation, evaluation, and validation. We discuss existing metrics from the body of literature and derive new extensions specifically tailored to the evaluation of service-oriented and process-aware software.
Proposed metrics are defined formally and validated theoretically using an informal and a formal validation framework. Furthermore, the computation of the metrics has been prototypically implemented. This implementation is used to evaluate the performance of the metrics in experiments based on large-scale software libraries obtained from public open source software repositories. In summary, this thesis provides evidence that contemporary standards and their implementations are not sufficient for enabling the portability of process-aware and service-oriented applications. Furthermore, it proposes, validates, and practically evaluates a framework for measuring portability.
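
    The following sketch illustrates, under simplifying assumptions, the kind of element-level portability measurement such a framework might compute: it scores a process definition against the set of language constructs an engine is known to support. The element names, engine coverage sets, and the function name portability_score are illustrative assumptions and do not reproduce the thesis' actual metric definitions.

```python
# Hypothetical sketch of an element-based portability metric.
# All element names and engine support sets below are illustrative only.

def portability_score(process_elements, supported_elements):
    """Return the fraction of a process' language elements that a target
    engine is known to support (1.0 = fully portable to that engine)."""
    if not process_elements:
        return 1.0  # an empty process is trivially portable
    supported = sum(1 for e in process_elements if e in supported_elements)
    return supported / len(process_elements)

# Example: a BPEL-like process using four language constructs, checked against
# two fictitious engines with different standard coverage.
process = ["invoke", "receive", "compensate", "forEach"]
engine_a = {"invoke", "receive", "assign", "forEach"}
engine_b = {"invoke", "receive", "compensate", "forEach", "assign"}

print(portability_score(process, engine_a))  # 0.75 -> only partially portable
print(portability_score(process, engine_b))  # 1.0  -> portable w.r.t. elements used
```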

    Similarity of business process models: metrics and evaluation

    It is common for large and complex organizations to maintain repositories of business process models in order to document and to continuously improve their operations. Given such a repository, this paper deals with the problem of retrieving those process models in the repository that most closely resemble a given process model or fragment thereof. The paper presents three similarity metrics that can be used to answer such queries: (i) label matching similarity that compares the labels attached to process model elements; (ii) structural similarity that compares element labels as well as the topology of process models; and (iii) behavioral similarity that compares element labels as well as causal relations captured in the process model. These similarity metrics are experimentally evaluated in terms of precision and recall, and in terms of correlation of the metrics with respect to human judgement. The experimental results show that all three metrics yield comparable results, with structural similarity slightly outperforming the other two metrics. Also, all three metrics outperform traditional search engines when it comes to searching through a repository for similar business process models.
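
    As a rough illustration of metric (i), the sketch below pairs task labels of two process models by string similarity and averages the match scores. The greedy pairing and the normalization used here are simplifying assumptions, not the paper's exact definition.

```python
# Illustrative sketch of a label-matching similarity: greedily pair labels by
# string similarity and average the scores over the larger model.
from difflib import SequenceMatcher

def label_matching_similarity(labels_a, labels_b):
    if not labels_a or not labels_b:
        return 0.0
    remaining = list(labels_b)
    total = 0.0
    for a in labels_a:
        if not remaining:
            break
        # pick the best-matching, not-yet-used label from the other model
        best = max(remaining, key=lambda b: SequenceMatcher(None, a, b).ratio())
        total += SequenceMatcher(None, a, best).ratio()
        remaining.remove(best)
    return total / max(len(labels_a), len(labels_b))

model_1 = ["Receive order", "Check credit", "Ship goods"]
model_2 = ["Order received", "Credit check", "Send invoice"]
print(round(label_matching_similarity(model_1, model_2), 2))
```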

    Xcerpt: A Rule-Based Query and Transformation Language for the Web

    This thesis investigates querying the Web and the Semantic Web. It proposes a new rule-based query language called Xcerpt. Xcerpt differs from other query languages in that it uses patterns instead of paths for the selection of data, and in that it supports both rule chaining and recursion. Rule chaining serves for structuring large queries, for designing complex query programs (e.g. involving queries to the Semantic Web), and for modelling inference rules. Query patterns may contain special constructs like partial subqueries, optional subqueries, or negated subqueries that account for the particularly flexible structure of data on the Web. Furthermore, this thesis introduces the syntax of the language Xcerpt, which is illustrated on a large collection of use cases from both the conventional Web and the Semantic Web. In addition, a declarative semantics in the form of a Tarski-style model theory is described, and an algorithm is proposed that performs a backward chaining evaluation of Xcerpt programs. This algorithm has also been partly implemented in a prototypical runtime system. A salient aspect of this algorithm is the specification of a non-standard unification algorithm called simulation unification that supports the new query constructs described above. This unification is symmetric in the sense that variables in both terms can be bound. On the other hand, in contrast to standard unification, it is asymmetric in the sense that the unification determines that one term is a subterm of the other.
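
    The toy function below is meant only to convey the asymmetric intuition behind simulation, namely that a partial query term matches a data term if each of its subterms can be matched among the data term's subterms. It is not Xcerpt's actual simulation unification algorithm, and the term encoding is an assumption made for illustration.

```python
# Conceptual sketch only: terms are (label, [children]) tuples, and a query
# term "simulates into" a data term if the labels agree and every query child
# can be matched against some distinct data child (other children may exist).

def simulates(query, data):
    q_label, q_children = query
    d_label, d_children = data
    if q_label != d_label:
        return False
    unused = list(d_children)
    for qc in q_children:
        match = next((dc for dc in unused if simulates(qc, dc)), None)
        if match is None:
            return False
        unused.remove(match)
    return True

data = ("book", [("title", []), ("author", []), ("year", [])])
query = ("book", [("author", [])])   # partial pattern: further subterms allowed
print(simulates(query, data))        # True
```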

    How To Touch a Running System

    The increasing importance of distributed and decentralized software architectures entails more and more attention to adaptive software. Obtaining adaptiveness, however, is a difficult task, as the software design needs to foresee and cope with a variety of situations. Using reconfiguration of components facilitates this task, as the adaptivity is handled at the architecture level instead of directly in the code. This results in a separation of concerns: the appropriate reconfiguration can be devised on a coarse level, while the implementation of the components can remain largely unaware of reconfiguration scenarios. We study reconfiguration in component frameworks based on formal theory. We first discuss programming with components, exemplified by the development of the cmc model checker. This highly efficient model checker is made of C++ components and serves as an example of component-based software development practice in general, and it also provides insights into the principles of adaptivity. However, its component model focuses on high performance and is not geared towards using the structuring principle of components for controlled reconfiguration. We thus complement this highly optimized model with a message-passing-based component model that takes reconfigurability as its central principle. Supporting reconfiguration in a framework is about shielding the programmer from the peculiarities as much as possible. We utilize the formal description of the component model to provide an algorithm for reconfiguration that retains as much flexibility as possible while avoiding most problems that arise due to concurrency. This algorithm is embedded in a general four-stage adaptivity model inspired by physical control loops. The reconfiguration is devised to work with stateful components, retaining their data and unprocessed messages. Reconfiguration plans, which are given a formal semantics, form the input of the reconfiguration algorithm. We show that the algorithm achieves perceived atomicity of the reconfiguration process for an important class of plans, i.e., the whole process of reconfiguration is perceived as one atomic step, while minimizing the blocking of components. We illustrate the applicability of our approach to reconfiguration with several examples such as fault tolerance and automated resource control.
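
    A minimal sketch of the reconfiguration idea follows, under strong simplifying assumptions: an old component instance is replaced by a new one while its state and unprocessed messages are carried over, so the swap appears atomic to clients. Class and method names are illustrative and not the framework's API.

```python
# Hedged sketch: swap a stateful component while preserving its state and
# replaying buffered messages, so clients perceive the change as one step.
from collections import deque

class Component:
    def __init__(self, state=None):
        self.state = state if state is not None else {}
        self.inbox = deque()

    def handle(self, msg):
        self.state["last"] = msg

def reconfigure(old, new_cls):
    pending = list(old.inbox)        # 1. capture unprocessed messages
    old.inbox.clear()                # 2. block the old component (simplified)
    new = new_cls(state=old.state)   # 3. transfer state to the new component
    for msg in pending:              # 4. replay buffered messages
        new.handle(msg)
    return new                       # 5. rewire clients to the new component

comp = Component()
comp.inbox.extend(["m1", "m2"])
comp = reconfigure(comp, Component)
print(comp.state)  # {'last': 'm2'}
```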

    Privacy protection in context aware systems.

    Smartphones, loaded with users' personal information, are a primary computing device for many. The advent of 4G networks, IPv6, and the increased number of subscribers has triggered a host of application developers to develop software that is easy to install on mobile devices. During the application download process, users accept terms and conditions that permit the revelation of private information. The free application markets are sustainable because the revenue model for most of these service providers is based on profiling users and pushing advertisements to them. This creates a serious threat to users' privacy, and hence it is important that privacy protection mechanisms be in place. Most of the existing solutions falsify or modify the information in the service request and starve the developers of their revenue. In this dissertation, we attempt to bridge the gap by proposing a novel integrated CLOPRO framework (Context Cloaking Privacy Protection) that achieves Identity privacy, Context privacy, and Query privacy without depriving the service provider of the sustainable revenue made from CAPPA (Context Aware Privacy Preserving Advertising). Each service request has three parameters: identity, context, and the actual query. The CLOPRO framework reduces the risk of an adversary linking all three parameters. The main objective is to ensure that no single entity in the system has all the information about the user, the queries, or the link between them, even though the user gets the desired service in a viable time frame. The proposed comprehensive privacy protection framework does not require the user to use a modified OS or the service provider to modify the way an application developer designs and deploys the application, while at the same time protecting the revenue model of the service provider. The system consists of two non-colluding servers, one to process the location coordinates (Location server) and the other to process the original query (Query server). This approach makes several inherent algorithmic and research contributions. First, we propose a formal definition of privacy and of the attack, and we identify and formalize that privacy is protected if the transformation functions used are non-invertible. Second, we propose clustering every component of the service request to provide anonymity to the user. We use a unique encrypted identity for every service request and a unique id for each cluster of users, which ensures Identity privacy. We have designed the Split Clustering Anonymization Algorithms (SCAA), consisting of two algorithms: the Location Anonymization Algorithm (LAA) and the Query Anonymization Algorithm (QAA). The application of LAA replaces the actual location of the users in a cluster with the centroid of the location coordinates of all users in that cluster, achieving Location privacy. The time of initiation of the query is not part of the message string sent to the service provider, although it is used for identifying timed-out requests; thus, Context privacy is achieved. To ensure Query privacy, generic queries (created using QAA) are used that cover the set of possible queries, based on the feature variations between the queries.
The proposed CLOPRO framework associates the ads/coupons relevant to the generic query and the location of the users, and these are sent to the user along with the result, without revealing to the service provider the actual user, the initiation time of the query, the location, or the query. Lastly, we introduce the use of caching in query processing to improve the response time for repetitive queries; the Query processing server caches the query results. We use multiple approaches to prove that privacy is preserved in the CLOPRO system. Using the properties of the transformation functions as well as graph-theoretic approaches, we demonstrate that the user's Identity, Context, and Query are protected against the curious-but-honest adversary attack, fake query attacks, and replay attacks with the use of the CLOPRO framework. The proposed system not only provides 'k' anonymity, but also satisfies the <k; s> and <k; T> anonymity properties required for privacy protection. The complexity of our proposed algorithm is O(n).
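
    The sketch below illustrates only the centroid idea behind the Location Anonymization Algorithm: every user in a cluster reports the cluster centroid instead of an exact position. Clustering, encryption, and the split-server protocol are omitted, and all names are illustrative assumptions rather than the dissertation's actual interfaces.

```python
# Hedged sketch of the centroid substitution idea behind LAA.

def anonymize_locations(cluster):
    """cluster: list of (lat, lon) tuples; returns the centroid that every
    member of the cluster reports instead of its real coordinates."""
    n = len(cluster)
    lat = sum(p[0] for p in cluster) / n
    lon = sum(p[1] for p in cluster) / n
    return (lat, lon)

users = [(48.137, 11.575), (48.139, 11.580), (48.135, 11.572)]
centroid = anonymize_locations(users)
# Every request from this cluster carries the same coordinates, so the
# Location server cannot single out an individual user by position.
print(centroid)
```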

    Synthesising end-to-end security schemes through endorsement intermediaries

    Composing secure interaction protocols dynamically for e-commerce continues to pose a number of challenges, such as the lack of standard notations for expressing requirements and the difficulty involved in enforcing them. Furthermore, interaction with unknown entities may require finding common trusted intermediaries. Securing messages sent through such intermediaries requires schemes that provide end-to-end security guarantees. In the past, e-commerce protocols such as SET were created to provide such end-to-end guarantees. However, such complex hand-crafted protocols proved difficult to model check. This thesis addresses the end-to-end problems in an open, dynamic setting where trust relationships evolve and the requirements of interacting entities change over time. Before interaction protocols can be synthesised, a number of research questions must be addressed. Firstly, to meet end-to-end security requirements, the security level along the message path must be made to reflect those requirements. Secondly, the type of endorsement intermediaries must reflect the message category. Thirdly, intermediaries must be made liable for their endorsements. This thesis proposes a number of solutions to address these research problems. End-to-end security requirements were arrived at by aggregating the security requirements of all interacting parties. These requirements were enforced by interleaving and composing basic schemes derived from challenge-response mechanisms. The devised institutional trust-promoting mechanism allowed all vital data to be endorsed by authorised, category-specific intermediaries. Intermediaries were made accountable for their endorsements by being required to discharge or transfer proof obligations placed on them. The techniques devised for aggregating and enforcing security requirements allow the dynamic creation of end-to-end security schemes. The novel interleaving technique devised allows the creation of provably secure multiparty schemes for any number of recipients. The structured technique, combining a compositional approach with appropriate invariants and preconditions, makes model checking of synthesised schemes unnecessary. The proposed framework, combining endorsement trust with schemes making intermediaries accountable, provides a way to alleviate distrust between previously unknown e-commerce entities.
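
    As a hedged illustration of the kind of basic challenge-response building block that such schemes interleave and compose, the sketch below has a responder prove possession of a shared key by returning a MAC over a fresh nonce. Keys, nonce handling, and message formats here are simplified assumptions, not the thesis' synthesised protocols.

```python
# Toy challenge-response primitive: the responder MACs a fresh nonce with a
# shared key; the challenger verifies the MAC in constant time.
import hmac, hashlib, os

def make_challenge():
    return os.urandom(16)                      # fresh nonce per interaction

def respond(key, nonce):
    return hmac.new(key, nonce, hashlib.sha256).digest()

def verify(key, nonce, response):
    expected = hmac.new(key, nonce, hashlib.sha256).digest()
    return hmac.compare_digest(expected, response)

shared_key = os.urandom(32)
nonce = make_challenge()
print(verify(shared_key, nonce, respond(shared_key, nonce)))  # True
```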

    Eighth Workshop and Tutorial on Practical Use of Coloured Petri Nets and the CPN Tools, Aarhus, Denmark, October 22-24, 2007

    This booklet contains the proceedings of the Eighth Workshop on Practical Use of Coloured Petri Nets and the CPN Tools, October 22-24, 2007. The workshop is organised by the CPN group at the Department of Computer Science, University of Aarhus, Denmark. The papers are also available in electronic form via the web pages: http://www.daimi.au.dk/CPnets/workshop0

    Combining SOA and BPM Technologies for Cross-System Process Automation

    This paper summarizes the results of an industry case study that introduced a cross-system business process automation solution based on a combination of SOA and BPM standard technologies (i.e., BPMN, BPEL, WSDL). Besides discussing major weaknesses of the existing custom-built solution and comparing them against experiences with the developed prototype, the paper presents a course of action for transforming the current solution into the proposed solution. This includes a general approach, consisting of four distinct steps, as well as specific action items to be performed for every step. The discussion also covers language and tool support and challenges arising from the transformation.

    A network QoS management architecture for virtualization environments

    Network quality of service (QoS) and its management are concerned with providing, guaranteeing, and reporting properties of data flows within computer networks. For the past two decades, virtualization has been becoming a very popular tool in data centres, yet without network QoS management capabilities. With virtualization, the management focus shifts from physical components and topologies towards virtual infrastructures (VIs) and their purposes. VIs are designed and managed as independent, isolated entities. Without network QoS management capabilities, VIs cannot offer the same services and service levels as physical infrastructures can, leaving VIs at a disadvantage with respect to applicability and efficiency. This thesis closes this gap and develops a management architecture enabling network QoS management in virtualization environments. First, requirements are derived from real-world scenarios, yielding a validation reference for the proposed architecture. After that, a life cycle for VIs and a taxonomy for network links and virtual components are introduced to align the network QoS management task with the general management of virtualization environments and to enable the creation of technology-specific adaptors for integrating the technologies and sub-services used in virtualization environments. The core aspect shaping the proposed management architecture is a management loop and its corresponding strategy for identifying and ordering sub-tasks. Finally, a prototypical implementation showcases that the presented management approach is suited for network QoS management and enforcement in virtualization environments. The architecture fulfils its purpose, meeting all identified requirements. Ultimately, network QoS management is one among many aspects of managing virtualization environments, and the herein presented architecture exposes interfaces to other management areas, whose integration is left as future work.
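
    A minimal sketch of such a management loop follows, under simplifying assumptions: it observes the bandwidth of virtual links, compares it against their QoS targets, and triggers enforcement through technology-specific adaptor callbacks. The data structures and stubbed adaptors are illustrative placeholders, not the architecture's actual interfaces.

```python
# Hedged sketch of a monitor/analyze/enforce loop for virtual-link QoS.

def management_loop(links, measure, enforce):
    """links: {link_id: guaranteed_mbit}; measure and enforce are
    technology-specific adaptor callbacks supplied by the environment."""
    for link_id, guaranteed in links.items():
        current = measure(link_id)            # monitor the virtual link
        if current < guaranteed:              # analyze: QoS target violated
            enforce(link_id, guaranteed)      # plan/execute: reconfigure the link

# Example run with stubbed adaptors for two virtual links.
targets = {"vlink-1": 100, "vlink-2": 50}
management_loop(
    targets,
    measure=lambda lid: {"vlink-1": 80, "vlink-2": 60}[lid],
    enforce=lambda lid, mbit: print(f"re-shaping {lid} to {mbit} Mbit/s"),
)
```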

    Contributions to Securing Software Updates in IoT

    The Internet of Things (IoT) is a large network of connected devices. In IoT, devices can communicate with each other or with back-end systems to transfer data or perform assigned tasks. Communication protocols used in IoT depend on the target application but usually require low bandwidth. On the other hand, IoT devices are constrained, having limited resources, including memory, power, and computational resources. Considering these limitations in IoT environments, it is difficult to implement best security practices. Consequently, network attacks can threaten devices or the data they transfer, so it is crucial to react quickly to emerging vulnerabilities. These vulnerabilities should be mitigated securely through firmware updates or other necessary updates. Since IoT devices usually connect to the network wirelessly, such updates can be performed Over-The-Air (OTA). This dissertation presents contributions to enable secure OTA software updates in IoT. In order to perform secure updates, vulnerabilities must first be identified and assessed. In this dissertation, we first present our contribution to designing a maturity model for vulnerability handling. Next, we analyze and compare common communication protocols and security practices with regard to energy consumption. Finally, we describe our lightweight protocol for OTA updates targeting constrained IoT devices. IoT devices and back-end systems often use incompatible protocols that are unable to interoperate securely. This dissertation also includes our contribution to designing a secure protocol translator for IoT; this translation is performed inside a Trusted Execution Environment (TEE) with TLS interception. This dissertation also contains our contribution to key management and key distribution in IoT networks. In performing secure software updates, the IoT devices can be grouped, since the updates target a large number of devices. Thus, prior to deploying updates, a group key needs to be established among group members. In this dissertation, we present our secure group key establishment scheme. Symmetric key cryptography can help to save IoT device resources at the cost of increased key management complexity. This trade-off can be improved by integrating IoT networks with cloud computing and Software-Defined Networking (SDN). In this dissertation, we use SDN in cloud networks to provision symmetric keys efficiently and securely. These pieces together help software developers and maintainers identify vulnerabilities, provision secret keys, and perform lightweight secure OTA updates. Furthermore, they help devices and systems with incompatible protocols to interoperate.
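
    As a hedged illustration of one common building block of secure OTA updates, the sketch below checks a firmware image against a manifest before installation. A pre-shared-key HMAC stands in for the digital signature a real deployment would use, and the manifest layout and names are assumptions made for illustration, not the dissertation's protocol.

```python
# Simplified OTA integrity/authenticity check: verify the image digest recorded
# in the manifest, then verify an HMAC tag over that digest with a device key.
import hashlib, hmac

def verify_firmware(image, manifest, key):
    """manifest: {'sha256': hex digest of image, 'tag': HMAC over that digest}."""
    digest = hashlib.sha256(image).hexdigest()
    if digest != manifest["sha256"]:
        return False                                        # image corrupted or swapped
    expected = hmac.new(key, digest.encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, manifest["tag"])   # authenticity check

key = b"pre-shared-update-key"
image = b"\x00firmware-v2.1\x00"
digest = hashlib.sha256(image).hexdigest()
manifest = {"sha256": digest,
            "tag": hmac.new(key, digest.encode(), hashlib.sha256).hexdigest()}
print(verify_firmware(image, manifest, key))  # True -> safe to install
```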