110 research outputs found

    SECURITY POLICY ENFORCEMENT IN APPLICATION ENVIRONMENTS USING DISTRIBUTED SCRIPT-BASED CONTROL STRUCTURES

    Business processes involving several partners in different organisations impose demanding requirements on procedures for specification, execution and maintenance. A framework referred to as business process management (BPM) has evolved for this purpose over the last ten years. Other approaches, such as service-oriented architecture (SOA) or the concept of virtual organisations (VOs), assist in the definition of architectures and procedures for modelling and executing so-called collaborative business processes (CBPs). Methods for the specification of business processes play a central role in this context, and several standards have emerged for this purpose. Among these, Web Services Business Process Execution Language (WS-BPEL, usually abbreviated BPEL) has evolved to become the de facto standard for business process definition. As such, this language has been selected as the foundation for the research in this thesis. A broadly accepted standard would in principle allow business processes to be specified in a platform-independent manner, including the capability to specify them at one location and have them executed at others (possibly spread across different organisations). Though technically feasible, this approach has significant security implications, particularly for the side that is to execute a process. The research project focused on the security issues arising when business processes are specified and executed in a distributed manner. The central goal has been the development of methods to cope with the security issues arising when BPEL as a standard is deployed in this way, exploiting the platform independence that a standard affords. The research devised novel methods for specifying security policies in such a manner that assessing compliance with them is greatly facilitated and can be performed automatically. An analysis of the security-relevant semantics of BPEL as a specification language was conducted, resulting in the identification of so-called security-relevant semantic patterns. Based on these results, methods were developed to specify security policy-implied restrictions in terms of such semantic patterns and to assess the compliance of BPEL scripts with these policies. These methods are particularly suited for the assessment of remotely defined BPEL scripts, since they allow for pre-execution enforcement of local security policies, thereby mitigating or even removing the security implications involved in the distributed definition and execution of business processes. As initially envisaged, these methods are comparatively easy to apply, as they are based on technologies customary for practitioners in this field. The viability of the methods proposed for automatic compliance assessment has been demonstrated via a prototypical implementation of the essential functionality required for a proof of concept.
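
    A minimal illustration of the kind of pre-execution compliance check described above might look as follows. It assumes a simple whitelist-style policy over BPEL invoke activities; the policy format, file name and allowed partner links/operations are invented for this sketch and are not the semantic-pattern formalism developed in the thesis.

        # Hedged sketch: reject a remotely defined BPEL script before execution if it
        # invokes operations that the local security policy does not permit.
        # The policy below is a hypothetical whitelist, not the thesis's formalism.
        import xml.etree.ElementTree as ET

        ALLOWED_INVOCATIONS = {                      # hypothetical local policy
            ("supplierLink", "getQuote"),
            ("shippingLink", "requestShipping"),
        }

        def local_name(tag):
            """Strip the XML namespace prefix from an element tag."""
            return tag.rsplit("}", 1)[-1]

        def check_bpel(path):
            """Return a list of policy violations found in a BPEL script."""
            violations = []
            for elem in ET.parse(path).iter():
                if local_name(elem.tag) == "invoke":
                    key = (elem.get("partnerLink"), elem.get("operation"))
                    if key not in ALLOWED_INVOCATIONS:
                        violations.append(f"invoke not permitted by policy: {key}")
            return violations

        if __name__ == "__main__":
            for v in check_bpel("remote_process.bpel"):   # path is illustrative
                print(v)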

    Pemetaan Secara Sistematis Pada Metrik Kualitas Perangkat Lunak (Systematic Mapping of Software Quality Metrics)

    Software quality assurance is one method of increasing the quality of software. Improvements in software quality can be measured with software quality metrics, which are part of software quality measurement models. Software quality models are currently very diverse, and software quality metrics have become correspondingly diverse. This variety of metrics creates the problem of selecting metrics that properly fit the desired quality measurement parameters. Another problem is the validation that needs to be performed on these metrics in order to obtain objective and valid results. In this paper, a systematic mapping of the software quality metrics published over the last nine years is conducted. The paper brings up open issues in software quality metrics that can be taken up by other researchers. Furthermore, current trends are introduced and discussed.

    Acta Cybernetica: Volume 21, Number 3.


    ARCHITECTURE-BASED RELIABILITY ANALYSIS OF WEB SERVICES

    In a Service-Oriented Architecture (SOA), the hierarchical complexity of Web Services (WSs) and their interactions with the underlying Application Server (AS) create new challenges in providing a realistic estimate of WS performance and reliability. Current approaches often treat the entire WS environment as a black box, so the sensitivity of the overall reliability and performance to the behavior of the underlying WS architectures and AS components is not well understood. In other words, current research on the architecture-based analysis of WSs is limited. This dissertation presents a novel methodology for modeling the reliability and performance of web services. WSs are treated as atomic entities, but the AS is broken down into layers. More specifically, the interactions of WSs with the underlying layers of an AS are investigated. One important feature of the research is investigating the impact of dynamic parameters that exist at those layers, such as configuration parameters, which may have a negative impact on WS performance if they are not configured properly. The WSs are developed in-house, and the AS considered is JBoss AS. An experimental environment is set up so that controlled service requests can be generated and important performance metrics can be recorded under various configurations of the AS. In parallel, a simulation model is developed from the source code and run-time behavior of the existing WS and AS implementations. The model mimics the logical behavior of the WSs based on their communication with the AS layers. The simulation results are compared to the experimental results to ensure the correctness of the model. The architecture of the simulation model, which is based on Stochastic Petri Nets (SPNs), is modularized in accordance with the layers and their interactions. As web services are often executed in a complex and distributed environment, the modularized approach enables a user or a designer to observe and investigate the performance of the entire system under various conditions. In contrast, most approaches to WS analysis are monolithic in that the entire system is treated as a closed box. The results show that 1) the simulation model can be a viable tool for measuring the performance and reliability of WSs under different loads and conditions, which may be of great interest to WS designers and the professionals involved; 2) configuration parameters have a large impact on the overall performance; 3) the simulation model can be tuned to account for various speeds in terms of communication, hardware, and software; 4) as the simulation model is modularized, it may be used as a foundation for aggregating modules (layers) or nullifying modules, or the model can be enhanced to include other aspects of the WS architecture, such as network characteristics and the hardware/operating system on which the AS and WSs execute; and 5) the simulation model is beneficial for predicting the performance of web services in cases that are difficult to replicate in a field study.
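
    The dissertation's model is built with Stochastic Petri Nets over the JBoss AS layers; as a rough, hedged stand-in for the idea of layer-wise modelling, the sketch below runs a Monte Carlo simulation in which each request traverses a stack of server layers and a configuration parameter (a request timeout) turns slow requests into failures. Layer names, timing distributions and timeout values are assumptions made for the example only.

        # Simplified layer-wise performance/reliability sketch (not the SPN model).
        import random

        LAYERS = {                      # hypothetical mean service time per layer (ms)
            "http-connector": 2.0,
            "servlet-container": 5.0,
            "ws-endpoint": 8.0,
            "business-logic": 20.0,
        }

        def simulate(n_requests=10_000, timeout_ms=50.0, seed=1):
            """Estimate reliability and mean latency for a given timeout setting."""
            random.seed(seed)
            failures, total_latency = 0, 0.0
            for _ in range(n_requests):
                latency = sum(random.expovariate(1.0 / mean) for mean in LAYERS.values())
                total_latency += latency
                if latency > timeout_ms:            # configuration-dependent failure
                    failures += 1
            return 1.0 - failures / n_requests, total_latency / n_requests

        if __name__ == "__main__":
            for timeout in (40.0, 60.0, 100.0):
                reliability, mean_latency = simulate(timeout_ms=timeout)
                print(f"timeout={timeout:6.1f} ms  reliability={reliability:.3f}  "
                      f"mean latency={mean_latency:.1f} ms")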

    Efficiency Improvements in the Quality Assurance Process for Data Races

    As the use of concurrency in software has gained importance in recent years, and continues to rise, new types of defects have increasingly appeared in software. One of the most prominent and critical of these new defect types is the data race. Although research has increased the effectiveness of dynamic quality assurance regarding data races, the efficiency of the quality assurance process is still a factor preventing widespread practical application. First, the dynamic quality assurance techniques used for the detection of data races are inefficient: too much effort is needed to conduct dynamic quality assurance. Second, the techniques used for the analysis of reported data races are inefficient: too much effort is needed to analyze reported data races and identify the underlying issues in the source code. The goal of this thesis is to enable efficiency improvements in the process of quality assurance for data races by: (1) analyzing a representation of the dynamic behavior of a system under test; the results are used to focus the instrumentation of this system, resulting in a lower runtime overhead during test execution compared to full instrumentation. (2) Analyzing the characteristics of reported data races and preprocessing them; the results of the preprocessing are then provided to developers and quality assurance personnel, enabling an analysis and debugging process that is more efficient than the traditional analysis of data race reports. Apart from dynamic data race detection itself, which the solution complements, all steps in the process of dynamic quality assurance for data races are discussed in this thesis. The solution for analyzing UML Activities for nodes that may execute in parallel to other nodes or to themselves rests on a formal foundation in graph theory. A major problem solved in this thesis was the handling of cycles within UML Activities: the thesis provides a dynamic limit for the number of cycle traversals, based on the elements of each UML Activity to be analyzed and their semantics. Formal proofs are provided with regard to the creation of directed acyclic graphs and with regard to their analysis for identifying elements that may be executed in parallel to other elements. Based on an examination of the characteristics of data races and data race reports, the results of dynamic data race detection are preprocessed and the outcome of this preprocessing is presented to users for further analysis. The thesis further provides an exemplary application of the solution idea and of the results of analyzing UML Activities, as well as an exemplary examination of the efficiency improvement of dynamic data race detection, which showed a reduction in runtime overhead of 44% when using focused instrumentation compared to full instrumentation. Finally, a controlled experiment was set up and conducted to examine the effects of the preprocessing of reported data races on the efficiency of analyzing data race reports. The results show that the solution presented in this thesis enables efficiency improvements in the analysis of data race reports of between 190% and 660% compared to traditional approaches.
Finally, opportunities for future work are shown, which may enable a broader usage of the results of this thesis and further improvements in the efficiency of quality assurance for data races.
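
    As a rough illustration of the parallelism analysis summarised above (and not the formalisation or proofs from the thesis), the sketch below takes a directed acyclic graph derived from a UML Activity, with cycles assumed to be already unrolled up to the dynamic traversal limit, and reports node pairs that may execute in parallel because neither node can reach the other.

        # Hedged sketch: nodes with no path between them (in either direction)
        # are treated as potentially parallel. Graph and node names are examples.
        from itertools import combinations

        def reachable(graph, start):
            """Set of nodes reachable from start via directed edges (iterative DFS)."""
            seen, stack = set(), [start]
            while stack:
                for succ in graph.get(stack.pop(), ()):
                    if succ not in seen:
                        seen.add(succ)
                        stack.append(succ)
            return seen

        def potentially_parallel(graph):
            """Pairs of nodes with no path between them in either direction."""
            reach = {node: reachable(graph, node) for node in graph}
            return [(a, b) for a, b in combinations(graph, 2)
                    if b not in reach[a] and a not in reach[b]]

        if __name__ == "__main__":
            # initial -> fork -> {taskA, taskB} -> join -> final
            activity = {
                "initial": ["fork"], "fork": ["taskA", "taskB"],
                "taskA": ["join"], "taskB": ["join"],
                "join": ["final"], "final": [],
            }
            print(potentially_parallel(activity))   # expected: [('taskA', 'taskB')]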

    Programming and parallelising applications for distributed infrastructures

    The last decade has witnessed unprecedented changes in parallel and distributed infrastructures. Due to the diminished gains in processor performance from increasing clock frequency, manufacturers have moved from uniprocessor architectures to multicores; as a result, clusters of computers have incorporated such new CPU designs. Furthermore, the ever-growing need of scientific applications for computing and storage capabilities has motivated the appearance of grids: geographically-distributed, multi-domain infrastructures based on the sharing of resources to accomplish large and complex tasks. More recently, clouds have emerged by combining virtualisation technologies, service-orientation and business models to deliver IT resources on demand over the Internet. The size and complexity of these new infrastructures pose a challenge for programmers to exploit them. On the one hand, some of the difficulties are inherent to concurrent and distributed programming themselves, e.g. dealing with thread creation and synchronisation, messaging, data partitioning and transfer, etc. On the other hand, other issues are related to the singularities of each scenario, like the heterogeneity of Grid middleware and resources or the risk of vendor lock-in when writing an application for a particular Cloud provider. In the face of such a challenge, programming productivity - understood as a trade-off between programmability and performance - has become crucial for software developers. There is a strong need for high-productivity programming models and languages, which should provide simple means for writing parallel and distributed applications that can run on current infrastructures without sacrificing performance. In that sense, this thesis contributes Java StarSs, a programming model and runtime system for developing and parallelising Java applications on distributed infrastructures. The model has two key features: first, the user programs in a fully-sequential, standard-Java fashion - no parallel construct, API call or pragma needs to be included in the application code; second, it is completely infrastructure-unaware, i.e. programs do not contain any details about deployment or resource management, so the same application can run on different infrastructures with no changes. The only requirement for the user is to select the application tasks, which are the model's unit of parallelism. Tasks can be either regular Java methods or web service operations, and they can handle any data type supported by the Java language, namely files, objects, arrays and primitives. For the sake of simplicity of the model, Java StarSs shifts the burden of parallelisation from the programmer to the runtime system. The runtime is responsible for modifying the original application so that it creates asynchronous tasks and synchronises data accesses from the main program. Moreover, the implicit inter-task concurrency is automatically discovered as the application executes, thanks to a data dependency detection mechanism that covers all the Java data types. This thesis provides a fairly comprehensive evaluation of Java StarSs in three different distributed scenarios: Grid, Cluster and Cloud. For each of them, a runtime system was designed and implemented to exploit their particular characteristics as well as to address their issues, while keeping the infrastructure unawareness of the programming model. The evaluation compares Java StarSs against state-of-the-art solutions, both in terms of programmability and performance, and demonstrates how the model can bring remarkable productivity to programmers of parallel and distributed applications.
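
    Java StarSs itself is a Java programming model, so the snippet below is only a conceptual analog written in Python: the programmer writes what look like sequential calls, marks functions as tasks, and a small runtime turns each call into an asynchronous task whose data dependencies (arguments produced by earlier tasks) are awaited automatically. All names and the decorator mechanism are illustrative and do not reproduce the Java StarSs API.

        # Conceptual analog only (not Java StarSs): tasks run asynchronously and
        # futures passed as arguments are treated as data dependencies.
        from concurrent.futures import ThreadPoolExecutor, Future

        _pool = ThreadPoolExecutor(max_workers=4)

        def task(fn):
            """Mark a plain function as a task; calls return futures immediately."""
            def submit(*args):
                def run():
                    resolved = [a.result() if isinstance(a, Future) else a for a in args]
                    return fn(*resolved)
                return _pool.submit(run)
            return submit

        @task
        def load(name):                 # would be a regular method or service call
            return list(range(10)) if name == "a.txt" else list(range(10, 20))

        @task
        def merge(xs, ys):
            return xs + ys

        if __name__ == "__main__":
            a = load("a.txt")           # looks sequential, executes asynchronously
            b = load("b.txt")
            total = merge(a, b)         # depends on a and b; waits for both
            print(sum(total.result()))  # 190
            _pool.shutdown()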

    Enhancing coverage adequacy of service compositions after runtime adaptation

    Runtime monitoring (or monitoring for short) is a key quality assurance technique for self-adaptive service compositions. Monitoring passively observes the runtime behaviour of service compositions. Coverage criteria are extensively used for assessing the adequacy (or thoroughness) of software testing. Coverage criteria specify certain requirements on software testing. The importance of coverage criteria in software testing has motivated researchers to adapt them to the monitoring of service composition. However, the passive nature of monitoring and the adaptive nature of service composition could negatively influence the adequacy of monitoring, thereby limiting the confidence in the quality of the service composition. To enhance coverage adequacy of self-adaptive service compositions at runtime, this thesis investigates how to combine runtime monitoring and online testing. Online testing means testing a service composition in parallel to its actual usage and operation. First, we introduce an approach for determining valid execution traces for service compositions at runtime. The approach considers execution traces of both monitoring and (online) testing.
It considers modifications in both the workflow and the constituent services of a service composition. Second, we define coverage criteria for service compositions. The criteria take the execution plans of a service composition into account for coverage assessment and consider the coverage of both abstract services and the overall service composition. Third, we introduce online test case prioritization techniques to achieve a faster coverage of a service composition. The techniques employ the coverage of a service composition obtained from both monitoring and online testing, the execution time of test cases, and the usage model of the service composition. Fourth, we introduce a framework for monitoring and online testing of services and service compositions, called PROSA, which provides technical support for the aforementioned contributions. We evaluate the contributions of this thesis using service compositions frequently used in service-oriented computing research.
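
    One plausible shape of such a prioritization technique, sketched under assumptions (the greedy ranking function, coverage-element names and execution times below are invented for the example and simplify what the thesis actually proposes): repeatedly pick the online test case with the highest ratio of not-yet-covered elements, after subtracting what runtime monitoring has already covered, to execution time.

        # Hedged sketch: greedy online-test-case prioritisation by coverage gain per second.
        def prioritise(test_cases, monitored_coverage):
            """test_cases: mapping name -> (set of covered elements, execution time in s)."""
            covered = set(monitored_coverage)
            remaining = dict(test_cases)
            order = []
            while remaining:
                def gain(item):
                    _, (elements, exec_time) = item
                    return len(set(elements) - covered) / exec_time
                name, (elements, _) = max(remaining.items(), key=gain)
                if not set(elements) - covered:     # nothing new left to cover
                    break
                order.append(name)
                covered |= set(elements)
                del remaining[name]
            return order

        if __name__ == "__main__":
            tests = {
                "t1": ({"planA.invoke1", "planA.invoke2"}, 2.0),
                "t2": ({"planB.invoke1"}, 0.5),
                "t3": ({"planA.invoke2", "planB.invoke2"}, 1.0),
            }
            print(prioritise(tests, monitored_coverage={"planA.invoke1"}))  # ['t2', 't3']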

    Data-Driven Detection and Diagnosis of System-Level Failures in Middleware-Based Service Compositions

    Service-oriented technologies have simplified the development of large, complex software systems that span administrative boundaries. Developers can build applications as compositions of services through middleware that hides much of the underlying complexity. The resulting applications inhabit complex, multi-tier operating environments that pose many challenges to their reliable operation and often lead to failures at runtime. Two key components of the time to repair a failure are the time to detect it and the time to diagnose its cause. The prevalent approach to detection and diagnosis is primarily based on ad-hoc monitoring as well as operator experience and intuition. This is inefficient and leads to decreased availability. We propose an approach to data-driven detection and diagnosis in order to decrease the repair time of failures in middleware-based service compositions. Data-driven diagnosis supports system operators with information about the operation and structure of a service composition. We discuss how middleware-based service compositions can be monitored in a comprehensive yet non-intrusive manner and present a process to discover system structure by processing deployment information that is commonly reified in such systems. We perform a controlled experiment that compares the performance of 22 participants using either a standard or the data-driven approach to diagnose several failures injected into a real-world service composition. We find that system operators using the latter approach achieve significantly higher success rates and lower diagnosis times. Data-driven detection is based on automating failure detection by applying an outlier detection technique to multivariate monitoring data. We evaluate the effectiveness of one-class classification for this purpose and determine a simple approach to selecting subsets of metrics that afford highly accurate failure detection.
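
    A minimal, hedged sketch of the detection idea: train a one-class classifier on monitoring data recorded during normal operation and flag later observations that the model marks as outliers. The metric names, the synthetic data and the use of scikit-learn's OneClassSVM are assumptions for the example; the dissertation's concrete metric selection and evaluation are not reproduced here.

        # One-class classification over multivariate monitoring data (illustrative only).
        import numpy as np
        from sklearn.preprocessing import StandardScaler
        from sklearn.svm import OneClassSVM

        rng = np.random.default_rng(0)
        # columns: response time (ms), CPU load (%), queue length -- hypothetical metrics
        normal = rng.normal(loc=[120.0, 40.0, 5.0], scale=[15.0, 5.0, 2.0], size=(500, 3))
        faulty = rng.normal(loc=[400.0, 85.0, 30.0], scale=[40.0, 5.0, 5.0], size=(20, 3))

        scaler = StandardScaler().fit(normal)
        model = OneClassSVM(nu=0.05, kernel="rbf", gamma="scale").fit(scaler.transform(normal))

        for label, window in (("normal", normal[:5]), ("faulty", faulty[:5])):
            prediction = model.predict(scaler.transform(window))   # +1 inlier, -1 outlier
            print(label, prediction)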