
    Portability of Process-Aware and Service-Oriented Software: Evidence and Metrics

    Modern software systems are becoming increasingly integrated and are required to operate across organizational boundaries through networks. The development of such distributed software systems has been shaped by the orthogonal trends of service-orientation and process-awareness. These trends put an emphasis on technological neutrality, loose coupling, independence from the execution platform, and location transparency. Execution platforms supporting these trends provide context and cross-cutting functionality to applications and are referred to as engines. Applications and engines interface via language standards: an engine implements a standard, and an application implemented in conformance to this standard can be executed on the engine. A primary motivation for the use of standards is the portability of applications. Portability, the ability to move software among different execution platforms without the need for full or partial reengineering, protects against vendor lock-in and enables application migration to newer engines. The arrival of cloud computing has made it easy to provision new and scalable execution platforms. To enable easy platform changes, existing international standards for implementing service-oriented and process-aware software name the portability of standardized artifacts as an important goal. Moreover, they provide platform-independent serialization formats that enable the portable implementation of applications. Nevertheless, practice shows that the portability of service-oriented and process-aware applications is limited today. The reason is that engines rarely implement a standard completely, but leave out parts or differ in their interpretation of the standard. As a consequence, even applications that claim to be portable by conforming to a standard might not be. This thesis contributes to the development of portable service-oriented and process-aware software in two ways: firstly, it provides evidence for the existence of portability issues and the insufficiency of standards for guaranteeing software portability; secondly, it derives and validates a novel measurement framework for quantifying portability. We present a methodology for benchmarking the conformance of engines to a language standard and implement it in a fully automated benchmarking tool. Several test suites of conformance tests for two different languages, the Web Services Business Process Execution Language 2.0 and the Business Process Model and Notation 2.0, make it possible to uncover a variety of standard conformance issues in existing engines. This provides evidence that the standard-based portability of applications is a real issue. Based on these results, this thesis derives a measurement framework for portability. The framework is aligned with the ISO/IEC Systems and software Quality Requirements and Evaluation method, the recent revision of the renowned ISO/IEC software quality model and measurement methodology. This quality model separates the software quality characteristic of portability into the subcharacteristics of installability, adaptability, and replaceability. Each of these characteristics forms one part of the measurement framework. This thesis targets each characteristic with a separate analysis, metrics derivation, evaluation, and validation. We discuss existing metrics from the body of literature and derive new extensions specifically tailored to the evaluation of service-oriented and process-aware software. Proposed metrics are defined formally and validated theoretically using an informal and a formal validation framework. Furthermore, the computation of the metrics has been prototypically implemented. This implementation is used to evaluate the performance of the metrics in experiments based on large-scale software libraries obtained from public open source software repositories. In summary, this thesis provides evidence that contemporary standards and their implementations are not sufficient for enabling the portability of process-aware and service-oriented applications. Furthermore, it proposes, validates, and practically evaluates a framework for measuring portability.
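
    To give a flavour of such metrics, the following is a minimal Python sketch of an element-counting portability measure: the fraction of language constructs used by a process definition that a target engine supports. The construct names, engine profiles, and the portability_degree helper are illustrative assumptions, not the metric definitions of the thesis.

```python
# Hypothetical sketch of an element-counting portability metric.
# Construct names and engine support profiles are illustrative
# assumptions, not results or definitions from the thesis.

def portability_degree(used: set, supported: set) -> float:
    """Fraction of the constructs used by an application that a
    target engine supports (1.0 = fully portable to that engine)."""
    if not used:
        return 1.0  # an application using no constructs is trivially portable
    return len(used & supported) / len(used)

# A BPMN 2.0 process model reduced to the set of constructs it uses.
process = {"startEvent", "exclusiveGateway", "userTask",
           "timerIntermediateEvent", "endEvent"}

# Assumed conformance profiles of two engines: engine A lacks timer
# events, engine B supports everything the process uses.
engine_a = {"startEvent", "exclusiveGateway", "userTask", "endEvent"}
engine_b = process | {"parallelGateway", "scriptTask"}

print(portability_degree(process, engine_a))  # 0.8 -> partially portable
print(portability_degree(process, engine_b))  # 1.0 -> fully portable
```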

    A governance framework for algorithmic accountability and transparency

    Algorithmic systems are increasingly being used as part of decision-making processes in both the public and private sectors, with potentially significant consequences for individuals, organisations and societies as a whole. Algorithmic systems in this context refer to the combination of algorithms, data and the interface process that together determine the outcomes that affect end users. Many types of decisions can be made faster and more efficiently using algorithms. A significant factor in the adoption of algorithmic systems for decision-making is their capacity to process large amounts of varied data sets (i.e. big data), which can be paired with machine learning methods in order to infer statistical models directly from the data. However, the same properties of scale, complexity and autonomous model inference are linked to increasing concerns that many of these systems are opaque to the people affected by their use and lack clear explanations for the decisions they make. This lack of transparency risks undermining meaningful scrutiny and accountability, which is a significant concern when these systems are applied as part of decision-making processes that can have a considerable impact on people's human rights (e.g. critical safety decisions in autonomous vehicles; allocation of health and social service resources, etc.). This study develops policy options for the governance of algorithmic transparency and accountability, based on an analysis of the social, technical and regulatory challenges posed by algorithmic systems. Based on a review and analysis of existing proposals for the governance of algorithmic systems, a set of four policy options is proposed, each of which addresses a different aspect of algorithmic transparency and accountability: 1. awareness raising: education, watchdogs and whistleblowers; 2. accountability in public-sector use of algorithmic decision-making; 3. regulatory oversight and legal liability; and 4. global coordination for algorithmic governance.

    Workload mix definition for benchmarking BPMN 2.0 Workflow Management Systems

    Nowadays, enterprises broadly use Workflow Management Systems (WfMSs) to design, deploy, execute, monitor, and analyse their automated business processes. Through the years, WfMSs have evolved into platforms that deliver complex service-oriented applications. In this regard, they need to satisfy enterprise-grade performance requirements, such as dependability and scalability. With the ever-growing number of WfMSs currently available on the market, companies must choose which product is optimal for their requirements and business models. Benchmarking is an established practice for comparing alternative products, and it drives the continuous improvement of technology by setting a clear target for measuring and assessing performance. In particular, for service-oriented WfMSs there is not yet a widely accepted standard benchmark, even though workflow modelling languages such as the Web Services Business Process Execution Language (WS-BPEL) and the Business Process Model and Notation 2.0 (BPMN 2.0) have been adopted as de facto standards. A possible explanation for this gap is the inherent architectural complexity of WfMSs and the very large number of parameters affecting their performance. However, the need for a standard benchmark for WfMSs is frequently affirmed in the literature. The goal of the BenchFlow approach is to propose a framework towards the first standard benchmark for assessing and comparing the performance of BPMN 2.0 WfMSs. To this end, the approach addresses a set of challenges ranging from logistical challenges, related to the collection of a representative set of usage scenarios, to technical challenges, concerning the specific characteristics of a WfMS. This work focuses on a subset of these challenges dealing with the definition of a representative set of process models and corresponding data to be given as input to the benchmark. This set of representative process models and corresponding data is referred to as the workload mix of the benchmark. More particularly, we first prepare the theoretical background for defining a representative workload mix. This is accomplished through the identification of the basic components of a workload model for WfMS benchmarks, as well as an investigation of the impact of BPMN 2.0 language constructs on a WfMS's performance, by means of introducing the first BPMN 2.0 micro-benchmark. We proceed by collecting real-world process models for the identification of a representative workload mix. The collection is then analysed with respect to its statistical characteristics and with a novel algorithm that detects and extracts the recurring structural patterns of the collection. The extracted recurring structures are used to generate synthetic process models that reflect the essence of the original collection. The introduced methods are brought together in a tool chain that supports workload mix generation. As a final step, we apply the proposed methods in a real-world case study, based on a collection of thousands of real-world process models, and generate a representative workload mix to be used in a benchmark. The results show that the generated workload mix is successful in its application for stressing the WfMSs under test.
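
    As a loose illustration of the workload-mix idea, the Python sketch below tallies construct frequencies across a collection of process models and searches for a small subset whose construct distribution approximates that of the whole collection. The model data, the distance measure, and the build_workload_mix helper are assumptions for illustration; the actual BenchFlow tool chain is considerably more sophisticated.

```python
# Illustrative sketch: pick a workload mix whose construct-frequency
# profile approximates that of a larger process-model collection.
# Model data and helper names are assumptions, not BenchFlow's method.
from collections import Counter
import random

def construct_profile(models: list) -> Counter:
    """Count how often each BPMN construct appears across the models."""
    profile = Counter()
    for constructs in models:
        profile.update(constructs)
    return profile

def build_workload_mix(collection: list, k: int, trials: int = 200) -> list:
    """Randomly search for a k-model subset whose construct profile
    best matches the whole collection (simple L1 distance on
    normalized frequencies)."""
    target = construct_profile(collection)
    total = sum(target.values())
    best, best_dist = None, float("inf")
    for _ in range(trials):
        sample = random.sample(collection, k)
        prof = construct_profile(sample)
        scale = sum(prof.values()) or 1
        dist = sum(abs(target[c] / total - prof[c] / scale) for c in target)
        if dist < best_dist:
            best, best_dist = sample, dist
    return best

# Each model is reduced to the set of BPMN constructs it uses.
collection = [
    {"startEvent", "userTask", "endEvent"},
    {"startEvent", "exclusiveGateway", "userTask", "endEvent"},
    {"startEvent", "parallelGateway", "scriptTask", "endEvent"},
    {"startEvent", "userTask", "timerIntermediateEvent", "endEvent"},
]
print(build_workload_mix(collection, k=2))
```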

    Factors Influencing Customer Satisfaction towards E-shopping in Malaysia

    Online shopping, or e-shopping, has changed the world of business, and many businesses have adopted it. Their primary concern, in responding to globalisation, is how competently e-shopping can be incorporated into their operations. E-shopping has also grown substantially in Malaysia in recent years. The rapid growth of the e-commerce industry in Malaysia has created a need to understand how to increase customer satisfaction in the e-retailing environment. It is very important that customers are satisfied with a website; otherwise, they will not return. Companies must therefore ensure that their customers are satisfied with their purchases, which is essential from an e-commerce point of view. With this in mind, this study investigated customer satisfaction towards e-shopping in Malaysia. A total of 400 questionnaires were distributed among students randomly selected from various public and private universities located within the Klang Valley area. Of these, 369 questionnaires were returned, of which 341 were found usable for further analysis. Structural equation modelling (SEM) was then employed to test the hypotheses. This study found that customer satisfaction towards e-shopping in Malaysia is to a great extent influenced by ease of use, trust, design of the website, online security and e-service quality. Finally, recommendations and directions for future research are provided. Keywords: E-shopping, Customer satisfaction, Trust, Online security, E-service quality, Malaysia.
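
    As a hypothetical illustration of the final analysis step, the sketch below specifies a structural equation model of this kind in Python using the semopy library. The latent variable names, the indicator items (eou1 through sat3), and the survey.csv file are assumptions based on the abstract, not the study's actual instrument, model, or results.

```python
# Hypothetical SEM sketch with semopy; variable names and model
# structure are assumptions inferred from the abstract.
import pandas as pd
import semopy

# Measurement and structural model in lavaan-style syntax: five latent
# predictors, each measured by survey items, regressed onto latent
# customer satisfaction.
desc = """
ease_of_use  =~ eou1 + eou2 + eou3
trust        =~ tru1 + tru2 + tru3
design       =~ des1 + des2 + des3
security     =~ sec1 + sec2 + sec3
eservice     =~ esq1 + esq2 + esq3
satisfaction =~ sat1 + sat2 + sat3
satisfaction ~ ease_of_use + trust + design + security + eservice
"""

# Placeholder for the 341 usable questionnaire responses, one column
# per survey item named as in the model description above.
data = pd.read_csv("survey.csv")

model = semopy.Model(desc)
model.fit(data)
print(model.inspect())           # path coefficients and p-values
print(semopy.calc_stats(model))  # fit indices such as CFI and RMSEA
```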

    Reports to the President

    A compilation of annual reports for the 1999-2000 academic year, including a report from the President of the Massachusetts Institute of Technology, as well as reports from the academic and administrative units of the Institute. The reports outline the year's goals, accomplishments, honors and awards, and future plans.

    The Democratization of Artificial Intelligence: Net Politics in the Era of Learning Algorithms

    After a long period of neglect, Artificial Intelligence is once again at the center of most of our political, economic, and socio-cultural debates. Recent advances in the field of Artificial Neural Networks have led to a renaissance of dystopian and utopian speculations on an AI-rendered future. Algorithmic technologies are deployed for identifying potential terrorists through vast surveillance networks, for producing sentencing guidelines and recidivism risk profiles in criminal justice systems, for demographic and psychographic targeting of bodies for advertising or propaganda, and more generally for automating the analysis of language, text, and images. Against this background, the aim of this book is to discuss the heterogeneous conditions, implications, and effects of modern AI and Internet technologies in terms of their political dimension: What does it mean to critically investigate efforts of net politics in the age of machine learning algorithms?
