275 research outputs found

    A Framework for BPMS Performance and Cost Evaluation on the Cloud

    In this paper, we describe a framework that allows us to automate and repeat business process execution on different cloud configurations. We present how and why the different components of the experimentation pipeline, such as Ansible, Docker, and Jenkins, have been set up, and the kind of results we obtained on a large set of configurations from the AWS public cloud. The framework allows us to calculate the actual cost of process execution, in order to compare not only pure performance but also the economic dimension of process execution.
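The economic comparison described above reduces, at its core, to dividing a configuration's hourly price by the process throughput measured on it. A minimal sketch of that calculation, under the assumption of illustrative instance names, prices, and throughput figures (none of these numbers are from the paper):

```python
# Hedged sketch: attributing a per-process-execution cost to a cloud
# configuration from its hourly price and its measured throughput.
# Instance names, prices, and throughputs below are illustrative only.

def cost_per_execution(hourly_price_usd: float, executions_per_hour: float) -> float:
    """Cost of a single process execution on one configuration."""
    if executions_per_hour <= 0:
        raise ValueError("throughput must be positive")
    return hourly_price_usd / executions_per_hour

# Two hypothetical AWS configurations: the cheaper instance per hour is
# not necessarily the cheaper one per process execution.
configs = {
    "m5.large":  {"price": 0.096, "throughput": 1200.0},
    "c5.xlarge": {"price": 0.170, "throughput": 2600.0},
}
costs = {
    name: cost_per_execution(c["price"], c["throughput"])
    for name, c in configs.items()
}
```

In this made-up example the nominally more expensive c5.xlarge ends up cheaper per execution, which is exactly the kind of trade-off a combined performance-and-cost benchmark exposes.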

    Literature Survey of Performance Benchmarking Approaches of BPEL Engines

    Despite the popularity of BPEL engines for orchestrating complex and executable processes, there are still only a few approaches available to help find the most appropriate engine for individual requirements. One of the more crucial factors for such a middleware product in industry is the performance characteristics of a BPEL engine. Multiple studies in industry and academia test the performance of BPEL engines, differing in focus and method. We aim to compare the methods used in these approaches and provide guidance for further research in this area. Based on the related work in the field of performance testing, we created a process-engine-specific comparison framework, which we used to evaluate and classify nine different approaches that were found using the method of a systematic literature survey. With the results of the status quo analysis in mind, we derived directions for further research in this area.

    Improving the Guest Experience through Service Innovation: Ideas and Principles for the Hospitality Industry

    Innovation is the process of developing new ideas or processes, or taking existing ideas and processes in new directions. An innovative idea or process does not have to involve a bolt from the blue, but it almost always involves at least a twist on current operations. A group of two dozen service researchers and practitioners gathered at Cornell’s School of Hotel Innovation meeting in May 2011 to examine the latest concepts in service, with the goal of sharing innovative ideas and processes and expanding a culture of innovation in the hospitality industry.

    Portability of Process-Aware and Service-Oriented Software: Evidence and Metrics

    Modern software systems are becoming increasingly integrated and are required to operate over organizational boundaries through networks. The development of such distributed software systems has been shaped by the orthogonal trends of service-orientation and process-awareness. These trends put an emphasis on technological neutrality, loose coupling, independence from the execution platform, and location transparency. Execution platforms supporting these trends provide context and cross-cutting functionality to applications and are referred to as engines. Applications and engines interface via language standards. The engine implements a standard. If an application is implemented in conformance to this standard, it can be executed on the engine. A primary motivation for the usage of standards is the portability of applications. Portability, the ability to move software among different execution platforms without the necessity for full or partial reengineering, protects from vendor lock-in and enables application migration to newer engines. The arrival of cloud computing has made it easy to provision new and scalable execution platforms. To enable easy platform changes, existing international standards for implementing service-oriented and process-aware software name the portability of standardized artifacts as an important goal. Moreover, they provide platform-independent serialization formats that enable the portable implementation of applications. Nevertheless, practice shows that service-oriented and process-aware applications today are limited with respect to their portability. The reason for this is that engines rarely implement a complete standard, but leave out parts or differ in the interpretation of the standard. As a consequence, even applications that claim to be portable by conforming to a standard might not be so. 
This thesis contributes to the development of portable service-oriented and process-aware software in two ways: firstly, it provides evidence for the existence of portability issues and the insufficiency of standards for guaranteeing software portability; secondly, it derives and validates a novel measurement framework for quantifying portability. We present a methodology for benchmarking the conformance of engines to a language standard and implement it in a fully automated benchmarking tool. Several test suites of conformance tests for two different languages, the Web Services Business Process Execution Language 2.0 and the Business Process Model and Notation 2.0, allow us to uncover a variety of standard conformance issues in existing engines. This provides evidence that the standard-based portability of applications is a real issue. Based on these results, this thesis derives a measurement framework for portability. The framework is aligned with the ISO/IEC Systems and software Quality Requirements and Evaluation method, the recent revision of the renowned ISO/IEC software quality model and measurement methodology. This quality model separates the software quality characteristic of portability into the subcharacteristics of installability, adaptability, and replaceability. Each of these characteristics forms one part of the measurement framework. This thesis targets each characteristic with a separate analysis, metrics derivation, evaluation, and validation. We discuss existing metrics from the body of literature and derive new extensions specifically tailored to the evaluation of service-oriented and process-aware software. Proposed metrics are defined formally and validated theoretically using an informal and a formal validation framework. Furthermore, the computation of the metrics has been prototypically implemented. 
This implementation is used to evaluate metrics performance in experiments based on large-scale software libraries obtained from public open source software repositories. In summary, this thesis provides evidence that contemporary standards and their implementations are not sufficient for enabling the portability of process-aware and service-oriented applications. Furthermore, it proposes, validates, and practically evaluates a framework for measuring portability.
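One way to picture the kind of metric such a framework might contain is to measure an application's direct portability to an engine as the fraction of the language constructs it uses that the engine supports, as established by conformance benchmarking. This is an illustrative simplification, not the thesis's actual metric definitions, and all construct and engine names are hypothetical:

```python
# Illustrative sketch of a conformance-based portability measure: the share
# of language constructs used by an application that a target engine is
# known to support. Construct sets and engine names are hypothetical.

def portability(used_constructs: set, supported_constructs: set) -> float:
    """Fraction of an application's constructs supported by an engine."""
    if not used_constructs:
        return 1.0  # an application using no constructs is trivially portable
    return len(used_constructs & supported_constructs) / len(used_constructs)

# A hypothetical BPEL process and the construct coverage of two engines.
process  = {"invoke", "receive", "reply", "forEach", "compensate"}
engine_a = {"invoke", "receive", "reply", "forEach"}                # partial
engine_b = {"invoke", "receive", "reply", "forEach", "compensate"}  # full
```

A score below 1.0 flags exactly the situation the thesis documents: an application that conforms to the standard on paper but cannot run unchanged on an engine that implements only part of it.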

    Taking Computation to Data: Integrating Privacy-preserving AI techniques and Blockchain Allowing Secure Analysis of Sensitive Data on Premise

    PhD thesis in Information Technology. With the advancement of artificial intelligence (AI), digital pathology has seen significant progress in recent years. However, the use of medical AI raises concerns about patient data privacy. The CLARIFY project is a research project funded under the European Union’s Marie Sklodowska-Curie Actions (MSCA) program. The primary objective of CLARIFY is to create a reliable, automated digital diagnostic platform that utilizes cloud-based data algorithms and artificial intelligence to enable interpretation and diagnosis of whole-slide images (WSI) from any location, maximizing the advantages of AI-based digital pathology. My research as an early-stage researcher for the CLARIFY project centers on securing information systems using machine learning and access control techniques. To achieve this goal, I extensively researched privacy protection technologies such as federated learning, differential privacy, dataset distillation, and blockchain. These technologies have different priorities in terms of privacy, computational efficiency, and usability. Therefore, we designed a computing system that supports different levels of privacy security, based on the concept of taking computation to data. Our approach is based on two design principles. First, when external users need to access internal data, a robust access control mechanism must be established to limit unauthorized access. Second, raw data should be processed before sharing to ensure privacy and security. Specifically, we use smart contract-based access control and decentralized identity technology at the system security boundary to ensure the flexibility and immutability of verification. Where the user’s raw data still cannot be accessed directly, we propose using dataset distillation to filter out private information, or using a locally trained model as a data agent. 
Our research focuses on improving the usability of these methods, and this thesis serves as a demonstration of current privacy-preserving and secure computing technologies.

    Workload mix definition for benchmarking BPMN 2.0 Workflow Management Systems

    Nowadays, enterprises broadly use Workflow Management Systems (WfMSs) to design, deploy, execute, monitor, and analyse their automated business processes. Through the years, WfMSs have evolved into platforms that deliver complex service-oriented applications. In this regard, they need to satisfy enterprise-grade performance requirements, such as dependability and scalability. With the ever-growing number of WfMSs currently available in the market, companies must choose which product is optimal for their requirements and business models. Benchmarking is an established practice used to compare alternative products, and it leverages the continuous improvement of technology by setting a clear target in measuring and assessing performance. In particular, for service-oriented WfMSs there is not yet a widely accepted standard benchmark available, even though workflow modelling languages such as the Web Services Business Process Execution Language (WS-BPEL) and Business Process Model and Notation 2.0 (BPMN 2.0) have been adopted as de-facto standards. A possible explanation for this deficiency is the inherent architectural complexity of WfMSs and the very large number of parameters affecting their performance. However, the need for a standard benchmark for WfMSs is frequently affirmed in the literature. The goal of the BenchFlow approach is to propose a framework towards the first standard benchmark for assessing and comparing the performance of BPMN 2.0 WfMSs. To this end, the approach addresses a set of challenges spanning from logistic challenges, related to the collection of a representative set of usage scenarios, to technical challenges, which concern the specific characteristics of a WfMS. This work focuses on a subset of these challenges dealing with the definition of a representative set of process models and corresponding data that will be given as an input to the benchmark. 
This set of representative process models and corresponding data is referred to as the workload mix of the benchmark. More particularly, we first prepare the theoretical background for defining a representative workload mix. This is accomplished through identification of the basic components of a workload model for WfMS benchmarks, as well as investigation of the impact of the BPMN 2.0 language constructs on a WfMS's performance, by means of introducing the first BPMN 2.0 micro-benchmark. We proceed by collecting real-world process models for the identification of a representative workload mix. The collection is then analysed with respect to its statistical characteristics and also with a novel algorithm that detects and extracts the reoccurring structural patterns of the collection. The extracted reoccurring structures are then used for generating synthetic process models that reflect the essence of the original collection. The introduced methods are brought together in a tool chain that supports workload mix generation. As a final step, we applied the proposed methods to a real-world case study based on a collection of thousands of real-world process models, generating a representative workload mix to be used in a benchmark. The results show that the generated workload mix is successful in its application for stressing the WfMSs under test.
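The final step described above, turning the recurrence frequencies of structural patterns into a representative mix, can be pictured with a frequency-weighted sampler. This is a simplification sketched for illustration, not the actual BenchFlow algorithm, and the pattern names and counts are invented:

```python
# Illustrative sketch (not the actual BenchFlow algorithm): draw a workload
# mix whose composition mirrors how often each structural pattern recurs in
# a collection of process models. Pattern names and counts are hypothetical.
import random

def workload_mix(pattern_counts: dict, size: int, seed: int = 0) -> list:
    """Sample `size` process-model patterns proportionally to their counts."""
    rng = random.Random(seed)  # fixed seed keeps the mix reproducible
    patterns = list(pattern_counts)
    weights = [pattern_counts[p] for p in patterns]
    return rng.choices(patterns, weights=weights, k=size)

# Hypothetical recurrence counts extracted from a model collection.
counts = {"sequence": 70, "exclusive_gateway": 20, "parallel_gateway": 10}
mix = workload_mix(counts, size=100)
```

Fixing the seed matters for a benchmark: every engine under test must receive the same mix, otherwise differences in results could stem from the workload rather than the WfMS.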

    The future of animal feeding: towards sustainable precision livestock farming

    In the future, production will increasingly be affected by globalization of the trade in feed commodities and livestock products, competition for natural resources, particularly land and water, competition between feed, food, and biofuel, and by the need to operate in a carbon-constrained economy, says Nutreco’s Dr. Leo den Hartog. Moreover, he suggests, livestock production will be increasingly affected by consumer and societal concerns and legislation. A way forward in the development of profitable modern pig production will be the concept of sustainable precision livestock farming, den Hartog believes. This aims to integrate the technological approach of precision livestock farming with the social and ecological aspects. Optimization of productivity and efficiency will play a crucial role, as well as maximization of the profit for all stakeholders in the pork chain, he says. He discusses the necessity for and rationale behind the concept, with a special focus on animal feeding.

    The Intersection of Hospitality and Healthcare: Exploring Common Areas of Service Quality, Human Resources, and Marketing

    Within the context of providing high-quality clinical outcomes, managers in the U.S. healthcare system are working hard to solve several problems, including the challenging and interrelated problems of how to control operating costs, how to improve employee retention, and how to satisfy customers and stakeholders. Beyond that, the industry faces substantial capital expenses when constructing new facilities and renovating or maintaining existing aging structures. In short, many of the issues facing the healthcare system are similar to those of the hospitality industry.

    IFPRI Annual Report 2007-2008:

    Food prices, Poverty reduction, Globalization, Food security, Developing countries, Agricultural systems, Trade, Markets, Natural resources, World food situation, Social protection, Science and technology, Nutrition, Capacity strengthening

    Driving a lean transformation using a six sigma improvement process

    Thesis (M.B.A.)--Massachusetts Institute of Technology, Sloan School of Management; and (S.M.)--Massachusetts Institute of Technology, Dept. of Mechanical Engineering; in conjunction with the Leaders for Manufacturing Program at MIT, 2004. Includes bibliographical references (p. 73-74). Successive transformations within manufacturing have brought great efficiencies to producers and lower costs to consumers. With the advent of interchangeable parts between 1800 and 1850 in small arms manufacturing (Hounshell, 1984, pp. 3-4), mass production in the early 1900s in automobile manufacturing (Hounshell, 1984, pp. 9-10), and lean production in the early 1950s in automobile manufacturing (Womack, Jones, & Roos, 1990, p. 52), the state of manufacturing has continued to evolve. Each time, the visionaries who catalyzed the transformations were forced to overcome the inertia of the status quo. After convincing stakeholders of the need for change, these change agents: (1) established a vision for the future; (2) committed resources to attain that vision; (3) studied the root causes of current methods; (4) proposed a new solution; (5) implemented the new solution; and (6) quantified the results and sought future improvements. This basic process for implementing change is remarkably simple yet incredibly powerful. By explicitly emphasizing the need for root cause analysis, the process recognizes that improvements will be transient if the root causes of prior problems are not fully understood and resolved. When deploying a lean production system, an understanding of lean principles and tools is necessary but not sufficient. Rather, implementing a lean production system should follow: 1. An analysis mapping the root causes of current production methods back to technical issues and the organization's strategic design, culture, and political landscape. Only by fixing the problems that led to the current production system can a lean transformation be sustained. 2. 
A detailed plan which achieves a transformation in both the organization and the production system. By Satish Krishnan, S.M., M.B.A.