
    Quality measures for ETL processes: from goals to implementation

    Extraction, transformation, loading (ETL) processes play an increasingly important role in supporting modern business operations. These business processes are centred around artifacts with high variability and diverse lifecycles, which correspond to key business entities. The apparent complexity of these activities has been examined through the prism of business process management, mainly focusing on functional requirements and performance optimization. However, the quality dimension has not yet been thoroughly investigated, and there is a need for a more human-centric approach to bring these processes closer to business users' requirements. In this paper, we take a first step in this direction by defining a sound model for ETL process quality characteristics and quantitative measures for each characteristic, based on existing literature. Our model shows dependencies among quality characteristics and can provide the basis for subsequent analysis using goal modeling techniques. We showcase the use of goal modeling for ETL process design through a use case, where we employ a goal model that includes quantitative components (i.e., indicators) for the evaluation and analysis of alternative design decisions.
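
    As a rough illustration only, the sketch below (Python, with invented characteristic names, formulas, and metrics that are not taken from the paper) shows how quantitative measures with dependencies could serve as goal-model indicators for comparing two ETL design alternatives.

        # Minimal sketch, assuming hypothetical quality characteristics and measures:
        # score ETL design alternatives via goal-model-style indicators.
        from dataclasses import dataclass, field
        from typing import Callable, Dict, List

        @dataclass
        class QualityCharacteristic:
            name: str
            measure: Callable[[Dict[str, float]], float]   # maps design metrics to a 0..1 score
            depends_on: List[str] = field(default_factory=list)

        # Invented characteristics; the paper derives its set from the literature.
        characteristics = [
            QualityCharacteristic("freshness", lambda m: 1.0 - min(m["load_latency_min"] / 60.0, 1.0)),
            QualityCharacteristic("reliability", lambda m: m["successful_runs"] / m["total_runs"]),
            QualityCharacteristic("maintainability", lambda m: 1.0 / (1.0 + m["num_transformations"]),
                                  depends_on=["reliability"]),
        ]

        def evaluate(design: Dict[str, float]) -> Dict[str, float]:
            """Indicator values for one ETL design alternative."""
            return {c.name: round(c.measure(design), 2) for c in characteristics}

        # Two alternative designs compared by their indicators.
        alt_a = {"load_latency_min": 15, "successful_runs": 98, "total_runs": 100, "num_transformations": 12}
        alt_b = {"load_latency_min": 45, "successful_runs": 99, "total_runs": 100, "num_transformations": 4}
        print("A:", evaluate(alt_a))
        print("B:", evaluate(alt_b))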

    Quality Measures for ETL Processes

    ETL processes play an increasingly important role in supporting modern business operations. These business processes are centred around artifacts with high variability and diverse lifecycles, which correspond to key business entities. The apparent complexity of these activities has been examined through the prism of Business Process Management, mainly focusing on functional requirements and performance optimization. However, the quality dimension has not yet been thoroughly investigated, and there is a need for a more human-centric approach to bring these processes closer to business users' requirements. In this paper, we take a first step in this direction by defining a sound model for ETL process quality characteristics and quantitative measures for each characteristic, based on existing literature. Our model shows dependencies among quality characteristics and can provide the basis for subsequent analysis using Goal Modeling techniques.

    VIVACE: A Framework for the Systematic Evaluation of Variability Support in Process-Aware Information Systems

    Context: The increasing adoption of process-aware information systems (PAISs) such as workflow management systems, enterprise resource planning systems, or case management systems, together with the high variability in business processes (e.g., sales processes may vary depending on the respective products and countries), has resulted in large industrial process model repositories. To cope with this business process variability, the proper management of process variants along the entire process lifecycle becomes crucial. Objective: The goal of this paper is to develop a fundamental understanding of business process variability. In particular, the paper provides a framework for assessing and comparing process variability approaches and the support they provide for the different phases of the business process lifecycle (i.e., process analysis and design, configuration, enactment, diagnosis, and evolution). Method: We conducted a systematic literature review (SLR) in order to discover how process variability is supported by existing approaches. Results: The SLR resulted in 63 primary studies, which were analyzed in depth. Based on this analysis, we derived the VIVACE framework. VIVACE allows assessing the expressiveness of a process modeling language regarding the explicit specification of process variability. Furthermore, the support provided by a process-aware information system to properly deal with process model variants can be assessed with VIVACE as well. Conclusions: VIVACE provides an empirically grounded framework that enables process engineers to evaluate existing process variability approaches and to select the variability approach that best meets their requirements. Finally, it helps process engineers in implementing PAISs that support process variability along the entire process lifecycle.
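
    To illustrate the kind of assessment such a framework enables, a tiny sketch follows; the criteria and scoring are simplified inventions, not the actual VIVACE evaluation framework.

        # Illustrative sketch, assuming a simplified yes/no criterion per lifecycle phase:
        # score how much of the process lifecycle a variability approach covers.
        LIFECYCLE_PHASES = ["analysis_and_design", "configuration", "enactment", "diagnosis", "evolution"]

        def lifecycle_coverage(approach: dict) -> float:
            """Fraction of lifecycle phases for which the approach reports support."""
            supported = [p for p in LIFECYCLE_PHASES if approach.get(p, False)]
            return len(supported) / len(LIFECYCLE_PHASES)

        # Hypothetical self-reported support profile of some variability approach.
        example_approach = {"analysis_and_design": True, "configuration": True,
                            "enactment": True, "diagnosis": False, "evolution": True}
        print(f"Lifecycle coverage: {lifecycle_coverage(example_approach):.0%}")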

    Context-Aware Process Injection: Enhancing Process Flexibility by Late Extension of Process Instances

    Companies must cope with high process variability and a strong demand for process flexibility due to customer expectations, product variability, and an abundance of regulations. Accordingly, numerous business process variants need to be supported depending on a multiplicity of influencing factors, e.g., customer requests, resource availability, compliance rules, or process data. In particular, even running processes should be adjustable to respond to contextual changes, new regulations, or emerging customer requests. This paper introduces the approach of context-aware process injection. It enables the sophisticated modeling of a context-aware injection of process fragments into a base process at design time, as well as the dynamic execution of the specified processes at run time. Moreover, the context-aware injection even considers the dynamic wiring of data flow. To demonstrate the feasibility and benefits of the approach, a case study was conducted based on a proof-of-concept prototype developed with the help of an existing adaptive process management technology. Overall, context-aware process injection facilitates the specification of varying processes and provides high process flexibility at run time as well.
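
    A minimal sketch of the injection idea, assuming a made-up extension-point notation, fragments, and context attributes (this is not the authors' prototype and omits data-flow wiring):

        # Sketch: a base process with named extension points, and process fragments
        # injected at run time when their context condition holds.
        from typing import Callable, Dict, List

        Context = Dict[str, object]

        class Fragment:
            def __init__(self, name: str, condition: Callable[[Context], bool], steps: List[str]):
                self.name, self.condition, self.steps = name, condition, steps

        BASE_PROCESS = ["receive_order", "<extension:check>", "ship_order", "<extension:billing>", "close_order"]

        FRAGMENTS = {
            "check":   [Fragment("credit_check", lambda c: c.get("order_value", 0) > 1000, ["run_credit_check"])],
            "billing": [Fragment("export_invoice", lambda c: c.get("country") != "DE", ["add_customs_papers"])],
        }

        def inject(base: List[str], context: Context) -> List[str]:
            """Resolve each extension point against the current context."""
            resolved: List[str] = []
            for step in base:
                if step.startswith("<extension:"):
                    point = step[len("<extension:"):-1]
                    for frag in FRAGMENTS.get(point, []):
                        if frag.condition(context):
                            resolved.extend(frag.steps)
                else:
                    resolved.append(step)
            return resolved

        print(inject(BASE_PROCESS, {"order_value": 2500, "country": "FR"}))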

    VIVACE: A framework for the systematic evaluation of variability support in process-aware information systems

    Context: The increasing adoption of process-aware information systems (PAISs) such as workflow management systems, enterprise resource planning systems, or case management systems, together with the high variability in business processes (e.g., sales processes may vary depending on the respective products and countries), has resulted in large industrial process model repositories. To cope with this business process variability, the proper management of process variants along the entire process lifecycle becomes crucial. Objective: The goal of this paper is to develop a fundamental understanding of business process variability. In particular, the paper provides a framework for assessing and comparing process variability approaches and the support they provide for the different phases of the business process lifecycle (i.e., process analysis and design, configuration, enactment, diagnosis, and evolution). Method: We conducted a systematic literature review (SLR) in order to discover how process variability is supported by existing approaches. Results: The SLR resulted in 63 primary studies, which were analyzed in depth. Based on this analysis, we derived the VIVACE framework. VIVACE allows assessing the expressiveness of a process modeling language regarding the explicit specification of process variability. Furthermore, the support provided by a process-aware information system to properly deal with process model variants can be assessed with VIVACE as well. Conclusions: VIVACE provides an empirically grounded framework that enables process engineers to evaluate existing process variability approaches and to select the variability approach that best meets their requirements. Finally, it helps process engineers in implementing PAISs that support process variability along the entire process lifecycle. (C) 2014 Elsevier B.V. All rights reserved. This work has been developed with the support of MICINN under the project EVERYWARE TIN2010-18011. Ayora Esteras, C.; Torres Bosch, M. V.; Weber, B.; Reichert, M.; Pelechano Ferragud, V. (2015). VIVACE: A framework for the systematic evaluation of variability support in process-aware information systems. Information and Software Technology, 57:248-276. https://doi.org/10.1016/j.infsof.2014.05.009

    A Variability-Aware Design Approach to the Data Analysis Modeling Process

    The massive amount of current data has led to many different forms of data analysis processes that aim to explore this data to uncover valuable insights such as trends, anomalies, and patterns. These processes support decision makers in their analysis of varied and changing data ranging from financial transactions to customer interactions and social network postings. These data analysis processes use a wide variety of methods, including machine learning, in several domains such as business, finance, health, and smart cities. Several data analysis processes have been proposed by academia and industry, including CRISP-DM and SEMMA, to describe the phases that data analysis experts go through when solving their problems. Specifically, CRISP-DM has modeling as one of its phases, which involves selecting a modeling technique, generating a test design, building a model, and assessing the model. However, automating these data analysis modeling processes faces numerous challenges from a software engineering perspective. First, software users expect increased flexibility from the software as to the possible variations in techniques, types of data, and parameter settings. The software is required to accommodate complex usage and deployment variations, which are difficult for non-experts. Second, variability in functionality or quality attributes increases the complexity of these systems and makes them harder to design and implement. There is a lack of a framework design that takes variability into account. Third, the lack of a more comprehensive analysis of variability makes it difficult to evaluate opportunities for automating data analysis modeling. This thesis proposes a variability-aware design approach to the data analysis modeling process. The approach involves: (i) the assessment of the variabilities inherent in CRISP-DM data analysis modeling and the provision of feature models that represent these variabilities; (ii) the definition of a preliminary framework design that captures the identified variabilities; and (iii) the evaluation of the framework design in terms of automation possibilities. Overall, this work presents, to the best of our knowledge, the first approach based on variability assessment to the design of data analysis modeling processes such as CRISP-DM. The approach advances the state of the art by offering a variability-aware design solution that can enhance system flexibility, and a novel software design framework to support data analysis modeling.
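
    A toy sketch of how such variability could be captured: an invented fragment of a feature model for the CRISP-DM modeling phase, with one cross-tree constraint and a validity check for a chosen configuration (the thesis' actual feature models are not reproduced here).

        # Sketch: a tiny feature model for the modeling phase, with mandatory features,
        # an XOR group of techniques, and one requires-constraint.
        FEATURES = {
            "modeling":  {"mandatory": ["technique", "test_design", "model_assessment"],
                          "optional": ["parameter_tuning"]},
            "technique": {"alternatives": ["decision_tree", "neural_network", "clustering"]},
        }
        REQUIRES = {"neural_network": ["parameter_tuning"]}   # cross-tree constraint

        def valid(config: set) -> bool:
            """Check mandatory features, exactly one technique, and requires-constraints."""
            if not all(f in config for f in FEATURES["modeling"]["mandatory"]):
                return False
            if len(config & set(FEATURES["technique"]["alternatives"])) != 1:
                return False
            return all(set(req) <= config for f, req in REQUIRES.items() if f in config)

        print(valid({"technique", "test_design", "model_assessment", "neural_network", "parameter_tuning"}))  # True
        print(valid({"technique", "test_design", "model_assessment", "neural_network"}))                      # False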

    Generating optimized configurable business process models in scenarios subject to uncertainty

    Context: The quality of business process models (i.e., software artifacts that capture the relations between the organizational units of a business) is essential for enhancing the management of business processes. However, such modeling is typically carried out manually. This is already challenging and time consuming when (1) input uncertainty exists, (2) activities are related, and (3) resource allocation has to be considered. When optimization requirements regarding flexibility and robustness are included, it becomes even more complicated, potentially resulting in non-optimized models, errors, and lack of flexibility. Objective: To facilitate the human work and to improve the resulting models in scenarios subject to uncertainty, we propose a software-supported approach for automatically creating configurable business process models from declarative specifications, considering all the aforementioned requirements. Method: First, the scenario is modeled through a declarative language which allows the analysts to specify its variability and uncertainty. Thereafter, a set of optimized enactment plans (each one representing a potential execution alternative) is generated from such a model considering the input uncertainty. Finally, to deal with this uncertainty during run time, a flexible configurable business process model is created from these plans. Results: To validate the proposed approach, we conduct a case study based on a real business which is subject to uncertainty. Results indicate that our approach improves the actual performance of the business and that the generated models support most of the uncertainty inherent to the business. Conclusions: The proposed approach automatically selects the best part of the variability of a declarative specification. Unlike existing approaches, our approach considers input uncertainty, the optimization of multiple objective functions, as well as the resource and the control-flow perspectives. However, our approach also presents a few limitations: (1) it focuses on the control flow, and the data perspective is only partially addressed, and (2) model attributes need to be estimated. Ministerio de Ciencia e Innovación TIN2009-1371
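
    A simplified sketch of the plan-generation step, with made-up activities, a single declarative constraint, and a toy cost function; the paper's optimization, uncertainty handling, and resource perspective are far richer than this.

        # Sketch: enumerate enactment plans that satisfy a small declarative specification,
        # then keep the best-scoring plans as alternatives for a configurable model.
        from itertools import permutations

        ACTIVITIES = ["assess", "plan", "treat"]
        DURATION = {"assess": 2, "plan": 1, "treat": 4}

        def satisfies(plan):
            # Declarative-style constraint: 'assess' must precede 'treat'.
            return plan.index("assess") < plan.index("treat")

        def cost(plan):
            # Toy objective: weighted completion time (a stand-in for the real objectives).
            return sum((i + 1) * DURATION[a] for i, a in enumerate(plan))

        plans = [p for p in permutations(ACTIVITIES) if satisfies(p)]
        best = sorted(plans, key=cost)[:2]   # alternatives kept as run-time configuration options
        print(best)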

    Higher-Order Process Modeling: Product-Lining, Variability Modeling and Beyond

    We present a graphical and dynamic framework for binding and execution of (business) process models. It is tailored to integrate 1) ad hoc processes modeled graphically, 2) third-party services discovered in the (Inter)net, and 3) (dynamically) synthesized process chains that solve situation-specific tasks, with the synthesis taking place not only at design time, but also at runtime. Key to our approach is the introduction of type-safe stacked second-order execution contexts that allow for higher-order process modeling. Tamed by our underlying strict service-oriented notion of abstraction, this approach is tailored also to be used by application experts with little technical knowledge: users can select, modify, construct and then pass (component) processes during process execution as if they were data. We illustrate the impact and essence of our framework along a concrete, realistic (business) process modeling scenario: the development of Springer's browser-based Online Conference Service (OCS). The most advanced feature of our new framework allows one to combine online synthesis with the integration of the synthesized process into the running application. This ability leads to a particularly flexible way of implementing self-adaptation, and to a particularly concise and powerful way of achieving variability not only at design time, but also at runtime. Comment: In Proceedings Festschrift for Dave Schmidt, arXiv:1309.455
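
    The core idea, passing and executing processes as if they were data, can be sketched with plain higher-order functions; the names below are invented for illustration and this is not the authors' graphical framework.

        # Sketch: processes as first-class values. A second-order process receives a
        # (component) process as a parameter at run time and executes it in its control flow.
        from typing import Callable

        Process = Callable[[dict], dict]   # a process maps a case's data to updated data

        def review_paper(case: dict) -> dict:
            case["reviews"] = case.get("reviews", 0) + 1
            return case

        def synthesize_review_round(reviewers: int) -> Process:
            """Synthesizes a review-round process; the result can itself be passed around."""
            def run(case: dict) -> dict:
                for _ in range(reviewers):
                    case = review_paper(case)
                return case
            return run

        def conference_service(case: dict, review_round: Process) -> dict:
            """Base process that takes the review-round process as a run-time input."""
            case["submitted"] = True
            return review_round(case)

        print(conference_service({"paper": "P42"}, synthesize_review_round(3)))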

    Composition and Self-Adaptation of Service-Based Systems with Feature Models

    The adoption of mechanisms for reusing software in pervasive systems has not yet become standard practice. This is because the use of pre-existing software requires the selection, composition, and adaptation of prefabricated software parts, as well as the management of some complex problems such as guaranteeing high levels of efficiency and safety in critical domains. In addition to the wide variety of services, pervasive systems are composed of many networked heterogeneous devices with embedded software. In this work, we promote the safe reuse of services in service-based systems using two complementary technologies, Service-Oriented Architecture and Software Product Lines. In order to do this, we extend both the service discovery and composition processes defined in the DAMASCo framework, which currently does not deal with the service variability that characterizes pervasive systems. We use feature models to represent the variability and to safely self-adapt the services during composition, taking context changes into consideration. We illustrate our proposal with a case study related to the driving domain of an Intelligent Transportation System, handling the context information of the environment. Work partially supported by the projects TIN2008-05932, TIN2008-01942, TIN2012-35669, TIN2012-34840 and CSD2007-0004 funded by Spanish Ministry of Economy and Competitiveness and FEDER; P09-TIC-05231 and P11-TIC-7659 funded by Andalusian Government; and FP7-317731 funded by EU. Universidad de Málaga. Campus de Excelencia Internacional Andalucía Tec
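
    A hedged sketch of feature-model-driven self-adaptation under context changes; the DAMASCo extension itself is not shown, and the variant names, features, and context attributes are illustrative only.

        # Sketch: pick a service variant (a feature selection) when the driving context changes.
        VARIANTS = {
            # feature selections per variant of a hypothetical route-guidance service
            "basic":   {"routing"},
            "traffic": {"routing", "live_traffic"},
            "safety":  {"routing", "live_traffic", "hazard_warnings"},
        }

        def adapt(context: dict) -> str:
            """Select the variant whose features fit the current driving context."""
            if context.get("visibility") == "low":
                return "safety"
            if context.get("congestion", 0) > 0.5:
                return "traffic"
            return "basic"

        for ctx in ({"congestion": 0.1}, {"congestion": 0.8}, {"visibility": "low"}):
            variant = adapt(ctx)
            print(ctx, "->", variant, VARIANTS[variant])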

    Managing Process Variants in the Process Life Cycle

    When designing process-aware information systems, often variants of the same process have to be specified. Each variant then constitutes an adjustment of a particular process to specific requirements building the process context. Current Business Process Management (BPM) tools do not adequately support the management of process variants. Usually, the variants have to be kept in separate process models, which leads to huge modeling and maintenance efforts. In particular, more fundamental process changes (e.g., changes of legal regulations) often require the adjustment of all process variants derived from the same process; i.e., the variants have to be adapted separately to meet the new requirements. This redundancy in modeling and adapting process variants is both time-consuming and error-prone. This paper presents the Provop approach, which provides a more flexible solution for managing process variants in the process life cycle. In particular, process variants can be configured out of a basic process following an operational approach; i.e., a specific variant is derived from the basic process by applying a set of well-defined change operations to it. Provop provides full process life cycle support and allows for flexible process configuration, resulting in a maintainable collection of process variants.
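
    A minimal sketch of the derivation idea, with invented activities and deliberately simplified change operations (Provop's actual operation set and context-based configuration are richer than this).

        # Sketch: derive a process variant from a basic process by applying change operations.
        BASIC_PROCESS = ["receive_claim", "assess_damage", "approve", "pay_out"]

        def apply_operations(process, operations):
            """Apply INSERT/DELETE/MODIFY operations to a copy of the basic process."""
            variant = list(process)
            for op in operations:
                if op["type"] == "DELETE":
                    variant.remove(op["activity"])
                elif op["type"] == "INSERT":
                    variant.insert(variant.index(op["after"]) + 1, op["activity"])
                elif op["type"] == "MODIFY":
                    variant[variant.index(op["activity"])] = op["new_activity"]
            return variant

        # Variant for high-value claims: add an expert review, replace automatic approval.
        high_value_ops = [
            {"type": "INSERT", "after": "assess_damage", "activity": "expert_review"},
            {"type": "MODIFY", "activity": "approve", "new_activity": "manual_approval"},
        ]
        print(apply_operations(BASIC_PROCESS, high_value_ops))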