
    Managing Process Variants in the Process Life Cycle

    When designing process-aware information systems, variants of the same process often have to be specified. Each variant then constitutes an adjustment of a particular process to specific requirements that build the process context. Current Business Process Management (BPM) tools do not adequately support the management of process variants. Usually, the variants have to be kept in separate process models, which leads to huge modeling and maintenance efforts. In particular, more fundamental process changes (e.g., changes of legal regulations) often require the adjustment of all process variants derived from the same process; i.e., the variants have to be adapted separately to meet the new requirements. This redundancy in modeling and adapting process variants is both time-consuming and error-prone. This paper presents the Provop approach, which provides a more flexible solution for managing process variants in the process life cycle. In particular, process variants can be configured out of a basic process following an operational approach; i.e., a specific variant is derived from the basic process by applying a set of well-defined change operations to it. Provop provides full process life cycle support and allows for flexible process configuration, resulting in a maintainable collection of process variants.
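
    As an illustration of the operational idea described above, the following minimal Python sketch derives a variant from a base process by applying change operations. The operation names and the list-based process representation are illustrative assumptions, not Provop's actual notation or tooling.

        # Illustrative sketch (not Provop's notation): a variant is derived from
        # a base process by applying a set of well-defined change operations.

        from dataclasses import dataclass, field
        from typing import Callable, List


        @dataclass
        class Process:
            """A process model reduced to an ordered list of activity names."""
            activities: List[str] = field(default_factory=list)


        def insert(activity: str, after: str) -> Callable[[Process], Process]:
            """Change operation: insert an activity after an existing one."""
            def apply(p: Process) -> Process:
                acts = list(p.activities)
                acts.insert(acts.index(after) + 1, activity)
                return Process(acts)
            return apply


        def delete(activity: str) -> Callable[[Process], Process]:
            """Change operation: remove an activity from the process."""
            def apply(p: Process) -> Process:
                return Process([a for a in p.activities if a != activity])
            return apply


        def derive_variant(base: Process, ops: List[Callable[[Process], Process]]) -> Process:
            """A variant is the base process with the change operations applied."""
            variant = base
            for op in ops:
                variant = op(variant)
            return variant


        base = Process(["admit patient", "examine", "treat", "discharge"])
        variant = derive_variant(base, [insert("lab test", after="examine"),
                                        delete("treat")])
        print(variant.activities)  # ['admit patient', 'examine', 'lab test', 'discharge']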

    Reducing the Barrier to Entry of Complex Robotic Software: a MoveIt! Case Study

    Developing robot-agnostic software frameworks involves synthesizing the disparate fields of robotic theory and software engineering while simultaneously accounting for large variability in hardware designs and control paradigms. As the capabilities of robotic software frameworks increase, so do the setup difficulty and learning curve for new users. If the barriers to entry for configuring and using the software on robots are too high, even the most powerful frameworks are useless. There is a growing need in robotic software engineering to help users get started with, and customize, a software framework as necessary for particular robotic applications. In this paper, a case study is presented of the best practices found for lowering the barrier to entry in the MoveIt! framework, an open-source tool for mobile manipulation in ROS, which allows users to 1) quickly get basic motion planning functionality with minimal initial setup, 2) automate its configuration and optimization, and 3) easily customize its components. A graphical interface that assists the user in configuring MoveIt! is the cornerstone of our approach, coupled with the use of an existing standardized robot model for input, automatically generated robot-specific configuration files, and a plugin-based architecture for extensibility. These best practices are summarized into a set of barrier-to-entry design principles applicable to other robotic software. The approaches for lowering the barrier to entry are evaluated by usage statistics and a user survey, and are compared against our design objectives for their effectiveness to users.
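
    To make the plugin-based architecture mentioned above concrete, here is a minimal Python sketch of a plugin registry that resolves a component by the name found in a generated configuration file. The class and function names are invented for illustration; this is not MoveIt!'s actual plugin API.

        # Illustrative sketch of a plugin-based architecture of the kind the
        # paper advocates for extensibility; not MoveIt!'s actual plugin API.

        from typing import Dict, Type

        PLANNER_PLUGINS: Dict[str, Type["PlannerPlugin"]] = {}


        def register_planner(name: str):
            """Class decorator that registers a planner plugin under a name."""
            def wrap(cls):
                PLANNER_PLUGINS[name] = cls
                return cls
            return wrap


        class PlannerPlugin:
            def plan(self, start, goal):
                raise NotImplementedError


        @register_planner("straight_line")
        class StraightLinePlanner(PlannerPlugin):
            def plan(self, start, goal):
                # Trivial placeholder "plan": just the two endpoints.
                return [start, goal]


        def load_planner(name: str) -> PlannerPlugin:
            """Resolve a planner by the name found in a generated config file."""
            return PLANNER_PLUGINS[name]()


        planner = load_planner("straight_line")
        print(planner.plan((0, 0), (1, 2)))  # [(0, 0), (1, 2)]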

    Cloud WorkBench - Infrastructure-as-Code Based Cloud Benchmarking

    To optimally deploy their applications, users of Infrastructure-as-a-Service clouds need to evaluate the costs and performance of different combinations of cloud configurations to find out which combination provides the best service level for their specific application. Unfortunately, benchmarking cloud services is cumbersome and error-prone. In this paper, we propose an architecture and concrete implementation of a cloud benchmarking Web service, which fosters the definition of reusable and representative benchmarks. In contrast to existing work, our system is based on the notion of Infrastructure-as-Code, a state-of-the-art concept for defining IT infrastructure in a reproducible, well-defined, and testable way. We demonstrate our system with an illustrative case study, in which we measure and compare the disk I/O speeds of different instance and storage types in Amazon EC2.
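
    As a rough illustration of the Infrastructure-as-Code idea, the sketch below declares a benchmark as a reusable, testable definition and runs its workload. The class names, fields, and local-only execution are assumptions made for illustration and do not reflect Cloud WorkBench's actual interface.

        # Minimal sketch of the Infrastructure-as-Code idea: a benchmark is a
        # declarative, reusable definition rather than a manual setup. The
        # classes and fields are illustrative, not Cloud WorkBench's interface.

        from dataclasses import dataclass
        from typing import Callable, Dict


        @dataclass(frozen=True)
        class BenchmarkDefinition:
            provider: str                   # e.g. "aws-ec2"
            instance_type: str              # e.g. "m1.small"
            storage_type: str               # e.g. "ebs"
            workload: Callable[[], float]   # returns a metric, e.g. disk write MB/s


        def sequential_write_speed() -> float:
            """Placeholder workload; a real benchmark would run on the provisioned VM."""
            import os, tempfile, time
            data = os.urandom(1024 * 1024)  # 1 MB block
            start = time.perf_counter()
            with tempfile.NamedTemporaryFile() as f:
                for _ in range(8):          # write 8 MB in total
                    f.write(data)
                f.flush()
                os.fsync(f.fileno())
            return 8 / (time.perf_counter() - start)  # MB/s


        def run(benchmark: BenchmarkDefinition) -> Dict[str, float]:
            # A real system would first provision the declared infrastructure and
            # then execute the workload remotely; here it only runs locally.
            return {"disk_write_mb_s": benchmark.workload()}


        bench = BenchmarkDefinition("aws-ec2", "m1.small", "ebs", sequential_write_speed)
        print(run(bench))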

    Variability and Evolution in Systems of Systems

    In this position paper we (1) discuss two particular aspects of Systems of Systems, namely variability and evolution, (2) argue that concepts from Product Line Engineering and Software Evolution are relevant to Systems of Systems Engineering, and (3) conversely, argue that concepts from Systems of Systems Engineering can be helpful in Product Line Engineering and Software Evolution. Hence, we argue that an exchange of concepts between the disciplines would be beneficial. Comment: In Proceedings AiSoS 2013, arXiv:1311.319

    Automated analysis of feature models: Quo vadis?

    Feature models have been used since the 1990s to describe software product lines as a way of reusing common parts in a family of software systems. In 2010, a systematic literature review was published summarizing the advances and laying the foundations of the area of Automated Analysis of Feature Models (AAFM). Since then, different studies have applied AAFM in different domains. In this paper, we provide an overview of the evolution of this field since 2010 by performing a systematic mapping study over 423 primary sources. We found six variability facets where AAFM is being applied that define the current tendencies: product configuration and derivation; testing and evolution; reverse engineering; multi-model variability analysis; variability modelling; and variability-intensive systems. We also confirmed that there is a lack of industrial evidence in most cases. Finally, we present where and when the papers have been published, and which authors and institutions are contributing to the field. We observed that the field's maturity is shown by the growing number of journal publications over the years as well as by the diversity of conferences and workshops where papers are published. We also suggest synergies with other areas, such as cloud or mobile computing, that can motivate further research in the future. Ministerio de Economía y Competitividad TIN2015-70560-R; Junta de Andalucía TIC-186
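
    As a small, self-contained illustration of what an automated analysis operation on a feature model looks like, the following Python sketch enumerates the valid products of a toy model by brute force. The feature names and constraints are invented, and real AAFM tooling typically delegates such reasoning to SAT, CSP, or BDD solvers.

        # Toy illustration of an AAFM operation: enumerating the valid
        # configurations ("products") of a small feature model by brute force.

        from itertools import product

        FEATURES = ["mobile", "gps", "screen_basic", "screen_hd"]


        def is_valid(cfg: dict) -> bool:
            # Root feature is mandatory.
            if not cfg["mobile"]:
                return False
            # Exactly one screen alternative must be selected.
            if cfg["screen_basic"] == cfg["screen_hd"]:
                return False
            # Cross-tree constraint: gps requires the HD screen.
            if cfg["gps"] and not cfg["screen_hd"]:
                return False
            return True


        valid = [dict(zip(FEATURES, bits))
                 for bits in product([False, True], repeat=len(FEATURES))
                 if is_valid(dict(zip(FEATURES, bits)))]

        print(len(valid), "valid products")   # 3 valid products
        for cfg in valid:
            print({f for f, on in cfg.items() if on})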

    PaPaS: A Portable, Lightweight, and Generic Framework for Parallel Parameter Studies

    The current landscape of scientific research is widely based on modeling and simulation, typically with complexity in the simulation's flow of execution and parameterization properties. Execution flows are not necessarily straightforward, since they may need multiple processing tasks and iterations. Furthermore, parameter and performance studies are common approaches used to characterize a simulation, often requiring the traversal of a large parameter space. High-performance computers offer practical resources at the expense of users handling the setup, submission, and management of jobs. This work presents the design of PaPaS, a portable, lightweight, and generic workflow framework for conducting parallel parameter and performance studies. Workflows are defined using parameter files based on a keyword-value pair syntax, thus sparing the user the overhead of creating complex scripts to manage the workflow. A parameter set consists of any combination of environment variables, files, partial file contents, and command-line arguments. PaPaS is being developed in Python 3 with support for distributed parallelization using SSH, batch systems, and C++ MPI. The PaPaS framework runs as user processes and can be used in single-node, multi-node, and multi-tenant computing systems. An example simulation using the BehaviorSpace tool from NetLogo and a matrix multiply using OpenMP are presented as parameter and performance studies, respectively. The results demonstrate that the PaPaS framework offers a simple method for defining and managing parameter studies while increasing resource utilization. Comment: 8 pages, 6 figures, PEARC '18: Practice and Experience in Advanced Research Computing, July 22--26, 2018, Pittsburgh, PA, US
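
    To make the parameter-study idea concrete, the sketch below expands a keyword-value description into the Cartesian product of concrete runs. The dictionary format, keys, and command are illustrative assumptions, not PaPaS's actual parameter file syntax.

        # Sketch of the core idea of a parameter study: a keyword-value
        # description is expanded into the Cartesian product of concrete runs.
        # The format, keys, and command below are illustrative, not PaPaS syntax.

        from itertools import product
        from typing import Dict, Iterator, List

        # Parameter sets may mix environment variables and command-line arguments.
        PARAMS: Dict[str, List[str]] = {
            "OMP_NUM_THREADS": ["1", "2", "4"],   # environment variable
            "--matrix-size":   ["512", "1024"],   # command-line argument
        }


        def expand(params: Dict[str, List[str]]) -> Iterator[Dict[str, str]]:
            """Yield one dictionary per point in the parameter space."""
            keys = list(params)
            for values in product(*(params[k] for k in keys)):
                yield dict(zip(keys, values))


        for point in expand(PARAMS):
            env = {k: v for k, v in point.items() if not k.startswith("--")}
            args = " ".join(f"{k} {v}" for k, v in point.items() if k.startswith("--"))
            print(env, "./matmul " + args)  # 3 x 2 = 6 runs to submit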

    Exploring the spectroscopic diversity of type Ia supernovae with DRACULA: a machine learning approach

    The existence of multiple subclasses of type Ia supernovae (SNe Ia) has been the subject of great debate in the last decade. One major challenge inevitably met when trying to infer the existence of one or more subclasses is the time-consuming, and subjective, process of subclass definition. In this work, we show how machine learning tools facilitate the identification of subtypes of SNe Ia through the establishment of a hierarchical group structure in the continuous space of spectral diversity formed by these objects. Using Deep Learning, we were able to perform such identification in a 4-dimensional feature space (+1 for time evolution), while standard Principal Component Analysis barely achieves similar results using 15 principal components. This is evidence that the progenitor system and the explosion mechanism can be described by a small number of initial physical parameters. As a proof of concept, we show that our results are in close agreement with a previously suggested classification scheme and that our proposed method can grasp the main spectral features behind the definition of such subtypes. This allows the confirmation of line velocity as a first-order effect in the determination of SN Ia subtypes, followed by 91bg-like events. Given the expected data deluge in the forthcoming years, our proposed approach is essential to allow a quick and statistically coherent identification of SN Ia subtypes (and outliers). All tools used in this work were made publicly available in the Python package Dimensionality Reduction And Clustering for Unsupervised Learning in Astronomy (DRACULA) and can be found within COINtoolbox (https://github.com/COINtoolbox/DRACULA). Comment: 16 pages, 12 figures, accepted for publication in MNRA
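
    As a generic sketch of the pipeline described above (dimensionality reduction followed by hierarchical grouping), the snippet below applies PCA and agglomerative clustering from scikit-learn to random stand-in data. It is not DRACULA's API, and the paper's reduction step relies on Deep Learning rather than PCA.

        # Generic sketch: reduce spectra to a low-dimensional feature space,
        # then build a hierarchical group structure. Not DRACULA's API.

        import numpy as np
        from sklearn.decomposition import PCA
        from sklearn.cluster import AgglomerativeClustering

        rng = np.random.default_rng(0)
        # Stand-in for preprocessed spectra: 200 objects x 1000 wavelength bins.
        spectra = rng.normal(size=(200, 1000))

        features = PCA(n_components=4).fit_transform(spectra)        # 4-D feature space
        labels = AgglomerativeClustering(n_clusters=3).fit_predict(features)

        print(np.bincount(labels))  # objects per candidate subtype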

    Modeling Business Process Variability

    This master thesis presents research findings on business process variability modeling. Its main goal is to analyze the inherent problems of business process variability and to solve them simply, innovatively, and effectively. To achieve this goal, process variability is defined by analyzing the scientific literature, its main problems are identified, and it is illustrated using a healthcare running example: process variability is classified into process variability within the domain space and over time. These two forms of process variability lead, respectively, to process variability modeling and process model evolution problems. After defining the main problems inherent to process variability, the focus of this research project is defined: solving process variability modeling problems.

    First, current business process modeling languages are evaluated to assess the effectiveness of their respective modeling concepts when modeling process variability, using a newly created set of evaluation criteria and the healthcare running example. The following business process modeling languages are evaluated: Event-driven Process Chains (EPC), the Business Process Modeling Notation (BPMN), and Configurable EPC (C-EPC). Business process variability modeling and software product line engineering face similar problems; therefore, the variability modeling concepts developed by software product line engineering are analyzed. Feature diagrams and software configuration management are the main variability management concepts provided by software product line engineering. Applying these variability management concepts to model process variability meant combining them with existing business process modeling languages: Riebisch feature diagrams are combined with C-EPC to form Feature-EPC, while applying software configuration management meant merging Change Oriented Versioning with basic EPC to create COV-EPC, and merging the Proteus Configuration Language with basic EPC to design PCL-EPC. Finally, these newly created business process modeling languages are also evaluated using the newly designed evaluation criteria and the healthcare running example.

    Neither EPC nor BPMN is suited to model business process variability within the domain space. C-EPC provides explicit means to model business process variability; however, the process models tend to get big very fast. Furthermore, the syntax, the contextual constraints, and the semantics of the configuration requirements and guidelines used to configure C-EPC process models are unclear. Feature-EPC improves on C-EPC with domain modeling capability and clearly defined configuration rules: their syntax, contextual constraints, and semantics have been defined using a context-free grammar in Backus-Naur form. Furthermore, consistent combinations of features and configuration rules are ensured using constraints and a conflict resolution algorithm, respectively. However, Feature-EPC and C-EPC suffer from the same weakness: large configurable process models. COV-EPC and PCL-EPC solve the problem of large configurable process models. COV-EPC ensures consistent combinations of options and configuration rules using validities and a conflict resolution algorithm, respectively. PCL-EPC guarantees consistent combinations of process fragments by means of a PCL specification. A loose illustration of this feature-based configuration idea is sketched below.
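
    The sketch below maps selected features to process fragments of a configurable model and rejects inconsistent feature combinations. The activities, rules, and exclusion constraint are invented for illustration and do not correspond to Feature-EPC's actual notation or its conflict resolution algorithm.

        # Illustrative sketch (not Feature-EPC's notation): configuration rules
        # map selected features to process fragments, and a constraint check
        # rejects inconsistent feature combinations.

        BASE = ["register patient", "examine", "discharge"]

        # Configuration rules: feature -> (fragment to insert, anchor activity)
        RULES = {
            "lab_test":  ("perform lab test", "examine"),
            "radiology": ("take x-ray", "examine"),
        }

        # Cross-feature constraint: these two features exclude each other.
        EXCLUDES = {("lab_test", "radiology")}


        def configure(features: set) -> list:
            for a, b in EXCLUDES:
                if a in features and b in features:
                    raise ValueError(f"inconsistent selection: {a} excludes {b}")
            process = list(BASE)
            for feature in features:
                fragment, anchor = RULES[feature]
                process.insert(process.index(anchor) + 1, fragment)
            return process


        print(configure({"lab_test"}))
        # ['register patient', 'examine', 'perform lab test', 'discharge']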