
    Context-Aware Querying and Injection of Process Fragments in Process-Aware Information Systems

    Cyber-physical systems (CPS) are often customized to meet customer needs and, hence, exhibit a large number of hardware/software configuration variants. Consequently, the processes deployed on a CPS need to be configured to the respective CPS variant. This includes both configuration at design time (i.e., before deploying the implemented processes on the CPS) and runtime configuration that takes the current context of the CPS into account. Such runtime process configuration is far from trivial; e.g., at certain points during process execution, one of several alternative process fragments may have to be selected and dynamically applied to the process at hand. Contemporary approaches focus on the design time configuration of processes, while neglecting runtime configuration to cope with process variability. In this paper, a generic approach enabling context-aware process configuration at runtime is presented. With the Process Query Language, process fragments can be flexibly selected from a process repository and then dynamically injected into running process instances depending on the respective contextual situation. The latter can be automatically derived from context factors, e.g., sensor data or configuration parameters of the given CPS. Altogether, the presented approach allows for a flexible configuration and late composition of process instances at runtime, as required in many application domains and scenarios.
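To illustrate the core idea, the following minimal Python sketch selects a process fragment from a repository based on context factors (e.g., sensor data) and injects it into a running process instance. The classes, the matching rule, and the example data are illustrative assumptions; the actual Process Query Language and injection mechanism are not modeled.

```python
# Minimal sketch: context-driven selection and injection of process fragments.
# All names are illustrative; the Process Query Language itself is not modeled here.
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class Fragment:
    name: str
    required_context: dict   # context values under which this fragment applies
    steps: list

@dataclass
class ProcessInstance:
    steps: list = field(default_factory=list)

    def inject(self, position: int, fragment: Fragment) -> None:
        """Splice the fragment's steps into the running instance at the given position."""
        self.steps[position:position] = fragment.steps

def select_fragment(repository: list, context: dict) -> Optional[Fragment]:
    """Return the first fragment whose required context matches the current context."""
    for fragment in repository:
        if all(context.get(k) == v for k, v in fragment.required_context.items()):
            return fragment
    return None

# Example: choose a cooling fragment when a (hypothetical) temperature sensor reports overheating.
repo = [
    Fragment("cooling", {"overheated": True}, ["pause line", "activate cooling"]),
    Fragment("default", {"overheated": False}, ["continue production"]),
]
context = {"overheated": True}                       # e.g., derived from sensor data
instance = ProcessInstance(["start", "press part", "end"])
chosen = select_fragment(repo, context)
if chosen is not None:
    instance.inject(2, chosen)                       # inject before the final step
print(instance.steps)
```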

    On the support of multi-perspective process models variability for smart environments

    Cloud service-based applications need to be adapted to serve multiple platforms and stakeholders. Atop such services, Smart Green Buildings are fostering a plethora of processes within their sustainability life-cycle. This introduces a number of challenges, such as how to support multiple perspectives of domain-specific variability and how to deal with large collections of related process variants. To tackle this, there is a need to handle multi-perspective variability for processes. This paper introduces an approach to manage multi-perspective process variability by means of a meta-model and a modeling methodology, representing the people and things variability perspectives separately in smart environments. Initial experimental results are also described, which are encouraging for managing highly complex variability models.
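As a rough illustration of what separately modeled variability perspectives could look like, here is a small Python sketch with a "people" and a "things" perspective. All class and attribute names are assumptions and do not reflect the paper's actual meta-model.

```python
# Sketch of separately modeled variability perspectives for a smart environment.
# Class and attribute names are assumptions, not the paper's meta-model.
from dataclasses import dataclass, field
from typing import Dict, List, Tuple

@dataclass
class VariationPoint:
    name: str
    variants: List[str]

@dataclass
class VariabilityPerspective:
    name: str                                   # e.g., "people" or "things"
    variation_points: List[VariationPoint] = field(default_factory=list)

@dataclass
class MultiPerspectiveVariabilityModel:
    perspectives: List[VariabilityPerspective] = field(default_factory=list)

    def resolve(self, choices: Dict[Tuple[str, str], str]) -> Dict[Tuple[str, str], str]:
        """Bind one variant per variation point across all perspectives."""
        resolved = {}
        for p in self.perspectives:
            for vp in p.variation_points:
                variant = choices.get((p.name, vp.name), vp.variants[0])
                assert variant in vp.variants, f"{variant} not allowed for {vp.name}"
                resolved[(p.name, vp.name)] = variant
        return resolved

people = VariabilityPerspective("people", [VariationPoint("notification", ["email", "mobile push"])])
things = VariabilityPerspective("things", [VariationPoint("lighting", ["LED", "dimmable LED"])])
model = MultiPerspectiveVariabilityModel([people, things])
print(model.resolve({("people", "notification"): "mobile push"}))
```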

    Feature-based configuration management of reconfigurable cloud applications

    A recent trend in the software industry is to provide enterprise applications in the cloud that are accessible everywhere and on any device. As the market is highly competitive, customer orientation plays an important role. Companies therefore start providing applications as a service, which are directly configurable by customers in an online self-service portal. However, customer configurations are usually deployed in separate application instances. Thus, each instance is provisioned manually and must be maintained separately. Due to the induced redundancy in software and hardware components, resources are not optimally utilized. A multi-tenant aware application architecture eliminates redundancy, as a single application instance serves multiple customers renting the application. The combination of a configuration self-service portal with a multi-tenant aware application architecture allows serving customers just-in-time by automating the deployment process. Furthermore, self-service portals improve application scalability in terms of functionality, as customers can adapt application configurations themselves according to their changing demands. However, the configurability of current multi-tenant aware applications is rather limited. Solutions implementing variability are mainly developed for a single business case and cannot be directly transferred to other application scenarios. The goal of this thesis is to provide a generic framework for handling application variability, automating configuration and reconfiguration processes essential for self-service portals, while exploiting the advantages of multi-tenancy. A promising solution to achieve this goal is the application of software product line methods. In software product line research, feature models are in wide use to express variability of software-intensive systems on an abstract level, as features are a common notion in software engineering and prominent in matching customer requirements against product functionality. This thesis introduces a framework for feature-based configuration management of reconfigurable cloud applications. The contribution is three-fold. First, a development strategy for flexible multi-tenant aware applications is proposed, capable of integrating customer configurations at application runtime. Second, a generic method for defining concern-specific configuration perspectives is contributed. Perspectives can be tailored for certain application scopes and facilitate the handling of numerous configuration options. Third, a novel method is proposed to model and automate structured configuration processes that adapt to varying stakeholders and reduce configuration redundancies. To this end, configuration processes are modeled as workflows and adapted by applying rewrite rules triggered by stakeholder events. The applicability of the proposed concepts is evaluated in different case studies in the industrial and academic context. Summarizing, the introduced framework for feature-based configuration management is a foundation for automating configuration and reconfiguration processes of multi-tenant aware cloud applications, while enabling application scalability in terms of functionality.
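The following Python sketch illustrates one building block described in the abstract: validating a tenant's feature selection against a feature model and applying it at runtime without redeployment. The feature model, constraint encoding, and class names are illustrative assumptions, not the framework proposed in the thesis.

```python
# Sketch: per-tenant feature configuration applied at runtime in a multi-tenant app.
# Names (FeatureModel, Tenant) and the example features are illustrative assumptions.
from dataclasses import dataclass, field
from typing import Dict, Set

@dataclass
class FeatureModel:
    features: Set[str]
    requires: Dict[str, str] = field(default_factory=dict)   # feature -> required feature
    excludes: Dict[str, str] = field(default_factory=dict)   # feature -> mutually exclusive feature

    def is_valid(self, selection: Set[str]) -> bool:
        if not selection <= self.features:
            return False
        for f in selection:
            if f in self.requires and self.requires[f] not in selection:
                return False
            if f in self.excludes and self.excludes[f] in selection:
                return False
        return True

@dataclass
class Tenant:
    name: str
    selection: Set[str] = field(default_factory=set)

    def reconfigure(self, model: FeatureModel, new_selection: Set[str]) -> bool:
        """Apply a configuration change from the self-service portal without redeployment."""
        if model.is_valid(new_selection):
            self.selection = new_selection
            return True
        return False

model = FeatureModel(
    features={"reporting", "pdf_export", "csv_export", "sso"},
    requires={"pdf_export": "reporting"},
    excludes={"pdf_export": "csv_export"},
)
tenant = Tenant("acme", {"reporting"})
print(tenant.reconfigure(model, {"reporting", "pdf_export", "sso"}))   # True
print(tenant.reconfigure(model, {"pdf_export"}))                       # False: missing 'reporting'
```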

    Unified GUI adaptation in Dynamic Software Product Lines

    In the modern world of mobile computing and ubiquitous technology, society is able to interact with technology in new and fascinating ways. To help provide an improved user experience, mobile software should be able to adapt itself to suit the user. By monitoring context information about the environment and user, the application can better meet the dynamic requirements of the user. Similarly, programs can require different static changes to suit static requirements. This program commonality and variability can benefit from the use of Software Product Line Engineering, reusing artefacts over a set of similar programs, called a Software Product Line (SPL). Historically, SPLs are limited to handling static compile-time adaptations. Dynamic Software Product Lines (DSPLs), however, allow the program configuration to change at runtime, enabling compile-time and runtime adaptation to be developed in a single unified approach. While DSPLs currently provide methods for dealing with program logic adaptations, variability in the Graphical User Interface (GUI) has largely been neglected. Due to this, depending on the intended time to apply GUI adaptation, different approaches are required. The main goal of this work is to extend a unified representation of variability to the GUI, whereby GUI adaptation can be applied at compile time and at runtime. In this thesis, an approach to handling GUI adaptation within DSPLs is presented, providing a unified representation of GUI variability. The approach is based on Feature-Oriented Programming (FOP), enabling developers to implement GUI adaptation along with program logic in feature modules. This approach is applied to Document-Oriented GUIs, also known as GUI description languages. In addition to GUI unification, we present an approach to unifying context and feature modelling, and handling context dynamically at runtime, as features of the DSPL. This unification can allow for more dynamic and self-aware context acquisition. To validate our approach, we implemented tool support and middleware prototypes. These artefacts are then tested using a combination of scenarios and scalability tests. This combination first demonstrates the versatility and relevance of the different aspects of the approach. It further brings insight into how the approach scales with DSPL size.
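A minimal Python sketch of the underlying idea: composing a document-oriented GUI description from feature modules so that the same composition step can run at compile time or be re-run at runtime. The refinement mechanism is a strong simplification of feature-oriented programming, and all module and widget names are assumptions.

```python
# Sketch: feature modules refining a document-oriented GUI description.
# A strong simplification of FOP; module and widget names are illustrative assumptions.
from typing import Callable, Dict, List

GuiDocument = Dict[str, list]           # e.g., screen name -> list of widget descriptions

def base_feature(gui: GuiDocument) -> None:
    gui["main"] = [{"widget": "button", "id": "start"}]

def night_mode_feature(gui: GuiDocument) -> None:
    # Refinement: add a toggle and restyle existing widgets.
    gui["main"].append({"widget": "switch", "id": "night_mode"})
    for w in gui["main"]:
        w["theme"] = "dark"

FEATURE_MODULES: Dict[str, Callable[[GuiDocument], None]] = {
    "base": base_feature,
    "night_mode": night_mode_feature,
}

def compose(active_features: List[str]) -> GuiDocument:
    """Compose the GUI description from the currently active features.
    The same composition can run at build time or be re-run at runtime
    when the DSPL reconfigures itself (e.g., triggered by context)."""
    gui: GuiDocument = {}
    for name in active_features:
        FEATURE_MODULES[name](gui)
    return gui

print(compose(["base"]))
print(compose(["base", "night_mode"]))   # runtime reconfiguration adds the refinement
```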

    Model-based Quality Assurance of Cyber-Physical Systems with Variability in Space, over Time and at Runtime

    Cyber-physical systems (CPS) are frequently characterized by three essential properties: CPS perform complex computations, CPS conduct control tasks involving continuous data- and signal-processing, and CPS are (parts of) distributed, and even mobile, communication systems. In addition, modern software systems like CPS have to cope with ever-growing extents of variability, namely variability in space by means of predefined configuration options (e.g., software product lines), variability at runtime by means of preplanned reconfigurations (e.g., runtime-adaptive systems), and variability over time by means of initially unforeseen updates to new versions (e.g., software evolution). Finally, depending on the particular application domain, CPS often constitute safety- and mission-critical parts of socio-technical systems. Thus, novel quality-assurance methodologies are required to systematically cope with the interplay between the different CPS characteristics on the one hand, and the different dimensions of variability on the other hand. This thesis gives an overview of recent research and open challenges in model-based specification and quality assurance of CPS in the presence of variability. The main focus of this thesis is on computation and communication aspects of CPS, utilizing evolving dynamic software product lines as engineering methodology and model-based testing as quality-assurance technique. The research is illustrated and evaluated by means of case studies from different application domains.
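Purely as an illustration of combining variability with model-based testing, the sketch below selects test cases affected by a change across several configurations. The feature-to-test mapping and the configurations are invented examples, not taken from the thesis.

```python
# Sketch: regression-test selection across the three variability dimensions named in the
# abstract (space: configurations, time: versions, runtime: reconfigurations). Illustrative only.
from itertools import product
from typing import Dict, List, Tuple

def affected_tests(test_map: Dict[str, List[str]], changed_features: List[str]) -> List[str]:
    """Select the model-based test cases linked to features touched by a change."""
    return sorted({t for f in changed_features for t in test_map.get(f, [])})

# Feature -> derived test cases (labels standing in for tests derived from a behavioural model).
TEST_MAP = {"braking": ["t_brake_latency", "t_brake_signal"], "telemetry": ["t_uplink"]}

# Variability in space: which configurations exist.
CONFIGS: List[Tuple[str, ...]] = [c for c in product(["braking"], ["telemetry", ""]) if any(c)]

# Variability over time / at runtime: a feature changed in a new version or reconfiguration.
changed = ["telemetry"]
for cfg in CONFIGS:
    relevant = [f for f in changed if f in cfg]
    print(cfg, "->", affected_tests(TEST_MAP, relevant) or "no re-test needed")
```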

    Towards Context-aware Process Guidance in Cyber-Physical Systems with Augmented Reality

    Assembly, configuration, maintenance, and repair processes in cyber-physical systems (e.g., a press line in a plant) comprise a multitude of complex tasks, whose execution needs to be controlled, coordinated and monitored. Amongst others, process-centric guidance of users (e.g., service operators) is required, taking the high variability in the assembly of cyber-physical systems (e.g., press line variability) into account. Moreover, the tasks to be performed along these processes may be related to physical components, sensors and actuators, which need to be properly recognized, integrated and operated. To digitize cyber-physical processes and guide users in a process-centric way, we therefore suggest integrating process management technology, sensor/actuator interfaces, and augmented reality techniques. The paper discusses fundamental requirements for such an integration and presents an approach for process-centric user guidance that combines context and process management with augmented reality enhanced tasks. For evaluation purposes, we analyzed the cyber-physical processes of pharmaceutical packaging machines and implemented selected ones based on the approach. Overall, we are able to demonstrate the usefulness of context-aware process management for the flexible support of cyber-physical processes in the Industrial Internet of Things.
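The sketch below illustrates the flavor of such process-centric guidance in Python: each task is gated by a context precondition read from sensors and carries an AR overlay reference. Task structure, sensor names, and the AR payload are illustrative assumptions, not the paper's implementation.

```python
# Sketch: a guidance step gated by sensor context, with an AR hint attached.
# Step structure, sensor names, and the AR payload are illustrative assumptions.
from dataclasses import dataclass
from typing import Callable, Dict, List

@dataclass
class GuidedTask:
    description: str
    ar_overlay: str                                  # e.g., a model/marker shown to the operator
    precondition: Callable[[Dict[str, float]], bool]

def run_guidance(tasks: List[GuidedTask], read_sensors: Callable[[], Dict[str, float]]) -> None:
    for task in tasks:
        context = read_sensors()                     # current context, e.g., from machine sensors
        if not task.precondition(context):
            print(f"BLOCKED: '{task.description}' (context: {context})")
            continue
        print(f"SHOW AR OVERLAY '{task.ar_overlay}' -> {task.description}")

tasks = [
    GuidedTask("Open safety cover", "highlight_cover.glb",
               precondition=lambda c: c["line_speed"] == 0.0),
    GuidedTask("Replace sealing unit", "sealing_unit_steps.glb",
               precondition=lambda c: c["temperature"] < 40.0),
]
run_guidance(tasks, read_sensors=lambda: {"line_speed": 0.0, "temperature": 35.5})
```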

    Scalable Distributed DNN Training using TensorFlow and CUDA-Aware MPI: Characterization, Designs, and Performance Evaluation

    TensorFlow has been the most widely adopted Machine/Deep Learning framework. However, little exists in the literature that provides a thorough understanding of the capabilities which TensorFlow offers for the distributed training of large ML/DL models that need computation and communication at scale. Most commonly used distributed training approaches for TF can be categorized as follows: 1) Google Remote Procedure Call (gRPC), 2) gRPC+X: X=(InfiniBand Verbs, Message Passing Interface, and GPUDirect RDMA), and 3) No-gRPC: Baidu Allreduce with MPI, Horovod with MPI, and Horovod with NVIDIA NCCL. In this paper, we provide an in-depth performance characterization and analysis of these distributed training approaches on various GPU clusters, including the Piz Daint system (No. 6 on the Top500 list). We perform experiments to gain novel insights along the following vectors: 1) Application-level scalability of DNN training, 2) Effect of batch size on scaling efficiency, 3) Impact of the MPI library used for No-gRPC approaches, and 4) Type and size of DNN architectures. Based on these experiments, we present two key insights: 1) Overall, No-gRPC designs achieve better performance compared to gRPC-based approaches for most configurations, and 2) The performance of No-gRPC is heavily influenced by the gradient aggregation using Allreduce. Finally, we propose a truly CUDA-Aware MPI Allreduce design that exploits CUDA kernels and pointer caching to perform large reductions efficiently. Our proposed designs offer 5-17X better performance than NCCL2 for small and medium messages, and reduce latency by 29% for large messages. The proposed optimizations help Horovod-MPI to achieve approximately 90% scaling efficiency for ResNet-50 training on 64 GPUs. Further, Horovod-MPI achieves 1.8X and 3.2X higher throughput than the native gRPC method for ResNet-50 and MobileNet, respectively, on the Piz Daint cluster.
    Comment: 10 pages, 9 figures, submitted to IEEE IPDPS 2019 for peer review.
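For context, here is a minimal sketch of the Horovod-with-MPI style of data-parallel training that the paper benchmarks, using the Horovod Keras API. The model, learning-rate scaling, and placeholder data are arbitrary choices for illustration; this is not the authors' benchmark code.

```python
# Sketch of data-parallel training with Horovod, one of the No-gRPC approaches the paper studies.
# Launch with e.g. `mpirun -np 4 python train.py`; model and dataset below are placeholders.
import tensorflow as tf
import horovod.tensorflow.keras as hvd

hvd.init()                                              # one process per GPU (MPI rank)
gpus = tf.config.list_physical_devices("GPU")
if gpus:
    tf.config.set_visible_devices(gpus[hvd.local_rank()], "GPU")

model = tf.keras.applications.ResNet50(weights=None, classes=1000)

# Scale the learning rate with the number of workers, as is common for large-batch training.
opt = tf.keras.optimizers.SGD(learning_rate=0.0125 * hvd.size(), momentum=0.9)
# Wrap the optimizer so gradients are averaged with Allreduce (MPI or NCCL underneath).
opt = hvd.DistributedOptimizer(opt)

model.compile(loss="sparse_categorical_crossentropy", optimizer=opt)

callbacks = [
    # Broadcast initial variables from rank 0 so all workers start identically.
    hvd.callbacks.BroadcastGlobalVariablesCallback(0),
]

# Placeholder random data; in practice each rank reads its own shard of the dataset.
dataset = tf.data.Dataset.from_tensor_slices(
    (tf.random.uniform([32, 224, 224, 3]),
     tf.random.uniform([32], maxval=1000, dtype=tf.int64))
).batch(8).repeat()

model.fit(dataset, steps_per_epoch=10, epochs=1, callbacks=callbacks,
          verbose=1 if hvd.rank() == 0 else 0)
```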

    Automated analysis of feature models: Quo vadis?

    Feature models have been used since the 1990s to describe software product lines as a way of reusing common parts in a family of software systems. In 2010, a systematic literature review was published summarizing the advances and settling the basis of the area of Automated Analysis of Feature Models (AAFM). Since then, different studies have applied the AAFM in different domains. In this paper, we provide an overview of the evolution of this field since 2010 by performing a systematic mapping study considering 423 primary sources. We found six different variability facets where the AAFM is being applied that define the tendencies: product configuration and derivation; testing and evolution; reverse engineering; multi-model variability analysis; variability modelling; and variability-intensive systems. We also confirmed that there is a lack of industrial evidence in most of the cases. Finally, we report where and when the papers have been published and which authors and institutions are contributing to the field. We observed that the field's maturity is evidenced by the increasing number of journal publications over the years, as well as by the diversity of conferences and workshops where papers are published. We also suggest synergies with other areas, such as cloud or mobile computing, that can motivate further research in the future.
    Ministerio de Economía y Competitividad TIN2015-70560-R; Junta de Andalucía TIC-186
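To make the notion of automated analysis concrete, the sketch below performs two classic AAFM operations (counting the products and detecting dead features) on a toy feature model by brute-force enumeration rather than the SAT/BDD/CSP solvers typically used in the field; the model itself is invented for illustration.

```python
# Sketch of two classic AAFM operations (number of products, dead-feature detection) on a
# toy feature model, using brute-force enumeration instead of a SAT/BDD/CSP solver.
from itertools import product

FEATURES = ["mobile", "gps", "camera", "high_res", "basic"]

def satisfies(sel: set) -> bool:
    """Toy model: 'mobile' is the mandatory root; 'camera' and 'basic' are alternatives;
    'high_res' requires 'camera'."""
    if "mobile" not in sel:
        return False
    if ("camera" in sel) == ("basic" in sel):        # exactly one of the alternatives
        return False
    if "high_res" in sel and "camera" not in sel:
        return False
    return True

configs = [set(f for f, on in zip(FEATURES, bits) if on)
           for bits in product([False, True], repeat=len(FEATURES))]
valid = [c for c in configs if satisfies(c)]

print("number of products:", len(valid))
dead = [f for f in FEATURES if all(f not in c for c in valid)]
print("dead features:", dead or "none")
```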