Planning as Optimization: Dynamically Discovering Optimal Configurations for Runtime Situations
The large number of possible configurations of modern software-based systems,
combined with the large number of possible environmental situations of such
systems, prohibits enumerating all adaptation options at design time and
necessitates planning at run time to dynamically identify an appropriate
configuration for a situation. While numerous planning techniques exist, they
typically assume a detailed state-based model of the system and that the
situations that warrant adaptations are known. Both of these assumptions can be
violated in complex, real-world systems. As a result, adaptation planning must
rely on simple models that capture what can be changed (input parameters) and
observed in the system and environment (output and context parameters). We
therefore propose planning as optimization: the use of optimization strategies
to discover optimal system configurations at runtime for each distinct
situation that is also dynamically identified at runtime. We apply our approach
to CrowdNav, an open-source traffic routing system with the characteristics of
a real-world system. We identify situations via clustering and conduct an
empirical study that compares Bayesian optimization and two types of
evolutionary optimization (NSGA-II and novelty search) in CrowdNav.
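The abstract's two-step idea — identify the current situation at runtime, then optimize the system's input parameters for that situation — can be sketched in miniature. The following is a hypothetical illustration only: `identify_situation`, `simulated_trip_duration`, and `plan_for_situation` are invented names, the "system" is a one-parameter toy stand-in for CrowdNav, and plain random search stands in for the Bayesian and evolutionary optimizers the paper actually compares.

```python
import random

def identify_situation(context, centroids):
    """Assign an observed context value (e.g. car count) to the nearest
    known situation centroid -- a stand-in for runtime clustering."""
    return min(centroids, key=lambda c: abs(c - context))

def simulated_trip_duration(route_randomization, car_count):
    # Toy cost model: heavier traffic favors more route randomization.
    # (Invented for illustration; not CrowdNav's real dynamics.)
    return (route_randomization - car_count / 1000.0) ** 2 + 5.0

def plan_for_situation(car_count, budget=200, seed=0):
    """Random-search optimization of the single input parameter for one
    situation; a real planner would use Bayesian or evolutionary search."""
    rng = random.Random(seed)
    best_cfg, best_cost = None, float("inf")
    for _ in range(budget):
        cfg = rng.uniform(0.0, 1.0)           # sample a candidate configuration
        cost = simulated_trip_duration(cfg, car_count)
        if cost < best_cost:                   # keep the best observed config
            best_cfg, best_cost = cfg, cost
    return best_cfg, best_cost

centroids = [300, 600, 900]                    # situations discovered by clustering
situation = identify_situation(740, centroids)
best_cfg, best_cost = plan_for_situation(situation)
```

The point of the sketch is the control flow: situations are discovered from observed context parameters, and a separate optimization run is kept per situation, so the planner needs no state-based model of the system — only its inputs and observed outputs.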
The Strategic Balance of Centralized Control and Localized Flexibility in Two-Tier ERP Systems
Two-tier ERP systems are an increasingly popular technology strategy for large, multinational enterprises. This paper examines how two-tier ERP enables organizations to balance centralized control and coordination at the corporate level with localized flexibility and responsiveness at the division/subsidiary level. The tier 1 ERP system handles core tasks like HR, finance, and IT using highly customized solutions tailored to the large corporate entity's needs, scale, and sophistication. This promotes enterprise-wide process standardization and centralized control. Meanwhile, the tier 2 ERP systems utilized by smaller subsidiaries and regional offices are less resource intensive and more configurable to address localized requirements. Tier 2 gives local divisions more control over their ERP to enable flexibility and responsiveness. This research analyzes the key drivers pushing large multinationals towards two-tier ERP, including managing complexity across global operations, enabling centralized coordination while allowing localization, integrating dispersed IT infrastructures, and controlling implementation costs. The paper explores the unique characteristics and benefits of tier 1 and tier 2 ERP systems in depth, providing concrete examples. Critical considerations for successfully deploying two-tier ERP are also examined, such as integration, change management, and striking the right balance between standardization and localization. The conclusion reached is that two-tier ERP delivers important synergistic benefits for large enterprises through its centralized/decentralized dual structure. The tier 1/tier 2 approach balances the key needs for coordination and control at the center with flexibility at the edges. However, careful planning is required for effective two-tier ERP implementation. The optimal balance between standardization and localization must be struck to fully realize the strategic potential. 
This research provides important insights for both academic study and real-world application of two-tier ERP systems.
Automated analysis of feature models: Quo vadis?
Feature models have been used since the 1990s to describe software product lines as a way of reusing common parts in a family of software systems. In 2010, a systematic literature review was published summarizing the advances and settling the basis of the area of Automated Analysis of Feature Models (AAFM). Since then, different studies have applied the AAFM in different domains. In this paper, we provide an overview of the evolution of this field since 2010 by performing a systematic mapping study considering 423 primary sources. We found six different variability facets where the AAFM is being applied that define the tendencies: product configuration and derivation; testing and evolution; reverse engineering; multi-model variability analysis; variability modelling; and variability-intensive systems. We also confirmed that there is a lack of industrial evidence in most of the cases. Finally, we present where and when the papers have been published and who the authors and institutions contributing to the field are. We observed that the field's maturity is shown by the growing number of journal publications over the years as well as the diversity of conferences and workshops where papers are published. We also suggest synergies with other areas, such as cloud or mobile computing, that can motivate further research in the future.
Ministerio de Economía y Competitividad TIN2015-70560-R; Junta de Andalucía TIC-186
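The core operation behind AAFM is translating a feature model's constraints into logic and reasoning over them (counting or enumerating valid products, detecting dead features, and so on). As a minimal, hypothetical sketch — the toy `Car` model and the brute-force enumeration below are invented for illustration; real AAFM tools compile the model to SAT, CSP, or BDD solvers instead:

```python
from itertools import product

# Toy feature model: Car (root) -> Engine (mandatory) -> {Electric, Gas}
# (xor group), plus optional Autopilot with a cross-tree constraint.
FEATURES = ["Car", "Engine", "Electric", "Gas", "Autopilot"]

def is_valid(cfg):
    """Check one feature selection against the model's constraints."""
    f = dict(zip(FEATURES, cfg))
    return (
        f["Car"]                                    # root is always selected
        and f["Engine"] == f["Car"]                 # Engine is mandatory under Car
        and f["Electric"] != f["Gas"]               # xor group: exactly one of the two
        and (not f["Autopilot"] or f["Electric"])   # cross-tree: Autopilot requires Electric
    )

# Brute-force product enumeration over all 2^|FEATURES| selections.
valid_products = [cfg for cfg in product([False, True], repeat=len(FEATURES))
                  if is_valid(cfg)]
```

Here the model admits three products (Gas without Autopilot, Electric with or without Autopilot); a solver-backed tool answers the same question without enumerating the exponential selection space.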
Improving Performance of M-to-N Processing and Data Redistribution in In Transit Analysis and Visualization
In an in transit setting, a parallel data producer, such as a numerical simulation, runs on one set of ranks M, while a data consumer, such as a parallel visualization application, runs on a different set of ranks N. One of the central challenges in this in transit setting is to determine the mapping of data from the set of M producer ranks to the set of N consumer ranks. This is a challenging problem for several reasons, such as the producer and consumer codes potentially having different scaling characteristics and different data models. The resulting mapping from M to N ranks can have a significant impact on aggregate application performance. In this work, we present an approach for performing this M-to-N mapping in a way that has broad applicability across a diversity of data producer and consumer applications. We evaluate its design and performance with a study that runs at high concurrency on a modern HPC platform. By leveraging design characteristics that facilitate an “intelligent” mapping from M to N, we observe that significant performance gains are possible in terms of several different metrics, including time-to-solution and the amount of data moved.
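To make the mapping problem concrete, here is a hypothetical sketch of the simplest baseline policy one could compare against: assign each of the M producer ranks to a consumer rank so that every consumer receives a contiguous, near-equal block of producers. The function name and policy are invented for illustration and are not the paper's "intelligent" mapping, which also accounts for the two codes' data models and scaling behavior.

```python
def block_mapping(m_producers, n_consumers):
    """Return a list where entry i is the consumer rank that receives
    data from producer rank i, using contiguous near-equal blocks."""
    mapping = []
    base, extra = divmod(m_producers, n_consumers)
    for consumer in range(n_consumers):
        # The first `extra` consumers take one additional producer each,
        # so per-consumer loads differ by at most one.
        count = base + (1 if consumer < extra else 0)
        mapping.extend([consumer] * count)
    return mapping

# Example: 10 producer ranks mapped onto 4 consumer ranks.
assignment = block_mapping(10, 4)   # blocks of sizes 3, 3, 2, 2
```

Even this naive layout illustrates the trade-off space the paper studies: contiguous blocks keep transfers local and balanced in count, but say nothing about how much data each producer actually holds or how the consumer's data model partitions it.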