
    Envisioning Model-Based Performance Engineering Frameworks

    Our daily activities depend on complex software systems that must guarantee certain performance. Several approaches have been devised in the last decade to validate software systems against performance requirements. However, software designers still encounter problems in interpreting performance analysis results (e.g., mean values, probability distribution functions) and in defining design alternatives (e.g., splitting a software component in two and redeploying one of the halves) aimed at fulfilling performance requirements. This paper describes a general model-based performance engineering framework that supports designers in dealing with such problems in order to enhance the system. The framework relies on a formalization of the knowledge needed to characterize performance flaws and provide alternative system designs. Such knowledge can be instantiated based on the techniques devised for interpreting performance analysis results and providing feedback to designers. Three techniques are considered in this paper for instantiating the framework, and the main challenges to face during this process are pointed out and discussed.
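    The kind of knowledge the framework formalizes can be pictured as rules pairing a detectable performance flaw with a candidate design alternative. Below is a minimal Python sketch of that pairing; the flaw names, metric thresholds, and suggested refactorings are invented for illustration and are not taken from the paper.

```python
# Hypothetical encoding of performance-flaw knowledge: each rule pairs a
# detection predicate over analysis results with a candidate refactoring.

from dataclasses import dataclass
from typing import Callable, Dict, List

@dataclass
class Rule:
    flaw: str                                   # name of the performance flaw
    detect: Callable[[Dict[str, float]], bool]  # predicate over analysis metrics
    refactoring: str                            # suggested design alternative

RULES: List[Rule] = [
    Rule("overloaded component",
         lambda m: m["utilization"] > 0.9,
         "split the component in two and redeploy one half"),
    Rule("slow interaction",
         lambda m: m["mean_response_time"] > m["required_response_time"],
         "introduce a cache in front of the called service"),
]

def interpret(metrics: Dict[str, float]) -> List[str]:
    """Map raw analysis results to design feedback for the designer."""
    return [f"{r.flaw}: {r.refactoring}" for r in RULES if r.detect(metrics)]

print(interpret({"utilization": 0.95,
                 "mean_response_time": 2.0,
                 "required_response_time": 0.5}))
```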

    Model-based resource analysis and synthesis of service-oriented automotive software architectures

    Context: Automotive software architectures describe distributed functionality through the interaction of software components. One drawback of today's architectures is their tight integration into the onboard communication network, based on dependencies predefined at design time. The idea is to reduce this rigid integration and its technological dependencies; to this end, service-oriented architecture offers a suitable methodology, since network communication is established dynamically at run-time.
    Aim: We aim to provide a methodology for analysing hardware resources and synthesising automotive service-oriented architectures from platform-independent service models, and subsequently to transform these models into a platform-specific architecture realisation process following AUTOSAR Adaptive.
    Approach: For the platform-independent part, we apply design space exploration and simulation to analyse and synthesise deployment configurations, i.e., mappings of services to hardware resources, at an early development stage. For the platform-specific part, we refine these configurations into AUTOSAR Adaptive software architecture models, which represent the necessary input for a subsequent implementation process.
    Result: We present deployment configurations that make optimal use of a given set of computing resources currently under consideration for our next generation of E/E architecture, together with simulation results demonstrating that these configurations meet the run-time requirements. Both results helped us decide whether a particular configuration can be implemented. As a possible software toolchain for this purpose, we finally provide a prototype.
    Conclusion: Models and their analysis are proper means to this end, but the quality and speed of development must also be considered.
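    The design-space-exploration step can be illustrated with a toy brute-force search over service-to-ECU mappings; the service names, CPU figures, and balance criterion below are illustrative assumptions, not the paper's method.

```python
# Brute-force exploration of service-to-ECU mappings: keep only
# configurations whose summed service loads fit each ECU's capacity,
# then rank feasible ones by how evenly they use the hardware.

from itertools import product

services = {"camera_fusion": 30, "path_planner": 45, "diagnostics": 10}  # CPU %
ecus = {"ecu_a": 70, "ecu_b": 60}                                        # capacity %

def feasible(mapping):
    load = {e: 0 for e in ecus}
    for svc, ecu in mapping.items():
        load[ecu] += services[svc]
    return all(load[e] <= ecus[e] for e in ecus), load

best = None
for assignment in product(ecus, repeat=len(services)):
    mapping = dict(zip(services, assignment))
    ok, load = feasible(mapping)
    if ok:
        spread = max(load.values()) - min(load.values())  # balance criterion
        if best is None or spread < best[0]:
            best = (spread, mapping, load)

print(best)  # (spread, best mapping, resulting per-ECU load)
```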

    Introducing Interactions in Multi-Objective Optimization of Software Architectures

    Software architecture optimization aims to enhance non-functional attributes like performance and reliability while meeting functional requirements. Multi-objective optimization employs metaheuristic search techniques, such as genetic algorithms, to explore feasible architectural changes and propose alternatives to designers. However, this resource-intensive process may not always align with practical constraints. This study investigates the impact of designer interactions on multi-objective software architecture optimization. Designers can intervene at intermediate points in the fully automated optimization process, making choices that guide exploration towards more desirable solutions. We compare this interactive approach against the fully automated optimization process, which serves as the baseline. The findings demonstrate that designer interactions lead to a more focused solution space, resulting in improved architectural quality. By directing the search towards regions of interest, the interaction uncovers architectures that remain unexplored in the fully automated process.
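    The interaction mechanism can be sketched as a search loop that pauses at intermediate generations so a designer can prune the population to a region of interest. The Python toy below simulates the designer with a fixed preference and uses a scalarized survival step in place of a full multi-objective algorithm, so it illustrates the interaction pattern only, not the study's setup.

```python
# Interleaving designer decisions with an automated search: every few
# generations the current candidates are filtered to those the designer
# keeps (here simulated by a fixed preference for a low first objective).

import random

random.seed(1)

def objectives(x):                        # two competing objectives
    return (x ** 2, (x - 2) ** 2)         # stand-ins for quality attributes

def mutate(x):
    return x + random.gauss(0, 0.3)

population = [random.uniform(-5, 5) for _ in range(20)]
for gen in range(1, 31):
    population += [mutate(x) for x in population]
    # crude survival: keep the best by the sum of objectives (scalarization)
    population = sorted(population, key=lambda x: sum(objectives(x)))[:20]
    if gen % 10 == 0:
        # interaction point: the designer prunes to a region of interest
        kept = [x for x in population if objectives(x)[0] < 2.0]
        population = kept or population   # fall back if nothing survives

print(sorted(population)[:5])
```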

    Blocked-based Solidity — a Service for Graphically Creating the Smart Contracts in Solidity Programming Language

    In the last few years, we can observe a constantly increasing interest in systems and applications based on blockchain technology. Undoubtedly, this fact was significantly influenced by the introduction of the smart contract mechanism, which is one of the most popular features of blockchain nowadays and can be used across almost any industry. Smart contracts are programs stored on a blockchain that run when predetermined conditions are met. Since programming smart contracts is not trivial, this paper proposes a service that enables their creation by constructing diagrams from graphical blocks. The diagrams are then transformed into smart contract code written in the Solidity language. The paper presents the general idea of the proposed service and selected use cases illustrating its application.
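    The block-to-code transformation can be pictured as rendering a diagram, here a nested dictionary, into Solidity source text. The block vocabulary and the generated contract below are invented for illustration; the service's actual diagram format is not described at this level of detail.

```python
# A block diagram represented as nested dictionaries is rendered into
# Solidity source text, mirroring the diagram-to-code idea.

def render_contract(diagram):
    lines = ["// SPDX-License-Identifier: MIT",
             "pragma solidity ^0.8.0;",
             f"contract {diagram['name']} {{"]
    for var in diagram.get("state", []):
        lines.append(f"    {var['type']} public {var['name']};")
    for fn in diagram.get("functions", []):
        lines.append(f"    function {fn['name']}({fn['params']}) public {{")
        lines += [f"        {stmt}" for stmt in fn["body"]]
        lines.append("    }")
    lines.append("}")
    return "\n".join(lines)

diagram = {"name": "Counter",
           "state": [{"type": "uint256", "name": "count"}],
           "functions": [{"name": "increment", "params": "",
                          "body": ["count += 1;"]}]}
print(render_contract(diagram))
```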

    Many-Objective Optimization of Non-Functional Attributes based on Refactoring of Software Models

    Software quality estimation is a challenging and time-consuming activity, and models are crucial to face the complexity of such activity on modern software applications. In this context, software refactoring is a crucial activity within development life-cycles where requirements and functionalities rapidly evolve. One main challenge is that the improvement of distinctive quality attributes may require contrasting refactoring actions on the software, as for the trade-off between performance and reliability (or other non-functional attributes). In such cases, multi-objective optimization can provide the designer with a wider view on these trade-offs and, consequently, can help identify suitable refactoring actions that take into account independent or even competing objectives. In this paper, we present an approach that exploits NSGA-II as the genetic algorithm to search optimal Pareto frontiers for software refactoring while considering many objectives. We consider performance and reliability variations of a model alternative with respect to an initial model, the number of performance antipatterns detected on the model alternative, and the architectural distance, which quantifies the effort to obtain a model alternative from the initial one. We applied our approach to two case studies: a Train Ticket Booking Service and CoCoME. We observed that our approach is able to improve performance (by up to 42%) while preserving or even improving the reliability (by up to 32%) of generated model alternatives. We also observed that there exists an order of preference of refactoring actions among model alternatives. We can state that performance antipatterns confirmed their ability to improve the performance of a subject model in the context of many-objective optimization. In addition, the metric that we adopted for the architectural distance seems suitable for estimating the refactoring effort.
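    The selection core of such a search is Pareto dominance over the objective vectors. The sketch below extracts the non-dominated front from a handful of hypothetical refactoring alternatives scored on four minimized objectives; the alternative names and scores are made up.

```python
# Non-dominated (Pareto) front extraction over four minimized objectives:
# negated performance gain, negated reliability gain, number of detected
# performance antipatterns, and architectural distance.

def dominates(a, b):
    """a dominates b if it is no worse in every objective and better in one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

candidates = {
    "alt_1": (-0.42, -0.32, 2, 1.5),   # strong perf/reliability gains
    "alt_2": (-0.10, -0.05, 5, 0.4),   # cheap but weak improvement
    "alt_3": (-0.40, -0.30, 3, 2.0),   # dominated by alt_1
}

front = [name for name, score in candidates.items()
         if not any(dominates(other, score)
                    for o, other in candidates.items() if o != name)]
print(front)   # alt_3 is dominated by alt_1; alt_1 and alt_2 trade off
```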

    A Novel Approach to Automate Complex Software Modularization Using a Fact Extraction System

    Complex software systems that support organizations are updated regularly, which can erode their architectures. Moreover, documentation is rarely synchronized with changes to the software system, which creates a slew of issues for future software maintenance. To address this, information extraction tools use exact approaches to extract entities and their corresponding relationships from source code. Such exact approaches extract all features, including those that are less prominent and may not be significant for modularization. To resolve this issue, this work proposes an enhanced approximate information extraction approach, namely a fact extractor system for Java applications (FESJA), that aims to automate software modularization. The proposed FESJA technique extracts all entities along with their more dominant formal and informal relationships from Java source code. Results demonstrate the improved performance of FESJA, which extracts 74 classes, 43 interfaces, and 31 enumerations, in comparison with eminent information extraction techniques.
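    The spirit of approximate fact extraction can be shown with a short Python scan of Java source: count declared classes, interfaces, and enumerations and collect extends/implements relationships. The regex-based scanning below is a deliberate simplification; FESJA's actual extraction rules are richer.

```python
# Approximate extraction of Java entities and relationships via regexes.

import re

JAVA_SRC = """
public class Order extends Entity implements Serializable { }
interface Repository { }
enum Status { OPEN, CLOSED }
"""

decl = re.compile(r"\b(class|interface|enum)\s+(\w+)")   # entity declarations
rel = re.compile(r"\b(extends|implements)\s+(\w+)")      # formal relationships

entities = {"class": [], "interface": [], "enum": []}
for kind, name in decl.findall(JAVA_SRC):
    entities[kind].append(name)

print({kind: len(names) for kind, names in entities.items()})  # counts per kind
print(rel.findall(JAVA_SRC))  # [('extends', 'Entity'), ('implements', 'Serializable')]
```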

    On the relation between architectural smells and source code changes

    Although architectural smells are one of the most studied types of architectural technical debt, their impact on maintenance effort has not been thoroughly investigated. Studying this impact would help to understand how much technical debt interest is being paid due to the existence of architectural smells and how this interest can be calculated. This work is a first attempt to address this issue by investigating the relation between architectural smells and source code changes. Specifically, we study whether the frequency and size of changes are correlated with the presence of a selected set of architectural smells. We detect architectural smells using the Arcan tool, which builds a dependency graph of the analyzed system and then looks for the typical structures of the architectural smells. The findings, based on a case study of 31 open-source Java systems, show that 87% of the analyzed commits contain more changes in artifacts with at least one smell, and that the likelihood of change increases with the number of smells. Moreover, there is also evidence that change frequency increases after the introduction of a smell and that changes are larger in smelly artifacts. These findings hold especially in Medium–Large and Large artifacts.
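    One typical smell structure that such dependency-graph analysis looks for is a cyclic dependency. The sketch below detects one cycle with a depth-first search over a made-up module graph; it is not output of, or code from, the Arcan tool.

```python
# Detect a cyclic dependency (a classic architectural smell structure)
# in a module dependency graph using depth-first search.

def find_cycle(graph):
    """Return one dependency cycle as a list of nodes, or None."""
    visiting, visited = set(), set()

    def dfs(node, path):
        visiting.add(node)
        path.append(node)
        for dep in graph.get(node, []):
            if dep in visiting:                    # back edge: cycle found
                return path[path.index(dep):] + [dep]
            if dep not in visited:
                cycle = dfs(dep, path)
                if cycle:
                    return cycle
        visiting.discard(node)
        visited.add(node)
        path.pop()
        return None

    for node in graph:
        if node not in visited:
            cycle = dfs(node, [])
            if cycle:
                return cycle
    return None

deps = {"ui": ["service"], "service": ["dao"], "dao": ["ui"]}  # smelly graph
print(find_cycle(deps))   # ['ui', 'service', 'dao', 'ui']
```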

    A UML Profile for the Design, Quality Assessment and Deployment of Data-intensive Applications

    Big Data or data-intensive applications (DIAs) seek to mine, manipulate, extract or otherwise exploit the potential intelligence hidden behind Big Data. However, several practitioner surveys remark that the potential of DIAs is still untapped because their design, quality assessment and continuous refinement are very difficult and costly. To address this shortcoming, we propose a UML domain-specific modeling language, or profile, specifically tailored to support the design, assessment and continuous deployment of DIAs. This article illustrates our DIA-specific profile and outlines its usage in the context of DIA performance engineering and deployment. For DIA performance engineering, we rely on the Apache Hadoop technology, while for DIA deployment we leverage the TOSCA language. We conclude that the proposed profile offers a powerful language for modeling data-intensive software and systems, evaluating their quality, and automating the deployment of DIAs on private or public clouds.
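    The model-to-deployment step can be sketched as rendering a design model into a TOSCA-flavoured descriptor. In the Python sketch below, the node type names and properties are illustrative only and do not come from the profile described in the article.

```python
# A DIA design model, captured as a Python dict, rendered into a
# TOSCA-style YAML string for deployment tooling to consume.

model = {
    "nodes": [
        {"name": "wordcount_job", "type": "dia.nodes.HadoopJob",
         "properties": {"mappers": 4, "reducers": 2}},
        {"name": "hadoop_cluster", "type": "dia.nodes.HadoopCluster",
         "properties": {"workers": 3}},
    ],
}

def to_tosca(model):
    out = ["tosca_definitions_version: tosca_simple_yaml_1_3",
           "topology_template:",
           "  node_templates:"]
    for node in model["nodes"]:
        out.append(f"    {node['name']}:")
        out.append(f"      type: {node['type']}")
        out.append("      properties:")
        for key, value in node["properties"].items():
            out.append(f"        {key}: {value}")
    return "\n".join(out)

print(to_tosca(model))
```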

    Class-Level Refactoring Prediction by Ensemble Learning with Various Feature Selection Techniques

    Background: Refactoring means changing a software system without affecting its functionality. Current research aims to identify the appropriate method(s) or class(es) that need to be refactored in object-oriented software. Ensemble learning helps to reduce prediction errors by amalgamating different classifiers and their respective performances over the original feature data. This paper additionally investigates several ensemble learners, error measures, sampling techniques, and feature selection techniques for refactoring prediction at the class level.
    Objective: This work aims to develop an ensemble-based refactoring prediction model with structural identification of source code metrics, using different feature selection and data sampling techniques to distribute the data uniformly. Our model finds the best classifier, achieving fewer errors during refactoring prediction at the class level.
    Methodology: First, our proposed model extracts a total of 125 software metrics computed from object-oriented software systems and processes them with a robust multi-phased feature selection method encompassing the Wilcoxon significance test, the Pearson correlation test, and principal component analysis (PCA). This multi-phased feature selection retains the optimal features characterizing inheritance, size, coupling, cohesion, and complexity. After obtaining the optimal set of software metrics, a novel heterogeneous ensemble classifier is developed using ANN variants (ANN-Gradient Descent, ANN-Levenberg Marquardt, ANN-GDX, ANN-Radial Basis Function), support vector machines with different kernel functions (LSSVM-Linear, LSSVM-Polynomial, LSSVM-RBF), the Decision Tree algorithm, the Logistic Regression algorithm, and the extreme learning machine (ELM) model as base classifiers. We calculate four different errors, i.e., Mean Absolute Error (MAE), Mean Magnitude of Relative Error (MORE), Root Mean Square Error (RMSE), and Standard Error of the Mean (SEM).
    Result: In our proposed model, the maximum voting ensemble (MVE) achieves better accuracy, recall, precision, and F-measure values (99.76, 99.93, 98.96, 98.44) than the base trained ensemble (BTE), and it experiences fewer errors (MAE = 0.0057, MORE = 0.0701, RMSE = 0.0068, and SEM = 0.0107) during its implementation in developing the refactoring model.
    Conclusions: Our experimental results indicate that the MVE with upsampling can be used to improve the performance of the refactoring prediction model at the class level. Furthermore, the performance of our model with different data sampling and feature selection techniques is shown in boxplot diagrams of accuracy, F-measure, precision, recall, and area under the curve (AUC).
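    The maximum-voting idea can be sketched with scikit-learn (assumed available): heterogeneous base classifiers combined by hard majority vote, with synthetic data standing in for the 125 source-code metrics and a reduced set of base learners standing in for the paper's specific ones.

```python
# Hard majority-vote ensemble over heterogeneous base classifiers,
# in the spirit of the maximum voting ensemble (MVE).

from sklearn.datasets import make_classification
from sklearn.ensemble import VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

# Synthetic stand-in for class-level source code metrics and labels.
X, y = make_classification(n_samples=500, n_features=25, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

ensemble = VotingClassifier(
    estimators=[("lr", LogisticRegression(max_iter=1000)),
                ("dt", DecisionTreeClassifier(random_state=0)),
                ("svm", SVC(kernel="rbf"))],
    voting="hard")                      # majority vote across base models
ensemble.fit(X_tr, y_tr)
print("accuracy:", ensemble.score(X_te, y_te))
```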