
    Software model refactoring based on performance analysis: better working on software or performance side?

    Several approaches have been introduced in the last few years to tackle the problem of interpreting model-based performance analysis results and translating them into architectural feedback. Typically, the interpretation can take place by browsing either the software model or the performance model. In this paper, we compare two approaches that we have recently introduced for this goal: one based on the detection and solution of performance antipatterns, and another based on bidirectional model transformations between software and performance models. We apply both approaches to the same example in order to illustrate the differences in the obtained performance results. Thereafter, we raise the level of abstraction and discuss the pros and cons of working on the software side and on the performance side.
    Comment: In Proceedings FESCA 2013, arXiv:1302.478
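
    As a rough, non-authoritative illustration of the antipattern-based side of this comparison (not the authors' actual rule set), detection of a bottleneck-style antipattern is often expressed as thresholds over performance indices such as utilization and queue length; the component fields and threshold values in the sketch below are hypothetical.

        # Hypothetical sketch: threshold-based detection of a bottleneck-style
        # performance antipattern over model-based performance analysis results.
        from dataclasses import dataclass

        @dataclass
        class ComponentMetrics:
            name: str
            utilization: float   # fraction of time busy, 0..1
            queue_length: float  # mean number of waiting requests

        def detect_bottlenecks(components, util_threshold=0.85, queue_threshold=5.0):
            """Flag components whose indices suggest a 'One-Lane Bridge'-like bottleneck."""
            return [c.name for c in components
                    if c.utilization > util_threshold and c.queue_length > queue_threshold]

        if __name__ == "__main__":
            results = [ComponentMetrics("Database", 0.93, 12.4),
                       ComponentMetrics("WebServer", 0.55, 0.8)]
            print(detect_bottlenecks(results))  # ['Database']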

    Information Technology of Software Architecture Structural Synthesis of Information System

    An information technology for the structural synthesis of information system software architecture is proposed. It targets evolutionary software lifecycle models and supports the configuration and formation of software that controls the execution and recovery of computing processes on parallel and distributed computing resources. The technology is applied during software requirements analysis, architecture design, and software design and integration. A method for combining vertices in a multilevel graph model of the software architecture and an automata-based method for checking performance constraints on the software are both built on an extended graph model of the software architecture. Within the proposed technology, these methods make it possible to form a rational program structure and to check compliance with the functional and non-functional requirements of the end user. The essence of the proposed technology is to map the customer's requirements onto the current version of the graph model of the software system's structure and to support reconfiguration of the system's modules. This process is based on analyzing and processing the graph model and the software module specifications, forming the software structure in accordance with the graph model, verifying the software, and compiling it.
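
    The method of combining vertices in the multilevel graph model is only named in the abstract. Purely as an illustrative sketch under assumed semantics (vertices are modules, merging two modules unions their dependencies and redirects incoming edges to the merged vertex), vertex combination could look like the following; the function and module names are hypothetical.

        # Illustrative sketch (assumed semantics): merging two module vertices in a
        # dependency graph while preserving their combined outgoing and incoming edges.
        def merge_vertices(graph, a, b, merged):
            """graph: dict mapping each module to the set of modules it depends on."""
            deps = (graph.pop(a, set()) | graph.pop(b, set())) - {a, b}
            graph[merged] = deps
            for node, node_deps in graph.items():
                if a in node_deps or b in node_deps:
                    node_deps.discard(a)
                    node_deps.discard(b)
                    if node != merged:
                        node_deps.add(merged)
            return graph

        if __name__ == "__main__":
            g = {"ui": {"auth", "report"}, "auth": {"db"}, "report": {"db"}, "db": set()}
            print(merge_vertices(g, "auth", "report", "services"))
            # {'ui': {'services'}, 'db': set(), 'services': {'db'}}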

    Performance Analysis of Improved Component based Software Reliability Model

    Software reliability engineering techniques focus on the development and maintenance of software systems. This paper presents an improved component-based model, which is used to estimate the reliability of software systems and the usage ratio of each component. A component impact analysis, which helps focus testing effort, is also presented. The proposed method exhibits considerable improvement when compared against conventional methods.
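
    The abstract does not spell out the reliability formula. A common baseline for usage-weighted, component-based reliability, shown here only as an assumed illustration rather than the paper's exact model, multiplies per-component reliabilities raised to their expected usage counts.

        # Assumed illustration: usage-weighted reliability of a component-based system.
        # r[i]  : reliability of component i per invocation (0..1)
        # use[i]: expected number of invocations of component i per system run
        import math

        def system_reliability(r, use):
            """Probability that one system run completes with no component failure,
            assuming independent failures and use[i] invocations of component i."""
            return math.prod(r_i ** u_i for r_i, u_i in zip(r, use))

        if __name__ == "__main__":
            print(round(system_reliability([0.999, 0.995, 0.990], [10, 2, 1]), 4))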

    Analytic Performance Modeling and Analysis of Detailed Neuron Simulations

    Big science initiatives are trying to reconstruct and model the brain by attempting to simulate brain tissue at larger scales and with more biological detail than previously thought possible. The exponential growth of parallel computer performance has been supporting these developments, and at the same time maintainers of neuroscientific simulation code have strived to exploit new hardware features optimally and efficiently. Current state-of-the-art software for the simulation of biological networks has so far been developed using performance engineering practices, but a thorough analysis and modeling of the computational and performance characteristics, especially in the case of morphologically detailed neuron simulations, is lacking. Other computational sciences have successfully used analytic performance engineering and modeling methods to gain insight into the computational properties of simulation kernels, aid developers in performance optimizations, and eventually drive co-design efforts, but to our knowledge a model-based performance analysis of neuron simulations has not yet been conducted. We present a detailed study of the shared-memory performance of morphologically detailed neuron simulations based on the Execution-Cache-Memory (ECM) performance model. We demonstrate that this model can deliver accurate predictions of the runtime of almost all the kernels that constitute the neuron models under investigation. The gained insight is used to identify the main governing mechanisms underlying performance bottlenecks in the simulation. The implications of this analysis for the optimization of neural simulation software and eventually the co-design of future hardware architectures are discussed. In this sense, our work represents a valuable conceptual and quantitative contribution to understanding the performance properties of biological network simulations.
    Comment: 18 pages, 6 figures, 15 tables
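
    The ECM model itself is only named above. Roughly speaking, and simplifying heavily, it predicts single-core runtime per unit of work from in-core execution cycles plus inter-cache and memory transfer cycles, and estimates how many cores are needed to saturate memory bandwidth. A minimal sketch under those assumptions, with made-up cycle counts that are not from the paper:

        # Minimal sketch of an ECM-style prediction (simplified; cycle counts are made up).
        import math

        def ecm_prediction(t_ol, t_nol, transfer_times):
            """Return (single-core cycles per work unit, cores needed to saturate memory).

            t_ol           : overlapping in-core cycles (e.g. arithmetic)
            t_nol          : non-overlapping in-core cycles (e.g. load/store issue)
            transfer_times : cycles for data transfers [L1<-L2, L2<-L3, L3<-Mem]
            """
            t_single = max(t_ol, t_nol + sum(transfer_times))
            t_mem = transfer_times[-1]   # memory transfer time is the shared bottleneck
            n_saturation = math.ceil(t_single / t_mem) if t_mem else float("inf")
            return t_single, n_saturation

        if __name__ == "__main__":
            print(ecm_prediction(t_ol=8, t_nol=6, transfer_times=[3, 5, 9]))  # (23, 3)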

    Kerncraft: A Tool for Analytic Performance Modeling of Loop Kernels

    Achieving optimal program performance requires deep insight into the interaction between hardware and software. For software developers without an in-depth background in computer architecture, understanding and fully utilizing modern architectures is close to impossible. Analytic loop performance modeling is a useful way to understand the relevant bottlenecks of code execution based on simple machine models. The Roofline model and the Execution-Cache-Memory (ECM) model are proven approaches to performance modeling of loop nests. In comparison to the Roofline model, the ECM model can also describe the single-core performance and saturation behavior on a multicore chip. We give an introduction to the Roofline and ECM models, and to stencil performance modeling using layer conditions (LC). We then present Kerncraft, a tool that can automatically construct Roofline and ECM models for loop nests by performing the required code, data transfer, and LC analysis. The layer condition analysis makes it possible to predict optimal spatial blocking factors for loop nests. Together with the models, it enables an ab-initio estimate of the potential benefits of loop blocking optimizations and of useful block sizes. In cases where LC analysis is not easily possible, Kerncraft supports a cache simulator as a fallback option. Using a 25-point long-range stencil, we demonstrate the usefulness and predictive power of the Kerncraft tool.
    Comment: 22 pages, 5 figures
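
    The Roofline bound referenced above is compact enough to state directly: attainable performance is the minimum of peak compute throughput and arithmetic intensity times memory bandwidth. A small sketch with hypothetical machine numbers (not taken from the paper):

        # Roofline bound: P = min(P_peak, I * b_s), with hypothetical machine numbers.
        def roofline(flops, bytes_moved, peak_gflops, mem_bw_gbs):
            intensity = flops / bytes_moved            # arithmetic intensity in FLOP/byte
            return min(peak_gflops, intensity * mem_bw_gbs)

        if __name__ == "__main__":
            # a stream-like kernel: 2 FLOP per 24 bytes on a 50 GFLOP/s, 20 GB/s machine
            print(roofline(flops=2, bytes_moved=24, peak_gflops=50.0, mem_bw_gbs=20.0))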

    Long-Term Average Cost in Featured Transition Systems

    A software product line is a family of software products that share a common set of mandatory features and whose individual products are differentiated by their variable (optional or alternative) features. Family-based analysis of software product lines takes as input a single model of a complete product line and analyzes all its products at the same time. As the number of products in a software product line may be large, this is generally preferable to analyzing each product on its own. Family-based analysis, however, requires that standard algorithms be adapted to accommodate variability. In this paper we adapt the standard algorithm for computing the limit average cost of a weighted transition system to software product lines. Limit average is a useful and popular measure for the long-term average behavior of a quality attribute such as performance or energy consumption, but has hitherto not been available for family-based analysis of software product lines. Our algorithm operates on weighted featured transition systems, at a symbolic level, and computes the limit average cost for all products in a software product line at the same time. We have implemented the algorithm and evaluated it on several examples.
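
    For a single product (one ordinary weighted transition system), a standard way to compute the limit average cost is a minimum mean cycle computation, for example Karp's algorithm. The sketch below is that single-product baseline only, not the family-based symbolic algorithm the paper describes, and it assumes a strongly connected transition system.

        # Single-product baseline: limit average cost as the minimum mean cycle,
        # computed with Karp's algorithm. Assumes a strongly connected system.
        def min_mean_cycle(n, edges):
            """n: number of states (0..n-1); edges: list of (u, v, weight)."""
            INF = float("inf")
            # d[k][v] = minimum weight of a walk with exactly k edges from state 0 to v
            d = [[INF] * n for _ in range(n + 1)]
            d[0][0] = 0.0
            for k in range(1, n + 1):
                for u, v, w in edges:
                    if d[k - 1][u] < INF:
                        d[k][v] = min(d[k][v], d[k - 1][u] + w)
            best = INF
            for v in range(n):
                if d[n][v] == INF:
                    continue
                worst = max((d[n][v] - d[k][v]) / (n - k)
                            for k in range(n) if d[k][v] < INF)
                best = min(best, worst)
            return best   # long-run average cost of the cheapest cycle

        if __name__ == "__main__":
            # two states looping 0 -> 1 -> 0 with costs 3 and 1: limit average = 2.0
            print(min_mean_cycle(2, [(0, 1, 3.0), (1, 0, 1.0)]))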

    ATAMM analysis tool

    Diagnostic software for analyzing Algorithm to Architecture Mapping Model (ATAMM) based concurrent processing systems is presented. ATAMM is capable of modeling the execution of large-grain algorithms on distributed data flow architectures. The tool graphically displays algorithm activities and processor activities for evaluation of the behavior and performance of an ATAMM-based system. The tool's measurement capabilities cover computing speed, throughput, concurrency, resource utilization, and overhead. Evaluations are performed on a simulated system using the software tool, and the tool is used to estimate a theoretical lower bound on performance. Analysis results are shown to be comparable to the predictions.
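
    The listed measurement quantities (throughput, concurrency, resource utilization) can in principle be derived from a timed activity trace. As a generic illustration only, not the ATAMM tool's actual input format, a sketch over (processor, start, end) records:

        # Generic illustration (not the ATAMM tool's format): deriving throughput,
        # average concurrency, and processor utilization from a timed activity trace.
        def trace_metrics(activities, num_processors, makespan):
            """activities: list of (processor_id, start_time, end_time)."""
            busy = sum(end - start for _, start, end in activities)
            throughput = len(activities) / makespan           # completed activities per time unit
            concurrency = busy / makespan                     # time-averaged active processors
            utilization = busy / (num_processors * makespan)  # fraction of processor time used
            return throughput, concurrency, utilization

        if __name__ == "__main__":
            trace = [(0, 0.0, 4.0), (1, 1.0, 3.0), (0, 5.0, 9.0)]
            print(trace_metrics(trace, num_processors=2, makespan=10.0))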

    BioNetGen 2.2: Advances in Rule-Based Modeling

    BioNetGen is an open-source software package for rule-based modeling of complex biochemical systems. Version 2.2 of the software introduces numerous new features for both model specification and simulation. Here, we report on these additions, discussing how they facilitate the construction, simulation, and analysis of larger and more complex models than previously possible.
    Comment: 3 pages, 1 figure, 1 supplementary text file. The supplementary text includes a brief discussion of the RK-PLA along with a performance analysis, two tables listing all new actions/arguments added in BioNetGen 2.2, and the "BioNetGen Quick Reference Guide". Accepted for publication in Bioinformatics.