
    A Review on Software Performance Analysis for Early Detection of Latent Faults in Design Models

    Organizations and society could face major breakdowns if IT strategies do not comply with performance requirements, all the more so in an era of globalization and rapidly emerging technologies. Software design models may contain latent issues that affect the performance of the resulting software, yet performance is often a neglected area in industry. Identifying performance issues in the design phase can save time, money and effort, so software engineers need to know the performance requirements in order to ensure that quality software is developed. Software performance engineering is a quantitative approach to building software systems that can meet performance requirements. There are many design models based on UML, Petri Nets and Product-Forms; these can be used to derive performance models such as LQN, MSC and QNM. Design models are mapped to performance models in order to predict system performance early and provide valuable feedback for improving the quality of the system. With the emergence of distributed technologies such as EJB, CORBA, DCOM and SOA, applications have become very complex and collaborate with other software. Component-based, embedded and distributed software systems are likely to need more systematic performance models that can improve their quality, and many techniques have been proposed towards this end. This paper sheds light on software performance analysis and its present state of the art, reviewing different design models and performance models and providing valuable insights for making well-informed decisions.
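
    As an illustration of deriving a performance prediction from a simple queueing network model (QNM), the sketch below estimates end-to-end response time for a hypothetical two-tier design using textbook M/M/1 formulas; it is not taken from the paper, and the arrival and service rates are assumed values.

# Minimal sketch (illustrative, not from the paper): early response-time prediction
# for a hypothetical two-tier design modeled as an open tandem of M/M/1 stations.

def mm1_metrics(arrival_rate: float, service_rate: float) -> dict:
    """Classic M/M/1 formulas: utilization, mean population, mean response time."""
    if arrival_rate >= service_rate:
        raise ValueError("unstable station: arrival rate must be below service rate")
    rho = arrival_rate / service_rate               # utilization
    mean_jobs = rho / (1.0 - rho)                   # mean number of requests at the station
    response = 1.0 / (service_rate - arrival_rate)  # mean response time
    return {"utilization": rho, "mean_jobs": mean_jobs, "response_time": response}

# Hypothetical design-level estimates: 40 req/s flowing through a web tier and a DB tier.
web = mm1_metrics(arrival_rate=40.0, service_rate=100.0)  # web tier serves 100 req/s
db = mm1_metrics(arrival_rate=40.0, service_rate=60.0)    # DB tier serves 60 req/s

end_to_end = web["response_time"] + db["response_time"]   # tandem network: station times add
print(f"predicted end-to-end response time: {end_to_end * 1000:.1f} ms")

    In an open tandem (Jackson) network the mean station response times simply add, which is what makes such design-time estimates cheap to compute and refine.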

    Scaling Size and Parameter Spaces in Variability-Aware Software Performance Models (T)

    In software performance engineering, what-if scenarios, architecture optimization, capacity planning, run-time adaptation, and uncertainty management of realistic models typically require the evaluation of many instances. Effective analysis is however hindered by two orthogonal sources of complexity. The first is the infamous problem of state space explosion: the analysis of a single model becomes intractable with its size. The second is due to massive parameter spaces that must be explored, across which computations cannot be reused from one model instance to another. In this paper, we efficiently analyze many queuing models with the distinctive feature of more accurately capturing variability and uncertainty of execution rates by incorporating general (i.e., non-exponential) distributions. Applying product-line engineering methods, we consider a family of models generated by a core that evolves into concrete instances by applying simple delta operations affecting both the topology and the model's parameters. State explosion is tackled by turning to a scalable approximation based on ordinary differential equations. The entire model space is analyzed in a family-based fashion, i.e., at once, using an efficient symbolic solution of a super-model that subsumes every concrete instance. Extensive numerical tests show that this is orders of magnitude faster than a naive instance-by-instance analysis.
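
    The ODE-based approximation can be pictured with a toy fluid model. The sketch below is a minimal example under its own assumptions (a single closed client-server system with invented population sizes and rates), not the authors' delta operations or super-model; it integrates the mean-field equations instead of enumerating the discrete state space.

# Minimal sketch (assumed model and parameters, not the paper's): fluid/ODE
# approximation of a closed client-server queueing system, integrated with forward Euler.

def fluid_trajectory(n_clients=500, n_servers=20, think_rate=0.1, service_rate=2.0,
                     dt=0.01, t_end=100.0):
    """Mean-field ODEs:
         d(thinking)/dt = service_rate * min(queued, n_servers) - think_rate * thinking
         d(queued)/dt   = think_rate * thinking - service_rate * min(queued, n_servers)
    """
    thinking, queued = float(n_clients), 0.0
    for _ in range(int(t_end / dt)):
        completions = service_rate * min(queued, n_servers)  # servers busy up to capacity
        submissions = think_rate * thinking                   # clients finishing think time
        thinking += dt * (completions - submissions)
        queued += dt * (submissions - completions)
    return thinking, queued, service_rate * min(queued, n_servers)

thinking, queued, throughput = fluid_trajectory()
print(f"steady state: thinking≈{thinking:.1f}, queued≈{queued:.1f}, "
      f"throughput≈{throughput:.1f} jobs/s")

    Because the ODE system has one equation per local state rather than one per global state, its cost does not grow with the client population, which is the essence of why such approximations scale.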

    Learning Very Large Configuration Spaces: What Matters for Linux Kernel Sizes

    Linux kernels are used in a wide variety of appliances, many of them having strong requirements on kernel size due to constraints such as limited memory or instant boot. With more than ten thousand configuration options to choose from, obtaining a suitable trade-off between kernel size and functionality is an extremely hard problem. Developers, contributors, and users spend significant effort to document, understand, and eventually tune (combinations of) options to meet a kernel size target. In this paper, we investigate how machine learning can help explain what matters for predicting a given Linux kernel size. Unveiling what matters in such a very large configuration space is challenging for two reasons: (1) however much time we spend, we can only build and measure a tiny fraction of the possible kernel configurations; (2) the prediction model should be both accurate and interpretable. We compare different machine learning algorithms and demonstrate the benefits of specific feature encoding and selection methods to learn an accurate model that is fast to compute and simple to interpret. Our results are validated over 95,854 kernel configurations and show that we can achieve low prediction errors over a reduced set of options. We also show that we can extract interpretable information for refining documentation and experts' knowledge of Linux, or even for assigning more sensible default values to options.
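
    To make the learning setup concrete, the sketch below fits a sparse linear model on synthetic data and ranks options by their learned contribution to kernel size; the option names, size effects, and model choice are illustrative assumptions, not the paper's dataset, feature encoding, or selection pipeline.

# Minimal sketch (synthetic data, invented option effects): predicting kernel size
# from binary configuration options with an interpretable sparse model.

import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
options = ["CONFIG_DEBUG_INFO", "CONFIG_KASAN", "CONFIG_MODULES",
           "CONFIG_NET", "CONFIG_USB", "CONFIG_SOUND"]
true_effect_mb = np.array([120.0, 45.0, 8.0, 5.0, 2.0, 0.0])  # assumed per-option size impact

# Sample random configurations (one 0/1 column per option) and noisy measured sizes in MB.
X = rng.integers(0, 2, size=(2000, len(options))).astype(float)
y = 11.0 + X @ true_effect_mb + rng.normal(0.0, 1.0, size=2000)  # 11 MB assumed baseline

model = Lasso(alpha=0.1).fit(X, y)  # L1 penalty zeroes out options that do not matter

# Rank options by learned size impact: an interpretable view of "what matters".
for name, coef in sorted(zip(options, model.coef_), key=lambda p: -abs(p[1])):
    print(f"{name:20s} {coef:+7.1f} MB")
print(f"baseline (intercept): {model.intercept_:.1f} MB")

    A sparse model is only one way to obtain interpretability; the point of the example is the workflow of measuring a sample of configurations and reading option effects off the learned model.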

    Performance-based selection of software and hardware features under parameter uncertainty
