
    The Locus Algorithm IV: Performance metrics of a grid computing system used to create catalogues of optimised pointings

    This paper discusses the requirements for and performance metrics of the Grid Computing system used to implement the Locus Algorithm to identify optimum pointings for differential photometry of 61,662,376 stars and 23,779 quasars. Initial operational tests indicated a need for a software system to analyse the data and a High Performance Computing system to run that software in a scalable manner. Practical assessments of the performance of the software in a serial computing environment were used to provide a benchmark against which the performance metrics of the HPC solution could be compared, as well as to indicate any bottlenecks in performance. These performance metrics indicated a distinct split in performance, dictated more by differences in the input data than by differences in the design of the systems used. This indicates a need for experimental analysis of system performance, and suggests that algorithmic complexity analyses may lead to incorrect or naive conclusions, especially in systems with high data I/O overhead such as grid computing. Further, it implies that systems which reduce or eliminate this bottleneck, such as in-memory processing, could lead to a substantial increase in performance.
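
    As an illustration of the benchmarking approach described above, the following is a minimal Python sketch (with hypothetical timings, not measurements from the paper) of the speedup and parallel-efficiency metrics obtained by comparing a serial run against an HPC run.

    def speedup(t_serial, t_parallel):
        """Classic speedup: serial wall-clock time divided by parallel wall-clock time."""
        return t_serial / t_parallel

    def efficiency(t_serial, t_parallel, n_workers):
        """Parallel efficiency: speedup normalised by the number of workers used."""
        return speedup(t_serial, t_parallel) / n_workers

    # Hypothetical illustrative values, not results reported in the paper.
    print(speedup(3600.0, 120.0))         # 30.0
    print(efficiency(3600.0, 120.0, 64))  # ~0.47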

    Model-driven performance evaluation for service engineering

    Service engineering and service-oriented architecture as an integration and platform technology is a recent approach to software systems integration. Software quality aspects such as performance are of central importance for the integration of heterogeneous, distributed service-based systems. Empirical performance evaluation is a process of measuring and calculating performance metrics of the implemented software. We present an approach for the empirical, model-based performance evaluation of services and service compositions in the context of model-driven service engineering. Temporal database theory is utilised for the empirical performance evaluation of model-driven developed service systems.
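
    As a concrete illustration of what empirical performance evaluation involves, the sketch below (assumed names and a stand-in workload, not the paper's tooling) times repeated invocations of a service operation and records timestamped response times for later aggregation.

    import time
    from statistics import mean

    def measure(call, repetitions=100):
        """Invoke `call` repeatedly, returning (timestamp, elapsed_seconds) samples."""
        samples = []
        for _ in range(repetitions):
            start = time.perf_counter()
            call()
            samples.append((time.time(), time.perf_counter() - start))
        return samples

    # Hypothetical stub standing in for a real service invocation.
    samples = measure(lambda: sum(range(10_000)))
    print(mean(elapsed for _, elapsed in samples))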

    Monitoring Networked Applications With Incremental Quantile Estimation

    Networked applications have software components that reside on different computers. Email, for example, has database, processing, and user interface components that can be distributed across a network and shared by users in different locations or work groups. End-to-end performance and reliability metrics describe the software quality experienced by these groups of users, taking into account all the software components in the pipeline. Each user produces only some of the data needed to understand the quality of the application for the group, so group performance metrics are obtained by combining summary statistics that each end computer periodically (and automatically) sends to a central server. The group quality metrics usually focus on medians and tail quantiles rather than on averages. Distributed quantile estimation is challenging, though, especially when passing large amounts of data around the network solely to compute quality metrics is undesirable. This paper describes an Incremental Quantile (IQ) estimation method that is designed for performance monitoring at arbitrary levels of network aggregation and time resolution when only a limited amount of data can be transferred. Applications to both real and simulated data are provided.
    Comment: This paper is commented on in [arXiv:0708.0317], [arXiv:0708.0336], and [arXiv:0708.0338]; rejoinder in [arXiv:0708.0339]. Published at http://dx.doi.org/10.1214/088342306000000583 in Statistical Science (http://www.imstat.org/sts/) by the Institute of Mathematical Statistics (http://www.imstat.org).
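
    The paper's specific IQ method is not reproduced here, but the following minimal Python sketch shows the general idea behind incremental quantile estimation: a single running estimate is nudged by each new observation, so only the estimate (not the raw data) needs to be shipped across the network.

    import random

    def update_quantile(estimate, x, q, step=0.05):
        """One stochastic-approximation step of a running q-th quantile estimate."""
        if x > estimate:
            return estimate + step * q
        if x < estimate:
            return estimate - step * (1.0 - q)
        return estimate

    random.seed(0)
    est = 0.0
    for _ in range(100_000):
        est = update_quantile(est, random.gauss(0.0, 1.0), q=0.95)
    print(est)  # roughly the 0.95 quantile of N(0, 1), i.e. about 1.6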

    Performance measurement of IT service management: a case study of an Australian university

    IT departments are adopting service orientation by implementing IT service management (ITSM) frameworks. Most organisations are hesitant to discuss their ITSM performance measurement practices, tending to focus more on challenges; however, good practices can be found amidst the challenges. We present a case study that provides an account of the performance measurement practices in the ICT Division of an Australian university. The case study was conducted with the aim of understanding the internal and external factors that influence the selection of ITSM performance metrics. It also explores how and why metrics and frameworks are used to measure the performance of ITSM in organisations. Interviews were conducted to identify the specific ITSM performance metrics used and how they were derived. It was found that a number of factors internal and external to the organisation influenced the selection of the performance metrics. The internal factors include meeting the need for improved governance, aligning IT strategy with organisational strategy, and having a mechanism to provide feedback to IT customers (university staff and students). External factors include benchmarking against others in the same industry and the choice of metrics offered by the ITSM software tool adopted.

    Memory and Parallelism Analysis Using a Platform-Independent Approach

    Emerging computing architectures such as near-memory computing (NMC) promise improved performance for applications by reducing the data movement between CPU and memory. However, detecting such applications is not a trivial task. In this ongoing work, we extend the state-of-the-art platform-independent software analysis tool with NMC-related metrics such as memory entropy, spatial locality, data-level parallelism, and basic-block-level parallelism. These metrics help to identify the applications more suitable for NMC architectures.
    Comment: 22nd ACM International Workshop on Software and Compilers for Embedded Systems (SCOPES '19), May 2019.
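
    As an illustration of one of the metrics named above, the sketch below (an assumed definition, not the tool's implementation) computes memory entropy as the Shannon entropy of the address distribution in a memory-access trace; a more concentrated address distribution yields a lower value.

    from collections import Counter
    from math import log2

    def memory_entropy(addresses):
        """Shannon entropy (in bits) of the accessed-address distribution."""
        counts = Counter(addresses)
        total = len(addresses)
        return -sum((c / total) * log2(c / total) for c in counts.values())

    # Hypothetical trace: 64 distinct cache-line addresses visited repeatedly.
    streaming_trace = list(range(0, 4096, 64)) * 4
    print(memory_entropy(streaming_trace))  # 6.0 bits (uniform over 64 addresses)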

    Quality Research by Using Performance Evaluation Metrics for Software Systems and Components

    Software performance testing and evaluation have four basic needs: (1) a well-defined performance testing strategy, requirements, and focuses; (2) correct and effective performance evaluation models; (3) well-defined performance metrics; and (4) cost-effective performance testing and evaluation tools and techniques. This chapter first introduces a performance test process and discusses performance testing objectives and focus areas. Then, it summarizes the basic challenges and issues in performance testing and evaluation of component-based programs and components. Next, it presents different types of performance metrics for software components and systems, including processing speed, utilization, throughput, reliability, availability, and scalability metrics. Most of the performance metrics covered here can be considered as the application of existing metrics to software components. New performance metrics are needed to support the performance evaluation of component-based programs.
    Keywords: metrics, software performance, testing, evaluation, reliability, scalability
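
    For illustration, the following minimal Python sketch (with hypothetical measurements, not figures from the chapter) computes three of the component-level metrics listed above directly from raw counts and timings.

    def throughput(completed_requests, elapsed_seconds):
        """Requests processed per second."""
        return completed_requests / elapsed_seconds

    def utilization(busy_seconds, elapsed_seconds):
        """Fraction of the observation window during which the component was busy."""
        return busy_seconds / elapsed_seconds

    def availability(uptime_seconds, downtime_seconds):
        """Fraction of total time during which the component was available."""
        return uptime_seconds / (uptime_seconds + downtime_seconds)

    # Hypothetical measurements for illustration only.
    print(throughput(12000, 600))    # 20.0 requests per second
    print(utilization(540, 600))     # 0.9
    print(availability(86000, 400))  # ~0.995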