25,322 research outputs found

    Activity Based Costing techniques for workload characterization.

    This paper addresses the problem of non-captured service demands in workload monitoring data. Capture ratios are the coefficients that correct workload service demands so that they are consistent with the global system monitoring data. The paper proposes new techniques for determining capture ratios by means of Activity Based Costing, and illustrates them with a case study that also shows the non-trivial nature of capture ratios in practical performance analysis.
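    A minimal, hypothetical sketch of the idea in Python (not the paper's algorithm): assume each workload class has a captured CPU demand and an activity driver such as a transaction count, while the system monitor reports a larger total busy time; the uncaptured time is then allocated to classes in proportion to their drivers, in the spirit of Activity Based Costing, and a per-class capture ratio follows. All class names, numbers, and the proportional-allocation rule below are illustrative assumptions.

        # Hypothetical sketch: allocate uncaptured CPU demand to workload classes
        # in proportion to activity drivers (an Activity-Based-Costing-style rule).
        # All class names, numbers, and the allocation rule are illustrative assumptions.
        captured_demand = {"oltp": 42.0, "batch": 18.0, "web": 25.0}   # captured CPU seconds per class
        drivers = {"oltp": 120_000, "batch": 300, "web": 80_000}       # activity drivers (e.g., transactions)
        total_busy_time = 100.0                                        # CPU busy time reported by the monitor

        uncaptured = total_busy_time - sum(captured_demand.values())   # demand missed by per-class accounting
        driver_total = sum(drivers.values())

        # Distribute the uncaptured time proportionally to each class's driver count,
        # then derive the per-class capture ratio = captured / corrected demand.
        corrected_demand = {}
        capture_ratio = {}
        for cls, demand in captured_demand.items():
            corrected_demand[cls] = demand + uncaptured * drivers[cls] / driver_total
            capture_ratio[cls] = demand / corrected_demand[cls]

        for cls in captured_demand:
            print(f"{cls:5s} captured={captured_demand[cls]:6.2f}s "
                  f"corrected={corrected_demand[cls]:6.2f}s ratio={capture_ratio[cls]:.3f}")

        # Sanity check: corrected demands account for the full measured busy time.
        assert abs(sum(corrected_demand.values()) - total_busy_time) < 1e-9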

    Performance limitations of bilateral force reflection imposed by operator dynamic characteristics

    A linearized, single-axis model is presented for bilateral force reflection, which facilitates investigation into the effects of manipulator, operator, and task dynamics, as well as time delay and gain scaling. Structural similarities are noted between this model and impedance control. Stability results based upon this model impose requirements upon operator dynamic characteristics as functions of system time delay and environmental stiffness. An experimental characterization reveals the limited capabilities of the human operator to meet these requirements. A procedure is presented for determining the force reflection gain scaling required to provide stability and acceptable operator workload. This procedure is applied to a system with dynamics typical of a space manipulator, and the required gain scaling is presented as a function of environmental stiffness.
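    As a rough illustration only, the sketch below estimates a maximum stable force-reflection gain from the classical gain margin of a delayed loop, using an assumed second-order operator/manipulator admittance. The loop structure, parameter values, and the gain-margin criterion are assumptions made for this sketch and are not the paper's model or procedure.

        # Hypothetical sketch: classical gain-margin estimate of the maximum stable
        # force-reflection gain k_f for a delayed, linearized single-axis loop
        #   L(s) = k_f * k_env * exp(-s*delay) / (m s^2 + b s + k_h)
        # The loop structure and all parameter values are illustrative assumptions.
        import numpy as np

        def max_stable_gain(k_env, delay, m=2.0, b=20.0, k_h=400.0):
            """Largest k_f with |L(jw)| < 1 at the first -180 deg phase crossing."""
            w = np.linspace(1e-2, 500.0, 200_000)       # rad/s frequency grid
            s = 1j * w
            L0 = k_env * np.exp(-s * delay) / (m * s**2 + b * s + k_h)  # loop with k_f = 1
            phase = np.unwrap(np.angle(L0))
            idx = np.argmax(phase <= -np.pi)            # first -180 deg crossing
            if idx == 0:
                return np.inf                           # no crossing found in the grid
            return 1.0 / np.abs(L0[idx])                # gain margin = allowable k_f

        if __name__ == "__main__":
            delay = 0.25  # s, round-trip time delay (illustrative)
            for k_env in (100.0, 1_000.0, 10_000.0):    # N/m environmental stiffness
                print(f"k_env={k_env:8.0f} N/m  ->  max k_f ~ {max_stable_gain(k_env, delay):.3f}")

    The sweep over environmental stiffness mirrors the abstract's point that the allowable gain scaling shrinks as the environment stiffens and the delay grows.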

    REPP-H: runtime estimation of power and performance on heterogeneous data centers

    Modern data centers increasingly demand improved performance with minimal power consumption. Managing the power and performance requirements of applications is challenging because these data centers, incidentally or intentionally, have to deal with server architecture heterogeneity [19], [22]. One critical challenge data centers face is how to manage system power and performance given the different application behavior across multiple architectures. This work has been supported by the EU FP7 program (Mont-Blanc 2, ICT-610402), by the Ministerio de Economia (CAP-VII, TIN2015-65316-P), and by the Generalitat de Catalunya (MPEXPAR, 2014-SGR-1051). The material herein is based in part upon work supported by the US NSF, grant numbers ACI-1535232 and CNS-1305220.

    BigDataBench: a Big Data Benchmark Suite from Internet Services

    As the architecture, systems, and data management communities pay greater attention to innovative big data systems and architectures, the pressure to benchmark and evaluate these systems rises. Considering the broad use of big data systems, big data benchmarks must include diverse data and workloads. Most state-of-the-art big data benchmarking efforts target specific types of applications or system software stacks, and hence are not suited to this broader purpose. This paper presents our joint research efforts on this issue with several industrial partners. Our big data benchmark suite BigDataBench not only covers broad application scenarios but also includes diverse and representative data sets. BigDataBench is publicly available from http://prof.ict.ac.cn/BigDataBench . We also comprehensively characterize 19 big data workloads included in BigDataBench with varying data inputs. On a typical state-of-practice processor, an Intel Xeon E5645, we have the following observations. First, in comparison with traditional benchmarks, including PARSEC, HPCC, and SPECCPU, big data applications have very low operation intensity. Second, the volume of data input has a non-negligible impact on micro-architecture characteristics, which may pose challenges for simulation-based big data architecture research. Last but not least, corroborating the observations in CloudSuite and DCBench (which use smaller data inputs), we find that the number of L1 instruction cache misses per 1000 instructions is higher for the big data applications than for the traditional benchmarks; we also find that L3 caches are effective for the big data applications, corroborating the observation in DCBench. Comment: 12 pages, 6 figures, The 20th IEEE International Symposium on High Performance Computer Architecture (HPCA-2014), February 15-19, 2014, Orlando, Florida, US
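    Two of the metrics used in such characterizations can be stated concretely: L1 instruction cache misses per 1000 instructions (MPKI) and operation intensity (operations per byte of memory traffic). The sketch below shows how they might be derived from raw hardware counter readings; the counter values, event names, and the use of last-level cache misses as a memory-traffic proxy are illustrative assumptions, not data from the paper.

        # Hypothetical sketch: deriving two micro-architecture metrics from raw
        # hardware counter readings. The values below are made up for illustration;
        # on Linux they could come from `perf stat` events such as instructions
        # and L1-icache-load-misses.
        counters = {
            "instructions": 8_430_000_000,   # retired instructions
            "l1i_misses":      31_200_000,   # L1 instruction cache misses
            "llc_misses":       5_600_000,   # last-level cache misses (memory traffic proxy)
            "arith_ops":    1_950_000_000,   # arithmetic operations (illustrative aggregate)
        }
        cache_line_bytes = 64

        # L1 instruction cache misses per 1000 instructions (MPKI).
        l1i_mpki = 1000.0 * counters["l1i_misses"] / counters["instructions"]

        # Operation intensity: arithmetic operations per byte of off-chip traffic,
        # approximating traffic as LLC misses times the cache line size.
        bytes_from_memory = counters["llc_misses"] * cache_line_bytes
        operation_intensity = counters["arith_ops"] / bytes_from_memory

        print(f"L1i MPKI            = {l1i_mpki:.2f}")
        print(f"Operation intensity = {operation_intensity:.2f} ops/byte")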