On a Catalogue of Metrics for Evaluating Commercial Cloud Services
Given the continually increasing number of commercial Cloud services on the
market, evaluating different services plays a significant role in
cost-benefit analysis and in decision making about adopting Cloud Computing. In
particular, employing suitable metrics is essential in evaluation
implementations. However, to the best of our knowledge, there has been no
systematic discussion of metrics for evaluating Cloud services. Using the
method of Systematic Literature Review (SLR), we have collected the de facto
metrics adopted in existing Cloud services evaluation work. The collected
metrics were arranged by the Cloud service features to be evaluated, which
essentially constitutes an evaluation metrics catalogue, as presented in this
paper. This metrics catalogue can be used to facilitate future practice and
research in the area of Cloud services evaluation. Moreover, since metrics
selection is a prerequisite of benchmark selection in evaluation
implementations, this work also supplements existing research on benchmarking
commercial Cloud services.
Comment: 10 pages, Proceedings of the 13th ACM/IEEE International Conference on Grid Computing (Grid 2012), pp. 164-173, Beijing, China, September 20-23, 2012
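The catalogue described in the abstract arranges metrics by the Cloud service feature they evaluate. A minimal sketch of such a structure, with feature names and metrics that are illustrative assumptions rather than the paper's actual list:

```python
# Hypothetical metrics catalogue keyed by Cloud service feature.
# The features and metrics below are illustrative assumptions,
# not the catalogue collected in the paper.
CATALOGUE = {
    "Performance": ["response time", "throughput", "benchmark runtime"],
    "Economics": ["cost per hour", "cost-performance ratio"],
    "Availability": ["uptime percentage", "mean time to recovery"],
}

def metrics_for(feature):
    """Look up the metrics recorded for a given service feature."""
    return CATALOGUE.get(feature, [])

print(metrics_for("Economics"))
```

Metrics selection then precedes benchmark selection: an evaluator picks a feature, reads off the de facto metrics, and chooses benchmarks that report them.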
A review of High Performance Computing foundations for scientists
The increase in available computational capabilities has made simulation
emerge as a third discipline of Science, lying midway between the experimental
and purely theoretical branches [1, 2]. Simulation enables the evaluation of
quantities which would otherwise be inaccessible, helps to improve
experiments and provides new insights into the systems being analysed [3-6].
Knowing the fundamentals of computation can be very useful for scientists, for
it can help them to improve the performance of their theoretical models and
simulations. This review includes some technical essentials useful to this
end, and it is devised as a complement for researchers whose education is
focused on scientific issues rather than on technological aspects. In this
document we attempt to discuss the fundamentals of High Performance Computing
(HPC) [7] in a way that is easy to understand without much previous
background. We sketch the way standard computers and supercomputers work,
discuss distributed computing, and cover essential aspects to take into
account when running scientific calculations on computers.
Comment: 33 pages
A statistical system management method to tackle data uncertainty when using key performance indicators of the balanced scorecard
This work focuses on the development of a graphical method that uses statistical non-parametric tests for randomness together with parametric tests to detect significant trends and shifts in key performance indicators from balanced scorecards. It provides managers and executives with a tool to determine whether processes are improving or decaying.
The method tackles the hitherto unresolved problem of data uncertainty due to sample size for key performance indicators on scorecards. It has been developed and applied in a multinational manufacturing company, using scorecard data from two complete years as a case study to test its validity and effectiveness.
Sánchez-Márquez, R.; Albarracín Guillem, J. M.; Vicens Salort, E.; Jabaloyes Vivas, J. M. (2018). A statistical system management method to tackle data uncertainty when using key performance indicators of the balanced scorecard. Journal of Manufacturing Systems, 48:166-179. https://doi.org/10.1016/j.jmsy.2018.07.010
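A minimal, stdlib-only sketch of the kind of non-parametric randomness check such a method applies to a KPI series is a Wald-Wolfowitz runs test around the median: too few runs suggests a trend or shift rather than random variation. The decision threshold and the example KPI series below are illustrative assumptions, not the paper's procedure or data:

```python
# Runs test around the median as a randomness check on a KPI series.
import math
from statistics import median

def runs_test_z(series):
    """z-statistic of the Wald-Wolfowitz runs test; |z| > 1.96 at the
    5% level suggests the KPI is trending/shifting, not random."""
    med = median(series)
    signs = [x > med for x in series if x != med]  # drop ties with median
    n1 = sum(signs)
    n2 = len(signs) - n1
    runs = 1 + sum(a != b for a, b in zip(signs, signs[1:]))
    n = n1 + n2
    expected = 2 * n1 * n2 / n + 1
    var = 2 * n1 * n2 * (2 * n1 * n2 - n) / (n ** 2 * (n - 1))
    return (runs - expected) / math.sqrt(var)

kpi = [90, 91, 92, 94, 95, 97, 98, 99, 101, 103]  # hypothetical monthly KPI
z = runs_test_z(kpi)
print(round(z, 2))  # strongly negative: too few runs, i.e. a trend
```

A manager reading such a statistic can distinguish a genuine process improvement from noise that a short sample would otherwise disguise, which is the data-uncertainty problem the method addresses.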
System Performance Displayer : a performance monitoring tool
As the UNIX operating system's popularity increases, so does the need for performance data gathering and resource management on these systems. Contrary to the expectations of UNIX users, however, only a limited set of tools is available; these tools make it difficult to correlate data from different sources and are primitive in their data presentation [1].
This research aims to build a displayer, using current software technology, that not only interprets the correlated data but also emphasizes visual effect. Data are presented through a combination of color, sound, graphics and animation to give the user both a whole picture of system utilization and detailed statistics for each device.
The result of this research is a software tool, the System Performance Displayer (SPD). With the power of workstations, SPD provides a much easier way than traditional tools to interpret and display system performance data.
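SPD itself renders correlated data with color, sound, graphics and animation; as a far simpler stdlib stand-in, the sketch below only gathers a couple of the raw system statistics such a displayer would correlate. The snapshot fields chosen are an illustrative assumption:

```python
# Gather a tiny snapshot of system statistics of the kind a
# performance displayer would correlate and visualize.
import os
import shutil

def system_snapshot(path="/"):
    """Return a few raw utilization figures for the host."""
    usage = shutil.disk_usage(path)
    return {
        "cpus": os.cpu_count(),
        "disk_used_pct": round(100 * usage.used / usage.total, 1),
    }

snap = system_snapshot()
print(f"CPUs: {snap['cpus']}, disk used: {snap['disk_used_pct']}%")
```

The hard part SPD addresses begins after this step: correlating such figures across sources over time and presenting them so the whole picture is visible at a glance.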
Reliability measurement during software development
During the development of database software for a multi-sensor tracking system, reliability was measured. The failure ratio and failure rate were found to be consistent measures. Trend lines established from these measurements provided good visualization of progress on the job as a whole as well as on individual modules. Over one half of the observed failures were due to factors associated with individual run submissions rather than with the code proper. Possible applications of these findings for line management, project managers, functional management, and regulatory agencies are discussed. Steps for simplifying the measurement process and for using these data to predict operational software reliability are outlined.
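The failure-ratio trend line described above can be sketched as cumulative failures per cumulative runs over successive reporting periods. The weekly counts below are illustrative assumptions, not the study's data:

```python
# Cumulative failure ratio over successive reporting periods, the
# kind of trend line used to visualize reliability progress.
def failure_ratio(failures, runs):
    """Cumulative failures divided by cumulative runs to date."""
    return sum(failures) / sum(runs)

# hypothetical runs submitted and failures observed in four weeks
weekly_runs = [40, 50, 55, 60]
weekly_failures = [12, 10, 6, 3]

trend = [failure_ratio(weekly_failures[: i + 1], weekly_runs[: i + 1])
         for i in range(len(weekly_runs))]
print([round(r, 3) for r in trend])  # a declining ratio signals maturing code
```

Plotted per module as well as for the whole job, such a trend gives managers the progress visualization the study reports, and a falling curve is the basis for predicting operational reliability.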