psort
psort was the fastest sorting software for PC-class machines from 2008 to 2011 (Pennysort benchmark, http://sortbenchmark.org), and an adaptation of it for clusters improved the record for the Datamation benchmark by almost an order of magnitude in 2011. The official technical report is available on the sortbenchmark.org site (which catalogues the most efficient sorting software for various categories of task/hardware, originally maintained by Turing Award winner Jim Gray) at http://sortbenchmark.org/psort_2011.pdf. Further details can be found in the following publications:
P. Bertasi, M. Bressan, E. Peserico. psort, yet another fast stable sorting software. ACM Journal of Experimental Algorithmics, vol. 16, 2011.
P. Bertasi, M. Bonazza, M. Bressan, E. Peserico. Datamation: a quarter of a century and four orders of magnitude later. Proc. of IEEE CLUSTER 2011.
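For context, Pennysort (defined at sortbenchmark.org) measures how much gensort-format data, 100-byte records whose first 10 bytes are the key, can be stably sorted for one cent's worth of machine time. Below is a minimal, illustrative sketch of a stable sort over such records; it is not psort's actual algorithm.

```python
# Illustrative sketch of Pennysort-style records: 100 bytes each,
# the first 10 bytes form the sort key (gensort record format).
# This is NOT psort's algorithm, just a minimal stable in-memory sort.

RECORD_SIZE = 100
KEY_SIZE = 10

def sort_records(buf: bytes) -> bytes:
    """Stably sort a buffer of fixed-size records by their 10-byte key."""
    assert len(buf) % RECORD_SIZE == 0
    records = [buf[i:i + RECORD_SIZE] for i in range(0, len(buf), RECORD_SIZE)]
    # Python's sort is stable, so records with equal keys keep their
    # input order, matching the "stable sorting" requirement psort targets.
    records.sort(key=lambda r: r[:KEY_SIZE])
    return b"".join(records)

if __name__ == "__main__":
    import os
    data = b"".join(os.urandom(RECORD_SIZE) for _ in range(1000))
    out = sort_records(data)
    keys = [out[i:i + KEY_SIZE] for i in range(0, len(out), RECORD_SIZE)]
    assert keys == sorted(keys)
```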
Multi-criteria analysis of measures in benchmarking: Dependability benchmarking as a case study
Benchmarks enable the comparison of computer-based systems according to a variable set of criteria, such as dependability, security, performance, cost and/or power consumption. It is not despite its difficulty, but rather because of its lack of mathematical accuracy, that multi-criteria analysis of results remains today a subjective process rarely addressed in an explicit way in existing benchmarks. It is thus not surprising that industrial benchmarks rely only on a reduced set of easy-to-understand measures, especially when considering complex systems. This keeps the process of result interpretation straightforward, unambiguous and accurate, but it limits at the same time the richness and depth of the analysis. Academia, as a result, prefers to characterize complex systems with a wider set of measures. Marrying the requirements of industry and academia in a single proposal remains a challenge today. This paper addresses this question by reducing the uncertainty of the analysis process using quality (score-based) models. At measure definition time, these models make explicit (i) the requirements imposed on each type of measure, which may vary from one context of use to another, and (ii) the type, and intensity, of the relation between the considered measures. At measure analysis time, they provide a consistent, straightforward and unambiguous method to interpret the resulting measures. The methodology and its practical use are illustrated through three case studies from the dependability benchmarking domain, a domain where several different criteria, including both performance and dependability, are typically considered during the analysis of benchmark results. Although the proposed approach is applied only to dependability benchmarks in this document, its usefulness for any type of benchmark seems evident given the general formulation of the provided solution.
© 2015 Elsevier Inc. All rights reserved. This work is partially supported by the Spanish project ARENES (TIN2012-38308-C02-01), the ANR French project AMORES (ANR-11-INSE-010), the Intel Doctoral Student Honour Programme 2012, and the "Programa de Ayudas de Investigacion y Desarrollo" (PAID) from the Universitat Politecnica de Valencia.
Friginal López, J.; Martínez, M.; De Andrés, D.; Ruiz, J. (2016). Multi-criteria analysis of measures in benchmarking: Dependability benchmarking as a case study. Journal of Systems and Software, 111:105-118. https://doi.org/10.1016/j.jss.2015.08.052
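The notion of a quality (score-based) model can be made concrete with a short sketch: each raw measure is normalized into a score against context-specific requirements, and the scores are combined with explicit weights encoding the relations between measures. The measures, thresholds, and weights below are hypothetical illustrations, not the model defined in the paper.

```python
# Hypothetical sketch of a score-based quality model: raw benchmark
# measures are mapped to [0, 1] scores against context-specific
# requirements, then aggregated with explicit weights. All names and
# numbers are illustrative, not taken from the paper.

def score(value: float, worst: float, best: float) -> float:
    """Linearly map a raw measure onto [0, 1] given its requirements."""
    lo, hi = min(worst, best), max(worst, best)
    clamped = max(lo, min(hi, value))
    return (clamped - worst) / (best - worst)  # handles "lower is better" too

# (i) Requirements per measure, which may differ per context of use.
requirements = {
    "throughput_tps":  {"worst": 0.0,  "best": 5000.0, "weight": 0.4},
    "availability":    {"worst": 0.90, "best": 0.999,  "weight": 0.4},
    "recovery_time_s": {"worst": 60.0, "best": 1.0,    "weight": 0.2},
}

def overall_score(measures: dict) -> float:
    """(ii) Combine per-measure scores using explicit weights."""
    return sum(req["weight"] * score(measures[name], req["worst"], req["best"])
               for name, req in requirements.items())

print(overall_score({"throughput_tps": 3200.0,
                     "availability": 0.995,
                     "recovery_time_s": 12.0}))
```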
From A to E: Analyzing TPC's OLTP Benchmarks -- The obsolete, the ubiquitous, the unexplored
Introduced in 2007, TPC-E is the most recently standardized OLTP benchmark by TPC. Even though TPC-E has already been around for six years, it has not gained the popularity of its predecessor TPC-C: all the published results for TPC-E use a single database vendor's product. TPC-E is significantly different from its predecessors. Some of its distinguishing characteristics are non-uniform input creation, longer-running and more complicated transactions, and more difficult partitioning. These factors slow down the adoption of TPC-E. In turn, there is little knowledge in the community about how TPC-E behaves micro-architecturally and within the database engine. To shed light on TPC-E, we implement it on top of a scalable open-source database engine, Shore-MT, and perform a workload characterization study, comparing it with the previous, much better known OLTP benchmarks of TPC: TPC-B and TPC-C. In parallel, we study the evolution of the OLTP benchmarks throughout the decades. Our results demonstrate that TPC-E exhibits similar micro-architectural behavior to TPC-B and TPC-C, even though it incurs less stall time and higher instructions per cycle. On the other hand, within the database engine it suffers more from logical lock contention. Therefore, we argue that, on the hardware side, TPC-E needs less aggressive processors, whereas on the software side it can benefit from designs based on intra-transaction parallelism, logical partitioning, and optimistic concurrency control to minimize the effects of lock contention without introducing distributed transactions.
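To illustrate the concluding suggestion, the sketch below shows the validate-then-write core of optimistic concurrency control (OCC), the general technique the authors point to for reducing logical lock contention; it is a textbook-style sketch, not Shore-MT's implementation.

```python
# Generic sketch of optimistic concurrency control: transactions read
# without locking, then validate their read set at commit time.
# This is textbook OCC, not Shore-MT's actual mechanism.

import threading

class Store:
    def __init__(self):
        self.data = {}      # key -> value
        self.version = {}   # key -> version counter
        self.commit_lock = threading.Lock()

    def commit(self, read_set: dict, write_set: dict) -> bool:
        """Atomically validate read versions and install writes."""
        with self.commit_lock:
            # Validation phase: abort if anything we read changed meanwhile.
            for key, seen_version in read_set.items():
                if self.version.get(key, 0) != seen_version:
                    return False
            # Write phase: install new values and bump versions.
            for key, value in write_set.items():
                self.data[key] = value
                self.version[key] = self.version.get(key, 0) + 1
            return True

store = Store()
store.commit({}, {"balance:42": 100})                    # initialize
read_set = {"balance:42": store.version["balance:42"]}   # optimistic read
new_val = store.data["balance:42"] + 10
ok = store.commit(read_set, {"balance:42": new_val})
print("committed" if ok else "aborted, retry")
```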
The LDBC social network benchmark: Business intelligence workload
The Social Network Benchmark's Business Intelligence workload (SNB BI) is a comprehensive graph OLAP benchmark targeting analytical data systems capable of supporting graph workloads. This paper marks the finalization of almost a decade of research in academia and industry via the Linked Data Benchmark Council (LDBC). SNB BI advances the state of the art in synthetic and scalable analytical database benchmarks in many aspects. Its base is a sophisticated data generator, implemented on a scalable distributed infrastructure, that produces a social graph with small-world phenomena, whose value properties follow skewed and correlated distributions and where values correlate with structure. This is a temporal graph where all nodes and edges follow lifespan-based rules with temporal skew, enabling realistic and consistent temporal inserts and (recursive) deletes. The query workload exploiting this skew and correlation is based on LDBC's "choke point"-driven design methodology and will entice technical and scientific improvements in future (graph) database systems. SNB BI includes the first adoption of "parameter curation" in an analytical benchmark, a technique that ensures stable runtimes of query variants across different parameter values. Two performance metrics characterize peak single-query performance (power) and sustained concurrent query throughput. To demonstrate the portability of the benchmark, we present experimental results on a relational and a graph DBMS. Note that these do not constitute an official LDBC Benchmark Result: only audited results can use this trademarked term.
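As a rough illustration of parameter curation, the sketch below buckets candidate parameter bindings by a work proxy and draws substitutions from a single bucket, so that all variants of a query template do comparable work and thus have stable runtimes. The proxy (estimated result cardinality) and the selection policy are assumptions for illustration, not LDBC's exact procedure.

```python
# Hedged sketch of parameter curation: pick query-parameter bindings
# whose expected amount of work is similar. The "work" proxy and the
# bucketing policy here are illustrative assumptions.

from collections import defaultdict

def curate(candidates, estimate_work, n_buckets=10, n_params=100):
    """Return up to n_params bindings drawn from the most populous bucket."""
    works = {c: estimate_work(c) for c in candidates}
    lo, hi = min(works.values()), max(works.values())
    span = (hi - lo) or 1
    buckets = defaultdict(list)
    for cand, w in works.items():
        idx = min(int((w - lo) / span * n_buckets), n_buckets - 1)
        buckets[idx].append(cand)
    # All bindings come from one bucket, so their estimated work differs
    # by at most roughly (hi - lo) / n_buckets -> stable runtimes.
    best = max(buckets.values(), key=len)
    return best[:n_params]

# Toy usage: persons with skewed friend counts; curate 5 hypothetical
# "personId" parameters with similar neighbourhood sizes.
import random
random.seed(0)
persons = {p: int(random.paretovariate(1.5) * 10) for p in range(1000)}
print(curate(persons.keys(), lambda p: persons[p], n_params=5))
```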