5 research outputs found
Multi-criteria analysis of measures in benchmarking: Dependability benchmarking as a case study
This is the author’s version of a work that was accepted for publication in the Journal of Systems and Software. Changes resulting from the publishing process, such as peer review, editing, corrections, structural formatting, and other quality control mechanisms, may not be reflected in this document. Changes may have been made to this work since it was submitted for publication. A definitive version was subsequently published as "Multi-criteria analysis of measures in benchmarking: Dependability benchmarking as a case study", Journal of Systems and Software, 111, 2016, DOI 10.1016/j.jss.2015.08.052.

Benchmarks enable the comparison of computer-based systems attending to a variable set of criteria, such as
dependability, security, performance, cost, and/or power consumption. Yet despite its mathematical grounding, the multi-criteria analysis of results remains today a subjective process rarely addressed in an explicit way in existing benchmarks. It is thus not surprising that industrial benchmarks rely only on a reduced set of easy-to-understand measures, especially when considering complex systems. This keeps the process of result interpretation straightforward, unambiguous, and accurate.
However, it also limits the richness and depth of the analysis. Academia, in contrast, prefers to characterize complex systems with a wider set of measures. Marrying the requirements of industry
and academia in a single proposal remains a challenge today. This paper addresses this question by reducing
the uncertainty of the analysis process using quality (score-based) models. At measure definition time, these
models make explicit (i) the requirements imposed on each type of measure, which may vary from one context of use to another, and (ii) the type and intensity of the relation between the considered measures. At measure analysis time, they provide a consistent, straightforward, and unambiguous method to
interpret resulting measures. The methodology and its practical use are illustrated through three different
case studies from the dependability benchmarking domain, a domain where multiple criteria, including both performance and dependability, are typically considered when analysing benchmark results.
Although the proposed approach is limited to dependability benchmarks in this document, its usefulness for
any type of benchmark seems evident, given the general formulation of the provided solution.
© 2015 Elsevier Inc. All rights reserved.

This work is partially supported by the Spanish project ARENES (TIN2012-38308-C02-01), the ANR French project AMORES (ANR-11-INSE-010), the Intel Doctoral Student Honour Programme 2012, and the "Programa de Ayudas de Investigación y Desarrollo" (PAID) from the Universitat Politècnica de València.

Friginal López, J.; Martínez, M.; De Andrés, D.; Ruiz, J. (2016). Multi-criteria analysis of measures in benchmarking: Dependability benchmarking as a case study. Journal of Systems and Software, 111:105-118. https://doi.org/10.1016/j.jss.2015.08.052
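The score-based (quality-model) aggregation described in this abstract can be illustrated with a minimal sketch. The measure names, requirement bounds, and weights below are illustrative assumptions, not the models defined in the paper:

```python
# Minimal sketch of a score-based (quality-model) aggregation of
# benchmark measures into a single score. Measure names, value
# ranges, and weights are hypothetical, not the paper's model.

def normalize(value, worst, best):
    """Map a raw measure onto [0, 1], where 1 is best.

    Works whether higher is better (best > worst) or lower is
    better (best < worst)."""
    score = (value - worst) / (best - worst)
    return max(0.0, min(1.0, score))

def aggregate(measures, requirements, weights):
    """Combine normalized measures into one score using
    context-specific weights (which must sum to 1)."""
    assert abs(sum(weights.values()) - 1.0) < 1e-9
    total = 0.0
    for name, value in measures.items():
        worst, best = requirements[name]
        total += weights[name] * normalize(value, worst, best)
    return total

# Two candidate systems characterized by three measures.
requirements = {
    "throughput_ops": (0, 1000),   # higher is better
    "availability":   (0.9, 1.0),  # higher is better
    "recovery_s":     (60, 0),     # lower is better
}
weights = {"throughput_ops": 0.3, "availability": 0.5, "recovery_s": 0.2}

a = {"throughput_ops": 800, "availability": 0.999, "recovery_s": 12}
b = {"throughput_ops": 950, "availability": 0.95, "recovery_s": 30}

score_a = aggregate(a, requirements, weights)
score_b = aggregate(b, requirements, weights)
```

Making the requirement bounds and weights explicit, as the paper advocates, is what lets two analysers in the same context of use reach the same ranking.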
Towards integrating multi-criteria analysis techniques in dependability benchmarking
[EN] Increasing integration scales are promoting the development of myriads of new devices
and technologies, such as smartphones, ad hoc networks, or field-programmable devices,
among others. The proliferation of such devices, with increasing autonomy and
communication capabilities, is paving the way for a new paradigm known as Internet of
Things, in which computing is ubiquitous and devices autonomously exchange information
and cooperate among themselves and with existing IT infrastructures to improve people’s
and society’s welfare. This new paradigm creates huge business opportunities for
manufacturers, application developers, and service providers in very different application
domains, like consumer electronics, transport, or health. Accordingly, and to make the
most of these incipient opportunities, industry relies more than ever on the use and re-use
of commercial off-the-shelf (COTS) components, developed either in-house or by third parties, to
decrease time-to-market and costs. In this race for hitting the market first, companies are
nowadays concerned with the dependability of both COTS and final products, even for
non-critical applications, as unexpected failures may damage the reputation of the
manufacturer and limit the acceptability of their new products. Therefore, benchmarking
techniques adapted to dependability contexts (dependability benchmarking) are being
deployed in order to assess, compare, and select, i) the best suited COTS, among existing
alternatives, to be integrated into a new product, and ii) the configuration parameters that
yield the best trade-off between performance and dependability. However, although
dependability benchmarking procedures have been defined and applied to a wide set of
application domains, no rigorous and precise decision making process has been
established yet, thus hindering the main goal of these approaches: the fair and accurate
comparison and selection of existing alternatives taking into account both performance
and dependability attributes. Indeed, results extracted from experimentation could be
interpreted in so many different ways, according to the context of use of the system and
the subjectivity of the benchmark analyser, that defining a clear and accurate decision
making process is a must to enable the reproducibility of conclusions. Thus, this master's thesis focuses on how to integrate a decision-making methodology into the regular dependability benchmarking procedure. The challenges to be faced include how to reconcile the requirements of industry, which wants a single score characterising a system, with those of academia, which wants as many measures as possible to accurately characterise the system, and how to navigate from one representation to the other without losing
meaningful information.

Martínez Raga, M. (2013). Towards integrating multi-criteria analysis techniques in dependability benchmarking. http://hdl.handle.net/10251/39987
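One way to reconcile the two representations this thesis mentions (industry's single score versus academia's full measure set) is hierarchical aggregation that keeps every level available. The weights and measure names here are hypothetical, and measures are assumed already normalized to [0, 1]:

```python
# Illustrative sketch (not the thesis's actual method): keep the full
# measure vector and aggregate it hierarchically, so the single score
# required by industry can always be expanded back into the detailed
# per-attribute view preferred by academia.

PERF = {"throughput": 0.6, "latency": 0.4}  # assumed weights
DEP = {"availability": 0.7, "recovery": 0.3}

def sub_score(measures, weights):
    """Weighted sum over one attribute's measures (already in [0, 1])."""
    return sum(weights[m] * measures[m] for m in weights)

def characterize(measures, perf_weight=0.5):
    """Return both representations at once: the detailed measures and
    the hierarchy of scores built on top of them."""
    perf = sub_score(measures, PERF)
    dep = sub_score(measures, DEP)
    return {
        "measures": measures,        # academia's detailed view
        "performance": perf,         # intermediate sub-scores
        "dependability": dep,
        "score": perf_weight * perf + (1 - perf_weight) * dep,  # industry's view
    }

m = {"throughput": 0.8, "latency": 0.9, "availability": 0.99, "recovery": 0.7}
view = characterize(m)
```

Because the raw measures and sub-scores travel with the final score, collapsing to one number loses no information: the detailed view can always be recovered.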
A software architecture for consensus based replication
Advisor: Luiz Eduardo Buzato. Doctoral thesis, Universidade Estadual de Campinas, Instituto de Computação.

This thesis explores one of the fundamental tools for the construction of distributed systems: the replication of software components. Specifically, we attempted to solve the problem of simplifying the construction of high-performance and high-availability replicated applications. We have developed Treplica, a replication library, as the main tool to reach this research objective. Treplica allows the construction of distributed applications that behave as centralized applications, presenting the programmer with a simple interface based on an object-oriented specification of active replication. The conclusion we reach in this thesis is that it is possible to create modular and simple-to-use support for replication that provides high performance, low latency, and fast recovery in the presence of failures. We believe our proposed software architecture is applicable to any distributed system, but it is particularly interesting for systems that remain centralized due to the lack of a simple, efficient, and reliable replication mechanism.

Degree: Doutor em Ciência da Computação (Sistemas de Computação), Universidade Estadual de Campinas.
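The active-replication idea behind a library like Treplica can be sketched as deterministic state machines that apply the same actions in the same total order. The classes below are an illustration of that principle, not Treplica's actual API:

```python
# Hedged sketch of active replication: every replica applies the same
# deterministic actions in the same total order (in a real system the
# order is decided by consensus, e.g. Paxos), so all replicas converge
# to the same state. Class names and shapes are illustrative.

class Counter:
    """A deterministic state machine to be replicated."""
    def __init__(self):
        self.value = 0

    def apply(self, action):
        op, amount = action
        if op == "add":
            self.value += amount
        return self.value

class Replica:
    def __init__(self, machine):
        self.machine = machine
        self.log = []  # totally ordered action log

    def deliver(self, action):
        # Here we simply append in call order; consensus would
        # guarantee this order is identical at every replica.
        self.log.append(action)
        return self.machine.apply(action)

replicas = [Replica(Counter()) for _ in range(3)]
for action in [("add", 2), ("add", 3)]:
    for r in replicas:  # same actions, same order, everywhere
        r.deliver(action)

# All replicas end up in the same state.
states = {r.machine.value for r in replicas}
```

The persisted action log is also what enables the fast recovery the thesis claims: a restarting replica replays the log (or loads a checkpoint plus a log suffix) to rebuild its state.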
Étalonnage de la sûreté de fonctionnement des systèmes d’exploitation – Spécifications et mise en oeuvre [Dependability benchmarking of operating systems – specifications and implementation]
System developers are increasingly resorting to off-the-shelf operating systems, even in critical application domains. Any malfunction of the operating system may have a strong impact on the dependability of the global system. Therefore, it is important to make information about operating system dependability available. In this work, we specify dependability benchmarks to characterize operating systems with respect to the faulty behavior of the application. We define the set of properties that a dependability benchmark should satisfy, and then specify the measures and procedures of three benchmarks intended for comparing the dependability of operating systems belonging to different families. Finally, we present implemented prototypes of these benchmarks. They are used to compare the dependability of operating systems from the Windows and Linux families, and to show that our benchmarks satisfy the identified properties.
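The kind of robustness experiment such benchmarks run can be sketched as parameter corruption at an API boundary followed by outcome classification. The function names, corruption model, and outcome labels below are illustrative, not the thesis's actual specification (real benchmarks corrupt system-call parameters, not a plain function's):

```python
# Minimal sketch of parameter-corruption dependability benchmarking:
# invoke an API with one deliberately corrupted argument and classify
# how the target reacts. The target here is a stand-in Python function
# rather than a real system call.

def flip_bit(value, bit):
    """Corrupt an integer parameter by flipping one bit."""
    return value ^ (1 << bit)

def run_experiment(func, args, corrupt_index, bit):
    """Run func with one corrupted argument and classify the outcome."""
    corrupted = list(args)
    corrupted[corrupt_index] = flip_bit(corrupted[corrupt_index], bit)
    try:
        result = func(*corrupted)
        expected = func(*args)
        # Wrong result silently accepted vs. corruption with no effect.
        return "no_signaling" if result != expected else "no_effect"
    except Exception:
        return "error_code"  # the API reported the invalid input

# Target: a function that, like a syscall, validates its inputs.
def read_block(offset, length):
    if offset < 0 or length <= 0:
        raise ValueError("invalid parameters")
    return bytes(length)  # placeholder for real data

# Flip each of the 4 low bits of the length parameter (index 1).
outcomes = [run_experiment(read_block, (0, 8), 1, b) for b in range(4)]
```

Tallying outcome classes over many such experiments yields the benchmark measures (e.g. the proportion of corrupted calls the operating system detects and signals versus silently accepts).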