14 research outputs found

    Implementing a flexible failure detector that expresses the confidence in the system

    Get PDF
    Traditional unreliable failure detectors are per-process oracles that provide a list of nodes suspected of having failed. Previously, we introduced the Impact failure detector (Impact FD), which outputs a trust level value: the degree of confidence in the system. An impact factor is assigned to each node, and the trust level is equal to the sum of the impact factors of the nodes not suspected of having failed. An input threshold parameter defines an impact factor limit value over which the degree of confidence in the system is ensured. The impact factor indicates the relative importance of the process in the set S, while the threshold offers a degree of flexibility for failures and false suspicions. In this article we propose two different algorithms, based on query-response message rounds, that implement the Impact FD and whose designs were tailored to satisfy the Impact FD's flexibility. The first exploits the time-free message pattern approach, while the second considers a set of bounded timely responses. We also introduce the concept that a process can be PS-accessible (or ♦PS-accessible), which guarantees that the system S will always (or eventually always) be trusted by this process, as well as two properties, PR(IT) and PR(♦IT), that characterize the minimum stability condition of S that ensures confidence (or eventual confidence) in it. In both implementations, if the process that monitors S is PS-accessible or ♦PS-accessible, at every query round it only waits (or eventually only waits) for a set of responses that satisfies the threshold. A crucial facet of this set of processes is that it is not fixed, i.e., the set can change at each round, in accordance with the flexibility of the Impact FD.
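    To make the trust-level arithmetic concrete, here is a minimal Python sketch built directly from the abstract's definitions; the names (impact_factors, system_trusted) are illustrative, not from the paper, and the query-response rounds of the two algorithms are not modeled.

```python
# Minimal sketch of the Impact FD trust-level computation (hypothetical
# names; the paper's algorithms add query-response rounds on top of this).

def trust_level(impact_factors, suspected):
    """Sum the impact factors of the processes not currently suspected."""
    return sum(factor for proc, factor in impact_factors.items()
               if proc not in suspected)

def system_trusted(impact_factors, suspected, threshold):
    """The system S is trusted while the trust level meets the threshold."""
    return trust_level(impact_factors, suspected) >= threshold

# Example: three processes with unequal importance. Suspecting p2 still
# leaves enough aggregate impact to satisfy a threshold of 3, which is
# the flexibility for failures and false suspicions the paper describes.
factors = {"p1": 2, "p2": 1, "p3": 2}
assert system_trusted(factors, suspected={"p2"}, threshold=3)
assert not system_trusted(factors, suspected={"p1", "p3"}, threshold=3)
```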

    Um método e uma ferramenta para testes baseados em modelos para linhas de produto software

    Get PDF
    Advisor: Eliane Martins. Master's dissertation - Universidade Estadual de Campinas, Instituto de Computação. Abstract: Software product lines (SPL) are gaining interest because of the increasing demand for customizable products. This is partly because SPLs are an efficient and effective means of delivering products with higher quality at a lower cost. In an SPL, products have requirements in common as well as features specific to each one. Testing whether a product implements the common and specific requirements is an important step in ensuring good quality of the derived products. However, testing an SPL is a complex task, since the variety of products that can be derived from the combination of common and specific features is huge. Even if only a few products are selected, the effort to test them is still significant, since the products vary in terms of the specific features selected. Therefore, reusing test cases from one product to another to determine whether they satisfy the functional requirements may not be possible. Model-based testing (MBT) can be useful in this case: a behavior model is obtained from the requirements and used for automatic test case generation. This work presents a model-based product testing approach (MBPTA) for software product lines in which requirements are centered on use cases. Use cases (UC) are a popular format for representing requirements. From use case descriptions written in a semi-structured format and containing the variability specification, behavior models are automatically generated for a product under test, in the form of a state machine model. Building a state machine is not trivial for most practitioners, who are more familiar with textual, informal descriptions of requirements; in general, the manual creation of state machine models from UCs is time-consuming and error-prone. The goal is to provide test engineers with a method that guides them in creating the artifacts needed to extract a preliminary version of a state model automatically from the requirements. This preliminary model can then be refined to become suitable for a test case generation tool; MBPTA also provides guidelines for this refinement process. As proof of concept, a prototype tool, MARITACA, was developed, which uses natural language processing techniques to extract state machines from use case descriptions. The text presents the use of the method and the tool in an illustrative example obtained from the literature and in a family of distributed fault-tolerant applications. This study showed the applicability of the proposed method. One concern in SPL testing is the generation of redundant test cases from one product to another; the results, though preliminary, showed that most of the test cases generated for a new product are not redundant, because they involve features specific to each product.
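    As a rough illustration of the test-generation step described above (and not of MARITACA's actual algorithm), the following Python sketch derives one test case per transition of a small state machine, a common coverage criterion in model-based testing; the model format and all names are assumptions.

```python
# Illustrative sketch of one common MBT step: deriving test cases from a
# state machine by covering every transition once. The model format and
# the traversal are generic assumptions, not MARITACA's actual algorithm.
from collections import deque

# State machine: state -> list of (event, next_state) pairs.
machine = {
    "Idle":   [("insertCard", "CardIn")],
    "CardIn": [("enterPIN", "Menu"), ("eject", "Idle")],
    "Menu":   [("withdraw", "Idle")],
}

def transition_cover(machine, initial):
    """Return one event sequence per transition, reached via BFS paths."""
    # Shortest event path from the initial state to every reachable state.
    paths, queue = {initial: []}, deque([initial])
    while queue:
        state = queue.popleft()
        for event, nxt in machine.get(state, []):
            if nxt not in paths:
                paths[nxt] = paths[state] + [event]
                queue.append(nxt)
    # A test case = prefix reaching the source state plus the transition.
    return [paths[src] + [event]
            for src in paths for event, _ in machine.get(src, [])]

for test in transition_cover(machine, "Idle"):
    print(" -> ".join(test))
```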

    Certifications of Critical Systems – The CECRIS Experience

    Get PDF
    In recent years, considerable effort has been devoted, in both industry and academia, to the development, validation and verification of critical systems, i.e. those systems whose malfunctions or failures reach a critical level, both in terms of risk to human life and in terms of economic impact. Certifications of Critical Systems – The CECRIS Experience documents the main insights on cost-effective verification and validation processes gained during the European research project CECRIS (Certification of Critical Systems). The objective of the research was to tackle the challenges of certification by focusing on the aspects that are most difficult and most important for the current and future critical systems industry: the effective use of methodologies, processes and tools. The CECRIS project took a step forward in the growing field of development, verification, validation and certification of critical systems. Starting from both scientific and industrial state-of-the-art methodologies for system development and the impact of their usage on the verification, validation and certification of critical systems, the project aimed at developing strategies and techniques, supported by automatic or semi-automatic tools and methods, for these activities, setting guidelines to support engineers during the planning of the verification and validation phases.

    Tolerância a falhas em sistemas MPI com grupos dinâmicos de processos recomendados e registro de mensagens distribuído baseado em paxos

    Get PDF
    Advisor: Prof. Dr. Elias P. Duarte Jr. Doctoral thesis - Universidade Federal do Paraná, Setor de Ciências Exatas, Programa de Pós-Graduação em Informática. Defended in Curitiba, 11/05/2017. Includes references: f. 93-103. Area of concentration: Computer Science. Abstract: HPC (High-Performance Computing) systems are employed to execute long-running applications including, for example, complex industrial and scientific simulations. Building robust, fault-tolerant HPC systems remains a challenge as the size of the system grows. This doctoral thesis presents two fault-tolerant strategies for HPC systems based on MPI. The first contribution presents a solution to deal with the performance variation of HPC system processes, which negatively affects or even prevents the execution of HPC applications. This is the case in shared clusters, in which a single node can become too slow and thus compromise the entire application execution. The thesis proposes a new system-level diagnosis model in which processes execute tests among themselves in order to determine whether they are recommended or non-recommended. Processes classified as recommended form a Dynamic Group of Recommended Processes (DGRP), which is responsible for running the application; processes tested as non-recommended are removed from the DGRP. A process can rejoin the DGRP after a round of consensus executed by the DGRP processes. The model was implemented and used to monitor processes in a shared multi-user cluster. In the case study presented, the DGRP processes execute the parallel sorting algorithm Hyperquicksort, which is implemented and adapted to reconfigure itself at runtime in order to proceed even if up to N - 1 processes become non-recommended (where N is the total number of processes). Results are presented showing that the strategy is efficient. The second contribution of the thesis is in the field of the rollback-recovery technique, in its variant based on message logging. Message logging does not require all processes to coordinate in order to save their states during normal execution, nor does it require restarting all processes from the last saved states after a single process fails. However, most existing message logging protocols rely on a centralized entity called the event logger, which stores recovery information called determinants and which does not tolerate failures. This thesis proposes, to the best of our knowledge, the first distributed and fault-tolerant event logger. Two implementations are presented, based on the Paxos consensus algorithm, called Classic Paxos and Parallel Paxos. A pessimistic message logging protocol is built and implemented on top of the proposed event logger to perform automatic recovery of MPI applications after failures. The performance of the event logger is evaluated using both the AMG (Algebraic MultiGrid) application and the NAS Parallel Benchmark applications. Application recovery is evaluated in two case studies based on Gusfield's parallel cut tree algorithm and the AMG application. Results show that the event logger based on Parallel Paxos performs as well as or better than a centralized event logger and that the proposed recovery protocol is also efficient. Keywords: Fault Tolerance in MPI, DGRP, Message Logging, Parallel Paxos
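    To illustrate the pessimistic message-logging discipline the abstract describes, here is a minimal Python sketch with hypothetical names: the determinant of each received message is persisted at an event logger before the message is delivered, so a restarted process can replay deliveries in the original order. The thesis replicates the event logger with Paxos for fault tolerance; a plain in-memory dict stands in for it here.

```python
# Minimal sketch of pessimistic message logging (hypothetical API, not
# the thesis's implementation). The key property: log the determinant
# synchronously, and deliver the message only after the log acknowledges.

class EventLogger:
    """Stand-in for the (Paxos-replicated) event logger."""
    def __init__(self):
        self.determinants = {}  # (receiver, delivery_no) -> determinant

    def log(self, determinant):
        # In the thesis this write would be made durable via consensus
        # (Classic or Parallel Paxos) before it is acknowledged.
        key = (determinant["receiver"], determinant["delivery_no"])
        self.determinants[key] = determinant

class Process:
    def __init__(self, pid, logger):
        self.pid, self.logger, self.delivery_no = pid, logger, 0

    def receive(self, sender, send_seq, payload):
        self.delivery_no += 1
        # Pessimistic: persist the determinant first, deliver afterwards.
        self.logger.log({"receiver": self.pid, "sender": sender,
                         "send_seq": send_seq,
                         "delivery_no": self.delivery_no})
        self.deliver(payload)

    def deliver(self, payload):
        print(f"{self.pid} delivers {payload!r}")

logger = EventLogger()
p1 = Process("p1", logger)
p1.receive(sender="p0", send_seq=1, payload="work-item")
```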

    POSTER SESSIONS

    Get PDF

    Biometric Systems

    Get PDF
    Because of the accelerating progress in biometrics research and the latest nation-state threats to security, this book's publication is not only timely but also much needed. This volume contains seventeen peer-reviewed chapters reporting the state of the art in biometrics research: security issues, signature verification, fingerprint identification, wrist vascular biometrics, ear detection, face detection and identification (including a new survey of face recognition), person re-identification, electrocardiogram (ECG) recognition, and several multi-modal systems. This book will be a valuable resource for graduate students, engineers, and researchers interested in understanding and investigating this important field of study.

    History of Construction Cultures Volume 2

    Get PDF
    Volume 2 of History of Construction Cultures contains papers presented at 7ICCH, the Seventh International Congress on Construction History, held at the Lisbon School of Architecture, Portugal, from 12 to 16 July 2021. The conference was organized by the Lisbon School of Architecture (FAUL), the NOVA School of Social Sciences and Humanities, the Portuguese Society for Construction History Studies, and the University of the Azores. The contributions cover the wide interdisciplinary spectrum of construction history and consist of the most recent advances in theory and practical case-study analysis, under themes such as: epistemological issues; building actors; building materials; building machines, tools and equipment; construction processes; building services and techniques; structural theory and analysis; political, social and economic aspects; and knowledge transfer and cultural translation of construction cultures. Furthermore, papers presented at thematic sessions cover important problem areas, historical periods and different regions of the globe, opening new directions for construction history research. We are what we build and how we build; thus, the study of construction history is now more than ever at the centre of current debates about the shape of a sustainable future for humankind. History of Construction Cultures is therefore a critical and indispensable work for expanding our understanding of the ways in which everyday building activities have been perceived and experienced in different cultures, from ancient times to our century and all over the world.