71 research outputs found

    Uso de riscos na validação de sistemas baseados em componentes [Use of risk in the validation of component-based systems]

    Advisors: Eliane Martins, Henrique Santos do Carmo Madeira. Doctoral thesis (Tese de doutorado), Universidade Estadual de Campinas, Instituto de Computação.
    Abstract: Today's societies have become increasingly dependent on information services. A corollary is that we have also become increasingly dependent on the computer software products that provide such services. The increasing tendency of software development to employ reusable components means that software dependability now relies even more on the dependability of the integrated components. Components are usually acquired from third parties or developed by unknown teams, so the criteria employed in their testing phase are hardly ever available. This lack of information, coupled with the use of components that were not developed for the particular system and computational environment at hand, makes component reuse risky for the integrating system. Traditional studies on the risk of software components suggest that two factors characterize that risk: the probability of a residual fault in the component, and the probability of such a fault being activated and its impact on the computational system. The present work proposes the use of risk analysis to select the injection and monitoring points for fault-injection campaigns. It also proposes an experimental approach to evaluate the risk a particular component represents to a system. To estimate the probability of a residual fault in the component, software metrics are combined in a statistical model; the impact of fault activation is estimated experimentally using fault injection. With this approach, risk evaluation becomes generic, repeatable, and grounded in well-defined measurements. The methodology can therefore serve as a component risk benchmark, to be employed when the most suitable of several functionally similar components must be chosen for a computational system. Applying this approach to case studies allowed us to choose the best component in each case, taking into account the diverse objectives and needs of the users. Doctorate (Doutorado) in Computer Science (Ciência da Computação).
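
    To make the two-factor risk model concrete, here is a minimal sketch assuming a logistic-regression combination of software metrics for the fault-probability side (the thesis's actual statistical model is not reproduced here) and an impact value as would be obtained from a fault-injection campaign. All metric names, weights, and impact figures are illustrative placeholders.

```python
# Hedged sketch of the two-factor component risk model described above:
# risk = P(residual fault) x impact of fault activation.
# Weights, bias, and impact values are illustrative, not from the thesis.
import math

def fault_probability(metrics: dict[str, float], weights: dict[str, float], bias: float) -> float:
    """Estimate P(residual fault) by combining software metrics in a
    logistic model (one plausible 'statistical model'; an assumption here)."""
    z = bias + sum(weights[m] * v for m, v in metrics.items())
    return 1.0 / (1.0 + math.exp(-z))

def component_risk(metrics, weights, bias, impact):
    """Risk = probability of a residual fault x experimentally estimated
    impact of its activation (e.g., from a fault-injection campaign)."""
    return fault_probability(metrics, weights, bias) * impact

# Illustrative comparison of two functionally similar components.
weights = {"cyclomatic_complexity": 0.04, "kloc": 0.3, "coupling": 0.2}
risk_a = component_risk({"cyclomatic_complexity": 25, "kloc": 4.0, "coupling": 7}, weights, -3.0, impact=0.8)
risk_b = component_risk({"cyclomatic_complexity": 40, "kloc": 2.5, "coupling": 3}, weights, -3.0, impact=0.4)
print(f"risk A={risk_a:.3f}, risk B={risk_b:.3f}")  # lower risk is preferable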

    Delayed failure of software components using stochastic testing

    The present research investigates the delayed failure of software components and addresses the problem that the conventional approach to software testing is unlikely to reveal this type of failure. Delayed failure is defined as a failure that occurs some time after the condition that causes it, as a consequence of long-latency error propagation. This research seeks to close a perceived gap between academic research into software testing and industrial software testing practice by showing that stochastic testing can reveal delayed failure, supporting this conclusion with a model of error propagation and failure that has been validated by experiment. The focus of the present research is on software components described by a request-response model. Within this conceptual framework, a Markov chain model of error propagation and failure is used to derive the expected delayed-failure behaviour of software components. Results from an experimental study of the delayed failure of the DBMS components MySQL and Oracle XE, using stochastic testing with random generation of SQL, are consistent with the behaviour expected from the Markov chain model. Metrics for failure delay and reliability are shown to depend on the characteristics of the chosen experimental profile. SQL mutation is used to generate negative as well as positive test profiles. There appear to be few systematic studies of delayed failure in the software engineering literature, and none on stochastic testing related to delayed failure of software components, or specifically to delayed failure of DBMSs. Stochastic testing is shown to be an effective technique for revealing delayed failure of software components, as well as a suitable technique for reliability and robustness testing of software components. These results provide deeper insight into the testing technique and should lead to further research. Stochastic testing could provide a dependability benchmark for component-based software engineering.
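
    As an illustration of how an absorbing Markov chain yields expected delayed-failure behaviour, here is a minimal sketch with three states (healthy, latent error, failure); the transition probabilities are illustrative placeholders, not parameters from the study.

```python
# Hedged sketch of an absorbing Markov chain for error propagation and
# delayed failure under a request-response model. States: 0 = healthy,
# 1 = latent error (no visible failure yet), 2 = failure (absorbing).
import numpy as np

P = np.array([
    [0.990, 0.009, 0.001],   # healthy: mostly stays healthy
    [0.000, 0.950, 0.050],   # latent error persists, occasionally surfaces
    [0.000, 0.000, 1.000],   # failure is absorbing
])

# Expected requests until failure from each transient state:
# t = (I - Q)^-1 * 1, where Q is the transient-to-transient block.
Q = P[:2, :2]
t = np.linalg.solve(np.eye(2) - Q, np.ones(2))
print(f"expected requests to failure: healthy={t[0]:.0f}, latent={t[1]:.0f}")
# A long expected dwell time in the latent-error state is exactly the
# long-latency propagation that short conventional test runs can miss.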

    How Practical Are Intrusion-Tolerant Distributed Systems?

    Building secure, inviolable systems using traditional mechanisms is increasingly becoming an unattainable goal. The recognition of this fact has fostered interest in alternative approaches to security such as intrusion tolerance, which applies fault-tolerance concepts and techniques to security problems. Although this area is quite promising, intrusion-tolerant distributed systems typically rely on the assumption that system components fail or are compromised independently. This is a strong assumption that has been repeatedly questioned. In this paper we discuss how this assumption can be substantiated in practice through diversity of system components. We present a taxonomy of axes of diversity and discuss how they provide failure independence. Furthermore, we provide a practical example of an intrusion-tolerant system built using diversity.

    A survey on fault-models for QoS studies of service-oriented systems

    This survey paper presents an overview of the fault-models available to the researcher who wants to parameterise system-models in order to study Quality-of-Service (QoS) properties of systems with service-oriented architecture. The concept of a system-model subsumes the whole spectrum between abstract mathematical models and testbeds based on actual implementations. Fault-models, on the other hand, are parameters to system-models. They introduce faults and disturbances into the system-model, thereby allowing the study of QoS under realistic conditions. In addition to a survey of existing fault-models, the paper also provides a discussion of available fault-classification schemes.
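
    To illustrate the paper's framing of a fault-model as a parameter plugged into a system-model, here is a minimal sketch in which the "system-model" is a trivial service-call simulator and the fault-model injects omission and timing disturbances. All function names, fault classes, and rates are illustrative assumptions, not taken from the survey.

```python
# Hedged sketch: a fault-model parameterising a system-model for a QoS study.
# Rates and names are illustrative placeholders.
import random

def fault_model(p_omission=0.02, p_delay=0.05, delay_s=1.5):
    """Return a disturbance for one service invocation, or None."""
    r = random.random()
    if r < p_omission:
        return ("omission", None)          # request silently lost
    if r < p_omission + p_delay:
        return ("delay", delay_s)          # late response (timing fault)
    return None

def invoke_service(handler, request, fault=fault_model):
    """System-model step: apply the fault-model, then the nominal behaviour."""
    disturbance = fault()
    if disturbance and disturbance[0] == "omission":
        raise TimeoutError("no response (injected omission fault)")
    latency = 0.010 + (disturbance[1] if disturbance else 0.0)
    return handler(request), latency

# QoS study: estimate availability and mean latency under this fault-model.
ok, total_latency, n = 0, 0.0, 10_000
for _ in range(n):
    try:
        _, lat = invoke_service(lambda req: req.upper(), "ping")
        ok += 1
        total_latency += lat
    except TimeoutError:
        pass
print(f"availability={ok/n:.3f}, mean latency={total_latency/ok*1000:.1f} ms")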

    Design Development Test and Evaluation (DDT&E) Considerations for Safe and Reliable Human Rated Spacecraft Systems

    A team directed by the NASA Engineering and Safety Center (NESC) collected methodologies for how best to develop safe and reliable human-rated systems and how to identify the drivers that provide the basis for assessing safety and reliability. The team also identified techniques, methodologies, and best practices to assure that NASA can develop safe and reliable human-rated systems. The results are drawn from a wide variety of resources, from experts involved with the space program since its inception to the best practices espoused in contemporary engineering doctrine. This report focuses on safety and reliability considerations; it does not duplicate or update existing references, nor is it intended to replace existing standards and policy.

    Improving the process of analysis and comparison of results in dependability benchmarks for computer systems

    Thesis by compendium (Tesis por compendio).
    Abstract: Dependability benchmarks are designed to assess, through the quantitative characterization of performance and dependability attributes, the behavior of systems in the presence of faults. In this type of benchmark, where systems are assessed in the presence of perturbations, not being able to select the most suitable system may have serious implications (economic, reputational, or even loss of life). For that reason, dependability benchmarks are expected to meet certain properties, such as non-intrusiveness, representativeness, repeatability, and reproducibility, that guarantee the robustness and accuracy of their processes.
    However, despite the importance of comparing systems or components, the field of dependability benchmarking has a standing problem with the analysis and comparison of results. While the main research focus has been on developing and improving experimental procedures to obtain the required measures in the presence of faults, the processes involving the analysis and comparison of results have been mostly neglected. As a consequence, many works in this field analyze and compare the results of different systems in an ambiguous way: the analysis rests on argumentation, or is not even reported. Under these circumstances, benchmark users find it difficult to apply these benchmarks and to compare their results with those obtained by others, so extending the application of dependability benchmarks and cross-exploiting results across works is currently hardly feasible.
    This thesis has focused on developing a methodology to assist developers and users of dependability benchmarks in tackling the problems present in the analysis and comparison of results. Designed to guarantee the fulfillment of the benchmarks' properties, the methodology seamlessly integrates the process of analysis of results into the procedural flow of a dependability benchmark. Inspired by procedures from the field of operational research, it provides evaluators with the means to make their analysis process explicit, and more representative of the given context. The results obtained from applying this methodology to several case studies in different application domains show the contributions of this work to improving the process of analysis and comparison of results in dependability benchmarking for computer systems.
    Martínez Raga, M. (2018). Improving the process of analysis and comparison of results in dependability benchmarks for computer systems [Tesis doctoral no publicada]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/111945
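
    As one example of the kind of operational-research technique that can make such an analysis explicit and repeatable, here is a minimal weighted-sum multi-criteria scoring sketch; the thesis's actual procedure is not reproduced here, and the systems, measures, and weights are illustrative placeholders.

```python
# Hedged sketch: replacing ad-hoc argumentation with an explicit,
# repeatable weighted-sum aggregation of dependability-benchmark results
# (one classic operational-research technique; the thesis's method may differ).

measures = {  # illustrative benchmark results for two candidate systems
    "sys_A": {"throughput_ops": 9200, "availability": 0.9990, "mttr_s": 42},
    "sys_B": {"throughput_ops": 8100, "availability": 0.9999, "mttr_s": 18},
}
weights = {"throughput_ops": 0.3, "availability": 0.5, "mttr_s": 0.2}
lower_is_better = {"mttr_s"}  # cost attributes are inverted

def normalise(attr, value):
    """Min-max normalisation across systems; invert cost attributes."""
    vals = [m[attr] for m in measures.values()]
    lo, hi = min(vals), max(vals)
    x = 0.5 if hi == lo else (value - lo) / (hi - lo)
    return 1.0 - x if attr in lower_is_better else x

def score(system):
    """Explicit aggregation: anyone can re-run it and audit the weights."""
    return sum(w * normalise(a, measures[system][a]) for a, w in weights.items())

for s in measures:
    print(f"{s}: score={score(s):.3f}")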

    Certifications of Critical Systems – The CECRIS Experience

    In recent years, a considerable amount of effort has been devoted, both in industry and academia, to the development, validation and verification of critical systems, i.e. those systems whose malfunctions or failures reach a critical level both in terms of risk to human life and in terms of economic impact. Certifications of Critical Systems – The CECRIS Experience documents the main insights on cost-effective verification and validation processes that were gained during work in the European research project CECRIS (acronym for Certification of Critical Systems). The objective of the research was to tackle the challenges of certification by focusing on those aspects that turn out to be most difficult and most important for the current and future critical-systems industry: the effective use of methodologies, processes and tools. The CECRIS project took a step forward in the growing field of development, verification, validation and certification of critical systems. Starting from both the scientific and industrial state-of-the-art methodologies for system development and the impact of their usage on the verification, validation and certification of critical systems, the project aimed at developing strategies and techniques, supported by automatic or semi-automatic tools and methods, for these activities, and at setting guidelines to support engineers during the planning of the verification and validation phases.