
    Automatic Generation of Basis Component Path Coverage for Software Architecture Testing

    Architecture-centric development is one of the most promising methods for improving software quality, reducing software cost, and raising software productivity. Software architecture research not only focuses on the design phase but also covers every phase of the software life cycle. Because software architecture has characteristics different from traditional software, conventional testing methods do not apply to it directly. Basis path testing is a simple and efficient white-box testing method, but traditional methods generate basis paths from the control flow graph and are not suitable for generating component paths when the goal is to detect more software architecture errors. This paper presents a new concept, the Basis Component Path (BCP), for C2-style architecture and proposes a method to generate BCPs. A C2-style architecture is represented by components, connectors, and interfaces, and an architecture component interaction graph (CIG) is used to describe the interface connection relationships. We also provide an algorithm to generate the BCP set. Experiments apply the proposed method to a typical C2-style architecture, and the results show that it efficiently generates a BCP set containing as many BCPs as possible and meets the requirements of basis component path testing.
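    The abstract does not give the BCP generation algorithm itself. As a rough illustration of basis-path-style coverage over a component interaction graph, the toy Python sketch below enumerates simple paths between an entry and an exit component and greedily keeps paths until every interface edge is covered. The graph, the component names, and the greedy selection rule are assumptions for illustration only, not the paper's algorithm.

        # Toy sketch of basis-path-style coverage over a component interaction
        # graph (CIG). The graph, node names, and the greedy strategy below are
        # illustrative assumptions, not the algorithm from the paper.
        def candidate_paths(graph, entry, exit_, path=None):
            """Enumerate simple entry-to-exit paths by depth-first search."""
            path = (path or []) + [entry]
            if entry == exit_:
                yield path
                return
            for nxt in graph.get(entry, []):
                if nxt not in path:            # keep paths simple (no revisits)
                    yield from candidate_paths(graph, nxt, exit_, path)

        def greedy_basis(graph, entry, exit_):
            """Pick paths until every interface edge is covered at least once."""
            uncovered = {(u, v) for u, vs in graph.items() for v in vs}
            chosen = []
            for p in sorted(candidate_paths(graph, entry, exit_), key=len, reverse=True):
                edges = set(zip(p, p[1:]))
                if edges & uncovered:
                    chosen.append(p)
                    uncovered -= edges
                if not uncovered:
                    break
            return chosen

        # Hypothetical C2-style CIG: components wired through connectors.
        cig = {"GUI": ["Connector1"], "Connector1": ["Logic", "Clock"],
               "Logic": ["Connector2"], "Clock": ["Connector2"],
               "Connector2": ["Storage"], "Storage": []}
        for p in greedy_basis(cig, "GUI", "Storage"):
            print(" -> ".join(p))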

    Quantifying software architecture attributes

    Software architecture holds the promise of advancing the state of the art in software engineering. The architecture is emerging as the focal point of many modern reuse/evolutionary paradigms, such as Product Line Engineering, Component-Based Software Engineering, and COTS-based software development. The author focuses his research on characterizing properties of a software architecture, using software metrics to represent its error propagation probabilities, change propagation probabilities, and requirements change propagation probabilities. Error propagation probability reflects the probability that an error arising in one component of the architecture will propagate to other components at run time. Change propagation probability reflects, for a given pair of components A and B, the probability that if A is changed in a corrective/perfective maintenance operation, B has to be changed to maintain the overall function of the system. Requirements change propagation probability reflects the likelihood that a requirements change arising in one component of the architecture propagates to other components. For each case, the author presents analytical formulas based mainly on statistical theory and empirical studies, and then studies the correlations between the analytical and empirical results. The author also uses several metrics to quantify the properties of a Product Line Architecture, such as scoping, variability, commonality, and applicability, and presents his proposed means of measuring these properties together with the results of the case studies.
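    The analytical formulas themselves are not reproduced in the abstract. As a point of comparison, one simple empirical way to estimate an error propagation matrix is to count, over fault-injection runs, how often an error seeded in one component surfaces as corrupted output of another. The trace format and component names below are illustrative assumptions, not the author's metrics.

        # Minimal sketch of *empirically* estimating an error propagation
        # probability matrix from fault-injection style observations. The record
        # format and component names are assumed for illustration; the paper's
        # analytical formulas are not reproduced here.
        from collections import defaultdict

        # Each record: (component where an error was injected,
        #               set of components observed to deliver corrupted output)
        observations = [
            ("A", {"B"}), ("A", {"B", "C"}), ("A", set()),
            ("B", {"C"}), ("B", set()),
        ]

        injected = defaultdict(int)
        propagated = defaultdict(int)    # keyed by (source, target)

        for source, corrupted in observations:
            injected[source] += 1
            for target in corrupted:
                propagated[(source, target)] += 1

        for (src, dst), hits in sorted(propagated.items()):
            print(f"EP({src} -> {dst}) ~= {hits / injected[src]:.2f}")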

    Better, Faster, Stronger Sequence Tagging Constituent Parsers

    Sequence tagging models for constituent parsing are faster, but less accurate, than other types of parsers. In this work, we address the following weaknesses of such constituent parsers: (a) high error rates around closing brackets of long constituents, (b) large label sets, leading to sparsity, and (c) error propagation arising from greedy decoding. To effectively close brackets, we train a model that learns to switch between tagging schemes. To reduce sparsity, we decompose the label set and use multi-task learning to jointly learn to predict sublabels. Finally, we mitigate issues from greedy decoding through auxiliary losses and sentence-level fine-tuning with policy gradient. Combining these techniques, we clearly surpass the performance of sequence tagging constituent parsers on the English and Chinese Penn Treebanks, and reduce their parsing time even further. On the SPMRL datasets, we observe even greater improvements across the board, including a new state of the art on Basque, Hebrew, Polish and Swedish. Comment: NAACL 2019 (long papers). Contains corrigendum.
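    To make the label-decomposition idea concrete, the small sketch below splits a composite tag into sublabels that separate prediction heads could learn under multi-task learning. The tag format shown is an illustrative assumption, not the exact encoding used in the paper.

        # Sketch of the label-decomposition idea: instead of predicting one
        # composite tag per word, predict its parts with separate heads. The
        # "level~nonterminal~unarychain" format is assumed for illustration.
        composite_tags = ["2~S~NONE", "1~NP~NONE", "-1~VP~NONE", "1~NP~ADJP", "2~S~NONE"]

        levels, nonterminals, unaries = zip(*(t.split("~") for t in composite_tags))

        print("composite label set size:", len(set(composite_tags)))
        print("decomposed sizes:", len(set(levels)), len(set(nonterminals)), len(set(unaries)))
        # A multi-task tagger learns one small softmax per sub-label set, which
        # stays compact even when the product space of full labels is large and sparse.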

    Error propagation metrics from XMI

    This work describes the production of an application, Error Propagation Metrics from XMI, which can extract, process, and display software design metrics from XMI files. The tool archives these design metrics in a standard XML format defined by a metric document type definition. XMI is a flavour of XML allowing the description of UML models; as such, the XMI representation of a software design will include information from which a variety of software design metrics can be extracted. These metrics are potentially useful in improving the software design process, either throughout the early stages of design if a suitable XMI-enabled modelling tool is deployed, or to enable the comparison of completed software projects by extracting design metrics from UML models reverse engineered from the implemented source code. The tool is able to derive error propagation metrics from test XMI files created from UML sequence and state diagrams and from reverse-engineered Java source code. However, variation was observed between the XMI representations generated by different software design tools, limiting the ability of the tool to process XMI from all sources. Furthermore, it was noted that subtle differences between UML design representations might have a marked effect on the quality of the metrics derived. In conclusion, to validate the usefulness of the metrics that can be extracted from XMI files, it would be useful to follow well-documented design projects throughout the entire design and implementation process. Alternatively, the tool might be used to compare metrics from well-matched design implementations. In either case, design metrics will only be of true value to software engineers if they can be associated empirically with a validated measure of system quality.
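    As a rough illustration of the kind of extraction involved, the sketch below walks an XMI file with Python's standard library and counts sequence-diagram messages per sending element. Tag and attribute names vary between modelling tools (the very problem noted above), so the matching rules here are assumptions that would need adjusting per tool; this is not the tool described in the work.

        # Rough sketch of pulling design information out of an XMI export with
        # the standard library. The element/attribute names matched below are
        # assumptions; real exports differ between modelling tools.
        import xml.etree.ElementTree as ET
        from collections import Counter

        def message_counts(xmi_path):
            """Count UML sequence-diagram messages per sending element."""
            tree = ET.parse(xmi_path)
            counts = Counter()
            for elem in tree.getroot().iter():
                # Treat elements whose tag or xmi:type marks them as messages.
                if elem.tag.endswith("message") or \
                   elem.attrib.get("{http://www.omg.org/XMI}type", "").endswith("Message"):
                    sender = elem.attrib.get("sendEvent", "unknown")
                    counts[sender] += 1
            return counts

        if __name__ == "__main__":
            for sender, n in message_counts("model.xmi").items():   # hypothetical file
                print(sender, n)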

    Uso de riscos na validação de sistemas baseados em componentes (Use of risks in the validation of component-based systems)

    Advisors: Eliane Martins and Henrique Santos do Carmo Madeira. Doctoral thesis (Doutor em Ciência da Computação), Universidade Estadual de Campinas, Instituto de Computação. Abstract: Today's societies have become increasingly dependent on information services. A corollary is that we have also become increasingly dependent on the computer software products that provide such services. The increasing tendency of software development to employ reusable components means that software dependability has become even more reliant on the dependability of the integrated components. Components are usually acquired from third parties or developed by unknown development teams, so the criteria employed in the testing phase of component-based systems are hardly ever available. This lack of information, coupled with the use of components that were not specifically developed for a particular system and computational environment, makes component reuse risky for the integrating system. Traditional studies on the risk of software components suggest that two aspects must be considered when risk is assessed, namely the probability of a residual fault in the software component, and the probability that such a fault is activated and impacts the computational system. The present work proposes the use of risk analysis to select the injection and monitoring points for fault injection campaigns. It also proposes an experimental approach to evaluate the risk a particular component may represent to a system. In order to determine the probability of a residual fault in the component, software metrics are combined in a statistical model. The impact of fault activation is estimated using fault injection. Through this experimental approach, risk evaluation becomes replicable and grounded in well-defined measurements. In this way, the methodology can be used as a component risk benchmark, and can be employed when it is necessary to choose the most suitable among several functionally similar components for a particular computational system. The results obtained in the application of this approach to specific case studies allowed us to choose the best component in each case, without jeopardizing the diverse objectives and needs of their users.
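    A minimal sketch of the risk idea described above: a component's risk combines the estimated probability of a residual fault (here a toy logistic model over static metrics) with the impact measured by fault injection. The coefficients, metrics, and impact values are invented placeholders, not results from the thesis.

        # Toy risk score: (estimated probability of a residual fault) x (impact
        # measured by fault injection). All numbers below are made-up placeholders.
        import math

        def fault_probability(metrics, weights, bias=-4.0):
            """Toy logistic model over static code metrics (size, complexity, ...)."""
            z = bias + sum(weights[m] * v for m, v in metrics.items())
            return 1.0 / (1.0 + math.exp(-z))

        weights = {"loc_k": 0.8, "cyclomatic": 0.05}       # assumed coefficients
        components = {
            # component: (static metrics, impact measured by fault injection in [0, 1])
            "parser":  ({"loc_k": 3.2, "cyclomatic": 41}, 0.35),
            "logger":  ({"loc_k": 0.9, "cyclomatic": 12}, 0.05),
            "storage": ({"loc_k": 2.1, "cyclomatic": 28}, 0.80),
        }

        for name, (metrics, impact) in components.items():
            risk = fault_probability(metrics, weights) * impact
            print(f"{name:8s} risk ~= {risk:.3f}")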

    Reconstruction of Software Component Architectures and Behaviour Models using Static and Dynamic Analysis

    Model-based performance prediction systematically deals with the evaluation of software performance in order to, for example, avoid bottlenecks, estimate execution environment sizing, or identify scalability limitations for new usage scenarios. Such performance predictions require up-to-date software performance models. This book describes a new integrated reverse engineering approach for the reconstruction of parameterised software performance models (software component architecture and behaviour).

    A Hierarchical Framework for Estimating Heterogeneous Architecture-based Software Reliability

    Problem. The composite model approach, which follows a DTMC process with a constant failure rate, is not analytically tractable when it comes to improving its method of solution for estimating software reliability. In such cases, a hierarchical approach is preferred to improve the accuracy of the method of solution for estimating reliability. Very few studies have been conducted on heterogeneous architecture-based software reliability, and those that have been done use the composite model for reliability estimation. To my knowledge, no research has been done where a hierarchical approach is taken to estimate heterogeneous architecture-based software reliability. This paper explores the use and effectiveness of a hierarchical framework to estimate heterogeneous architecture-based software reliability. -- Method. Concepts of reliability and reliability prediction models for heterogeneous software architecture were surveyed. The different architectural styles were identified as batch-sequential, parallel filter, fault tolerance, and call and return. A method for evaluating these four styles solely on the basis of transition probability was proposed. Four case studies were selected from similar research to test the effectiveness of the proposed hierarchical framework. The study assumes that the method of extracting the information about the software architecture was accurate and that the actual reliability figures for the systems used were free of error. -- Results. The percentage difference between the reliability estimated by the proposed hierarchical framework and the actual reliability was 5.12%, 11.09%, 0.82%, and 52.14% for Cases 1, 2, 3, and 4, respectively. The proposed hierarchical framework did not work for Case 4, which showed much higher values of component utilization and therefore higher interactions between components when compared with the other cases. -- Conclusions. The proposed hierarchical framework generally showed close agreement with the actual reliability of the software systems used in the case studies. However, the results obtained by the proposed hierarchical framework were in disagreement with the actual reliability for Case 4. This is due to the higher component interactions in Case 4 compared with the other cases, and it shows that there are limits to the extent to which the proposed hierarchical framework can be applied. The reasoning for the limitations of the hierarchical approach has not been cited in any research on the subject matter. Even with these limitations, the hierarchical framework for estimating heterogeneous architecture-based software reliability can still be applied when high accuracy is not required and the interactions among components in the software system are not too high. Thesis (M.S.) -- Andrews University, College of Arts and Sciences, 201
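    For orientation, one widely used hierarchical formulation derives expected component visit counts from the DTMC's fundamental matrix and combines per-component reliabilities as a product weighted by those counts. The sketch below illustrates that general idea with invented numbers; it is not the thesis's proposed framework or its case-study data.

        # One common hierarchical estimate: expected visits from the DTMC's
        # fundamental matrix, then a product of component reliabilities.
        # The transition matrix and reliabilities are invented for illustration.
        import numpy as np

        # Transition probabilities among transient components C1..C3
        # (row i, column j = probability that control moves from Ci to Cj;
        # probability mass missing from a row exits to the absorbing "done" state).
        Q = np.array([
            [0.0, 0.7, 0.3],
            [0.0, 0.0, 0.6],
            [0.2, 0.0, 0.0],
        ])
        R = np.array([0.999, 0.995, 0.990])    # per-component reliabilities

        # Expected number of visits to each component, starting in C1.
        N = np.linalg.inv(np.eye(3) - Q)
        visits = N[0]

        system_reliability = float(np.prod(R ** visits))
        print("expected visits:", np.round(visits, 3))
        print("estimated system reliability:", round(system_reliability, 4))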

    Exception handling in the development of fault-tolerant component-based systems

    Advisor: Cecilia Mary Fischer Rubira. Doctoral thesis (Doutor em Ciência da Computação), Universidade Estadual de Campinas, Instituto de Computação. Abstract: Exception handling mechanisms were conceived as a means to help manage the complexity of fault-tolerant software. They promote an explicit textual separation between normal code and the code that deals with abnormal situations, in order to support the construction of programs that are more concise, evolvable, and reliable. Several mainstream programming languages and most of the existing component models implement exception handling mechanisms. In spite of its many benefits, exception handling can be a source of many design faults if used in an ad hoc fashion. Recent studies show that developers of large-scale software systems based on component infrastructures have habits concerning the use of exception handling that make applications vulnerable to faults and hard to maintain. Software components introduce new challenges which are not addressed by traditional exception handling mechanisms and which increase the chances of problems occurring; examples include unavailability of source code and architectural mismatches. In this work, we propose two complementary techniques centered on exception handling for the construction of fault-tolerant component-based systems. Both of them emphasize system structure as a means to reduce the impact of fault tolerance mechanisms on the overall complexity of a software system and the number of design faults that stem from that complexity. The first is an approach for the architectural design of a system's error handling capabilities. It addresses the problem of verifying whether a software architecture satisfies certain properties of interest pertaining to the flow of exceptions between architectural components, e.g., whether all the exceptions signaled at the architectural level are eventually handled. The proposed approach is based on a set of existing tools that automate this process as much as possible. The second consists in applying aspect-oriented programming (AOP) to better modularize exception handling code. We have conducted a thorough study aimed at improving our understanding of the effects of AOP on exception handling code and identifying the situations where its use is advantageous and those where it is not.
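    As a small illustration of the modularization goal, the sketch below uses a Python decorator to keep an exception handling policy in one place instead of scattering try/except blocks through component code. This is only a decorator-based analogue of AOP-style advice, not the aspect-oriented mechanism studied in the thesis.

        # Decorator-based analogue of AOP error-handling advice: the handling
        # policy lives in one place instead of every component operation.
        import functools, logging

        def handles(*exc_types, fallback=None):
            """Wrap a component operation with a single, reusable handling policy."""
            def decorator(fn):
                @functools.wraps(fn)
                def wrapper(*args, **kwargs):
                    try:
                        return fn(*args, **kwargs)
                    except exc_types as exc:
                        logging.warning("handled %s in %s: %s",
                                        type(exc).__name__, fn.__name__, exc)
                        return fallback
                return wrapper
            return decorator

        @handles(KeyError, ValueError, fallback={})
        def load_config(raw):
            # "Normal" code stays free of try/except clutter.
            return {"timeout": int(raw["timeout"])}

        print(load_config({"timeout": "30"}))   # {'timeout': 30}
        print(load_config({}))                  # handled, returns the fallback {}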