
    Quality, Risk and the Taleb Quadrants

    Abstract: The definition and management of quality have evolved and taken on a variety of approaches in response to an increasing variety of needs. In industry, quality and its control respond to the need to keep an industrial process operating as "expected", reducing the sensitivity of the process to uncontrolled disturbances (robustness). Similarly, in services, quality has been defined as "satisfied customers obtaining the services they expect". Quality management, like risk management, carries a generally negative connotation arising from the consequential effects of "non-quality". Quality, just like risk, is measured as a consequence resulting from factors and events defined in terms of the statistical characteristics underlying those events. Quality and risk may thus converge, both conceptually and technically, expanding the concerns with which both domains are confronted. In this paper we analyze such a prospective convergence between quality and risk and their management. In particular, we emphasize aspects of integrated quality, risk, performance and cost in industry and services. Through these applications, we demonstrate alternative approaches to quality management and their merging with risk management, in order to improve both the quality and the risk management processes. In the analysis we apply the four quadrants proposed by Nassim Taleb for mapping consequential risks and their probability structure. Three case studies are provided: one on risk finance, a second on risk management of telecommunication systems and a third on the quality and reliability of web-based services.
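    As a minimal illustrative sketch (not the paper's code), Taleb's quadrant mapping can be expressed as a lookup on two properties of a risk: whether the decision payoff is simple (binary) or complex, and whether the underlying distribution is thin- or fat-tailed; the function name and labels below are assumptions made for illustration.

        # Illustrative sketch of Taleb's four-quadrant mapping (assumed naming,
        # not the paper's implementation): decisions have "simple" (binary) or
        # "complex" payoffs, and the event distribution is thin- or fat-tailed.
        def taleb_quadrant(payoff: str, fat_tailed: bool) -> str:
            """Return the quadrant label for a (payoff type, tail behaviour) pair."""
            if payoff not in ("simple", "complex"):
                raise ValueError("payoff must be 'simple' or 'complex'")
            if not fat_tailed:
                return ("Q1: simple payoff, thin tails" if payoff == "simple"
                        else "Q2: complex payoff, thin tails")
            return ("Q3: simple payoff, fat tails" if payoff == "simple"
                    else "Q4: complex payoff, fat tails (statistical limits)")

        # Example: an exposure with complex consequences and heavy-tailed losses
        # falls in the fourth quadrant, where standard statistics are least reliable.
        print(taleb_quadrant("complex", fat_tailed=True))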

    Web services choreography testing using semantic service description

    Web services have become popular due to their ability to integrate and interoperate with heterogeneous applications. Several web services can be combined into a single application to meet the needs of users. During web service selection, a candidate web service needs to conform to the behaviour of its client, and one way of ensuring this conformity is by testing the interaction between the web service and its user. Existing web service testing approaches mainly focus on syntax-based web service descriptions, whilst the semantic-based solutions mostly address composite process flow testing. The aim of this research is to provide an automated testing approach to support service selection during automatic web service composition using the Web Service Modeling Ontology (WSMO). The research began with understanding and analysing existing test generation approaches for web services. Second, the weaknesses of the existing approaches were identified and addressed by utilizing the choreography transition rules of WSMO to generate a Finite State Machine (FSM). The FSM was then used to generate the working test cases. Third, a technique for generating an FSM from an Abstract State Machine (ASM) was adapted for use with WSMO. This thesis finally proposes a new testing model called Choreography to Finite State Machine (C2FSM) to support the service selection of an automatic web service composition. It proposes new algorithms to automatically generate test cases from the semantic description (the WSMO choreography description). The proposed approach was then evaluated using the Amazon E-Commerce Web Service WSMO description. The quality of the test cases generated using the proposed approach was measured by assessing their mutation adequacy score. A total of 115 mutants were created based on 7 mutant operators, and a mutation adequacy score of 0.713 was obtained. The experimental validation demonstrated a significant result in the sense that C2FSM provided an efficient and feasible solution. The results of this research could assist service consumer agents in verifying the behaviour of a Web service when selecting appropriate services for web service composition.
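    A minimal sketch of the two measurable steps the abstract mentions, under assumed data structures (the rule format, state names and function names below are illustrative, not the thesis's C2FSM implementation): deriving abstract test sequences by walking an FSM built from transition rules, and computing a mutation adequacy score as the fraction of killed mutants; a killed count of 82 out of 115 mutants reproduces the reported 0.713.

        from collections import deque

        # Assumed FSM representation: {state: [(input_message, next_state), ...]}.
        # Bounded breadth-first walks from the initial state yield message
        # sequences usable as abstract test cases (illustrative sketch only).
        def generate_test_sequences(fsm, initial, final_states, max_len=6):
            tests, queue = [], deque([(initial, [])])
            while queue:
                state, path = queue.popleft()
                if state in final_states:
                    tests.append(path)
                if len(path) >= max_len:
                    continue
                for message, nxt in fsm.get(state, []):
                    queue.append((nxt, path + [message]))
            return tests

        def mutation_adequacy_score(killed, total):
            """Mutation adequacy score = killed mutants / total mutants."""
            return killed / total

        # Toy choreography loosely inspired by an e-commerce interaction.
        fsm = {"s0": [("SearchRequest", "s1")],
               "s1": [("CartAdd", "s2"), ("SearchRequest", "s1")],
               "s2": [("Checkout", "s3")]}
        print(generate_test_sequences(fsm, "s0", {"s3"}))
        print(round(mutation_adequacy_score(82, 115), 3))  # -> 0.713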

    On the quality of Web Services.

    Web Services (WSs) are gaining increasing attention as programming components, and so is their quality. WSs offer many benefits, such as assured interoperability and reusability. Conversely, they introduce a number of challenges as far as their quality is concerned, seen from the perspectives of two different stakeholders: (1) the developer/provider of WSs and (2) the consumer of WSs. Developers are usually concerned about the correctness of the WS's functionality, which can be assessed by functional testing. Consumers of WSs are usually concerned about the reliability of the WSs they depend on (in addition to other qualities). They need to know whether the WSs are available (i.e., up and running), accessible (i.e., they actually accept requests) while available, and whether they successfully deliver responses to incoming requests. Availability, Accessibility, and Successability of WSs are directly related to WS reliability. Assessing these three factors via testing is usually only feasible at late stages of the development life-cycle. If they can be predicted early during development, they can provide valuable information that may positively influence the engineering of WSs with regard to their quality. In this thesis we focus on assessing the quality of WSs via testing and via prediction. Testing of WSs is addressed by an extensive systematic literature review that focuses on a special type of WS, the semantic WS. The main objective of the review is to capture the current state of the art of functional testing of semantic WSs and to identify possible approaches for deriving functional test cases from their requirement specifications. The review follows a predefined procedure that involves automatically searching 5 well-known digital libraries. After applying the selection criteria to the search results, a total of 34 studies were identified as relevant. The required information was extracted from the studies, synthesized and summarized. The results of the systematic literature review showed that it is possible to derive test cases from requirement specifications of semantic WSs based on the different testing approaches identified in the primary studies. In more than half of the identified approaches, test cases are derived from transformed specification models; Petri Nets (and their derivatives) are the most commonly used transformation. To derive test cases, different techniques are applied to the specification models; model checking is largely used for this purpose. Prediction of Availability, Accessibility, and Successability is addressed by a correlational study focused on identifying possible relations between these three quality attributes and other internal quality measures (e.g., cyclomatic complexity) that may allow building statistically significant predictive models for them. A total of 34 students interacted freely with 20 pre-selected WSs while internal and external quality measures were collected using a data collection framework designed and implemented specifically for this purpose. The collected data were then analyzed using different statistical approaches. The correlational study confirmed that it is possible to build statistically significant predictive models for Accessibility and Successability. A very large number of significant models were built using two different approaches, namely binary logistic regression and ordinal logistic regression.
    Many significant predictive models were selected out of the identified models based on criteria that take into consideration the predictive power and the stability of the models. The selected models were validated using the bootstrap validation technique. The validation showed that only two of the selected models are well calibrated and expected to maintain their predictive power when applied to a future dataset. These two models predict Accessibility based on the number of weighted methods (WM) and the number of lines of code (LOC), respectively. The approach and the findings presented in this work for building accurate predictive models for the WS qualities Availability, Accessibility, and Successability may offer researchers and practitioners an opportunity to examine and build similar predictive models for other WS qualities, thus allowing for early prediction of the targeted qualities and hence early adjustments during development to satisfy any requirements imposed on the WSs with regard to the predicted qualities. Early prediction of WS qualities may help build trust in the WSs and reduce development costs, hence increasing their adoption.
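    As a hedged illustration of the modelling approach described (a binary logistic regression checked with a simple bootstrap), the sketch below uses scikit-learn and synthetic data; the predictor names WM and LOC come from the abstract, while the data, sample sizes and threshold are invented purely for demonstration.

        import numpy as np
        from sklearn.linear_model import LogisticRegression
        from sklearn.utils import resample

        rng = np.random.default_rng(0)
        # Synthetic internal measures: weighted methods (WM) and lines of code (LOC).
        X = np.column_stack([rng.integers(1, 60, 200), rng.integers(50, 5000, 200)])
        # Synthetic binary outcome standing in for Accessibility (1 = accessible).
        y = (X[:, 0] + 0.01 * X[:, 1] + rng.normal(0, 10, 200) < 70).astype(int)

        model = LogisticRegression(max_iter=1000).fit(X, y)
        print("apparent accuracy:", model.score(X, y))

        # Simple bootstrap validation: refit on resampled data, score on the
        # original sample, and inspect the spread of scores as an optimism check.
        scores = []
        for _ in range(100):
            Xb, yb = resample(X, y)
            scores.append(LogisticRegression(max_iter=1000).fit(Xb, yb).score(X, y))
        print("bootstrap mean accuracy:", round(float(np.mean(scores)), 3))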

    Redefining and Evaluating Coverage Criteria Based on the Testing Scope

    Test coverage information can help testers decide when to stop testing and augment their test suites when the measured coverage is not deemed sufficient. Since the notion of a test criterion was introduced in the 1970s, research on coverage testing has been very active, with much effort dedicated to defining new, more cost-effective coverage criteria or to adapting existing ones to a different domain. All these studies share the premise that, after defining the entity to be covered (e.g., branches), one cannot consider a program adequately tested if some of its entities have never been exercised by any input data. However, not all entities are of interest in every context. This is particularly true for several paradigms that emerged in the last decade (e.g., component-based development, service-oriented architecture), in which traditional coverage metrics might not always provide meaningful information. In this thesis we address this situation and redefine coverage criteria so as to focus on the program parts that are relevant to the testing scope. We instantiate this general notion of scope-based coverage by introducing three coverage criteria and demonstrate how they can be applied to different testing contexts. When applied to the context of software reuse, our approach proved useful for supporting test case prioritization, selection and minimization. Our studies showed that for prioritization we can improve the average rate of faults detected. For test case selection and minimization, we can considerably reduce the test suite size with little to no extra impact on fault detection effectiveness. When the source code is not available, as in the service-oriented architecture paradigm, we propose an approach that customizes coverage, measured on invocations at the service interface, based on data from similar users. We applied this approach to a real-world application and, in our study, were able to predict the entities that would be of interest for a given user with high precision. Finally, we introduce a first-of-its-kind coverage criterion for operational-profile-based testing that exploits program spectra obtained from usage traces. Our study showed that it correlates better than traditional coverage with the probability that the next test input will fail, which implies that our approach can provide a better stopping rule. Promising results were also observed for test case selection. Our redefinition of coverage criteria approaches the topic of coverage testing from a completely different angle. Such a novel perspective paves the way for new avenues of research towards improving the cost-effectiveness of testing that are yet to be explored.
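    A minimal sketch of the scope-based idea, with assumed names and toy data (not the thesis's tooling): coverage is computed only over the entities declared relevant to the testing scope, and a greedy "additional coverage" ordering over those entities gives a simple prioritization.

        # Coverage restricted to the entities relevant to the testing scope
        # (illustrative sketch; entity and test identifiers are invented).
        def scope_coverage(executed, relevant):
            return len(executed & relevant) / len(relevant) if relevant else 1.0

        def prioritize(tests, relevant):
            """Greedy additional-coverage ordering over scope-relevant entities."""
            remaining, covered, order = dict(tests), set(), []
            while remaining:
                name = max(remaining,
                           key=lambda t: len((remaining[t] & relevant) - covered))
                covered |= remaining.pop(name) & relevant
                order.append(name)
            return order

        tests = {"t1": {"b1", "b2"}, "t2": {"b2", "b3", "b7"}, "t3": {"b4"}}
        relevant = {"b1", "b2", "b3", "b4"}   # entities inside the testing scope
        print(prioritize(tests, relevant))    # e.g. ['t1', 't2', 't3']
        print(scope_coverage({"b1", "b2", "b7"}, relevant))  # 0.5: b7 is out of scope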

    Automated Realistic Test Input Generation and Cost Reduction in Service-centric System Testing

    Service-centric System Testing (ScST) is more challenging than testing traditional software due to the complexity of service technologies and the limitations imposed by the SOA environment. One of the most important problems in ScST is realistic test data generation. Realistic test data is often generated manually or taken from an existing source, making it hard to automate and laborious to produce. Another limitation that makes ScST challenging is the cost associated with invoking services during the testing process. This thesis aims to provide solutions to these problems: automated realistic input generation and cost reduction in ScST. To address automation in realistic test data generation, the concept of Service-centric Test Data Generation (ScTDG) is presented, in which existing services are used as realistic data sources. ScTDG minimises the need for tester input and the dependence on existing data sources by automatically generating service compositions that can produce the required test data. In experimental analysis, our approach achieved success rates between 93% and 100% in generating realistic data, whereas state-of-the-art automated test data generation achieved only between 2% and 34%. The thesis addresses cost concerns at the test data generation level by enabling data source selection in ScTDG. Source selection in ScTDG has many dimensions, such as cost, reliability and availability. This thesis formulates the problem as an optimisation problem and presents a multi-objective characterisation of service selection in ScTDG, aiming to reduce the cost of test data generation. A cost-aware Pareto optimal test suite minimisation approach addressing testing cost concerns during test execution is also presented. The approach adapts traditional multi-objective minimisation approaches to the ScST domain by formulating ScST concerns such as invocation cost and test case reliability. In experimental analysis, the approach achieved reductions of between 69% and 98.6% in the monetary cost of service invocations during testing.
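    As a hedged sketch of the cost-aware Pareto formulation (names, costs and entities are invented, and this is not the thesis's implementation): each candidate subset of the suite is scored by the coverage it achieves and its total invocation cost, and only non-dominated subsets are kept as minimisation candidates.

        from itertools import combinations

        # Toy test cases: (covered entities, monetary invocation cost) - invented data.
        tests = {"t1": ({"a", "b"}, 0.4), "t2": ({"b", "c"}, 1.2),
                 "t3": ({"c"}, 0.1), "t4": ({"a", "c", "d"}, 2.0)}

        def score(subset):
            covered = set().union(*(tests[t][0] for t in subset))
            cost = sum(tests[t][1] for t in subset)
            return len(covered), cost   # maximise coverage, minimise cost

        def dominates(p, q):
            """p dominates q if it covers at least as much for no more cost."""
            return p[0] >= q[0] and p[1] <= q[1] and p != q

        candidates = [frozenset(c) for r in range(1, len(tests) + 1)
                      for c in combinations(tests, r)]
        scored = {c: score(c) for c in candidates}
        pareto = [c for c in candidates
                  if not any(dominates(scored[o], scored[c]) for o in candidates)]
        for c in sorted(pareto, key=lambda c: scored[c][1]):
            print(sorted(c), scored[c])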

    Integration framework for the strategic model of collaboration and spatial data mining on the Web (Framework de integração para o modelo estratégico de colaboração e mineração de dados espaciais na WEB)

    Doctoral thesis - Universidade Federal de Santa Catarina, Centro Tecnológico, Programa de Pós-Graduação em Engenharia Civil, Florianópolis, 2011. After surveying the situation of several Brazilian municipalities with respect to the production and processing of spatial data, a lack of infrastructure and of information was detected and, as a consequence, an absence of collaborative mechanisms with data mining support for spatial analysis. The difficulties grow with the spread of heterogeneous spatial data structures, such as CAD/GIS standards produced by the rapid advance of information technologies; implementing an interoperable infrastructure is a real challenge and the focus of much discussion. Access to these data over the Internet, and the problems that arise when exchanging them, are directly related to the particular nature of each adopted standard, so the standards must be analysed and adapted for collaboration. The working hypothesis is to intensify the interoperability between spatial data and the integration of systems, making it possible to establish communication channels for a collaborative environment aimed at potential cooperative actions. From this, the research investigates the relevant aspects that influence project engineering, leading to the development of a prototype named OpenCGFW (Collaborative Geospatial Framework Web) for the recognition of structures, integration, manipulation and collaboration, in line with the efforts of INDE, OGC and W3C. Studies and reviews of subjects directly related to interoperability were carried out first, together with topics concerning the storage, processing and computational collaboration of geographic data produced by different public institutions. To build the framework, the MCDA-C method (Multicriteria Decision Aid - Constructivist) was applied to identify the fundamental and elementary aspects. The work also describes the results obtained in implementing the stages of a design pattern to support the activities and the evaluation of free geo-solutions. The discussion presents results from experiments and web-based digital map applications that integrate several distributed databases with the multipurpose technical cadastre, using the main spatial data mining techniques. Finally, the work discusses the hypothesis and the contribution of the research, addressing regional characteristics in particular and seeking to contribute to the country's technological advancement by intensifying the use of open standards and free geotechnologies in collaboration and knowledge management.