
    Providing a Fuzzy System for Evaluating and Combining Services in the Service-Oriented Architecture

    Abstract. Today, service-oriented architecture is recognized as an effective way for organizations to select, evaluate, and combine services, key activities that take place in different phases of the service life cycle of the service-oriented architecture. Service evaluation is one of the key activities in implementing a successful service project. Our goal is to assess the appropriateness of the identified services and to select a service from a set of candidates using specific techniques based on client profiles. In this research, we investigate how fuzzy logic can be used to evaluate a set of suggested services and combine them. To align the research results with real-world values, actual data was used. In this paper we present a suitable combination method for this goal and, by testing the method against actual data, evaluate the efficiency of the proposed algorithm; we found that the algorithm achieves the highest accuracy in choosing the optimal combination of services.
    Keywords: fuzzy logic, service evaluation, service mixing, service-oriented architecture, service selection
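The abstract does not spell out the fuzzy evaluation step, so the following is only a minimal, hypothetical sketch of the general idea (all service names, attributes, membership parameters, and weights are invented, not taken from the paper): crisp quality ratings for each candidate service are fuzzified with a triangular membership function for "good service", aggregated with attribute weights, and the highest-scoring service is selected.

```python
def tri(x, a, b, c):
    """Triangular membership function rising on [a, b], falling on [b, c]."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def fuzzy_score(ratings, weights):
    """Weighted degree of membership in the fuzzy set 'good service'.

    ratings: attribute -> crisp rating in [0, 100]
    weights: attribute -> importance weight
    """
    total = sum(weights.values())
    return sum(
        weights[attr] * tri(ratings[attr], 40, 80, 100)  # 'good' peaks at 80
        for attr in ratings
    ) / total

# Invented client profile and candidate services, for illustration only.
services = {
    "svc_a": {"availability": 95, "response": 70, "cost": 60},
    "svc_b": {"availability": 85, "response": 90, "cost": 75},
}
weights = {"availability": 0.5, "response": 0.3, "cost": 0.2}

best = max(services, key=lambda s: fuzzy_score(services[s], weights))
```

In a real service-composition setting this per-service score would feed into the combination step, e.g. ranking candidate compositions by the aggregated scores of their member services.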

    Spatio-temporal architecture-based framework for testing services in the cloud

    Increasingly, various services are deployed and orchestrated in the cloud to form global, large-scale systems. The global distribution, high complexity, and physical separation pose new challenges to the quality assurance of such complex services. One major challenge is that they are intricately connected with the spatial and temporal characteristics of the domains they support. In this paper, we present our vision of integrating spatial and temporal logic into the system design and quality maintenance of complex services in the cloud. We suggest that new paradigms should be proposed for designing software architectures that explicitly embed the spatial and temporal properties of cloud services, and that new testing methodologies should be developed based on architectures that include spatio-temporal aspects. We also discuss several potential directions for relevant research.

    Reliability Analysis of Component-Based Systems with Multiple Failure Modes

    This paper presents a novel approach to the reliability modeling and analysis of a component-based system that allows dealing with multiple failure modes and studying the error propagation among components. The proposed model permits specifying each component's tendency to produce, propagate, transform, or mask different failure modes. These component-level reliability specifications, together with information about the system's global structure, allow precise estimation of reliability properties by means of analytical closed-form formulas, probabilistic model checking, or simulation methods. To support the rapid identification of components that could heavily affect system reliability, we also show how our modeling approach easily supports the automated estimation of the system's sensitivity to variations in the reliability properties of its components. The results of this analysis allow system designers and developers to identify critical components where additional improvement effort is worthwhile.
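The notions of producing, propagating, transforming, and masking failure modes can be illustrated with a simple simulation. This is not the paper's model, only a sketch under invented assumptions: a two-component pipeline where component A may emit a "content" or "timing" failure, and component B masks, propagates, or transforms each incoming mode with given probabilities.

```python
import random

# All probabilities below are invented for illustration.
P_PRODUCE = {"ok": 0.95, "content": 0.03, "timing": 0.02}   # component A
P_HANDLE = {  # component B: incoming mode -> distribution over outgoing modes
    "ok":      {"ok": 0.99, "content": 0.01},   # B can also produce failures
    "content": {"ok": 0.30, "content": 0.70},   # 30% of content errors masked
    "timing":  {"timing": 0.80, "content": 0.20},  # timing may transform
}

def draw(dist):
    """Sample one outcome from a mode -> probability mapping."""
    r, acc = random.random(), 0.0
    for mode, p in dist.items():
        acc += p
        if r < acc:
            return mode
    return mode  # numerical fallback: last mode

def estimate_reliability(n=100_000, seed=1):
    """Monte Carlo estimate of P(system output is failure-free)."""
    random.seed(seed)
    failures = sum(draw(P_HANDLE[draw(P_PRODUCE)]) != "ok" for _ in range(n))
    return 1 - failures / n
```

For this two-stage chain the exact value is also available in closed form, P(ok) = 0.95·0.99 + 0.03·0.30 ≈ 0.9495, which is the kind of analytical formula the approach contrasts with simulation.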

    A Large-Scale Industrial Case Study on Architecture-Based Software Reliability Analysis

    Abstract—Architecture-based software reliability analysis methods should help software architects identify critical software components and quantify their influence on system reliability. Although researchers have proposed more than 20 methods in this area, empirical case studies applying these methods to large-scale industrial systems are rare, and the costs and benefits of these methods remain unknown. To this end, we have applied the Cheung method to the software architecture of an industrial control system from ABB consisting of more than 100 components organized in nine subsystems, with more than three million lines of code. We used the Littlewood/Verrall model to estimate subsystem failure rates and logging data to derive subsystem transition probabilities. We constructed a discrete-time Markov chain as an architectural model and conducted a sensitivity analysis. This paper summarizes our experiences and lessons learned. We found that architecture-based software reliability analysis is still difficult to apply and that more effective data collection techniques are required.
    Keywords: software reliability growth, software architecture, Markov processes
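The Cheung method mentioned here models the architecture as a discrete-time Markov chain over components: with component reliabilities R[i] and control-transfer probabilities P[i][j], the matrix Q = diag(R)·P captures failure-free transitions, and the probability of reaching the final component without failure is read off S = (I − Q)⁻¹. The values below are invented, so this is only a small three-component sketch of that computation, not the ABB study's model.

```python
import numpy as np

R = np.array([0.999, 0.995, 0.990])          # component reliabilities (invented)
P = np.array([[0.0, 0.7, 0.3],               # control-flow transition probabilities
              [0.0, 0.0, 1.0],
              [0.0, 0.0, 0.0]])              # component 2 is the terminal component

Q = np.diag(R) @ P                           # transitions that do not fail
S = np.linalg.inv(np.eye(3) - Q)             # expected failure-free visit matrix
R_sys = S[0, 2] * R[2]                       # start at 0, succeed through 2
```

A sensitivity analysis like the one in the paper can then be sketched by perturbing one entry of R and recomputing R_sys, which shows which component's reliability the system result depends on most.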

    Strategy for scalable scenarios modeling and calculation in early software reliability engineering

    System scenarios derived from the requirements specification play an important role in early software reliability engineering. A great deal of research effort has been devoted to predicting the reliability of a system at early design stages, but existing approaches are unable to handle the scalability and reliability calculation of scenarios for large systems. This paper proposes modeling scenarios in a scalable way using a scenario language that describes system scenarios in a compact and concise manner, which can result in a reduced number of scenarios. Furthermore, it proposes a calculation strategy to achieve better traceability of scenarios and avoid computational complexity. The scenarios are pragmatically modeled and translated to finite state machines, where each state machine represents the behaviour of a component instance within the scenario. The probability of failure of each component exhibited in the scenario is calculated separately based on the finite state machines. Finally, the reliability of the whole scenario is calculated from the components' behaviour models and their failure information using a modified mathematical formula. In this paper, an example from a case study of an automated railcar system is used to verify and validate the proposed strategy for scalable system modeling.
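The final combination step can be sketched as follows. This is a hypothetical simplification, not the paper's modified formula: once a failure probability has been derived for each component instance from its state machine, and assuming independent component failures, the scenario reliability is the product of the component reliabilities (the component names and probabilities below are invented).

```python
# Invented per-component failure probabilities, as if derived from the
# finite state machines of a railcar scenario.
failure_prob = {
    "railcar_ctrl": 0.002,
    "door_ctrl":    0.001,
    "comm_link":    0.005,
}

# Under the independence assumption, the scenario succeeds only if every
# participating component instance behaves correctly.
scenario_reliability = 1.0
for comp, pf in failure_prob.items():
    scenario_reliability *= (1.0 - pf)
```

The paper's modified formula presumably refines this by weighting each component's failure probability by its actual behaviour within the scenario rather than treating all components as independently and equally exposed.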

    Large scale software test data generation based on collective constraint and weighted combination method

    Software reliability testing means testing software to verify whether it meets its reliability requirements and to evaluate its reliability level. Statistically based software reliability testing generally includes three parts: building a usage model, generating test data, and testing. The construction of the software usage model should reflect the user's real usage as closely as possible. A huge number of test cases is required to satisfy the probability distribution of the actual usage situation; otherwise, the reliability test loses its original meaning.
In this paper, we first propose a new method of structuring the software usage model based on modules and a constraint-based heuristic method. Then we propose a method for test data generation that considers the combination and weight of the input data, which reduces the large number of possible combinations of input variables to a few representative ones and improves the practicability of the testing method. To verify the effectiveness of the proposed method, four groups of experiments were organized. The goodness-of-fit index (GFI) shows that the proposed method is closer to actual software use; we also found that it achieves better coverage when using Java Pathfinder to analyse the four sets of internal code coverage.
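The idea of cutting the full cross product of input values down to a few representative, weight-driven combinations can be sketched as follows. This is an assumed setup, not the paper's algorithm: each input value carries a usage-profile weight, a combination's weight is the product of its values' weights, and only the top-weighted combinations are kept as test data (the input domains and weights are invented).

```python
import itertools

# Invented usage profile: input -> {value: usage weight}.
inputs = {
    "browser": {"chrome": 0.6, "firefox": 0.3, "other": 0.1},
    "network": {"wifi": 0.7, "cellular": 0.3},
    "role":    {"user": 0.8, "admin": 0.2},
}

def top_combinations(inputs, k):
    """Return the k highest-weighted input combinations as (weight, combo)."""
    names = list(inputs)
    combos = []
    for values in itertools.product(*(inputs[n] for n in names)):
        w = 1.0
        for name, value in zip(names, values):
            w *= inputs[name][value]
        combos.append((w, dict(zip(names, values))))
    combos.sort(key=lambda c: c[0], reverse=True)
    return combos[:k]

# 3 * 2 * 2 = 12 possible combinations reduced to 4 representative test cases.
tests = top_combinations(inputs, k=4)
```

A constraint-based variant, closer in spirit to the paper's heuristic, would additionally filter out combinations forbidden by the usage model before ranking by weight.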