
    On the quality of Web Services.

    Web Services (WSs) are gaining increasing attention as programming components, and so is their quality. WSs offer many benefits, such as assured interoperability and reusability. Conversely, they introduce a number of quality challenges, seen from the perspectives of two different stakeholders: (1) the developer/provider of WSs and (2) the consumer of WSs. Developers are usually concerned with the correctness of a WS's functionality, which can be assessed by functional testing. Consumers are usually concerned with the reliability of the WSs they depend on (in addition to other qualities). They need to know whether the WSs are available (i.e., up and running), whether they are accessible (i.e., they actually accept requests) while available, and whether they successfully deliver responses to incoming requests. Availability, Accessibility, and Successability of WSs are directly related to WS reliability. Assessing these three factors via testing is usually only feasible at late stages of the development life-cycle. If they can be predicted early during development, they can provide valuable information that may positively influence the engineering of WSs with regard to their quality. In this thesis we focus on assessing the quality of WSs via testing and via prediction. Testing of WSs is addressed by an extensive systematic literature review that focuses on a special type of WS, the semantic WS. The main objective of the review is to capture the current state of the art of functional testing of semantic WSs and to identify possible approaches for deriving functional test cases from their requirement specifications. The review follows a predefined procedure that involves automatically searching five well-known digital libraries. After applying the selection criteria to the search results, a total of 34 studies were identified as relevant. The required information was extracted from the studies, synthesized, and summarized. The results of the systematic literature review showed that it is possible to derive test cases from the requirement specifications of semantic WSs based on the different testing approaches identified in the primary studies. In more than half of the identified approaches, test cases are derived from transformed specification models; Petri Nets (and their derivatives) are the most widely used transformation. To derive test cases, different techniques are applied to the specification models, with model checking being largely used for this purpose. Prediction of Availability, Accessibility, and Successability is addressed by a correlational study focused on identifying possible relations between these three quality attributes and other internal quality measures (e.g., cyclomatic complexity) that may allow building statistically significant predictive models for the three attributes. A total of 34 students interacted freely with 20 pre-selected WSs while internal and external quality measures were collected using a data collection framework designed and implemented specifically for this purpose. The collected data were then analyzed using different statistical approaches. The correlational study confirmed that it is possible to build statistically significant predictive models for Accessibility and Successability. A very large number of significant models were built using two different approaches, namely binary logistic regression and ordinal logistic regression.

Many significant predictive models were selected from the identified models based on criteria that take into consideration the predictive power and stability of the models. The selected models were validated using the bootstrap validation technique. The validation showed that only two of the selected models are well calibrated and can be expected to maintain their predictive power when applied to a future dataset; these two models predict Accessibility based on the number of weighted methods (WM) and the number of lines of code (LOC), respectively. The approach and findings presented in this work for building accurate predictive models for the WS qualities Availability, Accessibility, and Successability may offer researchers and practitioners an opportunity to examine and build similar predictive models for other WS qualities, thus allowing early prediction of the targeted qualities and hence early adjustments during development to satisfy any requirements imposed on the WSs with regard to the predicted qualities. Early prediction of WS qualities may help build trust in WSs and reduce development costs, hence increasing their adoption.
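    As a hedged illustration of the kind of predictive model described above, the sketch below fits a binary logistic regression that predicts Accessibility from lines of code (LOC), one of the two measures retained after validation. The data and feature values are invented placeholders, not the study's dataset, and scikit-learn is used here purely for illustration.

```python
# Illustrative sketch only: a binary logistic regression predicting whether a
# web service is accessible (1) or not (0) from an internal measure (LOC).
# The measurements below are invented placeholders.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical data: lines of code per service and observed accessibility.
loc = np.array([[120], [450], [800], [1500], [300], [2200], [90], [1100]])
accessible = np.array([1, 1, 0, 0, 1, 0, 1, 0])

model = LogisticRegression()
model.fit(loc, accessible)

# Predicted probability of accessibility for a new service with 600 LOC.
print(model.predict_proba([[600]])[0][1])
```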

    Declarative techniques for modeling and mining business processes.

    Organisations today face an apparent contradiction. Although they have, on the one hand, invested heavily in information systems that automate their business processes, these systems seem to leave them less able to gain a good insight into how those processes actually run. Poor insight into business processes threatens both their flexibility and their compliance. Flexibility is important because continuously changing market conditions force organisations to adapt their business processes quickly and smoothly. In addition, organisations must be able to guarantee that their operations comply with the laws, guidelines, and standards imposed on them. Scandals such as the recently uncovered fraud at the French bank Société Générale demonstrate the importance of compliance and flexibility. By producing false supporting documents and circumventing fixed control points, a single trader was able to turn a risk-free arbitrage business on price differences in futures into a risky, speculative trade in these financial derivatives. The unhedged, unauthorised positions remained hidden for a long time because of deficient internal controls and shortcomings in IT security and access control. To prevent such fraud in the future, it is first of all necessary to gain insight into the operational processes of the bank and the related control processes. In this text we discuss two approaches that can be used to increase insight into business processes: process modeling and process mining. The research aimed to develop techniques for process modeling and process mining that are declarative. Process modeling is the manual construction of a formal model that describes a relevant aspect of a business process, based on information largely obtained from interviews. Process models must provide adequate information about the business processes to be of use in their design, implementation, execution, and analysis. The challenge is to develop new process-modeling languages that provide adequate information to achieve this goal. Declarative process languages make the information about business concerns explicit. We characterise and motivate declarative process languages, and we examine a number of existing techniques. Furthermore, we introduce a unifying framework for declarative process modeling within which existing process languages can be positioned. This framework is called the EM-BrA²CE framework, which stands for 'Enterprise Modeling using Business Rules, Agents, Activities, Concepts and Events'. It consists of a formal ontology and a formal execution model, and it lays the ontological foundation for the languages and techniques developed later in the dissertation. Process mining is the automatic construction of a process model from the so-called event logs of information systems. Today, a great many processes are recorded in event logs by information systems. An event log records, in chronological order, who performed which activity and when. The analysis of event logs can yield an accurate picture of what actually happens within an organisation.

To be usable, the mined process models must satisfy criteria such as accuracy, comprehensibility, and justifiability. Existing process-mining techniques focus mainly on the first criterion: accuracy. Declarative process-mining techniques also address the comprehensibility and justifiability of the mined models. Declarative process-mining techniques are more comprehensible because they attempt to represent process models by means of declarative representations. Moreover, declarative techniques increase the justifiability of the mined models, because they allow the prior knowledge, inductive bias, and language bias of a learning algorithm to be configured. Inductive logic programming (ILP) is a learning technique that is inherently declarative. In the text we show how process mining can be formulated as an ILP classification problem that learns the logical conditions under which an event takes place (a positive event) or does not take place (a negative event). Many event logs naturally contain no negative events indicating that a particular activity could not take place. To address this problem, we describe a technique for generating artificial negative events, called AGNEs (process discovery by Artificially Generated Negative Events). The generation of artificial negative events amounts to a configurable inductive bias. The AGNEs technique has been implemented as a mining plugin in the ProM framework. By formulating process discovery as a first-order classification problem on event logs with artificial negative events, the traditional metrics for quantifying precision and recall can be applied to quantify the precision and recall of a process model with respect to an event log. In the text we propose two new metrics. These new metrics, in combination with existing metrics, were used for an extensive evaluation of the AGNEs process discovery technique in both an experimental and a real-life setting.
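    To make the idea of artificially generated negative events concrete, here is a minimal sketch under simplifying assumptions; it is not the AGNEs algorithm itself (which uses a configurable inductive bias and is implemented as a ProM plugin), but it shows the core intuition: an activity that never occurs after a given trace prefix anywhere in the log is recorded as a negative event for that position. The example log is invented.

```python
# Minimal sketch (not the actual AGNEs algorithm): for each prefix of a trace,
# any activity in the log's alphabet that never follows that exact prefix
# anywhere in the log is recorded as an artificial negative event.
log = [
    ["register", "check", "approve"],
    ["register", "check", "reject"],
]
alphabet = {a for trace in log for a in trace}

def negative_events(log, alphabet):
    # Map each observed prefix to the set of activities seen directly after it.
    prefixes = {}
    for trace in log:
        for i in range(len(trace)):
            prefixes.setdefault(tuple(trace[:i]), set()).add(trace[i])
    # Every activity never observed after a prefix becomes a negative event.
    negatives = []
    for prefix, observed in prefixes.items():
        for activity in alphabet - observed:
            negatives.append((prefix, activity))
    return negatives

print(negative_events(log, alphabet))
```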

    Web services choreography testing using semantic service description

    Web services have become popular due to their ability to integrate with and interoperate across heterogeneous applications. Several web services can be combined into a single application to meet the needs of users. In the course of web service selection, a candidate web service needs to conform to the behaviour of its client, and one way of ensuring this conformity is by testing the interaction between the web service and its user. Existing web service test approaches mainly focus on syntax-based web service descriptions, whilst the semantic-based solutions mostly address composite process flow testing. The aim of this research is to provide an automated testing approach to support service selection during automatic web service composition using the Web Service Modeling Ontology (WSMO). The research began with understanding and analysing the existing test generation approaches for web services. Second, the weaknesses of the existing approaches were identified and addressed by utilising the choreography transition rules of WSMO to generate a Finite State Machine (FSM); the FSM was then used to generate the working test cases. Third, a technique to generate an FSM from an Abstract State Machine (ASM) was adapted for use with WSMO. This thesis finally proposes a new testing model called Choreography to Finite State Machine (C2FSM) to support service selection during automatic web service composition, together with new algorithms to automatically generate test cases from the semantic description (the WSMO choreography description). The proposed approach was evaluated using the Amazon E-Commerce Web Service WSMO description. The quality of the test cases generated using the proposed approach was measured by assessing their mutation adequacy score: a total of 115 mutants were created based on 7 mutant operators, and a mutation adequacy score of 0.713 was obtained. The experimental validation demonstrated a significant result in the sense that C2FSM provides an efficient and feasible solution. The results of this research could assist service consumer agents in verifying the behaviour of a web service when selecting appropriate services for web service composition.
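    The following sketch illustrates the general FSM-based step described above: once choreography rules have been translated into a finite state machine, test sequences can be derived by covering every transition at least once. The states, messages, and traversal strategy are invented placeholders, not the thesis's C2FSM algorithms.

```python
# Illustrative sketch: derive test input sequences from an FSM so that every
# transition is exercised at least once. The FSM below is a toy example.
from collections import deque

fsm = {  # state -> {input_message: next_state}
    "start":  {"searchItem": "found"},
    "found":  {"addToCart": "carted"},
    "carted": {"checkout": "done", "addToCart": "carted"},
    "done":   {},
}

def transition_cover(fsm, initial="start"):
    """Return input sequences that together traverse every transition."""
    tests, seen = [], set()
    queue = deque([(initial, [])])
    while queue:
        state, path = queue.popleft()
        for msg, nxt in fsm[state].items():
            if (state, msg) not in seen:
                seen.add((state, msg))
                tests.append(path + [msg])     # one test case per new transition
                queue.append((nxt, path + [msg]))
    return tests

print(transition_cover(fsm))
```

    A mutation adequacy score like the 0.713 reported above is simply the fraction of generated mutants the derived test cases detect (roughly 82 of the 115 mutants in this case).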

    Parameter dependencies for reusable performance specifications of software components

    To avoid design-related performance problems, model-driven performance prediction methods analyse the response times, throughputs, and resource utilizations of software architectures before and during implementation. This thesis proposes new modeling languages, and corresponding model transformations, which allow a reusable description of how the performance of software components depends on their usage profile. Predictions based on these new methods can support performance-related design decisions.
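    A toy example of the underlying idea: a reusable performance specification expresses a component's resource demand as a function of usage-profile parameters, so the same specification can be re-evaluated for different usage contexts. The function and all numbers below are invented for illustration and do not reflect the thesis's modeling languages.

```python
# Sketch: a component's response time parameterised over its usage profile,
# so one specification can be reused across deployment contexts.
def predicted_response_time(n_items: int, cpu_time_per_item: float,
                            fixed_overhead: float) -> float:
    """Response time as a function of the usage-profile parameter n_items."""
    return fixed_overhead + n_items * cpu_time_per_item

# Same component specification, two different usage profiles (seconds):
print(predicted_response_time(n_items=10,  cpu_time_per_item=0.002, fixed_overhead=0.05))
print(predicted_response_time(n_items=500, cpu_time_per_item=0.002, fixed_overhead=0.05))
```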

    Theory and practice of the ternary relations model of information management

    This thesis proposes a new, highly generalised and fundamental information-modelling framework called the TRM (Ternary Relations Model). The TRM was designed as a model for converging a number of differing paradigms of information management, some of which are quite isolated. These include areas such as hypertext navigation, relational databases, semi-structured databases, the Semantic Web, ZigZag, and workflow modelling. While many related works model linking as the connection of two ends, the TRM adds a third element, thereby enriching links with associative meanings. The TRM is a formal description of a technique that establishes bi-directional and dynamic node-link structures in which each link is an ordered triple of three other nodes. The key features that make the TRM distinct from other triple-based models (such as RDF) are the integration of bi-directionality, functional links, and simplicity in the definition and element hierarchy. There are two useful applications of the TRM. Firstly, it may be used as a tool for the analysis of information models, to elucidate connections and parallels. Secondly, it may be used as a “construction kit” to build new paradigms and/or applications in information management. The TRM may be used to provide a substrate for building diverse systems, such as adaptive hypertext, schemaless databases, query languages, hyperlink models, and workflow management systems. It is, however, highly generalised and is by no means limited to these purposes.
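    A minimal sketch of the core structure described above, assuming a plain in-memory store: each link is an ordered triple of nodes whose third element carries the link's associative meaning, and every node indexes the triples it participates in, giving bi-directional navigation. This is an illustration, not the TRM's formal definition.

```python
# Sketch of a ternary-relations store: links are ordered triples of nodes,
# indexed on every node so they can be traversed in either direction.
from collections import defaultdict

class TRMStore:
    def __init__(self):
        self.links = set()              # ordered triples (source, meaning, target)
        self.by_node = defaultdict(set)

    def add_link(self, source, meaning, target):
        triple = (source, meaning, target)
        self.links.add(triple)
        for node in triple:             # index every node for bi-directional lookup
            self.by_node[node].add(triple)

    def links_of(self, node):
        """All triples touching a node, regardless of its role."""
        return self.by_node[node]

store = TRMStore()
store.add_link("ChapterOne", "precedes", "ChapterTwo")
print(store.links_of("ChapterTwo"))    # reachable from the target end as well
```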

    The Mechanical Properties of Carbon Fibre With Glass Fibre Hybrid Reinforced Plastics

    Fibre composite hybrid materials are generally plastics reinforced with two different fibre species. The combination of these three materials (in this thesis: carbon fibres, glass fibres, and polyester resin) allows a balance to be achieved between the properties of the two monofibre composites. Over the fifteen years since the introduction of continuous carbon fibre as a reinforcement, there has been considerable speculation about the "hybrid effect", a synergistic strengthening of reinforced plastics with two fibres when compared with the strength predicted from a weighted average of the component composites. A new equation is presented which predicts the extent of the hybrid effect. Experiments with a variety of carbon-glass hybrids were undertaken to examine the validity of the theory and the effect of the degree of inter-mixing of the fibres. The classification and quantification of the hybrid microstructures were examined with a view to cross-correlation of the intimacy of mixing and the strength. Mechanical tests were monitored with acoustic emission counting and acoustic emission amplitude distribution equipment. Some specimens were subjected to one thermal cycle to liquid nitrogen temperature prior to testing. Fracture surfaces were examined in the scanning electron microscope. Numerical analysis by finite element methods was attempted. A constant strain triangular element was used initially, but in the later analyses the PAFEC anisotropic isoparametric quadrilateral elements were used. The system was adapted so that a 1/√r singularity could be modelled, and post-processor software was written to allow nodal averaging of the stresses and the presentation of this data graphically as stress contour maps.
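    For readers unfamiliar with the baseline against which the hybrid effect is defined, the sketch below computes the weighted-average (rule-of-mixtures) strength prediction and reports the hybrid effect as the measured strength's deviation from it. The thesis's own predictive equation is not reproduced here, and all numbers are invented placeholders.

```python
# Sketch: the weighted-average baseline for a two-fibre hybrid, and the
# "hybrid effect" as the measured strength's deviation from that prediction.
def rule_of_mixtures(strength_a: float, frac_a: float, strength_b: float) -> float:
    """Weighted-average strength prediction from the two monofibre composites."""
    return frac_a * strength_a + (1.0 - frac_a) * strength_b

predicted = rule_of_mixtures(strength_a=1200.0, frac_a=0.4, strength_b=800.0)  # MPa
measured = 1010.0  # hypothetical test result, MPa
print(f"predicted {predicted:.0f} MPa, hybrid effect {measured - predicted:+.0f} MPa")
```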