13,406 research outputs found

    An overview of decision table literature 1982-1995.

    Get PDF
    This report gives an overview of the literature on decision tables over the past 15 years. As much as possible, for each reference, an author-supplied abstract, a number of keywords and a classification are provided. In some cases, our own comments are added. The purpose of these comments is to show where, how and why decision tables are used. The literature is classified according to application area, theoretical versus practical character, year of publication, country of origin (not necessarily country of publication) and the language of the document. After a description of the scope of the review, classification results and the classification by topic are presented. The main body of the paper is the ordered list of publications with abstract, classification and comments.

    MuCIGREF: multiple computer-interpretable guideline representation and execution framework for managing multimorbidity care

    Get PDF
    Clinical Practice Guidelines (CPGs) supply evidence-based recommendations to healthcare professionals (HCPs) for the care of patients. Their use in clinical practice has many benefits for patients, HCPs and treating medical centres, such as enhancing the quality of care and reducing unwanted care variations. However, there are many challenges limiting their implementation. CPGs predominantly consider a specific disease; only a few of them address multimorbidity (i.e. the presence of two or more health conditions in an individual), and they are not able to adapt to dynamic changes in patient health conditions. The manual management of guideline recommendations is also challenging, since recommendations may adversely interact with each other due to their competing targets and/or may be duplicated when several of them are applied concurrently to a multimorbid patient. This may result in undesired outcomes such as severe disability, increased hospitalisation costs and many others. Formalisation of CPGs into a Computer Interpretable Guideline (CIG) format allows the guidelines to be interpreted and processed by computer applications, such as Clinical Decision Support Systems (CDSS), enabling automated support to manage the limitations of guidelines. This thesis introduces a new approach to the problem of combining multiple concurrently implemented CIGs and their interrelations to manage multimorbidity care. MuCIGREF (Multiple Computer-Interpretable Guideline Representation and Execution Framework) is proposed, whose specific objectives are to present (1) a novel multiple-CIG representation language, MuCRL, in which a generic ontology is developed to represent the knowledge elements of CPGs and their interrelations and to create the multimorbidity-related associations between them. A systematic literature review is conducted to discover CPG representation requirements and gaps in multimorbidity care management. The ontology is built based on a synthesis of well-known ontology-building lifecycle methodologies and is then transformed into a metamodel to support the CIG execution phase; and (2) a novel real-time multiple-CIG execution engine, MuCEE, in which CIG models are dynamically combined to generate consistent and personalised care plans for multimorbid patients. MuCEE comprises three modules: (i) the CIG acquisition module, which transfers CIGs to the personal care plan based on the patient’s health conditions and supplies CIG version control; (ii) the parallel CIG execution module, which combines concurrently implemented CIGs by performing concurrency management, time-based synchronisation (e.g., multi-activity merging), modification, and time-based optimisation of clinical activities; and (iii) the CIG verification module, which checks for missing information and inconsistencies to support the CIG execution phases. Rule-based execution algorithms are presented for each module. Afterwards, a set of verification and validation analyses is performed, involving real-world multimorbidity case studies and comparative analyses with existing works. The results show that the proposed framework can combine multiple CIGs and dynamically merge, optimise and modify their clinical activities using patient data.
This framework can be used to support HCPs in a CDSS setting to generate unified and personalised care recommendations for multimorbid patients, merging multiple guideline actions and eliminating care duplications to maintain patient safety, while supplying optimised health resource management, which may improve operational and cost efficiency in real-world cases as well.
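    As a rough illustration of the kind of activity merging described above, here is a minimal Python sketch. It is not MuCEE's actual algorithm; the Activity class, the merge_plans function and the example plans are invented for this example. It combines two concurrently applicable guideline plans and keeps a single copy of a duplicated action:

```python
# Hypothetical illustration (not MuCEE's algorithm): merging clinical
# activities from two concurrently executed guideline plans and removing
# duplicated actions, as a multimorbid care plan would require.
from dataclasses import dataclass

@dataclass(frozen=True)
class Activity:
    action: str      # e.g. "measure blood pressure"
    target: str      # the condition the activity addresses

def merge_plans(plan_a, plan_b):
    """Concatenate two guideline plans, keeping one copy of duplicated actions."""
    merged, seen_actions = [], set()
    for activity in plan_a + plan_b:
        if activity.action not in seen_actions:
            merged.append(activity)
            seen_actions.add(activity.action)
    return merged

htn = [Activity("measure blood pressure", "hypertension"),
       Activity("prescribe ACE inhibitor", "hypertension")]
dm  = [Activity("measure blood pressure", "diabetes"),
       Activity("measure HbA1c", "diabetes")]
print(merge_plans(htn, dm))  # blood pressure is measured once, not twice
```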

    Architecting specifications for test case generation

    Get PDF
    The Specification and Description Language (SDL), together with its associated tool sets, can be used for the generation of Tree and Tabular Combined Notation (TTCN) test cases. Surprisingly, little documentation exists on the optimal way to specify systems so that they can best be used for the generation of tests. This paper elaborates on the different tool-supported approaches that can be taken for test case generation and highlights their advantages and disadvantages. A rule-based SDL specification style is then presented that facilitates the automatic generation of tests.

    Correctness of services and their composition

    Get PDF
    We study the correctness of services and their composition and investigate how the design of correct service compositions can be systematically supported. We thereby focus on the communication protocol of the service, approach these questions using formal methods, and make contributions to three scenarios of SOC.

    Testing and test-driven development of conceptual schemas

    Get PDF
    The traditional focus for Information Systems (IS) quality assurance relies on the evaluation of its implementation. However, the quality of an IS can be largely determined in the first stages of its development. Several studies reveal that more than half the errors that occur during systems development are requirements errors. A requirements error is defined as a mismatch between requirements specification and stakeholders’ needs and expectations. Conceptual modeling is an essential activity in requirements engineering aimed at developing the conceptual schema of an IS. The conceptual schema is the general knowledge that an IS needs to know in order to perform its functions. A conceptual schema specification has semantic quality when it is valid and complete. Validity means that the schema is correct (the knowledge it defines is true for the domain) and relevant (the knowledge it defines is necessary for the system). Completeness means that the conceptual schema includes all relevant knowledge. The validation of a conceptual schema pursues the detection of requirements errors in order to improve its semantic quality. Conceptual schema validation is still a critical challenge in requirements engineering. In this work we contribute to this challenge, taking into account that, since conceptual schemas of IS can be specified in executable artifacts, they can be tested. In this context, the main contributions of this thesis are (1) an approach to test conceptual schemas of information systems, and (2) a novel method for the incremental development of conceptual schemas supported by continuous test-driven validation. As far as we know, this is the first work that proposes and implements an environment for automated testing of UML/OCL conceptual schemas, and the first work that explores the use of test-driven approaches in conceptual modeling. The testing of conceptual schemas may be an important and practical means for their validation. It allows checking correctness and completeness according to stakeholders’ needs and expectations. Moreover, in conjunction with the automatic check of basic test adequacy criteria, we can also analyze the relevance of the elements defined in the schema. The testing environment we propose requires a specialized language for writing tests of conceptual schemas. We defined the Conceptual Schema Testing Language (CSTL), which may be used to specify automated tests of executable schemas specified in UML/OCL. We also describe a prototype implementation of a test processor that makes the approach feasible in practice. The conceptual schema testing approach supports test-last validation of conceptual schemas, but it also makes sense to test incomplete conceptual schemas while they are developed. This fact lays the groundwork for Test-Driven Conceptual Modeling (TDCM), which is our second main contribution. TDCM is a novel conceptual modeling method based on the main principles of Test-Driven Development (TDD), an extreme programming method in which a software system is developed in short iterations driven by tests. We have applied the method in several case studies, in the context of Design Research, which is the general research framework we adopted.
Finally, we also describe an integration approach of TDCM into a broad set of software development methodologies, including the Unified Process development methodology, MDD-based approaches, storytest-driven agile methods, and goal- and scenario-oriented requirements engineering methods.
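    To make the idea of testing an executable conceptual schema concrete, the following is a hedged sketch written as a plain Python unit test rather than in actual CSTL syntax; the Order class and its invariant are invented for illustration. It shows the kind of invariant check such a test would automate against an instantiated schema:

```python
# Sketch only, not CSTL: a conceptual-schema test that instantiates schema
# objects and checks an OCL-like invariant ("an order must contain at least
# one line"), the kind of correctness check the testing approach automates.
import unittest

class Order:
    def __init__(self):
        self.lines = []
    def add_line(self, product, qty):
        self.lines.append((product, qty))
    def invariant_has_lines(self):
        return len(self.lines) >= 1

class OrderSchemaTest(unittest.TestCase):
    def test_invariant_satisfied_once_a_line_is_added(self):
        order = Order()
        self.assertFalse(order.invariant_has_lines())  # freshly created order violates the invariant
        order.add_line("book", 2)
        self.assertTrue(order.invariant_has_lines())   # invariant holds after adding a line

if __name__ == "__main__":
    unittest.main()
```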

    On Enabling Integrated Process Compliance with Semantic Constraints in Process Management Systems

    Get PDF
    Key to broad use of process management systems (PrMS) in practice is their ability to foster and ease the implementation, execution, monitoring, and adaptation of business processes while still being able to ensure robust and error-free process enactment. To meet these demands, a variety of mechanisms have been developed to prevent errors at the structural level (e.g., deadlocks). In many application domains, however, processes often have to comply with business-level rules and policies (i.e., semantic constraints) as well. Hence, to ensure error-free executions at the semantic level, PrMS need certain control mechanisms for validating and ensuring compliance with semantic constraints. In this paper, we discuss fundamental requirements for comprehensive support of semantic constraints in PrMS. Moreover, we provide a survey of existing approaches and discuss to what extent they are able to meet the requirements and which challenges still have to be tackled. In order to tackle the particular challenge of providing integrated compliance support over the process lifecycle, we introduce the SeaFlows framework. The framework introduces a behavioural-level view on processes which serves as a conceptual process representation for constraint specification approaches. Further, it provides general compliance criteria for static compliance validation as well as for dealing with process changes. Altogether, the SeaFlows framework can serve as a formal basis for realizing integrated support of semantic constraints in PrMS.
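    As a rough illustration of what a semantic constraint check can look like, the sketch below validates a simple precedence rule over an execution trace. It is not the SeaFlows formalism; the satisfies_precedence function and the example trace are invented for this example:

```python
# Illustrative sketch of a business-level (semantic) constraint check:
# "activity A must precede activity B" evaluated against an execution trace.
def satisfies_precedence(trace, first, then):
    """True if every occurrence of `then` is preceded by at least one `first`."""
    seen_first = False
    for activity in trace:
        if activity == first:
            seen_first = True
        elif activity == then and not seen_first:
            return False
    return True

trace = ["admit patient", "inform patient", "perform surgery", "discharge"]
print(satisfies_precedence(trace, "inform patient", "perform surgery"))  # True
```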

    Study and Observation of the Variation of Accuracies of KNN, SVM, LMNN, ENN Algorithms on Eleven Different Datasets from UCI Machine Learning Repository

    Full text link
    Machine learning enables computers to learn from data without being explicitly programmed [1, 2]. Machine learning can be classified into supervised and unsupervised learning. In supervised learning, computers learn a function that maps an input to an output based on training input-output pairs [3]. Among the most efficient and widely used supervised learning algorithms are K-Nearest Neighbors (KNN), Support Vector Machine (SVM), Large Margin Nearest Neighbor (LMNN), and Extended Nearest Neighbor (ENN). The main contribution of this paper is to implement these learning algorithms on eleven different datasets from the UCI machine learning repository and to observe the variation in accuracy for each algorithm across all datasets. Analyzing these accuracies gives a brief idea of the relationship between the machine learning algorithms and the dimensionality of the data. All the algorithms are developed in Matlab. Based on these accuracy observations, a comparison can be made among KNN, SVM, LMNN, and ENN regarding their performance on each dataset.
    Comment: To be published in the 4th IEEE International Conference on Electrical Engineering and Information & Communication Technology (iCEEiCT 2018)
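    For readers who want to reproduce a slice of such a comparison, here is a minimal Python sketch using scikit-learn rather than the authors' Matlab code; it compares only KNN and SVM on one UCI dataset (Iris), since LMNN and ENN are not available in scikit-learn:

```python
# Illustrative sketch (not the authors' implementation): measuring KNN and SVM
# test accuracy on the Iris dataset from the UCI repository.
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.metrics import accuracy_score

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

for name, clf in [("KNN", KNeighborsClassifier(n_neighbors=5)),
                  ("SVM", SVC(kernel="rbf", gamma="scale"))]:
    clf.fit(X_train, y_train)
    acc = accuracy_score(y_test, clf.predict(X_test))
    print(f"{name} accuracy: {acc:.3f}")
```

    Running the same loop over the remaining datasets and algorithms follows this pattern and yields the per-dataset accuracy comparison the paper reports.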

    Formal Analysis of Network Protocols

    Get PDF
    Today’s Internet is becoming increasingly complex and fragile. Current performance-centric techniques for network analysis and runtime verification have become inadequate for the development of robust networks. To cope with these challenges, there is growing interest in the use of formal analysis techniques to reason about network protocol correctness throughout the network development cycle. This talk surveys recent work on the use of formal analysis techniques to aid in the design, implementation, and analysis of network protocols. We first present a general framework that covers a majority of existing formal analysis techniques on both the control and routing planes of networks, and present a classification and taxonomy of techniques according to the proposed framework. Using four representative case studies (Metarouting, rcc, axiomatic formulation, and Alloy-based analysis), we discuss various aspects of formal network analysis, including formal specification, formal verification, and system validation. Their strengths and limitations are evaluated and compared in detail.
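    As a toy illustration of what formal verification of a protocol involves, the Python sketch below (not taken from the talk; the transition table and the deadlock_free function are invented for this example) exhaustively explores the state space of a two-party request/response protocol and checks a simple correctness property, absence of deadlock:

```python
# Minimal sketch of explicit-state exploration: enumerate reachable states of a
# toy request/response protocol and verify that no reachable state is stuck.
from collections import deque

# States are (client, server) pairs; each transition models one protocol step.
TRANSITIONS = {
    ("idle", "idle"): [("waiting", "idle")],         # client sends request
    ("waiting", "idle"): [("waiting", "replying")],  # server receives request
    ("waiting", "replying"): [("idle", "idle")],     # server replies, exchange completes
}

def deadlock_free(initial=("idle", "idle")):
    """Every reachable state must have a successor or be the quiescent initial state."""
    seen, queue = {initial}, deque([initial])
    while queue:
        state = queue.popleft()
        successors = TRANSITIONS.get(state, [])
        if not successors and state != initial:
            return False  # reachable state with no outgoing transition: deadlock
        for nxt in successors:
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return True

print(deadlock_free())  # True for this toy protocol
```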