11 research outputs found

    Web ontology reasoning with logic databases [online]

    Get PDF

    An integrated environment for computer-aided control engineering

    Get PDF
    This thesis considers the construction of a system to support the total design cycle for control systems. This encompasses modelling of the plant to be controlled, specification of the final objectives or performance, design of the required controllers, and their implementation in hardware and software. The main contributions of this thesis are: the development of a model for CAD support of controller design, an evaluation of the software engineering aspects of CAD development, the development of an architecture to support a control system through its full design cycle, and the implementation of this architecture in a prototype package. The research undertakes a review of general design theory to develop a model for the computer-aided controller design process. Current state-of-the-art packages are evaluated against this model, highlighting their shortcomings, and current research to overcome these shortcomings is then reviewed. The software engineering aspects of the design of a CAD package are developed and the characteristics of CAD software are defined. An evaluation of Fortran, Pascal, C, C++, Ada, Lisp and Prolog as suitable languages for implementing a CAD package is made. Based on this, Ada was selected as the most suitable, mainly because of its encapsulation of many modern software engineering concepts. The architecture for a computer-aided control engineering (CACE) package is designed using an object-oriented design method. This architecture defines the requirements for a complete CACE package, including control-oriented data structures and schematic capture of plant models. The details of a prototype package written in Ada are given to provide detailed knowledge of the problems of implementing this architecture. Examples using this prototype package demonstrate the potential of a complete implementation of the architecture.

    The 1991 Goddard Conference on Space Applications of Artificial Intelligence

    Get PDF
    The purpose of this annual conference is to provide a forum in which current research and development directed at space applications of artificial intelligence can be presented and discussed. The papers in these proceedings fall into the following areas: planning and scheduling; fault monitoring, diagnosis and recovery; machine vision; robotics; system development; information management; knowledge acquisition and representation; distributed systems; tools; neural networks; and miscellaneous applications.

    First CLIPS Conference Proceedings, volume 1

    Get PDF
    The proceedings of the First CLIPS (C Language Integrated Production System) Conference, hosted by the NASA Lyndon B. Johnson Space Center in August 1990, are presented. Articles cover engineering applications; intelligent tutors and training; intelligent software engineering; automated knowledge acquisition; network applications; verification and validation; enhancements to CLIPS; Space Shuttle quality control and diagnosis applications; Space Shuttle and real-time applications; and medical, biological, and agricultural applications.

    Semantic description and matching of services for pervasive environments

    Get PDF
    With the evolution of the World Wide Web and the advancement of the electronic world, the diversity of available services is increasing rapidly. This raises new demands for the efficient discovery and location of heterogeneous services and resources in dynamically changing environments. Traditional approaches to service discovery, such as UDDI, Salutation and SLP, characterise services using predefined service categories and fixed attribute-value pairs, and their matching techniques are limited to syntactic comparisons based on attributes or interfaces. More recently, with the popularity of Semantic Web technologies, there has been increased interest in applying reasoning mechanisms to support discovery and matching. These approaches provide important directions for overcoming the limitations of traditional service discovery, but they still have shortcomings of their own; in particular, they lack an effective ranking criterion for ordering the potential matches according to their suitability to satisfy the request under concern. This thesis presents a semantic matching framework to facilitate effective discovery of device-based services in pervasive environments. It offers a ranking mechanism that orders the available services by their suitability, and it takes into account priorities placed on individual requirements in a request during the matching process. The proposed approach has been implemented in a pervasive scenario for matching device-based services. The Device Ontology, developed as part of this research, has been used to describe the devices and their services.
    The retrieval effectiveness of this semantic matching approach has been formally investigated through human participant studies, and the experimental results indicate that its rankings correlate well with human perception. The performance of the solution has also been evaluated to explore the effect of employing reasoning mechanisms on the efficiency of the matching process. Specifically, the scalability of the solution has been investigated with respect to the request size and the number of advertisements involved in matching.
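    The core idea of ranking advertised services by weighted requirement satisfaction can be sketched as follows. This is a hypothetical illustration only; the attribute names, scoring rule, and service descriptions are invented and do not reproduce the thesis's Device Ontology matcher, which reasons over ontology descriptions rather than flat dictionaries.

```python
# Toy sketch: rank candidate services against a request whose individual
# requirements carry priorities (weights). Higher-priority requirements
# contribute more to the match score, so the ordering reflects suitability.

def match_score(request, service):
    """Weighted fraction of satisfied requirements, in [0, 1]."""
    total = sum(weight for _, _, weight in request)
    score = sum(weight for attr, wanted, weight in request
                if service.get(attr) == wanted)
    return score / total if total else 0.0

def rank(request, services):
    """Order candidate services by descending suitability."""
    return sorted(services, key=lambda s: match_score(request, s), reverse=True)

# A request: (attribute, desired value, priority). Values are illustrative.
request = [("type", "printer", 3), ("colour", True, 1), ("duplex", True, 2)]
services = [
    {"name": "lab-printer", "type": "printer", "colour": False, "duplex": True},
    {"name": "office-mfp",  "type": "printer", "colour": True,  "duplex": True},
]
best = rank(request, services)[0]   # the fully matching service ranks first
```

    The key point the thesis addresses, missing from syntactic discovery protocols, is exactly this graded ordering: partial matches are not simply discarded but ranked by how well they satisfy the prioritised requirements.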

    Fuzzy Sets, Fuzzy Logic and Their Applications

    Get PDF
    The present book contains 20 articles selected from the 53 manuscripts submitted for the Special Issue "Fuzzy Sets, Fuzzy Logic and Their Applications" of the MDPI journal Mathematics. The articles, which appear in the book in the order in which they were accepted, were published in Volumes 7 (2019) and 8 (2020) of the journal and cover a wide range of topics connected to the theory and applications of fuzzy systems and their extensions and generalizations. This range includes, among others, management of uncertainty in a fuzzy environment; fuzzy assessment methods of human-machine performance; fuzzy graphs; fuzzy topological and convergence spaces; bipolar fuzzy relations; type-2 fuzzy sets; and intuitionistic, interval-valued, complex, picture, and Pythagorean fuzzy sets, soft sets and algebras. The applications presented are oriented to finance, fuzzy analytic hierarchy, green supply chain industries, smart health practice, and hotel selection. This wide range of topics makes the book interesting for all those working in the wider area of fuzzy sets and systems and of fuzzy logic, and for those with the proper mathematical background who wish to become familiar with recent advances in fuzzy mathematics, which has entered almost all sectors of human life and activity.

    Representation and combination of uncertain information: new results with applications to nuclear safety studies

    Get PDF
    It often happens that the values of some parameters or variables of a system are imperfectly known, either because of the variability of the modelled phenomena, or because the available information is imprecise or incomplete. Classical probability theory is usually used to treat these uncertainties. However, recent years have witnessed the appearance of arguments pointing to the conclusion that classical probabilities are inadequate for handling imprecise or incomplete information. Other frameworks have thus been proposed to address this problem, the three main ones being probability sets, random sets and possibility theory. Many questions concerning uncertainty treatment within these frameworks remain open. More precisely, it is necessary to build bridges between the three frameworks to advance toward a unified handling of uncertainty, and practical methods for treating information are needed, as using these frameworks can be computationally costly. In this work, we propose answers to these two needs for a set of commonly encountered problems. In particular, we focus on the problems of:
    - uncertainty representation
    - fusion and evaluation of information from multiple sources
    - independence modelling
    The aim is to provide tools, both theoretical and practical, for treating uncertainty. Some of these tools are then applied to problems related to nuclear safety issues.
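    Possibility theory, one of the three frameworks named above, can be illustrated compactly. A possibility distribution pi over the states assigns each state a plausibility in [0, 1] (with at least one state fully plausible); any event A then receives a possibility Pi(A) = max of pi over A and a necessity N(A) = 1 - Pi(complement of A), and the interval [N(A), Pi(A)] brackets the ill-known probability of A. The distribution below is invented for the example and is not taken from the thesis.

```python
# Possibility distribution over states of an imperfectly known parameter.
# pi("nominal") = 1.0 means "nominal" is fully plausible.
pi = {"low": 0.2, "nominal": 1.0, "high": 0.5}

def possibility(event, pi):
    """Pi(A): plausibility of the most plausible state in A."""
    return max(pi[s] for s in event)

def necessity(event, pi):
    """N(A) = 1 - Pi(not A): how certain the event is."""
    complement = [s for s in pi if s not in event]
    return 1.0 - (max(pi[s] for s in complement) if complement else 0.0)

A = ["nominal", "high"]                          # event: "parameter is not low"
bounds = (necessity(A, pi), possibility(A, pi))  # probability of A lies in [N, Pi]
```

    The gap between necessity and possibility is precisely the imprecision that a single classical probability cannot express, which is the motivation for the frameworks studied in this thesis.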

    Bioinformatical approaches to ranking of anti-HIV combination therapies and planning of treatment schedules

    Get PDF
    The human immunodeficiency virus (HIV) pandemic is one of the most serious health challenges humanity is facing today. Combination therapy comprising multiple antiretroviral drugs has resulted in a dramatic decline in HIV-related mortality in developed countries. However, the emergence of drug-resistant HIV variants during treatment remains an obstacle to permanent treatment success and seriously hampers the composition of new active regimens. In this thesis we use statistical learning to develop novel methods that rank combination therapies according to their chance of achieving treatment success. These depend on information regarding the treatment composition, the viral genotype, features of viral evolution, and the patient's therapy history. Moreover, we investigate different definitions of response to antiretroviral therapy and their impact on the prediction performance of our method. We address the problem of extending purely data-driven approaches to support novel drugs with little available data. In addition, we explore the prospect of prediction systems centered on the patient's treatment history instead of the viral genotype. We present a framework for rapidly simulating resistance development during combination therapy, which will eventually allow combination therapies to be applied in the best order. Finally, we analyze surface proteins of HIV regarding their susceptibility to neutralizing antibodies, with the aim of supporting HIV vaccine development.
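    The ranking idea at the heart of the thesis can be sketched as: score each candidate regimen with a predicted probability of success, then sort. The scorer below is a deliberately naive stand-in that penalises drugs whose known resistance mutations appear in the viral genotype; the drug names and mutation sets are real nomenclature but chosen freely for illustration, and the thesis itself uses trained statistical-learning models rather than such a rule.

```python
# Hypothetical resistance mutations per drug (illustrative, not a clinical table).
RESISTANCE = {
    "AZT": {"M41L", "T215Y"},
    "3TC": {"M184V"},
    "EFV": {"K103N"},
    "LPV": {"V82A"},
}

def predicted_success(regimen, genotype):
    """Toy score: fraction of drugs in the regimen unaffected by observed mutations."""
    active = sum(1 for drug in regimen if not (RESISTANCE[drug] & genotype))
    return active / len(regimen)

def rank_regimens(regimens, genotype):
    """Order candidate combination therapies by descending predicted success."""
    return sorted(regimens, key=lambda r: predicted_success(r, genotype),
                  reverse=True)

genotype = {"M184V", "K103N"}                    # mutations seen in the patient's virus
regimens = [["AZT", "3TC", "EFV"], ["AZT", "LPV"]]
ranked = rank_regimens(regimens, genotype)       # regimen with all drugs active first
```

    Replacing the toy scorer with a model learned from treatment-outcome data, and feeding it genotype, therapy-history, and evolutionary features, gives the shape of the ranking systems developed in the thesis.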
