
    Reasoning with Very Expressive Fuzzy Description Logics

    It is widely recognized today that the management of imprecision and vagueness will yield more intelligent and realistic knowledge-based applications. Description Logics (DLs) are a family of knowledge representation languages that have gained considerable attention over the last decade, mainly due to their decidability and the existence of empirically high-performance reasoning algorithms. In this paper, we extend the well-known fuzzy ALC DL to the fuzzy SHIN DL, which extends the fuzzy ALC DL with transitive role axioms (S), inverse roles (I), role hierarchies (H) and number restrictions (N). We illustrate why transitive role axioms are difficult to handle in the presence of fuzzy interpretations and how to handle them properly. Then we extend these results by adding role hierarchies and finally number restrictions. The main contributions of the paper are the decidability proof of the fuzzy DL languages fuzzy-SI and fuzzy-SHIN, as well as decision procedures for the knowledge base satisfiability problem of the fuzzy-SI and fuzzy-SHIN languages.
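
    For orientation, the difficulty with transitive roles mentioned above stems from the fuzzy semantics: a fuzzy interpretation assigns membership degrees in [0,1] rather than truth values, so transitivity becomes a sup-min constraint instead of a simple relational closure. A minimal sketch in the usual Zadeh-style semantics for fuzzy SHIN (the paper's exact formulation may differ):

        % Fuzzy interpretation: concepts and roles map to membership degrees.
        C^{\mathcal{I}} : \Delta^{\mathcal{I}} \to [0,1], \qquad
        R^{\mathcal{I}} : \Delta^{\mathcal{I}} \times \Delta^{\mathcal{I}} \to [0,1]

        % A transitive role axiom Trans(R) is then read as a sup-min condition:
        \forall x, z \in \Delta^{\mathcal{I}}: \quad
        R^{\mathcal{I}}(x,z) \;\geq\; \sup_{y \in \Delta^{\mathcal{I}}}
        \min\bigl(R^{\mathcal{I}}(x,y),\, R^{\mathcal{I}}(y,z)\bigr)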

    Feature interaction in composed systems. Proceedings. ECOOP 2001 Workshop #08 in association with the 15th European Conference on Object-Oriented Programming, Budapest, Hungary, June 18-22, 2001

    Feature interaction is nothing new and not limited to computer science. The problem of undesirable feature interaction (the feature interaction problem) has already been investigated in the telecommunication domain. Our goal is the investigation of feature interaction in component-based systems beyond telecommunication. This Technical Report collects all position papers accepted at the ECOOP 2001 workshop no. 08 on "Feature Interaction in Composed Systems". The workshop was held on June 18, 2001, in Budapest, Hungary.

    Ontology verbalization in agglutinating Bantu languages: a study of Runyankore and its generalizability

    Natural Language Generation (NLG) systems have been developed to generate text in multiple domains, including personalized patient information. However, their application is limited in Africa because they generate text in English, yet indigenous languages are still predominantly spoken throughout the continent, especially in rural areas. The existing healthcare NLG systems cannot be reused for Bantu languages because of their complex grammatical structure, nor can the generated text be used in machine translation systems for Bantu languages because these languages are computationally under-resourced. This research aimed to verbalize ontologies in agglutinating Bantu languages. We had four research objectives: (1) noun pluralization and verb conjugation in Runyankore; (2) Runyankore verbalization patterns for the selected description logic constructors; (3) combining the pluralization, conjugation, and verbalization components to form a Runyankore grammar engine; and (4) generalizing the Runyankore and isiZulu approaches to ontology verbalization to other agglutinating Bantu languages. We used an approach that combines morphology with syntax and semantics to develop a noun pluralizer for Runyankore, and used Context-Free Grammars (CFGs) for verb conjugation. We developed verbalization algorithms for eight constructors in a description logic. We then combined these components into a grammar engine developed as a Protégé 5.X plugin. The investigation into generalizability used the bootstrap approach, and investigated bootstrapping for languages in the same language zone (intra-zone bootstrappability) and for languages across language zones (inter-zone bootstrappability). We obtained verbalization patterns for Luganda and isiXhosa, in the same zones as Runyankore and isiZulu respectively, and for chiShona, Kikuyu, and Kinyarwanda from different zones, and used the bootstrap metric that we developed to identify the most efficient source-target bootstrap pair. By regrouping Meinhof's noun class system we were able to eliminate non-determinism during computation, and this led to the development of a generic noun pluralizer. We also showed that CFGs can conjugate verbs in the five additional languages. Finally, we proposed the architecture for an API that could be used to generate text in agglutinating Bantu languages. Our research provides a method for surface realization for an under-resourced and grammatically complex family of languages, the Bantu languages. We leave the development of a complete NLG system based on the Runyankore grammar engine and of the API as areas for future work.
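
    As a rough, hypothetical illustration of what a noun-class-aware verbalization component looks like (the prefix table, pattern, and function names below are simplifications invented for this sketch, not the thesis's grammar engine):

        # Toy noun-class table: singular prefix -> plural prefix (illustrative only).
        NOUN_CLASS_PLURAL = {
            "omu": "aba",   # e.g. noun class 1 -> class 2
            "eki": "ebi",   # e.g. noun class 7 -> class 8
        }

        def pluralize(noun: str) -> str:
            """Swap a singular noun-class prefix for its plural counterpart."""
            for singular, plural in NOUN_CLASS_PLURAL.items():
                if noun.startswith(singular):
                    return plural + noun[len(singular):]
            return noun  # unknown class: left unchanged

        def verbalize_subsumption(sub: str, sup: str) -> str:
            # English stand-in for a subsumption (C subclass-of D) pattern; a real
            # Runyankore pattern would add language-specific quantification words
            # and agreement markers driven by the noun class.
            return f"every {sub} is a kind of {sup}"

        print(pluralize("omuntu"))                       # -> "abantu"
        print(verbalize_subsumption("omushaija", "omuntu"))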

    Temporalised Description Logics for Monitoring Partially Observable Events

    Inevitably, it becomes more and more important to verify that the systems surrounding us have certain properties. This is indeed unavoidable for safety-critical systems such as power plants and intensive-care units. We use the term system in a broad sense: it may be man-made (e.g. a computer system) or natural (e.g. a patient in an intensive-care unit). Whereas in Model Checking it is assumed that one has complete knowledge about the functioning of the system, we consider an open-world scenario and assume that we can only observe the behaviour of the actual running system through sensors. Such an abstract sensor could sense, e.g., the blood pressure of a patient or the air traffic observed by radar. The observed data are then preprocessed appropriately and stored in a fact base. Based on the data available in the fact base, situation-awareness tools are supposed to help the user detect certain situations that require intervention by an expert. Such a situation could be that the heart rate of a patient is rather high while the blood pressure is low, or that a collision of two aeroplanes is about to happen. Moreover, the information in the fact base can be used by monitors to verify that the system has certain properties. It is not realistic, however, to assume that the sensors always yield a complete description of the current state of the observed system. Thus, it makes sense to assume that information that is not present in the fact base is unknown rather than false. Moreover, one very often has some knowledge about the functioning of the system. This background knowledge can be used to draw conclusions about the possible future behaviour of the system. Employing description logics (DLs) is one way to deal with these requirements. In this thesis, we tackle the sketched problem in three different contexts: (i) runtime verification using a temporalised DL, (ii) temporalised query entailment, and (iii) verification in DL-based action formalisms.
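
    As a concrete (hypothetical) illustration of the monitoring setting described above, a temporalised DL property can be written as an LTL formula over DL assertions; the concept and individual names here are invented for the example, and the thesis's exact formalism may differ:

        % "Whenever patient p1 is observed with a high heart rate and low blood
        % pressure, an alarm is eventually raised."
        \Box \Bigl( (\mathit{HighHeartRate} \sqcap \mathit{LowBloodPressure})(p_1)
        \;\rightarrow\; \Diamond\, \mathit{AlarmRaised}(p_1) \Bigr)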

    A general cognitive framework for context-aware systems: extensions and applications for high level information fusion approaches

    Context-aware systems aim at the development of computational systems that process data acquired from different data sources and adapt their behaviour in order to provide the 'right' information, at the 'right' time, in the 'right' place, in the 'right' way to the 'right' person (Fischer, 2012). Traditionally, computational research has tried to answer these needs by means of low-level algorithms. In recent years, the combination of numeric and symbolic approaches has offered the opportunity to create systems that deal with these issues. However, although the performance of algorithms and the quality of the data directly provided by computers and devices have quickly improved, the symbolic models used to represent the resulting knowledge have not yet been adapted to smart environments. This lack of representation makes it impossible to take advantage of the semantic quality of the information provided by new sensors. This dissertation proposes a set of extensions and applications focused on a cognitive framework for the implementation of context-aware systems based on a general model inspired by the Information Fusion paradigm. This model is organized into several abstraction levels, from low-level raw data to high-level scene interpretation, whose structure is determined by a set of ontologies. Each ontology level provides a skeleton that includes general concepts and relations to describe entities and their connections. This structure has been designed to promote extensibility and modularity, and can be refined to apply the model in specific domains. The framework combines a priori context knowledge represented with ontologies with real data coming from sensors to support logic-based high-level interpretation of the current situation and to automatically generate feedback recommendations that adjust data acquisition procedures. This work advocates the introduction of general-purpose cognitive layers in order to obtain a representation closer to human cognition, generate additional knowledge, and improve the high-level interpretation. The extensibility and adaptability of the basic ontology levels are demonstrated with the introduction of these traverse semantic layers, which can be present at, and represent information across, several granularity levels of knowledge using a common formalism. Context-based systems must also be able to reason about uncertainty; however, the reasoning associated with ontologies has been limited to classical description logic mechanisms. This research therefore also tackles the problem of reasoning under uncertainty through a logic-based paradigm for abductive reasoning: the Belief-Argumentation System. The main contribution of this dissertation is the adaptation of the general architecture and the theoretical proposals to several context-aware application areas such as Ambient Intelligence, Social Signal Processing, and surveillance systems. The implementation of prototypes and examples for these areas is described throughout this dissertation to progressively illustrate the improvements and extensions in the framework. To initially depict the general model, its components, and the basic reasoning mechanisms, a video-based Ambient Intelligence application is presented. The advantages and features of the framework extensions through traverse cognitive layers are demonstrated in a Social Signal Processing case for the elaboration of automated market research. Finally, the functioning of the system under uncertainty is illustrated with several examples that support decision makers in the detection of potential threats in common harbor scenarios.
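
    To make the layered idea more tangible, the following toy sketch shows the general shape of a high-level interpretation step that maps low-level fused sensor data to a situation label in the harbor scenario; all names, fields, and thresholds are invented for illustration and are not the dissertation's actual model:

        # Hypothetical sketch: low-level fused track data -> high-level situation label.
        from dataclasses import dataclass

        @dataclass
        class Track:                      # low-level fusion output (e.g. from radar)
            vessel_id: str
            speed_knots: float
            in_restricted_zone: bool
            transponder_on: bool

        def assess(track: Track) -> str:
            """High-level interpretation: classify a track as a situation of interest."""
            if track.in_restricted_zone and not track.transponder_on:
                return "potential threat: silent vessel in restricted zone"
            if track.in_restricted_zone and track.speed_knots > 30:
                return "potential threat: high-speed approach"
            return "normal traffic"

        print(assess(Track("v-17", 34.0, True, True)))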

    Generating mock skeletons for lightweight Web service testing: a thesis presented in partial fulfilment of the requirements for the degree of Doctor of Philosophy in Computer Science at Massey University, Manawatū, New Zealand

    Modern application development allows applications to be composed using lightweight HTTP services. Testing such an application requires the availability of the services that the application makes requests to. However, continued access to dependent services during testing may be constrained, making adequate testing a significant and non-trivial engineering challenge. The concept of Service Virtualisation is gaining popularity for testing such applications in isolation. It is the practice of simulating the behaviour of dependent services by synthesising responses using semantic models inferred from recorded traffic. Replacing services with their respective mocks is, therefore, useful for addressing their absence and proceeding with application testing. In reality, however, it is unlikely that fully automated service virtualisation solutions can produce highly accurate proxies. Therefore, we recommend using service virtualisation to infer some attributes of HTTP service responses. We further acknowledge that engineers often want to fine-tune this, which requires algorithms that produce readily interpretable and customisable results. We assume that if service virtualisation is based on simple logical rules, engineers would be able to understand and customise those rules. In this regard, Symbolic Machine Learning approaches can be investigated because of the clear provenance of their results. Accordingly, this thesis examines the appropriateness of symbolic machine learning algorithms for automatically synthesising HTTP services' mock skeletons from network traffic recordings. We consider four commonly used symbolic techniques: the C4.5 decision tree algorithm, the RIPPER and PART rule learners, and the OCEL description logic learning algorithm. The experiments are performed employing network traffic datasets extracted from a few different successful, large-scale HTTP services. The experimental design further focuses on the generation of reproducible results. The chosen algorithms demonstrate the suitability of training highly accurate and human-readable semantic models for predicting the key aspects of HTTP service responses, such as the status and response headers. Having human-readable logic makes interpretation of the response properties simpler. These mock skeletons can then be easily customised to create mocks that generate service responses suitable for testing.
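
    The following sketch is an illustrative analogue only: the thesis uses C4.5, RIPPER, PART, and OCEL, whereas here a scikit-learn decision tree stands in to show the overall shape of learning a response property (the status code) from recorded traffic; the feature names and the tiny dataset are invented for the example:

        from sklearn.feature_extraction import DictVectorizer
        from sklearn.tree import DecisionTreeClassifier, export_text

        # Each record: features of a recorded request -> observed response status.
        requests = [
            {"method": "GET",    "has_id": True,  "auth": True},
            {"method": "GET",    "has_id": True,  "auth": False},
            {"method": "POST",   "has_id": False, "auth": True},
            {"method": "DELETE", "has_id": True,  "auth": False},
        ]
        statuses = ["200", "401", "201", "401"]

        vec = DictVectorizer(sparse=False)
        X = vec.fit_transform(requests)

        tree = DecisionTreeClassifier(max_depth=3).fit(X, statuses)

        # The learned rules are human-readable, which is the point: an engineer
        # can inspect and fine-tune them before building a mock skeleton.
        print(export_text(tree, feature_names=vec.get_feature_names_out().tolist()))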

    Proof-theoretic Semantics for Intuitionistic Multiplicative Linear Logic

    This work is the first exploration of proof-theoretic semantics for a substructural logic. It focuses on the base-extension semantics (B-eS) for intuitionistic multiplicative linear logic (IMLL). The starting point is a review of Sandqvist’s B-eS for intuitionistic propositional logic (IPL), for which we propose an alternative treatment of conjunction that takes the form of the generalized elimination rule for the connective. The resulting semantics is shown to be sound and complete. This motivates our main contribution, a B-eS for IMLL, in which the definitions of the logical constants all take the form of their elimination rule and for which soundness and completeness are established.
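
    As a rough sketch of what "taking the form of the generalized elimination rule" means for conjunction in a base-extension semantics, the clause can be rendered as follows, where B and C range over atomic bases, p over atoms, and ⊩ is the support relation; this is a hedged reconstruction for orientation, and the paper's exact clause may differ:

        \Vdash_{\mathcal{B}} \varphi \wedge \psi
          \quad\text{iff}\quad
          \text{for all } \mathcal{C} \supseteq \mathcal{B} \text{ and all atoms } p:
          \ \text{if } \varphi, \psi \Vdash_{\mathcal{C}} p \text{ then } \Vdash_{\mathcal{C}} p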

    The 1992 4th NASA SERC Symposium on VLSI Design

    Papers from the fourth annual NASA Symposium on VLSI Design, co-sponsored by the IEEE, are presented. Each year this symposium is organized by the NASA Space Engineering Research Center (SERC) at the University of Idaho and is held in conjunction with a quarterly meeting of the NASA Data System Technology Working Group (DSTWG). One task of the DSTWG is to develop new electronic technologies that will meet next generation electronic data system needs. The symposium provides insights into developments in VLSI and digital systems which can be used to increase data systems performance. The NASA SERC is proud to offer, at its fourth symposium on VLSI design, presentations by an outstanding set of individuals from national laboratories, the electronics industry, and universities. These speakers share insights into next generation advances that will serve as a basis for future VLSI design