
    Selectional Restriction Extraction for Frame-Based Knowledge Graph Augmentation

    The Semantic Web is an ambitious project aimed at creating a global, machine-readable web of data that intelligent agents can access and reason over. Ontologies are a key component of the Semantic Web, as they provide a formal description of the concepts and relationships in a particular domain. Exploiting the expressiveness of knowledge graphs together with a logically sound ontological schema is crucial for representing consistent knowledge and inferring new relations over the data. In other words, constraining the entities and predicates of knowledge graphs leads to improved semantics. The same benefits apply to restrictions over linguistic resources, which are knowledge graphs used to represent natural language. More specifically, it is possible to specify constraints on the arguments that can be associated with a given frame, based on their semantic roles (selectional restrictions). However, most linguistic resources define very general restrictions because they must be able to represent different domains. Hence, the main research question tackled by this thesis is whether domain-specific selectional restrictions are useful for ontology augmentation, ontology definition and neuro-symbolic tasks on knowledge graphs. To this end, we have developed a tool to empirically extract selectional restrictions and their probabilities. The obtained constraints are represented in OWL-Star and subsequently mapped into OWL: we show that the mapping is information-preserving and invertible if certain conditions hold. The resulting OWL ontologies are inserted into Framester, an open lexical-semantic resource for the English language, yielding an improved and augmented language-resource hub. The use of selectional restrictions is also tested for ontology documentation and neuro-symbolic tasks, showing how they can be exploited to provide meaningful results.
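The empirical extraction of selectional restrictions with probabilities, as described above, can be sketched as a simple counting procedure. The frame, role, and type names below are hypothetical, and the frequency threshold is an illustrative assumption rather than the thesis's actual extraction method:

```python
from collections import Counter, defaultdict

def extract_selectional_restrictions(annotations, threshold=0.2):
    """Empirically derive selectional restrictions: for each (frame, role)
    pair, keep the argument types whose relative frequency meets a
    threshold, together with their estimated probabilities."""
    counts = defaultdict(Counter)
    for frame, role, arg_type in annotations:
        counts[(frame, role)][arg_type] += 1
    restrictions = {}
    for key, counter in counts.items():
        total = sum(counter.values())
        restrictions[key] = {t: n / total for t, n in counter.items()
                             if n / total >= threshold}
    return restrictions

# Hypothetical role-annotated corpus instances: (frame, role, argument type).
annotations = [
    ("Ingestion", "Ingestor", "Animal"),
    ("Ingestion", "Ingestor", "Animal"),
    ("Ingestion", "Ingestor", "Person"),
    ("Ingestion", "Ingestibles", "Food"),
]
restr = extract_selectional_restrictions(annotations)
```

Each surviving `(frame, role) -> {type: probability}` entry corresponds to one probabilistic restriction that could then be serialized as an annotated axiom.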

    Semantic interpretation of events in lifelogging

    The topic of this thesis is lifelogging, the automatic, passive recording of a person’s daily activities, and in particular the semantic analysis and enrichment of lifelogged data. Our work centers on visual lifelog data, such as images taken from wearable cameras. Wearable cameras generate an archive of a person’s day from a first-person viewpoint, but one problem with this is the sheer volume of information that can be generated. To make this potentially very large volume of information more manageable, our analysis segments each day’s lifelog data into discrete, non-overlapping events corresponding to activities in the wearer’s day. To manage lifelog data at the event level, we define a set of concepts using an ontology appropriate to the wearer, automatically detect these concepts in events, and then semantically enrich each detected lifelog event, making the concepts an index into the events. Once this enrichment is complete, the lifelog can support semantic search for everyday media management, serve as a memory aid, or contribute to medical analysis of the activities of daily living (ADL), and so on. In the thesis, we address the problem of how to select the concepts to be used for indexing events, and we propose a semantic, density-based algorithm to cope with concept-selection issues in lifelogging. We then apply activity detection to classify everyday activities, employing the selected concepts as high-level semantic features. Finally, activities are modeled by multi-context representations and enriched using Semantic Web technologies. The thesis includes an experimental evaluation using real data from users and shows the performance of our algorithms in capturing the semantics of everyday concepts and their efficacy in activity recognition and semantic enrichment.
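The event segmentation step described above can be sketched as boundary detection over per-image concept scores. This is a minimal illustration, assuming each image is represented by a vector of concept-detector scores; the cosine-similarity threshold and the toy vectors are our assumptions, not the thesis's actual segmentation algorithm:

```python
import math

def segment_events(concept_vectors, threshold=0.5):
    """Split a stream of per-image concept-score vectors into events:
    a new event starts whenever the cosine similarity between
    consecutive images drops below the threshold."""
    def cos(a, b):
        dot = sum(x * y for x, y in zip(a, b))
        na = math.sqrt(sum(x * x for x in a))
        nb = math.sqrt(sum(x * x for x in b))
        return dot / (na * nb) if na and nb else 0.0

    if not concept_vectors:
        return []
    events, current = [], [0]
    for i in range(1, len(concept_vectors)):
        if cos(concept_vectors[i - 1], concept_vectors[i]) < threshold:
            events.append(current)   # similarity dropped: close the event
            current = []
        current.append(i)
    events.append(current)
    return events

# Two nearly identical "office" frames followed by two "kitchen" frames.
events = segment_events([[1, 0], [1, 0.1], [0, 1], [0, 1]])
```

Each returned sublist holds the image indices of one discrete, non-overlapping event.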

    Towards generalizable neuro-symbolic reasoners

    Doctor of Philosophy, Department of Computer Science. Symbolic knowledge representation and reasoning and deep learning are fundamentally different approaches to artificial intelligence with complementary capabilities. The former is transparent and data-efficient, but it is sensitive to noise and cannot be applied to non-symbolic domains where the data is ambiguous. The latter can learn complex tasks from examples and is robust to noise, but it is a black box, requires large amounts of not-necessarily-easily-obtained data, is slow to learn, and is prone to adversarial examples. Each paradigm excels at certain types of problems where the other performs poorly. In order to develop stronger AI systems, integrated neuro-symbolic systems that combine artificial neural networks and symbolic reasoning are being sought. In this context, one of the fundamental open problems is how to perform logic-based deductive reasoning over knowledge bases by means of trainable artificial neural networks. In this dissertation, we summarize our recent efforts to bridge the neural and symbolic divide in the context of deep deductive reasoners. More specifically, we designed a novel way of conducting neuro-symbolic reasoning by pointing to the input elements. More importantly, we showed that the proposed approach generalizes across new domains and vocabularies, demonstrating symbol-invariant zero-shot reasoning capability. Furthermore, we demonstrated that a deep learning architecture based on memory networks and pre-embedding normalization is capable of learning to perform deductive reasoning over previously unseen RDF knowledge graphs with high accuracy. We apply these models to the Resource Description Framework (RDF), first-order logic, and the description logic EL+, respectively.
Throughout this dissertation we discuss the strengths and limitations of these models, particularly in terms of accuracy, scalability, transferability, and generalizability. Based on our experimental results, pointer networks perform remarkably well across multiple reasoning tasks, outperforming the previously reported state of the art by a significant margin. We observe that pointer networks preserve their performance even when challenged with knowledge graphs over domains and vocabularies they have never encountered before. To our knowledge, this work is the first to reveal the power of pointer networks for conducting deductive reasoning. Similarly, we show that memory networks can be trained to perform deductive RDFS reasoning with high precision and recall, and that the trained memory network's capabilities transfer to previously unseen knowledge bases. Finally, we discuss possible modifications to enhance desirable capabilities. Altogether, these research topics result in a methodology for symbol-invariant neuro-symbolic reasoning.
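The RDFS entailment that these networks are trained to emulate can itself be computed symbolically. A minimal forward-chaining sketch covering two core RDFS rules, class-hierarchy transitivity (rdfs11) and type propagation (rdfs9), with illustrative triples of our own:

```python
def rdfs_closure(triples):
    """Forward-chain two core RDFS entailment rules to a fixpoint:
    rdfs11: (A subClassOf B), (B subClassOf C)  =>  (A subClassOf C)
    rdfs9:  (x type A),      (A subClassOf B)   =>  (x type B)"""
    SUB, TYPE = "rdfs:subClassOf", "rdf:type"
    closure = set(triples)
    changed = True
    while changed:
        changed = False
        new = set()
        for (s, p, o) in closure:
            if p != SUB:
                continue
            for (s2, p2, o2) in closure:
                if p2 == SUB and s2 == o:
                    new.add((s, SUB, o2))    # rdfs11
                elif p2 == TYPE and o2 == s:
                    new.add((s2, TYPE, o))   # rdfs9
        if not new <= closure:
            closure |= new
            changed = True
    return closure

inferred = rdfs_closure({
    ("Dog", "rdfs:subClassOf", "Mammal"),
    ("Mammal", "rdfs:subClassOf", "Animal"),
    ("rex", "rdf:type", "Dog"),
})
```

A deep deductive reasoner is trained to produce (an approximation of) this closure directly from the input triples, without executing the rules.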

    Practical reasoning for defeasible description logics.

    Doctor of Philosophy in Mathematics, Statistics and Computer Science, University of KwaZulu-Natal, Durban, 2016. Description Logics (DLs) are a family of logic-based languages for formalising ontologies. They have useful computational properties that allow the development of automated reasoning engines to infer implicit knowledge from ontologies. However, classical DLs do not tolerate exceptions to specified knowledge. This led to the prominent research area of nonmonotonic or defeasible reasoning for DLs, where most techniques were adapted from seminal works for propositional and first-order logic. Despite the topic's attention in the literature, there remains no consensus on what "sensible" defeasible reasoning means for DLs. Furthermore, there are solid foundations for several approaches and yet no serious implementations and practical tools. In this thesis we address the aforementioned issues in a broad sense. We identify the preferential approach of Kraus, Lehmann and Magidor (KLM) in propositional logic as a suitable abstract framework for defining and studying the precepts of sensible defeasible reasoning. We generalise KLM's precepts, and the arguments motivating them, to the DL case. We also provide several preferential algorithms for defeasible entailment in DLs; evaluate these algorithms, and the main alternatives in the literature, against the agreed-upon precepts; extensively test the performance of these algorithms; and ultimately consolidate our implementation in a software tool called the Defeasible-Inference Platform (DIP). We found some useful entailment regimes within the preferential context that satisfy all the KLM properties, and some that scale to real-world ontologies even without extensive optimisation.
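The KLM-style ranking of defaults that underlies rational closure can be illustrated at the propositional level. The sketch below brute-forces satisfiability over a handful of atoms and uses the classic penguin example; it is our illustration of the general idea, not the thesis's DL algorithms:

```python
from itertools import product

def satisfiable(atoms, formula):
    """Brute-force propositional satisfiability over the given atoms."""
    return any(formula(dict(zip(atoms, vals)))
               for vals in product([False, True], repeat=len(atoms)))

def rank_defaults(atoms, defaults):
    """Rational-closure ranking: a default's rank is the first level at which
    its antecedent is consistent with the material counterparts of all
    defaults still in play.  `defaults` is a list of
    (antecedent, consequent) predicates over truth assignments."""
    ranks, remaining, level = {}, list(range(len(defaults))), 0
    while remaining:
        def materialized(a, rem=tuple(remaining)):
            return all((not defaults[i][0](a)) or defaults[i][1](a)
                       for i in rem)
        exceptional = [i for i in remaining
                       if not satisfiable(atoms, lambda a, i=i:
                                          defaults[i][0](a) and materialized(a))]
        if len(exceptional) == len(remaining):
            for i in remaining:          # antecedent never satisfiable
                ranks[i] = float("inf")
            break
        for i in remaining:
            if i not in exceptional:
                ranks[i] = level
        remaining, level = exceptional, level + 1
    return ranks

# Classic example: birds fly; penguins are birds; penguins do not fly.
atoms = ["b", "p", "f"]
defaults = [
    (lambda a: a["b"], lambda a: a["f"]),       # bird |~ flies
    (lambda a: a["p"], lambda a: a["b"]),       # penguin |~ bird
    (lambda a: a["p"], lambda a: not a["f"]),   # penguin |~ not flies
]
ranks = rank_defaults(atoms, defaults)
```

The two penguin defaults are more exceptional than the generic bird default, so they receive the higher rank, which is what lets a defeasible entailment regime retract "penguins fly" without retracting "birds fly".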

    Semantic-guided predictive modeling and relational learning within industrial knowledge graphs

    The ubiquitous availability of data in today’s manufacturing environments, driven mainly by the extended use of software and built-in sensing capabilities in automation systems, enables companies to embrace more advanced predictive modeling and analysis in order to optimize processes and equipment usage. While the potential insight gained from such analysis is high, it often remains untapped, since integrating and analyzing data silos from different production domains requires high manual effort and is therefore not economical. Addressing these challenges, digital representations of production equipment, so-called digital twins, have emerged, leading the way to semantic interoperability across systems in different domains. From a data-modeling point of view, digital twins can be seen as industrial knowledge graphs, which serve as the semantic backbone of manufacturing software systems and data analytics. Because the prevalent, historically grown and scattered manufacturing software landscape comprises numerous proprietary information models, data sources are highly heterogeneous. There is therefore an increasing need for semi-automatic support in data modeling, enabling end-user engineers to model their domain and maintain a unified semantic knowledge graph across the company. Once data modeling and integration are done, further challenges arise, since there has been little research on how knowledge graphs can contribute to the simplification and abstraction of statistical analysis and predictive modeling, especially in manufacturing. In this thesis, new approaches for modeling and maintaining industrial knowledge graphs, with a focus on the application of statistical models, are presented.
    First, concerning data modeling, we discuss requirements from several existing standard information models and analytic use cases in the manufacturing and automation-system domains, and derive a fragment of the OWL 2 language that is expressive enough to cover the required semantics for a broad range of use cases. The prototypical implementation enables domain end-users, i.e. engineers, to extend the base ontology model with intuitive semantics. Furthermore, it supports efficient reasoning and constraint checking via translation to rule-based representations. Based on these models, we propose an architecture for the end-user-facilitated application of statistical models using ontological concepts and ontology-based data-access paradigms. In addition, we present an approach for domain-knowledge-driven preparation of predictive models in terms of feature selection, and show how schema-level reasoning in the OWL 2 language can be employed for this task within knowledge graphs of industrial automation systems. A production cycle-time prediction model in an example application scenario serves as a proof of concept and demonstrates that axiomatized domain knowledge about features can give performance competitive with purely data-driven feature selection. In the case of high-dimensional data with small sample sizes, we show that graph kernels over domain ontologies can provide additional information on the degree of variable dependence. Furthermore, a special application of feature selection in graph-structured data is presented, and we develop a method that incorporates domain constraints derived from meta-paths in knowledge graphs into a branch-and-bound pattern-enumeration algorithm. Lastly, we discuss the maintenance of facts in large-scale industrial knowledge graphs, focusing on latent variable models for the automated population and completion of missing facts.
    State-of-the-art approaches cannot deal with time-series data in the form of events that naturally occur in industrial applications. We therefore present an extension of knowledge graph embedding learning that incorporates data in the form of event logs. Finally, we design several use-case scenarios of missing information and evaluate our embedding approach on data from a real-world factory environment. We draw the conclusion that industrial knowledge graphs are a powerful tool that can be used by end-users in the manufacturing domain for data modeling and model validation. They are especially suitable for the facilitated application of statistical models in conjunction with background domain knowledge, by providing information about features upfront. Furthermore, relational learning approaches showed great potential to semi-automatically infer missing facts and provide recommendations to production operators on how to keep stored facts in sync with the real world.
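Latent-variable fact completion of the kind described above can be illustrated with a TransE-style scoring function, which models a relation as a translation in embedding space. The entity and relation names and the embedding values below are made up for illustration and are not taken from the factory data set or the thesis's actual model:

```python
import math

def transe_score(h, r, t):
    """TransE plausibility of a triple: negative distance ||h + r - t||,
    so scores closer to 0 mean more plausible triples."""
    return -math.sqrt(sum((hi + ri - ti) ** 2
                          for hi, ri, ti in zip(h, r, t)))

def complete_tail(embeddings, head, relation, candidates):
    """Rank candidate tail entities for a missing fact (head, relation, ?)
    and return the most plausible one."""
    return max(candidates,
               key=lambda c: transe_score(embeddings[head],
                                          embeddings[relation],
                                          embeddings[c]))

# Hypothetical 2-d embeddings for a toy factory knowledge graph.
embeddings = {
    "motor1":        [0.0, 0.0],
    "triggersAlarm": [1.0, 0.0],
    "alarm_A":       [1.0, 0.0],
    "alarm_B":       [0.0, 1.0],
}
best = complete_tail(embeddings, "motor1", "triggersAlarm",
                     ["alarm_A", "alarm_B"])
```

In a deployed system, the ranked candidates would be surfaced as recommendations to production operators rather than asserted automatically.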

    An Ontology Centric Architecture For Mediating Interactions In Semantic Web-Based E-Commerce Environments

    Information freely generated, widely distributed and openly interpreted is a rich source of creative energy in the digital age that we live in. As we move further into this irrevocable relationship with self-growing and actively proliferating information spaces, we are also finding ourselves overwhelmed, disheartened and powerless in the presence of so much information. We are at a point where, without domain familiarity or expert guidance, sifting through the copious volumes of information to find relevance quickly turns into a mundane task often requiring enormous patience. The realization of accomplishment soon turns into a matter of extensive cognitive load, serendipity or just plain luck. This dissertation describes a theoretical framework to analyze user interactions based on mental representations in a medium where the nature of the problem-solving task emphasizes the interaction between internal task representation and the external problem domain. The framework is established by relating to work in behavioral science, sociology, cognitive science and knowledge engineering, particularly Herbert Simon’s (1957; 1989) notion of satisficing on bounded rationality and Schön’s (1983) reflective model. Mental representations mediate situated actions in our constrained digital environment and provide the opportunity for completing a task. Since assistive aids to guide situated actions reduce complexity in the task environment (Vessey 1991; Pirolli et al. 1999), the framework is used as the foundation for developing mediating structures to express the internal, external and mental representations. Interaction aids superimposed on mediating structures that model thought and action will help to guide the “perpetual novice” (Borgman 1996) through the vast digital information spaces by orchestrating better cognitive fit between the task environment and the task solution. 
    This dissertation presents an ontology-centric architecture for mediating interactions in a Semantic Web-based e-commerce environment, developed following the Design Science approach. The potential of the framework is illustrated as a functional model by using it to model the hierarchy of tasks in a consumer decision-making process as it applies in an e-commerce setting. Ontologies are used to express the perceptual operations on the external task environment, the intuitive operations on the internal task representation, and the constraint satisfaction and situated actions conforming to reasoning from the cognitive fit. It is maintained that actions themselves cannot be enforced, but when the meaning from mental imagery and the task environment are brought into coordination, situated actions follow that change the present situation into one closer to what is desired. To test the usability of the ontologies, we use the Web Ontology Language (OWL) to express the semantics of the three representations. We also use OWL to validate the knowledge representations and to make rule-based logical inferences on the ontological semantics. An e-commerce application was also developed to show how effective guidance can be provided by constructing semantically rich target pages from the knowledge manifested in the ontologies.

    Autonomic Approach based on Semantics and Checkpointing for IoT System Management

    The French abstract was not provided by the author. The English abstract was not provided by the author.