
    Predictive intelligence to the edge through approximate collaborative context reasoning

    We focus on Internet of Things (IoT) environments in which a network of sensing and computing devices is responsible for locally processing contextual data, reasoning, and collaboratively inferring the occurrence of a specific phenomenon (event). Pushing processing and knowledge inference to the edge of the IoT network allows the complexity of the event reasoning process to be distributed into many manageable pieces and physically located at the source of the contextual information. This enables a huge amount of rich data streams to be processed in real time, which would be prohibitively complex and costly to deliver to a traditional centralized Cloud system. We propose a lightweight, energy-efficient, distributed, adaptive, multiple-context perspective event reasoning model under uncertainty on each IoT device (sensor/actuator). Each device senses and processes context data and infers events based on different local context perspectives: (i) expert knowledge on event representation, (ii) outlier inference, and (iii) deviation from locally predicted context. This novel approximate reasoning paradigm is achieved through a contextualized, collaborative, belief-driven clustering process, in which clusters of devices are formed according to their belief in the presence of events. Our distributed and federated intelligence model efficiently identifies any localized abnormality in the contextual data in light of event reasoning by aggregating local degrees of belief, and it updates and adjusts its knowledge in response to contextual data outliers and novelty detection. We provide a comprehensive experimental and comparative assessment of our model over real contextual data against other localized and centralized event detection models and show the benefits stemming from its adoption, achieving up to three orders of magnitude less energy consumption and high quality of inference.
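
    To make the multi-perspective belief idea concrete, the following minimal Python sketch combines the three local perspectives named in the abstract into a degree of belief per device and aggregates beliefs across devices. The weighting, thresholds, and function names are illustrative assumptions, not the paper's actual model.

```python
from dataclasses import dataclass
from statistics import mean

@dataclass
class Device:
    device_id: int
    readings: list          # recent local context window
    predicted: float        # locally predicted next value

def local_belief(dev, expert_threshold=30.0):
    """Combine three local context perspectives into a degree of belief in [0, 1]."""
    current = dev.readings[-1]
    # (i) expert knowledge: a simple rule on the raw value (assumed threshold)
    expert = 1.0 if current > expert_threshold else 0.0
    # (ii) outlier inference: z-score of the latest reading against the local window
    mu = mean(dev.readings)
    sd = (sum((x - mu) ** 2 for x in dev.readings) / len(dev.readings)) ** 0.5 or 1.0
    outlier = min(abs(current - mu) / (3 * sd), 1.0)
    # (iii) deviation from the locally predicted context value
    deviation = min(abs(current - dev.predicted) / (abs(dev.predicted) + 1e-9), 1.0)
    return (expert + outlier + deviation) / 3.0   # equal weights, an assumption

def infer_event(devices, belief_cut=0.5):
    """Form a cluster of devices that believe in the event and aggregate their beliefs."""
    beliefs = [local_belief(d) for d in devices]
    cluster = [b for b in beliefs if b >= belief_cut]
    return bool(cluster) and mean(cluster) >= belief_cut

devices = [Device(i, [20.0, 21.0, 22.0, 45.0], predicted=22.5) for i in range(3)]
print(infer_event(devices))  # True: every device sees the same localized spike
```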

    Knowledge-based Analogical Reasoning in Neuro-symbolic Latent Spaces

    Analogical reasoning problems challenge both connectionist and symbolic AI systems, as they entail a combination of background knowledge, reasoning, and pattern recognition. While symbolic systems ingest explicit domain knowledge and perform deductive reasoning, they are sensitive to noise and require inputs to be mapped to preset symbolic features. Connectionist systems, on the other hand, can directly ingest rich input spaces such as images, text, or speech and recognize patterns even with noisy inputs. However, connectionist models struggle to incorporate explicit domain knowledge for deductive reasoning. In this paper, we propose a framework that combines the pattern recognition abilities of neural networks with symbolic reasoning and background knowledge for solving a class of analogical reasoning problems where the set of attributes and possible relations across them are known a priori. We take inspiration from the 'neural algorithmic reasoning' approach [DeepMind 2020] and use problem-specific background knowledge by (i) learning a distributed representation based on a symbolic model of the problem, (ii) training neural-network transformations reflective of the relations involved in the problem, and finally (iii) training a neural-network encoder from images to the distributed representation in (i). These three elements enable us to perform search-based reasoning using neural networks as elementary functions manipulating distributed representations. We test this on visual analogy problems in RAVEN's Progressive Matrices and achieve accuracy competitive with human performance and, in certain cases, superior to initial end-to-end neural-network-based approaches. While recent neural models trained at scale yield state-of-the-art results, our novel neuro-symbolic reasoning approach is a promising direction for this problem, and is arguably more general, especially for problems where domain knowledge is available.

    Comment: 13 pages, 4 figures. Accepted at the 16th International Workshop on Neural-Symbolic Learning and Reasoning, part of the 2nd International Joint Conference on Learning & Reasoning (IJCLR 2022).
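
    The sketch below illustrates the search-based reasoning step described above, under the assumption that panels are already mapped into a distributed attribute representation. The encoder stub, relation networks, dimensions, and distance-based selection are illustrative stand-ins, not the paper's trained components.

```python
import numpy as np

DIM = 16                                   # assumed representation size
rng = np.random.default_rng(0)

def encode_panel(image):
    """Stand-in for the trained image-to-representation encoder (step iii)."""
    return rng.standard_normal(DIM)        # placeholder embedding only

class RelationNet:
    """Stand-in for one trained transformation reflecting a known relation (step ii)."""
    def __init__(self):
        self.W = 0.1 * rng.standard_normal((DIM, DIM))
    def apply(self, rep):
        return np.tanh(self.W @ rep)

def solve(context_panels, candidate_panels, relations):
    """Search-based reasoning: apply each relation to the last context panel and
    pick the candidate whose representation is closest to a prediction."""
    reps = [encode_panel(p) for p in context_panels]
    cand_reps = [encode_panel(c) for c in candidate_panels]
    best, best_dist = 0, float("inf")
    for rel in relations:
        predicted = rel.apply(reps[-1])
        for i, cand in enumerate(cand_reps):
            dist = float(np.linalg.norm(predicted - cand))
            if dist < best_dist:
                best, best_dist = i, dist
    return best

answer = solve(context_panels=[None] * 8, candidate_panels=[None] * 8,
               relations=[RelationNet() for _ in range(4)])
print(answer)   # index of the chosen candidate panel
```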

    Fuzzy reasoning spiking neural P system for fault diagnosis

    Spiking neural P systems (SN P systems) are well established as a novel class of distributed parallel computing models, and several of their features are attractive for fault diagnosis. However, many fault diagnosis applications require handling fuzzy diagnosis knowledge and fuzzy reasoning, and this lack of capability is a major problem when applying existing SN P systems to the fault diagnosis domain. We therefore extend SN P systems by introducing new ingredients (three types of neurons, fuzzy logic, and a new firing mechanism) and propose fuzzy reasoning spiking neural P systems (FRSN P systems). FRSN P systems are particularly suitable for modelling the fuzzy production rules of a fuzzy diagnosis knowledge base and their reasoning process. Moreover, a parallel fuzzy reasoning algorithm based on FRSN P systems is developed according to the neurons' dynamic firing mechanism. Finally, a practical example of transformer fault diagnosis is used to demonstrate the feasibility and effectiveness of the proposed FRSN P systems for fault diagnosis problems.

    Ministerio de Ciencia e Innovación TIN2009–13192; Junta de Andalucía P08-TIC-0420
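
    As a rough illustration of how fuzzy production rules can be fired in parallel in the spirit of FRSN P systems, the sketch below uses generic min/max fuzzy operators and certainty factors. The rule set, fault labels, and operators are assumptions chosen for illustration, not the paper's exact firing mechanism.

```python
# fuzzy truth values of input (proposition) neurons, e.g. gas-level symptoms
truth = {"high_CH4": 0.8, "high_C2H2": 0.6, "high_H2": 0.3}

# fuzzy production rules: (antecedent propositions, consequent, certainty factor)
rules = [
    (["high_CH4", "high_C2H2"], "arc_discharge_fault", 0.9),
    (["high_H2"], "partial_discharge_fault", 0.7),
]

def fire(truth, rules):
    """One parallel reasoning step: each rule neuron fires with the minimum of its
    inputs weighted by its certainty factor; consequent neurons aggregate by max."""
    out = {}
    for antecedents, consequent, cf in rules:
        strength = min(truth[a] for a in antecedents) * cf
        out[consequent] = max(out.get(consequent, 0.0), strength)
    return out

result = fire(truth, rules)
print({fault: round(level, 2) for fault, level in result.items()})
# {'arc_discharge_fault': 0.54, 'partial_discharge_fault': 0.21}
```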

    The global environmental agenda urgently needs a semantic web of knowledge

    Progress on key social-ecological challenges of the global environmental agenda (e.g., climate change, biodiversity conservation, the Sustainable Development Goals) is hampered by a lack of integration and synthesis of existing scientific evidence. Facing a fast-increasing volume of data, information remains compartmentalized into pre-defined scales and fields, rarely building its way up to collective knowledge. Today's distributed corpus of human intelligence, including the scientific publication system, cannot be exploited with the efficiency needed to meet current evidence-synthesis challenges; computer-based intelligence could assist this task. Artificial Intelligence (AI)-based approaches underpinned by semantics and machine reasoning offer a constructive way forward, but they depend on greater understanding of these technologies by the science and policy communities and on coordination of their use. By labelling web-based scientific information so that it becomes readable by both humans and computers, machines can search, organize, reuse, combine, and synthesize information quickly and in novel ways. Modern open-science infrastructure, i.e., public data and model repositories, is a useful starting point, but without shared semantics and common standards for machine-actionable data and models, our collective ability to build, grow, and share a knowledge base will remain limited. The application of semantic and machine reasoning technologies by a broad community of scientists and decision makers will favour open synthesis, allowing knowledge to be contributed, reused, and applied to decision making.

    Connectionist Inference Models

    The performance of symbolic inference tasks has long been a challenge to connectionists. In this paper, we present an extended survey of this area. Existing connectionist inference systems are reviewed, with particular reference to how they perform variable binding and rule-based reasoning, and whether they involve distributed or localist representations. The benefits and disadvantages of different representations and systems are outlined, and conclusions are drawn regarding the capabilities of connectionist inference systems when compared with symbolic inference systems or when used for cognitive modeling.
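
    One classic variable-binding scheme discussed in this literature is tensor-product binding of role and filler vectors. The short sketch below shows the idea with arbitrary random vectors; it is a generic illustration, not drawn from any specific system surveyed in the paper.

```python
import numpy as np

rng = np.random.default_rng(1)
role_agent = rng.standard_normal(8)     # distributed code for the role "agent"
filler_john = rng.standard_normal(8)    # distributed code for the filler "John"

# binding: the outer product of the role and filler vectors
binding = np.outer(role_agent, filler_john)

# unbinding: project the binding back through the role vector
recovered = binding.T @ role_agent / (role_agent @ role_agent)

print(np.allclose(recovered, filler_john))   # True: the filler is recovered
```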

    Fourteenth Biennial Status Report: March 2017 - February 2019


    NLProlog: Reasoning with Weak Unification for Question Answering in Natural Language

    Rule-based models are attractive for various tasks because they inherently lead to interpretable and explainable decisions and can easily incorporate prior knowledge. However, such systems are difficult to apply to problems involving natural language due to its linguistic variability. In contrast, neural models can cope very well with ambiguity by learning distributed representations of words and their composition from data, but they lead to models that are difficult to interpret. In this paper, we describe a model that combines neural networks with logic programming in a novel manner for solving multi-hop reasoning tasks over natural language. Specifically, we propose to use a Prolog prover that we extend to utilize a similarity function over pretrained sentence encoders. We fine-tune the representations for the similarity function via backpropagation. This leads to a system that can apply rule-based reasoning to natural language and induce domain-specific rules from training data. We evaluate the proposed system on two different question answering tasks, showing that it outperforms two baselines, BiDAF (Seo et al., 2016a) and FastQA (Weissenborn et al., 2017b), on a subset of the WikiHop corpus and achieves competitive results on the MedHop data set (Welbl et al., 2017).

    Comment: ACL 2019
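
    A minimal sketch of the weak-unification idea follows: two predicates unify softly when the similarity of their embeddings exceeds a threshold. The toy embeddings, threshold, and hard cut-off are assumptions for illustration and do not reproduce NLProlog's actual scoring.

```python
import numpy as np

emb = {                                   # stand-ins for pretrained sentence encodings
    "is located in": np.array([0.9, 0.1, 0.0]),
    "lies within":   np.array([0.8, 0.2, 0.1]),
    "acted in":      np.array([0.0, 0.1, 0.9]),
}

def similarity(a, b):
    """Cosine similarity between the encodings of two predicate phrases."""
    va, vb = emb[a], emb[b]
    return float(va @ vb / (np.linalg.norm(va) * np.linalg.norm(vb)))

def weak_unify(goal_pred, fact_pred, threshold=0.7):
    """Return a soft unification score instead of a hard match/fail decision."""
    score = similarity(goal_pred, fact_pred)
    return score if score >= threshold else None

# a rule's premise phrased differently from the stored fact still unifies softly
print(weak_unify("is located in", "lies within"))   # ~0.98 -> unifies
print(weak_unify("is located in", "acted in"))      # None  -> fails
```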

    Identification of Design Principles

    This report identifies those design principles for a (possibly new) query and transformation language for the Web supporting inference that are considered essential. Based upon these design principles, an initial strawman is selected. Scenarios for querying the Semantic Web illustrate the design principles and their reflection in the initial strawman, i.e., a first draft of the query language to be designed and implemented by the REWERSE working group I4.

    A Generic Conceptual Model for Risk Analysis in a Multi-agent Based Collaborative Design Environment

    Organised by: Cranfield University

    This paper presents a generic conceptual model of risk evaluation for managing risk through related constraints and variables in a multi-agent collaborative design environment. Initially, a hierarchical constraint network is developed to map constraints and variables. Then, an effective approximation technique, the Risk Assessment Matrix, is adopted to evaluate risk level and rank priority after probability quantification and consequence validation. Additionally, an Intelligent Data-based Reasoning Methodology is expanded to deal with risk mitigation by combining inductive learning methods and reasoning consistency algorithms with feasible solution strategies. Finally, two empirical studies were conducted to validate the effectiveness and feasibility of the conceptual model.

    Mori Seiki – The Machine Tool Company
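
    The sketch below shows the kind of Risk Assessment Matrix lookup and ranking the abstract refers to: risk level is read from qualitative probability and consequence ratings, and named risks are then prioritised. The five-point scales, level labels, and example risks are illustrative assumptions, not the paper's calibration.

```python
# rows: probability rating 1 (rare) .. 5 (almost certain)
# columns: consequence rating 1 (negligible) .. 5 (catastrophic)
MATRIX = [
    ["low",    "low",    "low",    "medium",  "medium"],
    ["low",    "low",    "medium", "medium",  "high"],
    ["low",    "medium", "medium", "high",    "high"],
    ["medium", "medium", "high",   "high",    "extreme"],
    ["medium", "high",   "high",   "extreme", "extreme"],
]
SEVERITY = {"low": 0, "medium": 1, "high": 2, "extreme": 3}

def risk_level(probability, consequence):
    """Look up the qualitative risk level from the two ratings (1-5 each)."""
    return MATRIX[probability - 1][consequence - 1]

def rank(risks):
    """Rank named risks, given as {name: (probability, consequence)}, most severe first."""
    return sorted(((name, risk_level(p, c)) for name, (p, c) in risks.items()),
                  key=lambda item: SEVERITY[item[1]], reverse=True)

print(rank({"late supplier data": (4, 3),       # likely, moderate consequence
            "tolerance clash": (2, 5),          # unlikely, catastrophic consequence
            "minor rework": (2, 2)}))
# [('late supplier data', 'high'), ('tolerance clash', 'high'), ('minor rework', 'low')]
```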

    AI Solutions for MDS: Artificial Intelligence Techniques for Misuse Detection and Localisation in Telecommunication Environments

    This report considers the application of Artificial Intelligence (AI) techniques to the problem of misuse detection and misuse localisation within telecommunications environments. A broad survey of techniques is provided, covering, inter alia, rule-based systems, model-based systems, case-based reasoning, pattern matching, clustering and feature extraction, artificial neural networks, genetic algorithms, artificial immune systems, agent-based systems, data mining, and a variety of hybrid approaches. The report then considers the central issue of event correlation, which is at the heart of many misuse detection and localisation systems. The notion of inferring misuse by correlating individual, temporally distributed events within a multiple-data-stream environment is explored, and a range of techniques is examined, covering model-based approaches, 'programmed' AI, and machine learning paradigms. It is found that, in general, correlation is best achieved via rule-based approaches, but that these suffer from a number of drawbacks, such as the difficulty of developing and maintaining an appropriate knowledge base and the inability to generalise from known misuses to new, unseen misuses. Two distinct approaches are evident. One attempts to encode knowledge of known misuses, typically within rules, and uses this to screen events. This approach cannot generally detect misuses for which it has not been programmed, i.e. it is prone to issuing false negatives. The other attempts to learn the features of event patterns that constitute normal behaviour and, by observing patterns that do not match expected behaviour, detect when a misuse has occurred. This approach is prone to issuing false positives, i.e. inferring misuse from innocent patterns of behaviour that the system was not trained to recognise. Contemporary approaches favour hybridisation, often combining detection or localisation mechanisms for both abnormal and normal behaviour, the former to capture known cases of misuse, the latter to capture unknown cases. In some systems, these mechanisms even update each other to increase detection rates and lower false positive rates. It is concluded that hybridisation offers the most promising future direction, but that a rule- or state-based component is likely to remain, being the most natural approach to the correlation of complex events. The challenge, then, is to mitigate the weaknesses of canonical programmed systems so that learning, generalisation, and adaptation are more readily facilitated.
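
    As a concrete illustration of the rule-based event correlation discussed in the report, the sketch below fires when a prescribed pattern of events co-occurs within a time window across data streams. The event names, rule, and window are hypothetical and do not represent a real misuse signature.

```python
from dataclasses import dataclass

@dataclass
class Event:
    stream: str       # e.g. "auth", "billing"
    name: str
    timestamp: float  # seconds

def correlate(events, pattern, window):
    """Fire when all event names in `pattern` occur within `window` seconds."""
    times = {}
    for ev in sorted(events, key=lambda e: e.timestamp):
        if ev.name in pattern:
            times[ev.name] = ev.timestamp
            if len(times) == len(pattern) and \
               max(times.values()) - min(times.values()) <= window:
                return True
    return False

stream = [Event("auth", "failed_login", 0.0),
          Event("auth", "failed_login", 5.0),
          Event("billing", "premium_call_burst", 20.0)]
print(correlate(stream, ["failed_login", "premium_call_burst"], window=60.0))  # True
```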