
    Applying spatial reasoning to topographical data with a grounded geographical ontology

    Grounding an ontology upon geographical data has been proposed as a method of handling the vagueness in the domain more effectively. In order to do this, we require methods of reasoning about the spatial relations between the regions within the data. This stage can be computationally expensive, as we require information on the location of points in relation to each other. This paper illustrates how using knowledge about regions allows us to reduce the computation required in an efficient and easy-to-understand manner. Further, we show how this system can be implemented in coordination with segmented data to reason about …
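    The pruning idea described in the abstract can be illustrated with a minimal sketch (our own illustration, not the paper's actual system): coarse knowledge about regions, here just bounding boxes, lets us answer many disjointness questions without the expensive point-by-point comparison.

```python
# Hypothetical sketch: using coarse region knowledge (bounding boxes)
# to prune expensive point-by-point overlap checks between regions.

def bounding_box(points):
    xs = [x for x, _ in points]
    ys = [y for _, y in points]
    return min(xs), min(ys), max(xs), max(ys)

def boxes_disjoint(a, b):
    ax0, ay0, ax1, ay1 = a
    bx0, by0, bx1, by1 = b
    return ax1 < bx0 or bx1 < ax0 or ay1 < by0 or by1 < ay0

def regions_overlap(r1, r2):
    # Cheap pre-check: if the bounding boxes are disjoint,
    # the regions cannot possibly overlap.
    if boxes_disjoint(bounding_box(r1), bounding_box(r2)):
        return False
    # Only now fall back to the expensive exact test
    # (here simplified to: do the point sets share a point?).
    return bool(set(r1) & set(r2))

a = [(0, 0), (1, 0), (1, 1)]
b = [(5, 5), (6, 5), (6, 6)]
print(regions_overlap(a, b))  # False, decided by the cheap pre-check alone
```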

    String-based Multi-adjoint Lattices for Tracing Fuzzy Logic Computations

    Classically, most programming languages use, in a predefined way, the notion of “string” as a standard data structure for the comfortable management of arbitrary sequences of characters. However, in this paper we assign a different role to this concept: here we are concerned with fuzzy logic programming, a somewhat recent paradigm trying to introduce fuzzy logic into logic programming. In this setting, the mathematical concept of multi-adjoint lattice has been successfully exploited in the so-called Multi-adjoint Logic Programming approach, MALP in brief, for modeling flexible notions of truth degrees beyond the simpler case of true and false. Our main goal is not only to formally prove that string-based lattices satisfy the so-called multi-adjoint property (as does their Cartesian product with similar structures), but also to show their correspondence with interesting debugging tasks in the FLOPER system (“Fuzzy LOgic Programming Environment for Research”) developed in our research group.
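    As a rough illustration of the tracing idea (our own sketch; the paper's string-based lattices are richer than this), a numeric truth degree can be paired with a string component that records which connective produced it, so evaluating a goal accumulates a readable trace of the computation:

```python
# Hypothetical sketch: pairing numeric truth degrees with trace strings,
# in the spirit of using strings to trace fuzzy logic computations.

def conj(a, b):
    # Product t-norm on the numeric part; the string part records
    # the operation applied and its operands' traces.
    (x, sx), (y, sy) = a, b
    return (x * y, f"&prod({sx},{sy})")

v1 = (0.8, "0.8")
v2 = (0.5, "0.5")
value, trace = conj(v1, v2)
print(value)  # 0.4
print(trace)  # &prod(0.8,0.5)
```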

    The PITA System: Tabling and Answer Subsumption for Reasoning under Uncertainty

    Many real-world domains require the representation of a measure of uncertainty. The most common such representation is probability, and the combination of probability with logic programs has given rise to the field of Probabilistic Logic Programming (PLP), leading to languages such as the Independent Choice Logic, Logic Programs with Annotated Disjunctions (LPADs), ProbLog, PRISM and others. These languages share a similar distribution semantics, and methods have been devised to translate programs between these languages. The complexity of computing the probability of queries to these general PLP programs is very high due to the need to combine the probabilities of explanations that may not be exclusive. As one alternative, the PRISM system reduces the complexity of query answering by restricting the form of programs it can evaluate. As an entirely different alternative, Possibilistic Logic Programs adopt a simpler metric of uncertainty than probability. Each of these approaches -- general PLP, restricted PLP, and Possibilistic Logic Programming -- can be useful in different domains depending on the form of uncertainty to be represented, on the form of programs needed to model problems, and on the scale of the problems to be solved. In this paper, we show how the PITA system, which originally supported the general PLP language of LPADs, can also efficiently support restricted PLP and Possibilistic Logic Programs. PITA relies on tabling with answer subsumption and consists of a transformation along with an API for library functions that interface with answer subsumption.
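    The complexity point in the abstract, that explanations may not be exclusive, can be seen in a small sketch (ours, not PITA's implementation): naively summing explanation probabilities double-counts the possible worlds in which several explanations hold, so an exact method must, in effect, enumerate or factor the worlds.

```python
from itertools import product

# Hypothetical sketch: a query succeeds if explanation {a} or
# explanation {b} holds, with independent probabilistic facts a and b.
# Summing P(a) + P(b) would double-count worlds where both hold.

probs = {"a": 0.4, "b": 0.3}
explanations = [{"a"}, {"b"}]

def query_prob(probs, explanations):
    total = 0.0
    facts = sorted(probs)
    # Enumerate all possible worlds (exponential in general -- the
    # source of the high complexity the abstract mentions).
    for world in product([True, False], repeat=len(facts)):
        truth = dict(zip(facts, world))
        p = 1.0
        for f, t in truth.items():
            p *= probs[f] if t else 1 - probs[f]
        if any(all(truth[f] for f in e) for e in explanations):
            total += p
    return total

print(round(query_prob(probs, explanations), 10))  # 0.58 = 0.4 + 0.3 - 0.12
```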

    Connectionist Inference Models

    The performance of symbolic inference tasks has long been a challenge to connectionists. In this paper, we present an extended survey of this area. Existing connectionist inference systems are reviewed, with particular reference to how they perform variable binding and rule-based reasoning, and whether they involve distributed or localist representations. The benefits and disadvantages of different representations and systems are outlined, and conclusions drawn regarding the capabilities of connectionist inference systems when compared with symbolic inference systems or when used for cognitive modeling.

    CHR as grammar formalism. A first report

    Grammars written as Constraint Handling Rules (CHR) can be executed as efficient and robust bottom-up parsers that provide a straightforward, non-backtracking treatment of ambiguity. Abduction with integrity constraints as well as other dynamic hypothesis generation techniques fit naturally into such grammars and are exemplified for anaphora resolution, coordination and text interpretation.
    Comment: 12 pages. Presented at ERCIM Workshop on Constraints, Prague, Czech Republic, June 18-20, 200
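    The bottom-up, fixpoint flavour of CHR parsing can be imitated in a few lines (a loose Python analogy, not actual CHR, with an invented toy grammar): token constraints go into a store, and rules keep combining adjacent spans until nothing new can be derived.

```python
# Hypothetical sketch: CHR-style bottom-up parsing as a fixpoint over a
# constraint store of (category, from, to) spans.

rules = {("det", "noun"): "np", ("np", "verb"): "s"}  # toy grammar

def parse(tokens):
    # Each token contributes a constraint covering one position.
    store = {(c, i, i + 1) for i, c in enumerate(tokens)}
    changed = True
    while changed:  # run propagation to a fixpoint, as CHR rules do
        changed = False
        for (c1, i, j) in list(store):
            for (c2, j2, k) in list(store):
                if j == j2 and (c1, c2) in rules:
                    new = (rules[(c1, c2)], i, k)
                    if new not in store:
                        store.add(new)
                        changed = True
    return store

store = parse(["det", "noun", "verb"])
print(("s", 0, 3) in store)  # True: a sentence spanning the whole input
```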

    Analyzing Fuzzy Logic Computations with Fuzzy XPath

    Using the FLOPER tool developed in our research group, we have recently designed a fuzzy dialect of the popular XPath language, implemented in a fuzzy logic language, for the flexible manipulation of XML documents. In this paper we focus on the ability of Fuzzy XPath to explore derivation trees generated by FLOPER once they are exported in XML format, which serves as a debugging/analyzing tool for discovering the set of fuzzy computed answers for a given goal, performing depth-first/breadth-first traversals of its associated derivation tree, finding branches that are not fully evaluated, etc., thus reinforcing the bilateral synergies between Fuzzy XPath and FLOPER.
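    Once a derivation tree is exported as XML, the depth-first and breadth-first explorations mentioned above reduce to ordinary tree walks. A minimal sketch (with an invented XML shape, not FLOPER's actual export format):

```python
import xml.etree.ElementTree as ET
from collections import deque

# Hypothetical derivation tree in XML form (invented structure).
xml = """<node goal="p(X)">
  <node goal="q(X)"><node goal="true"/></node>
  <node goal="r(X)"/>
</node>"""
root = ET.fromstring(xml)

def depth_first(n):
    # Pre-order walk: visit a node, then recurse into its children.
    yield n.get("goal")
    for c in n:
        yield from depth_first(c)

def breadth_first(n):
    # Level-order walk using a FIFO queue.
    queue = deque([n])
    while queue:
        cur = queue.popleft()
        yield cur.get("goal")
        queue.extend(cur)

print(list(depth_first(root)))    # ['p(X)', 'q(X)', 'true', 'r(X)']
print(list(breadth_first(root)))  # ['p(X)', 'q(X)', 'r(X)', 'true']
```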

    Architectural Uncertainty Analysis for Access Control Scenarios in Industry 4.0

    Industrie 4.0 systems are characterized by their high complexity, connectivity, and extensive data exchange. Given these properties, it is crucial to ensure the confidentiality of data. A commonly used mechanism for ensuring confidentiality is access control. Based on a modelled software architecture, access control can be applied to the system conceptually already at design time. This makes it possible to identify potential confidentiality problems early and offers the opportunity to analyze the impact of what-if scenarios on confidentiality before the corresponding changes are implemented. However, uncertainties in the system environment, arising from ambiguities in the early phases of development or from the abstract view of the software architecture model, can directly affect existing access control policies and lead to reduced confidentiality. To mitigate this, it is important to identify and handle uncertainties. In this work, we present our approach to handling access control uncertainties at design time. We create a characterization of uncertainties in access control at the architectural level to gain a better understanding of the existing types of uncertainty. Based on this, we define a concept of trust in the validity of access control properties. This concept offers a way to deal with uncertainties that have already been described in publications on access control models. The concept of trust is a composition of environmental factors that influence the validity of, and consequently the trust in, access control properties.
To combine environmental factors and thus obtain trust values for access control properties, we use fuzzy inference systems. These trust values are then taken into account by an analysis process to identify issues arising from a lack of trust. We extend an existing design-time approach for analyzing information flow and access control based on data flow diagrams. The knowledge added by our concept of trust is intended to enable software architects to improve the quality of their models and to verify the access control requirements of their systems in early phases of software development while taking uncertainties into account. We evaluate the applicability of our approach in terms of the availability of the necessary data in different phases of software development, as well as the potential added value for existing systems. We measure the accuracy of the analysis in identifying issues, and its scalability in terms of execution time when different model aspects are scaled up individually.
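    A fuzzy inference step of the kind described, combining environmental factors into a trust value, might look like the following toy sketch (our own Mamdani-style illustration with invented factors, memberships, and rules, not the thesis's actual inference system):

```python
# Hypothetical sketch: two environmental factors on [0, 1] are combined
# into a trust value via min for rule firing, with weighted-average
# defuzzification over the rule consequents.

def low(x):  return max(0.0, 1.0 - x)  # membership of "low" on [0, 1]
def high(x): return max(0.0, x)        # membership of "high" on [0, 1]

def trust(data_quality, env_stability):
    # Rule 1: IF quality is high AND stability is high THEN trust is high (1.0)
    r1 = min(high(data_quality), high(env_stability))
    # Rule 2: IF quality is low OR stability is low THEN trust is low (0.0)
    r2 = max(low(data_quality), low(env_stability))
    # Weighted average of the consequents by rule firing strength.
    return (r1 * 1.0 + r2 * 0.0) / (r1 + r2)

print(trust(0.9, 0.8))  # roughly 0.8: strong but not total trust
```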