
    Inductive Logic Programming in Databases: from Datalog to DL+log

    In this paper we address an issue that has been brought to the attention of the database community with the advent of the Semantic Web, i.e. the issue of how ontologies (and the semantics conveyed by them) can help solve typical database problems through a better understanding of KR aspects related to databases. In particular, we investigate this issue from the ILP perspective by considering two database problems, (i) the definition of views and (ii) the definition of constraints, for a database whose schema is also represented by means of an ontology. Both can be reformulated as ILP problems and can benefit from the expressive and deductive power of the KR framework DL+log. We illustrate the application scenarios by means of examples. Keywords: Inductive Logic Programming, Relational Databases, Ontologies, Description Logics, Hybrid Knowledge Representation and Reasoning Systems. Note: To appear in Theory and Practice of Logic Programming (TPLP). Comment: 30 pages, 3 figures, 2 tables.
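    To make the reformulation concrete, here is a hypothetical toy sketch in Python rather than the paper's DL+log framework (the relations, candidate rule, and examples are invented): casting the definition of a view as an ILP problem amounts to finding a rule whose derived tuples cover the positive examples and exclude the negative ones.

```python
# Hypothetical illustration only: relations, rule, and examples are invented,
# not taken from the paper. An ILP learner searches a space of candidate rules;
# here we just evaluate one candidate view definition against the examples.

# Extensional database: author(Person, Paper) and journal_paper(Paper).
author = {("ann", "p1"), ("bob", "p2"), ("ann", "p3")}
journal_paper = {"p1", "p3"}

def journal_author():
    """Candidate view: journal_author(X) :- author(X, P), journal_paper(P)."""
    return {person for (person, paper) in author if paper in journal_paper}

positives = {"ann"}   # persons the intended view must include
negatives = {"bob"}   # persons the intended view must exclude

derived = journal_author()
print("derived view extension:", derived)
print("complete (covers all positives):", positives <= derived)
print("consistent (covers no negatives):", not (negatives & derived))
```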

    Four Lessons in Versatility or How Query Languages Adapt to the Web

    Exposing not only human-centered information but also machine-processable data on the Web is one of the commonalities of recent Web trends. It has enabled new kinds of applications and businesses where the data is used in ways not foreseen by the data providers. Yet this exposure has fractured the Web into islands of data, each in a different Web format: some providers choose XML, others RDF, still others JSON or OWL for their data, even in similar domains. This fracturing stifles innovation, as application builders have to cope not with one Web stack (e.g., XML technology) but with several, each of considerable complexity. With Xcerpt we have developed a rule- and pattern-based query language that aims to shield application builders from much of this complexity: in a single query language, XML and RDF data can be accessed, processed, combined, and re-published. Though the need for combined access to XML and RDF data has been recognized in previous work (including the W3C’s GRDDL), our approach differs in four main aspects: (1) We provide a single language (rather than two separate or embedded languages), thus minimizing the conceptual overhead of dealing with disparate data formats. (2) Both the declarative (logic-based) and the operational semantics are unified in that they apply to querying XML and RDF in the same way. (3) We show that the resulting query language can be implemented by reusing traditional database technology, if desirable. (4) Nevertheless, we also give a unified evaluation approach based on interval labelings of graphs that is at least as fast as existing approaches for tree-shaped XML data, yet provides linear-time and linear-space querying for many RDF graphs as well. We believe that Web query languages are the right tool for declarative data access in Web applications and that Xcerpt is a significant step towards more convenient, yet highly efficient, data access in a “Web of Data”.
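    The interval-labeling idea mentioned above can be illustrated with a minimal sketch (the classic start/end labeling for trees, not Xcerpt's actual graph algorithm; node names are invented): after one depth-first pass, ancestor/descendant tests reduce to constant-time interval containment checks.

```python
# A minimal sketch of start/end interval labeling for trees; Xcerpt's evaluation
# extends this idea to graphs, which is not shown here.

def label(tree, root):
    """Assign (start, end) intervals by DFS; tree maps node -> list of children."""
    intervals, counter = {}, [0]
    def dfs(node):
        start = counter[0]; counter[0] += 1
        for child in tree.get(node, []):
            dfs(child)
        intervals[node] = (start, counter[0])
        counter[0] += 1
    dfs(root)
    return intervals

def is_descendant(intervals, a, b):
    """True if b's interval lies strictly inside a's, i.e. a is an ancestor of b."""
    sa, ea = intervals[a]
    sb, eb = intervals[b]
    return sa < sb and eb < ea

# Toy XML-like document: <doc><sec><p/></sec><sec/></doc>
tree = {"doc": ["sec1", "sec2"], "sec1": ["p"]}
iv = label(tree, "doc")
print(is_descendant(iv, "doc", "p"))   # True
print(is_descendant(iv, "sec2", "p"))  # False
```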

    Representational information: a new general notion and measure of information

    In what follows, we introduce the notion of representational information (information conveyed by sets of dimensionally defined objects about their superset of origin) as well as an original deterministic mathematical framework for its analysis and measurement. The framework, based in part on categorical invariance theory [Vigo, 2009], unifies three key constructs of universal science – invariance, complexity, and information. From this unification we define the amount of information that a well-defined set of objects R carries about its finite superset of origin S as the rate of change in the structural complexity of S (as determined by its degree of categorical invariance) whenever the objects in R are removed from the set S. The measure captures deterministically the significant role that context and category structure play in determining the relative quantity and quality of subjective information conveyed by particular objects in multi-object stimuli.
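    A purely schematic sketch of the measure's shape (the structural_complexity function below is a crude placeholder; Vigo's actual measure is derived from the degree of categorical invariance and is not reproduced here): representational information is read off as the change in the structural complexity of S when the objects in R are removed.

```python
# Schematic only: the complexity function is a placeholder stand-in, and the
# feature vectors are invented for illustration.

def structural_complexity(category):
    """Placeholder: count the feature dimensions that still vary within the set."""
    if not category:
        return 0
    return sum(1 for dimension in zip(*category) if len(set(dimension)) > 1)

def representational_information(R, S):
    """Change in structural complexity of S caused by removing the objects in R."""
    remaining = [obj for obj in S if obj not in R]
    return structural_complexity(S) - structural_complexity(remaining)

# Objects as binary feature vectors over (shape, color, size).
S = [(0, 0, 0), (0, 1, 0), (1, 0, 1), (1, 1, 1)]
R = [(1, 0, 1), (1, 1, 1)]
print(representational_information(R, S))  # removing R collapses two dimensions
```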

    Cirquent calculus deepened

    Cirquent calculus is a new proof-theoretic and semantic framework whose main distinguishing feature is being based on circuits, as opposed to the more traditional approaches that deal with tree-like objects such as formulas or sequents. Among its advantages are greater efficiency, flexibility and expressiveness. This paper presents a detailed elaboration of a deep-inference cirquent logic, which is naturally and inherently resource-conscious. It shows that classical logic, both syntactically and semantically, is just a special, conservative fragment of this more general and, in a sense, more basic logic -- the logic of resources in the form of cirquent calculus. The reader will find various arguments in favor of switching to the new framework, such as arguments showing the insufficiency of the expressive power of linear logic or other formula-based approaches to developing resource logics, the exponential improvements over the traditional approaches in both representational and proof complexities offered by cirquent calculus, and more. Among the main purposes of this paper is to provide an introductory-style starting point for what, as the author wishes to hope, might have a chance to become a new line of research in proof theory -- a proof theory based on circuits instead of formulas. Comment: Significant improvements over the previous version.
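    The claim about representational compactness can be illustrated with a toy calculation (this is not cirquent-calculus syntax, only the generic circuit-versus-formula contrast the abstract appeals to): a structure that needs exponentially many nodes as a formula tree needs only linearly many nodes when shared subparts are represented once, as in a circuit.

```python
# Toy contrast only: F_0 = p and F_k = F_{k-1} AND F_{k-1}, written either as a
# formula tree (subformulas duplicated) or as a circuit/DAG (subformulas shared).

def tree_size(n):
    """Number of nodes of F_n written out as a formula tree."""
    return 1 if n == 0 else 1 + 2 * tree_size(n - 1)

def dag_size(n):
    """Number of nodes of F_n when each F_k appears once and is shared."""
    return n + 1

for n in (2, 10, 20):
    print(f"n={n}: tree={tree_size(n)}, circuit={dag_size(n)}")
```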

    Constructing neural network models from brain data reveals representational transformations linked to adaptive behavior

    The human ability to adaptively implement a wide variety of tasks is thought to emerge from the dynamic transformation of cognitive information. We hypothesized that these transformations are implemented via conjunctive activations in “conjunction hubs”—brain regions that selectively integrate sensory, cognitive, and motor activations. We used recent advances in functional connectivity-based mapping of activity flow between brain regions to construct a task-performing neural network model from fMRI data recorded during a cognitive control task. We verified the importance of conjunction hubs in cognitive computations by simulating neural activity flow over this empirically-estimated functional connectivity model. These empirically-specified simulations produced above-chance task performance (motor responses) by integrating sensory and task-rule activations in conjunction hubs. These findings reveal the role of conjunction hubs in supporting flexible cognitive computations, while demonstrating the feasibility of using empirically-estimated neural network models to gain insight into cognitive computations in the human brain.
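    A minimal sketch of the activity-flow principle the abstract describes (the random data, array shapes, and variable names are illustrative assumptions, not the study's actual pipeline): activity in a held-out region is predicted as the connectivity-weighted sum of activity in all other regions.

```python
# Illustrative data only: random activations and connectivity stand in for the
# empirically estimated quantities used in the study.
import numpy as np

rng = np.random.default_rng(0)
n_regions = 50
activity = rng.standard_normal(n_regions)          # task activation per region
fc = rng.standard_normal((n_regions, n_regions))   # functional connectivity weights
np.fill_diagonal(fc, 0)                            # no self-connections

def activity_flow(activity, fc, target):
    """Predict the target region's activity from all other regions' activity."""
    others = np.arange(len(activity)) != target
    return activity[others] @ fc[others, target]

predicted = np.array([activity_flow(activity, fc, j) for j in range(n_regions)])
print(predicted.shape)  # one predicted activation per region
```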

    Contextualizing concepts using a mathematical generalization of the quantum formalism

    We outline the rationale and preliminary results of using the State Context Property (SCOP) formalism, originally developed as a generalization of quantum mechanics, to describe the contextual manner in which concepts are evoked, used, and combined to generate meaning. The quantum formalism was developed to cope with problems arising in the description of (1) the measurement process, and (2) the generation of new states with new properties when particles become entangled. Similar problems arising with concepts motivated the formal treatment introduced here. Concepts are viewed not as fixed representations, but as entities existing in states of potentiality that require interaction with a context---a stimulus or another concept---to `collapse' to an observable form such as an exemplar, prototype, or other (possibly imaginary) instance. The stimulus situation plays the role of the measurement in physics, acting as a context that induces a change of the cognitive state from a superposition state to a collapsed state. The collapsed state is more likely to consist of a conjunction of concepts in associative than in analytic thought, because more stimulus or concept properties take part in the collapse. We provide two contextual measures of conceptual distance---one using collapse probabilities and the other using weighted properties---and show how they can be applied to conjunctions using the pet fish problem.
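    A toy numerical illustration of the collapse mechanism described above (the exemplar basis, amplitudes, and context are invented; SCOP itself is more general than a plain Hilbert-space model): the concept state is a unit vector, the context acts as a projector, and the collapse probability follows the Born rule.

```python
# Invented numbers for illustration only; they encode no empirical data.
import numpy as np

# Concept "pet" as a superposition over three exemplar states.
exemplars = ["cat", "dog", "goldfish"]
psi = np.array([0.70, 0.60, 0.39])
psi = psi / np.linalg.norm(psi)                 # normalize the concept state

# Context "lives in a bowl" modeled as a projector onto the goldfish exemplar.
P = np.zeros((3, 3))
P[2, 2] = 1.0

collapse_prob = float(psi @ P @ psi)            # Born rule: <psi|P|psi>
collapsed = P @ psi
collapsed = collapsed / np.linalg.norm(collapsed)

print("P(collapse to goldfish | context):", round(collapse_prob, 3))
print("post-collapse state:", dict(zip(exemplars, collapsed.round(2))))
```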