    Dynamical systems via domains: Toward a unified foundation of symbolic and non-symbolic computation

    Non-symbolic computation (as, e.g., in biological and artificial neural networks) is astonishingly good at learning and processing noisy real-world data. However, it lacks the kind of understanding we have of symbolic computation (as, e.g., specified by programming languages). Like symbolic computation, non-symbolic computation needs a semantics—or behavior description—to achieve structural understanding. Domain theory has provided this for symbolic computation, and this thesis is about extending it to non-symbolic computation. Symbolic and non-symbolic computation can be described in a unified framework as state-discrete and state-continuous dynamical systems, respectively. So we need a semantics for dynamical systems: assigning to a dynamical system a domain—i.e., a certain mathematical structure—describing the system’s behavior. In part 1 of the thesis, we provide this domain-theoretic semantics for the ‘symbolic’ state-discrete systems (i.e., labeled transition systems). In part 2, we do the same for the ‘non-symbolic’ state-continuous systems (known from ergodic theory). This is a proper semantics in that the constructions form functors (in the sense of category theory) and, once appropriately formulated, even adjunctions and, stronger yet, equivalences. In part 3, we explore how this semantics relates the two types of computation. It suggests that non-symbolic computation is the limit of symbolic computation (in the ‘profinite’ sense). Conversely, if the system’s behavior is fairly stable, it may be described as realizing symbolic computation (here the concepts of ergodicity and algorithmic randomness are useful). However, the underlying concept of stability is limited by a no-go result due to a novel interpretation of Fitch’s paradox. This also has implications for AI safety and, more generally, suggests fruitful applications of philosophical tools to the non-symbolic computation of modern AI.
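    To make the framing concrete, here is a minimal, illustrative sketch (not taken from the thesis) of the two kinds of systems it unifies: a state-discrete labeled transition system and a state-continuous iterated map, both presented through the same step-and-trace view. All names and encodings below are hypothetical.

```python
# Illustrative sketch only: both 'symbolic' and 'non-symbolic' computation
# can be viewed as dynamical systems, i.e., states plus a transition rule.
# Nothing here is from the thesis; names and encodings are hypothetical.

# A state-discrete system: a labeled transition system over finite states.
lts = {
    "s0": {"a": "s1", "b": "s0"},
    "s1": {"a": "s0", "b": "s1"},
}

def run_lts(state, word):
    """Follow the labels in `word`; the visited states form the trace (behavior)."""
    trace = [state]
    for label in word:
        state = lts[state][label]
        trace.append(state)
    return trace

# A state-continuous system: iteration of a map on the unit interval.
def logistic(x, r=3.7):
    return r * x * (1.0 - x)

def run_map(x0, steps):
    """Iterate the map; the orbit plays the role of the trace above."""
    orbit = [x0]
    for _ in range(steps):
        orbit.append(logistic(orbit[-1]))
    return orbit

print(run_lts("s0", "abba"))   # visited states along the word "abba"
print(run_map(0.2, 4))         # first few points of a continuous orbit
```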

    A CHR-based Implementation of Known Arc-Consistency

    In classical CLP(FD) systems, the domains of variables are completely known at the beginning of the constraint propagation process. However, in systems interacting with an external environment, acquiring the whole domains of variables before constraint propagation begins may waste computation time, or the acquired data may even be obsolete by the time it is used. For such cases, the Interactive Constraint Satisfaction Problem (ICSP) model has been proposed as an extension of the CSP model: it makes it possible to start constraint propagation even when domains are not fully known, acquiring domain elements only when necessary, and without restarting propagation after every acquisition. In this paper, we show how a solver for the two-sorted CLP language defined in previous work to express ICSPs has been implemented in the Constraint Handling Rules (CHR) language, a declarative language particularly suitable for high-level implementation of constraint solvers. Comment: 22 pages, 2 figures, 1 table. To appear in Theory and Practice of Logic Programming (TPLP).
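    As a rough illustration of the ICSP idea (a Python sketch, not the paper's CHR solver), the code below runs AC-3-style propagation but asks an acquisition callback for domain values only when a domain runs dry; all identifiers are hypothetical.

```python
# Illustrative sketch only (not the paper's CHR solver): AC-3-style
# propagation where domain values are acquired from the environment
# on demand, in the spirit of the Interactive CSP (ICSP) model.
from collections import deque

def interactive_ac(variables, constraints, acquire):
    """constraints maps an arc (x, y) to a predicate ok(value_of_x, value_of_y);
    acquire(var) returns a (possibly empty) list of newly sensed values."""
    domains = {v: set() for v in variables}
    queue = deque(constraints)

    def top_up(v):
        # Ask the environment for values only when the domain is empty.
        if not domains[v]:
            domains[v].update(acquire(v))
            if domains[v]:
                # Freshly acquired values must be checked against all arcs of v.
                queue.extend(arc for arc in constraints if v in arc)
        return bool(domains[v])

    while queue:
        x, y = queue.popleft()
        if not (top_up(x) and top_up(y)):
            return None  # environment exhausted: no solution
        ok = constraints[(x, y)]
        unsupported = {a for a in domains[x]
                       if not any(ok(a, b) for b in domains[y])}
        if unsupported:
            domains[x] -= unsupported
            if not top_up(x):
                return None
            # dom(x) changed, so arcs pointing into x need another look.
            queue.extend((z, w) for (z, w) in constraints if w == x)
    return domains

# Usage: enforce X < Y, sensing one value per acquisition request.
fresh = {"X": [3, 1], "Y": [2]}
sense_one = lambda v: [fresh[v].pop(0)] if fresh[v] else []
print(interactive_ac(
    ["X", "Y"],
    {("X", "Y"): lambda a, b: a < b, ("Y", "X"): lambda a, b: b < a},
    acquire=sense_one,
))  # -> {'X': {1}, 'Y': {2}}
```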

    Set Unification

    The unification problem in algebras capable of describing sets has been tackled, directly or indirectly, by many researchers, and it finds important applications in various research areas--e.g., deductive databases, theorem proving, static analysis, rapid software prototyping. The various solutions proposed are spread across a large literature. In this paper we provide a uniform presentation of set unification, formalizing it at the level of set theory. We address the problem of deciding the existence of solutions at an abstract level. This also provides the ability to classify different types of set unification problems. Unification algorithms are uniformly proposed to solve the unification problem in each of these classes. The algorithms presented are partly drawn from the literature--and properly revisited and analyzed--and partly novel proposals. In particular, we present a new goal-driven algorithm for general ACI1 unification and a new, simpler algorithm for general (Ab)(Cl) unification. Comment: 58 pages, 9 figures, 1 table. To appear in Theory and Practice of Logic Programming (TPLP).
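    For intuition only (this is not one of the algorithms in the paper), set unification is inherently nondeterministic: a flat equation such as {X, Y} = {a, b} admits several independent solutions. The brute-force Python sketch below, with hypothetical names, enumerates them for the special case of a ground right-hand side; real set-unification algorithms, of course, avoid this blind enumeration.

```python
# Illustrative brute-force sketch (not one of the paper's algorithms):
# solve a single flat equation  {t1,...,tn} = {c1,...,cm}  where the
# left-hand terms are variables or constants and the right-hand side
# is ground.  Set semantics: duplicates and order are irrelevant.
from itertools import product

def is_var(t):
    return isinstance(t, str) and t[:1].isupper()

def set_unify(lhs, rhs):
    """Yield substitutions (dicts) making set(lhs) equal to set(rhs)."""
    variables = sorted({t for t in lhs if is_var(t)})
    for values in product(sorted(rhs), repeat=len(variables)):
        subst = dict(zip(variables, values))
        image = {subst.get(t, t) for t in lhs}  # apply the substitution
        if image == set(rhs):
            yield subst

# {X, Y} = {a, b} has exactly the two expected solutions:
print(list(set_unify(["X", "Y"], ["a", "b"])))
# -> [{'X': 'a', 'Y': 'b'}, {'X': 'b', 'Y': 'a'}]
```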

    A new model construction by making a detour via intuitionistic theories II: Interpretability lower bound of Feferman's explicit mathematics T0

    We partially solve a long-standing problem in the proof theory of explicit mathematics, and in proof theory in general. Namely, we give a lower bound for Feferman’s system T0 of explicit mathematics (but only when formulated over classical logic), with a concrete interpretation of the subsystem Σ^1_2-AC + (BI) of second order arithmetic inside T0. Whereas a lower bound proof in the sense of proof-theoretic reducibility or of ordinal analysis was already given in the 1980s, the lower bound in the sense of interpretability given here is new. We apply the new interpretation method developed by the author and Zumbrunnen (2015), which can be seen as the third kind of model construction method for classical theories, after Cohen’s forcing and Krivine’s classical realizability. It yields an interpretation between classical theories by composing interpretations between intuitionistic theories.

    Theory of partial-order programming

    This paper shows the use of partial-order program clauses and lattice domains for declarative programming. This paradigm is particularly useful for expressing concise solutions to problems from graph theory, program analysis, and database querying. These applications are characterized by a need to solve circular constraints and perform aggregate operations, a capability that is clearly and efficiently provided by partial-order clauses. We present a novel approach to their declarative and operational semantics, as well as to the correctness of the operational semantics. The declarative semantics is model-theoretic in nature, but the least model for a function is not the classical intersection of all models; rather, it is the greatest lower bound/least upper bound of the respective terms defined for this function in the different models. The operational semantics combines top-down goal reduction with memo tables. In the partial-order programming framework, however, memoization is primarily needed to detect circular function calls. In general, we need more than simple memoization when functions are defined circularly in terms of one another through monotonic functions. In such cases, we accumulate a set of functional constraints and solve them by a general fixed-point-finding procedure. To prove the correctness of memoization, a straightforward induction on the length of the derivation does not suffice because of the presence of the memo table. However, since the entries in the table grow monotonically, we identify a suitable table invariant that captures the correctness of the derivation. The partial-order programming paradigm has been implemented, and all examples shown in this paper have been tested using this implementation.
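    To give a feel for the circular-constraint and fixed-point flavor described above (a sketch under simplifying assumptions, not the paper's implementation), shortest distances can be defined circularly through the monotone aggregate min and computed by iterating to a fixed point over the lattice of distance maps; all names below are hypothetical.

```python
# Sketch only (not the paper's implementation): a circularly defined
# aggregate -- shortest distance -- solved by iterating a monotone
# operator over the lattice of distance maps until a fixed point.
import math

def shortest_distances(edges, source):
    """edges: dict (u, v) -> weight.  dist(v) = min over edges (u, v) of
    dist(u) + w(u, v), with dist(source) = 0 -- a circular definition."""
    nodes = {source} | {u for u, _ in edges} | {v for _, v in edges}
    dist = {v: math.inf for v in nodes}   # 'no information yet' for every node
    dist[source] = 0
    changed = True
    while changed:                        # fixed-point iteration
        changed = False
        for (u, v), w in edges.items():
            if dist[u] + w < dist[v]:     # monotone improvement in the order
                dist[v] = dist[u] + w
                changed = True
    return dist

graph = {("a", "b"): 1, ("b", "c"): 2, ("a", "c"): 5, ("c", "a"): 1}
print(shortest_distances(graph, "a"))  # a: 0, b: 1, c: 3 (key order may vary)
```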

    On abduction and answer generation through constrained resolution

    Recently, extensions of constrained logic programming and constrained resolution for theorem proving have been introduced that consider constraints interpreted under an open world assumption. We discuss relationships between applications of these approaches for query answering in knowledge base systems on the one hand and abduction-based hypothetical reasoning on the other. We show both that constrained resolution can be used as an operationalization of (some limited form of) abduction, and that abduction is the logical status of an answer generation process through constrained resolution, i.e., it is an abductive, not a deductive, form of reasoning.
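    As a toy illustration of the abductive reading (not the constrained-resolution machinery of the paper), the sketch below explains an observation over propositional Horn rules by collecting the abducible hypotheses needed to derive it; the rule names and API are hypothetical, and cyclic rule sets are not handled.

```python
# Toy illustration only (not the paper's constrained-resolution calculus):
# abduction over acyclic propositional Horn rules -- find sets of abducible
# hypotheses whose assumption lets the observation be derived.
def abduce(goal, rules, abducibles):
    """rules: dict head -> list of alternative bodies (each a list of atoms).
    Returns a list of hypothesis sets (frozensets of abducibles)."""
    def prove(atom):
        if atom in abducibles:
            return [frozenset([atom])]          # assume it as a hypothesis
        explanations = []
        for body in rules.get(atom, []):
            partial = [frozenset()]
            for sub in body:                    # conjoin hypotheses for the body
                partial = [h | h2 for h in partial for h2 in prove(sub)]
            explanations.extend(partial)
        return explanations
    return prove(goal)

rules = {"wet_grass": [["rain"], ["sprinkler"]], "rain": [["cloudy"]]}
print(abduce("wet_grass", rules, abducibles={"cloudy", "sprinkler"}))
# -> [frozenset({'cloudy'}), frozenset({'sprinkler'})]
```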