    Unifying Functional Interpretations: Past and Future

    This article surveys work done in the last six years on the unification of various functional interpretations, including G\"odel's dialectica interpretation, its Diller-Nahm variant, Kreisel's modified realizability, Stein's family of functional interpretations, functional interpretations "with truth", and bounded functional interpretations. Our goal in the present paper is twofold: (1) to look back and single out the main lessons learnt so far, and (2) to look forward and list several open questions and possible directions for further research.

    Comment: 18 pages
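
    For orientation, the shape of the interpretations being unified can be indicated in standard textbook notation (not drawn from the survey itself): the dialectica interpretation assigns to each formula $A$ a quantifier-free $A_D$ with $A^D = \exists x\, \forall y\, A_D(x, y)$, and interprets implication as

        $(A \to B)^D \;=\; \exists f, g\, \forall x, w\, \big( A_D(x, g(x, w)) \to B_D(f(x), w) \big)$,

    so that witnesses for $B$ are computed by functionals from witnesses for $A$; the variants listed above differ chiefly in how they treat this implication clause and contraction.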

    The Challenge of Unifying Semantic and Syntactic Inference Restrictions

    While syntactic inference restrictions don't play an important role for SAT, they are an essential reasoning technique for more expressive logics, such as first-order logic or fragments thereof. In particular, they can result in short proofs or compact model representations. On the other hand, semantically guided inference systems enjoy important properties, such as the generation of solely non-redundant clauses. I discuss to what extent the two paradigms may be unifiable.
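
    As a concrete instance of a syntactic restriction (a textbook example, not taken from the abstract itself): ordered resolution fixes a term ordering $\succ$ and only allows the inference

        $\dfrac{C \lor A \qquad D \lor \lnot B}{(C \lor D)\sigma}$, where $\sigma = \mathrm{mgu}(A, B)$,

    when $A\sigma$ and $\lnot B\sigma$ are maximal in their respective premises under $\succ$. Blocking inferences on non-maximal literals rules out many permutations of the same derivation, which is one way such restrictions yield short proofs.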

    The Structure of Matter in Spacetime from the Substructure of Time

    The nature of the change in perspective that accompanies the proposal of a unified physical theory deriving from the single dimension of time is elaborated. On expressing a temporal interval in a multi-dimensional form, via a direct arithmetic decomposition, both the geometric structure of 4-dimensional spacetime and the physical structure of matter in spacetime can be derived from the substructure of time. While reviewing this construction, here we emphasise how the new conceptual picture differs from the more typical viewpoint in theoretical physics of accounting for the properties of matter by first postulating entities on top of a given spacetime background or by geometrically augmenting 4-dimensional spacetime itself. With reference to historical and philosophical sources we argue that the proposed perspective, centred on the possible arithmetic forms of time, provides an account of how the mathematical structures of the theory can relate directly to the physical structures of the empirical world.

    Comment: 32 pages, 2 figures
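
    The simplest example of such an arithmetic decomposition (standard special relativity, stated here only for illustration) is the quadratic form of a proper time interval $s$,

        $s^2 = t^2 - x^2 - y^2 - z^2 \qquad (c = 1)$,

    read not as a property of a presupposed spacetime but as one possible multi-dimensional arithmetic form of the one-dimensional interval $s$, out of which the 4-dimensional Minkowski structure is recovered.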

    Generalised Proper Time as a Unifying Basis for Models with Two Right-Handed Neutrinos

    Models with two right-handed neutrinos are able to accommodate solar and atmospheric neutrino oscillation observations as well as a mechanism for the baryon asymmetry of the universe. While economical in terms of the required new states beyond the Standard Model, given that there are three generations of the other leptons and quarks, this raises the question of why only two right-handed neutrino states should exist. Here we develop from first principles a fundamental unification scheme based upon a direct generalisation and analysis of a simple proper time interval, with a structure beyond that of local 4-dimensional spacetime and further augmenting that of models with extra spatial dimensions. This theory leads to properties of matter fields that resemble the Standard Model, with an intrinsic left-right asymmetry which is particularly marked for the neutrino sector. It will be shown how the theory can provide a foundation for the natural incorporation of two right-handed neutrinos and may in principle underlie firm predictions both in the neutrino sector and for other new physics beyond the Standard Model. While connecting with contemporary and future experiments, the origins of the theory are motivated in a similar spirit to the earliest unified field theories.

    Comment: 68 pages, 2 figures
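
    For context, in such models the light neutrino masses typically arise from a type-I seesaw (a standard formula, not specific to this paper),

        $m_\nu \simeq -\, m_D\, M_R^{-1}\, m_D^T$,

    where $m_D$ is the $3 \times 2$ Dirac mass matrix and $M_R$ the $2 \times 2$ Majorana mass matrix of the two heavy states; since $m_\nu$ then has rank at most two, the lightest neutrino is predicted to be massless, a characteristic signature of the minimal two right-handed neutrino scenario.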

    Optimization and NP_R-Completeness of Certain Fewnomials

    We give a high precision polynomial-time approximation scheme for the supremum of any honest n-variate (n+2)-nomial with a constant term, allowing real exponents as well as real coefficients. Our complexity bounds count field operations and inequality checks, and are polynomial in n and the logarithm of a certain condition number. For the special case of polynomials (i.e., integer exponents), the log of our condition number is quadratic in the sparse encoding. The best previous complexity bounds were exponential in the sparse encoding, even for n fixed. Along the way, we extend the theory of A-discriminants to real exponents and certain exponential sums, and find new and natural NP_R-complete problems.

    Comment: 9 pages, 7 figures (3 of them tiny). This is close to the final conference proceedings version
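
    The objects involved are easy to state in code, even though the paper's approximation scheme itself is not reproduced here. A minimal sketch, with hypothetical coefficients and exponents and a generic local optimizer standing in for the authors' polynomial-time scheme, of an n-variate (n+2)-nomial and a numerical approximation of its supremum over the positive orthant:

        import numpy as np
        from scipy.optimize import minimize

        # An (n+2)-nomial: n variables, n+2 terms, real coefficients and real
        # exponents, including a constant term. Example data (hypothetical);
        # this instance is bounded above, with its maximum at x = (2, 2).
        n = 2
        coeffs = np.array([1.0, 8.0, -1.0, -1.0])   # n+2 = 4 terms
        expons = np.array([[0.0, 0.0],              # constant term
                           [0.5, 0.5],
                           [2.0, 0.0],
                           [0.0, 2.0]])

        def fewnomial(x):
            # f(x) = sum_i c_i * prod_j x_j ** a_ij  on the positive orthant
            return coeffs @ np.prod(x ** expons, axis=1)

        # The substitution x_j = exp(u_j) keeps x strictly positive and makes
        # each term log-linear in u; we then minimize -f to approximate sup f.
        res = minimize(lambda u: -fewnomial(np.exp(u)),
                       x0=np.zeros(n), method="Nelder-Mead")
        print("approximate supremum:", -res.fun, "at x =", np.exp(res.x))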

    Computing Stable Models of Normal Logic Programs Without Grounding

    We present a method for computing stable models of normal logic programs, i.e., logic programs extended with negation, in the presence of predicates with arbitrary terms. Such programs need not have a finite grounding, so traditional methods do not apply. Our method relies on the use of a non-Herbrand universe, as well as coinduction, constructive negation, and a number of other novel techniques. Using our method, a normal logic program with predicates can be executed directly under the stable model semantics, without requiring it to be grounded either before or during execution and without requiring that its variables range over a finite domain. As a result, our method is quite general and supports the use of terms as arguments, including lists and complex data structures. A prototype implementation and non-trivial applications have been developed to demonstrate the feasibility of our method.
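
    For contrast with this grounding-free approach, the standard grounded notion it generalises is easy to state in code: a candidate set of atoms is a stable model exactly when it equals the least model of its own Gelfond-Lifschitz reduct. A brute-force sketch for a finite ground program (a textbook construction, not the paper's method; the program encoding is made up for the example):

        from itertools import chain, combinations

        # A ground normal program as rules (head, positive_body, negative_body).
        # Hypothetical example:  p :- not q.   q :- not p.   r :- p.
        program = [("p", [], ["q"]),
                   ("q", [], ["p"]),
                   ("r", ["p"], [])]
        atoms = {"p", "q", "r"}

        def reduct(program, model):
            # Gelfond-Lifschitz reduct: delete rules whose negative body meets
            # the candidate model; strip the negative body from the rest.
            return [(h, pos) for (h, pos, neg) in program
                    if not set(neg) & model]

        def least_model(definite_rules):
            # Least model of a definite program by naive fixpoint iteration.
            m, changed = set(), True
            while changed:
                changed = False
                for h, pos in definite_rules:
                    if set(pos) <= m and h not in m:
                        m.add(h)
                        changed = True
            return m

        def stable_models(program, atoms):
            # Brute force over all candidate sets of atoms.
            subsets = chain.from_iterable(
                combinations(sorted(atoms), r) for r in range(len(atoms) + 1))
            return [set(s) for s in subsets
                    if least_model(reduct(program, set(s))) == set(s)]

        print(stable_models(program, atoms))   # [{'q'}, {'p', 'r'}]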

    Calculational semantics: deriving programming theories from equations by functional predicate calculus

    Heisenberg and the Levels of Reality

    We first analyze the transdisciplinary model of Reality and its key concept of "Levels of Reality". We then compare this model with the one elaborated by Werner Heisenberg in 1942.

    Comment: 12 pages. Reference added to the journal in which the paper is published

    Approximation and Estimation for High-Dimensional Deep Learning Networks

    It has been experimentally observed in recent years that multi-layer artificial neural networks have a surprising ability to generalize, even when trained with far more parameters than observations. Is there a theoretical basis for this? The best available bounds on their metric entropy and associated complexity measures are essentially linear in the number of parameters, which is inadequate to explain this phenomenon. Here we examine the statistical risk (mean squared predictive error) of multi-layer networks with $\ell^1$-type controls on their parameters and with ramp activation functions (also called lower-rectified linear units). In this setting, the risk is shown to be upper bounded by $[(L^3 \log d)/n]^{1/2}$, where $d$ is the input dimension to each layer, $L$ is the number of layers, and $n$ is the sample size. In this way, the input dimension can be much larger than the sample size and the estimator can still be accurate, provided the target function has such $\ell^1$ controls and the sample size is at least moderately large compared to $L^3 \log d$. The heart of the analysis is the development of a sampling strategy that demonstrates the accuracy of a sparse covering of deep ramp networks. Lower bounds show that the identified risk is close to being optimal.
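
    Spelled out in generic notation (a plausible formalization for orientation, not necessarily the authors' exact conventions), the networks in question compose ramp activations $\phi(z) = \max(0, z)$,

        $f(x) = W_L\, \phi\big( W_{L-1}\, \phi( \cdots \phi(W_1 x) \cdots ) \big)$,

    with the $\ell^1$-type control taken as a bound on the absolute row sums of each weight matrix $W_\ell$, and the statistical risk of an estimator $\hat f$ of a target $f^*$ measured as $\mathbb{E}\,[(\hat f(X) - f^*(X))^2]$.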

    Automatic Differentiation using Constraint Handling Rules in Prolog

    Automatic differentiation is a technique which allows a programmer to define a numerical computation via compositions of a broad range of numeric and computational primitives, and to have the underlying system support the computation of partial derivatives of the result with respect to any of its inputs, without making any finite difference approximations and without manipulating large symbolic expressions representing the computation. This note describes a novel approach to reverse mode automatic differentiation using constraint logic programming, specifically the constraint handling rules (CHR) library of SWI-Prolog, resulting in a very small (50 lines of code) implementation. When applied to a differentiation-based implementation of the inside-outside algorithm for parameter learning in probabilistic grammars, the CHR-based implementation outperformed two well-known frameworks for optimising differentiable functions, Theano and TensorFlow, by a large margin.
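
    The CHR program itself is not reproduced in this listing, but the technique it implements can be sketched compactly. A minimal tape-based reverse-mode AD in Python (a generic illustration, unrelated to the note's 50-line CHR implementation):

        import math

        class Var:
            """A graph node recording a value and its parents' local derivatives."""
            def __init__(self, value, parents=()):
                self.value = value
                self.parents = parents   # pairs (parent_node, d(self)/d(parent))
                self.grad = 0.0

            def __add__(self, other):
                other = other if isinstance(other, Var) else Var(other)
                return Var(self.value + other.value, [(self, 1.0), (other, 1.0)])

            def __mul__(self, other):
                other = other if isinstance(other, Var) else Var(other)
                return Var(self.value * other.value,
                           [(self, other.value), (other, self.value)])

        def log(x):
            return Var(math.log(x.value), [(x, 1.0 / x.value)])

        def backward(output):
            # Reverse sweep: push adjoints from the output back to every input
            # in reverse topological order of the computation graph.
            order, seen = [], set()
            def topo(node):
                if id(node) not in seen:
                    seen.add(id(node))
                    for parent, _ in node.parents:
                        topo(parent)
                    order.append(node)
            topo(output)
            output.grad = 1.0
            for node in reversed(order):
                for parent, local in node.parents:
                    parent.grad += node.grad * local

        # Example: f(a, b) = a*b + log(a), so df/da = b + 1/a and df/db = a.
        a, b = Var(2.0), Var(3.0)
        f = a * b + log(a)
        backward(f)
        print(f.value, a.grad, b.grad)   # 6.6931..., 3.5, 2.0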