
    Distilling Abstract Machines (Long Version)

    It is well known that many environment-based abstract machines can be seen as strategies in lambda calculi with explicit substitutions (ES). Recently, graphical syntaxes and linear logic led to the linear substitution calculus (LSC), a new approach to ES that is halfway between big-step calculi and traditional calculi with ES. This paper studies the relationship between the LSC and environment-based abstract machines. While traditional calculi with ES simulate abstract machines, the LSC rather distills them: some transitions are simulated while others vanish, as they map to a notion of structural congruence. The distillation process unveils that abstract machines in fact implement weak linear head reduction, a notion of evaluation having a central role in the theory of linear logic. We show that such a pattern applies uniformly in call-by-name, call-by-value, and call-by-need, catching many machines in the literature. We start by distilling the KAM, the CEK, and the ZINC, and then provide simplified versions of the SECD, the lazy KAM, and Sestoft's machine. Along the way we also introduce some new machines with global environments. Moreover, we show that distillation preserves the time complexity of the executions, i.e. the LSC is a complexity-preserving abstraction of abstract machines.
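    To make the kind of machine being distilled concrete, here is a minimal sketch of the KAM, the first machine treated in the paper: a call-by-name machine whose states pair a de Bruijn term with a local environment and an argument stack. The encoding is illustrative, not the paper's exact presentation.

    ```ocaml
    (* A minimal KAM sketch: de Bruijn terms, closures, call-by-name. *)
    type term = Var of int | Lam of term | App of term * term

    type closure = Clo of term * closure list  (* a term with its environment *)

    let rec run (t : term) (env : closure list) (stack : closure list) : closure =
      match t, stack with
      | Var n, _ ->
          let (Clo (t', env')) = List.nth env n in
          run t' env' stack                       (* lookup: dereference the env *)
      | Lam b, c :: rest -> run b (c :: env) rest (* beta: bind the argument *)
      | Lam _, [] -> Clo (t, env)                 (* weak head normal form *)
      | App (f, a), _ -> run f env (Clo (a, env) :: stack)  (* push argument *)

    (* Example: (\x. x) (\y. y) evaluates to the identity closure. *)
    let _ = run (App (Lam (Var 0), Lam (Var 0))) [] []
    ```

    Roughly, in the distillation reading the search transition (the push step) is the one that vanishes into the structural congruence, while the beta and lookup steps correspond to actual LSC steps.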

    Abstract Machines for Open Call-by-Value

    The theory of the call-by-value λ-calculus relies on weak evaluation and closed terms, which are natural hypotheses in the study of programming languages. To model proof assistants, however, strong evaluation and open terms are required. Open call-by-value is the intermediate setting of weak evaluation with (possibly) open terms, on top of which Grégoire and Leroy designed one of the abstract machines of Coq. This paper provides a theory of abstract machines for the fireball calculus, the simplest presentation of open call-by-value. The literature contains machines that are either simple but inefficient, as they have an exponential overhead, or efficient but heavy, as they rely on a labeling of environments and a technical optimization. We introduce a machine that is simple and efficient: it does not use labels, and it implements the fireball calculus within a bilinear overhead. Moreover, we provide a new, fine-grained understanding of how different optimizations impact the complexity of the overhead, and evidence that the time cost model we work with is minimal.
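    The fireball calculus generalizes values: a fireball is either an abstraction or an inert term, i.e. a free variable possibly applied to fireballs. A minimal sketch of that grammar (the encoding and names are ours, not the paper's):

    ```ocaml
    (* Fireball calculus syntax check: fireball ::= abstraction | inert,
       inert ::= variable applied to zero or more fireballs. *)
    type term = Var of string | Lam of string * term | App of term * term

    let rec is_fireball = function
      | Lam _ -> true
      | t -> is_inert t
    and is_inert = function
      | Var _ -> true
      | App (t, u) -> is_inert t && is_fireball u
      | Lam _ -> false
    ```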

    Universal Intelligence: A Definition of Machine Intelligence

    A fundamental problem in artificial intelligence is that nobody really knows what intelligence is. The problem is especially acute when we need to consider artificial systems which are significantly different from humans. In this paper we approach this problem in the following way: we take a number of well-known informal definitions of human intelligence that have been given by experts, and extract their essential features. These are then mathematically formalised to produce a general measure of intelligence for arbitrary machines. We believe that this equation formally captures the concept of machine intelligence in the broadest reasonable sense. We then show how this formal definition is related to the theory of universal optimal learning agents. Finally, we survey the many other tests and definitions of intelligence that have been proposed for machines.
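    For reference, the measure the abstract alludes to has the following well-known shape: the expected performance of an agent π, summed over all computable reward environments μ and weighted by their Kolmogorov complexity K. Treat this as a reminder of the published formula rather than a verbatim quote.

    ```latex
    % Universal intelligence of an agent \pi: expected total reward
    % V^{\pi}_{\mu} over the class E of computable environments,
    % each weighted by its Kolmogorov complexity K(\mu).
    \Upsilon(\pi) \;=\; \sum_{\mu \in E} 2^{-K(\mu)} \, V^{\pi}_{\mu}
    ```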

    On the enumeration of closures and environments with an application to random generation

    Environments and closures are two of the main ingredients of evaluation in lambda-calculus. A closure is a pair consisting of a lambda-term and an environment, whereas an environment is a list of lambda-terms assigned to free variables. In this paper we investigate some dynamic aspects of evaluation in lambda-calculus, considering the quantitative, combinatorial properties of environments and closures. Focusing on two classes of environments and closures, namely the so-called plain and closed ones, we consider the problem of their asymptotic counting and effective random generation. We provide an asymptotic approximation of the number of both plain environments and closures of size n. Using the associated generating functions, we construct effective samplers for both classes of combinatorial structures. Finally, we discuss the related problem of asymptotic counting and random generation of closed environments and closures.
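    The two combinatorial classes can be encoded directly from the abstract's definitions; the size function below is one natural choice for "size n" (the paper's exact size notion may differ):

    ```ocaml
    (* De Bruijn terms; an environment is a list of terms, a closure is a
       term paired with an environment, as in the abstract. *)
    type term = Var of int | Lam of term | App of term * term

    type environment = term list
    type closure = term * environment

    (* One natural size notion: index n costs n + 1, each constructor costs 1. *)
    let rec size = function
      | Var n -> n + 1
      | Lam t -> 1 + size t
      | App (t, u) -> 1 + size t + size u

    let env_size (e : environment) = List.fold_left (fun acc t -> acc + size t) 0 e
    let closure_size ((t, e) : closure) = size t + env_size e
    ```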

    A Strong Distillery

    Abstract machines for the strong evaluation of lambda-terms (that is, under abstractions) are a mostly neglected topic, despite their use in the implementation of proof assistants and higher-order logic programming languages. This paper introduces a machine for the simplest form of strong evaluation, leftmost-outermost (call-by-name) evaluation to normal form, proving it correct and complete, and bounding its overhead. Such a machine, dubbed the Strong Milner Abstract Machine, is a variant of the KAM computing normal forms and using just one global environment. Its properties are studied via a special form of decoding, called a distillation, into the Linear Substitution Calculus, neatly reformulating the machine as a standard micro-step strategy for explicit substitutions, namely linear leftmost-outermost reduction, i.e., the extension to normal form of linear head reduction. Additionally, the overhead of the machine is shown to be linear both in the number of steps and in the size of the initial term, validating its design. The study highlights two distinguishing features of strong machines, namely backtracking phases and their interactions with abstractions and environments.
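    A rough sketch of what a state of a machine with a single global environment looks like: code under focus, a stack recording the surrounding context (including frames for backtracking over already-evaluated heads), and one global list of delayed substitutions. The representation is illustrative, not the paper's exact one.

    ```ocaml
    (* Illustrative state shape for a global-environment strong machine. *)
    type term = Var of string | Lam of string * term | App of term * term

    type stack_frame =
      | Arg of term   (* an argument awaiting its head *)
      | Fun of term   (* an already-normalized head, kept while backtracking *)

    type state = {
      code  : term;                  (* subterm currently under focus *)
      stack : stack_frame list;      (* surrounding evaluation context *)
      env   : (string * term) list;  (* the single global environment *)
    }
    ```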

    Computation Environments, An Interactive Semantics for Turing Machines (which P is not equal to NP considering it)

    To scrutinize notions of computation and time complexity, we introduce and formally define an interactive model of computation that we call the computation environment. A computation environment consists of two main parts: i) a universal processor and ii) a computist, who uses the computability power of the universal processor to perform effective procedures. The notion of computation finds its meaning, for the computist, through his interaction with the universal processor. We are interested in those computation environments which can be considered alternatives to the real computation environment, the one whose computist is the human being. Such computation environments must have two properties: 1) being physically plausible, and 2) being sufficiently powerful. Based on Copeland's criteria for effective procedures, we define what a physically plausible computation environment is. We construct two physically plausible and sufficiently powerful computation environments: 1) the Turing computation environment, denoted by E_T, and 2) a persistently evolutionary computation environment, denoted by E_e, which persistently evolves in the course of executing computations. We prove that the equality of the complexity classes P and NP in the computation environment E_e conflicts with the free will of the computist. We provide an axiomatic system T for Turing computability and prove that if just one of the axioms of T is dropped, it is not possible to derive P = NP from the remaining axioms. Finally, we prove that a computist who lives inside the environment E_T can never be certain whether he lives in a static environment or in a persistently evolutionary one.

    Approximations of Algorithmic and Structural Complexity Validate Cognitive-behavioural Experimental Results

    We apply methods for estimating the algorithmic complexity of sequences to behavioural sequences from three landmark studies of animal behaviour, each of increasing sophistication: foraging communication by ants, flight patterns of fruit flies, and tactical deception and competition strategies in rodents. In each case, we demonstrate that approximations of Logical Depth and Kolmogorov-Chaitin complexity capture and validate previously reported results, in contrast to other measures such as Shannon entropy, compression, or ad hoc measures. Our method is practically useful when dealing with short sequences, such as those often encountered in cognitive-behavioural research. Our analysis supports and reveals non-random behaviour (by LD and K complexity) in flies, even in the absence of external stimuli, and confirms the "stochastic" behaviour of transgenic rats when faced with a competitor they cannot defeat by counter-prediction. The method constitutes a formal approach for testing hypotheses about the mechanisms underlying animal behaviour.
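    In standard notation (ours, not necessarily the paper's exact formulation), the two quantities being approximated are Kolmogorov-Chaitin complexity and Bennett's logical depth:

    ```latex
    % K(s): length of a shortest program p producing s on a universal
    % (prefix) machine U. LD_b(s): Bennett's logical depth at significance
    % level b, the fastest running time among near-minimal programs for s.
    K(s) = \min \{\, |p| : U(p) = s \,\}
    \qquad
    LD_b(s) = \min \{\, \mathrm{time}(U, p) : U(p) = s,\; |p| \le K(s) + b \,\}
    ```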

    Modeling, Simulation and Emulation of Intelligent Domotic Environments

    Intelligent Domotic Environments are a promising approach, based on semantic models and commercial off-the-shelf domotic technologies, to realizing new intelligent buildings, but their complexity requires innovative design methodologies and tools for ensuring correctness. Suitable simulation and emulation approaches and tools must be adopted to allow designers to experiment with their ideas and to incrementally verify designed policies in a scenario where the environment is partly emulated and partly composed of real devices. This paper describes a framework that exploits UML 2.0 state diagrams for the automatic generation of device simulators from ontology-based descriptions of domotic environments. The DogSim simulator may simulate a complete building automation system in software, or may be integrated into the Dog Gateway, allowing partial simulation of virtual devices alongside real devices. Experiments on a real home show that the approach is feasible and can easily address both simulation and emulation requirements.
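    To give an idea of what a generated device simulator amounts to, here is a hypothetical fragment in the spirit of the UML 2.0 state diagrams DogSim derives from the ontology: a lamp modeled as a state machine that ignores commands not enabled in its current state. All names are illustrative, not DogSim's actual API.

    ```ocaml
    (* A lamp device simulator as a two-state machine. *)
    type lamp_state = Off | On

    type command = SwitchOn | SwitchOff

    let step (s : lamp_state) (c : command) : lamp_state =
      match s, c with
      | Off, SwitchOn  -> On
      | On,  SwitchOff -> Off
      | s, _ -> s  (* commands not enabled in the current state are ignored *)
    ```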