
    A direct proof of the confluence of combinatory strong reduction

    I give a proof of the confluence of combinatory strong reduction that does not rely on the confluence of the lambda-calculus. I also give simple and direct proofs of a standardization theorem for this reduction and of the strong normalization of simply typed terms. Comment: To appear in TC
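
    As a small illustration of the property being proved, the following Python sketch reduces an S/K applicative term with two different strategies and checks that both reach the same normal form. It implements only ordinary weak combinatory reduction, not the strong reduction treated in the paper, and all names (app, spine, head_step, normalize_*) are illustrative.

        # Toy weak combinatory reduction with the S and K combinators, used
        # only to illustrate confluence on an example (the paper's strong
        # reduction also reduces under abstraction, which this sketch does not).

        S, K = "S", "K"

        def app(f, *args):
            """Left-nested application: app(S, K, K) encodes ((S K) K)."""
            for a in args:
                f = ("app", f, a)
            return f

        def spine(t):
            """Split a term into its head combinator and list of arguments."""
            args = []
            while isinstance(t, tuple):
                args.insert(0, t[2])
                t = t[1]
            return t, args

        def head_step(t):
            """Contract the head redex of t, if there is one; else return None."""
            head, args = spine(t)
            if head == K and len(args) >= 2:            # K x y  ->  x
                return app(args[0], *args[2:])
            if head == S and len(args) >= 3:            # S x y z  ->  x z (y z)
                x, y, z = args[:3]
                return app(app(x, z), app(y, z), *args[3:])
            return None

        def normalize_outermost(t):
            """Contract head redexes first, then normalize the arguments."""
            while True:
                r = head_step(t)
                if r is None:
                    break
                t = r
            head, args = spine(t)
            return app(head, *[normalize_outermost(a) for a in args])

        def normalize_innermost(t):
            """Normalize the arguments first, then contract head redexes."""
            head, args = spine(t)
            t = app(head, *[normalize_innermost(a) for a in args])
            r = head_step(t)
            return normalize_innermost(r) if r is not None else t

        term = app(S, K, K, app(K, S, K))               # S K K (K S K)
        print(normalize_outermost(term) == normalize_innermost(term))   # True: same normal form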

    Equivalence of call-by-name and call-by-need for lambda-calculi with letrec

    We develop a proof method to show that, in a (deterministic) lambda calculus with letrec equipped with contextual equivalence, call-by-name and call-by-need evaluation are equivalent, and also that the unrestricted copy operation is correct. Given a let-binding x = t, the copy operation replaces an occurrence of the variable x by the expression t, regardless of the form of t. This answers unresolved problems in several papers, adds a strong method to the tool set for reasoning about contextual equivalence in higher-order calculi with letrec, and enables a class of transformations that can be used as optimizations. The method can be used in different kinds of lambda calculi with cyclic sharing. It can probably also be used in non-deterministic lambda calculi, provided the variable x is "deterministic", i.e., has no interference with non-deterministic executions. The main technical idea is to use a restricted variant of the infinitary lambda-calculus, whose objects are the expressions unrolled w.r.t. let, to define infinite developments as a reduction calculus on these infinite trees, and to prove a standardization theorem for it.
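
    A minimal sketch of the copy operation on a toy letrec syntax, in Python. The AST encoding and the name copy are illustrative and not the calculus of the paper; for simplicity the sketch replaces every free occurrence of x rather than a single one, and assumes all bound names are distinct so no capture can occur.

        # Toy letrec syntax: variables are strings, ("app", f, a) is application,
        # ("lam", x, body) is abstraction, ("letrec", {x: rhs, ...}, body) is a
        # recursive let. copy(e, x, t) replaces the free occurrences of the
        # let-bound variable x in e by its right-hand side t, whatever t is.

        def copy(e, x, t):
            if isinstance(e, str):
                return t if e == x else e
            tag = e[0]
            if tag == "app":
                return ("app", copy(e[1], x, t), copy(e[2], x, t))
            if tag == "lam":
                return e if e[1] == x else ("lam", e[1], copy(e[2], x, t))
            if tag == "letrec":
                binds, body = e[1], e[2]
                if x in binds:                      # x is shadowed by this letrec
                    return e
                return ("letrec", {y: copy(rhs, x, t) for y, rhs in binds.items()},
                        copy(body, x, t))
            raise ValueError(f"unknown node: {tag!r}")

        # letrec x = (lam y. y) in (x x)
        # copying x yields: letrec x = (lam y. y) in ((lam y. y) (lam y. y))
        prog = ("letrec", {"x": ("lam", "y", "y")}, ("app", "x", "x"))
        binds, body = prog[1], prog[2]
        print(("letrec", binds, copy(body, "x", binds["x"])))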

    An estimation for the lengths of reduction sequences of the λμρθ-calculus

    Since it was realized that the Curry-Howard isomorphism can be extended to the case of classical logic as well, several calculi have appeared as candidates for the encodings of proofs in classical logic. One of the most extensively studied among them is the λμ-calculus of Parigot. In this paper, based on the result of Xi presented for the λ-calculus [Xi], we give an upper bound for the lengths of the reduction sequences in the λμ-calculus extended with the ρ- and θ-rules. Surprisingly, our results show that the new terms and the new rules do not add to the computational complexity of the calculus, despite the fact that μ-abstraction is able to consume an unbounded number of arguments by virtue of the μ-rule.
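
    For readers unfamiliar with Parigot's calculus, the structural rule at issue can be stated as follows (a standard formulation, not quoted from the paper):

        \[
        (\mu\alpha.\, u)\, v \;\to_{\mu}\; \mu\alpha.\, u[\,[\alpha](w\, v) \,/\, [\alpha]w\,]
        \]

    i.e. every named subterm [\alpha]w occurring in u receives the argument v. Iterating the rule on (\mu\alpha.\, u)\, v_1 \cdots v_n pushes all n arguments under the same abstraction, which is how a single μ-abstraction can consume an unbounded number of arguments.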

    (Leftmost-Outermost) Beta Reduction is Invariant, Indeed

    Slot and van Emde Boas' weak invariance thesis states that reasonable machines can simulate each other within a polynomial overhead in time. Is the lambda-calculus a reasonable machine? Is there a way to measure the computational complexity of a lambda-term? This paper presents the first complete positive answer to this long-standing problem. Moreover, our answer is completely machine-independent and based on a standard notion in the theory of the lambda-calculus: the length of a leftmost-outermost derivation to normal form is an invariant cost model. Such a theorem cannot be proved by directly relating the lambda-calculus to Turing machines or random access machines, because of the size explosion problem: there are terms that in a linear number of steps produce an exponentially long output. The first step towards the solution is to shift to a notion of evaluation for which the length and the size of the output are linearly related. This is done by adopting the linear substitution calculus (LSC), a calculus of explicit substitutions modeled after linear logic proof nets and admitting a decomposition of leftmost-outermost derivations with the desired property. Thus, the LSC is invariant with respect to, say, random access machines. The second step is to show that the LSC is invariant with respect to the lambda-calculus. The size explosion problem seems to imply that this is not possible: having the same notions of normal form, evaluation in the LSC is exponentially longer than in the lambda-calculus. We solve this impasse by introducing a new form of shared normal form and shared reduction, deemed useful. Useful evaluation avoids those steps that only unshare the output without contributing to beta-redexes, i.e. the steps that cause the blow-up in size. The main technical contribution of the paper is indeed the definition of useful reductions and the thorough analysis of their properties. Comment: arXiv admin note: substantial text overlap with arXiv:1405.331
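
    A hedged Python sketch of the size explosion phenomenon mentioned above: a toy term family reduced innermost-first, in which n beta steps produce an output of size 2^n. This is not the specific leftmost-outermost family nor the useful-sharing machinery of the paper (the paper exhibits a comparable explosion even for leftmost-outermost evaluation, which is what motivates useful sharing); names such as make_term and DUP are illustrative.

        # Size explosion: make_term(n) has size linear in n, yet n beta steps
        # (innermost-first) produce a normal form with 2^n occurrences of y.
        # Terms: variables are strings, ("lam", x, body), ("app", f, a).

        DUP = ("lam", "x", ("app", "x", "x"))          # \x. x x

        def make_term(n):
            e = "y"
            for _ in range(n):
                e = ("app", DUP, e)                     # (\x. x x) ( ... ((\x. x x) y) ... )
            return e

        def subst(body, x, arg):
            # naive substitution; safe here because arg never contains binders
            if isinstance(body, str):
                return arg if body == x else body
            if body[0] == "lam":
                return body if body[1] == x else ("lam", body[1], subst(body[2], x, arg))
            return ("app", subst(body[1], x, arg), subst(body[2], x, arg))

        def reduce_innermost(term):
            """Contract innermost beta redexes first; return (normal form, steps)."""
            steps = 0
            def go(e):
                nonlocal steps
                if isinstance(e, str):
                    return e
                if e[0] == "lam":
                    return ("lam", e[1], go(e[2]))
                f, a = go(e[1]), go(e[2])
                if isinstance(f, tuple) and f[0] == "lam":
                    steps += 1
                    return go(subst(f[2], f[1], a))
                return ("app", f, a)
            return go(term), steps

        def size(e):
            if isinstance(e, str):
                return 1
            return 1 + size(e[2]) if e[0] == "lam" else size(e[1]) + size(e[2])

        for n in range(1, 6):
            nf, steps = reduce_innermost(make_term(n))
            print(n, steps, size(nf))                   # steps = n, output size = 2**n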

    Are a set of microarrays independent of each other?

    Having observed an m×n matrix X whose rows are possibly correlated, we wish to test the hypothesis that the columns are independent of each other. Our motivation comes from microarray studies, where the rows of X record expression levels for m different genes, often highly correlated, while the columns represent n individual microarrays, presumably obtained independently. The presumption of independence underlies all the familiar permutation, cross-validation and bootstrap methods for microarray analysis, so it is important to know when independence fails. We develop nonparametric and normal-theory testing methods. The row and column correlations of X interact with each other in a way that complicates test procedures, essentially by reducing the accuracy of the relevant estimators. Comment: Published in at http://dx.doi.org/10.1214/09-AOAS236 the Annals of Applied Statistics (http://www.imstat.org/aoas/) by the Institute of Mathematical Statistics (http://www.imstat.org)
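
    As a toy illustration of the question only (not the tests developed in the paper), the following numpy sketch computes an average squared column-correlation statistic and a naive permutation reference obtained by shuffling each row independently. The paper's point is precisely that the row correlations of X make such naive calibrations inaccurate; the function names and constants here are illustrative.

        import numpy as np

        def mean_sq_col_corr(X):
            """Average squared off-diagonal correlation between columns of X."""
            C = np.corrcoef(X, rowvar=False)            # n x n column correlations
            n = C.shape[0]
            off = C[~np.eye(n, dtype=bool)]
            return float(np.mean(off ** 2))

        def naive_permutation_pvalue(X, n_perm=500, seed=0):
            """Shuffle entries within each row to break column dependence.

            This ignores the row correlations of X, which is exactly the
            complication the paper analyzes; treat the p-value as a toy baseline.
            """
            rng = np.random.default_rng(seed)
            obs = mean_sq_col_corr(X)
            null = []
            for _ in range(n_perm):
                Xp = np.array([rng.permutation(row) for row in X])
                null.append(mean_sq_col_corr(Xp))
            return float(np.mean([s >= obs for s in null]))

        # m = 200 "genes" (rows), n = 10 "microarrays" (columns), generated independently
        rng = np.random.default_rng(1)
        X = rng.standard_normal((200, 10))
        print(mean_sq_col_corr(X), naive_permutation_pvalue(X))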

    The logic of interactive Turing reduction

    The paper gives a soundness and completeness proof for the implicative fragment of intuitionistic calculus with respect to the semantics of computability logic, which understands intuitionistic implication as interactive algorithmic reduction. This concept -- more precisely, the associated concept of reducibility -- is a generalization of Turing reducibility from the traditional, input/output sorts of problems to computational tasks of arbitrary degrees of interactivity. See http://www.cis.upenn.edu/~giorgi/cl.html for a comprehensive online source on computability logic.

    Factor modeling for high-dimensional time series: Inference for the number of factors

    This paper deals with factor modeling for high-dimensional time series from a dimension-reduction viewpoint. Under stationary settings, the inference is simple in the sense that both the number of factors and the factor loadings are estimated in terms of an eigenanalysis of a nonnegative definite matrix, and is therefore applicable when the dimension of the time series is on the order of a few thousand. Asymptotic properties of the proposed method are investigated under two settings: (i) the sample size goes to infinity while the dimension of the time series is fixed; and (ii) both the sample size and the dimension of the time series go to infinity together. In particular, our estimators for zero eigenvalues enjoy faster convergence (or slower divergence) rates, hence making the estimation of the number of factors easier. Moreover, when the sample size and the dimension of the time series go to infinity together, the estimators for the eigenvalues are no longer consistent. However, our estimator for the number of factors, which is based on the ratios of the estimated eigenvalues, still works well. Furthermore, this estimation exhibits the so-called "blessing of dimensionality" property, in the sense that the performance of the estimation may improve as the dimension of the time series increases. A two-step procedure is investigated when the factors are of different degrees of strength. Numerical illustration with both simulated and real data is also reported. Comment: Published in at http://dx.doi.org/10.1214/12-AOS970 the Annals of Statistics (http://www.imstat.org/aos/) by the Institute of Mathematical Statistics (http://www.imstat.org)
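
    A hedged numpy sketch of the eigenanalysis-plus-eigenvalue-ratio idea described above. It is a simplified reading of the approach, not the paper's exact estimator; the function estimate_num_factors, the lag choice max_lag, and the simulated AR(1) factors are all assumptions made for illustration.

        import numpy as np

        def estimate_num_factors(Y, max_lag=2, k_max=10):
            """Y: T x p observations of a p-dimensional time series.

            Accumulate lagged autocovariances into a nonnegative definite
            matrix, eigen-decompose it, and choose the number of factors
            where the ratio of consecutive eigenvalues is smallest
            (a simplified sketch of the eigenvalue-ratio idea).
            """
            T, p = Y.shape
            Yc = Y - Y.mean(axis=0)
            M = np.zeros((p, p))
            for k in range(1, max_lag + 1):
                Sigma_k = Yc[k:].T @ Yc[:-k] / (T - k)   # lag-k autocovariance
                M += Sigma_k @ Sigma_k.T                  # keeps M nonnegative definite
            eigvals = np.sort(np.linalg.eigvalsh(M))[::-1]
            ratios = eigvals[1:k_max + 1] / eigvals[:k_max]
            return int(np.argmin(ratios)) + 1

        # Simulated example: r = 3 AR(1) factors driving a p = 40 dimensional series.
        rng = np.random.default_rng(0)
        T, p, r = 400, 40, 3
        F = np.zeros((T, r))
        for t in range(1, T):
            F[t] = 0.9 * F[t - 1] + rng.standard_normal(r)
        A = rng.standard_normal((p, r))                   # factor loadings
        Y = F @ A.T + rng.standard_normal((T, p))         # factors plus idiosyncratic noise
        print(estimate_num_factors(Y))                    # often recovers 3 on data like this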

    A Finite Model Property for Intersection Types

    We show that the relational theory of intersection types known as BCD has the finite model property; that is, BCD is complete for its finite models. Our proof uses rewriting techniques which have, as an immediate by-product, the polynomial-time decidability of the preorder <= (although this also follows from the so-called beta soundness of BCD). Comment: In Proceedings ITRS 2014, arXiv:1503.0437
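
    For orientation, the preorder in question is the BCD subtype relation on intersection types; a commonly used axiomatization (a folklore presentation, not quoted from this paper) extends reflexivity and transitivity with

        \[
        A \cap B \le A, \quad A \cap B \le B, \quad A \le A \cap A, \quad A \le \omega, \quad \omega \le \omega \to \omega, \quad (A \to B) \cap (A \to C) \le A \to (B \cap C)
        \]

    together with the rule that A \to B \le A' \to B' whenever A' \le A and B \le B'. The polynomial-time result mentioned above concerns deciding this relation.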