
    An Abstract Approach to Stratification in Linear Logic

    We study the notion of stratification, as used in subsystems of linear logic with low complexity bounds on the cut-elimination procedure (the so-called light logics), from an abstract point of view, introducing a logical system in which stratification is handled by a separate modality. This modality, which is a generalization of the paragraph modality of Girard's light linear logic, arises from a general categorical construction applicable to all models of linear logic. We thus learn that stratification may be formulated independently of exponential modalities; when it is forced to be connected to exponential modalities, it yields interesting complexity properties. In particular, from our analysis stem three alternative reformulations of Baillot and Mazza's linear logic by levels: one geometric, one interactive, and one semantic.
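
    As background on the modality being generalized: in the standard intuitionistic presentation of light affine logic (a sketch of the usual rules, not of this paper's abstract system), promotion for the exponential ! and the paragraph § reads

        \frac{B \vdash A}{!B \vdash\, !A}\;(!) \qquad \frac{\Gamma, \Delta \vdash A}{!\Gamma, \S\Delta \vdash \S A}\;(\S)

    where !-promotion admits at most one hypothesis while §-promotion may close over an arbitrary context; it is this depth-tracking role of § that a separate stratification modality isolates.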

    On paths-based criteria for polynomial time complexity in proof-nets

    Girard's Light linear logic (LLL) characterized polynomial time in the proof-as-program paradigm with a bound on cut elimination. This logic relied on a stratification principle and a "one-door" principle, which were later generalized, respectively, in the systems L^4 and L^3a. Each system came with its own complex proof of Ptime soundness. In this paper we propose a broad sufficient criterion for Ptime soundness of linear logic subsystems, based on the study of paths inside proof-nets, which factorizes the soundness proofs of existing systems and may be used for future ones. As an additional gain, our bound holds for any reduction strategy, whereas most bounds in the literature hold only for a particular strategy. Comment: Long version of a conference paper.
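
    Schematically, the kind of statement factorized by such a criterion is a depth-indexed polynomial bound. Writing |\pi| for the size of a proof-net \pi, depth(\pi) for its exponential depth, and maxred(\pi) for the length of its longest reduction sequence (a hedged paraphrase of the abstract, not the paper's exact criterion):

        \mathrm{depth}(\pi) \le d \;\Longrightarrow\; \mathrm{maxred}(\pi) \le p_d(|\pi|)

    for a family of polynomials p_d depending only on d; bounding maxred, rather than the cost of one chosen strategy, is what makes the bound strategy-independent.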

    Safe Recursion on Notation into a Light Logic by Levels

    We embed Safe Recursion on Notation (SRN) into Light Affine Logic by Levels (LALL), derived from the logic L^4. LALL is an intuitionistic deductive system with a polynomial time cut elimination strategy. The embedding allows us to represent every term t of SRN as a family of proof nets |t|^l in LALL. Every proof net |t|^l in the family simulates t on arguments whose bit length is bounded by the integer l. The embedding is based on two crucial features. One is the recursive type in LALL that encodes Scott binary numerals, i.e. Scott words, as proof nets. Scott words represent the arguments of t in place of the more standard Church binary numerals. The other is that the embedding exploits the "fuzzy" borders of paragraph boxes, which LALL inherits from L^4, to "freely" duplicate the arguments of t, especially the safe ones. Finally, the type of |t|^l depends on the number of composition and recursion schemes used to define t, namely the structural complexity of t. Moreover, the size of |t|^l is a polynomial in l, whose degree depends on the structural complexity of t. So, this work brings closer together the predicative recursion-theoretic principles SRN relies on and the proof-theoretic one, called /stratification/, at the base of Light Linear Logic.
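
    For reference, the scheme being embedded is safe recursion on notation in the usual Bellantoni-Cook format, with normal arguments before the semicolon and safe arguments after it:

        f(0, \vec{x}; \vec{y}) = g(\vec{x}; \vec{y}) \qquad f(s_i(z), \vec{x}; \vec{y}) = h_i(z, \vec{x}; \vec{y}, f(z, \vec{x}; \vec{y})), \quad i \in \{0, 1\}

    where s_0(z) = 2z and s_1(z) = 2z + 1. The recursive call occurs only in a safe position, which is why the safe arguments are exactly the ones the embedding must be able to duplicate "freely".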

    Context Semantics, Linear Logic and Computational Complexity

    We show that context semantics can be fruitfully applied to the quantitative analysis of proof normalization in linear logic. In particular, context semantics lets us define the weight of a proof-net as a measure of its inherent complexity: it is both an upper bound to normalization time (modulo a polynomial overhead, independently of the reduction strategy) and a lower bound to the number of steps to normal form (for certain reduction strategies). Weights are then exploited in proving strong soundness theorems for various subsystems of linear logic, namely elementary linear logic, soft linear logic and light linear logic. Comment: 22 pages.
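
    Read quantitatively, the two roles of the weight W(\pi) amount to a pair of inequalities (a schematic rendering of the abstract's claims, with steps_\sigma(\pi) the number of steps strategy \sigma takes to normalize \pi):

        \mathrm{steps}_\sigma(\pi) \le p(W(\pi)) \ \text{for every strategy}\ \sigma, \qquad W(\pi) \le \mathrm{steps}_\tau(\pi) \ \text{for certain strategies}\ \tau

    so the weight pins normalization time down to within a polynomial, from above and from below.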

    Reduced functional measure of cardiovascular reserve predicts admission to critical care unit following kidney transplantation

    Background: There is currently no effective preoperative assessment for patients undergoing kidney transplantation that is able to identify those at high perioperative risk requiring admission to a critical care unit (CCU). We sought to determine whether functional measures of cardiovascular reserve, in particular the anaerobic threshold (VO2AT), could identify these patients. Methods: Adult patients were assessed within 4 weeks prior to kidney transplantation in a University hospital with a 37-bed CCU, between April 2010 and June 2012. Cardiopulmonary exercise testing (CPET), echocardiography and arterial applanation tonometry were performed. Results: There were 70 participants (age 41.7 ± 14.5 years, 60% male, 91.4% living donor kidney recipients, 23.4% desensitized). 14 patients (20%) required escalation of care from the ward to the CCU following transplantation. A reduced anaerobic threshold (VO2AT) was the most significant predictor, both independently (OR = 0.43; 95% CI 0.27–0.68; p < 0.001) and in the multivariate logistic regression analysis (adjusted OR = 0.26; 95% CI 0.12–0.59; p = 0.001). The area under the receiver-operating-characteristic curve was 0.93, based on a risk prediction model that incorporated VO2AT, body mass index and desensitization status. Neither echocardiographic measures nor measures of aortic compliance were significantly associated with CCU admission. Conclusions: To our knowledge, this is the first prospective observational study to demonstrate the usefulness of CPET as a preoperative risk stratification tool for patients undergoing kidney transplantation. The study suggests that VO2AT has the potential to predict perioperative morbidity in kidney transplant recipients.
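
    As a hedged sketch only (the abstract names the model's inputs but not its coefficients), the reported risk prediction model has the standard multivariate logistic form

        \operatorname{logit} \Pr(\mathrm{CCU}) = \beta_0 + \beta_1\,\mathrm{VO_2AT} + \beta_2\,\mathrm{BMI} + \beta_3\,\mathrm{desensitized}

    under which the adjusted odds ratio of 0.26 for VO2AT corresponds to \beta_1 = \ln 0.26 ≈ −1.35, i.e. the odds of CCU admission fall by roughly 74% per unit increase in VO2AT.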

    Cut Elimination for a Logic with Induction and Co-induction

    Proof search has been used to specify a wide range of computation systems. In order to build a framework for reasoning about such specifications, we make use of a sequent calculus involving induction and co-induction. These proof principles are based on a proof-theoretic (rather than set-theoretic) notion of definition. Definitions are akin to logic programs, where the left and right rules for defined atoms allow one to view theories as "closed" or as defining fixed points. The use of definitions and free equality makes it possible to reason intensionally about syntax. We add, in a consistent way, rules for pre- and post-fixed points, thus allowing the user to reason inductively and co-inductively about properties of computational systems making full use of higher-order abstract syntax. Consistency is guaranteed via cut-elimination, where we give the first, to our knowledge, cut-elimination procedure in the presence of general inductive and co-inductive definitions. Comment: 42 pages, submitted to the Journal of Applied Logic.
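
    Schematically, the pre- and post-fixed point rules take the following form in this family of logics (a sketch in the standard style, with p a defined atom of body B and S an invariant chosen by the user):

        \frac{B\,S\,\vec{x} \vdash S\,\vec{x} \qquad \Gamma, S\,\vec{t} \vdash C}{\Gamma, p\,\vec{t} \vdash C}\;\text{(induction)} \qquad \frac{\Gamma \vdash S\,\vec{t} \qquad S\,\vec{x} \vdash B\,S\,\vec{x}}{\Gamma \vdash p\,\vec{t}}\;\text{(co-induction)}

    The induction rule says that p entails any pre-fixed point S of B, and the co-induction rule that any post-fixed point S of B entails p; cut-elimination is what guarantees that adding both rules stays consistent.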