
    Maximal Sharing in the Lambda Calculus with letrec

    Increasing sharing in programs is desirable to compactify the code and to avoid duplication of reduction work at run-time, thereby speeding up execution. We show how a maximal degree of sharing can be obtained for programs expressed as terms in the lambda calculus with letrec. We introduce a notion of `maximal compactness' for lambda-letrec-terms among all terms with the same infinite unfolding. Instead of being defined purely syntactically, this notion is based on a graph semantics: lambda-letrec-terms are interpreted as first-order term graphs so that unfolding equivalence between terms is preserved and reflected through bisimilarity of the term graph interpretations. Compactness of the term graphs can then be compared via functional bisimulation. We describe practical and efficient methods for the following two problems: transforming a lambda-letrec-term into a maximally compact form, and deciding whether two lambda-letrec-terms are unfolding-equivalent. The transformation of a lambda-letrec-term L into maximally compact form L_0 proceeds in three steps: (i) translate L into its term graph G = [[L]]; (ii) compute the maximally shared form of G as its bisimulation collapse G_0; (iii) read back a lambda-letrec-term L_0 from the term graph G_0 with the property [[L_0]] = G_0. This guarantees that L_0 and L have the same unfolding, and that L_0 exhibits maximal sharing. The procedure for deciding whether two given lambda-letrec-terms L_1 and L_2 are unfolding-equivalent computes their term graph interpretations [[L_1]] and [[L_2]] and checks whether these term graphs are bisimilar. For illustration, we also provide a readily usable implementation. Comment: 18 pages, plus a 19-page appendix
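
    The core of step (ii) above, the bisimulation collapse, can be illustrated with naive partition refinement. The sketch below is our own illustration under an assumed encoding (a dict mapping node ids to a label and an ordered tuple of successor ids); it is not the paper's implementation and it omits the lambda-letrec-specific translation and readback steps.

        def bisimulation_collapse(graph, root):
            """Collapse a first-order term graph to its maximally shared form.

            graph: {node_id: (label, (succ_id, ...))} -- an assumed encoding.
            Returns (collapsed_graph, collapsed_root)."""
            # Start from the partition by node label, then repeatedly refine by
            # the blocks of the successors until a fixpoint is reached.
            block = {n: graph[n][0] for n in graph}
            while True:
                signature = {n: (graph[n][0], tuple(block[s] for s in graph[n][1]))
                             for n in graph}
                numbering, new_block = {}, {}
                for n in graph:
                    sig = signature[n]
                    if sig not in numbering:
                        numbering[sig] = len(numbering)
                    new_block[n] = numbering[sig]
                if new_block == block:   # fixpoint: blocks are the bisimulation classes
                    break
                block = new_block
            collapsed = {block[n]: (label, tuple(block[s] for s in succs))
                         for n, (label, succs) in graph.items()}
            return collapsed, block[root]

        # Two distinct but bisimilar leaves collapse to a single shared node.
        g = {0: ("app", (1, 2)), 1: ("f", ()), 2: ("f", ())}
        print(bisimulation_collapse(g, 0))
        # -> ({0: ('app', (1, 1)), 1: ('f', ())}, 0)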

    On Languages Accepted by P/T Systems Composed of joins

    Recently, some studies linked the computational power of abstract computing systems based on multiset rewriting to models of Petri nets, and the computational power of these nets to their topology. In turn, the computational power of these abstract computing devices can be understood by just looking at their topology, that is, at the information flow. Here we continue this line of research by introducing J languages and proving that they can be accepted by place/transition systems whose underlying net is composed only of joins. Moreover, we investigate how J languages relate to other families of formal languages. In particular, we show that every J language can be accepted by a log n space-bounded non-deterministic Turing machine with a one-way read-only input. We also show that every J language has a semilinear Parikh map and that J languages and context-free languages (CFLs) are incomparable.
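
    As a point of reference for the Parikh map mentioned above, the following small sketch computes the Parikh vector of a word (the count of each alphabet symbol); the alphabet and example word are our own illustrative choices, unrelated to the paper's J languages.

        from collections import Counter

        def parikh_vector(word, alphabet):
            """Parikh map of a word: a vector of occurrence counts, one per symbol."""
            counts = Counter(word)
            return tuple(counts[a] for a in alphabet)

        # Over the alphabet (a, b, c), the word "aabcb" maps to (2, 2, 1).
        print(parikh_vector("aabcb", ("a", "b", "c")))  # (2, 2, 1)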

    Cost implication analysis of concrete and masonry waste in construction project

    Concrete and masonry waste are the main types of waste typically generated at a construction project. There is a lack of studies in the country regarding the cost implication of managing these types of construction waste. To address this need in Malaysia, this study was carried out to measure the disposal cost of concrete and masonry waste. The study was carried out by a site visit method using an indirect measurement approach to quantify the waste generated at the project. Based on the recorded number of trips for waste collection, the total expenditure to dispose of the waste was derived for three construction stages. Data was collected four times a week over the period July 2014 to July 2015. The total waste generated at the study site was 762.51 m3, and the cost incurred for the 187 truck trips required to dispose of the waste from the project site at the nearby landfill was RM22,440.00. The findings will be useful to both researchers and policy makers concerned with construction waste.
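
    The reported totals allow simple per-unit figures to be derived; the short calculation below is our own back-of-envelope illustration, not a result stated in the study.

        total_waste_m3 = 762.51    # total concrete and masonry waste generated
        truck_trips = 187          # trips to the nearby landfill
        total_cost_rm = 22_440.00  # total disposal cost (Malaysian Ringgit)

        cost_per_trip = total_cost_rm / truck_trips      # = RM 120.00 per trip
        cost_per_m3 = total_cost_rm / total_waste_m3     # ~ RM 29.43 per cubic metre
        waste_per_trip = total_waste_m3 / truck_trips    # ~ 4.08 m3 per trip

        print(f"RM {cost_per_trip:.2f}/trip, RM {cost_per_m3:.2f}/m3, "
              f"{waste_per_trip:.2f} m3/trip")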

    Reductions of Hidden Information Sources

    In all but special circumstances, measurements of time-dependent processes reflect internal structures and correlations only indirectly. Building predictive models of such hidden information sources requires discovering, in some way, the internal states and mechanisms. Unfortunately, there are often many possible models that are observationally equivalent. Here we show that the situation is not as arbitrary as one would think. We show that generators of hidden stochastic processes can be reduced to a minimal form and compare this reduced representation to that provided by computational mechanics, the epsilon-machine. On the way to developing deeper, measure-theoretic foundations for the latter, we introduce a new two-step reduction process. The first step (internal-event reduction) produces the smallest observationally equivalent sigma-algebra and the second (internal-state reduction) removes sigma-algebra components that are redundant for optimal prediction. For several classes of stochastic dynamical systems these reductions produce representations that are equivalent to epsilon-machines. Comment: 12 pages, 4 figures; 30 citations; Updates at http://www.santafe.edu/~cm

    Spectral Simplicity of Apparent Complexity, Part I: The Nondiagonalizable Metadynamics of Prediction

    Virtually all questions that one can ask about the behavioral and structural complexity of a stochastic process reduce to a linear algebraic framing of a time evolution governed by an appropriate hidden-Markov process generator. Each type of question (correlation, predictability, predictive cost, observer synchronization, and the like) induces a distinct generator class. Answers are then functions of the class-appropriate transition dynamic. Unfortunately, these dynamics are generically nonnormal, nondiagonalizable, singular, and so on. Tractably analyzing these dynamics relies on adapting the recently introduced meromorphic functional calculus, which specifies the spectral decomposition of functions of nondiagonalizable linear operators, even when the function poles and zeros coincide with the operator's spectrum. Along the way, we establish special properties of the projection operators that demonstrate how they capture the organization of subprocesses within a complex system. Circumventing the spurious infinities of alternative calculi, this leads in the sequel, Part II, to the first closed-form expressions for complexity measures, couched either in terms of the Drazin inverse (negative-one power of a singular operator) or the eigenvalues and projection operators of the appropriate transition dynamic. Comment: 24 pages, 3 figures, 4 tables; current version always at http://csc.ucdavis.edu/~cmg/compmech/pubs/sdscpt1.ht
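
    As a concrete handle on the Drazin inverse mentioned above, the sketch below computes it numerically via the identity A^D = A^k (A^(2k+1))^+ A^k, where k is the index of A and ^+ is the Moore-Penrose pseudoinverse; the example matrix and tolerance are our own illustrative choices, not the paper's transition dynamics or its spectral construction.

        import numpy as np

        def drazin_inverse(A, tol=1e-12):
            """Drazin inverse of a square matrix A via A^D = A^k (A^(2k+1))^+ A^k,
            where k is the index of A (smallest k with rank(A^k) == rank(A^(k+1)))."""
            k = 0
            while (np.linalg.matrix_rank(np.linalg.matrix_power(A, k), tol=tol)
                   != np.linalg.matrix_rank(np.linalg.matrix_power(A, k + 1), tol=tol)):
                k += 1
            Ak = np.linalg.matrix_power(A, k)
            return Ak @ np.linalg.pinv(np.linalg.matrix_power(A, 2 * k + 1)) @ Ak

        # A singular, nondiagonalizable example (purely illustrative), with index 2.
        T = np.array([[0.0, 1.0, 0.0],
                      [0.0, 0.0, 1.0],
                      [0.0, 0.0, 1.0]])
        TD = drazin_inverse(T)
        assert np.allclose(T @ TD, TD @ T)                # commutes with T
        assert np.allclose(TD @ T @ TD, TD)               # outer-inverse identity
        assert np.allclose(np.linalg.matrix_power(T, 3) @ TD,
                           np.linalg.matrix_power(T, 2))  # T^(k+1) T^D = T^k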

    The PMIP4 contribution to CMIP6 – Part 1: overview and over-arching analysis plan

    This paper is the first of a series of four GMD papers on the PMIP4-CMIP6 experiments. Part 2 (Otto-Bliesner et al., 2017) gives details about the two PMIP4-CMIP6 interglacial experiments, Part 3 (Jungclaus et al., 2017) about the last millennium experiment, and Part 4 (Kageyama et al., 2017) about the Last Glacial Maximum experiment. The mid-Pliocene Warm Period experiment is part of the Pliocene Model Intercomparison Project (PlioMIP) – Phase 2, detailed in Haywood et al. (2016). The goal of the Paleoclimate Modelling Intercomparison Project (PMIP) is to understand the response of the climate system to different climate forcings for documented climatic states very different from the present and historical climates. Through comparison with observations of the environmental impact of these climate changes, or with climate reconstructions based on physical, chemical, or biological records, PMIP also addresses the issue of how well state-of-the-art numerical models simulate climate change. Climate models are usually developed using the present and historical climates as references, but climate projections show that future climates will lie well outside these conditions. Palaeoclimates very different from these reference states therefore provide stringent tests for state-of-the-art models and a way to assess whether their sensitivity to forcings is compatible with palaeoclimatic evidence. Simulations of five different periods have been designed to address the objectives of the sixth phase of the Coupled Model Intercomparison Project (CMIP6): the millennium prior to the industrial epoch (CMIP6 name: past1000); the mid-Holocene, 6000 years ago (midHolocene); the Last Glacial Maximum, 21 000 years ago (lgm); the Last Interglacial, 127 000 years ago (lig127k); and the mid-Pliocene Warm Period, 3.2 million years ago (midPliocene-eoi400). These climatic periods are well documented by palaeoclimatic and palaeoenvironmental records, with climate and environmental changes relevant for the study and projection of future climate changes. This paper describes the motivation for the choice of these periods and the design of the numerical experiments and database requests, with a focus on their novel features compared to the experiments performed in previous phases of PMIP and CMIP. It also outlines the analysis plan that takes advantage of the comparisons of the results across periods and across CMIP6 in collaboration with other MIPs.

    Small grid embeddings of 3-polytopes

    We introduce an algorithm that embeds a given 3-connected planar graph as a convex 3-polytope with integer coordinates. The size of the coordinates is bounded by O(2^{7.55n}) = O(188^n). If the graph contains a triangle we can bound the integer coordinates by O(2^{4.82n}). If the graph contains a quadrilateral we can bound the integer coordinates by O(2^{5.46n}). The crucial part of the algorithm is to find a convex plane embedding whose edges can be weighted such that the sum of the weighted edges, seen as vectors, cancels at every point. It is well known that this can be guaranteed for the interior vertices by applying a technique of Tutte. We show how to extend Tutte's ideas to construct a plane embedding where the weighted vector sums cancel also on the vertices of the boundary face.
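
    The interior-vertex step that the paper attributes to Tutte can be sketched as follows: pin the outer-face vertices on a convex polygon and place every interior vertex at the average of its neighbours by solving a linear system. The graph encoding, the uniform weights, and the octahedron example are our own illustrative assumptions; the paper's contribution, extending the cancellation to boundary vertices and bounding the integer coordinates, is not reproduced here.

        import numpy as np

        def tutte_embedding(n, edges, boundary):
            """Barycentric (Tutte) embedding: boundary vertices are pinned on a
            regular polygon; each interior vertex is the mean of its neighbours."""
            adj = [[] for _ in range(n)]
            for u, v in edges:
                adj[u].append(v)
                adj[v].append(u)
            pos = np.zeros((n, 2))
            for i, b in enumerate(boundary):
                t = 2.0 * np.pi * i / len(boundary)
                pos[b] = (np.cos(t), np.sin(t))
            boundary_set = set(boundary)
            interior = [v for v in range(n) if v not in boundary_set]
            idx = {v: i for i, v in enumerate(interior)}
            A = np.zeros((len(interior), len(interior)))
            rhs = np.zeros((len(interior), 2))
            for v in interior:
                A[idx[v], idx[v]] = len(adj[v])
                for w in adj[v]:
                    if w in idx:
                        A[idx[v], idx[w]] -= 1.0
                    else:
                        rhs[idx[v]] += pos[w]   # pinned neighbour contributes a constant
            pos[interior] = np.linalg.solve(A, rhs)
            return pos

        # Octahedron graph: outer triangle 0-1-2 pinned, inner triangle 3-4-5 placed.
        edges = [(0, 1), (1, 2), (2, 0), (3, 4), (4, 5), (5, 3),
                 (0, 3), (0, 4), (1, 4), (1, 5), (2, 5), (2, 3)]
        print(tutte_embedding(6, edges, boundary=[0, 1, 2]))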

    Present state of global wetland extent and wetland methane modelling: conclusions from a model inter-comparison project (WETCHIMP)

    Global wetlands are believed to be climate sensitive, and are the largest natural emitters of methane (CH4). Increased wetland CH4 emissions could act as a positive feedback to future warming. The Wetland and Wetland CH4 Inter-comparison of Models Project (WETCHIMP) investigated our present ability to simulate large-scale wetland characteristics and corresponding CH4 emissions. To ensure inter-comparability, we used a common experimental protocol driving all models with the same climate and carbon dioxide (CO2) forcing datasets. The WETCHIMP experiments were conducted for model equilibrium states as well as transient simulations covering the last century. Sensitivity experiments investigated model response to changes in selected forcing inputs (precipitation, temperature, and atmospheric CO2 concentration). Ten models participated, covering the spectrum from simple to relatively complex, including models tailored either for regional or global simulations. The models also varied in methods to calculate wetland size and location, with some models simulating wetland area prognostically, while other models relied on remotely sensed inundation datasets, or an approach intermediate between the two.

    Four major conclusions emerged from the project. First, the suite of models demonstrates extensive disagreement in their simulations of wetland areal extent and CH4 emissions, in both space and time. Simple metrics of wetland area, such as the latitudinal gradient, show large variability, principally between models that use inundation dataset information and those that independently determine wetland area. Agreement between the models improves for zonally summed CH4 emissions, but large variation between the models remains. For annual global CH4 emissions, the models vary by ±40% of the all-model mean (190 Tg CH4 yr^-1). Second, all models show a strong positive response to increased atmospheric CO2 concentrations (857 ppm) in both CH4 emissions and wetland area. In response to increasing global temperatures (+3.4 °C, globally spatially uniform), on average, the models decreased wetland area and CH4 fluxes, primarily in the tropics, but the magnitude and sign of the response varied greatly. Models were least sensitive to increased global precipitation (+3.9%, globally spatially uniform), with a consistent small positive response in CH4 fluxes and wetland area. Results from the 20th century transient simulation show that interactions between climate forcings could have strong non-linear effects. Third, we presently do not have wetland methane observation datasets adequate to evaluate model fluxes at a spatial scale comparable to model grid cells (commonly 0.5°). This limitation severely restricts our ability to model global wetland CH4 emissions with confidence. Our simulated wetland extents are also difficult to evaluate due to extensive disagreements between wetland mapping and remotely sensed inundation datasets. Fourth, the large range in predicted CH4 emission rates leads to the conclusion that there is both substantial parameter and structural uncertainty in large-scale CH4 emission models, even after uncertainties in wetland areas are accounted for.

    Multi vegetation model evaluation of the Green Sahara climate regime

    During the Quaternary, the Sahara desert was periodically colonized by vegetation, likely because of orbitally induced rainfall increases. However, the estimated hydrological change is not reproduced in climate model simulations, undermining confidence in projections of future rainfall. We evaluated the relationship between the qualitative information on past vegetation coverage and climate for the mid-Holocene using three different dynamic vegetation models. Compared with two available vegetation reconstructions, the models require 500–800 mm of rainfall over 20°–25°N, which is significantly larger than inferred from pollen but largely in agreement with more recent leaf wax biomarker reconstructions. The magnitude of the response also suggests that the required rainfall regime of the early to middle Holocene is far from being correctly represented in general circulation models. However, intermodel differences related to moisture stress parameterizations, biases in simulated present-day vegetation, and uncertainties about paleosoil distributions introduce uncertainties, and these are also relevant to Earth system model simulations of African humid periods.

    Understanding the glacial methane cycle.

    Atmospheric methane (CH4) varied with climate during the Quaternary, rising from a concentration of 375 p.p.b.v. during the last glacial maximum (LGM) 21,000 years ago, to 680 p.p.b.v. at the beginning of the industrial revolution. However, the causes of this increase remain unclear; proposed hypotheses rely on fluctuations in either the magnitude of CH4 sources or CH4 atmospheric lifetime, or both. Here we use an Earth System model to provide a comprehensive assessment of these competing hypotheses, including estimates of uncertainty. We show that in this model, the global LGM CH4 source was reduced by 28-46%, and the lifetime increased by 2-8%, with a best-estimate LGM CH4 concentration of 463-480 p.p.b.v. Simulating the observed LGM concentration requires a 46-49% reduction in sources, indicating that we cannot fully reconcile the observed amplitude of the change. This highlights the need for better understanding of the effects of low CO2 and cooler climate on wetlands and other natural CH4 sources.
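
    The quoted ranges can be sanity-checked with a one-box, steady-state argument in which the atmospheric burden scales linearly with global source strength and lifetime; this simplifying assumption, and the pairing of source and lifetime changes below, are our own, not the Earth System model used in the paper.

        # Pre-industrial reference concentration (p.p.b.v.) from the abstract.
        preindustrial_ppbv = 680.0

        # Arbitrary pairings of the quoted LGM source reductions and lifetime increases.
        for source_reduction, lifetime_increase in [(0.28, 0.02), (0.46, 0.08)]:
            lgm_ppbv = preindustrial_ppbv * (1.0 - source_reduction) * (1.0 + lifetime_increase)
            print(f"source -{source_reduction:.0%}, lifetime +{lifetime_increase:.0%}"
                  f" -> ~{lgm_ppbv:.0f} p.p.b.v.")

        # Prints roughly 499 and 397 p.p.b.v., bracketing the paper's best-estimate
        # range of 463-480 p.p.b.v. and remaining above the observed 375 p.p.b.v.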