64 research outputs found

    Grazing weakens N-addition effects on soil greenhouse gas emissions in a semi-arid grassland

    Grazing and anthropogenic nitrogen (N) enrichment co-occur in most grassland ecosystems and may substantially affect the production of soil greenhouse gases (GHGs). Although the individual effects of N addition and grazing on soil GHGs are well understood, their long-term interactive effects remain unclear. We conducted seven years of in situ measurements of the three major GHGs in a long-term experiment combining grazing treatments (no, light, moderate, and heavy grazing intensity) with N-addition treatments (control; N addition: 10 g N m−2 year−1) in a semi-arid grassland to determine the effects of N addition and grazing on GHGs. We found that moderate grazing reduced cumulative CO2 emissions by 10%–11% compared with no, light, and heavy grazing. Unusually for a semi-arid grassland, both soil CH4 emissions and N2O uptake were observed. Soil CH4 uptake was markedly inhibited by moderate and heavy grazing. Relative to no grazing, grazing significantly reduced N2O uptake by 60%–88% on average over the seven years. Nitrogen addition alone increased cumulative CO2 emissions by 16% relative to the control. An antagonistic effect between grazing and N addition was found for cumulative CO2 emissions, cumulative CH4 uptake, and global warming potential (GWP). Light grazing of this semi-arid grassland could offset 14% of the soil GHG emissions induced by N addition. Soil NO3−-N was the most important factor controlling soil CO2 emissions and CH4 uptake, and soil pH was a major factor mediating soil N2O uptake (consumption). Our study highlights that adjusting grazing intensity is an efficient strategy for mitigating grassland GHG emissions in the context of climate change.
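    The global warming potential reported above aggregates the three gas fluxes into CO2-equivalents. As a purely illustrative sketch (not the study's data or method), the code below shows how such a net GWP can be computed from cumulative fluxes; the 100-year GWP factors (CH4 = 28, N2O = 265, IPCC AR5) and the example flux values are assumptions.

# Minimal sketch: aggregating cumulative soil GHG fluxes into a net
# global warming potential (CO2-equivalents). The GWP factors are the
# IPCC AR5 100-year values; the flux numbers are made-up placeholders,
# not results from the study.

GWP_100 = {"CO2": 1.0, "CH4": 28.0, "N2O": 265.0}  # kg CO2-eq per kg gas

def net_gwp(fluxes_kg_per_ha):
    """Sum fluxes (kg gas ha^-1; negative = uptake) weighted by GWP factors."""
    return sum(GWP_100[gas] * flux for gas, flux in fluxes_kg_per_ha.items())

# Hypothetical cumulative fluxes: CO2 emitted, CH4 and N2O taken up by the soil.
example = {"CO2": 5000.0, "CH4": -2.0, "N2O": -0.5}
print(f"Net GWP: {net_gwp(example):.1f} kg CO2-eq ha^-1")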

    Optimize Energy Efficiency of Quadrotors Via Arm Rotation


    Study on the Basic Mechanical Properties and Discrete Element Method Simulation of Permeable Concrete

    Permeable concrete pavement material has many voids and good water permeability, which can reduce surface runoff and alleviate urban waterlogging. It also helps replenish groundwater, purify water bodies, reduce the urban heat island effect, and reduce road noise, making it an effective solution for urban infrastructure. However, the large number of pores also affects the strength of permeable concrete. The main factors affecting permeable concrete are the particle size and shape of the aggregate, the content of cement paste and aggregate, the compaction degree of the mixture, and so on. In this study, the single-factor test method was used to study the effects of aggregate size, paste-to-aggregate ratio, and loose paving coefficient on the basic mechanical properties and permeability of permeable concrete. A numerical model of permeable concrete was established using the particle flow discrete element (Particle Flow Code, PFC) modeling method, and numerical simulation tests were carried out. The test results show that the permeability coefficient of permeable concrete with a mixed aggregate of 50% 5–10 mm + 50% 10–15 mm is slightly lower than that with 5–10 mm or 10–15 mm single-size aggregate, but its compressive and splitting tensile strengths are higher. As the paste-to-aggregate ratio increases, the permeability coefficient of permeable concrete decreases and the compressive strength increases. The loose paving coefficient has a significant effect on the mechanical and permeability properties of permeable concrete: as the loose paving coefficient increases, the water permeability decreases and the compressive strength increases. The numerical simulation results show that, with a loose paving coefficient of 1.10 and a paste-to-aggregate ratio of 0.5, the error of the simulated compression test relative to the experimental results is less than 3%, verifying the reliability of the simulation. The discrete element modeling method in this study can be used to simulate the aggregate shape in permeable concrete, and the numerical model can effectively simulate the crack development and failure modes of permeable concrete in compression tests.
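    The permeability coefficient discussed above is commonly determined with a constant-head test and Darcy's law. As an illustrative aid only (not the test procedure or data from the paper), the sketch below computes a permeability coefficient from assumed constant-head measurements.

# Minimal sketch: permeability coefficient from a constant-head test via
# Darcy's law, k = (Q * L) / (A * h * t). All input values are assumed
# placeholders, not measurements from the study.

import math

def permeability_coefficient(volume_cm3, length_cm, diameter_cm, head_cm, time_s):
    """Return k in cm/s for a cylindrical specimen under a constant head."""
    area = math.pi * (diameter_cm / 2.0) ** 2          # cross-sectional area, cm^2
    return (volume_cm3 * length_cm) / (area * head_cm * time_s)

# Hypothetical specimen: 10 cm diameter, 15 cm tall, 1200 cm^3 of water
# collected over 60 s under a 15 cm head.
k = permeability_coefficient(volume_cm3=1200, length_cm=15, diameter_cm=10,
                             head_cm=15, time_s=60)
print(f"k = {k:.2f} cm/s")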

    Resource management for improving soft-error and lifetime reliability of real-time MPSoCs

    Multiprocessor systems-on-chip (MPSoCs) have been widely used in many real-time embedded systems where both soft-error reliability (SER) and lifetime reliability (LTR) are key concerns. Many existing works have investigated these concerns, but they focus either on handling one of the two or on improving one type of reliability under the constraint of the other. These techniques are thus not applicable to maximizing SER and LTR simultaneously, which is highly desirable in some real-world applications. In this paper, we study the joint optimization of SER and LTR for real-time MPSoCs. We propose a novel static task scheduling algorithm that simultaneously maximizes SER and LTR for real-time homogeneous MPSoC systems under deadline, energy-budget, and task-precedence constraints. Specifically, we develop a new solution representation scheme and two evolutionary operators that are closely integrated with two popular multiobjective evolutionary optimization frameworks, NSGA-II and SPEA2. Extensive experimental results on standard benchmarks and synthetic applications show the efficacy of our scheme: it achieves significantly better solutions (i.e., LTR-SER tradeoff fronts) with remarkably higher hypervolume and can be dozens or even hundreds of times faster than the state-of-the-art algorithms. The results also demonstrate that our scheme can be applied to heterogeneous MPSoC systems and is effective in improving their reliability.
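    Hypervolume, the quality indicator cited above, measures the region (an area, in two dimensions) dominated by a tradeoff front relative to a reference point. The sketch below is a minimal illustration for a bi-objective maximization setting such as SER/LTR; the front and reference point are invented for the example and are unrelated to the paper's results.

# Minimal sketch: 2-D hypervolume for a maximization problem (e.g., SER vs. LTR).
# Each point dominates the axis-aligned box between itself and the reference
# point; the hypervolume is the area of the union of those boxes.
# The front and reference point below are made-up illustration values.

def hypervolume_2d(front, ref):
    """front: list of (f1, f2) to maximize; ref: reference point dominated by all."""
    # Sort by the first objective, descending; sweep and add non-overlapping strips.
    pts = sorted(front, key=lambda p: p[0], reverse=True)
    hv, prev_f2 = 0.0, ref[1]
    for f1, f2 in pts:
        if f2 > prev_f2:                       # dominated points contribute nothing
            hv += (f1 - ref[0]) * (f2 - prev_f2)
            prev_f2 = f2
    return hv

front = [(0.9, 0.2), (0.7, 0.5), (0.4, 0.8)]   # hypothetical SER/LTR tradeoff front
print(hypervolume_2d(front, ref=(0.0, 0.0)))   # -> 0.51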

    Minimizing cost and makespan for workflow scheduling in cloud using fuzzy dominance sort based HEFT

    With the increasing popularity of pay-as-you-go cloud services, more and more enterprises and communities choose cloud computing platforms to deploy their commercial or scientific workflow applications. A major task of cloud service providers is to minimize the monetary cost and makespan of executing workflows in the Infrastructure as a Service (IaaS) cloud. Most existing techniques for cost and makespan minimization are designed for traditional computing platforms and cannot be applied to cloud computing platforms, which have unique service-based resource management methods and pricing strategies. In this paper, we study the joint optimization of the cost and makespan of scheduling workflows in IaaS clouds and propose a novel workflow scheduling scheme. In this scheme, a fuzzy dominance sort based heterogeneous earliest-finish-time (FDHEFT) algorithm is developed, which closely integrates the fuzzy dominance sort mechanism with the list scheduling heuristic HEFT. Extensive experiments using real-world and synthetic workflows demonstrate the efficacy of our scheme: it achieves significantly better cost-makespan tradeoff fronts with remarkably higher hypervolume and can run up to hundreds of times faster than the state-of-the-art algorithms.
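    HEFT, the list scheduling heuristic named above, prioritizes tasks by their upward rank: a task's average execution cost plus the maximum, over its successors, of the communication cost and the successor's rank. The sketch below computes this ranking for a toy workflow DAG; the task graph, costs, and function names are illustrative assumptions, not the FDHEFT implementation from the paper.

# Minimal sketch: HEFT upward-rank computation for a toy workflow DAG.
# rank_u(t) = w(t) + max over successors s of (c(t, s) + rank_u(s)),
# where w is the average execution cost and c the communication cost.
# The DAG and costs are made-up illustration values.

from functools import lru_cache

w = {"A": 10, "B": 8, "C": 12, "D": 6}               # average execution costs
succ = {"A": ["B", "C"], "B": ["D"], "C": ["D"], "D": []}
comm = {("A", "B"): 3, ("A", "C"): 5, ("B", "D"): 2, ("C", "D"): 4}

@lru_cache(maxsize=None)
def rank_u(task):
    """Upward rank of a task; an exit task's rank is its own execution cost."""
    children = succ[task]
    if not children:
        return w[task]
    return w[task] + max(comm[(task, s)] + rank_u(s) for s in children)

# Schedule tasks in decreasing upward rank (the HEFT task-priority phase).
order = sorted(w, key=rank_u, reverse=True)
print(order, [rank_u(t) for t in order])             # -> ['A', 'C', 'B', 'D'] [37, 22, 16, 6]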

    Throughput-Conscious Energy Allocation and Reliability-Aware Task Assignment for Renewable Powered In-Situ Server Systems

    In-situ (InS) server systems are typically deployed in special environments to handle in-situ workloads generated in environmentally sensitive areas or remote places lacking modern power supply infrastructure. This special operating environment calls for InS systems to be powered by renewable energy. In addition, InS systems are vulnerable to soft errors due to the harsh environments in which they are deployed. This paper tackles the problem of allocating harvested energy to renewable-powered servers and assigning in-situ workloads to these servers to optimize the throughput of both the overall system and individual servers under energy and reliability constraints. We perform the energy allocation based on system state. In particular, for systems in a low-energy state, we propose a game-theoretic approach that models the energy allocation as a cooperative game among multiple servers and derives a Nash bargaining solution. To meet the reliability constraint, we analyze the reliability optimality of assigning tasks to multiple servers and design a reliability-aware task assignment heuristic based on this analysis. Experimental results show that, with a small time overhead, the proposed energy allocation approach achieves high throughput from the perspectives of both the overall system and individual servers, and the proposed task assignment approach improves system reliability.
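    A Nash bargaining solution, as used above for low-energy states, splits a shared resource by maximizing the product of the players' utility gains over their disagreement points. The sketch below works this out for linear server utilities, where the maximizer has a simple closed form; the utility model, coefficients, and disagreement points are assumptions for illustration and not the paper's formulation.

# Minimal sketch: symmetric Nash bargaining allocation of a harvested energy
# budget E among servers with linear throughput utilities u_i(e_i) = a_i * e_i
# and disagreement points d_i. The NBS maximizes prod_i (a_i * e_i - d_i)
# subject to sum_i e_i = E; with linear utilities the optimum is
#   e_i = d_i / a_i + (E - sum_j d_j / a_j) / n.
# All numbers are made-up illustration values.

def nash_bargaining_allocation(a, d, energy_budget):
    """Return per-server energy shares for linear utilities a[i]*e and disagreement d[i]."""
    baseline = [d_i / a_i for a_i, d_i in zip(a, d)]   # energy needed to reach d_i
    surplus = energy_budget - sum(baseline)
    assert surplus > 0, "budget must exceed the disagreement requirements"
    return [b + surplus / len(a) for b in baseline]    # equal split of the surplus

a = [2.0, 1.0, 4.0]        # hypothetical throughput per unit of energy
d = [1.0, 0.5, 2.0]        # hypothetical disagreement (minimum acceptable) utilities
alloc = nash_bargaining_allocation(a, d, energy_budget=10.0)
print(alloc, sum(alloc))   # shares sum to the budget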