
    A global view of shifting cultivation: Recent, current, and future extent

    Mosaic landscapes under shifting cultivation, with their dynamic mix of managed and natural land covers, often fall through the cracks in remote sensing-based land cover and land use classifications, as these are unable to adequately capture such landscapes' dynamic nature and complex spectral and spatial signatures. But information about such landscapes is urgently needed to improve the outcomes of global earth system modelling and large-scale carbon and greenhouse gas accounting. This study combines existing global Landsat-based deforestation data covering the years 2000 to 2014 with very high-resolution satellite imagery to visually detect the specific spatio-temporal pattern of shifting cultivation at a one-degree cell resolution worldwide. The accuracy levels of our classification were high, with an overall accuracy above 87%. We estimate the current global extent of shifting cultivation and compare it to other current global mapping endeavors as well as results of literature searches. Based on an expert survey, we make a first attempt at estimating past trends as well as possible future trends in the global distribution of shifting cultivation until the end of the 21st century. With 62% of the investigated one-degree cells in the humid and sub-humid tropics currently showing signs of shifting cultivation, the majority in the Americas (41%) and Africa (37%), this form of cultivation remains widespread, and it would be wrong to speak of its general global demise in the last decades. We estimate that shifting cultivation landscapes currently cover roughly 280 million hectares worldwide, including both cultivated fields and fallows. While only an approximation, this estimate is clearly smaller than the areas mentioned in the literature, which range up to 1,000 million hectares. Based on our expert survey and historical trends, we estimate a possible strong decrease in shifting cultivation over the next decades, raising issues of livelihood security and resilience among people currently depending on shifting cultivation.
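The headline accuracy figure is a standard confusion-matrix calculation: overall accuracy is the share of validated one-degree cells whose mapped class matches the visually interpreted reference class. A minimal sketch in Python, with illustrative counts rather than the study's actual validation data:

```python
# Hypothetical confusion matrix over validated one-degree cells
# (keys: (reference class, mapped class)). Counts are illustrative only.
confusion = {
    ("shifting", "shifting"): 520,
    ("shifting", "other"): 55,
    ("other", "shifting"): 48,
    ("other", "other"): 377,
}

total = sum(confusion.values())
correct = sum(n for (ref, mapped), n in confusion.items() if ref == mapped)
overall_accuracy = correct / total  # diagonal mass over total mass
print(f"overall accuracy: {overall_accuracy:.1%}")
```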

    Development of Machine Learning based approach to predict fuel consumption and maintenance cost of Heavy-Duty Vehicles using diesel and alternative fuels

    The transportation sector is one of the major contributors of human-made greenhouse gases (GHG), namely carbon dioxide (CO2), methane (CH4), and nitrous oxide (N2O), with heavy-duty vehicles (HDV) contributing about 27% of the sector's overall share. In addition to the rapid increase in global temperature, airborne pollutants from diesel vehicles also present a risk to human health. Even a small energy-saving improvement to the century-old, mature diesel technology could yield a significant impact on minimizing greenhouse gas emissions. With the increasing focus on reducing emissions and operating costs, there is a need for efficient and effective methods to predict fuel consumption, maintenance costs, and total cost of ownership for heavy-duty vehicles. Every improvement achieved in this direction directly reduces the total cost of ownership for a fleet owner, thereby bringing economic prosperity and reducing oil imports for the economy. Motivated by these crucial goals, the present research integrates data-driven techniques using machine learning algorithms on historical data collected from medium- and heavy-duty vehicles. The primary motivation for this research is to address the challenges faced by the medium- and heavy-duty transportation industry in reducing emissions and operating costs. The development of a machine learning-based approach can provide a more accurate and reliable prediction of fuel consumption and maintenance costs for medium- and heavy-duty vehicles. This, in turn, can help fleet owners and operators make informed decisions related to fuel type, route planning, and vehicle maintenance, leading to reduced emissions and lower operating costs. Artificial Intelligence (AI) in the automotive industry has witnessed massive growth in the last few years.
Heavy-duty transportation research and commercial fleets are adopting machine learning (ML) techniques for applications such as autonomous driving, fuel economy/emissions, and predictive maintenance. However, to perform well, modern AI methods require a large amount of high-quality, diverse, and well-balanced data, something which is still not widely available in the automotive industry, especially in the division of medium- and heavy-duty trucks. The research methodology involves the collection of data at the West Virginia University (WVU) Center for Alternative Fuels, Engines, and Emissions (CAFEE) lab in collaboration with fleet management companies operating medium- and heavy-duty vehicles on diesel and alternative fuels, including compressed natural gas, liquefied propane gas, hydrogen fuel cells, and electric vehicles. The data collected is used to develop machine learning models that can accurately predict fuel consumption and maintenance costs based on various parameters such as vehicle weight, speed, route, fuel type, and engine type. The expected outcomes of this research include 1) the development of a neural network model that can accurately predict the fuel consumed by a vehicle per trip given parameters such as vehicle speed, engine speed, and engine load, and 2) the development of machine learning models for estimating the average cost-per-mile based on the historical maintenance data of goods movement trucks, delivery trucks, school buses, transit buses, refuse trucks, and vocational trucks using fuels such as diesel, natural gas, and propane. Due to large variations in maintenance data for vehicles performing various activities and using different fuel types, regular machine learning or ensemble models do not generalize well. Hence, a mixed-effect random forest (MERF) is developed to capture the fixed and random effects that occur due to the varying duty cycles of vocational heavy-duty trucks that perform different tasks.
The developed model helps in predicting the average maintenance cost given the vocation, fuel type, and region of operation, making it easy for fleet companies to make procurement decisions based on their requirements and total cost of ownership. Both models can provide insights into the impact of various parameters and route planning on the total cost of ownership, which is affected by fuel cost and maintenance and repair costs. In conclusion, the development of a machine learning-based approach can provide a reliable and efficient solution to predict fuel consumption and maintenance costs impacting the total cost of ownership for heavy-duty vehicles. This, in turn, can help the transportation industry reduce emissions and operating costs, contributing to a more sustainable and efficient transportation system. These models can be optimized with more training data and deployed in a real-time environment such as a cloud service or an onboard vehicle system, as required by individual companies.
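The mixed-effect idea can be illustrated compactly: a predicted cost is a population-level (fixed) component plus a group-level (random) component per vocation, with group means shrunk toward the grand mean. The sketch below uses made-up cost-per-mile numbers and an assumed between-vocation variance; the actual MERF fits a random forest for the fixed effect and estimates variance components iteratively:

```python
import statistics

# Illustrative cost-per-mile records grouped by vocation; all values are made up.
records = {
    "refuse":   [1.10, 1.25, 1.18, 1.30],
    "transit":  [0.85, 0.92, 0.88],
    "delivery": [0.60, 0.66, 0.63, 0.59, 0.64],
}

all_costs = [c for costs in records.values() for c in costs]
grand_mean = statistics.mean(all_costs)      # fixed (population-level) effect

# Assumed variance components; a real MERF estimates these during fitting.
sigma2_e = statistics.pvariance(all_costs)   # residual variance (rough stand-in)
sigma2_b = 0.05                              # between-vocation variance (assumed)

random_effects = {}
for vocation, costs in records.items():
    n = len(costs)
    # BLUP-style shrinkage of the group mean toward the grand mean:
    shrink = sigma2_b / (sigma2_b + sigma2_e / n)
    random_effects[vocation] = shrink * (statistics.mean(costs) - grand_mean)

for vocation, b in random_effects.items():
    print(f"{vocation}: predicted cost/mile = {grand_mean + b:.2f}")
```

Groups with more records are shrunk less, which is how the model borrows strength across vocations with sparse maintenance histories.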

    The Computational Difficulty of Bribery in Qualitative Coalitional Games

    Qualitative coalitional games (QCG) are representations of coalitional games in which self-interested agents, each with their own individual goals, group together in order to achieve a set of goals which satisfy all the agents within that group. In such a representation, it is the strategy of the agents to find the best coalition to join. Previous work into QCGs has investigated the computational complexity of determining which is the best coalition to join. We plan to expand on this work by investigating the computational complexity of computing agent power in QCGs, as well as by showing that insincere strategies, particularly bribery, are possible when the envy-freeness assumption is removed, but that it is computationally difficult to identify the best agents to bribe.
    Keywords: bribery, coalition formation, computational complexity
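A toy illustration of the underlying representation: agents have goal sets, a characteristic function lists which goal sets each coalition can achieve, and a coalition succeeds if some achievable goal set satisfies every member. The brute-force search below (agent names, goals, and the characteristic function are all invented for illustration) hints at why such questions become computationally hard as the number of agents grows:

```python
from itertools import combinations

# Toy QCG: each agent's goal set. All names and sets are illustrative.
agent_goals = {"a1": {"g1"}, "a2": {"g2", "g3"}, "a3": {"g3"}}

# Characteristic function: coalitions mapped to the goal sets they can bring about.
achievable = {
    frozenset({"a1"}): [{"g1"}],
    frozenset({"a2", "a3"}): [{"g3"}],
    frozenset({"a1", "a2", "a3"}): [{"g1", "g2"}, {"g1", "g3"}],
}

def successful_coalitions(agents):
    """Brute force over all coalitions: a coalition succeeds if some
    achievable goal set intersects every member's goal set."""
    result = []
    for r in range(1, len(agents) + 1):
        for coalition in combinations(sorted(agents), r):
            c = frozenset(coalition)
            for goals in achievable.get(c, []):
                if all(goals & agent_goals[a] for a in c):
                    result.append(c)
                    break
    return result

print(successful_coalitions(agent_goals))
```

The outer loop ranges over all 2^n - 1 coalitions, which is the exponential blow-up underlying the hardness results the abstract refers to.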

    Betting and Belief: Prediction Markets and Attribution of Climate Change

    Despite much scientific evidence, a large fraction of the American public doubts that greenhouse gases are causing global warming. We present a simulation model as a computational test-bed for climate prediction markets. Traders adapt their beliefs about future temperatures based on the profits of other traders in their social network. We simulate two alternative climate futures, in which global temperatures are primarily driven either by carbon dioxide or by solar irradiance. These represent, respectively, the scientific consensus and a hypothesis advanced by prominent skeptics. We conduct sensitivity analyses to determine how a variety of factors describing both the market and the physical climate may affect traders' beliefs about the cause of global climate change. Market participation causes most traders to converge quickly toward believing the "true" climate model, suggesting that a climate market could be useful for building public consensus.
    Comment: All code and data for the model are available at http://johnjnay.com/predMarket/. Forthcoming in Proceedings of the 2016 Winter Simulation Conference. IEEE Press.
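The core adaptive mechanism, traders imitating the belief of their most profitable network neighbour, can be sketched in a few lines. This is a toy rendering with assumed payoffs and network structure, not the paper's model:

```python
import random

random.seed(0)
N_TRADERS, ROUNDS = 100, 50
TRUE_MODEL = "co2"  # the simulated "true" climate driver

# Each trader starts with a random belief and watches a few random peers.
beliefs = [random.choice(["co2", "solar"]) for _ in range(N_TRADERS)]
peers = [random.sample(range(N_TRADERS), 5) for _ in range(N_TRADERS)]

for _ in range(ROUNDS):
    # Betting on the true model pays off on average; payoffs are noisy.
    profit = [random.gauss(1.0 if b == TRUE_MODEL else -1.0, 0.5) for b in beliefs]
    # Each trader adopts the belief of the most profitable trader
    # among themselves and their peers.
    beliefs = [beliefs[max(peers[i] + [i], key=lambda j: profit[j])]
               for i in range(N_TRADERS)]

share_true = beliefs.count(TRUE_MODEL) / N_TRADERS
print(f"share believing the true model: {share_true:.0%}")
```

Because beliefs spread only by imitation, once the losing belief goes extinct in the network the population stays converged, mirroring the convergence result the abstract reports.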

    Cultural context shapes the carbon footprints of recipes

    Food systems are responsible for a third of global anthropogenic greenhouse gas emissions central to global warming and climate change. Increasing awareness of the environmental impact of food-centric emissions has led to the carbon footprint quantification of food products. However, food consumption is dictated by traditional dishes, the cultural capsules that encode traditional protocols for culinary preparations. Carbon footprint estimation of recipes will provide actionable insights into the environmental sustainability of culturally influenced patterns in recipe compositions. By integrating the carbon footprint data of food products with a gold-standard repository of recipe compositions, we show that the ingredient constitution dictates the carbon load of recipes. Beyond the prevalent focus on individual food products, our analysis quantifies the carbon footprint of recipes within the cultural contexts that shape culinary protocols. While emphasizing the widely understood harms of animal-sourced ingredients, this article presents a nuanced perspective on the environmental impact of culturally influenced dietary practices. Along with the grasp of taste and nutrition correlates, such an understanding can help design palatable and environmentally sustainable recipes. Systematic compilation of fine-grained carbon footprint data is the way forward to address the challenge of sustainably feeding an anticipated population of 10 billion.
    Comment: 37 pages (inclusive of Extended Figures and Supplementary Materials), 5 Main Figures, 6 Extended Figures, 3 Supplementary Figures, and 6 Supplementary Tables
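The basic accounting step is straightforward: a recipe's carbon load is the mass-weighted sum of its ingredients' footprint factors. A minimal sketch with rough, illustrative factors rather than the study's dataset:

```python
# Illustrative per-kg footprint factors (kg CO2e / kg); rough,
# literature-style numbers invented for this example.
footprint = {"beef": 27.0, "rice": 2.7, "onion": 0.3, "lentils": 0.9}

# Recipes as ingredient masses in kg (also illustrative).
recipes = {
    "beef curry": {"beef": 0.5, "rice": 0.3, "onion": 0.2},
    "dal":        {"lentils": 0.4, "rice": 0.3, "onion": 0.2},
}

def recipe_footprint(ingredients):
    """Carbon load of a recipe = sum over ingredients of mass x footprint factor."""
    return sum(mass * footprint[name] for name, mass in ingredients.items())

for name, ingredients in recipes.items():
    print(f"{name}: {recipe_footprint(ingredients):.2f} kg CO2e")
```

Even this toy example shows how a single animal-sourced ingredient can dominate a recipe's total, which is the pattern the abstract highlights.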

    Locating and quantifying gas emission sources using remotely obtained concentration data

    We describe a method for detecting, locating and quantifying sources of gas emissions to the atmosphere using remotely obtained gas concentration data; the method is applicable to gases of environmental concern. We demonstrate its performance using methane data collected from aircraft. Atmospheric point concentration measurements are modelled as the sum of a spatially and temporally smooth atmospheric background concentration, augmented by concentrations due to local sources. We model source emission rates with a Gaussian mixture model and use a Markov random field to represent the atmospheric background concentration component of the measurements. A Gaussian plume atmospheric eddy dispersion model represents gas dispersion between sources and measurement locations. Initial point estimates of background concentrations and source emission rates are obtained using mixed L2-L1 optimisation over a discretised grid of potential source locations. Subsequent reversible jump Markov chain Monte Carlo inference provides estimated values and uncertainties for the number, emission rates and locations of sources unconstrained by a grid. Source area, atmospheric background concentrations and other model parameters are also estimated. We investigate the performance of the approach first using a synthetic problem, then apply the method to real data collected from an aircraft flying first over a 1,600 km^2 area containing two landfills, then over a 225 km^2 area containing a gas flare stack.
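The dispersion component can be illustrated with the standard ground-reflected Gaussian plume formula, which gives the concentration downwind of a point source. The linear dispersion-width growth and the coefficient values below are simplifying assumptions for the sketch; operational models use stability-class curves:

```python
import math

def gaussian_plume(q, u, x, y, z, h, a=0.08, b=0.06):
    """Ground-reflected Gaussian plume concentration (kg/m^3) at (x, y, z)
    downwind of a point source.
    q: emission rate (kg/s), u: wind speed (m/s), h: effective source height (m).
    Dispersion widths grow linearly with downwind distance x here; the
    coefficients a, b are assumed stability parameters for illustration.
    """
    sigma_y, sigma_z = a * x, b * x
    lateral = math.exp(-y**2 / (2 * sigma_y**2))
    vertical = (math.exp(-(z - h)**2 / (2 * sigma_z**2))
                + math.exp(-(z + h)**2 / (2 * sigma_z**2)))  # ground reflection
    return q / (2 * math.pi * u * sigma_y * sigma_z) * lateral * vertical

# Methane source emitting 1 g/s in a 5 m/s wind, sampled 500 m downwind
# on the plume centreline at 10 m height (an aircraft-like transect point).
c = gaussian_plume(q=1e-3, u=5.0, x=500.0, y=0.0, z=10.0, h=3.0)
print(f"{c:.3e} kg/m^3")
```

Inverting this forward model, i.e. inferring q and the source location from many such concentration samples, is what the mixed L2-L1 optimisation and MCMC stages of the method perform.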

    Counting Carbon: A Survey of Factors Influencing the Emissions of Machine Learning

    Machine learning (ML) requires using energy to carry out computations during the model training process. The generation of this energy comes with an environmental cost in terms of greenhouse gas emissions, depending on the quantity used and the energy source. Existing research on the environmental impacts of ML has been limited to analyses covering a small number of models and does not adequately represent the diversity of ML models and tasks. In the current study, we present a survey of the carbon emissions of 95 ML models across time and different tasks in natural language processing and computer vision. We analyze them in terms of the energy sources used, the amount of CO2 emissions produced, how these emissions evolve across time, and how they relate to model performance. We conclude with a discussion regarding the carbon footprint of our field and propose the creation of a centralized repository for reporting and tracking these emissions.
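The bookkeeping behind such surveys is typically a back-of-envelope product of power draw, training time, data-centre overhead, and grid carbon intensity. A sketch with purely illustrative numbers:

```python
# Training-emissions estimate of the kind such surveys aggregate.
# All values below are illustrative assumptions, not measured figures.
gpu_power_kw = 0.3       # average draw per GPU (kW)
n_gpus = 8               # accelerators used for the run
hours = 120              # wall-clock training time
pue = 1.5                # data-centre overhead (power usage effectiveness)
carbon_intensity = 0.4   # kg CO2e per kWh of the local grid

energy_kwh = gpu_power_kw * n_gpus * hours * pue
emissions_kg = energy_kwh * carbon_intensity
print(f"{energy_kwh:.0f} kWh -> {emissions_kg:.0f} kg CO2e")
```

The grid carbon-intensity factor varies by more than an order of magnitude between regions, which is why the energy source matters as much as the quantity of energy used.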