8 research outputs found

    Deep Neural Networks for Future Low Carbon Energy Technologies: Potential, Challenges and Economic Development

    Global energy demand is growing every year, and fossil fuels will not be able to fulfil our energy needs in the near future. Carbon emissions from fossil fuels hit an all-time high in 2018 due to increased energy consumption around the globe. Renewable energy, on the other hand, is an emerging technology and is considered a reliable alternative to fossil fuels: it is much safer and cleaner than conventional sources. With advances in technology, the renewable energy sector has made significant progress in the last decade. One of the most significant challenges large-scale renewable energy farms face is the unpredictability of weather patterns. This stochastic nature of the weather data significantly affects solar and wind farms. Although classical weather-forecasting technologies are in place, they are not efficient enough to give the base station feedback on sudden changes or future conditions. The demand for renewable energy will only increase, which is why renewable energy companies need to invest in Artificial Intelligence (AI), the Internet of Things (IoT), and other emerging technologies to improve productivity and overcome these shortfalls. Even large consumers of renewable energy, such as supermarkets, factories, offices and railways, can use AI to make data-driven decisions on power usage and demand. In this article, we present an overview of AI techniques for modelling, prediction and forecasting of wind-farm data. Additionally, we present the economic impact of low-carbon energy techniques by analysing climate change patterns and diverse sources of power generation for the Scotland, United Kingdom region as a case study.
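To make the class of techniques concrete, here is a minimal, hypothetical sketch (not taken from the article) of one such approach: a multi-layer perceptron trained for one-hour-ahead wind-speed forecasting. The synthetic diurnal series and every parameter below are illustrative assumptions standing in for real farm telemetry.

```python
# Illustrative sketch: short-horizon wind-speed forecasting with an MLP.
# The synthetic sinusoid-plus-noise series stands in for hourly telemetry.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(0)
t = np.arange(2000)
# Assumed diurnal pattern: ~8 m/s mean with a 24-hour cycle plus noise.
wind = 8 + 2 * np.sin(2 * np.pi * t / 24) + rng.normal(0, 1, t.size)

# Use the previous 24 hours as features to predict the next hour.
lag = 24
X = np.array([wind[i:i + lag] for i in range(len(wind) - lag)])
y = wind[lag:]

X_train, X_test, y_train, y_test = train_test_split(X, y, shuffle=False)
model = MLPRegressor(hidden_layer_sizes=(64, 32), max_iter=500, random_state=0)
model.fit(X_train, y_train)
rmse = mean_squared_error(y_test, model.predict(X_test)) ** 0.5
print(f"1-hour-ahead RMSE: {rmse:.2f} m/s")
```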

    Predicting Top-of-Atmosphere Thermal Radiance Using MERRA-2 Atmospheric Data with Deep Learning

    Image data from space-borne thermal infrared (IR) sensors are used for a variety of applications; however, they are often limited by their temporal resolution (i.e., repeat coverage). To potentially increase the temporal availability of thermal image data, a study was performed to determine the extent to which thermal image data can be simulated from available atmospheric and surface data. The work conducted here explored the use of the Modern-Era Retrospective analysis for Research and Applications, Version 2 (MERRA-2), developed by the National Aeronautics and Space Administration (NASA), to predict top-of-atmosphere (TOA) thermal IR radiance globally at time scales finer than available satellite data. For this case study, TOA radiance data were derived for band 31 (10.97 μm) of the Moderate Resolution Imaging Spectroradiometer (MODIS) sensor. Two approaches were followed: an atmospheric radiative transfer forward modeling approach and a supervised learning approach. The first approach uses forward modeling to predict TOA radiance from the available surface and atmospheric data. The second approach applied four different supervised learning algorithms to the atmospheric data: a linear least squares regression model, a non-linear support vector regression (SVR) model, a multi-layer perceptron (MLP), and a convolutional neural network (CNN). This research found that the multi-layer perceptron model produced the lowest overall error rates, with a root mean square error (RMSE) of 1.36 W/m²·sr·μm when compared to actual Terra/MODIS band 31 image data. The study also found that for radiances above 6 W/m²·sr·μm, the forward modeling approach could predict TOA radiance to within 12 percent, and the best supervised learning approach could predict TOA radiance to within 11 percent.
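A rough sketch of the supervised-learning side of this comparison is below; it pits the linear baseline against an MLP on held-out RMSE. The feature count, the nonlinear target function, and all values are assumptions standing in for real MERRA-2 atmospheric profiles and MODIS band-31 radiances.

```python
# Minimal sketch: map atmospheric-state features to band-31 TOA radiance.
# A synthetic surrogate replaces the MERRA-2 inputs used in the study.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(0)
n, n_features = 5000, 20          # e.g. 20 assumed pressure-level features
X = rng.normal(size=(n, n_features))
# Hypothetical nonlinear mapping standing in for radiative transfer physics.
y = 8 + X[:, 0] ** 2 + 0.5 * X[:, 1] * X[:, 2] + rng.normal(0, 0.3, n)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
for name, model in [("linear", LinearRegression()),
                    ("MLP", MLPRegressor(hidden_layer_sizes=(64, 64),
                                         max_iter=1000, random_state=0))]:
    model.fit(X_tr, y_tr)
    rmse = mean_squared_error(y_te, model.predict(X_te)) ** 0.5
    print(f"{name}: RMSE = {rmse:.2f} W/m^2/sr/um")
```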

    Pavement life cycle assessment: from case study to machine learning modeling

    Climate change is a global challenge with long-term implications. Human activities are changing the global climate system, and the warming of the climate system is undeniable. According to a roadway construction study, constructing the surface layer of an asphalt pavement alone generates a carbon footprint of 65.8 kg of CO₂ per km. A sensible approach to studying the environmental impact of road pavements is therefore crucial. Pavement life cycle assessment (LCA) is a comprehensive method for evaluating the environmental impacts of a pavement section. It features a cradle-to-grave approach assessing the critical stages of the pavement's life: material production, initial construction, maintenance, use and end-of-life phases together make up the entire pavement life cycle. The thesis consists of three components, the first of which found the environmental impact of different pavement maintenance and rehabilitation (M&R) techniques in the maintenance phase. The second component evaluated the environmental impact of pavement-vehicle interaction (PVI) in the use phase. Finally, the goal of the third component was to develop a set of pavement LCA models. To evaluate the environmental impact of four major M&R techniques (rout and sealing, patching, hot in-place recycling (HIR) and cold in-place recycling (CIR)), a fractional factorial design approach was first applied to determine which factors were significant. Considering those significant factors and other necessary data, a hypothetical LCA case study was performed for the city of St. John's. It was found that global warming potential (GWP) held the highest values among the four M&R techniques. The CIR technique produced the lowest percentage of GWP (83.87%), while asphalt patching produced the highest CO₂ percentage (92.22%), making it the least suitable option. To understand the PVI effect, the required data and information were collected from the Long-Term Pavement Performance (LTPP) program. Out of 141 Canadian road sections, 22 were selected. Several climatic parameters, including annual precipitation, annual temperature, and annual freezing index data, were collected from these 22 sections and further processed to develop clusters using a hierarchical clustering approach. Finally, the Athena Pavement LCA tool was used to measure the environmental impact of the PVI effect for each cluster. It was found that cluster 2 (high annual precipitation, high annual freezing index, and medium annual temperature) experienced the highest rate of IRI increase and, therefore, a high GWP value. The LCA results also indicated a relatively higher GWP due to pavement roughness from heavy vehicle traffic compared with light vehicle traffic. For the PVI effect due to pavement deflection, cluster 4 (maximum vehicle load and minimum subgrade stiffness) emitted the highest GWP among all the clusters. Pavement LCA tools require an extensive amount of data to estimate environmental impact. In the first and second studies, it was not possible to consider all Canadian road pavement sections because of the time required to perform an LCA for each section. Therefore, the database management software Microsoft SQL Server Management Studio was used to filter and manipulate the LTPP database covering all Canadian road sections. The manipulated data were then used to develop the LCA models using machine learning algorithms: multiple linear regression, polynomial regression, decision tree regression and support vector regression. The models determined the significant contributors and quantified the CO₂ emissions in the material production, initial construction, maintenance and use phases. Model validation was also performed, and the study revealed the contribution of each Canadian province's CO₂ emissions. The proposed LCA models will help decision-makers in the pavement management system.
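A minimal sketch of the hierarchical clustering step described above is given here; the climate values are synthetic stand-ins for the 22 LTPP sections, and a four-cluster cut is assumed (the abstract names clusters 2 and 4, so at least four clusters are implied).

```python
# Illustrative sketch: group road sections by annual precipitation,
# mean temperature and freezing index using Ward hierarchical clustering.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
# Synthetic stand-ins for 22 sections:
# columns: precipitation (mm), mean temperature (C), freezing index (C-days)
climate = np.column_stack([
    rng.uniform(300, 1500, 22),
    rng.uniform(-2, 10, 22),
    rng.uniform(200, 2500, 22),
])

Z = linkage(StandardScaler().fit_transform(climate), method="ward")
labels = fcluster(Z, t=4, criterion="maxclust")   # assumed 4-cluster cut
for k in range(1, 5):
    print(f"cluster {k}: sections {np.where(labels == k)[0].tolist()}")
```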
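And a sketch of the four model families named for the LCA models, run on a synthetic stand-in for the processed LTPP features; the feature meanings, target scale and hyperparameters are illustrative assumptions, not the thesis data.

```python
# Illustrative comparison of the four regression families from the abstract.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.preprocessing import PolynomialFeatures
from sklearn.pipeline import make_pipeline
from sklearn.tree import DecisionTreeRegressor
from sklearn.svm import SVR
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n = 800
X = rng.normal(size=(n, 4))   # e.g. traffic, layer thickness, temperature, IRI
y = 50 + 10 * X[:, 0] + 5 * X[:, 1] ** 2 + rng.normal(0, 3, n)  # kg CO2/km

models = {
    "multiple linear": LinearRegression(),
    "polynomial (deg 2)": make_pipeline(PolynomialFeatures(2), LinearRegression()),
    "decision tree": DecisionTreeRegressor(max_depth=6, random_state=0),
    "SVR (RBF)": SVR(C=10.0),
}
for name, model in models.items():
    r2 = cross_val_score(model, X, y, cv=5, scoring="r2").mean()
    print(f"{name}: mean CV R^2 = {r2:.3f}")
```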

    A data driven approach for diagnosis and management of yield variability attributed to soil constraints

    Get PDF
    Australian agriculture does not value data to the level required for true precision management. Consequently, agronomic recommendations are frequently based on limited soil information and do not adequately address the spatial variance of the constraints present, which leads to lost productivity. Because of the costs of soil analysis, landowners and practitioners are often reluctant to invest in soil sampling, as the likely economic gain from this investment has not been adequately investigated. A value proposition is therefore required to realise the agronomic and economic benefits of increased site-specific data collection aimed at ameliorating soil constraints. This study is principally concerned with identifying this value proposition by investigating the spatially variable nature of soil constraints and their interactions with crop yield at the sub-field scale. Agronomic and economic benefits are quantified against simulated ameliorant recommendations made on the basis of varied sampling approaches. To assess the effects of sampling density on agronomic recommendations, a 108 ha site was investigated, where 1200 direct soil measurements were obtained (300 sample locations at 4 depth increments) to form the benchmark dataset for this study. Random transect sampling (for field-average estimates), zone management, regression kriging (SSPFe) and ordinary kriging approaches were first investigated at various sampling densities (N=10, 20, 50, 100, 150, 200, 250 and 300) to observe the effects on lime and gypsum ameliorant recommendation advice. The ordinary kriging method provided the most accurate spatial recommendation advice for gypsum and lime at all depth increments investigated (i.e. 0–10 cm, 10–20 cm, 20–40 cm and 40–60 cm), with the majority of the accuracy improvement achieved by 50 samples (≈0.5 samples/ha). The lack of correlation between the environmental covariates and the target soil variables prevented regression kriging from outperforming ordinary kriging. To extend these findings and identify the economically optimal sampling density for the investigation site, a yield prediction model was required to estimate the spatial yield response due to amelioration. Given the complex nonlinear relationships between soil properties and yield, four machine learning models (both linear and nonlinear) were applied: a mixed-linear regression, a regression tree (Cubist), an artificial neural network and a support vector machine. These were trained on the 1200 directly measured soil samples, each with 9 soil measurements describing structural features (soil pH, exchangeable sodium percentage, electrical conductivity, clay, silt, sand, bulk density, potassium and cation exchange capacity), to predict the spatial yield variability at the investigation site against four years of yield data. The Cubist regression tree model produced superior results in terms of generalization whilst achieving an acceptable R² for training and validation (up to R² = 0.80 for training and R² = 0.78 for validation). The lack of temporal yield information constrained the ability to develop a temporally stable yield prediction model accounting for the uncertainties of climate interactions associated with the spatial variability of yield, although accurate predictive performance was achieved for single-season models.
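A minimal sketch of this four-model yield comparison follows, assuming synthetic data shaped like the benchmark set (1200 samples, 9 soil measurements). Cubist and mixed-linear models are not available in scikit-learn, so a plain decision tree and ordinary linear regression stand in for them here.

```python
# Illustrative sketch: compare four model families for yield prediction
# from soil structural features, on synthetic stand-in data.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.tree import DecisionTreeRegressor
from sklearn.neural_network import MLPRegressor
from sklearn.svm import SVR
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score

rng = np.random.default_rng(0)
n = 1200                      # matches the benchmark sample count
X = rng.normal(size=(n, 9))   # pH, ESP, EC, clay, silt, sand, BD, K, CEC
# Hypothetical yield response (t/ha): sodicity (ESP) depresses yield.
y = 3.0 - 0.4 * X[:, 1] + 0.3 * X[:, 0] * X[:, 3] + rng.normal(0, 0.3, n)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
for name, m in [("linear (mixed-linear stand-in)", LinearRegression()),
                ("tree (Cubist stand-in)", DecisionTreeRegressor(max_depth=8)),
                ("ANN", MLPRegressor((32, 16), max_iter=2000, random_state=0)),
                ("SVM", SVR(C=10.0))]:
    m.fit(X_tr, y_tr)
    print(f"{name}: validation R^2 = {r2_score(y_te, m.predict(X_te)):.2f}")
```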
    Of the spatial prediction methods investigated, random transect sampling and ordinary kriging were adopted to simulate 'blanket-rate' (BR) and 'variable-rate' (VR) gypsum applications, respectively, for the amelioration of sodicity at the investigated site. For each sampling density, the spatial yield response to a BR and a VR application of gypsum was estimated with the developed Cubist yield prediction model, calibrated for the investigation site. Accounting for the cost of sampling and the financial gains from the yield response, the economically optimal sampling density for the investigation site was 0.2 cores/ha for the 0–20 cm treatment and 0.5 cores/ha for the 0–60 cm treatment under a VR approach. Whilst this entailed an increased soil-data investment of $26.4/ha and $136/ha for the 0–20 cm and 0–60 cm treatments respectively, compared with a BR approach, the yield gains from the improved spatial gypsum application were in excess of 6 t and 26 t per annum. Consequently, the net benefit of the increased data investment was estimated at up to $104,000 after 20 years for the 0–60 cm profile treatment. To identify the influence of qualitative data and management information on soil–yield interactions, a probabilistic approach was investigated as an alternative where empirical models fail. Using soil compaction as an example, a Bayesian Belief Network was developed to explore the interactions of machine loading, soil wetness and site characteristics with the potential yield declines due to compaction induced by agricultural traffic. The developed tool was subsequently able to broadly describe the agronomic impacts of decisions made in data-limited environments. This body of work presents a combined approach to improving both the diagnosis and management of soil constraints using a data-driven approach, and a detailed discussion is provided on furthering this work and improving upon the results obtained. By continuing this work it is possible to change the industry's attitude to data collection and significantly improve the productivity, profitability and soil husbandry of agricultural systems.
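The economics of the VR approach can be checked with back-of-envelope arithmetic using the figures quoted above. The grain price below is an assumed parameter (the thesis's own price and discounting assumptions are not stated here, so the output will not reproduce the $104,000 figure exactly).

```python
# Back-of-envelope net-benefit check for VR over BR gypsum application.
GRAIN_PRICE = 300.0          # $/t, assumed; not stated in the abstract
YEARS = 20

def net_benefit(extra_sampling_cost_per_ha, area_ha, yield_gain_t_per_yr):
    """Net benefit of VR over BR: yield revenue minus the extra data cost."""
    extra_cost = extra_sampling_cost_per_ha * area_ha   # one-off investment
    revenue = yield_gain_t_per_yr * GRAIN_PRICE * YEARS
    return revenue - extra_cost

# 0-60 cm treatment: $136/ha extra on the 108 ha site, ~26 t/yr yield gain.
print(f"0-60 cm treatment: ${net_benefit(136, 108, 26):,.0f} over {YEARS} years")
```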