8,646 research outputs found

    Transformer Based Model for Predicting Rapid Impact Compaction Outcomes: A Case Study of Utapao International Airport

    Full text link
    This paper introduces a novel deep learning approach to predict the engineering properties of the ground improved by Rapid Impact Compaction (RIC), which is a ground improvement technique that uses a drop hammer to compact the soil and fill layers. The proposed approach uses transformer-based neural networks to capture the complex nonlinear relationships between the input features, such as the hammer energy, drop height, and number of blows, and the output variables, such as the cone resistance. The approach is applied to a real-world dataset from a trial test section for the new apron construction of the Utapao International Airport in Thailand. The results show that the proposed approach outperforms the existing methods in terms of prediction accuracy and efficiency and provides interpretable attention maps that reveal the importance of different features for RIC prediction. The paper also discusses the limitations and future directions of applying deep learning methods to RIC prediction.
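
    As a rough illustration of the approach this abstract describes, the sketch below wires a small transformer encoder over tabular compaction features to predict cone resistance. The feature list, model dimensions, and class name are illustrative assumptions, not the paper's actual implementation.

```python
# Hypothetical sketch of a transformer encoder over tabular RIC features
# (hammer energy, drop height, number of blows, ...) predicting cone resistance.
import torch
import torch.nn as nn

class TabularTransformer(nn.Module):
    def __init__(self, n_features: int, d_model: int = 64, n_heads: int = 4, n_layers: int = 2):
        super().__init__()
        # Each scalar input feature becomes one "token" embedded into d_model dimensions.
        self.feature_embed = nn.Linear(1, d_model)
        self.pos_embed = nn.Parameter(torch.zeros(1, n_features, d_model))
        encoder_layer = nn.TransformerEncoderLayer(d_model=d_model, nhead=n_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(encoder_layer, num_layers=n_layers)
        self.head = nn.Linear(d_model, 1)  # regression head for cone resistance

    def forward(self, x):                        # x: (batch, n_features)
        tokens = self.feature_embed(x.unsqueeze(-1)) + self.pos_embed
        encoded = self.encoder(tokens)           # attention maps over features can be inspected
        return self.head(encoded.mean(dim=1))    # pool tokens, predict a scalar

# Illustrative use with three assumed features: hammer energy, drop height, number of blows.
model = TabularTransformer(n_features=3)
qc_pred = model(torch.randn(8, 3))               # predicted cone resistance for 8 samples
```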

    Virtual metrology for plasma etch processes.

    Get PDF
    Plasma processes can present difficult control challenges due to time-varying dynamics and a lack of relevant and/or regular measurements. Virtual metrology (VM) is the use of mathematical models with accessible measurements from an operating process to estimate variables of interest. This thesis addresses the challenge of virtual metrology for plasma processes, with a particular focus on semiconductor plasma etch. Introductory material covering the essentials of plasma physics, plasma etching, plasma measurement techniques, and black-box modelling techniques is first presented for readers not familiar with these subjects. A comprehensive literature review is then completed to detail the state of the art in modelling and VM research for plasma etch processes. To demonstrate the versatility of VM, a temperature monitoring system utilising a state-space model and Luenberger observer is designed for the variable specific impulse magnetoplasma rocket (VASIMR) engine, a plasma-based space propulsion system. The temperature monitoring system uses optical emission spectroscopy (OES) measurements from the VASIMR engine plasma to correct temperature estimates in the presence of modelling error and inaccurate initial conditions. Temperature estimates within 2% of the real values are achieved using this scheme. An extensive examination of the implementation of a wafer-to-wafer VM scheme to estimate plasma etch rate for an industrial plasma etch process is presented. The VM models estimate etch rate using measurements from the processing tool and a plasma impedance monitor (PIM). A selection of modelling techniques is considered for VM modelling, and Gaussian process regression (GPR) is applied for the first time for VM of plasma etch rate. Models with global and local scope are compared, and modelling schemes that attempt to cater for the etch process dynamics are proposed. GPR-based windowed models produce the most accurate estimates, achieving mean absolute percentage errors (MAPEs) of approximately 1.15%. The consistency of the results presented suggests that this level of accuracy represents the best accuracy achievable for the plasma etch system at the current frequency of metrology. Finally, a real-time VM and model predictive control (MPC) scheme for control of plasma electron density in an industrial etch chamber is designed and tested. The VM scheme uses PIM measurements to estimate electron density in real time. A predictive functional control (PFC) scheme is implemented to cater for a time delay in the VM system. The controller achieves time constants of less than one second, no overshoot, and excellent disturbance rejection properties. The PFC scheme is further expanded by adapting the internal model in the controller in real time in response to changes in the process operating point.
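
    As a loose illustration of the "windowed" GPR virtual metrology models mentioned above, the sketch below refits a Gaussian process regressor on only the most recent wafers before estimating the next etch rate. The window length, kernel, and feature layout are assumptions for illustration, not the thesis configuration.

```python
# Illustrative "windowed" GPR virtual metrology model: refit on recent wafers only,
# so the estimator tracks slow drift in the etch process.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

def windowed_gpr_estimates(X, y, window=25):
    """Estimate etch rate for wafer i from a GPR fit on the previous `window` wafers."""
    kernel = 1.0 * RBF(length_scale=1.0) + WhiteKernel(noise_level=0.1)
    estimates = np.full(len(y), np.nan)
    for i in range(window, len(y)):
        gpr = GaussianProcessRegressor(kernel=kernel, normalize_y=True)
        gpr.fit(X[i - window:i], y[i - window:i])   # local model: recent wafers only
        estimates[i] = gpr.predict(X[i:i + 1])[0]
    return estimates

# X: tool + plasma impedance monitor (PIM) features per wafer, y: measured etch rate.
# mape = 100 * np.nanmean(np.abs((windowed_gpr_estimates(X, y) - y) / y))
```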

    Artificial Neural Networks in Agriculture

    Get PDF
    Modern agriculture needs to combine high production efficiency with a high quality of the obtained products. This applies to both crop and livestock production. To meet these requirements, advanced methods of data analysis are used more and more frequently, including those derived from artificial intelligence. Artificial neural networks (ANNs) are one of the most popular tools of this kind. They are widely used in solving various classification and prediction tasks, and for some time now also in the broadly defined field of agriculture. They can form part of precision farming and decision support systems. Artificial neural networks can replace classical methods of modelling many issues and are one of the main alternatives to classical mathematical models. The spectrum of applications of artificial neural networks is very wide. For a long time now, researchers from all over the world have been using these tools to support agricultural production, making it more efficient and providing the highest-quality products possible.

    A data driven deep neural network model for predicting boiling heat transfer in helical coils under high gravity

    Get PDF
    In this article, a deep artificial neural network (ANN) model is proposed to predict the boiling heat transfer in helical coils under high-gravity conditions, and it is compared with experimental data. A test rig is set up to provide high gravity up to 11 g with a heat flux up to 15,100 W/m² and a mass velocity range from 40 to 2000 kg m⁻² s⁻¹. In the current work, a total of 531 data samples have been used in the ANN model. The proposed model was developed in a Python Keras environment with a feed-forward back-propagation (FFBP) multi-layer perceptron (MLP) using eight features (mass flow rate, thermal power, inlet temperature, inlet pressure, direction, acceleration, tube inner surface area, helical coil diameter) as the inputs and two features (wall temperature, heat transfer coefficient) as the outputs. The deep ANN model, composed of three hidden layers with a total of 1098 neurons and 300,266 trainable parameters, was found to be optimal according to statistical error analysis. Performance evaluation is conducted based on six verification statistics (R², MSE, MAE, MAPE, RMSE and cosine proximity) between the experimental data and predicted values. The results demonstrate that an 8-512-512-64-2 neural network has the best performance in predicting the helical coil characteristics (R²=0.853, MSE=0.018, MAE=0.074, MAPE=1.110, RMSE=0.136, cosine proximity=1.000) in the testing stage. This indicates that, with the use of deep learning, the proposed model is able to successfully predict the heat transfer performance in helical coils, and it achieved especially good performance in predicting outputs that span a very large range of values.
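
    A minimal Keras sketch of the 8-512-512-64-2 feed-forward MLP described above follows; only the layer widths are taken from the abstract, while the activations, optimiser, and training settings are assumptions.

```python
# Sketch of an 8-512-512-64-2 feed-forward MLP in Keras. Activations, optimiser,
# and preprocessing are assumptions; only the layer widths follow the abstract.
import tensorflow as tf
from tensorflow.keras import layers, models

def build_boiling_mlp():
    model = models.Sequential([
        layers.Input(shape=(8,)),           # mass flow rate, thermal power, inlet T/p, ...
        layers.Dense(512, activation="relu"),
        layers.Dense(512, activation="relu"),
        layers.Dense(64, activation="relu"),
        layers.Dense(2),                    # wall temperature, heat transfer coefficient
    ])
    model.compile(optimizer="adam", loss="mse", metrics=["mae", "mape"])
    return model

model = build_boiling_mlp()
# model.fit(X_train, y_train, validation_split=0.2, epochs=200, batch_size=32)
```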

    A Review of Fault Diagnosing Methods in Power Transmission Systems

    Get PDF
    Transient stability is important in power systems. Disturbances such as faults need to be isolated to restore transient stability. A comprehensive review of fault diagnosing methods in the power transmission system is presented in this paper. Typically, voltage and current samples are used for analysis. Three tasks are presented separately to convey a more logical and comprehensive understanding of the concepts: fault detection, classification, and location. Feature extraction and transformation with dimensionality-reduction methods are discussed. Fault classification and location techniques largely use artificial intelligence (AI) and signal processing methods. After the discussion of the overall methods and concepts, advancements and future aspects are discussed. Generalized strengths and weaknesses of different AI and machine learning-based algorithms are assessed. A comparison of different fault detection, classification, and location methods is also presented, considering features, inputs, complexity, the systems used, and results. This paper may serve as a guideline for researchers to understand the different methods and techniques in this field.
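
    As a generic illustration of one pattern common in this literature (not a specific method from the review), the sketch below extracts simple statistical features from windowed three-phase voltage and current samples and trains an off-the-shelf classifier to label the fault type; the feature choices and classifier are assumptions.

```python
# Illustrative fault-classification pattern: statistical features from windowed
# three-phase voltage/current samples fed to an ML classifier. All choices here
# are assumptions for illustration only.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def window_features(v_abc, i_abc):
    """RMS and peak of each phase voltage/current over one analysis window."""
    signals = np.vstack([v_abc, i_abc])                  # shape (6, n_samples)
    rms = np.sqrt(np.mean(signals ** 2, axis=1))
    peak = np.max(np.abs(signals), axis=1)
    return np.concatenate([rms, peak])                   # 12 features per window

# X = np.array([window_features(v, i) for v, i in labelled_windows])
# y = fault_labels  # e.g. "AG", "AB", "ABG", "ABC", "no fault"
clf = RandomForestClassifier(n_estimators=200)
# clf.fit(X_train, y_train); y_pred = clf.predict(X_test)
```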

    Estimation of near-surface Air temperature during day and night-time from MODIS over Different LC/LU Using machine learning methods in Berlin

    Get PDF
    Urbanization is manifested by changes in the physical structure of the land surface, owing to extensive construction features such as buildings and street canyons, changes in the thermal structure caused by materials of different thermal properties, and intensive human activities. Urban areas are generally also characterized by higher surface air temperatures compared to the rural surroundings. This temperature excess can be up to 10-12°C or more and is referred to as the urban heat island (UHI) phenomenon. Since residents living in cities are especially affected by extreme temperature events, urban climate studies are gaining in importance. Currently, more than half of the world's population already lives in urban areas, which accentuates the major role agglomerations must play in mitigation and adaptation to climate change. Recommendations regarding behavioural patterns during heat stress situations and urban planning measures require a comprehensive understanding of the inner-urban temperature distribution, including the identification of thermal hot spots. Both very cold and very hot temperatures can affect human health: excessive exposure to heat is referred to as heat stress, and excessive exposure to cold as cold stress. Urban temperature data (2 m temperature data) are very important for all investigations of the urban heat island (UHI) effect and human health. They are usually based either on remote sensing techniques, on air temperature measurements, or on models. Remote sensing data, such as infra-red surface temperature from airborne measuring instruments, may have a very high spatial resolution and are presently available for many urban areas, but only in clear-sky cases. This spatial resolution is appropriate to exhibit the typical urban structures that are expected to cause the UHI effect. Nevertheless, information on surface temperature cannot replace air temperature data, since, besides the problem that the former is typically only available for single days, there is no fixed relation between surface and air temperatures. Especially for systematic analyses of the relationship between urban structures and 2 m temperatures under different weather situations, a large data basis is desirable. Air temperature data can be obtained from mobile measurements and from measurements at permanent or temporary weather stations. On the one hand, the use of weather stations provides high data accuracy using well-known standard technology. On the other hand, the spatial representativeness of weather station data within the urban environment, which is characterized by the surface composition including buildings, infrastructure and different types of land use, is very limited. Consequently, since the beginning of the 20th century, many efforts have been made to identify temperature patterns in urban areas with high spatial resolution instead of only using single-point information. In this regard, in this study air temperature (T2m or Tair) measurements from 20 ground weather stations in Berlin were used to estimate the relationship between air temperature and the remotely sensed land surface temperature (LST) measured by the Moderate Resolution Imaging Spectroradiometer (MODIS) over different land-cover types (LCT). Knowing this relationship enables a better understanding of the magnitude and pattern of the urban heat island (UHI) by considering the contribution of land cover to its formation.
In order to understand the seasonal behaviour of this relationship, the influence of the normalized difference vegetation index (NDVI), as an indicator of the degree of vegetation, on LST over different LCT was investigated. In addition, to evaluate the influence of LCT, a regression analysis between LST and NDVI was performed. The results demonstrate that the slope of the regression depends on the LCT and show a negative correlation between LST and NDVI over all LCTs. Our analysis indicates that the strength of the correlation between LST and NDVI depends on the season, time of day, and land cover. This statistical analysis can also be used to assess the variation of the LST–T2m relationship during day- and night-time over different land covers. The results show that LSTDay and LSTNight are correlated significantly (p = 0.0001) with T2mDay (daytime air temperature) and T2mNight (night-time air temperature). The correlation (r) between LSTDay and TDay is higher in cold seasons than in warm seasons. Moreover, during cold seasons a higher correlation was observed during daytime than during night-time over every LCT; in warm seasons, the reverse relationship was observed. It was found that in most cases, during daytime and in cold seasons, LST is lower than T2m, whereas in warm seasons the reverse was observed over all land-cover types. In every season, LSTNight was lower than or close to T2mNight. Air temperature (Tair or T2m) is an important climatological variable for forest biosphere processes and climate change research. Due to the low density and the uneven distribution of weather stations, traditional ground-based observations cannot accurately capture the spatial distribution of Tair. Therefore, it is necessary to develop a method for estimating air temperature with reasonable accuracy and spatial and temporal resolution in urban areas with a low density of temperature gauges. The estimation of meteorological variables using various statistical techniques (such as linear regression models or combined regression and kriging techniques for temperature interpolation) has been examined by many researchers, who concluded that an appropriate machine learning technique can provide a robust computational approach for estimating meteorological data as a function of the corresponding data of one or more reference stations. In this research, Tair in Berlin is estimated during day and night-time over six land cover/land use (LC/LU) types from satellite remote sensing data over a large domain and a relatively long period (7 years). Aqua and Terra MODIS (Moderate Resolution Imaging Spectroradiometer) data and meteorological data for the period from 2007 to 2013 were collected to estimate Tair. Twelve environmental variables (land surface temperature (LST), normalized difference vegetation index (NDVI), Julian day, latitude, longitude, Emissivity31, Emissivity32, altitude, albedo, wind speed, wind direction and air pressure) were selected as predictors. Moreover, LST from MODIS Terra and Aqua was compared with daytime and night-time air temperatures (TDay, TNight), respectively, and the spatial variability of the LST–Tair relationship was examined by applying a varying window size on the MODIS LST grid.
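
As a hedged illustration of the varying-window comparison mentioned above, the sketch below averages the MODIS LST pixels in an n × n neighbourhood around each station's grid cell before relating them to the observed 2 m air temperature; the array layout and helper names are assumptions.

```python
# Illustrative spatial windowing of a MODIS LST grid around station pixels.
# Array layout, helper names, and window sizes are assumptions for illustration.
import numpy as np

def windowed_lst(lst_grid, row, col, size=3):
    """Mean LST over a size x size pixel window centred on a station's grid cell."""
    half = size // 2
    block = lst_grid[max(row - half, 0):row + half + 1,
                     max(col - half, 0):col + half + 1]
    return np.nanmean(block)

# For each candidate window size, pair the averaged LST with the observed 2 m air
# temperature and compare correlation/RMSE, e.g.:
# lst_3x3 = [windowed_lst(lst_grid, r, c, 3) for r, c in station_pixels]
# r = np.corrcoef(lst_3x3, t2m_obs)[0, 1]
```
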
An analysis of the relationship between the observed Tair and the spatially averaged remotely sensed LST indicated that 3 × 3 and 1 × 1 pixel windows were the optimal window sizes for the statistical model estimating Tair from MODIS data during the day and night-time, respectively. Three supervised learning methods (adaptive neuro-fuzzy inference system (ANFIS), artificial neural network (ANN) and support vector regression (SVR)) were used to estimate Tair during the day and night-time, and their performance was validated by cross-validation for each LC/LU. By applying each technique, an estimator model of air temperature was generated. These methods were compared, the accuracy of each model was evaluated, and the best one was chosen for high-resolution temperature estimation. Moreover, tuning of the hyperparameters of some models, such as SVR and ANN, was investigated. For tuning the hyperparameters of SVR, simulated annealing (SA) was applied (SA-SVR model), and a multi-layer feed-forward (MLF) neural network with three layers and a variable number of nodes in the hidden layers was trained with Levenberg-Marquardt back-propagation (LM-BP), in order to achieve higher accuracy in the estimation of Tair. Results indicated that the ANN model achieved better accuracy (RMSE=2.16°C, MAE=1.69°C, R²=0.95) than the SA-SVR model (RMSE=2.50°C, MAE=1.92°C, R²=0.91) and the ANFIS model (RMSE=2.88°C, MAE=2.2°C, R²=0.89) over the six LC/LU types during the day and night-time. The Q-Q diagrams of SA-SVR, ANFIS and ANN show that all three models slightly tended to under- and overestimate the extreme high and low temperatures for all LC/LU classes during the day and night-time. The weak performance at extreme temperatures is a consequence of the small number of data points at these temperatures. These satisfactory results indicate that this approach is suitable for estimating air temperature and that the spatial window size is an important factor that should be considered in the estimation of air temperature. Moreover, to better understand the relationship between LST and Tair in Berlin during day and night-time over six LC/LU types, namely airport, agriculture, urban area, forest, industrial and needle-leaf trees, two input variable selection methods were applied. Input variable selection is an essential step in environmental, biological, industrial and climatological applications: it helps in better understanding the data, decreasing the computational effort and the impact of the curse of dimensionality, and improving estimator performance. Through input variable selection, irrelevant or redundant variables are eliminated, so that a suitable subset of variables is identified as the input of a model. Meanwhile, the complexity of the model structure is simplified and the computational efficiency is improved. In this work, two input variable selection methods, brute-force search and a greedy best search algorithm using an artificial neural network (ANN), were considered for estimating near-surface air temperature from MODIS over six LC/LU types. The motivation behind this research was to formulate a more efficient way of choosing input variables for ANN models of environmental processes. Moreover, AIC, BIC and RMSE were considered for ranking the features and finding a subset of potential variables that improves the overall estimation performance.
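
As an illustrative sketch of the SA-SVR idea described above, the snippet below uses a simulated-annealing search (here scipy's dual_annealing, standing in for the thesis' SA setup) to tune the SVR hyperparameters by minimising cross-validated RMSE; the bounds and feature set are assumptions.

```python
# Illustrative SA-SVR sketch: simulated annealing searches the SVR hyperparameters
# (C, gamma, epsilon) by minimising cross-validated RMSE. Bounds and the use of
# scipy's dual_annealing are assumptions standing in for the thesis' SA setup.
import numpy as np
from scipy.optimize import dual_annealing
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVR

def sa_tuned_svr(X, y):
    def cv_rmse(params):
        log_c, log_gamma, epsilon = params
        svr = SVR(C=10 ** log_c, gamma=10 ** log_gamma, epsilon=epsilon)
        return -cross_val_score(svr, X, y, cv=5,
                                scoring="neg_root_mean_squared_error").mean()

    bounds = [(-2, 3), (-4, 1), (0.01, 1.0)]      # log10(C), log10(gamma), epsilon
    result = dual_annealing(cv_rmse, bounds, maxiter=50)
    log_c, log_gamma, epsilon = result.x
    return SVR(C=10 ** log_c, gamma=10 ** log_gamma, epsilon=epsilon).fit(X, y)

# model = sa_tuned_svr(X_train, t_air_train)  # X: LST, NDVI, Julian day, albedo, ...
```
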
In this study, Aqua and Terra MODIS data and meteorological data for the period from 2007 to 2013 were collected to estimate Tair. Moreover, twelve environmental variables (LST, normalized difference vegetation index (NDVI), Julian day, latitude, longitude, Emis31, Emis32, altitude, albedo, wind speed, wind direction and air pressure) were selected as predictors. The results show that LC/LU plays a key role in the relationship between Tair and LST, and that the effectiveness of the optimal models in estimating Tair varies across LC/LU types because of their different specific heat capacities. Air temperature mainly depends on the heat transfer process, which is significantly affected by the local radiation budget. Generally, air is heated much more quickly over barren land than over forest because barren land has a lower heat capacity. Vegetation contributes to the latent heat flux, for example by enhancing or reducing transpiration, and cools Tair in forests. In this study, this cooling effect was not taken into account because of the coarse distribution of meteorological stations across different vegetation types; therefore, it was difficult to consider vegetation type in our models. However, land cover also affects land surface albedo; thus, the influence of LC/LU on estimating Tair is conditional and time dependent, because different variables are selected for the same LC/LU during day and night-time. Moreover, another issue we tried to answer was what the pitfall of using a global model is and what the advantage of feature selection is. It has been argued that inference from a model with all the features thought to be important is simple and avoids the complications of model selection.

    Transient stability assessment of hybrid distributed generation using computational intelligence approaches

    Get PDF
    Due to the increasing integration of new technologies into the grid, such as hybrid electric vehicles, distributed generation, power electronic interface circuits and advanced controllers, the present power system network is now more complex than in the past. Consequently, the recent rate of blackouts recorded in some parts of the world indicates that the power system is stressed. Real-time/online monitoring and prediction of stability limits is needed to prevent future blackouts. In the last decade, Distributed Generators (DGs), among other technologies, have received increasing attention. This is because DGs have the capability to meet peak demand, reduce losses due to their proximity to consumers, and produce clean energy, thus reducing CO₂ production. More benefits can be obtained when two or more DGs are combined to form what is known as Hybrid Distributed Generation (HDG). The challenge with HDG powered by intermittent renewable energy sources such as solar PV, wind turbines and small hydro power is that the system is more vulnerable to instabilities compared to a DG based on a single renewable energy source. This is because of the intermittent nature of the renewable energy sources and the complex interaction between the DGs and the distribution network. Due to the complexity and the stress level of the present power system network, real-time/online monitoring and prediction of stability limits is becoming an essential part of present-day control centres. Up to now, research on the impact of HDG on transient stability has been very limited. Generally, an analytical approach is used to perform transient stability assessment. The analytical approach requires a large volume of data, detailed mathematical equations and an understanding of the dynamics of the system. Due to the unavailability of accurate mathematical equations for most dynamic systems, and given the large volume of data required, the analytical method is inadequate and time consuming. Moreover, it requires long simulation times to assess the stability limits of the system. Therefore, the analytical approach is inadequate for handling real-time operation of the power system. In order to carry out real-time transient stability assessment under increasingly nonlinear and time-varying dynamics, fast, scalable and dynamic algorithms are required. These algorithms must be able to perform advanced monitoring, decision making, forecasting, control and optimization. Computational Intelligence (CI)-based algorithms such as neural networks, coupled with Wide Area Monitoring Systems (WAMS) such as Phasor Measurement Units (PMUs), have been shown to successfully model nonlinear dynamics and predict stability limits in real time. To cope with the shortcomings of the analytical approach, a computational intelligence method based on Artificial Neural Networks (ANNs) was developed in this thesis to assess transient stability in real time. Appropriate data related to the hybrid generation (i.e., solar PV, wind generator, small hydropower) were generated using the analytical approach for the training and testing of the ANN models. In addition, PMUs integrated in a Real Time Digital Simulator (RTDS) were used to gather data for the real-time training of the ANNs and the prediction of the Critical Clearing Time (CCT).
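
    As a hedged sketch of the kind of ANN-based assessment described above, the snippet below maps PMU-style input features to the Critical Clearing Time (CCT) with a small regression network; the feature list, network size, and variable names are illustrative assumptions rather than the thesis' actual models.

```python
# Hypothetical sketch: an MLP regressor mapping PMU-style features sampled around
# fault inception (e.g. bus voltages, rotor angles, active power of the PV/wind/hydro
# units) to the Critical Clearing Time (CCT). Feature set and sizes are assumptions.
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

cct_model = make_pipeline(
    StandardScaler(),
    MLPRegressor(hidden_layer_sizes=(32, 16), max_iter=2000, random_state=0),
)

# X: one row of PMU-derived features per simulated contingency,
# y: CCT in seconds obtained from offline time-domain simulations.
# cct_model.fit(X_train, y_train)
# cct_estimate = cct_model.predict(X_new)   # near-real-time CCT estimate for new snapshots
```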

    Nonlinear Evapotranspiration Modeling Using Artificial Neural Networks

    Get PDF
    Reference evapotranspiration (ETo) is an important component of the hydrologic cycle and one of the most difficult to quantify accurately. Estimation/measurement of ETo is not simple, as there are a number of climatic parameters that can affect the process. There exist numerous conventional (direct and indirect) and non-conventional/soft computing (artificial neural network, ANN) methods for estimating ETo. Direct methods have the limitations of measurement errors, expense and the impracticality of acquiring point measurements for spatially variable locations, whereas the indirect methods have the limitations of the unavailability of all necessary climate data and a lack of generalizability (they need local calibration). In contrast to conventional methods, soft computing models can estimate ETo accurately with minimal climate data, which gives them an advantage over conventional ETo methods. This chapter reviews the application of ANN methods for estimating ETo accurately for 15 locations in India using six climatic variables as input. The performance of the ANN models was compared with multiple linear regression (MLR) models in terms of root mean squared error, coefficient of determination and the ratio of average output to target ETo values. The results suggested that the ANN models performed better than MLR for all locations.
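
    A minimal sketch of the comparison described above, assuming generic variable names: an ANN and a multiple linear regression model estimate ETo from six climatic inputs, and both are scored with RMSE and the coefficient of determination. The network size and preprocessing are assumptions, not the chapter's setup.

```python
# Illustrative comparison of an ANN and an MLR model for ETo estimation.
# X holds six climatic inputs per record (e.g. air temperature, humidity, wind speed,
# sunshine hours, solar radiation, pressure); names and network size are assumptions.
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_squared_error, r2_score
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

ann = make_pipeline(StandardScaler(),
                    MLPRegressor(hidden_layer_sizes=(10,), max_iter=5000, random_state=0))
mlr = LinearRegression()

# for name, model in [("ANN", ann), ("MLR", mlr)]:
#     model.fit(X_train, eto_train)
#     pred = model.predict(X_test)
#     rmse = mean_squared_error(eto_test, pred) ** 0.5
#     print(name, "RMSE:", rmse, "R2:", r2_score(eto_test, pred))
```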