312 research outputs found

    Deep reinforcement learning for large-eddy simulation modeling in wall-bounded turbulence

    Full text link
    The development of a reliable subgrid-scale (SGS) model for large-eddy simulation (LES) is of great importance for many scientific and engineering applications. Recently, deep learning approaches have been tested for this purpose using high-fidelity data such as direct numerical simulation (DNS) in a supervised learning process. However, such data are generally not available in practice. Deep reinforcement learning (DRL) using only limited target statistics can be an alternative, in which the training and testing of the model are conducted in the same LES environment. DRL of turbulence modeling remains challenging owing to the chaotic nature of turbulence, the high dimensionality of the action space, and the large computational cost. In the present study, we propose a physics-constrained DRL framework that can develop a deep neural network (DNN)-based SGS model for the LES of turbulent channel flow. The DRL models produce the SGS stresses from the local gradients of the filtered velocities. The developed SGS model automatically satisfies reflectional invariance and the wall boundary conditions without an extra training process, so that DRL can quickly find the optimal policy. Furthermore, direct accumulation of the reward, spatially and temporally correlated exploration, and a pre-training process are applied for efficient and effective learning. In various environments, our DRL discovered SGS models that produce viscous and Reynolds stress statistics perfectly consistent with the filtered DNS. By comparing various statistics obtained by the trained models and conventional SGS models, we present a possible interpretation of the better performance of the DRL model.
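    A minimal sketch of the idea, assuming a hypothetical interface to the LES solver (the real framework, network sizes, and reward terms are those of the paper and are not reproduced here): a small policy network maps the nine local filtered velocity-gradient components to the six independent SGS stress components, an input-negation symmetry serves as an illustrative stand-in for the paper's built-in reflectional invariance, and a REINFORCE-style update rewards agreement with target statistics.

        import torch
        import torch.nn as nn

        class SGSPolicy(nn.Module):
            """Maps local filtered velocity gradients (9 components per point)
            to the 6 independent SGS stress components."""
            def __init__(self, hidden=64):
                super().__init__()
                self.net = nn.Sequential(
                    nn.Linear(9, hidden), nn.Tanh(),
                    nn.Linear(hidden, hidden), nn.Tanh(),
                    nn.Linear(hidden, 6),
                )

            def forward(self, grad_u):  # grad_u: (N, 9)
                # Symmetrize over the sign-flipped input: an illustrative
                # stand-in for the paper's reflectional-invariance constraint.
                return 0.5 * (self.net(grad_u) + self.net(-grad_u))

        policy = SGSPolicy()
        opt = torch.optim.Adam(policy.parameters(), lr=1e-3)

        # One REINFORCE-style step on random stand-in data (no real LES here).
        grad_u = torch.randn(128, 9)                          # filtered gradients
        dist = torch.distributions.Normal(policy(grad_u), 0.1)  # correlated exploration noise
        tau = dist.sample()                                   # sampled SGS stresses
        stats = tau.mean(dim=0)                               # stand-in flow statistics
        reward = -torch.mean((stats - torch.zeros(6)) ** 2)   # match fDNS target statistics
        loss = -(dist.log_prob(tau).sum(dim=1) * reward).mean()
        opt.zero_grad(); loss.backward(); opt.step()

    In the actual framework the reward comes from the statistics accumulated while the LES runs with the current policy, not from a single batch as above.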

    Development and Implementation of Novel Intelligent Motor Control for Performance Enhancement of PMSM Drive in Electrified Vehicle Application

    Get PDF
    The demand for electrified vehicles has grown significantly over the last decade, causing a shift in the automotive industry from traditional gasoline vehicles to electric vehicles (EVs). With the evolution of EVs, high power density and high efficiency of electric powertrains (e-drives) are of the utmost importance for achieving an extended driving range. However, achieving an extended driving range with enhanced e-drive performance remains a bottleneck. The control algorithm of the e-drive plays a vital role in its performance and reliability over time. Artificial intelligence (AI)- and machine learning (ML)-based intelligent control methods have proven their continued success in fault determination and analysis of motor-drive systems. Considering the potential of intelligent control, this thesis investigates the legacy space vector modulation (SVM) strategy for wide-bandgap (WBG) inverters and the conventional PI current controller for permanent magnet synchronous motor (PMSM) control, with the aim of reducing switching loss and computation time and enhancing transient performance relative to state-of-the-art e-drive systems. The thesis converges on AI- and ML-based control for e-drives, focusing on reducing switching loss through an ANN-based modulation technique for a GaN-based inverter and on improving the transient performance of the PMSM by incorporating an ML-based, parameter-independent controller.
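    One way to read the ANN-based modulation idea is as offline function approximation: a small network is fitted to the duty cycles of a conventional modulator so that per-cycle trigonometry and sector logic can be replaced by one cheap forward pass. The sketch below is illustrative only; the reference modulator (min-max common-mode injection, which is equivalent to conventional SVM), input features, and network size are assumptions, not the thesis implementation.

        import numpy as np

        def svm_duties(theta, m):
            """Reference SVM via min-max common-mode injection: per-phase duty
            cycles for electrical angle theta (rad) and modulation index m."""
            v = m[:, None] * np.cos(theta[:, None] - np.array([0.0, 2.0, 4.0]) * np.pi / 3)
            cm = 0.5 * (v.max(axis=1) + v.min(axis=1))    # injected common mode
            return 0.5 + 0.5 * (v - cm[:, None])          # duties in [0, 1]

        rng = np.random.default_rng(0)
        theta = rng.uniform(0.0, 2.0 * np.pi, 20000)
        m = rng.uniform(0.0, 1.1, 20000)
        X = np.stack([np.sin(theta), np.cos(theta), m], axis=1)  # network inputs
        Y = svm_duties(theta, m)                                 # network targets

        # Tiny MLP fitted by full-batch gradient descent on the MSE.
        W1 = rng.normal(0.0, 0.5, (3, 32)); b1 = np.zeros(32)
        W2 = rng.normal(0.0, 0.5, (32, 3)); b2 = np.zeros(3)
        lr = 0.2
        for _ in range(3000):
            H = np.tanh(X @ W1 + b1)
            P = H @ W2 + b2
            G = 2.0 * (P - Y) / len(X)               # dMSE/dP
            GH = (G @ W2.T) * (1.0 - H ** 2)         # backprop through tanh
            W2 -= lr * H.T @ G;  b2 -= lr * G.sum(axis=0)
            W1 -= lr * X.T @ GH; b1 -= lr * GH.sum(axis=0)
        print("max duty-cycle error:", float(np.abs(P - Y).max()))

    On an embedded target the trained weights would be exported and the forward pass executed at the switching frequency; the thesis additionally shapes the modulation to reduce GaN switching loss, which this toy fit does not attempt.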

    Optimal control towards sustainable wastewater treatment plants based on multi-agent reinforcement learning

    Full text link
    Wastewater treatment plants (WWTPs) are designed to eliminate pollutants and alleviate environmental pollution. However, the construction and operation of WWTPs consume resources, emit greenhouse gases (GHGs), and produce residual sludge, and thus require further optimization. WWTPs are complex to control and optimize because of their high nonlinearity and variability. This study used a novel technique, multi-agent deep reinforcement learning, to simultaneously optimize the dissolved oxygen and chemical dosage in a WWTP. The reward function was designed from a life-cycle perspective to achieve sustainable optimization. Five scenarios were considered: a baseline, three effluent-quality-oriented scenarios, and a cost-oriented scenario. The results show that optimization based on life cycle assessment (LCA) has lower environmental impacts than the baseline scenario: cost, energy consumption, and greenhouse gas emissions are reduced to 0.890 CNY/m3-ww, 0.530 kWh/m3-ww, and 2.491 kg CO2-eq/m3-ww, respectively. The cost-oriented control strategy exhibits overall performance comparable to the LCA-driven strategy; it sacrifices some environmental benefits but achieves a lower cost of 0.873 CNY/m3-ww. It is worth mentioning that resource-based retrofitting of WWTPs should be implemented with the consideration of impact transfer. Specifically, the LCA SW scenario decreases eutrophication potential by 10 kg PO4-eq within 10 days compared to the baseline, while significantly increasing other indicators. The major contributors to each indicator are identified for future study and improvement. Finally, the authors note that novel dynamic control strategies require advanced sensors or large amounts of data, so the selection of control strategies should also consider economic and ecological conditions.
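    The control structure described above can be pictured with a small sketch: one agent chooses the dissolved-oxygen setpoint, another chooses the chemical dose, and both are trained against a shared reward built from life-cycle terms. Tabular independent Q-learners stand in here for the deep networks of the study, and the stub plant model, action grids, weights, and compliance penalty are all illustrative assumptions.

        import random

        DO_SETPOINTS = [0.5, 1.0, 1.5, 2.0, 2.5]   # mg/L (illustrative grid)
        DOSES = [0.0, 10.0, 20.0, 30.0]            # g/m3 (illustrative grid)

        def lca_reward(cost, energy, ghg, effluent_ok):
            """Life-cycle-style reward: weighted cost, energy and GHG terms,
            with a penalty that keeps the effluent compliant (weights assumed)."""
            return -(1.0 * cost + 0.5 * energy + 0.3 * ghg) - (0.0 if effluent_ok else 100.0)

        class IndependentQLearner:
            """Tabular epsilon-greedy Q-learner over a discretized plant state."""
            def __init__(self, n_actions, eps=0.1, lr=0.1, gamma=0.95):
                self.q, self.n = {}, n_actions
                self.eps, self.lr, self.gamma = eps, lr, gamma
            def act(self, s):
                if random.random() < self.eps:
                    return random.randrange(self.n)
                return max(range(self.n), key=lambda a: self.q.get((s, a), 0.0))
            def learn(self, s, a, r, s2):
                best = max(self.q.get((s2, a2), 0.0) for a2 in range(self.n))
                old = self.q.get((s, a), 0.0)
                self.q[(s, a)] = old + self.lr * (r + self.gamma * best - old)

        def plant_step(state, do_sp, dose):
            """Stub for the WWTP simulator: (next_state, cost, energy, ghg, ok)."""
            load = (state + 1) % 3                 # toy influent-load cycle
            energy = 0.4 + 0.1 * do_sp             # aeration dominates energy use
            cost = 0.6 + 0.01 * dose + 0.2 * energy
            ghg = 2.0 + 0.05 * do_sp
            ok = do_sp >= 1.0 or dose >= 20.0      # toy compliance rule
            return load, cost, energy, ghg, ok

        do_agent = IndependentQLearner(len(DO_SETPOINTS))
        dose_agent = IndependentQLearner(len(DOSES))
        s = 0
        for _ in range(5000):
            a1, a2 = do_agent.act(s), dose_agent.act(s)
            s2, cost, energy, ghg, ok = plant_step(s, DO_SETPOINTS[a1], DOSES[a2])
            r = lca_reward(cost, energy, ghg, ok)  # one shared reward for both agents
            do_agent.learn(s, a1, r, s2); dose_agent.learn(s, a2, r, s2)
            s = s2

    Reweighting the terms in lca_reward is how the baseline, effluent-quality, and cost-oriented scenarios would be expressed in this framing.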

    Amoeba: Circumventing ML-supported Network Censorship via Adversarial Reinforcement Learning

    Full text link
    Embedding covert streams into a cover channel is a common approach to circumventing Internet censorship, owing to censors' inability to examine encrypted information in otherwise permitted protocols (Skype, HTTPS, etc.). However, recent advances in machine learning (ML) enable the detection of a range of anti-censorship systems by learning the distinct statistical patterns hidden in traffic flows. Designing obfuscation solutions that generate traffic statistically similar to innocuous network activity, in order to deceive ML-based classifiers at line speed, is therefore difficult. In this paper, we formulate a practical adversarial attack strategy against flow classifiers as a method for circumventing censorship. Specifically, we cast the problem of finding adversarial flows that will be misclassified as a sequence generation task, which we solve with Amoeba, a novel reinforcement learning algorithm that we design. Amoeba works by interacting with censoring classifiers without any knowledge of their model structure: it crafts packets and observes the classifiers' decisions in order to guide the sequence generation process. Our experiments using data collected from two popular anti-censorship systems demonstrate that Amoeba can effectively shape adversarial flows that achieve an average 94% attack success rate against a range of ML algorithms. In addition, we show that these adversarial flows are robust in different network environments and possess transferability across various ML models, meaning that once trained against one classifier, our agent can subvert other censoring classifiers without retraining.
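    The black-box interaction pattern is the essential point, and it can be sketched compactly: the agent perturbs observable flow features (here, packet sizes via padding), queries the opaque censor classifier, and is rewarded when the shaped flow evades detection. The toy classifier, action set, and a random-search stand-in for the RL agent below are all assumptions for illustration, not Amoeba itself.

        import random

        PAD_ACTIONS = [0, 64, 256, 512]              # bytes of padding per packet

        def censor_score(flow):
            """Stand-in black-box classifier: flags flows whose mean packet size
            looks like the covert protocol. Only its decisions are observable."""
            mean = sum(flow) / len(flow)
            return 1.0 if 400 < mean < 600 else 0.0  # 1.0 = flow gets censored

        def shape_flow(packet_sizes, policy):
            """Apply a per-position padding policy (cyclically) to a covert flow."""
            return [size + PAD_ACTIONS[policy[i % len(policy)]]
                    for i, size in enumerate(packet_sizes)]

        # Random-search stand-in for the RL agent: propose a padding policy,
        # query the classifier, keep the policy if the flow evades detection.
        policy, best = [0] * 8, -1.0
        for _ in range(500):
            cand = policy[:]
            cand[random.randrange(len(cand))] = random.randrange(len(PAD_ACTIONS))
            flow = shape_flow([random.randint(450, 550) for _ in range(30)], cand)
            r = 1.0 - censor_score(flow)             # reward = evaded detection
            if r > best:
                policy, best = cand, r
        print("evaded:", best == 1.0, "padding policy:", policy)

    Amoeba replaces the random search with a learned sequential policy over per-packet actions, which is what makes the shaped flows transferable across classifiers.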

    Generative adversarial network for predictive maintenance of a packaging machine

    Get PDF
    Generative models are designed to discover and learn the latent structure of the input data in order to generate new samples based on the regularities discovered in the data. Starting from the earliest and simplest models, such as Restricted Boltzmann Machines, up to Variational Autoencoders and Generative Adversarial Networks (GANs), these models have seen surprising progress in generating data as similar to reality as possible. The potential of these models, especially in Deep Learning, has led to the most disparate applications: generation of images, videos, and music; image-to-image and text-to-image translation; and conversion of low-resolution images to high resolution, to name a few. In this thesis work, carried out during the internship period of the Master's Degree, the main focus is on GANs, a generative model that frames training as a supervised problem through the use of two competing "sub-models": a generator, trained to produce new realistic samples, and a discriminator, which tries to distinguish between real and generated data. Usually, when this model is employed, the focus is mainly on the generator, used to produce new data. Here, however, the idea is to use the discriminator as a binary classifier in the context of predictive maintenance of a packaging machine. In other words, the discriminator obtained as a result of GAN training is used to classify the state of the machine as either normal or critical. After initial pre-processing and exploration of the datasets, the results obtained are compared with other classifiers. Finally, the limits and possible developments of this approach are discussed.
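    A compact sketch of the idea, assuming stand-in sensor data (the real feature windows, network sizes, training length, and decision threshold are not those of the thesis): a GAN is trained on windows from normal machine operation only, and the trained discriminator's output is then thresholded to label a new window normal or critical.

        import torch
        import torch.nn as nn

        DIM, ZDIM = 16, 8                        # sensor-feature and noise sizes
        G = nn.Sequential(nn.Linear(ZDIM, 32), nn.ReLU(), nn.Linear(32, DIM))
        D = nn.Sequential(nn.Linear(DIM, 32), nn.LeakyReLU(0.2),
                          nn.Linear(32, 1), nn.Sigmoid())
        opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
        opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
        bce = nn.BCELoss()

        normal = torch.randn(512, DIM) * 0.1 + 1.0   # stand-in "normal" windows
        for _ in range(200):
            real = normal[torch.randint(0, 512, (64,))]
            fake = G(torch.randn(64, ZDIM))
            # Discriminator step: real windows -> 1, generated windows -> 0.
            d_loss = bce(D(real), torch.ones(64, 1)) + \
                     bce(D(fake.detach()), torch.zeros(64, 1))
            opt_d.zero_grad(); d_loss.backward(); opt_d.step()
            # Generator step: try to fool the discriminator.
            g_loss = bce(D(fake), torch.ones(64, 1))
            opt_g.zero_grad(); g_loss.backward(); opt_g.step()

        def machine_state(window, threshold=0.5):
            """Reuse the trained discriminator as the binary classifier."""
            return "normal" if D(window).item() > threshold else "critical"

        print(machine_state(normal[0:1]))

    The design choice is that the discriminator, having learned what normal operation looks like, scores deviations from it low, so critical states can be flagged without labeled failure data.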

    Application of Artificial Intelligence algorithms to support decision-making in agriculture activities

    Get PDF
    Deep Learning has been successfully applied to image recognition, speech recognition, and natural language processing in recent years, so there has been an incentive to apply it in other fields as well. Agriculture is one of the most important fields in which the application of artificial intelligence algorithms, and particularly of deep learning, needs to be explored, as it has a direct impact on human well-being. In particular, there is a need to explore how deep learning models for decision-making can be used as a tool for optimal planting, land use, yield improvement, production/disease/pest control, and other activities. The vast amount of data received from sensors in smart farms makes it possible to use deep learning as a model for decision-making in this field. In agriculture, no two environments are exactly alike, which makes testing, validating, and successfully implementing such technologies much more complex than in most other sectors. Recent scientific developments in the field of deep learning applied to agriculture are reviewed, and some challenges and potential solutions using deep learning algorithms in agriculture are discussed. Higher accuracy and lower inference time can be achieved, making the models useful in real-world applications. Finally, some opportunities for future research in this area are suggested. The ability of artificial neural networks, specifically Long Short-Term Memory (LSTM) and Bidirectional LSTM (BLSTM) networks, to model daily reference evapotranspiration (ETo) and soil water content (SWC) is investigated. The application of these techniques to predict these parameters was tested for three sites in Portugal. A single-layer BLSTM with 512 nodes was selected. Bayesian optimization was used to determine the hyperparameters, such as the learning rate, decay, batch size, and dropout rate. The model achieved mean square error (MSE) values ranging from 0.07 to 0.27 (mm d–1)² for ETo and 0.014 to 0.056 (m³ m–3)² for SWC, with R² values ranging from 0.96 to 0.98. A Convolutional Neural Network (CNN) model was added to the LSTM to investigate a potential performance improvement; performance dropped on all datasets due to the added complexity of the model. The performance of the models was also compared with a CNN and the traditional machine learning algorithms Support Vector Regression and Random Forest; the LSTM achieved the best performance. Finally, the impact of the loss function on the performance of the proposed models was investigated: the model with MSE as the loss function performed better than the models with other loss functions. Afterwards, the capabilities of these models and their extensions, the BLSTM and Bidirectional Gated Recurrent Units (BGRU), to predict end-of-season yields are investigated. The models use historical data, including climate data, irrigation scheduling, and soil water content, to estimate end-of-season yield. The application of this technique was tested for tomato and potato yields at a site in Portugal. The BLSTM network outperformed the GRU, LSTM, and BGRU networks on the validation dataset. The model was able to capture the nonlinear relationship between irrigation amount, climate data, and soil water content, and predicted yield with an MSE of 0.017 to 0.039 kg/ha.
    The performance of the BLSTM in the test was compared with the most commonly used deep learning method, the CNN, and with machine learning methods including a Multi-Layer Perceptron model and Random Forest regression. The BLSTM outperformed the other models with an R² score between 0.97 and 0.99. The results show that analyzing agricultural data with the LSTM model improves performance in terms of accuracy; the CNN model achieved the second-best performance. The deep learning model therefore has a remarkable ability to predict yield at the end of the season. Additionally, a Deep Q-Network (DQN) was trained for irrigation scheduling; see the sketch after this abstract. The agent was trained to schedule irrigation for a tomato field in Portugal. Two previously trained LSTM models were used as the agent's environment: one predicts the total water in the soil profile on the next day, and the other estimates the yield based on the environmental conditions during a season, from which the net return is measured. The agent uses this information to decide the next irrigation amount. LSTM and CNN networks were used to estimate the Q-table during training. Unlike the LSTM model, the ANN and the CNN could not estimate the Q-table, and the agent's reward decreased during training. The performance of the model was compared with fixed-schedule and threshold-based irrigation. The trained model increased productivity by 11% and decreased water consumption by 20% to 30% compared to the fixed method. An on-policy model, Advantage Actor-Critic (A2C), was also implemented to compare irrigation scheduling with the DQN for the same tomato crop. The results show that the on-policy A2C model reduced water consumption by 20% compared to the DQN with only a slight change in the net reward. The models developed in this thesis can be re-evaluated and trained with historical data from other crops with high production and high water demand in Portugal, such as fruits, cereals, and grapevines, to create a decision-support and recommendation system that tells farmers when and how much to irrigate. Such a system helps farmers avoid wasting water without reducing productivity. This thesis aims to contribute to the future development of precision agriculture and agricultural robotics. The models developed in this thesis are relevant to support decision-making in agricultural activities, aimed at optimizing resources, reducing time and costs, and maximizing production.
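    A schematic of the irrigation-scheduling loop described above. The two pretrained LSTM environment models are replaced here by trivial stubs, the daily action grid and state are assumptions, and a Monte-Carlo-style regression toward the terminal net return stands in for the full DQN update with replay and target networks.

        import random
        import torch
        import torch.nn as nn

        ACTIONS = [0.0, 5.0, 10.0, 15.0]           # mm of irrigation per day

        def soil_model(swc, irrigation):
            """Stub for the pretrained LSTM next-day soil-water predictor."""
            return max(0.0, min(1.0, swc + 0.01 * irrigation - 0.03))

        def net_return(total_water, swc_trace):
            """Stub for the pretrained LSTM yield/return estimator."""
            yield_est = 100.0 * (sum(swc_trace) / len(swc_trace))
            return yield_est - 0.5 * total_water   # revenue minus water cost

        qnet = nn.Sequential(nn.Linear(2, 32), nn.ReLU(),
                             nn.Linear(32, len(ACTIONS)))
        opt = torch.optim.Adam(qnet.parameters(), lr=1e-3)

        for episode in range(50):
            swc, used, trace, visited = 0.5, 0.0, [], []
            for day in range(120):                 # one growing season
                s = torch.tensor([swc, day / 120.0])
                if random.random() < 0.1:          # epsilon-greedy exploration
                    a = random.randrange(len(ACTIONS))
                else:
                    a = qnet(s).argmax().item()
                swc = soil_model(swc, ACTIONS[a])
                used += ACTIONS[a]
                trace.append(swc)
                visited.append((s, a))
            reward = net_return(used, trace)       # terminal net return only
            # Monte-Carlo stand-in for the DQN update: regress every visited
            # Q(s, a) toward the season's net return.
            loss = torch.stack([(qnet(s)[a] - reward) ** 2
                                for s, a in visited]).mean()
            opt.zero_grad(); loss.backward(); opt.step()

    The fixed-schedule and threshold-based baselines reported above correspond, in this framing, to replacing the epsilon-greedy choice with a constant action or a rule on swc.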
    Centro-01-0145-FEDER-000017 - EMaDeS - Energy, Materials, and Sustainable Development, co-funded by the Portugal 2020 Program (PT 2020), within the Regional Operational Program of the Center (CENTRO 2020) and the EU through the European Regional Development Fund (ERDF). Fundação para a Ciência e a Tecnologia (FCT-MCTES) also provided financial support via project UIDB/00151/2020 (C-MAST). This work was also supported by the R&D project BioDAgro - Sistema operacional inteligente de informação e suporte à decisão em AgroBiodiversidade, project PD20-00011, promoted by Fundação La Caixa and Fundação para a Ciência e a Tecnologia, taking place at C-MAST - Centre for Mechanical and Aerospace Sciences and Technology, Department of Electromechanical Engineering, University of Beira Interior, Covilhã, Portugal