443 research outputs found
Application of Artificial Intelligence algorithms to support decision-making in agriculture activities
Deep Learning has been successfully applied to image recognition, speech recognition, and
natural language processing in recent years. Therefore, there has been an incentive to apply
it in other fields as well. The field of agriculture is one of the most important in which the
application of artificial intelligence algorithms, and particularly, of deep learning needs to
be explored, as it has a direct impact on human well-being. In particular, there is a need
to explore how deep learning models for decision-making can be used as a tool for optimal
planting, land use, yield improvement, production/disease/pest control, and other activities.
The vast amount of data received from sensors in smart farms makes it possible to use deep
learning as a model for decision-making in this field. In agriculture, no two environments are
exactly alike, which makes testing, validating, and successfully implementing such technologies
much more complex than in most other sectors. Recent scientific developments in the
field of deep learning, applied to agriculture, are reviewed and some challenges and potential
solutions using deep learning algorithms in agriculture are discussed. Higher accuracy and
lower inference times can be achieved, making the models useful in real-world applications.
Finally, some opportunities for future research in this area
are suggested. The ability of artificial neural networks, specifically Long Short-Term Memory
(LSTM) and Bidirectional LSTM (BLSTM), to model daily reference evapotranspiration
and soil water content is investigated. The application of these techniques to predict these
parameters was tested for three sites in Portugal. A single-layer BLSTM with 512 nodes was
selected. Bayesian optimization was used to determine the hyperparameters, such as learning
rate, decay, batch size, and dropout rate. The model achieved mean square error (MSE)
values ranging from 0.07 to 0.27 (mm d⁻¹)² for ETo (reference evapotranspiration) and
0.014 to 0.056 (m³ m⁻³)² for SWC (soil water content), with R² values ranging from 0.96
to 0.98. A Convolutional Neural Network (CNN) model was added to the LSTM to investigate
potential performance improvement. Performance dropped on all datasets due to the
increased complexity of the model. The performance of the models was also compared with a
CNN and with the traditional machine learning algorithms Support Vector Regression and
Random Forest. The LSTM
achieved the best performance. Finally, the impact of the loss function on the performance
of the proposed models was investigated. The model with MSE as the loss function
performed better than models trained with other loss functions. Afterwards, the
capabilities of these models and their extensions, BLSTM and Bidirectional Gated Recurrent
Units (BGRU), to predict end-of-season yields are investigated. The models use historical
data, including climate data, irrigation scheduling, and soil water content, to estimate
end-of-season yield. The application of this technique was tested for tomato and potato yields at a
site in Portugal. The BLSTM network outperformed the GRU, the LSTM, and the BGRU networks
on the validation dataset. The model was able to capture the nonlinear relationship
between irrigation amount, climate data, and soil water content and predict yield with an
MSE of 0.017 to 0.039 kg/ha. On the test set, the performance of the BLSTM was compared with
the widely used CNN and with machine learning methods including a Multi-Layer Perceptron
model and Random Forest regression. The BLSTM outperformed the other models with an R² score between 0.97 and 0.99. The results show that
analyzing agricultural data with the LSTM model improves the performance of the model in
terms of accuracy. The CNN model achieved the second-best performance. Therefore, the
deep learning model has a remarkable ability to predict the yield at the end of the season. Additionally,
a Deep Q-Network was trained for irrigation scheduling. The agent was trained to
schedule irrigation for a tomato field in Portugal. Two LSTM models trained previously were
used as the agent environment. One predicts the total water in the soil profile on the next
day. The other estimates the yield based on the environmental conditions during a season
and then measures the net return. The agent uses this information to decide
the following irrigation amount. LSTM and CNN networks were used to estimate the Q-table
during training. Unlike the LSTM model, the ANN and the CNN could not estimate the Q-table,
and the agent's reward decreased during training. The performance of the model was
compared with fixed irrigation scheduling and threshold-based irrigation. The trained
model increased productivity by 11% and decreased water consumption by 20% to 30% compared
to the fixed method. Also, an on-policy model, Advantage Actor–Critic (A2C), was
implemented to compare irrigation scheduling with the Deep Q-Network for the same tomato
crop. The results show that the on-policy A2C model reduced water consumption by 20%
compared to the Deep Q-Network with only a slight change in the net reward. The models
developed in this thesis can be re-evaluated and trained with historical data from other
crops of high importance in Portugal, such as fruit, cereals, and grapevines, which also
have large water requirements, to create a decision-support and recommendation system that
tells farmers when and how much to irrigate. Such a system helps farmers avoid wasting
water without reducing productivity. This thesis aims to contribute to the future steps in the development of precision
agriculture and agricultural robotics. The models developed in this thesis are relevant to
support decision-making in agricultural activities, aimed at optimizing resources, reducing
time and costs, and maximizing production.
Centro-01-0145-FEDER000017-EMaDeS-Energy,
Materials, and Sustainable Development, co-funded by the Portugal 2020 Program (PT 2020),
within the Regional Operational Program of the Center (CENTRO 2020) and the EU through
the European Regional Development Fund (ERDF). Fundação para a Ciência e a Tecnologia
(FCT—MCTES) also provided financial support via project UIDB/00151/2020 (C-MAST).
It was also supported by the R&D Project BioDAgro – Sistema operacional inteligente de
informação e suporte à decisão em AgroBiodiversidade, project PD20-00011, promoted by
Fundação La Caixa and Fundação para a Ciência e a Tecnologia, taking place at the C-MAST
- Centre for Mechanical and Aerospace Sciences and Technology, Department of Electromechanical
Engineering of the University of Beira Interior, Covilhã, Portugal.
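The abstracts above include no code, but the recurrent machinery behind the ETo and SWC models can be illustrated. The following is a minimal NumPy sketch of a bidirectional LSTM pass — toy sizes and random weights, purely illustrative of how a BLSTM condenses a climate sequence into a feature vector for regression (the thesis itself reports a trained single-layer BLSTM with 512 nodes):

```python
import numpy as np

def lstm_step(x, h, c, W, U, b):
    """One LSTM step: input, forget, and output gates plus candidate state."""
    H = h.shape[0]
    z = W @ x + U @ h + b                      # stacked gate pre-activations
    i = 1.0 / (1.0 + np.exp(-z[0:H]))          # input gate
    f = 1.0 / (1.0 + np.exp(-z[H:2*H]))        # forget gate
    o = 1.0 / (1.0 + np.exp(-z[2*H:3*H]))      # output gate
    g = np.tanh(z[3*H:4*H])                    # candidate cell state
    c = f * c + i * g                          # updated cell state
    h = o * np.tanh(c)                         # updated hidden state
    return h, c

def blstm_features(seq, params_fwd, params_bwd, H):
    """Run an LSTM forward and backward over seq; concatenate final states."""
    h, c = np.zeros(H), np.zeros(H)
    for x in seq:                              # forward direction
        h, c = lstm_step(x, h, c, *params_fwd)
    hb, cb = np.zeros(H), np.zeros(H)
    for x in reversed(seq):                    # backward direction
        hb, cb = lstm_step(x, hb, cb, *params_bwd)
    return np.concatenate([h, hb])             # (2*H,) feature vector

rng = np.random.default_rng(0)
D, H, T = 6, 8, 30                             # toy sizes; the thesis uses H = 512
make_params = lambda: (rng.normal(0, 0.1, (4*H, D)),   # input weights W
                       rng.normal(0, 0.1, (4*H, H)),   # recurrent weights U
                       np.zeros(4*H))                  # biases b
seq = [rng.normal(size=D) for _ in range(T)]   # e.g. 30 days of climate inputs
features = blstm_features(seq, make_params(), make_params(), H)
print(features.shape)                          # (16,)
```

A regression head (a dense layer trained with the MSE loss, as in the thesis) would map this concatenated feature vector to ETo or SWC.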
Following High-level Navigation Instructions on a Simulated Quadcopter with Imitation Learning
We introduce a method for following high-level navigation instructions by
mapping directly from images, instructions and pose estimates to continuous
low-level velocity commands for real-time control. The Grounded Semantic
Mapping Network (GSMN) is a fully-differentiable neural network architecture
that builds an explicit semantic map in the world reference frame by
incorporating a pinhole camera projection model within the network. The
information stored in the map is learned from experience, while the
local-to-world transformation is computed explicitly. We train the model using
DAggerFM, a modified variant of DAgger that trades tabular convergence
guarantees for improved training speed and memory use. We test GSMN in virtual
environments on a realistic quadcopter simulator and show that incorporating explicit
mapping and grounding modules allows GSMN to outperform strong neural
baselines and almost reach the performance of an expert policy. Finally, we analyze
the learned map representations and show that using an explicit map leads to an
interpretable instruction-following model.
Comment: To appear in Robotics: Science and Systems (RSS), 201
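The distinctive component of GSMN is the explicit pinhole camera projection model built into the network, which lets image features be placed in a world-frame semantic map. As a hedged illustration — the intrinsics and pose below are invented, not values from the paper — the geometry involved is this:

```python
import numpy as np

# Illustrative pinhole intrinsics: focal length 100 px, principal point (64, 64).
K = np.array([[100.0,   0.0, 64.0],
              [  0.0, 100.0, 64.0],
              [  0.0,   0.0,  1.0]])

def world_to_cam(p_world, R, t):
    """Inverse of the local-to-world rigid transform: world frame -> camera frame."""
    return R.T @ (p_world - t)

def project(p_cam):
    """Project a 3-D point in the camera frame to pixel coordinates (u, v)."""
    uvw = K @ p_cam              # homogeneous image coordinates
    return uvw[:2] / uvw[2]      # perspective divide

# Camera at the origin looking along +z; a point 2 m ahead, 1 m right, 1 m up.
R, t = np.eye(3), np.zeros(3)
uv = project(world_to_cam(np.array([1.0, 1.0, 2.0]), R, t))
print(uv)                        # [114. 114.]
```

The paper's mapping runs roughly in the opposite direction, scattering per-pixel features into world-frame map cells, but the transform differentiated through is the one sketched here.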
GM-TCNet: Gated Multi-scale Temporal Convolutional Network using Emotion Causality for Speech Emotion Recognition
In human-computer interaction, Speech Emotion Recognition (SER) plays an
essential role in understanding the user's intent and improving the interactive
experience. Emotionally similar utterances may exhibit diverse speaker characteristics yet
share common antecedents and consequences, so an essential challenge for SER is how to
produce robust and discriminative representations through the causality between speech
emotions. In this paper, we propose a Gated
Multi-scale Temporal Convolutional Network (GM-TCNet) that constructs a novel
emotional causality representation learning component with a multi-scale
receptive field. This component, built from dilated causal convolution layers
and a gating mechanism, captures the dynamics of emotion across the time domain.
In addition, skip connections fuse high-level features from different
gated convolution blocks to capture abundant and subtle emotion changes in
human speech. GM-TCNet first uses a single type of feature, mel-frequency
cepstral coefficients, as inputs and then passes them through the gated
temporal convolutional module to generate the high-level features. Finally, the
features are fed to the emotion classifier to accomplish the SER task. The
experimental results show that our model maintains the highest performance in
most cases compared to state-of-the-art techniques.Comment: The source code is available at:
https://github.com/Jiaxin-Ye/GM-TCNe
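The core block named in the abstract — a dilated causal convolution with a gating mechanism — can be sketched in a few lines of NumPy. The sizes, weights, and single-channel setting below are illustrative only, not GM-TCNet's actual configuration:

```python
import numpy as np

def dilated_causal_conv(x, w, dilation):
    """1-D causal convolution: y[t] depends only on x[t], x[t-d], x[t-2d], ..."""
    y = np.zeros_like(x)
    for t in range(len(x)):
        for j, wj in enumerate(w):
            idx = t - j * dilation           # taps strictly at or before t
            if idx >= 0:
                y[t] += wj * x[idx]
    return y

def gated_block(x, w_filter, w_gate, dilation):
    """Gating mechanism: tanh(filter branch) * sigmoid(gate branch)."""
    f = np.tanh(dilated_causal_conv(x, w_filter, dilation))
    g = 1.0 / (1.0 + np.exp(-dilated_causal_conv(x, w_gate, dilation)))
    return f * g

rng = np.random.default_rng(1)
x = rng.normal(size=50)                      # e.g. one MFCC coefficient over time
w_f, w_g = rng.normal(size=3), rng.normal(size=3)
y = gated_block(x, w_f, w_g, dilation=4)

# Causality check: perturbing a future sample leaves earlier outputs unchanged.
x2 = x.copy(); x2[30] += 10.0
y2 = gated_block(x2, w_f, w_g, dilation=4)
print(np.allclose(y[:30], y2[:30]))          # True
```

Stacking such blocks with growing dilations (1, 2, 4, ...) multiplies the receptive field, which is what gives a temporal convolutional network its multi-scale view of the utterance.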
Collision Avoidance on Unmanned Aerial Vehicles using Deep Neural Networks
Unmanned Aerial Vehicles (UAVs), although hardly a new technology, have recently
gained a prominent role in many industries, being widely used not only among enthusiastic
consumers but also in high demanding professional situations, and will have a
massive societal impact over the coming years. However, the operation of UAVs is full
of serious safety risks, such as collisions with dynamic obstacles (birds, other UAVs, or
randomly thrown objects). These collision scenarios are complex to analyze in real time,
sometimes being computationally impossible to solve with existing State of the Art (SoA)
algorithms, making the use of UAVs an operational hazard and therefore significantly reducing
their commercial applicability in urban environments. In this work, a conceptual
framework for both stand-alone and swarm (networked) UAVs is introduced, focusing on
the architectural requirements of the collision avoidance subsystem to achieve acceptable
levels of safety and reliability. First, the SoA principles for collision avoidance against
stationary objects are reviewed. Afterward, a novel image processing approach that uses
deep learning and optical flow is presented. This approach is capable of detecting and
generating escape trajectories against potential collisions with dynamic objects. Finally,
novel combinations of models and algorithms were tested, providing a new approach for
the collision avoidance of UAVs using Deep Neural Networks. The feasibility of the proposed
approach was demonstrated through experimental tests using a UAV, created from
scratch using the framework developed.
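The thesis combines deep learning with optical flow to react to dynamic obstacles. One classical quantity such pipelines can derive from the apparent expansion of a detected object is the time-to-contact; the sketch below is an illustrative stand-in (the functions and thresholds are assumptions, not the thesis's actual algorithm):

```python
def time_to_contact(size_prev, size_curr, dt):
    """Estimate time-to-contact from the growth of an object's image size.

    Image size scales as 1/distance, so for a constant closing speed
    TTC ~ dt / (size_curr / size_prev - 1).
    """
    ratio = size_curr / size_prev
    if ratio <= 1.0:
        return float("inf")              # not expanding: no imminent collision
    return dt / (ratio - 1.0)

def should_evade(size_prev, size_curr, dt, threshold_s=2.0):
    """Trigger an escape trajectory when contact is predicted within threshold."""
    return time_to_contact(size_prev, size_curr, dt) < threshold_s

# A bounding box growing from 40 px to 50 px over 0.1 s: TTC = 0.1/0.25 = 0.4 s.
print(time_to_contact(40.0, 50.0, 0.1))  # 0.4
print(should_evade(40.0, 50.0, 0.1))     # True
```

In a full pipeline the object size would come from a learned detector and the expansion rate from optical flow, which is the pairing the abstract describes.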
Artificial intelligence and smart vision for building and construction 4.0: Machine and deep learning methods and applications
This article presents a state-of-the-art review of the applications of Artificial Intelligence (AI), Machine Learning (ML), and Deep Learning (DL) in building and construction industry 4.0 in the facets of architectural design and visualization; material design and optimization; structural design and analysis; offsite manufacturing and automation; construction management, progress monitoring, and safety; smart operation, building management and health monitoring; and durability, life cycle analysis, and circular economy. This paper presents a unique perspective on applications of AI/DL/ML in these domains for the complete building lifecycle, from conceptual stage, design stage, construction stage, operational and maintenance stage until the end of life. Furthermore, data collection strategies using smart vision and sensors, data cleaning methods (post-processing), data storage for developing these models are discussed, and the challenges in model development and strategies to overcome these challenges are elaborated. Future trends in these domains and possible research avenues are also presented
Discovering phase and causal dependencies on manufacturing processes
Keywords: machine learning, causality, Industry 4.
A Review on IoT Deep Learning UAV Systems for Autonomous Obstacle Detection and Collision Avoidance
[Abstract] Advances in Unmanned Aerial Vehicles (UAVs), also known as drones, offer unprecedented opportunities to boost a wide array of large-scale Internet of Things (IoT) applications. Nevertheless, UAV platforms still face important limitations, mainly related to autonomy and weight, that impact their remote sensing capabilities when capturing and processing the data required for developing autonomous and robust real-time obstacle detection and avoidance systems. In this regard, Deep Learning (DL) techniques have arisen as a promising alternative for improving real-time obstacle detection and collision avoidance for highly autonomous UAVs. This article reviews the most recent developments on DL Unmanned Aerial Systems (UASs) and provides a detailed explanation of the main DL techniques. Moreover, the latest DL-UAV communication architectures are studied and their most common hardware is analyzed. Furthermore, this article enumerates the most relevant open challenges for current DL-UAV solutions, thus allowing future researchers to define a roadmap for devising the new generation of affordable autonomous DL-UAV IoT solutions.
Xunta de Galicia; ED431C 2016-045. Xunta de Galicia; ED431C 2016-047. Xunta de Galicia; ED431G/01. Centro Singular de Investigación de Galicia; PC18/01. Agencia Estatal de Investigación de España; TEC2016-75067-C4-1-
A review of the use of artificial intelligence methods in infrastructure systems
The artificial intelligence (AI) revolution offers significant opportunities to capitalise on the growth of digitalisation and has the potential to enable the ‘system of systems’ approach required in increasingly complex infrastructure systems. This paper reviews the extent to which research in economic infrastructure sectors has engaged with fields of AI, to investigate the specific AI methods chosen and the purposes to which they have been applied both within and across sectors. Machine learning is found to dominate the research in this field, with methods such as artificial neural networks, support vector machines, and random forests among the most popular. The automated reasoning technique of fuzzy logic has also seen widespread use, due to its ability to incorporate uncertainties in input variables. Across the infrastructure sectors of energy, water and wastewater, transport, and telecommunications, the main purposes to which AI has been applied are network provision, forecasting, routing, maintenance and security, and network quality management. The data-driven nature of AI offers significant flexibility, and work has been conducted across a range of network sizes and at different temporal and geographic scales. However, there remains a lack of integration of planning and policy concerns, such as stakeholder engagement and quantitative feasibility assessment, and the majority of research focuses on a specific type of infrastructure, with an absence of work beyond individual economic sectors. To enable solutions to be implemented into real-world infrastructure systems, research will need to move away from a siloed perspective and adopt a more interdisciplinary perspective that considers the increasing interconnectedness of these systems
IoT Anomaly Detection Methods and Applications: A Survey
Ongoing research on anomaly detection for the Internet of Things (IoT) is a
rapidly expanding field. This growth necessitates an examination of application
trends and current gaps. The vast majority of those publications are in areas
such as network and infrastructure security, sensor monitoring, smart home, and
smart city applications and are extending into even more sectors. Recent
advancements in the field have increased the necessity to study the many IoT
anomaly detection applications. This paper begins with a summary of the
detection methods and applications, accompanied by a discussion of the
categorization of IoT anomaly detection algorithms. We then discuss the current
publications to identify distinct application domains, examining papers chosen
based on our search criteria. The survey considers 64 papers among recent
publications published between January 2019 and July 2021. In recent
publications, we observed a shortage of IoT anomaly detection methodologies,
for example, when dealing with the integration of systems with various sensors,
data and concept drifts, and data augmentation where ground-truth data are scarce.
Finally, we discuss these present challenges and offer new perspectives where
further research is required.
Comment: 22 page
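As a concrete example of the simplest kind of baseline the surveyed sensor-monitoring applications build upon — not a method from the survey itself — a sliding-window z-score detector flags readings that deviate sharply from recent history:

```python
from collections import deque
from statistics import mean, stdev

def zscore_anomalies(stream, window=10, threshold=3.0):
    """Return indices of readings > threshold std-devs from a sliding window."""
    history = deque(maxlen=window)
    flagged = []
    for i, x in enumerate(stream):
        if len(history) == window:           # only score once the window is full
            mu, sigma = mean(history), stdev(history)
            if sigma > 0 and abs(x - mu) / sigma > threshold:
                flagged.append(i)
        history.append(x)                    # the reading then joins the history
    return flagged

# Steady sensor readings with one spike injected at index 25.
readings = [20.0 + 0.1 * (i % 5) for i in range(50)]
readings[25] = 35.0
print(zscore_anomalies(readings))            # [25]
```

Real IoT deployments replace such fixed-threshold rules with the learned detectors the survey categorizes, precisely because static statistics cope poorly with the data and concept drifts the authors highlight.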