
    Benchmarking Edge Computing Devices for Grape Bunches and Trunks Detection using Accelerated Object Detection Single Shot MultiBox Deep Learning Models

    Purpose: Visual perception enables robots to perceive their environment. Visual data is processed with computer vision algorithms that are usually time-expensive and require powerful hardware to run in real time, which is unfeasible for open-field robots with limited energy. This work benchmarks the real-time object detection performance of heterogeneous edge platforms spanning three architectures: embedded GPU -- Graphics Processing Units (NVIDIA Jetson Nano 2 GB and 4 GB, and NVIDIA Jetson TX2), TPU -- Tensor Processing Unit (Coral Dev Board TPU), and DPU -- Deep Learning Processor Unit (AMD-Xilinx ZCU104 Development Board and AMD-Xilinx Kria KV260 Starter Kit). Method: The authors used RetinaNet ResNet-50 fine-tuned on the natural VineSet dataset. The trained model was then converted and compiled into target-specific hardware formats to improve execution efficiency. Conclusions and Results: The platforms were assessed in terms of detection quality and inference time. Graphics Processing Units (GPUs) were the slowest devices, running at 3 FPS to 5 FPS, while the Field Programmable Gate Array (FPGA) based DPUs were the fastest, running at 14 FPS to 25 FPS. The Tensor Processing Unit (TPU) offered no clear speed advantage, performing similarly to the NVIDIA Jetson TX2. The TPU and the GPUs were the most power-efficient, consuming about 5 W. Differences in the evaluation metrics across devices were negligible: all reached an F1 of about 70% and a mean Average Precision (mAP) of about 60%.
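    The kind of inference-time measurement reported above can be reproduced with a simple timing loop. The sketch below is illustrative only: the ONNX Runtime backend, the model file name retinanet_resnet50.onnx, and the input resolution are assumptions, not the authors' actual deployment pipeline.

        # Hedged sketch: measure average latency and FPS of an object detector.
        # The model path and input shape are hypothetical placeholders.
        import time
        import numpy as np
        import onnxruntime as ort

        session = ort.InferenceSession("retinanet_resnet50.onnx")
        input_name = session.get_inputs()[0].name
        frame = np.random.rand(1, 3, 608, 608).astype(np.float32)  # dummy image batch

        for _ in range(10):                      # warm-up so initialisation is not timed
            session.run(None, {input_name: frame})

        n_runs = 100
        start = time.perf_counter()
        for _ in range(n_runs):
            session.run(None, {input_name: frame})
        elapsed = time.perf_counter() - start
        print(f"mean latency {1000 * elapsed / n_runs:.1f} ms, {n_runs / elapsed:.1f} FPS")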

    Topological inference in graphs and images


    Application of Artificial Intelligence algorithms to support decision-making in agriculture activities

    Deep Learning has been successfully applied to image recognition, speech recognition, and natural language processing in recent years, creating an incentive to apply it in other fields as well. Agriculture is one of the most important fields in which the application of artificial intelligence algorithms, and particularly of deep learning, needs to be explored, as it has a direct impact on human well-being. In particular, there is a need to explore how deep learning models for decision-making can be used as tools for optimal planting, land use, yield improvement, production/disease/pest control, and other activities. The vast amount of data received from sensors in smart farms makes it possible to use deep learning as a model for decision-making in this field. In agriculture, however, no two environments are exactly alike, which makes testing, validating, and successfully implementing such technologies much more complex than in most other sectors. Recent scientific developments in deep learning applied to agriculture are reviewed, and some challenges and potential solutions are discussed. Higher accuracy and lower inference time can be achieved, making the models useful in real-world applications. Finally, some opportunities for future research in this area are suggested.
    The ability of artificial neural networks, specifically Long Short-Term Memory (LSTM) and Bidirectional LSTM (BLSTM) networks, to model daily reference evapotranspiration and soil water content is investigated. The application of these techniques to predict these parameters was tested for three sites in Portugal. A single-layer BLSTM with 512 nodes was selected, and Bayesian optimization was used to determine the hyperparameters, such as learning rate, decay, batch size, and dropout rate. The model achieved mean square error (MSE) values ranging from 0.07 to 0.27 (mm d⁻¹)² for ETo (Reference Evapotranspiration) and 0.014 to 0.056 (m³ m⁻³)² for SWC (Soil Water Content), with R² values ranging from 0.96 to 0.98. A Convolutional Neural Network (CNN) model was added to the LSTM to investigate a potential performance improvement; performance dropped on all datasets due to the added model complexity. The models were also compared with a CNN and with the traditional machine learning algorithms Support Vector Regression and Random Forest, and the LSTM achieved the best performance. Finally, the impact of the loss function was investigated: the model trained with mean square error (MSE) as the loss function performed better than models trained with other loss functions.
    Afterwards, the capability of these models and their extensions, BLSTM and Bidirectional Gated Recurrent Units (BGRU), to predict end-of-season yields is investigated. The models use historical data, including climate data, irrigation scheduling, and soil water content, to estimate end-of-season yield. The application of this technique was tested for tomato and potato yields at a site in Portugal. The BLSTM network outperformed the GRU, LSTM, and BGRU networks on the validation dataset. The model was able to capture the nonlinear relationship between irrigation amount, climate data, and soil water content, and predicted yield with an MSE of 0.017 to 0.039 kg/ha.
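    A minimal sketch of a single-layer Bidirectional LSTM regressor of the kind described above (512 units, MSE loss) is shown below, assuming Keras/TensorFlow; the input window length, feature count, and optimizer settings are illustrative assumptions rather than the thesis configuration.

        # Hedged sketch: single-layer BLSTM regressor for daily ETo, soil water
        # content, or yield. Window length, feature count, and dropout are placeholders.
        import tensorflow as tf

        def build_blstm(timesteps: int = 30, n_features: int = 8) -> tf.keras.Model:
            model = tf.keras.Sequential([
                tf.keras.layers.Input(shape=(timesteps, n_features)),   # daily climate/soil inputs
                tf.keras.layers.Bidirectional(tf.keras.layers.LSTM(512, dropout=0.2)),
                tf.keras.layers.Dense(1),                                # predicted ETo or SWC
            ])
            model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=1e-3), loss="mse")
            return model

        model = build_blstm()
        model.summary()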
    On the test set, the performance of the BLSTM was compared with a CNN, the most commonly used deep learning method, and with machine learning methods including a Multi-Layer Perceptron model and Random Forest regression. The BLSTM outperformed the other models, with an R² score between 0.97 and 0.99; the CNN achieved the second-best performance. The results show that analyzing agricultural data with the LSTM-based models improves accuracy, and that deep learning has a remarkable ability to predict yield at the end of the season.
    Additionally, a Deep Q-Network was trained to schedule irrigation for a tomato field in Portugal. Two previously trained LSTM models were used as the agent's environment: one predicts the total water in the soil profile on the next day, and the other estimates the yield from the environmental conditions during a season, from which the net return is computed. The agent uses this information to decide the next irrigation amount. LSTM, ANN, and CNN networks were used to estimate the Q-table during training; unlike the LSTM model, the ANN and the CNN could not estimate the Q-table, and the agent's reward decreased during training. The trained model was compared with fixed and threshold-based irrigation scheduling, increasing productivity by 11% and decreasing water consumption by 20% to 30% relative to the fixed method. An on-policy model, Advantage Actor-Critic (A2C), was also implemented to compare irrigation scheduling with the Deep Q-Network for the same tomato crop; the A2C model reduced water consumption by 20% compared to the Deep Q-Network with only a slight change in the net reward.
    The models developed in this thesis can be re-evaluated and trained with historical data from other crops of high importance and production in Portugal, such as fruit, cereals, and grapevines, which also have large water requirements, to create a decision support and recommendation system that tells farmers when and how much to irrigate. Such a system would help farmers avoid wasting water without reducing productivity. This thesis aims to contribute to the next steps in the development of precision agriculture and agricultural robotics. The models developed here are relevant to support decision-making in agricultural activities, aimed at optimizing resources, reducing time and costs, and maximizing production.
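    A minimal, hypothetical sketch of the Deep Q-Network idea described above is given below: discrete irrigation depths as actions, an LSTM-based Q-network, and a one-step Bellman update. The state layout, action grid, reward, and hyperparameters are illustrative assumptions, not the thesis implementation (which uses the pre-trained LSTM soil-water and yield models as the environment).

        # Hedged sketch: DQN-style irrigation scheduling with an LSTM Q-network.
        # Actions are candidate irrigation depths; state is a short history of
        # climate/soil features. All shapes and values are placeholders.
        import numpy as np
        import tensorflow as tf

        ACTIONS = np.array([0.0, 5.0, 10.0, 20.0])      # irrigation depths (mm/day)

        def build_q_network(history: int = 14, n_features: int = 6) -> tf.keras.Model:
            model = tf.keras.Sequential([
                tf.keras.layers.Input(shape=(history, n_features)),
                tf.keras.layers.LSTM(64),
                tf.keras.layers.Dense(len(ACTIONS)),     # one Q-value per action
            ])
            model.compile(optimizer="adam", loss="mse")
            return model

        q_net = build_q_network()

        def select_action(state: np.ndarray, epsilon: float = 0.1) -> int:
            """Epsilon-greedy choice of the next irrigation amount."""
            if np.random.rand() < epsilon:
                return np.random.randint(len(ACTIONS))
            return int(np.argmax(q_net.predict(state[None, ...], verbose=0)[0]))

        def q_update(state, action, reward, next_state, gamma: float = 0.99) -> None:
            """One-step Bellman target regression for the chosen action."""
            target = q_net.predict(state[None, ...], verbose=0)[0]
            target[action] = reward + gamma * np.max(q_net.predict(next_state[None, ...], verbose=0)[0])
            q_net.fit(state[None, ...], target[None, ...], verbose=0)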
    Funding: Centro-01-0145-FEDER000017-EMaDeS - Energy, Materials, and Sustainable Development, co-funded by the Portugal 2020 Program (PT 2020), within the Regional Operational Program of the Center (CENTRO 2020) and the EU through the European Regional Development Fund (ERDF). Fundação para a Ciência e a Tecnologia (FCT-MCTES) also provided financial support via project UIDB/00151/2020 (C-MAST). This work was also supported by the R&D Project BioDAgro - Sistema operacional inteligente de informação e suporte à decisão em AgroBiodiversidade, project PD20-00011, promoted by Fundação La Caixa and Fundação para a Ciência e a Tecnologia, carried out at C-MAST - Centre for Mechanical and Aerospace Sciences and Technology, Department of Electromechanical Engineering, University of Beira Interior, Covilhã, Portugal.

    Machine Learning Models for Efficient and Robust Natural Language Processing

    Natural language processing (NLP) has come of age. For example, semantic role labeling (SRL), which automatically annotates sentences with a labeled graph representing who did what to whom, has in the past ten years seen nearly a 40% reduction in error, bringing it to useful accuracy. As a result, a myriad of practitioners now want to deploy NLP systems on billions of documents across many domains. However, state-of-the-art NLP systems are typically optimized for neither cross-domain robustness nor computational efficiency. In this dissertation I develop machine learning methods to facilitate fast and robust inference across many common NLP tasks. First, I describe paired learning and inference algorithms for dynamic feature selection, which accelerate inference in linear classifiers, the heart of the fastest NLP models, by 5-10 times. I then present iterated dilated convolutional neural networks (ID-CNNs), a distinct combination of network structure, parameter sharing, and training procedures that increases inference speed by 14-20 times with accuracy matching bidirectional LSTMs, the most accurate models for NLP sequence labeling. Finally, I describe linguistically-informed self-attention (LISA), a neural network model that combines multi-head self-attention with multi-task learning to facilitate improved generalization to new domains. We show that incorporating linguistic structure in this way leads to substantial improvements over the previous state-of-the-art (syntax-free) neural network models for SRL, especially when evaluating out of domain. I conclude with a brief discussion of potential future directions stemming from my thesis work.
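    A minimal sketch of the dilated-convolution idea behind ID-CNNs is shown below, assuming Keras/TensorFlow; the filter sizes, dilation schedule, and vocabulary/tag counts are illustrative assumptions, and the parameter sharing across iterated blocks used in the original work is omitted for brevity.

        # Hedged sketch: stacked 1D convolutions with exponentially increasing
        # dilation for per-token sequence labeling. Hyperparameters are placeholders.
        import tensorflow as tf

        def dilated_block(x: tf.Tensor, filters: int = 300) -> tf.Tensor:
            for dilation in (1, 2, 4):            # receptive field grows exponentially
                x = tf.keras.layers.Conv1D(filters, kernel_size=3, padding="same",
                                           dilation_rate=dilation, activation="relu")(x)
            return x

        tokens = tf.keras.layers.Input(shape=(None,), dtype="int32")
        x = tf.keras.layers.Embedding(input_dim=20000, output_dim=100)(tokens)
        x = dilated_block(x)
        x = dilated_block(x)                       # the block is "iterated" (applied again)
        tags = tf.keras.layers.Dense(9, activation="softmax")(x)   # per-token label scores
        model = tf.keras.Model(tokens, tags)
        model.summary()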

    Applications of combinatorial optimization arising from large scale surveys

    Many difficult statistical problems arising in censuses or other large scale surveys have an underlying Combinatorial Optimization structure and can be solved with Combinatorial Optimization techniques. These techniques are often more efficient than the ad hoc solution techniques already developed in the field of Statistics. This thesis considers in detail two relevant cases of such statistical problems and proposes solution approaches based on Combinatorial Optimization and Graph Theory: the delineation of Functional Regions and the selection of the scope of a large survey, as described below. The purpose of this work is therefore the innovative application of known techniques to important and economically relevant practical problems that the "Censuses, Administrative and Statistical Registers Department" (DICA) of the Italian National Institute of Statistics (Istat), where I am a senior researcher, has been dealing with.
    In several economic, statistical, and geographical applications, a territory must be partitioned into Functional Regions. This operation is called Functional Regionalization. Functional Regions are areas that typically exceed administrative boundaries and are of interest for the evaluation of the social and economic phenomena under analysis. They are not fixed and politically delimited, but are determined only by the interactions among all the localities of a territory. In this thesis, we focus on interactions represented by the daily journey-to-work flows between the localities in which people live and/or work. Functional Regionalization of a territory often turns out to be computationally difficult because of the size of the problem (the number of localities constituting the territory under study) and the nature of the journey-to-work matrix (its sparsity). We propose an innovative approach to Functional Regionalization based on the solution of graph partition problems over an undirected graph, called the transitions graph, generated from the journey-to-work data. In this approach, the problem is solved by recursively partitioning the transitions graph using the min-cut algorithms proposed by Stoer and Wagner and by Brinkmeier. This approach is applied to the determination of the Functional Regions for the Italian administrative regions.
    The target population of a statistical survey, also called its scope, is the set of statistical units that should be surveyed. In some large surveys or censuses, the scope cannot be the set of all available units but must be selected from this set. Surveying each unit has a cost and contributes a different portion of the whole information. In this thesis, we focus on the case of the Agricultural Census, where the units are farms, and we want to determine a subset of units with minimum total cost that safeguards at least a certain portion of the total information, according to the coverage levels assigned by European regulations. Uncertainty also arises, because the portion of information corresponding to each unit is not perfectly known before surveying it. The basic decision is to establish the inclusion criteria before surveying each unit.
    We propose to solve the described scope-selection problem using multidimensional binary knapsack models.
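    A minimal sketch of this covering formulation is shown below, assuming PuLP as the modelling interface; the unit costs, characteristics, and coverage levels are made-up illustrative data, not census figures.

        # Hedged sketch: select survey units at minimum cost while covering a
        # required share of each characteristic (a multidimensional knapsack/covering model).
        import pulp

        costs = {"farm1": 3.0, "farm2": 2.0, "farm3": 4.0, "farm4": 1.0}
        contrib = {  # contribution of each unit to two characteristics
            "area":      {"farm1": 50, "farm2": 30, "farm3": 80, "farm4": 10},
            "livestock": {"farm1": 20, "farm2": 60, "farm3": 10, "farm4": 5},
        }
        coverage = {"area": 0.85, "livestock": 0.80}   # required share of each total

        prob = pulp.LpProblem("census_scope", pulp.LpMinimize)
        x = pulp.LpVariable.dicts("select", costs, cat="Binary")
        prob += pulp.lpSum(costs[u] * x[u] for u in costs)           # total survey cost
        for k, shares in contrib.items():
            prob += pulp.lpSum(shares[u] * x[u] for u in costs) >= coverage[k] * sum(shares.values())

        prob.solve(pulp.PULP_CBC_CMD(msg=False))
        print([u for u in costs if x[u].value() == 1])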
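    For the Functional Regionalization part of the abstract, a minimal sketch of recursive min-cut partitioning of a transitions graph is given below, using NetworkX's Stoer-Wagner implementation; the stopping rule, threshold, and example flows are illustrative assumptions rather than the thesis procedure.

        # Hedged sketch: recursively split a weighted transitions graph along its
        # weakest cut; nodes are localities, edge weights are symmetrised commuter flows.
        import networkx as nx

        def functional_regions(graph: nx.Graph, min_size: int = 5) -> list:
            if graph.number_of_nodes() <= min_size:
                return [set(graph.nodes)]
            cut_value, (part_a, part_b) = nx.stoer_wagner(graph, weight="weight")
            if cut_value > 0.1 * graph.size(weight="weight"):   # sides still strongly tied: stop
                return [set(graph.nodes)]
            return (functional_regions(graph.subgraph(part_a).copy(), min_size)
                    + functional_regions(graph.subgraph(part_b).copy(), min_size))

        G = nx.Graph()
        G.add_weighted_edges_from([("A", "B", 120), ("B", "C", 90),
                                   ("C", "D", 4), ("D", "E", 80)])
        print(functional_regions(G, min_size=2))    # e.g. [{'A', 'B', 'C'}, {'D', 'E'}]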