14,070 research outputs found

    Intelligent Agricultural Machinery Using Deep Learning

    Artificial intelligence, deep learning, big data, self-driving cars: these terms have become familiar to most people, capturing the public imagination and raising hopes as well as fears. We have been told that artificial intelligence will be a major part of our lives, and almost all of us witness this when algorithmic decisions show us commercial advertisements targeted at our interests while we browse the web. In this paper, the conversation around artificial intelligence focuses on one particular application, agricultural machinery, but offers enough content for the reader to gain a good sense of how to consider this technology not only for other agricultural applications, such as sorting and grading produce, but also for other areas in which it can be part of a system of sensors, hardware, and software that makes accurate decisions. Narrowing the application, and focusing on one specific artificial intelligence approach, deep learning, allows us to illustrate from start to finish the steps that are usually considered and to elaborate on recent developments in artificial intelligence.

    Sugarcane crop row detection from UAV images using Semantic Segmentation and the Radon Transform

    In recent years, UAVs (Unmanned Aerial Vehicles) have become increasingly popular in the agricultural sector, promoting and enabling aerial image monitoring in both scientific and business contexts. Images captured by UAVs are fundamental for precision-farming practices, as they enable activities based on low- and medium-altitude imagery. After sowing, the scenario of the planted area may change drastically over time due to erosion, gaps, death and drying of part of the crop, animal interventions, etc. The detection of crop rows is therefore highly important for planning the harvest, estimating the use of inputs, controlling production costs, counting plant stands, correcting sowing failures early, watering more efficiently, etc. In addition, the geolocation of the detected rows enables the use of autonomous machinery and a better application of inputs, reducing financial costs and harm to the environment. In this work we address the problem of detection and segmentation of sugarcane crop rows in UAV imagery. First, we experimented with an approach based on a Genetic Algorithm (GA) combined with Otsu's method to produce binarized images. Then, for several reasons, including the recent relevance of Semantic Segmentation in the literature, its levels of abstraction, and the unsatisfactory results of Otsu combined with the GA, we proposed a new approach based on a Semantic Segmentation Network (SSN), divided into two steps. First, we use a Convolutional Neural Network (CNN) to automatically segment the images, classifying their regions as crop rows or as non-planted soil. Then, we use the Radon transform to reconstruct and improve the already segmented rows, making them more uniform and grouping fragments of rows and loose plants belonging to the same planting row.
    We compare our results with segmentations performed manually by experts, and the results demonstrate the efficiency and feasibility of our approach for the proposed task. Master's dissertation (Dissertação de Mestrado).
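    The first approach in this record binarizes the UAV images with Otsu's method. A minimal sketch of Otsu thresholding on a gray-level histogram (illustrative only; the dissertation couples it with a genetic algorithm and later replaces it with a segmentation network):

    ```python
    def otsu_threshold(hist):
        """Return the gray level that maximizes between-class variance.

        hist: list of pixel counts per gray level (e.g. 256 bins).
        """
        total = sum(hist)
        sum_all = sum(i * h for i, h in enumerate(hist))
        w_bg = 0.0       # background pixel count so far
        sum_bg = 0.0     # background intensity sum so far
        best_t, best_var = 0, -1.0
        for t, h in enumerate(hist):
            w_bg += h
            if w_bg == 0:
                continue
            w_fg = total - w_bg
            if w_fg == 0:
                break
            sum_bg += t * h
            mu_bg = sum_bg / w_bg
            mu_fg = (sum_all - sum_bg) / w_fg
            var_between = w_bg * w_fg * (mu_bg - mu_fg) ** 2
            if var_between > best_var:
                best_var, best_t = var_between, t
        return best_t
    ```

    Pixels at or below the returned threshold fall into one class (e.g. bare soil) and the rest into the other; on vegetation-index images this separates crop rows from background before row reconstruction.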

    TractorEYE: Vision-based Real-time Detection for Autonomous Vehicles in Agriculture

    Agricultural vehicles such as tractors and harvesters have for decades been able to navigate automatically and more efficiently using commercially available products such as auto-steering and tractor-guidance systems. However, a human operator is still required inside the vehicle to ensure the safety of the vehicle and, especially, of its surroundings, such as humans and animals. To get fully autonomous vehicles certified for farming, computer vision algorithms and sensor technologies must detect obstacles with performance equivalent to or better than human level. Furthermore, detections must run in real time to allow vehicles to actuate and avoid collisions. This thesis proposes a detection system (TractorEYE), a dataset (FieldSAFE), and procedures to fuse information from multiple sensor technologies to improve detection of obstacles and to generate a map. TractorEYE is a multi-sensor detection system for autonomous vehicles in agriculture. The multi-sensor system consists of three hardware-synchronized and registered sensors (stereo camera, thermal camera, and multi-beam lidar) mounted on/in a ruggedized and water-resistant casing. Algorithms have been developed to run a total of six detection algorithms (four for the RGB camera, one for the thermal camera, and one for the multi-beam lidar) and to fuse detection information in a common format using either 3D positions or Inverse Sensor Models. A GPU-powered computational platform is able to run the detection algorithms online. For the RGB camera, a deep learning algorithm, DeepAnomaly, is proposed to perform real-time anomaly detection of distant, heavily occluded, and unknown obstacles in agriculture. Compared with a state-of-the-art object detector, Faster R-CNN, DeepAnomaly detects humans better and at longer ranges (45-90 m) in an agricultural use case, with a smaller memory footprint and 7.3-times faster processing.
    The low memory footprint and fast processing make DeepAnomaly suitable for real-time applications running on an embedded GPU. FieldSAFE is a multi-modal dataset for detection of static and moving obstacles in agriculture. The dataset includes synchronized recordings from an RGB camera, stereo camera, thermal camera, 360-degree camera, lidar, and radar. Precise localization and pose are provided using IMU and GPS. Ground truth for static and moving obstacles (humans, mannequin dolls, barrels, buildings, vehicles, and vegetation) is available as an annotated orthophoto, with GPS coordinates for moving obstacles. Detection information from multiple detection algorithms and sensors is fused into a map using Inverse Sensor Models and occupancy grid maps. This thesis presents several scientific contributions and advances the state of the art in perception for autonomous tractors, including a dataset, a sensor platform, detection algorithms, and procedures for multi-sensor fusion. Furthermore, important engineering contributions to autonomous farming vehicles are presented, such as easily applicable, open-source software packages and algorithms that have been demonstrated in an end-to-end real-time detection system. The contributions of this thesis have demonstrated, addressed, and solved critical issues in utilizing camera-based perception systems that are essential to make autonomous vehicles in agriculture a reality.
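    The fusion step described in this abstract combines detections from several sensors via Inverse Sensor Models and occupancy grid maps. A minimal log-odds occupancy-grid update (a hypothetical 1-D sketch of the general technique, not the thesis code; class and method names are illustrative):

    ```python
    import math

    def logodds(p):
        """Log-odds form of a probability, the natural space for Bayesian fusion."""
        return math.log(p / (1.0 - p))

    class OccupancyGrid:
        """Minimal 1-D log-odds occupancy grid fused from multiple detectors."""

        def __init__(self, n_cells, p_prior=0.5):
            self.l_prior = logodds(p_prior)
            self.cells = [self.l_prior] * n_cells

        def update(self, cell, p_occupied):
            # p_occupied is the inverse-sensor-model output for this cell:
            # P(occupied | measurement). Fusion in log-odds is a simple addition.
            self.cells[cell] += logodds(p_occupied) - self.l_prior

        def probability(self, cell):
            # Convert accumulated log-odds back to an occupancy probability.
            return 1.0 - 1.0 / (1.0 + math.exp(self.cells[cell]))
    ```

    Because each measurement just adds its log-odds evidence, detections from the RGB, thermal, and lidar pipelines can be fused into the same grid regardless of which sensor produced them.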

    Agricultural Robot for Intelligent Detection of Pyralidae Insects

    Pyralidae insects are among the main pests of economic crops. However, manual detection and identification of Pyralidae insects are labor-intensive and inefficient, and subjective factors can influence recognition accuracy. To address these shortcomings, an insect-monitoring robot and a new method to recognize Pyralidae insects are presented in this chapter. First, the robot acquires images by performing a fixed action and detects whether Pyralidae insects are present in them. The recognition method obtains a total probability image using reverse mapping of the histogram over multiple template images, after which image contours can be extracted quickly and accurately using a constrained Otsu method. Finally, based on Hu moment, perimeter, and area features, the contours can be filtered, and recognition results marked with triangles can be obtained. According to the recognition results, the speed of the robot car and of the mechanical arm can be adjusted adaptively. Theoretical analysis and experimental results show that the proposed scheme achieves good timeliness and high recognition accuracy in natural planting scenes.
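    The "reverse mapping of the histogram" step can be sketched as histogram back-projection: each pixel is scored by how frequent its intensity is in a template histogram of the target insect. A grayscale, single-template sketch (a hypothetical simplification; the chapter works with multiple template images, and the function name is illustrative):

    ```python
    def backproject(image, template_hist, n_bins=16, max_val=256):
        """Map each pixel to the relative frequency of its intensity bin
        in the template histogram, yielding a probability image."""
        total = sum(template_hist)
        scale = max_val // n_bins  # width of one histogram bin
        return [[template_hist[min(px // scale, n_bins - 1)] / total
                 for px in row] for row in image]
    ```

    Thresholding the resulting probability image and extracting contours yields candidate insect regions, which can then be filtered by Hu moments, perimeter, and area as described above.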

    Boosting precision crop protection towards agriculture 5.0 via machine learning and emerging technologies: A contextual review

    Crop protection is a key activity for the sustainability and feasibility of agriculture in the current context of climate change, which is destabilizing agricultural practices and increasing the incidence of existing and invasive pests, and of a growing world population that requires guaranteeing the food supply chain and ensuring food security. In view of these events, this article provides a contextual review, in six sections, of the role of artificial intelligence (AI), machine learning (ML), and other emerging technologies in solving current and future challenges of crop protection. Over time, crop protection has progressed from a primitive agriculture 1.0 (Ag1.0) through various technological developments to reach a level of maturity closely in line with Ag5.0 (section 1), which is characterized by successfully leveraging ML capacity and modern agricultural devices and machines that perceive, analyze, and actuate following the main stages of precision crop protection (section 2). Section 3 presents a taxonomy of ML algorithms that support the development and implementation of precision crop protection, while section 4 analyzes the scientific impact of ML on the basis of an extensive bibliometric study of >120 algorithms, outlining the most widely used ML and deep learning (DL) techniques currently applied in relevant case studies on the detection and control of crop diseases, weeds, and pests. Section 5 describes 39 emerging technologies in the fields of smart sensors and other advanced hardware devices, telecommunications, proximal and remote sensing, and AI-based robotics that will foreseeably lead the next generation of perception-based, decision-making, and actuation systems for digitized, smart, and real-time crop protection in a realistic Ag5.0. Finally, section 6 highlights the main conclusions and final remarks.