95 research outputs found

    Fruit Detection and Tree Segmentation for Yield Mapping in Orchards

    Accurate information gathering and processing is critical for precision horticulture, as growers aim to optimise their farm management practices. An accurate inventory of the crop, detailing its spatial distribution along with health and maturity, can help farmers efficiently target processes such as chemical and fertiliser spraying, crop thinning, harvest management, labour planning and marketing. Growers have traditionally obtained this information by using manual sampling techniques, which tend to be labour intensive, spatially sparse, expensive, inaccurate and prone to subjective biases. Recent advances in sensing and automation for field robotics allow for key measurements to be made for individual plants throughout an orchard in a timely and accurate manner. Farmer-operated machines or unmanned robotic platforms can be equipped with a range of sensors to capture a detailed representation over large areas. Robust and accurate data processing techniques are therefore required to extract the high-level information needed by the grower to support precision farming. This thesis focuses on yield mapping in orchards using image and light detection and ranging (LiDAR) data captured using an unmanned ground vehicle (UGV). The contribution is a framework and algorithmic components for orchard mapping and yield estimation that are applicable to different fruit types and orchard configurations. The framework includes detection of fruits in individual images and tracking them over subsequent frames. The fruit counts are then associated to individual trees, which are segmented from image and LiDAR data, resulting in a structured spatial representation of yield. The first contribution of this thesis is the development of a generic and robust fruit detection algorithm. Images captured in the outdoor environment are susceptible to highly variable external factors that lead to significant appearance variations.
Specifically in orchards, variability is caused by changes in illumination, target pose, tree types, etc. The proposed techniques address these issues by using state-of-the-art feature learning approaches for image classification, while investigating the utility of orchard domain knowledge for fruit detection. Detection is performed using both pixel-wise classification of images followed by instance segmentation, and bounding-box regression approaches. The experimental results illustrate the versatility of complex deep learning approaches over a multitude of fruit types. The second contribution of this thesis is a tree segmentation approach to detect the individual trees that serve as a standard unit for structured orchard information systems. The work focuses on trellised trees, which present unique challenges for segmentation algorithms due to their intertwined nature. LiDAR data are used to segment the trellis face and to generate proposals for individual tree trunks. Additional trunk proposals are provided using pixel-wise classification of the image data. The multi-modal observations are refined by modelling trunk locations using a hidden semi-Markov model (HSMM), within which prior knowledge of tree spacing is incorporated. The final component of this thesis addresses the visual occlusion of fruit within geometrically complex canopies by using a multi-view detection and tracking approach. Single-image fruit detections are tracked over a sequence of images and associated to individual trees or farm rows, with the spatial distribution of the fruit counts forming a yield map over the farm. The results show the advantage of using multi-view imagery (instead of single-view analysis) for fruit counting and yield mapping. This thesis includes extensive experimentation in almond, apple and mango orchards, with data captured by a UGV spanning a total of 5 hectares of farm area, over 30 km of vehicle traversal and more than 7,000 trees.
The validation of the different processes is performed using manual annotations, which include fruit and tree locations in image and LiDAR data respectively. Additional evaluation of yield mapping is performed by comparison against fruit counts on trees at the farm and counts made by the growers post-harvest. The framework developed in this thesis is demonstrated to be accurate compared to ground truth at all scales of the pipeline, including fruit detection and tree mapping, leading to accurate yield estimation, per tree and per row, for the different crops. Through the multitude of field experiments conducted over multiple seasons and years, the thesis presents key practical insights necessary for commercial development of an information gathering system in orchards.
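The multi-view counting step described above, tracking per-frame detections so that each fruit contributes once to a tree's count, can be sketched as a greedy nearest-neighbour tracker. This is an illustrative sketch only, not the thesis's actual algorithm; the distance threshold and the `min_hits` filter are assumed parameters.

```python
import numpy as np

def associate_detections(tracks, detections, max_dist=30.0):
    """Greedily match per-frame detections (x, y centres) to existing
    tracks by nearest neighbour; unmatched detections start new tracks."""
    unmatched = list(range(len(detections)))
    for track in tracks:
        if not unmatched:
            break
        dists = [np.hypot(*(detections[i] - track["pos"])) for i in unmatched]
        j = int(np.argmin(dists))
        if dists[j] < max_dist:
            track["pos"] = detections[unmatched[j]]  # follow the fruit
            track["hits"] += 1
            del unmatched[j]
    for i in unmatched:
        tracks.append({"pos": detections[i], "hits": 1})
    return tracks

def fruit_count(tracks, min_hits=2):
    """Count only tracks confirmed in several frames, suppressing spurious
    single-frame detections."""
    return sum(1 for t in tracks if t["hits"] >= min_hits)
```

Accumulating `fruit_count` per segmented tree (or per row) then yields the spatial yield map the abstract describes.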

    Precision Agriculture Technology for Crop Farming

    This book provides a review of precision agriculture technology development, followed by a presentation of the state-of-the-art and future requirements of precision agriculture technology. It presents different styles of precision agriculture technologies suitable for large-scale mechanized farming; highly automated community-based mechanized production; and fully mechanized farming practices commonly seen in emerging economic regions. The book emphasizes the introduction of core technical features of sensing, data processing and interpretation technologies, crop modeling and production control theory, and intelligent machinery and field robots for precision agriculture production.

    Crop plant reconstruction and feature extraction based on 3-D vision

    3-D imaging is increasingly affordable and offers new possibilities for more efficient agricultural practice through the use of highly advanced technological devices. Some reasons contributing to this possibility include the continuous increase in computer processing power, the decrease in cost and size of electronics, the increase in solid-state illumination efficiency and the need for greater knowledge and care of individual crops. The implementation of 3-D imaging systems in agriculture is impeded by the economic justification of using expensive devices for producing relatively low-cost seasonal products. However, this may no longer be true, since low-cost 3-D sensors with advanced technical capabilities, such as the one used in this work, are already available. The aim of this cumulative dissertation was to develop new methodologies to reconstruct the 3-D shape of the agricultural environment in order to recognise and quantitatively describe structures, in this case maize plants, for agricultural applications such as plant breeding and precision farming. To fulfil this aim, a comprehensive review of 3-D imaging systems in agricultural applications was conducted to select a sensor that was affordable and had not been fully investigated in agricultural environments. A low-cost time-of-flight (TOF) sensor was selected to obtain 3-D data of maize plants, and a new adaptive methodology was proposed for rigid point cloud registration and stitching. The resulting maize 3-D point clouds were highly dense and generated in a cost-effective manner. The validation of the methodology showed that the plants were reconstructed with high accuracy, and the qualitative analysis showed the visual variability of the plants depending on the 3-D perspective view. The generated point cloud was used to obtain information about plant parameters (stem position and plant height) in order to quantitatively describe the plant.
The resulting plant stem positions were estimated with an average mean error and standard deviation of 27 mm and 14 mm, respectively. Additionally, meaningful information about the plant height profile was also provided, with an average overall mean error of 8.7 mm. Since the maize plants considered in this research were highly heterogeneous in height, some of them had folded leaves, and they were planted with standard deviations that emulate the real performance of a seeder, the experimental maize setup can be considered a difficult scenario. Therefore, better performance for both plant stem position and height estimation could be expected for a maize field in better conditions. Finally, having a 3-D reconstruction of the maize plants using a cost-effective sensor mounted on a small electric-motor-driven robotic platform means that the cost (whether economic, energetic or temporal) of generating every point in the point cloud is greatly reduced compared with previous research.
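Rigid registration of successive scans, as described in this abstract, typically builds on a closed-form best-fit transform between corresponding points. The following is a generic Kabsch/SVD sketch of that building block, not the dissertation's adaptive registration method:

```python
import numpy as np

def kabsch(P, Q):
    """Best-fit rotation R and translation t mapping point set P onto Q
    (rows are corresponding 3-D points), minimising the least-squares
    alignment error between the two clouds."""
    cp, cq = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cp).T @ (Q - cq)                 # cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))    # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cq - R @ cp
    return R, t
```

In an ICP-style stitching loop, correspondences are re-estimated between iterations and this closed-form step is applied repeatedly until convergence.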

    Improving the maize crop row navigation line recognition method of YOLOX

    The accurate identification of maize crop row navigation lines is crucial for the navigation of intelligent weeding machinery, yet it faces significant challenges due to lighting variations and complex environments. This study proposes an optimized version of the YOLOX-Tiny single-stage detection network model for accurately identifying maize crop row navigation lines. It incorporates adaptive illumination adjustment and multi-scale prediction to enhance dense target detection. Visual attention mechanisms, including Efficient Channel Attention and Cooperative Attention modules, are introduced to better extract maize features. A Fast Spatial Pyramid Pooling module is incorporated to improve target localization accuracy. The Coordinate Intersection over Union loss function is used to further enhance detection accuracy. Experimental results demonstrate that the improved YOLOX-Tiny model achieves an average precision of 92.2 %, with a detection time of 15.6 milliseconds. This represents a 16.4 % improvement over the original model while maintaining high accuracy. The proposed model has a reduced size of 18.6 MB, representing a 7.1 % reduction. It also incorporates the least squares method for accurately fitting crop rows. The model showcases efficiency in processing large amounts of data, achieving a comprehensive fitting time of 42 milliseconds and an average angular error of 0.59°. The improved YOLOX-Tiny model offers substantial support for the navigation of intelligent weeding machinery in practical applications, contributing to increased agricultural productivity and reduced usage of chemical herbicides.
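The least-squares row-fitting step mentioned above can be illustrated in a few lines. This is a generic sketch, not the paper's exact formulation; regressing the image u coordinate on v (because crop rows run roughly along the vertical image axis, avoiding near-infinite slopes) is an assumption of this example:

```python
import numpy as np

def fit_navigation_line(centres):
    """Least-squares line through detected plant centres (u, v pixels).
    Fits u = a * v + b and reports the angular deviation of the fitted
    navigation line from the vertical row axis, in degrees."""
    u, v = centres[:, 0], centres[:, 1]
    a, b = np.polyfit(v, u, 1)            # degree-1 least-squares fit
    angle = np.degrees(np.arctan(a))
    return a, b, angle
```

The fitted line's angle relative to the vehicle heading is what a navigation controller would consume; the paper reports an average angular error of 0.59° for its pipeline.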

    Hyperspectral Imaging from Ground Based Mobile Platforms and Applications in Precision Agriculture

    This thesis focuses on the use of line-scanning hyperspectral sensors on mobile ground-based platforms and their application to agriculture. First, this work deals with the geometric and radiometric calibration and correction of acquired hyperspectral data. When operating at low altitudes, changing lighting conditions are common and inevitable, complicating the retrieval of a surface's reflectance, which is solely a function of its physical structure and chemical composition. This thesis therefore contributes an evaluation of an approach to compensate for changes in illumination and obtain reflectance that is less labour intensive than traditional empirical methods. Convenient field protocols are produced that only require a representative set of illumination and reflectance spectral samples. In addition, a method for determining a line-scanning camera's rigid 6 degree of freedom (DOF) offset and uncertainty with respect to a navigation system is developed, enabling accurate georegistration and sensor fusion. The thesis then applies the data captured from the platform to two different agricultural applications. The first is a self-supervised weed detection framework that allows training of a per-pixel classifier using hyperspectral data without manual labelling. The experiments support the effectiveness of the framework, rivalling classifiers trained on hand-labelled training data. The thesis then demonstrates the mapping of mango maturity using hyperspectral data on an orchard-wide scale using efficient image scanning techniques, a world-first result. A novel classification, regression and mapping pipeline is proposed to generate per-tree mango maturity averages. The results confirm that maturity prediction in mango orchards is possible in natural daylight using a hyperspectral camera, despite complex micro-illumination climates under the canopy.
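In its simplest flat-field form, the illumination compensation this abstract discusses reduces to ratioing raw sensor values against dark-current and white-reference spectra measured under the same illumination. This is a textbook sketch of that baseline, not the thesis's evaluated approach; the panel reflectance value is an assumed calibration constant:

```python
import numpy as np

def to_reflectance(raw, dark, white, panel_reflectance=0.99):
    """Convert raw per-band digital numbers to reflectance using a dark
    reference (sensor offset) and a white-panel reference (illumination),
    both captured under the same lighting as the scene."""
    return panel_reflectance * (raw - dark) / (white - dark)
```

Because the white panel must be re-measured whenever illumination changes, this baseline is labour intensive in the field, which is precisely the drawback the thesis's approach aims to reduce.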

    Proceeding Of Mechanical Engineering Research Day 2016 (MERD’16)

    This Open Access e-Proceeding contains a compilation of 105 selected papers from the Mechanical Engineering Research Day 2016 (MERD’16) event, which was held at Kampus Teknologi, Universiti Teknikal Malaysia Melaka (UTeM), Melaka, Malaysia, on 31 March 2016. The theme chosen for this event was ‘IDEA. INSPIRE. INNOVATE’. The response to MERD’16 was overwhelming, with the technical committees receiving more than 200 submissions from various areas of mechanical engineering. After a peer-review process, the editors accepted 105 papers for the e-proceeding, covering 7 main themes. This open access e-Proceeding can be viewed or downloaded at www3.utem.edu.my/care/proceedings. We hope that this proceeding will serve as a valuable reference for researchers. With the large number of submissions from researchers in other faculties, the event achieved its main objective: to bring together educators, researchers and practitioners to share their findings and to sustain the research culture in the university. The topics of MERD’16 are based on a combination of fundamental research, advanced research methodologies and application technologies. As editors-in-chief, we would like to express our gratitude to the editorial board and fellow reviewers for their tireless effort in compiling and reviewing the selected papers for this proceeding. We would also like to extend our great appreciation to the members of the Publication Committee and Secretariat for their excellent cooperation in preparing the proceeding of MERD’16.

    Proceeding Of Mechanical Engineering Research Day 2015 (MERD’15)

    This Open Access e-Proceeding contains 74 selected papers from the Mechanical Engineering Research Day 2015 (MERD’15) event, which was held at Kampus Teknologi, Universiti Teknikal Malaysia Melaka (UTeM), Melaka, Malaysia, on 31 March 2015. The theme chosen for this event was ‘Pioneering Future Discovery’. The response to MERD’15 was overwhelming, as the technical committees received more than 90 papers from various areas of mechanical engineering. From the total number of submissions, the technical committees selected 74 papers for inclusion in this proceeding. The selected papers are grouped into 12 categories: Advanced Materials Processing; Automotive Engineering; Computational Modeling and Analysis & CAD/CAE; Energy Management & Fuels and Lubricants; Hydraulics and Pneumatics & Mechanical Control; Mechanical Design and Optimization; Noise, Vibration and Harshness; Non-Destructive Testing & Structural Mechanics; Surface Engineering and Coatings; and Other Related Topics. With the large number of submissions from researchers in other faculties, the event achieved its main objective: to bring together educators, researchers and practitioners to share their findings and to sustain the research culture in the university. The topics of MERD’15 are based on a combination of advanced research methodologies, application technologies and review approaches. As editors-in-chief, we would like to express our gratitude to the editorial board members for their tireless effort in compiling and reviewing the selected papers for this proceeding. We would also like to extend our great appreciation to the members of the Publication Committee and Secretariat for their excellent cooperation in preparing the proceedings of MERD’15.

    The 1st Advanced Manufacturing Student Conference (AMSC21) Chemnitz, Germany 15–16 July 2021

    The Advanced Manufacturing Student Conference (AMSC) represents an educational format designed to foster the acquisition and application of skills related to research methods in the engineering sciences. Participating students are required to write and submit a conference paper and are given the opportunity to present their findings at the conference. The AMSC provides a tremendous opportunity for participants to practice critical skills associated with scientific publication. The conference proceedings will benefit readers by providing updates on critical topics and recent progress in advanced manufacturing engineering and technologies and, at the same time, will aid the transfer of valuable knowledge to the next generation of academics and practitioners. The first AMSC Conference Proceeding (AMSC21) addressed the following topics: Advances in “classical” Manufacturing Technologies; Technology and Application of Additive Manufacturing; Digitalization of Industrial Production (Industry 4.0); Advances in the field of Cyber-Physical Systems; Virtual and Augmented Reality Technologies throughout the entire product Life Cycle; Human-machine-environment interaction; and Management and life cycle assessment.

    HIERARCHICAL LEARNING OF DISCRIMINATIVE FEATURES AND CLASSIFIERS FOR LARGE-SCALE VISUAL RECOGNITION

    Enabling computers to recognize objects present in images has been a long-standing but tremendously challenging problem in the field of computer vision for decades. Beyond the difficulties resulting from huge appearance variations, large-scale visual recognition poses unprecedented challenges when the number of visual categories being considered reaches thousands, and the amount of images increases to millions. This dissertation contributes to addressing a number of the challenging issues in large-scale visual recognition. First, we develop an automatic image-text alignment method to collect massive amounts of labeled images from the Web for training visual concept classifiers. Specifically, we first crawl a large number of cross-media Web pages containing Web images and their auxiliary texts, and then segment them into a collection of image-text pairs. We then show that near-duplicate image clustering according to visual similarity can significantly reduce the uncertainty on the relatedness of Web images’ semantics to their auxiliary text terms or phrases. Finally, we empirically demonstrate that random walk over a newly proposed phrase correlation network can help to achieve more precise image-text alignment by refining the relevance scores between Web images and their auxiliary text terms. Second, we propose a visual tree model to reduce the computational complexity of a large-scale visual recognition system by hierarchically organizing and learning the classifiers for a large number of visual categories in a tree structure. Compared to previous tree models, such as the label tree, our visual tree model does not require training a huge number of classifiers in advance, which is computationally expensive. However, we experimentally show that the proposed visual tree achieves results that are comparable or even superior to other tree models in terms of recognition accuracy and efficiency.
Third, we present a joint dictionary learning (JDL) algorithm which exploits the inter-category visual correlations to learn more discriminative dictionaries for image content representation. Given a group of visually correlated categories, JDL simultaneously learns one common dictionary and multiple category-specific dictionaries to explicitly separate the shared visual atoms from the category-specific ones. We accordingly develop three classification schemes to make full use of the dictionaries learned by JDL for visual content representation in the task of image categorization. Experiments on two image data sets which respectively contain 17 and 1,000 categories demonstrate the effectiveness of the proposed algorithm. In the last part of the dissertation, we develop a novel data-driven algorithm to quantitatively characterize the semantic gaps of different visual concepts for learning complexity estimation and inference model selection. The semantic gaps are estimated directly in the visual feature space, since the visual feature space is the common space for concept classifier training and automatic concept detection. We show that the quantitative characterization of the semantic gaps helps to automatically select more effective inference models for classifier training, which further improves the recognition accuracy rates.
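The efficiency argument behind tree-structured classifiers, as in the visual tree model above, is that prediction descends the hierarchy and only evaluates the scorers of each visited node's children. The toy traversal below illustrates that idea in general form; the dictionary-based tree layout and scorer signatures are assumptions of this sketch, not the dissertation's model:

```python
def classify(node, x):
    """Top-down traversal of a classifier tree: at each internal node only
    the children's scorers run, so a balanced tree over K leaf categories
    evaluates O(b * log_b K) classifiers instead of K one-vs-rest scorers."""
    while node["children"]:                      # descend until a leaf
        node = max(node["children"], key=lambda c: c["score"](x))
    return node["label"]

def leaf(label, score):
    """Convenience constructor for a terminal category node."""
    return {"label": label, "score": score, "children": []}
```

With thousands of categories, this logarithmic number of scorer evaluations per image is what makes tree models attractive at large scale.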