
    Artificial Intelligence in Sustainable Vertical Farming

    As the global challenges of population growth, climate change, and resource scarcity intensify, the agricultural landscape is at a critical juncture. Sustainable vertical farming emerges as a transformative solution to address these challenges by maximizing crop yields in controlled environments. This paradigm shift necessitates the integration of cutting-edge technologies, with Artificial Intelligence (AI) at the forefront. The paper provides a comprehensive exploration of the role of AI in sustainable vertical farming, investigating its potential, challenges, and opportunities. The review synthesizes the current state of AI applications, encompassing machine learning, computer vision, the Internet of Things (IoT), and robotics, in optimizing resource usage, automating tasks, and enhancing decision-making. It identifies gaps in research, emphasizing the need for optimized AI models, interdisciplinary collaboration, and the development of explainable AI in agriculture. The implications extend beyond efficiency gains, considering economic viability, reduced environmental impact, and increased food security. The paper concludes by offering insights for stakeholders and suggesting avenues for future research, aiming to guide the integration of AI technologies in sustainable vertical farming for a resilient and sustainable future in agriculture.

    Towards automated phenotyping in plant tissue culture

    Plant in vitro culture techniques comprise important fundamental methods of modern plant research, propagation and breeding. Innovative scientific approaches to further develop the cultivation process therefore have the potential of far-reaching impact on many different areas. In particular, automation can increase the efficiency of in vitro propagation, a domain currently constrained by intensive manual labor. Automated phenotyping of plant in vitro culture bears the potential to extend the evaluation of in vitro plants from manual destructive endpoint measurements to continuous and objective digital quantification of plant traits. Consequently, this can lead to a better understanding of crucial developmental processes and will help to clarify the emergence of physiological disorders of plant in vitro cultures. The aim of this dissertation was to investigate and exemplify the potential of optical sensing methods and machine learning in plant in vitro culture from an interdisciplinary point of view. A novel robotic phenotyping system for automated, non-destructive, multi-dimensional in situ detection of plant traits based on low-cost sensor technology was conceptualized, developed and tested. Various sensor technologies, including an RGB camera, a laser distance sensor, a micro spectrometer, and a thermal camera, were applied partly for the first time under these challenging conditions and evaluated with respect to the resulting data quality and feasibility. In addition to the development of new dynamic, semi-automated data processing pipelines, the automatic acquisition of multisensory data across an entire subculture passage of plant in vitro cultures was demonstrated. This allowed novel time series images of different developmental processes of plant in vitro cultures and the emergence of physiological disorders to be captured in situ for the first time. The digital determination of relevant parameters, such as projected plant area, average canopy height, and maximum plant height, was demonstrated; these can be used as critical descriptors of plant growth performance in vitro. In addition, a novel method of non-destructive quantification of media volume from depth data was developed, which may allow monitoring of water uptake by plants and evaporation from the culture medium. The phenotyping system was used to investigate the etiology of the physiological growth anomaly hyperhydricity. To this end, digital monitoring of the morphology was conducted along with spectroscopic studies of reflectance behavior over time. The optical characteristics newly identified by classical spectral analysis, such as reduced reflectance and major absorption peaks of hyperhydricity in the SWIR region, were validated as the main discriminating features by a trained support vector machine with a balanced accuracy of 84% on the test set, demonstrating the feasibility of spectral detection of hyperhydricity. In addition, an RGB image dataset was used for automated detection of hyperhydricity using deep neural networks. The high performance metrics, with a precision of 83.8% and a recall of 95.7% on test images, underscore that the spatial RGB data contain a sufficient number of discriminating features for detection; thus a second approach for automatic detection of hyperhydricity, based on RGB images, is proposed.
The resulting multimodal sensor data sets of the robotic phenotyping system were tested as a supporting tool of an e-learning module in higher education to increase digital skills in the fields of sensing, data processing and data analysis, and were evaluated by means of a student survey. This proof-of-concept study revealed an overall high level of acceptance and advocacy by students, with 70% good to very good ratings. However, with increased complexity of the learning task, students felt overburdened and rated the respective session lower. In summary, this study is expected to pave the way for increased use of automated sensor-based phenotyping in conjunction with machine learning in plant research and commercial micropropagation in the future.
Bundesministerium für Ernährung und Landwirtschaft (BMEL)/Digitale Experimentierfelder/28DE103F18/E
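The spectral detection step summarized above can be illustrated with a minimal sketch. The snippet below assumes a hypothetical table of micro-spectrometer reflectance spectra (spectra.csv, one row per measurement with a label column marking hyperhydric versus normal tissue); the preprocessing and hyperparameters are placeholders, not the dissertation's actual pipeline.

```python
# Illustrative sketch: SVM classification of reflectance spectra into
# hyperhydric vs. normal tissue. File name, column names, and hyperparameters
# are assumptions, not taken from the dissertation.
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.metrics import balanced_accuracy_score

df = pd.read_csv("spectra.csv")             # hypothetical spectral dataset
X = df.drop(columns=["label"]).values       # reflectance per wavelength band
y = df["label"].values                      # "hyperhydric" / "normal"

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, stratify=y, random_state=42)

# Standardize each band, then fit an RBF-kernel support vector machine.
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", class_weight="balanced"))
clf.fit(X_train, y_train)

print("balanced accuracy:", balanced_accuracy_score(y_test, clf.predict(X_test)))
```

Balanced accuracy, the metric reported in the abstract, is robust to an imbalance between hyperhydric and normal samples.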

    Crop plant reconstruction and feature extraction based on 3-D vision

    3-D imaging is increasingly affordable and offers new possibilities for a more efficient agricultural practice with the use of highly advanced technological devices. Some of the reasons contributing to this include the continuous increase in computer processing power, the decrease in cost and size of electronics, the increase in solid-state illumination efficiency, and the need for greater knowledge and care of individual crops. The implementation of 3-D imaging systems in agriculture is impeded by the economic justification of using expensive devices for producing relatively low-cost seasonal products. However, this may no longer be true, since low-cost 3-D sensors with advanced technical capabilities, such as the one used in this work, are already available. The aim of this cumulative dissertation was to develop new methodologies to reconstruct the 3-D shape of agricultural environments in order to recognize and quantitatively describe structures, in this case maize plants, for agricultural applications such as plant breeding and precision farming. To fulfil this aim, a comprehensive review of 3-D imaging systems in agricultural applications was carried out to select a sensor that was affordable and had not been fully investigated in agricultural environments. A low-cost TOF sensor was selected to obtain 3-D data of maize plants, and a new adaptive methodology was proposed for point cloud rigid registration and stitching. The resulting maize 3-D point clouds were highly dense and generated in a cost-effective manner. The validation of the methodology showed that the plants were reconstructed with high accuracy, and the qualitative analysis showed the visual variability of the plants depending on the 3-D perspective view. The generated point cloud was used to obtain information about the plant parameters (stem position and plant height) in order to quantitatively describe the plant. The resulting plant stem positions were estimated with an average mean error and standard deviation of 27 mm and 14 mm, respectively. Additionally, meaningful information about the plant height profile was provided, with an average overall mean error of 8.7 mm. Since the maize plants considered in this research were highly heterogeneous in height, some of them had folded leaves, and they were planted with standard deviations that emulate the real performance of a seeder, the experimental maize setup can be considered a difficult scenario. Therefore, better performance for both plant stem position and height estimation could be expected for a maize field in better conditions. Finally, having a 3-D reconstruction of the maize plants using a cost-effective sensor mounted on a small electric-motor-driven robotic platform means that the cost (whether economic, energetic or time) of generating every point in the point cloud is greatly reduced compared with previous research.
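As a rough illustration of the point cloud rigid registration and stitching described above, the sketch below aligns two overlapping TOF scans with Open3D's point-to-point ICP; the file names, voxel size, and distance threshold are assumptions, and the adaptive registration strategy developed in the dissertation is not reproduced here.

```python
# Illustrative sketch: rigid registration (ICP) and stitching of two
# overlapping TOF point clouds of a maize row. File names, voxel size, and
# distance threshold are assumptions; the dissertation's adaptive
# registration method is not reproduced here.
import numpy as np
import open3d as o3d

source = o3d.io.read_point_cloud("scan_view1.ply")   # hypothetical scan files
target = o3d.io.read_point_cloud("scan_view2.ply")

# Downsample to speed up ICP and reduce sensor noise.
src_down = source.voxel_down_sample(voxel_size=0.01)
tgt_down = target.voxel_down_sample(voxel_size=0.01)

# Point-to-point ICP, starting from the identity transform.
result = o3d.pipelines.registration.registration_icp(
    src_down, tgt_down, 0.05, np.identity(4),
    o3d.pipelines.registration.TransformationEstimationPointToPoint())

# Apply the estimated transform to the full-resolution cloud and stitch.
source.transform(result.transformation)
stitched = source + target
o3d.io.write_point_cloud("maize_stitched.ply", stitched)

# Simple height descriptors, assuming the ground plane lies at z = 0.
z = np.asarray(stitched.points)[:, 2]
print("max plant height:", z.max(), "mean canopy height:", z.mean())
```

Registering on downsampled copies and applying the resulting transform to the full-resolution clouds is a common way to keep alignment fast on dense TOF scans.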

    LiDARPheno: A Low-Cost LiDAR-based 3D Scanning System for Plant Morphological Trait Characterization

    The ever-growing world population brings the challenge of food security. Gene modification tools have opened a new era of fast-paced research on the identification and development of new crops. However, the bottleneck in plant phenotyping technology restricts the alignment of genotype-phenotype development, as phenotyping is key to identifying potential crops with improved yield and resistance to a changing environment. Various attempts have been made to make plant phenotyping “high-throughput” by utilizing existing sensors and technology. However, the demand for ‘good’ phenotypic information that can be linked to the genome to understand gene-environment interactions remains a bottleneck in plant phenotyping technologies. Moreover, the available technologies and instruments are inaccessible, expensive and sometimes bulky. This thesis work attempts to address some of these critical problems through the exploration and development of a low-cost LiDAR-based platform for phenotyping plants in the lab and in the field. A low-cost LiDAR-based system design, LiDARPheno, is introduced to assess the feasibility of an inexpensive LiDAR sensor for extracting leaf traits (length, width, and area). A detailed design of the LiDARPheno, based on low-cost and off-the-shelf components and modules, is presented. Moreover, the design of the firmware that controls the hardware setup of the system and a user-level Python-based script for data acquisition are presented. The software part of the system utilizes publicly available libraries and application programming interfaces (APIs), making the system easy to implement for a non-technical user. The LiDAR data analysis methods are presented, and algorithms for processing the data and extracting the leaf traits are developed. The processing includes conversion, cleaning/filtering, segmentation and trait extraction from the LiDAR data. Experiments on indoor plants and canola plants were performed to develop and validate the methods for estimating leaf traits. The results of LiDARPheno-based trait extraction are compared with those of the SICK LMS400 (a commercial 2D LiDAR) to assess the performance of the developed system. Experimental results show fair agreement between the developed system and the commercial LiDAR system; the results are also compared with the acquired ground truth. The LiDARPheno can provide access to inexpensive LiDAR-based scanning and open up opportunities for future exploration.
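The processing chain outlined above (conversion, cleaning/filtering, segmentation, and trait extraction) might look roughly like the following sketch, which assumes the LiDAR frames have already been converted to an N x 3 array of XYZ coordinates in metres; the thresholds, clustering parameters, and trait definitions are illustrative placeholders rather than the thesis's actual algorithms.

```python
# Rough sketch of a LiDARPheno-style processing chain: cleaning/filtering,
# segmentation, and leaf trait extraction from an (N, 3) XYZ array.
# Thresholds, clustering parameters, and trait definitions are assumptions.
import numpy as np
from sklearn.cluster import DBSCAN
from sklearn.decomposition import PCA

points = np.load("lidar_points.npy")        # hypothetical pre-converted scan

# Cleaning/filtering: drop returns outside a plausible plant volume (metres).
mask = (points[:, 2] > 0.02) & (np.linalg.norm(points[:, :2], axis=1) < 0.5)
plant = points[mask]

# Segmentation: cluster the remaining points into individual leaves.
labels = DBSCAN(eps=0.02, min_samples=20).fit_predict(plant)

for leaf_id in set(labels) - {-1}:          # label -1 marks noise points
    leaf = plant[labels == leaf_id]
    # Trait extraction: project points onto the leaf's best-fit plane and
    # take the extents along the principal axes as length and width.
    proj = PCA(n_components=2).fit_transform(leaf)
    length, width = np.ptp(proj, axis=0)
    area = length * width                   # coarse planar approximation
    print(f"leaf {leaf_id}: length={length:.3f} m, width={width:.3f} m, area={area:.4f} m^2")
```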

    Research Trends on Greenhouse Engineering Using a Science Mapping Approach

    Horticultural protected cultivation has spread throughout the world as it has proven to be extremely effective. In recent years, the greenhouse engineering research field has become one of the main research topics within greenhouse farming. The main objectives of the current study were to identify the major research topics and their trends during the last four decades by analyzing the co-occurrence network of keywords associated with greenhouse engineering publications. A total of 3804 pertinent documents published between 1981 and 2021 were analyzed and discussed. China, the United States, Spain, Italy and the Netherlands have been the most active countries, accounting for more than 36% of the relevant literature. The keyword cluster analysis suggested the presence of five principal research topics: energy management and storage; monitoring and control of greenhouse climate parameters; automation of greenhouse operations through the internet of things (IoT) and wireless sensor network (WSN) applications; greenhouse covering materials and microclimate optimization in relation to plant growth; and structural and functional design for improving greenhouse stability, ventilation and microclimate. Recent research trends are focused on real-time monitoring and automatic control systems based on IoT and WSN technologies, multi-objective optimization approaches for greenhouse climate control, efficient artificial lighting, and sustainable greenhouse crop cultivation using renewable energy.
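The keyword co-occurrence analysis behind such a science mapping study can be sketched as follows, using a few hypothetical author-keyword lists; the actual bibliometric software, thesaurus cleaning, and clustering settings used in the study are not described in the abstract and are not reproduced here.

```python
# Illustrative sketch: keyword co-occurrence network and cluster detection.
# The keyword lists are placeholders for records exported from a bibliographic
# database; the study's actual mapping software and settings are not reproduced.
from itertools import combinations

import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

records = [                                  # hypothetical author keywords
    ["greenhouse", "iot", "wireless sensor network"],
    ["greenhouse", "climate control", "energy storage"],
    ["greenhouse", "covering materials", "microclimate"],
]

G = nx.Graph()
for keywords in records:
    for a, b in combinations(sorted(set(keywords)), 2):
        # Increment the edge weight each time two keywords co-occur.
        weight = G.get_edge_data(a, b, default={"weight": 0})["weight"]
        G.add_edge(a, b, weight=weight + 1)

# Group keywords into research topics via modularity-based communities.
for i, topic in enumerate(greedy_modularity_communities(G, weight="weight"), start=1):
    print(f"topic {i}: {sorted(topic)}")
```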

    LED Color Gradient As A New Screening Tool For Rapid Phenotyping Of Plant Responses To Light Quality

    Background: The increasing demand for local food production is fueling high interest in the development of controlled environment agriculture. In particular, LED technology brings energy-saving advantages together with the possibility to manipulate plant phenotypes through light quality control. However, optimizing light quality is required for each cultivated plant and specific purpose. Findings: In this paper, it is shown that the combination of LED gradient setups with imaging-based non-destructive plant phenotyping constitutes an interesting new screening tool with the potential to improve speed, logistics, and information output. To validate this concept, an experiment was performed to evaluate the effects of a complete range of Red:Blue ratios on seven plant species: Arabidopsis thaliana, Brachypodium distachyon, Euphorbia peplus, Ocimum basilicum, Oryza sativa, Solanum lycopersicum, and Setaria viridis. Plants were exposed to the light gradient for 30 days and showed significant, but species-dependent, responses in terms of dimension, shape, and color. A time series analysis of phenotypic descriptors highlighted growth changes but also transient responses of plant shapes to the Red:Blue ratio. Conclusion: This approach, which generated a large reusable dataset, can be adapted for addressing specific needs in crop production or fundamental questions in photobiology.
    VeLire, Tropical Plant Factory (Plant'HP
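A minimal sketch of how imaging-based descriptors of dimension, shape, and color can be extracted from a top-view plant image is given below; the file name and HSV thresholds are assumptions, and the study's actual phenotyping pipeline is not detailed in the abstract.

```python
# Minimal sketch: simple phenotypic descriptors (projected area, compactness,
# mean hue) from a top-view plant image. File name and HSV thresholds are
# assumptions, not the study's actual phenotyping pipeline.
import cv2
import numpy as np

img = cv2.imread("tray_topview.png")                 # hypothetical image
hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)

# Segment green plant tissue with a coarse HSV threshold.
mask = cv2.inRange(hsv, (35, 40, 40), (85, 255, 255))

# Dimension descriptor: projected plant area in pixels.
area_px = int(cv2.countNonZero(mask))

# Shape descriptor: compactness of the largest connected contour.
contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
largest = max(contours, key=cv2.contourArea)
perimeter = cv2.arcLength(largest, True)
compactness = 4 * np.pi * cv2.contourArea(largest) / (perimeter ** 2)

# Color descriptor: mean hue over the plant mask.
mean_hue = float(hsv[..., 0][mask > 0].mean())

print(area_px, round(compactness, 3), round(mean_hue, 1))
```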

    Drones in Vegetable Crops: A Systematic Literature Review

    In the context of increasing global population and climate change, modern agriculture must enhance production efficiency. Vegetable production is crucial for human nutrition and has a significant environmental impact. To address this challenge, the agricultural sector needs to modernize and utilize advanced technologies such as drones to increase productivity, improve quality, and reduce resource consumption. These devices, known as Unmanned Aerial Vehicles (UAVs), with their agility and versatility, play a crucial role in monitoring and spraying operations and contribute significantly to enhancing the efficacy of precision farming. The aim of this review is to examine the critical role of drones as innovative tools to enhance the management and yield of vegetable crop cultivation. This review was carried out using the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) framework and involved the analysis of a wide range of research published from 2018 to 2023. According to the phases of Identification, Screening, and Eligibility, 132 papers were selected and analysed. These papers were categorized based on the types of drone applications in vegetable crop production, providing an overview of how these tools fit into the field of Precision Farming. Technological developments of these tools and data processing methods were then explored, examining the contributions of Machine and Deep Learning and Artificial Intelligence. Final considerations were presented regarding practical implementation and future technical and scientific challenges to fully harness the potential of drones in precision agriculture and vegetable crop production. The review pointed out the significance of drone applications in vegetable crops and the immense potential of these tools in enhancing cultivation efficiency. Drone utilization enables not only the reduction of inputs such as herbicides, fertilizers, pesticides, and water but also the prevention of damage through early diagnosis of various types of stress. These input savings can yield environmental benefits, positioning these technologies as potential solutions for the environmental sustainability of vegetable crops.
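The Identification, Screening, and Eligibility phases mentioned above can be mimicked with a small record-filtering sketch; the CSV export, column names, and inclusion keywords below are assumptions for illustration and do not reflect the review's actual criteria.

```python
# Illustrative sketch of a PRISMA-style screening workflow on a bibliographic
# export. The file name, column names, and inclusion keywords are assumptions.
import pandas as pd

records = pd.read_csv("search_export.csv")   # hypothetical database export

# Identification: remove duplicate records by DOI and by title.
identified = records.drop_duplicates(subset=["doi"]).drop_duplicates(subset=["title"])

# Screening: keep 2018-2023 publications whose title or abstract mentions
# drones/UAVs in the context of vegetable crops.
text = (identified["title"].fillna("") + " " + identified["abstract"].fillna("")).str.lower()
keep = (
    identified["year"].between(2018, 2023)
    & text.str.contains("uav|drone", regex=True)
    & text.str.contains("vegetable|horticult", regex=True)
)
screened = identified[keep]

# Eligibility: full-text relevance flags recorded manually in a separate column.
eligible = screened[screened["full_text_relevant"] == True]

print(len(records), "records ->", len(identified), "deduplicated ->",
      len(screened), "screened ->", len(eligible), "included")
```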