
    Assessing hyper parameter optimization and speedup for convolutional neural networks

    The increased processing power of graphical processing units (GPUs) and the availability of large image datasets have fostered a renewed interest in extracting semantic information from images. Promising results for complex image categorization problems have been achieved using deep learning, with neural networks composed of many layers. Convolutional neural networks (CNNs) are one such architecture, offering new opportunities for image classification. Advances in CNNs enable the development of training models using large labelled image datasets, but the hyperparameters must be specified, which is challenging and complex due to their large number. A substantial amount of computational power and processing time is required to determine the hyperparameters that define a model yielding good results. This article provides a survey of hyperparameter search and optimization methods for CNN architectures.
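As a minimal illustration of the simplest strategy such surveys cover, random search draws hyperparameter configurations independently from a predefined space and keeps the best-scoring one. The search space and all names below are our own illustrative choices, not taken from the article:

```python
import random

# Hypothetical search space for CNN hyperparameters (values are illustrative).
SEARCH_SPACE = {
    "learning_rate": [1e-4, 3e-4, 1e-3, 3e-3],
    "batch_size": [16, 32, 64, 128],
    "num_filters": [16, 32, 64],
    "dropout": [0.0, 0.25, 0.5],
}

def sample_config(rng):
    """Draw one hyperparameter configuration uniformly from the space."""
    return {name: rng.choice(values) for name, values in SEARCH_SPACE.items()}

def random_search(objective, n_trials=20, seed=0):
    """Evaluate n_trials random configurations and return the best one.

    In practice `objective` would train a CNN with the given configuration
    and return its validation accuracy; here it is any scoring callable.
    """
    rng = random.Random(seed)
    best_cfg, best_score = None, float("-inf")
    for _ in range(n_trials):
        cfg = sample_config(rng)
        score = objective(cfg)
        if score > best_score:
            best_cfg, best_score = cfg, score
    return best_cfg, best_score
```

Grid search enumerates the same space exhaustively; random search typically reaches comparable optima with far fewer trials when only a few hyperparameters strongly affect the result, which is one reason the cost trade-offs surveyed above matter.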

    TasselNet: Counting maize tassels in the wild via local counts regression network

    Accurately counting maize tassels is important for monitoring the growth status of maize plants. This tedious task, however, is still mainly done manually. In the context of modern plant phenotyping, automating this task is required to meet the needs of large-scale analysis of genotype and phenotype. In recent years, computer vision technologies have experienced a significant breakthrough due to the emergence of large-scale datasets and increased computational resources. Naturally, image-based approaches have also received much attention in plant-related studies. Yet most image-based systems for plant phenotyping are deployed under controlled laboratory environments. When transferring the application scenario to unconstrained in-field conditions, intrinsic and extrinsic variations in the wild pose great challenges for accurate counting of maize tassels, which goes beyond the ability of conventional image processing techniques. This calls for more robust computer vision approaches to address in-field variations. This paper studies the in-field counting problem of maize tassels. To our knowledge, this is the first time that a plant-related counting problem has been considered using computer vision technologies under an unconstrained field-based environment.
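The local counts idea behind approaches like this can be sketched as follows: a regressor predicts a count for each local image patch, and the overlapping patch predictions are normalized into one image-level count. This is an illustrative simplification; here the CNN regressor is simulated by summing a toy density map, and all function names are ours:

```python
# Minimal sketch of local counts regression: predict a count per sliding
# window, then normalize the overlapping local counts into a global count.

def patch_counts(density, patch, stride):
    """Slide a window over a 2D density map and return per-patch counts.

    The 'regressor' here simply sums ground-truth density inside each
    window; in practice a CNN would predict each patch count from pixels.
    """
    h, w = len(density), len(density[0])
    counts = []
    for top in range(0, h - patch + 1, stride):
        row = []
        for left in range(0, w - patch + 1, stride):
            row.append(sum(density[i][j]
                           for i in range(top, top + patch)
                           for j in range(left, left + patch)))
        counts.append(row)
    return counts

def merge_count(counts, patch, stride):
    """Normalize overlapping local counts into a global count estimate.

    Each interior pixel is covered by (patch // stride) ** 2 windows, so
    dividing the summed local counts by that factor recovers the total
    (exactly so when stride == patch, i.e. non-overlapping windows).
    """
    overlap = (patch // stride) ** 2
    return sum(sum(row) for row in counts) / overlap
```

Predicting local counts rather than dot locations sidesteps the need to localize each tassel exactly, which is what makes the formulation robust to the in-field occlusion and clutter described above.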

    Towards infield, live plant phenotyping using a reduced-parameter CNN

    © 2019, The Author(s). There is an increase in consumption of agricultural produce as a result of the rapidly growing human population, particularly in developing nations. This has triggered high-quality plant phenotyping research to help with the breeding of high-yielding plants that can adapt to our continuously changing climate. Novel, low-cost, fully automated plant phenotyping systems, capable of infield deployment, are required to help identify quantitative plant phenotypes. The identification of quantitative plant phenotypes is a key challenge which relies heavily on the precise segmentation of plant images. Recently, the plant phenotyping community has started to use very deep convolutional neural networks (CNNs) to help tackle this fundamental problem. However, these very deep CNNs rely on millions of model parameters and generate very large weight matrices, thus making them difficult to deploy infield on low-cost, resource-limited devices. We explore how to compress existing very deep CNNs for plant image segmentation, thus making them easily deployable infield and on mobile devices. In particular, we focus on applying these models to the pixel-wise segmentation of plants into multiple classes including background, a challenging problem in the plant phenotyping community. We combined two approaches (separable convolutions and SVD) to reduce the model parameter numbers and weight matrices of these very deep CNN-based models. Using our combined method (separable convolution and SVD) reduced the weight matrix by up to 95% without affecting pixel-wise accuracy. These methods have been evaluated on two public plant datasets and one non-plant dataset to illustrate generality. We have successfully tested our models on a mobile device.
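The two compression ideas combined above can be illustrated by their parameter arithmetic: a depthwise separable convolution factors a standard convolution into per-channel spatial filtering plus a 1x1 pointwise channel mix, and SVD replaces a dense weight matrix with a truncated low-rank factorization. A sketch under those standard definitions (function names are ours):

```python
def standard_conv_params(c_in, c_out, k):
    """Parameter count of a standard k x k convolution layer."""
    return c_in * c_out * k * k

def separable_conv_params(c_in, c_out, k):
    """Depthwise separable convolution: one k x k depthwise filter per
    input channel, followed by a 1 x 1 pointwise conv across channels."""
    return c_in * k * k + c_in * c_out

def svd_params(rows, cols, rank):
    """Storing a rank-r factorization U (rows x r) and V (r x cols)
    instead of the full rows x cols weight matrix."""
    return rank * (rows + cols)
```

For a 256-to-256-channel 3x3 layer, the separable form alone already gives roughly an 88% parameter reduction (67,840 vs. 589,824); the up-to-95% figure reported above comes from combining both techniques across whole networks.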

    Convolutional Neural Net-Based Cassava Storage Root Counting Using Real and Synthetic Images

    © Copyright © 2019 Atanbori, Montoya-P, Selvaraj, French and Pridmore. Cassava roots are complex structures comprising several distinct types of root. The number and size of the storage roots are two potential phenotypic traits reflecting crop yield and quality. Counting and measuring the size of cassava storage roots are usually done manually, or semi-automatically by first segmenting cassava root images. However, occlusion of both storage and fibrous roots makes the process both time-consuming and error-prone. While Convolutional Neural Nets have shown performance above the state-of-the-art in many image processing and analysis tasks, there are currently a limited number of Convolutional Neural Net-based methods for counting plant features. This is due to the limited availability of data, annotated by expert plant biologists, which represents all possible measurement outcomes. Existing works in this area either learn a direct image-to-count regressor model by regressing to a count value, or perform a count after segmenting the image. We, however, address the problem using a direct image-to-count prediction model. This is made possible by generating synthetic images, using a conditional Generative Adversarial Network (GAN), to provide training data for missing classes. We automatically form cassava storage root masks for any missing classes using existing ground-truth masks, and input them as a condition to our GAN model to generate synthetic root images. We combine the resulting synthetic images with real images to learn a direct image-to-count prediction model capable of counting the number of storage roots in real cassava images taken from a low cost aeroponic growth system. These models are used to develop a system that counts cassava storage roots in real images. 
Our system first predicts age group ('young' and 'old' roots, pertinent to our image capture regime) in a given image and then, based on this prediction, selects an appropriate model to predict the number of storage roots. We achieve 91% accuracy in predicting the age of storage roots, and 86% and 71% overall percentage agreement in counting 'old' and 'young' storage roots, respectively. Thus we are able to demonstrate that synthetically generated cassava root images can be used to supplement missing root classes, turning the counting problem into a direct image-to-count prediction task.
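The two-stage pipeline described above (classify the age group, then dispatch to a per-group counting model) reduces to a simple selection rule. The sketch below uses stand-in callables and a dictionary dispatch where the paper uses trained CNNs; all names are ours:

```python
# Hedged sketch of a two-stage predict-then-count pipeline: an age
# classifier picks which count regressor to apply to the image.

def two_stage_count(image, age_classifier, count_models):
    """Predict the age group, then count with the model for that group.

    age_classifier: callable mapping an image to a group label
                    (e.g. 'young' or 'old').
    count_models:   dict mapping each group label to a counting callable.
    Returns (predicted_group, predicted_count).
    """
    age = age_classifier(image)
    return age, count_models[age](image)
```

Routing images to specialized per-group models lets each regressor fit a narrower distribution, which is consistent with the different agreement scores reported above for 'old' and 'young' roots.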

    Leveraging Image Analysis for High-Throughput Plant Phenotyping

    The complex interaction between a genotype and its environment controls the biophysical properties of a plant, manifested in observable traits, i.e., the plant's phenome, which influences resource acquisition, performance, and yield. High-throughput automated image-based plant phenotyping refers to the non-destructive sensing and quantification of plant traits by analyzing images captured at regular intervals and with precision. While phenomic research has drawn significant attention in the last decade, extracting meaningful and reliable numerical phenotypes from plant images, especially when considering individual components, e.g., leaves, stem, fruit, and flower, remains a critical bottleneck to the translation of advances in phenotyping technology into genetic insights, due to various challenges including lighting variations, plant rotations, and self-occlusions. The paper provides (1) a framework for plant phenotyping in a multimodal, multi-view, time-lapsed, high-throughput imaging system; (2) a taxonomy of phenotypes that may be derived by image analysis for better understanding of morphological structure and functional processes in plants; (3) a brief discussion on publicly available datasets to encourage algorithm development and uniform comparison with state-of-the-art methods; (4) an overview of the state-of-the-art image-based high-throughput plant phenotyping methods; and (5) open problems for the advancement of this research field.

    Deep Learning for Plant Stress Phenotyping: Trends and Future Perspectives

    Deep learning (DL), a subset of machine learning approaches, has emerged as a versatile tool to assimilate large amounts of heterogeneous data and provide reliable predictions of complex and uncertain phenomena. These tools are increasingly being used by the plant science community to make sense of the large datasets now regularly collected via high-throughput phenotyping and genotyping. We review recent work in which DL principles have been utilized for digital image-based plant stress phenotyping. We provide a comparative assessment of DL tools against other existing techniques, with respect to decision accuracy, data size requirements, and applicability in various scenarios. Finally, we outline several avenues of research leveraging current and future DL tools in plant science.

    RootNav 2.0: Deep learning for automatic navigation of complex plant root architectures

    © The Author(s) 2019. Published by Oxford University Press. BACKGROUND: In recent years, quantitative analysis of root growth has become increasingly important as a way to explore the influence of abiotic stress, such as high temperature and drought, on a plant's ability to take up water and nutrients. Segmentation and feature extraction of plant roots from images presents a significant computer vision challenge: root images contain complicated structures and exhibit variations in size, background, occlusion, clutter, and lighting conditions. We present a new image analysis approach that provides fully automatic extraction of complex root system architectures from a range of plant species in varied imaging set-ups. Driven by modern deep-learning approaches, RootNav 2.0 replaces previously manual and semi-automatic feature extraction with an extremely deep multi-task convolutional neural network architecture. The network also locates seeds and first- and second-order root tips to drive a search algorithm seeking optimal paths throughout the image, extracting accurate architectures without user interaction. RESULTS: We develop and train a novel deep network architecture to explicitly combine local pixel information with global scene information in order to accurately segment small root features across high-resolution images. The proposed method was evaluated on images of wheat (Triticum aestivum L.) from a seedling assay. Compared with semi-automatic analysis via the original RootNav tool, the proposed method demonstrated comparable accuracy, with a 10-fold increase in speed. The network was able to adapt to different plant species via transfer learning, offering similar accuracy when transferred to an Arabidopsis thaliana plate assay. A final instance of transfer learning, to images of Brassica napus from a hydroponic assay, still demonstrated good accuracy despite many fewer training images.
CONCLUSIONS: We present RootNav 2.0, a new approach to root image analysis driven by a deep neural network. The tool can be adapted to new image domains with a reduced number of images, and offers substantial speed improvements over semi-automatic and manual approaches. The tool outputs root architectures in the widely accepted RSML standard, for which numerous analysis packages exist (http://rootsystemml.github.io/), as well as segmentation masks compatible with other automated measurement tools. The tool will provide researchers with the ability to analyse root systems at larger scales than ever before, at a time when large-scale genomic studies have made this more important than ever.
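The path-extraction step can be illustrated with a least-cost search over the network's per-pixel root probability map: low root probability becomes high traversal cost, and a shortest path is traced from a detected tip toward the seed. RootNav 2.0's actual search differs in detail; this is a hedged sketch using plain Dijkstra on a small grid:

```python
import heapq

def least_cost_path(prob, start, goal):
    """Dijkstra over a 2D grid of root probabilities.

    The cost of stepping into a cell is (1 - probability), so the search
    prefers routes along pixels the network believes are root. Returns
    the list of (row, col) cells from start to goal.
    """
    h, w = len(prob), len(prob[0])
    dist = {start: 0.0}
    prev = {}
    heap = [(0.0, start)]
    while heap:
        d, (r, c) = heapq.heappop(heap)
        if (r, c) == goal:
            break
        if d > dist.get((r, c), float("inf")):
            continue  # stale heap entry
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < h and 0 <= nc < w:
                nd = d + (1.0 - prob[nr][nc])
                if nd < dist.get((nr, nc), float("inf")):
                    dist[(nr, nc)] = nd
                    prev[(nr, nc)] = (r, c)
                    heapq.heappush(heap, (nd, (nr, nc)))
    path, node = [], goal
    while node != start:
        path.append(node)
        node = prev[node]
    path.append(start)
    return path[::-1]
```

Running the search once per detected root tip, with the seed location as the goal, yields one candidate architecture per tip without any user interaction, which is the role the search algorithm plays in the pipeline described above.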

    Towards automated phenotyping in plant tissue culture

    Plant in vitro culture techniques comprise important fundamental methods of modern plant research, propagation and breeding. Innovative scientific approaches to further develop the cultivation process, therefore, have the potential of far-reaching impact on many different areas. In particular, automation can increase the efficiency of in vitro propagation, a domain currently constrained by intensive manual labor. Automated phenotyping of plant in vitro culture bears the potential to extend the evaluation of in vitro plants from manual destructive endpoint measurements to continuous and objective digital quantification of plant traits. Consequently, this can lead to a better understanding of crucial developmental processes and will help to clarify the emergence of physiological disorders of plant in vitro cultures. The aim of this dissertation was to investigate and exemplify the potential of optical sensing methods and machine learning in plant in vitro culture from an interdisciplinary point of view. A novel robotic phenotyping system for automated, non-destructive, multi-dimensional in situ detection of plant traits based on low-cost sensor technology was conceptualized, developed, and tested. Various sensor technologies, including an RGB camera, a laser distance sensor, a micro spectrometer, and a thermal camera, were applied partly for the first time under these challenging conditions and evaluated with respect to the resulting data quality and feasibility. In addition to the development of new dynamic, semi-automated data processing pipelines, the automatic acquisition of multisensory data across an entire subculture passage of plant in vitro cultures was demonstrated. This allowed novel time series images of different developmental processes of plant in vitro cultures and the emergence of physiological disorders to be captured in situ for the first time.
The digital determination of relevant parameters such as projected plant area, average canopy height, and maximum plant height was demonstrated; these can be used as critical descriptors of plant growth performance in vitro. In addition, a novel method for the non-destructive quantification of media volume from depth data was developed, which may allow monitoring of water uptake by plants and evaporation from the culture medium. The phenotyping system was used to investigate the etiology of the physiological growth anomaly hyperhydricity. To this end, digital monitoring of morphology along with spectroscopic studies of reflectance behavior over time was conducted. The new optical characteristics identified by classical spectral analysis, such as reduced reflectance and major absorption peaks of hyperhydricity in the SWIR region, were validated as the main discriminating features by a trained support vector machine with a balanced accuracy of 84% on the test set, demonstrating the feasibility of spectral detection of hyperhydricity. In addition, an RGB image dataset was used for automated detection of hyperhydricity using deep neural networks. The high performance metrics, with a precision of 83.8% and a recall of 95.7% on test images, underscore the presence of a sufficient number of discriminating features for detection within the spatial RGB data; thus a second approach is proposed for the automatic detection of hyperhydricity based on RGB images. The resulting multimodal sensor datasets of the robotic phenotyping system were tested as a supporting tool for an e-learning module in higher education to increase digital skills in the fields of sensing, data processing, and data analysis, and were evaluated by means of a student survey. This proof-of-concept study revealed an overall high level of acceptance and advocacy by students, with 70% good to very good ratings.
However, with increased complexity of the learning task, students experienced excessive demands and rated the respective session lower. In summary, this study is expected to pave the way for increased use of automated sensor-based phenotyping in conjunction with machine learning in plant research and commercial micropropagation in the future.
    Bundesministerium für Ernährung und Landwirtschaft (BMEL)/Digitale Experimentierfelder/28DE103F18/E
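The support vector machine result in this abstract is reported as balanced accuracy, i.e. the mean of per-class recalls, which is robust to class imbalance between normal and hyperhydric samples. A minimal implementation of the metric (the function name is ours):

```python
def balanced_accuracy(y_true, y_pred):
    """Mean of per-class recalls.

    Unlike plain accuracy, a majority-class-only classifier scores
    1 / n_classes here, so rare hyperhydric samples are not masked
    by an abundant healthy class.
    """
    classes = set(y_true)
    recalls = []
    for c in classes:
        idx = [i for i, y in enumerate(y_true) if y == c]
        correct = sum(1 for i in idx if y_pred[i] == c)
        recalls.append(correct / len(idx))
    return sum(recalls) / len(recalls)
```

Reporting balanced accuracy alongside precision and recall, as the dissertation does for its RGB-based detector, gives a fuller picture when one class (here, hyperhydric plants) is much rarer than the other.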