409 research outputs found

    Strengthening municipal governance to tackle the drivers of child malnutrition

    Towards infield, live plant phenotyping using a reduced-parameter CNN

    There is an increase in consumption of agricultural produce as a result of the rapidly growing human population, particularly in developing nations. This has triggered high-quality plant phenotyping research to help with the breeding of high-yielding plants that can adapt to our continuously changing climate. Novel, low-cost, fully automated plant phenotyping systems, capable of infield deployment, are required to help identify quantitative plant phenotypes. The identification of quantitative plant phenotypes is a key challenge which relies heavily on the precise segmentation of plant images. Recently, the plant phenotyping community has started to use very deep convolutional neural networks (CNNs) to help tackle this fundamental problem. However, these very deep CNNs rely on millions of model parameters and generate very large weight matrices, making them difficult to deploy infield on low-cost, resource-limited devices. We explore how to compress existing very deep CNNs for plant image segmentation, thus making them easily deployable infield and on mobile devices. In particular, we focus on applying these models to the pixel-wise segmentation of plants into multiple classes including background, a challenging problem in the plant phenotyping community. We combined two approaches (separable convolutions and SVD) to reduce the number of model parameters and the weight matrices of these very deep CNN-based models. Our combined method reduced the weight matrices by up to 95% without affecting pixel-wise accuracy. These methods were evaluated on two public plant datasets and one non-plant dataset to illustrate their generality, and we have successfully tested our models on a mobile device.
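    A minimal sketch of the two compression ideas named in this abstract, written in PyTorch as an illustration rather than the authors' code: a standard convolution replaced by a depthwise-separable pair, and a dense weight matrix factorised with a truncated SVD. Layer sizes and the chosen rank are arbitrary placeholders, not the paper's architecture.

```python
# Illustrative sketch only (PyTorch); layer sizes and rank are placeholders, not the paper's model.
import torch
import torch.nn as nn

def separable_conv(in_ch, out_ch, k=3):
    """Replace a k x k convolution with a depthwise + pointwise pair,
    cutting parameters from in_ch*out_ch*k*k to in_ch*k*k + in_ch*out_ch."""
    return nn.Sequential(
        nn.Conv2d(in_ch, in_ch, k, padding=k // 2, groups=in_ch, bias=False),  # depthwise
        nn.Conv2d(in_ch, out_ch, 1, bias=False),                               # pointwise
    )

def svd_compress(linear: nn.Linear, rank: int) -> nn.Sequential:
    """Factorise a dense weight matrix W (out x in) into two thin layers
    using a truncated SVD, keeping only the top `rank` singular values."""
    W = linear.weight.data                                  # shape: (out, in)
    U, S, Vh = torch.linalg.svd(W, full_matrices=False)
    U_r, S_r, Vh_r = U[:, :rank], S[:rank], Vh[:rank, :]
    first = nn.Linear(W.shape[1], rank, bias=False)
    second = nn.Linear(rank, W.shape[0], bias=True)
    first.weight.data = Vh_r                                # (rank, in)
    second.weight.data = U_r * S_r                          # (out, rank), columns scaled by singular values
    second.bias.data = linear.bias.data.clone() if linear.bias is not None else torch.zeros(W.shape[0])
    return nn.Sequential(first, second)

# Example: compress a 512 -> 512 dense layer to rank 32 (roughly 8x fewer weights).
dense = nn.Linear(512, 512)
compressed = svd_compress(dense, rank=32)
```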

    Adapting the primary-school curriculum for multigrade classes in developing countries: a five-step plan and an agenda for change

    This paper draws on the findings from an international programme of research that has demonstrated the need for multigrade teachers in many developing countries to be given more support in adapting monograded curricula to the needs of their multigrade classes. It describes four empirical models of multigrade practice and examines the models of curriculum construction and child learning that inform them. It then presents an original five-step process that curriculum planners can use to adapt monograded curricula, taking account of the different empirical models of multigrade practice. Finally, it outlines a strategy for implementing such a process through further support for curriculum units and teacher education, so that the experimental work already started can take root and have a real impact on countries' ability to reach the Millennium Development Goals for Education by 2015.

    Automated recovery of 3D models of plant shoots from multiple colour images

    Increased adoption of the systems approach to biological research has focussed attention on the use of quantitative models of biological objects. This includes a need for realistic 3D representations of plant shoots for quantification and modelling. Previous limitations in single- or multi-view stereo algorithms have led to a reliance on volumetric methods or expensive hardware to record plant structure. We present a fully automatic approach to image-based 3D plant reconstruction that can be achieved using a single low-cost camera. The reconstructed plants are represented as a series of small planar sections that together model the more complex architecture of the leaf surfaces. The boundary of each leaf patch is refined using the level set method, optimising the model based on image information, curvature constraints and the position of neighbouring surfaces. The reconstruction process makes few assumptions about the nature of the plant material being reconstructed, and as such is applicable to a wide variety of plant species and topologies, and can be extended to canopy-scale imaging. We demonstrate the effectiveness of our approach on datasets of wheat and rice plants, as well as a novel virtual dataset that allows us to compute quantitative measures of reconstruction accuracy. The output is a 3D mesh structure that is suitable for modelling applications, in a format that can be imported into the majority of 3D graphics software packages.
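    A small worked example of the core geometric step implied by the "small planar sections" above: fitting a best-fit plane to a patch of reconstructed 3D leaf-surface points by least squares via SVD. This is a generic NumPy sketch on synthetic data, not the authors' pipeline, and the level-set boundary refinement is omitted.

```python
# Generic least-squares plane fit to a 3D point patch (NumPy); synthetic data, not the paper's pipeline.
import numpy as np

def fit_plane(points: np.ndarray):
    """Fit a plane to an (N, 3) array of points.
    Returns (centroid, unit normal); the normal is the right singular vector
    associated with the smallest singular value of the centred points."""
    centroid = points.mean(axis=0)
    centred = points - centroid
    _, _, vh = np.linalg.svd(centred, full_matrices=False)
    normal = vh[-1]                          # direction of least variance
    return centroid, normal

# Synthetic "leaf patch": points near the plane z = 0.2x - 0.1y + 1, with noise.
rng = np.random.default_rng(0)
xy = rng.uniform(-1, 1, size=(200, 2))
z = 0.2 * xy[:, 0] - 0.1 * xy[:, 1] + 1 + rng.normal(0, 0.01, 200)
patch = np.column_stack([xy, z])

c, n = fit_plane(patch)
residual = np.abs((patch - c) @ n).mean()    # mean point-to-plane distance
print(f"normal ~ {np.round(n, 3)}, mean residual ~ {residual:.4f}")
```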

    Convolutional Neural Net-Based Cassava Storage Root Counting Using Real and Synthetic Images

    Cassava roots are complex structures comprising several distinct types of root. The number and size of the storage roots are two potential phenotypic traits reflecting crop yield and quality. Counting and measuring the size of cassava storage roots are usually done manually, or semi-automatically by first segmenting cassava root images. However, occlusion of both storage and fibrous roots makes the process both time-consuming and error-prone. While Convolutional Neural Nets have shown performance above the state of the art in many image processing and analysis tasks, there are currently few Convolutional Neural Net-based methods for counting plant features. This is due to the limited availability of data, annotated by expert plant biologists, which represents all possible measurement outcomes. Existing works in this area either learn a direct image-to-count regressor model by regressing to a count value, or perform a count after segmenting the image. We, however, address the problem using a direct image-to-count prediction model. This is made possible by generating synthetic images, using a conditional Generative Adversarial Network (GAN), to provide training data for missing classes. We automatically form cassava storage root masks for any missing classes using existing ground-truth masks, and input them as a condition to our GAN model to generate synthetic root images. We combine the resulting synthetic images with real images to learn a direct image-to-count prediction model capable of counting the number of storage roots in real cassava images taken from a low-cost aeroponic growth system. These models are used to develop a system that counts cassava storage roots in real images. Our system first predicts the age group ('young' or 'old' roots; pertinent to our image capture regime) in a given image and then, based on this prediction, selects an appropriate model to predict the number of storage roots. We achieve 91% accuracy in predicting the ages of storage roots, and 86% and 71% overall percentage agreement when counting 'old' and 'young' storage roots, respectively. We thus demonstrate that synthetically generated cassava root images can be used to supplement missing root classes, turning the counting problem into a direct image-to-count prediction task.
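    A minimal sketch of a direct image-to-count regressor of the kind described above, in PyTorch. The backbone, image size and loss are placeholder choices rather than the published architecture, and neither the conditional GAN used to synthesise training images nor the age-group classifier is shown.

```python
# Illustrative direct image-to-count regressor (PyTorch); architecture and sizes are placeholders.
import torch
import torch.nn as nn

class RootCounter(nn.Module):
    """Small CNN that maps an RGB image straight to a scalar root count."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(64, 1)

    def forward(self, x):
        return self.head(self.features(x).flatten(1)).squeeze(1)

model = RootCounter()
images = torch.randn(4, 3, 128, 128)          # stand-in for real + GAN-synthesised root images
counts = torch.tensor([3.0, 5.0, 2.0, 7.0])   # stand-in ground-truth storage-root counts
loss = nn.functional.mse_loss(model(images), counts)
loss.backward()
```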

    Three Dimensional Root CT Segmentation Using Multi-Resolution Encoder-Decoder Networks

    We address the complex problem of reliably segmenting root structure from soil in X-ray Computed Tomography (CT) images. We utilise a deep learning approach and propose a state-of-the-art multi-resolution architecture based on encoder-decoders. While previous encoder-decoder work implies the use of multiple resolutions simply by downsampling and upsampling images, we make this process explicit, with separate branches of the network tasked with obtaining local high-resolution segmentation and wider low-resolution contextual information. The complete network is a memory-efficient implementation that is still able to resolve small root detail in large volumetric images. We compare against a number of different encoder-decoder based architectures from the literature, as well as a popular existing image analysis tool designed for root CT segmentation. We show qualitatively and quantitatively that the multi-resolution approach offers substantial accuracy improvements over both a small receptive field in a deep network and a larger receptive field in a shallower network. We then further improve performance using an incremental learning approach, in which failures of the original network are used to generate harder negative training examples. Our proposed method requires no user interaction, is fully automatic, and identifies both large and fine root material throughout the whole volume.
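    A schematic sketch of the multi-resolution idea described above: one branch segments a full-resolution patch for local detail, a second branch sees a downsampled, wider view for context, and the two feature maps are fused into per-pixel class logits. Channel counts and depths are placeholders, and the example is 2D for brevity while the paper works on 3D CT volumes; this is not the published network.

```python
# Schematic two-branch (high-res detail + low-res context) segmenter in PyTorch; not the published network.
import torch
import torch.nn as nn
import torch.nn.functional as F

def conv_block(in_ch, out_ch):
    return nn.Sequential(nn.Conv2d(in_ch, out_ch, 3, padding=1), nn.ReLU())

class MultiResSegmenter(nn.Module):
    def __init__(self, n_classes=2):
        super().__init__()
        self.detail = conv_block(1, 16)      # operates on the full-resolution patch
        self.context = conv_block(1, 16)     # operates on a 4x-downsampled, wider view
        self.fuse = nn.Conv2d(32, n_classes, 1)

    def forward(self, x):
        d = self.detail(x)
        ctx = self.context(F.avg_pool2d(x, 4))                  # coarse, contextual features
        ctx = F.interpolate(ctx, size=x.shape[-2:], mode="bilinear", align_corners=False)
        return self.fuse(torch.cat([d, ctx], dim=1))            # per-pixel class logits

logits = MultiResSegmenter()(torch.randn(1, 1, 128, 128))        # e.g. one greyscale CT slice patch
print(logits.shape)                                              # torch.Size([1, 2, 128, 128])
```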

    Approaches to three-dimensional reconstruction of plant shoot topology and geometry

    There are currently 805 million people classified as chronically undernourished, and yet the world's population is still increasing. At the same time, global warming is causing more frequent and severe flooding and drought, destroying crops and reducing the amount of land available for agriculture. Recent studies show that, without crop climate adaptation, crop productivity will deteriorate. With access to 3D models of real plants it is possible to acquire detailed morphological and gross developmental data that can be used to study their ecophysiology, leading to an increase in crop yield and stability across hostile and changing environments. Here we review approaches to the reconstruction of 3D models of plant shoots from image data, consider current applications in plant and crop science, and identify remaining challenges. We conclude that although phenotyping is receiving increasing attention, particularly from computer vision researchers, and numerous vision approaches have been proposed, it remains a highly interactive process. An automated system capable of producing 3D models of plants would significantly aid phenotyping practice, increasing the accuracy and repeatability of measurements.

    Recovering Wind-induced Plant motion in Dense Field Environments via Deep Learning and Multiple Object Tracking

    Understanding the relationships between local environmental conditions and plant structure and function is critical both for fundamental science and for improving the performance of crops in field settings. Wind-induced plant motion is important in most agricultural systems, yet the complexity of the field environment means that it has remained understudied. Despite the ready availability of image sequences showing plant motion, the cultivation of crop plants in dense field stands makes it difficult to detect features and characterize their general movement traits. Here, we present a robust method for characterizing motion in field-grown wheat plants (Triticum aestivum) from time-ordered sequences of red, green and blue (RGB) images. A series of image crops and augmentations was applied to a dataset of 290 collected and annotated images of ear tips to increase variation and resolution when training a convolutional neural network. This approach enables wheat ears to be detected in the field without the need for camera calibration or a fixed imaging position. Videos of wheat plants moving in the wind were also collected and split into their component frames. Ear tips were detected using the trained network, then tracked between frames using a probabilistic tracking algorithm to approximate movement. These data can be used to characterize key movement traits, such as periodicity, and to obtain more detailed static plant properties for assessing plant structure and function in the field. Automated data extraction may also make it possible to inform lodging models and breeding programmes, and to link movement properties to canopy light distributions and dynamic light fluctuations.
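    As an illustration of how a movement trait such as periodicity can be read off a tracked trajectory, the sketch below estimates the dominant sway frequency of a single ear tip from its per-frame horizontal displacement using an FFT. The signal, frame rate and detrending step are assumptions for the example; the CNN detection and probabilistic tracking stages are not shown.

```python
# Estimate the dominant sway frequency from a tracked ear-tip trajectory (NumPy); synthetic signal only.
import numpy as np

fps = 30.0                                   # assumed camera frame rate
t = np.arange(0, 10, 1 / fps)                # 10 s of tracked positions

# Synthetic horizontal displacement: ~2 Hz sway plus slow drift and noise.
x = 5.0 * np.sin(2 * np.pi * 2.0 * t) + 0.3 * t + np.random.default_rng(0).normal(0, 0.5, t.size)

x = x - np.polyval(np.polyfit(t, x, 1), t)   # remove linear drift before the FFT
spectrum = np.abs(np.fft.rfft(x))
freqs = np.fft.rfftfreq(x.size, d=1 / fps)

dominant = freqs[np.argmax(spectrum[1:]) + 1]   # skip the DC bin
print(f"dominant sway frequency ~ {dominant:.2f} Hz")
```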

    Extracting multiple interacting root systems using X-ray micro computed tomography

    Root system interaction and competition for resources is an active research area that contributes to our understanding of how roots perceive and react to environmental conditions. Recent research has shown that this complex suite of processes can now be observed in a natural environment (i.e. soil) through the use of X-ray micro Computed Tomography (µCT), which allows non-destructive analysis of plant root systems. Due to their similar X-ray attenuation coefficients and densities, the roots of different plants appear as similar greyscale intensity values in µCT image data. Unless they are manually and carefully traced, it has previously not been possible to automatically label and separate different root systems grown in the same soil environment. We present a technique, based on a visual tracking approach, which exploits knowledge of the shape of root cross-sections to automatically recover 3D descriptions of multiple, interacting root architectures growing in soil from X-ray µCT data. The method was evaluated on both simulated root data and real images of two interacting winter wheat (Triticum aestivum L. cv. Cordiale) plants grown in a single soil column, demonstrating that it is possible to automatically segment different root systems from within the same soil sample. This work supports the automatic exploration of the supportive and competitive foraging behaviour of plant root systems in natural soil environments.
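    A much-simplified sketch of the slice-to-slice tracking idea: starting from a seed point, each successive CT slice is searched in a small window around the previous centre, and the centroid of above-threshold (root-like) voxels becomes the new estimate. The real method models root cross-section shape and separates interacting systems; this toy NumPy version, with invented window and threshold parameters, only follows one root through a synthetic volume.

```python
# Toy slice-by-slice root tracker (NumPy): follows one bright root through a synthetic CT volume.
import numpy as np

def track_root(volume, seed_rc, win=6, thresh=0.5):
    """Follow a root from the top slice down, returning one (slice, row, col) per slice.
    At each slice, only a (2*win+1)^2 window around the previous centre is examined."""
    path, (r, c) = [], seed_rc
    for z in range(volume.shape[0]):
        sl = volume[z]
        r0, r1 = max(r - win, 0), min(r + win + 1, sl.shape[0])
        c0, c1 = max(c - win, 0), min(c + win + 1, sl.shape[1])
        rows, cols = np.nonzero(sl[r0:r1, c0:c1] > thresh)
        if rows.size == 0:
            break                                    # lost the root
        r, c = int(r0 + rows.mean()), int(c0 + cols.mean())
        path.append((z, r, c))
    return path

# Synthetic volume: a single root drifting diagonally through 40 slices of 64x64 voxels.
vol = np.zeros((40, 64, 64))
for z in range(40):
    vol[z, 20 + z // 4, 30 + z // 5] = 1.0

print(track_root(vol, seed_rc=(20, 30))[:3], "...")
```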