
    An interactive tool for semi-automated leaf annotation

    High-throughput plant phenotyping is emerging as a necessary step towards meeting the agricultural demands of the future. Central to its success is the development of robust computer vision algorithms that analyze images and extract phenotyping information to be associated with genotypes and environmental conditions for identifying traits suitable for further development. Obtaining quantitative data at the leaf level is important for better understanding this interaction. While certain efforts have been made to obtain such information in an automated fashion, further innovations are necessary. In this paper we present an annotation tool that can be used to semi-automatically segment leaves in images of rosette plants. This tool, designed to run both as a stand-alone application and in cloud-based environments, can be used to annotate data directly for the study of plant and leaf growth or to provide annotated datasets for learning-based approaches to extracting phenotypes from images. It relies on an interactive graph-based segmentation algorithm to propagate expert-provided priors (in the form of pixels) to the rest of the image, using the random walk formulation to find a good per-leaf segmentation. To evaluate the tool we use standardized datasets available from the LSC and LCC 2015 challenges, achieving an average leaf segmentation accuracy of almost 97% using scribbles as annotations. The tool and source code are publicly available at http://www.phenotiki.com and as a GitHub repository at https://github.com/phenotiki/LeafAnnotationTool.
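    As a rough illustration of the scribble-based workflow described above, the sketch below propagates user scribbles to a per-leaf labelling with a random-walker segmenter. It uses scikit-image's random_walker as a stand-in for the tool's own interactive graph-based implementation; the file name, scribble coordinates and beta value are illustrative assumptions only.

```python
import numpy as np
from skimage import color, io
from skimage.segmentation import random_walker

# Load a rosette image and convert to grayscale for the diffusion weights.
image = io.imread("rosette.png")       # hypothetical input image
gray = color.rgb2gray(image)

# Expert scribbles: 0 = unlabeled, 1 = background, 2..N = one label per leaf.
seeds = np.zeros(gray.shape, dtype=np.int32)
seeds[5:10, 5:10] = 1                  # background scribble (toy coordinates)
seeds[60:65, 80:85] = 2                # scribble on leaf 1
seeds[120:125, 40:45] = 3              # scribble on leaf 2

# Propagate the scribbles to every pixel via the random walk formulation; each
# pixel gets the label of the seed a random walker is most likely to reach first.
labels = random_walker(gray, seeds, beta=130, mode="bf")
```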

    Learning to Count Leaves in Rosette Plants

    Counting the number of leaves in plants is important for plant phenotyping, since it can be used to assess plant growth stages. We propose a learning-based approach for counting leaves in rosette (model) plants. We relate image-based descriptors learned in an unsupervised fashion to leaf counts using a supervised regression model. To take advantage of the circular and coplanar arrangement of leaves and also to introduce scale and rotation invariance, we learn features in a log-polar representation. Image patches extracted in this log-polar domain are provided to K-means, which builds a codebook in an unsupervised manner. Feature codes are obtained by projecting patches on the codebook using the triangle encoding, introducing both sparsity and a specifically designed representation. A global, per-plant image descriptor is obtained by pooling local features in specific regions of the image. Finally, we provide the global descriptors to a support vector regression framework to estimate the number of leaves in a plant. We evaluate our method on datasets of the Leaf Counting Challenge (LCC), containing images of Arabidopsis and tobacco plants. Experimental results show that on average we reduce the absolute counting error by 40% w.r.t. the winner of the 2014 edition of the challenge (a counting-via-segmentation method). When compared to state-of-the-art density-based approaches to counting, approximately 75% fewer counting errors are observed on Arabidopsis image data. Our findings suggest that it is possible to treat leaf counting as a regression problem, requiring as input only the total leaf count per training image.
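    A minimal sketch of the counting-by-regression pipeline summarised above is given below: patches are clustered into a K-means codebook, triangle-encoded, pooled into a per-plant descriptor, and regressed to a leaf count with support vector regression. The patch size, codebook size, pooling scheme and placeholder data are assumptions for illustration and do not reproduce the paper's exact settings (the log-polar warping is assumed to have been applied already).

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.svm import SVR

def extract_patches(img, patch=8, step=8):
    """Collect small square patches from a 2-D (already log-polar) image."""
    h, w = img.shape
    return np.array([img[r:r + patch, c:c + patch].ravel()
                     for r in range(0, h - patch + 1, step)
                     for c in range(0, w - patch + 1, step)])

def triangle_encode(patches, kmeans):
    """Triangle (soft) encoding: max(0, mean distance - distance to each centroid)."""
    d = kmeans.transform(patches)                 # distances to every centroid
    return np.maximum(0.0, d.mean(axis=1, keepdims=True) - d)

# Toy data: log-polar plant images and their known leaf counts (placeholders).
train_imgs = [np.random.rand(64, 64) for _ in range(20)]
train_counts = np.random.randint(4, 15, size=20)

# Unsupervised codebook from all training patches.
kmeans = KMeans(n_clusters=50, n_init=10).fit(
    np.vstack([extract_patches(im) for im in train_imgs]))

# Global per-plant descriptor: pool (sum) the codes over the whole image.
X = np.array([triangle_encode(extract_patches(im), kmeans).sum(axis=0)
              for im in train_imgs])

# Supervised regression from descriptor to leaf count.
model = SVR(kernel="rbf").fit(X, train_counts)
predicted_leaves = model.predict(X[:1])           # example prediction
```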

    Calmodulin Enhances Cryptochrome Binding to INAD in Drosophila Photoreceptors

    Light is the main environmental stimulus that synchronizes the endogenous timekeeping systems in most terrestrial organisms. Drosophila cryptochrome (dCRY) is a light-responsive flavoprotein that detects changes in light intensity and wavelength around dawn and dusk. We have previously shown that dCRY acts through Inactivation No Afterpotential D (INAD) in a light-dependent manner on the Signalplex, a multiprotein complex that includes visual-signaling molecules, suggesting a role for dCRY in fly vision. Here, we predict and demonstrate a novel Ca2+-dependent interaction between dCRY and calmodulin (CaM). Through yeast two-hybrid, coimmunoprecipitation (Co-IP), nuclear magnetic resonance (NMR) and calorimetric analyses, we were able to identify and characterize a CaM binding motif in the dCRY C-terminus. Similarly, we also detailed the CaM binding site of the scaffold protein INAD and demonstrated that CaM bridges dCRY and INAD to form a ternary complex in vivo. Our results suggest a process whereby a rapid dCRY light response stimulates an interaction with INAD, which can be further consolidated by a novel mechanism regulated by CaM.

    Machine Learning for Plant Phenotyping Needs Image Processing

    We found the article by Singh et al. [1] extremely interesting because it introduces and showcases the utility of machine learning for high-throughput data-driven plant phenotyping. With this letter we aim to emphasize the role that image analysis and processing have in the phenotyping pipeline beyond what is suggested in [1], both in analyzing phenotyping data (e.g., to measure growth) and in providing effective feature extraction to be used by machine learning. Key recent reviews have shown that it is image analysis itself (what the authors of [1] consider as part of pre-processing) that has brought a renaissance in phenotyping [2].

    Image analysis: The new bottleneck in plant phenotyping [applications corner]

    Plant phenotyping is the identification of effects on the phenotype (i.e., the plant appearance and performance) as a result of genotype differences (i.e., differences in the genetic code) and the environmental conditions to which a plant has been exposed [1]–[3]. According to the Food and Agriculture Organization of the United Nations, large-scale experiments in plant phenotyping are a key factor in meeting the agricultural needs of the future to feed the world and provide biomass for energy, while using less water, land, and fertilizer under a constantly evolving environment due to climate change. Working on model plants (such as Arabidopsis), combined with remarkable advances in genotyping, has revolutionized our understanding of biology but has accelerated the need for precision and automation in phenotyping, favoring approaches that provide quantifiable phenotypic information that can be better linked to, and associated with, the genotype [4]. While early on the collection of phenotypes was manual, noninvasive, imaging-based methods are now increasingly being utilized [5], [6]. However, the rate at which phenotypes are extracted in the field or in the lab is not matching the speed of genotyping and is creating a bottleneck.

    Unsupervised and supervised approaches to color space transformation for image coding

    The linear transformation of input (typically RGB) data into a suitable color space is an important step in image compression. Most schemes adopt fixed transforms to decorrelate the color channels. Energy-compaction transforms such as the Karhunen-Loève transform (KLT), however, entail a complexity increase. Here, we propose a new data-dependent transform (aKLT) that achieves compression performance comparable to the KLT at a fraction of the computational complexity. More importantly, we also consider an application-aware setting, in which a classifier analyzes reconstructed images at the receiver's end. In this context, KLT-based approaches may not be optimal, and transforms that maximize post-compression classifier performance are more suited. Relaxing energy-compaction constraints, we propose for the first time a transform that can be found offline by optimizing the Fisher discrimination criterion in a supervised fashion. In lieu of channel decorrelation, we obtain spatial decorrelation by using the same color transform as a rudimentary classifier to detect objects of interest in the input image without adding any computational cost. When combined with region-of-interest-capable encoders, such as JPEG 2000, we achieve higher savings by encoding these regions at a higher quality.
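    For readers unfamiliar with data-dependent color transforms, the sketch below shows the generic KLT idea underlying the discussion: estimate a 3x3 decorrelating transform from the image's own RGB covariance. This is a textbook KLT illustration, not the proposed aKLT or the supervised Fisher-criterion transform.

```python
import numpy as np

def klt_color_transform(rgb):
    """Return decorrelated channels, the 3x3 transform and the channel mean."""
    pixels = rgb.reshape(-1, 3).astype(np.float64)
    mean = pixels.mean(axis=0)
    centered = pixels - mean
    cov = np.cov(centered, rowvar=False)          # 3x3 channel covariance
    eigvals, eigvecs = np.linalg.eigh(cov)        # eigenvalues in ascending order
    T = eigvecs[:, ::-1].T                        # rows sorted by decreasing energy
    transformed = centered @ T.T                  # decorrelated color channels
    return transformed.reshape(rgb.shape), T, mean

# Example on a random image; real use would pass the frame to be encoded.
img = (np.random.rand(32, 32, 3) * 255).astype(np.uint8)
channels, T, mean = klt_color_transform(img)
```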

    The significance of image compression in plant phenotyping applications

    We are currently witnessing increasingly higher throughput in image-based plant phenotyping experiments. The majority of imaging data are collected using complex automated procedures and are then post-processed to extract phenotyping-related information. In this article, we show that the image compression used in such procedures may compromise phenotyping results, and this needs to be taken into account. We use three illuminating proof-of-concept experiments that demonstrate that compression (especially in the most common lossy JPEG form) affects measurements of plant traits and that the errors introduced can be high. We also systematically explore how compression affects measurement fidelity, quantified as effects on image quality as well as errors in extracted plant visual traits. To do so, we evaluate a variety of image-based phenotyping scenarios, including size and colour of shoots, leaf growth, and root growth. To show that even visual impressions can be used to assess compression effects, we use root system images as examples. Overall, we find that compression has a considerable effect on several types of analyses (whether visual or quantitative) and that proper care is necessary to ensure that this choice does not affect biological findings. In order to avoid, or at least minimise, introduced measurement errors, we derive recommendations for each scenario and provide guidelines on how to identify suitable compression options in practice. We also find that certain compression choices can offer beneficial returns in terms of reducing the amount of data storage without compromising phenotyping results. This may enable even higher throughput experiments in the future.
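    The sketch below is a hedged proof-of-concept in the spirit of the experiments above: it tracks how a simple shoot-size proxy (a count of green-dominant pixels) drifts as JPEG quality is lowered. The input file name, the greenness threshold and the quality levels are illustrative assumptions, not the article's actual protocol.

```python
import io
import numpy as np
from PIL import Image

def green_pixel_count(img):
    """Crude plant-area proxy: pixels where green clearly dominates red and blue."""
    arr = np.asarray(img.convert("RGB")).astype(np.int32)
    r, g, b = arr[..., 0], arr[..., 1], arr[..., 2]
    return int(np.sum((g > r + 10) & (g > b + 10)))

# Hypothetical uncompressed top-view image of a rosette plant.
original = Image.open("plant_topview.png").convert("RGB")
reference = green_pixel_count(original)

for quality in (95, 75, 50, 25):
    buf = io.BytesIO()
    original.save(buf, format="JPEG", quality=quality)   # lossy re-encode in memory
    buf.seek(0)
    measured = green_pixel_count(Image.open(buf))
    error = 100.0 * abs(measured - reference) / max(reference, 1)
    print(f"JPEG q={quality}: plant-area error {error:.2f}%")
```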

    Leaf segmentation in plant phenotyping: a collation study

    Image-based plant phenotyping is a growing application area of computer vision in agriculture. A key task is the segmentation of all individual leaves in images. Here we focus on the most common rosette model plants, Arabidopsis and young tobacco. Although leaves do share appearance and shape characteristics, the presence of occlusions and variability in leaf shape and pose, as well as in imaging conditions, render this problem challenging. The aim of this paper is to compare several leaf segmentation solutions on a unique and first-of-its-kind dataset containing images from typical phenotyping experiments. In particular, we report and discuss methods and findings of a collection of submissions for the first Leaf Segmentation Challenge of the Computer Vision Problems in Plant Phenotyping workshop in 2014. Four methods are presented: three segment leaves by processing the distance transform in an unsupervised fashion, while the fourth uses optimal template selection and Chamfer matching. Overall, we find that although separating plant from background can be accomplished with satisfactory accuracy (>90% Dice score), individual leaf segmentation and counting remain challenging when leaves overlap. Additionally, accuracy is lower for younger leaves. We also find that variability in the datasets does affect outcomes. Our findings motivate further investigation and the development of specialized algorithms for this particular application, and suggest that challenges of this form are ideally suited for advancing the state of the art. Data are publicly available online at http://www.plant-phenotyping.org/datasets to support future challenges beyond segmentation within this application domain.
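    To illustrate the two ingredients mentioned above, the sketch below splits a binary plant mask into leaf candidates by running a watershed on its distance transform (the strategy shared by three of the submissions) and computes the Dice score used for the plant/background evaluation. The peak-detection footprint and other parameters are assumptions, not any submission's actual settings.

```python
import numpy as np
from scipy import ndimage as ndi
from skimage.feature import peak_local_max
from skimage.segmentation import watershed

def split_leaves(plant_mask):
    """Label leaf candidates inside a binary foreground mask via its distance transform."""
    dist = ndi.distance_transform_edt(plant_mask)
    # Local maxima of the distance map serve as one marker per putative leaf.
    peaks = peak_local_max(dist, footprint=np.ones((15, 15)),
                           labels=plant_mask.astype(np.int32))
    markers = np.zeros(dist.shape, dtype=np.int32)
    markers[tuple(peaks.T)] = np.arange(1, len(peaks) + 1)
    return watershed(-dist, markers, mask=plant_mask)

def dice(a, b):
    """Dice score between two binary masks (plant/background accuracy)."""
    a, b = a.astype(bool), b.astype(bool)
    return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())
```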

    An observational study in psychiatric acute patients admitted to General Hospital Psychiatric Wards in Italy

    OBJECTIVES: This Italian observational study was aimed at collecting data on psychiatric patients with acute episodes entering General Hospital Psychiatric Wards (GHPWs). Information was focused on diagnosis (DSM-IV), reasons for hospitalisation, prescribed treatment, outcome of aggressive episodes, and evolution of the acute episode. METHODS: Assessments were performed at admission and discharge. The psychometric scales used were the Brief Psychiatric Rating Scale (BPRS), the Modified Overt Aggression Scale (MOAS) and the Nurses' Observation Scale for Inpatient Evaluation (NOSIE-30). RESULTS: 864 adult patients were enrolled in 15 GHPWs; 728 (320 M; mean age 43.6 yrs) completed both admission and discharge visits. A severe psychotic episode with (19.1%) or without (47.7%) aggressive behaviour was the main reason for admission. Schizophrenia (42.8% at admission and 40.1% at discharge) and depression (12.9% at admission and 14.7% at discharge) were the predominant diagnoses. The mean hospital stay was 12 days. The mean (± SD) total MOAS score at admission, day 7 and discharge was, respectively, 2.53 ± 5.1, 0.38 ± 2.2, and 0.21 ± 1.5. Forty-four (6.0%) patients had episodes of aggressiveness at admission and 8 (1.7%) at day 7. A progressive improvement in each domain/item vs. admission was observed for the MOAS and BPRS, while the NOSIE-30 did not change from day 4 onwards. The number of patients with at least one psychotropic drug taken at admission, in the first 7 days of hospitalisation, and prescribed at discharge was, respectively, 472 (64.8%), 686 (94.2%) and 676 (92.9%). The most frequently used psychotropic drug classes were, respectively: BDZs (60.6%, 85.7%, 69.5%), typical antipsychotics (48.3%, 57.0%, 49.6%), atypical antipsychotics (35.6%, 41.8%, 39.8%) and antidepressants (40.9%, 48.8%, 43.2%). Rates of patients with one, two or more than two psychotropic drugs taken at admission and at day 7, and prescribed at discharge, were, respectively: 24.8%, 8.2% and 13.5% in monotherapy; 22.0%, 20.6% and 26.6% with two drugs; and 53.2%, 57.8% and 59.0% with more than two drugs. Benzodiazepines were the most common drugs both at admission (60.0%) and during hospitalisation (85.7%), and were prescribed at discharge in 69.5% of patients. CONCLUSION: Patients with psychiatric diseases in the acute phase experienced a satisfactory outcome following intensified therapeutic interventions during hospitalisation.

    Large-scale analysis of neuroimaging data on commercial clouds with content-aware resource allocation strategies

    The combined use of mice that have genetic mutations (transgenic mouse models) of human pathology and advanced neuroimaging methods (such as magnetic resonance imaging) has the potential to radically change how we approach disease understanding, diagnosis and treatment. Morphological changes occurring in the brain of transgenic animals as a result of the interaction between environment and genotype can be assessed using advanced image analysis methods, an effort described as ‘mouse brain phenotyping’. However, the computational methods involved in the analysis of high-resolution brain images are demanding. While running such analysis on local clusters is possible, not all users have access to such infrastructure, and even for those who do, additional computational capacity can be beneficial (e.g., to meet sudden high-throughput demands). In this paper we use a commercial cloud platform for brain neuroimaging and analysis. We achieve a registration-based multi-atlas, multi-template anatomical segmentation, normally a lengthy effort, within a few hours. Naturally, performing such analyses on the cloud entails a monetary cost, and it is worthwhile identifying strategies that allocate resources intelligently. In our context, a critical aspect is identifying how long each job will take. We propose a method that estimates the complexity of an image-processing task, a registration, using statistical moments and shape descriptors of the image content. We use this information to learn and predict the completion time of a registration. The proposed approach is easy to deploy and could serve as an alternative for laboratories that may require instant access to large high-performance-computing infrastructures. To facilitate adoption by the community, we publicly release the source code.
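    A minimal sketch of the content-aware allocation idea is shown below: each image is described by cheap statistical moments plus a crude foreground-shape proxy, and a regressor is trained on previously logged runtimes to predict how long a new registration job will take. The feature set and the random-forest regressor are illustrative assumptions, not the paper's exact model.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

def moment_features(volume):
    """Cheap content descriptors: intensity moments and a foreground fraction."""
    v = volume.astype(np.float64).ravel()
    fg = v[v > v.mean()]                              # crude foreground split
    return np.array([v.mean(), v.std(),
                     ((v - v.mean()) ** 3).mean(),    # third moment (skew-like)
                     ((v - v.mean()) ** 4).mean(),    # fourth moment (kurtosis-like)
                     fg.size / v.size])               # foreground fraction

# Toy training set: volumes paired with previously logged registration times.
volumes = [np.random.rand(32, 32, 32) for _ in range(50)]   # placeholder data
runtimes = np.random.uniform(10, 120, size=50)              # minutes, placeholder

X = np.array([moment_features(v) for v in volumes])
predictor = RandomForestRegressor(n_estimators=100).fit(X, runtimes)

# Predict the completion time of a new job before allocating cloud resources.
estimated_minutes = predictor.predict(X[:1])
```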