
    Crop conditional Convolutional Neural Networks for massive multi-crop plant disease classification over cell phone acquired images taken on real field conditions

    Convolutional Neural Networks (CNNs) have demonstrated their capabilities in the agronomical field, especially for the assessment of plant visual symptoms. As these models grow both in the number of training images and in the number of supported crops and diseases, a dichotomy arises between (1) generating smaller models for each specific crop, or (2) generating a single multi-crop model for a much more complex task (especially at early disease stages) that nevertheless benefits from the variability of the entire multi-crop image dataset to enrich the learning of image feature descriptions. In this work we first introduce a challenging dataset of more than one hundred thousand images taken by cell phone in real field conditions. This dataset contains almost equally distributed disease stages of seventeen diseases and five crops (wheat, barley, corn, rice and rapeseed), where several diseases can be present in the same picture. When applying existing state-of-the-art deep neural network methods to validate the two hypothesised approaches, we obtained a balanced accuracy of BAC = 0.92 with the smaller crop-specific models and BAC = 0.93 with a single multi-crop model. We then propose three different CNN architectures that incorporate contextual non-image meta-data, such as crop information, into an image-based Convolutional Neural Network. This combines the advantages of learning from the entire multi-crop dataset while reducing the complexity of the disease classification task. The crop-conditional plant disease classification network that incorporates the contextual information by concatenation at the embedding-vector level obtains a balanced accuracy of 0.98, improving on all previous methods and removing 71% of the misclassifications of the former methods.
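    As an illustration of the embedding-level concatenation described above, the following sketch (not the authors' code; the backbone, layer sizes and class counts are assumptions) shows how a crop one-hot vector can be injected into a CNN image classifier in PyTorch:

```python
# Minimal sketch of a crop-conditional classifier: a CNN image embedding is
# concatenated with a one-hot crop vector before the disease classification
# head. Backbone choice and layer sizes are illustrative assumptions.
import torch
import torch.nn as nn
import torchvision.models as models

class CropConditionalNet(nn.Module):
    def __init__(self, num_crops=5, num_diseases=17, embed_dim=512):
        super().__init__()
        backbone = models.resnet18(weights=None)   # any CNN backbone works
        backbone.fc = nn.Identity()                # keep the 512-d embedding
        self.backbone = backbone
        self.head = nn.Sequential(
            nn.Linear(embed_dim + num_crops, 256),
            nn.ReLU(),
            nn.Linear(256, num_diseases),
        )

    def forward(self, image, crop_onehot):
        z = self.backbone(image)                   # image embedding
        z = torch.cat([z, crop_onehot], dim=1)     # inject crop meta-data
        return self.head(z)                        # disease logits

# usage sketch: logits = CropConditionalNet()(images, crop_onehot)
```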

    Mitosis counting in histological images using convolutional neural networks

    The final diagnosis of cancer is made by pathologists through the analysis of histological images. One of the most important markers for its prognosis and early detection is the so-called proliferation grade, which is estimated by counting mitotic figures in histological images stained with hematoxylin and eosin. Pathologists perform this mitosis counting manually. The process is costly and subjective, and discrepancies between experts exist. In recent years, the growing availability of slide-scanning microscopes has enabled the digitization of histological samples and their subsequent processing. This work presents a method for the automatic counting of mitoses in histological images. The method comprises two phases: 1) selection of candidate mitosis regions based on conventional image processing techniques; 2) classification using Convolutional Neural Networks and Deep Learning techniques. The method has been validated on a database of 656 cases, obtaining a sensitivity of 0.617 and an F1 score of 0.541, in line with the state of the art.
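    A minimal sketch of the two-phase pipeline described above, assuming a generic threshold-based candidate detector and a hypothetical `cnn_score` callable standing in for the trained classifier:

```python
# Illustrative two-stage pipeline (not the paper's exact implementation):
# 1) candidate mitosis regions from conventional image processing,
# 2) a CNN classifier scoring each candidate patch (trained elsewhere;
#    `cnn_score` is a hypothetical callable returning a probability).
from skimage.color import rgb2gray
from skimage.filters import threshold_otsu
from skimage.measure import label, regionprops

def candidate_regions(rgb_image, min_area=30):
    """Dark, nucleus-like blobs as mitosis candidates."""
    gray = rgb2gray(rgb_image)
    mask = gray < threshold_otsu(gray)          # dark structures in H&E
    labelled = label(mask)
    return [r for r in regionprops(labelled) if r.area >= min_area]

def count_mitoses(rgb_image, cnn_score, threshold=0.5, patch=64):
    count = 0
    for region in candidate_regions(rgb_image):
        cy, cx = map(int, region.centroid)
        y0, x0 = max(cy - patch // 2, 0), max(cx - patch // 2, 0)
        crop = rgb_image[y0:y0 + patch, x0:x0 + patch]
        if cnn_score(crop) >= threshold:        # stage 2: CNN decision
            count += 1
    return count
```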

    Few Shot Learning in Histopathological Images: Reducing the Need of Labeled Data on Biological Datasets

    Although deep learning pathology diagnostic algorithms are achieving results comparable to human experts in a wide variety of tasks, they still require a huge amount of well-annotated data for training. Generating such extensive and well-labelled datasets is time consuming and not feasible for certain tasks, so most of the available medical datasets are scarce in images and therefore insufficient for training. In this work we validate that few-shot learning techniques can transfer knowledge from a well-defined source domain of Colon tissue into a more generic domain composed of Colon, Lung and Breast tissue using very few training images. Our results show that our few-shot approach is able to obtain a balanced accuracy (BAC) of 90% with just 60 training images, even for the Lung and Breast tissues that were not present in the training set. This outperforms the fine-tuning transfer learning approach, which obtains 73% BAC with 60 images and requires 600 images to reach 81% BAC. This study has received funding from the European Union’s Horizon 2020 research and innovation programme under grant agreement No. 732111 (PICCOLO project).
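    For illustration, a nearest-prototype classifier over precomputed embeddings is one common way to exploit a handful of labelled support images; the sketch below is a generic few-shot baseline in that spirit, not necessarily the exact method used in the paper:

```python
# Generic nearest-prototype few-shot classification over embeddings produced
# by a frozen feature extractor. This is an illustrative baseline, not the
# paper's specific few-shot algorithm.
import numpy as np

def class_prototypes(support_embeddings, support_labels):
    """Mean embedding per class from the few labelled support images."""
    classes = np.unique(support_labels)
    prototypes = np.stack(
        [support_embeddings[support_labels == c].mean(axis=0) for c in classes]
    )
    return classes, prototypes

def predict(query_embeddings, classes, prototypes):
    """Assign each query image to the nearest class prototype (Euclidean)."""
    d = np.linalg.norm(query_embeddings[:, None, :] - prototypes[None], axis=-1)
    return classes[d.argmin(axis=1)]
```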

    Magnetic field-based arc stability sensor for electric arc furnaces

    Over the last decades, the strategy for defining the optimal electrical operating parameters of Electric Arc Furnaces (EAFs) has been constantly evolving. Foaming slag practice is currently used to allow high power factors that ensure higher energy efficiency. However, this performance depends on strict control of electric arc stability. Such control strategies are normally defined for alternating current furnaces (AC EAFs) and are based on intrusive and highly expensive systems. In this work we analyze the variation of the magnetic field vector around a direct current EAF (DC EAF) and its relationship with arc stability. We propose an inexpensive stability control system with no installation or integration requirements that is therefore easily implementable in both AC and DC EAFs. To this end we have built a non-intrusive, low-cost 3-axis Hall-effect sensor that can be mounted next to the furnace’s electrical bars. The sensor acquires the magnitude and orientation of the magnetic field, from which a newly defined arc stability factor metric is derived. The proposed Arc Stability Index has been compared with three alternative, well-established and more expensive measurement methodologies, obtaining similar results. The proposed index serves as a closed-loop signal for the electrical regulation to control the arc voltage, ensuring the most convenient arc length and preventing instabilities. The new system was developed and industrially validated at two different DC EAFs at ArcelorMittal, demonstrating an improvement of 6.7 kWh per ton of liquid steel during the evaluated period and a time reduction of 1.1 min per heat over the current standard procedure. Additional validation tests were also carried out at an ArcelorMittal AC EAF, proving the capability of this technology for both AC and DC furnaces. Partial financial support of this work by the Basque Government (Hazitek AURRERAB ZE-2017/00009 and FASIN ZE-2016/0016 projects) is gratefully acknowledged.
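    As a rough illustration of how a stability indicator can be derived from a 3-axis Hall-sensor stream, the sketch below computes the coefficient of variation of the field magnitude per time window; the published Arc Stability Index definition is not reproduced here, and this stand-in metric is an assumption:

```python
# Illustrative arc-stability indicator from 3-axis Hall-sensor samples.
# A steadier magnetic field magnitude is taken as a proxy for a more stable
# arc; the coefficient of variation per window is a stand-in metric, not the
# paper's Arc Stability Index.
import numpy as np

def field_magnitude(bx, by, bz):
    return np.sqrt(np.asarray(bx)**2 + np.asarray(by)**2 + np.asarray(bz)**2)

def stability_index(bx, by, bz, window=512):
    """Lower values = steadier field = more stable arc (assumed convention)."""
    b = field_magnitude(bx, by, bz)
    n = len(b) // window
    segments = b[:n * window].reshape(n, window)
    return segments.std(axis=1) / segments.mean(axis=1)   # CV per window
```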

    Biologically-inspired data decorrelation for hyperspectral imaging

    Hyperspectral data allows the construction of more robust statistical models of material properties than the standard tri-chromatic color representation. However, because of the large dimensionality and complexity of hyperspectral data, the extraction of robust features (image descriptors) is not a trivial issue. Thus, to facilitate efficient feature extraction, decorrelation techniques are commonly applied to reduce the dimensionality of the hyperspectral data with the aim of generating compact and highly discriminative image descriptors. Current methodologies for data decorrelation, such as principal component analysis (PCA), linear discriminant analysis (LDA), wavelet decomposition (WD) or band selection methods, require complex and subjective training procedures, and in addition the compressed spectral information is not directly related to the physical (spectral) characteristics associated with the analyzed materials. The major objective of this article is to introduce and evaluate a new data decorrelation methodology using an approach that closely emulates human vision. The proposed data decorrelation scheme has been employed to optimally minimize the amount of redundant information contained in the highly correlated hyperspectral bands and has been comprehensively evaluated in the context of non-ferrous material classification.
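    For context, the sketch below shows the conventional PCA baseline that the abstract contrasts against, applied to a hyperspectral cube of shape (H, W, B); it is not the proposed biologically-inspired scheme:

```python
# Conventional PCA band decorrelation of a hyperspectral cube, shown only as
# the baseline technique the abstract refers to (not the proposed method).
import numpy as np
from sklearn.decomposition import PCA

def decorrelate_cube(cube, n_components=10):
    """cube: array of shape (H, W, B) with B spectral bands."""
    h, w, b = cube.shape
    flat = cube.reshape(-1, b)                     # one spectrum per pixel
    reduced = PCA(n_components=n_components).fit_transform(flat)
    return reduced.reshape(h, w, n_components)     # compact descriptor cube
```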

    A Probabilistic Model and Capturing Device for Remote Simultaneous Estimation of Spectral Emissivity and Temperature of Hot Emissive Materials

    Estimating the temperature of hot emissive samples (e.g. liquid slag) in harsh industrial environments such as steelmaking plants is a crucial yet challenging task, which is typically addressed by means of methods that require physical contact. Current remote methods require information on the emissivity of the sample. However, the spectral emissivity depends on the sample composition and on the temperature itself, and it is hardly measurable except under controlled laboratory procedures. In this work, we present a portable device and an associated probabilistic model that can simultaneously produce quasi-real-time estimates of the temperature and spectral emissivity of hot samples in the [0.2, 12.0] μm range at distances of up to 20 m. The model is robust against variable atmospheric conditions, and the device comes with a quick calibration procedure that allows for in-field deployment in rough industrial environments, thus enabling in-line measurements. We validate the temperature and emissivity estimates of our device against laboratory equipment under controlled conditions in the [550, 850] °C temperature range for two solid samples with well-characterized spectral emissivities: alumina (α-Al2O3) and hexagonal boron nitride (h-BN). The analysis of the results yields root mean squared errors of 32.3 °C and 5.7 °C, respectively, and well-correlated spectral emissivities. This work was supported in part by the Basque Government (Hazitek AURRERA B: Advanced and Useful REdesign of CSP process for new steel gRAdes) under Grant ZE-2017/00009.
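    The underlying radiometric relation is that the measured spectral radiance is the Planck blackbody radiance scaled by the spectral emissivity. The sketch below fits a simple gray-body (constant emissivity) model by least squares; the paper's probabilistic model is richer and also accounts for atmospheric effects:

```python
# Radiometric relation: L(lambda) ~= eps(lambda) * B(lambda, T), with B the
# Planck blackbody radiance. This sketch fits a single gray-body emissivity
# and a temperature by least squares; it is a simplified stand-in for the
# paper's probabilistic model.
import numpy as np
from scipy.optimize import curve_fit

H, C, KB = 6.626e-34, 2.998e8, 1.381e-23   # Planck, speed of light, Boltzmann

def planck(wavelength_m, temperature_k):
    """Blackbody spectral radiance, W / (m^2 sr m)."""
    x = H * C / (wavelength_m * KB * temperature_k)
    return (2 * H * C**2) / (wavelength_m**5 * np.expm1(x))

def graybody(wavelength_m, emissivity, temperature_k):
    return emissivity * planck(wavelength_m, temperature_k)

def fit_emissivity_temperature(wavelengths_m, measured_radiance):
    popt, _ = curve_fit(graybody, wavelengths_m, measured_radiance,
                        p0=[0.9, 1000.0],
                        bounds=([0.0, 300.0], [1.0, 3000.0]))
    return popt  # (emissivity, temperature in kelvin)
```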

    Deep convolutional neural networks for mobile capture device-based crop disease classification in the wild

    Fungal infection represents up to 50% of yield losses, making it necessary to apply effective and cost-efficient fungicide treatments, whose efficacy depends on the infestation type, situation and time. In these cases, a correct and early identification of the specific infection is mandatory to minimize yield losses and increase the efficacy and efficiency of the treatments. Over the last years, a number of image analysis-based methodologies have been proposed for automatic image-based disease identification. Among these methods, the use of Deep Convolutional Neural Networks (CNNs) has proven tremendously successful for different visual classification tasks. In this work we extend the previous work by Johannes et al. (2017) with an adapted Deep Residual Neural Network-based algorithm to deal with the detection of multiple plant diseases in real acquisition conditions, where different adaptations for early disease detection are proposed. This work analyses the performance of early identification of three relevant European endemic wheat diseases: Septoria (Septoria tritici), Tan Spot (Drechslera tritici-repentis) and Rust (Puccinia striiformis & Puccinia recondita).
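    As a generic illustration (not the authors' exact architecture), a residual network can be adapted to multi-disease recognition by replacing its classification head and training with a multi-label objective; the backbone and class count below are assumptions:

```python
# Generic sketch of adapting a residual network for multi-disease recognition
# (multi-label, since more than one disease can appear in the same image).
# Backbone choice and the number of classes are illustrative assumptions.
import torch.nn as nn
import torchvision.models as models

def build_multilabel_resnet(num_diseases=3):
    model = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)
    model.fc = nn.Linear(model.fc.in_features, num_diseases)
    return model

criterion = nn.BCEWithLogitsLoss()   # one independent sigmoid per disease

# training step sketch:
# logits = model(images); loss = criterion(logits, disease_targets.float())
```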

    Deep learning-based segmentation of multiple species of weeds and corn crop using synthetic and real image datasets

    Weeds compete with productive crops for soil, nutrients and sunlight and are therefore a major contributor to crop yield loss, which is why safer and more effective herbicide products are continually being developed. Digital evaluation tools that automate and homogenize field measurements are of vital importance to accelerate their development. However, the development of these tools requires the generation of semantic segmentation datasets, which is a complex, time-consuming and not easily affordable task. In this paper, we present a deep learning segmentation model that is able to distinguish between different plant species at the pixel level. First, we have generated three extensive datasets targeting one crop species (Zea mays), three grass species (Setaria verticillata, Digitaria sanguinalis, Echinochloa crus-galli) and three broadleaf species (Abutilon theophrasti, Chenopodium album, Amaranthus retroflexus). The first dataset consists of real field images that were manually annotated. The second dataset is composed of images of plots where only one species is present at a time, and the third dataset was synthetically generated from images of individual plants mimicking the distribution of real field images. Second, we have proposed a semantic segmentation architecture that extends a PSPNet architecture with an auxiliary classification loss to aid model convergence. Our results show that network performance increases when the real field image dataset is supplemented with the other types of datasets, without increasing the manual annotation effort. More specifically, using the real field dataset alone obtains a Dice-Sørensen Coefficient (DSC) score of 25.32. This performance increases when this dataset is combined with the single-species dataset (DSC = 47.97) or the synthetic dataset (DSC = 45.20). As for the proposed model, an ablation study shows that removing the proposed auxiliary classification loss decreases segmentation performance (DSC = 45.96) compared to the proposed architecture (DSC = 47.97). The proposed method outperforms the current state of the art. In addition, the use of the proposed single-species or synthetic datasets can double the performance of the algorithm compared to using real datasets alone, without additional manual annotation effort. We would like to thank BASF technicians Rainer Oberst, Gerd Kraemer, Hikal Gad, Javier Romero and Juan Manuel Contreras, as well as Amaia Ortiz-Barredo from Neiker, for their support in the design of the experiments and the generation of the datasets used in this work. This work was partially supported by the Basque Government through the ELKARTEK project BASQNET (ref K-2021/00014).
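    A sketch of the kind of training objective the abstract describes: a per-pixel segmentation loss plus an auxiliary image-level classification loss. The multi-label formulation of the auxiliary term and the weight value are assumptions:

```python
# Combined objective: main per-pixel segmentation loss plus an auxiliary
# image-level classification loss indicating which species are present.
# The auxiliary multi-label formulation and `aux_weight` are assumptions.
import torch.nn as nn

seg_loss_fn = nn.CrossEntropyLoss()        # per-pixel species labels
aux_loss_fn = nn.BCEWithLogitsLoss()       # image-level "species present" targets

def combined_loss(seg_logits, seg_targets, aux_logits, aux_targets,
                  aux_weight=0.4):
    """seg_logits: (N, C, H, W); seg_targets: (N, H, W);
    aux_logits / aux_targets: (N, C) presence indicators."""
    return seg_loss_fn(seg_logits, seg_targets) + \
           aux_weight * aux_loss_fn(aux_logits, aux_targets.float())
```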

    Automatic plant disease diagnosis using mobile capture devices, applied on a wheat use case

    Disease diagnosis based on the detection of early symptoms is a usual threshold taken into account in integrated pest management strategies. Early phytosanitary treatment minimizes yield losses and increases the efficacy and efficiency of the treatments. However, the appearance of new diseases associated with new resistant crop variants complicates their early identification, delaying the application of the appropriate corrective actions. Image-based automated identification systems can bring early disease detection to farmers and technicians, but they perform poorly under real field conditions using mobile devices. A novel image processing algorithm based on candidate hot-spot detection in combination with statistical inference methods is proposed to tackle disease identification in wild conditions. This work analyses the performance of early identification of three European endemic wheat diseases: septoria, rust and tan spot. The analysis was done using 7 mobile devices and more than 3500 images captured in two pilot sites in Spain and Germany during 2014, 2015 and 2016. The obtained results reveal AUC (Area under the Receiver Operating Characteristic (ROC) Curve) values higher than 0.80 for all the analyzed diseases in the pilot tests under real conditions.
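    The reported figure of merit is the area under the ROC curve; for reference, such a per-disease AUC is typically computed from image-level disease scores as in the sketch below (data and variable names are placeholders):

```python
# Per-disease AUC from image-level ground truth and algorithm scores.
# Variable names and data are placeholders for illustration only.
from sklearn.metrics import roc_auc_score

def disease_auc(ground_truth, predicted_scores):
    """ground_truth: 1 if the disease is present in the image, else 0.
    predicted_scores: the algorithm's confidence for that disease."""
    return roc_auc_score(ground_truth, predicted_scores)

# e.g. auc_septoria = disease_auc(y_true_septoria, scores_septoria)
```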