16,307 research outputs found

    Designing a fruit identification algorithm in orchard conditions to develop robots using video processing and majority voting based on hybrid artificial neural network

    Identifying fruit on trees is the first step in developing orchard robots for purposes such as fruit harvesting and site-specific spraying. Because of the natural conditions of fruit orchards and the variety of objects throughout them, imposing controlled conditions is very difficult. As a result, these operations should be performed under natural conditions of both lighting and background. Because the other operations of an orchard robot depend on the fruit identification stage, this step must be performed precisely. Therefore, the purpose of this paper was to design an identification algorithm for orchard conditions using a combination of video processing and majority voting based on different hybrid artificial neural networks. The steps in designing this algorithm were: (1) recording video of different plum orchards at different light intensities; (2) converting the recorded videos into their constituent frames; (3) extracting different color properties from the pixels; (4) selecting effective features from the extracted color properties using a hybrid artificial neural network-harmony search (ANN-HS); and (5) classification using majority voting based on three classifiers: artificial neural network-bees algorithm (ANN-BA), artificial neural network-biogeography-based optimization (ANN-BBO), and artificial neural network-firefly algorithm (ANN-FA). The most effective features selected by the hybrid ANN-HS were the third channel of the hue saturation lightness (HSL) color space, the second channel of the lightness chroma hue (LCH) color space, the first channel of the L*a*b* color space, and the first channel of the hue saturation intensity (HSI) color space. The results showed that the accuracy of the majority voting method was 98.01% in the best execution and 97.20% over 500 executions.
    Based on different performance evaluation criteria of the classifiers, it was found that the majority voting method had higher performance. This work was supported by the European Union (EU) under the Erasmus+ project "Fostering Internationalization in Agricultural Engineering in Iran and Russia" [FARmER], grant number 585596-EPP-1-2017-1-DE-EPPKA2-CBHE-JP.
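The majority-voting step (5) can be sketched in a few lines. This is a minimal illustration, not the paper's implementation; the per-sample classifier outputs and class labels below are hypothetical:

```python
# Minimal sketch of majority voting over three classifiers (standing in for
# ANN-BA, ANN-BBO, and ANN-FA). Labels and predictions are hypothetical.
from collections import Counter

def majority_vote(predictions):
    """Return the most frequent label among the classifiers' predictions."""
    return Counter(predictions).most_common(1)[0][0]

# Hypothetical outputs of the three classifiers for two samples:
sample_preds = [["fruit", "fruit", "background"],
                ["background", "fruit", "background"]]
labels = [majority_vote(p) for p in sample_preds]
print(labels)  # ['fruit', 'background']
```

With three voters a strict majority always exists for a two-class problem, so no tie-breaking rule is needed here.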

    Developing deep learning methods for aquaculture applications

    Alzayat Saleh developed a computer vision framework that can aid aquaculture experts in analyzing fish habitats. In particular, he developed a labelling-efficient method of training a CNN-based fish detector, as well as a model that estimates fish weight directly from an image.

    Machine Learning for Fluid Mechanics

    The field of fluid mechanics is rapidly advancing, driven by unprecedented volumes of data from field measurements, experiments, and large-scale simulations at multiple spatiotemporal scales. Machine learning offers a wealth of techniques to extract information from data that can be translated into knowledge about the underlying fluid mechanics. Moreover, machine learning algorithms can augment domain knowledge and automate tasks related to flow control and optimization. This article presents an overview of the history, current developments, and emerging opportunities of machine learning for fluid mechanics. It outlines fundamental machine learning methodologies and discusses their uses for understanding, modeling, optimizing, and controlling fluid flows. The strengths and limitations of these methods are addressed from the perspective of scientific inquiry that considers data as an inherent part of modeling, experimentation, and simulation. Machine learning provides a powerful information-processing framework that can enrich, and possibly even transform, current lines of fluid mechanics research and industrial applications.
    Comment: To appear in the Annual Review of Fluid Mechanics, 202

    Automated artemia length measurement using U-shaped fully convolutional networks and second-order anisotropic Gaussian kernels

    The brine shrimp Artemia, a small crustacean zooplankton organism, is universally used as live prey for larval fish and shrimps in aquaculture. In Artemia studies, it would be highly desirable to have automated techniques for obtaining length information from Artemia images. However, this problem has so far not been addressed in the literature. Moreover, conventional image-based length measurement approaches cannot be readily transferred to measuring Artemia length, due to the distortion of non-rigid bodies, the variation across growth stages, and the interference from the antennae and other appendages. To address this problem, we compile a dataset containing 250 images together with the corresponding label maps of length measuring lines. We propose an automated Artemia length measurement method using U-shaped fully convolutional networks (UNet) and second-order anisotropic Gaussian kernels. For a given Artemia image, the designed UNet model is used to extract a length measuring line structure, and, subsequently, the second-order Gaussian kernels are employed to transform the length measuring line structure into a thin measuring line. For comparison, we also follow conventional fish length measurement approaches and develop a non-learning-based method using mathematical morphology and polynomial curve fitting. We evaluate the proposed method and the competing methods on 100 test images taken from the compiled dataset. Experimental results show that the proposed method can accurately measure the length of Artemia objects in images, obtaining a mean absolute percentage error of 1.16%.
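A second-order anisotropic Gaussian kernel of the kind used for line detection can be sketched directly from its definition: the second derivative, taken across the line direction, of a 2-D Gaussian with different scales along and across the line. The kernel size, sigmas, and orientation below are illustrative assumptions, not the paper's settings:

```python
# Sketch of a second-order anisotropic Gaussian kernel for line/ridge
# filtering. sigma_u (across the line) < sigma_v (along the line) makes the
# filter elongated; theta rotates it. All parameter values are assumptions.
import numpy as np

def second_order_aniso_gaussian(size=21, sigma_u=2.0, sigma_v=6.0, theta=0.0):
    """Second derivative across the line (u-axis) of an anisotropic Gaussian."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    # Rotated coordinates: u runs across the line, v along it.
    u = x * np.cos(theta) + y * np.sin(theta)
    v = -x * np.sin(theta) + y * np.cos(theta)
    g = np.exp(-(u**2 / (2 * sigma_u**2) + v**2 / (2 * sigma_v**2)))
    # d^2/du^2 of the Gaussian: negative at the ridge center, positive flanks.
    return (u**2 / sigma_u**4 - 1 / sigma_u**2) * g

k = second_order_aniso_gaussian()
print(k.shape)  # (21, 21)
```

Convolving an image with this kernel (negated) responds strongly to bright ridges of the matching width and orientation, which is what allows a thick measuring-line structure to be reduced to a thin line.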

    Simultaneous mass estimation and class classification of scrap metals using deep learning

    © 2022 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works.
    While deep learning has helped improve the performance of classification, object detection, and segmentation in recycling, its potential for mass prediction has not yet been explored. Therefore, this study proposes a system for mass prediction with and without feature extraction and selection, including principal component analysis (PCA). These feature extraction methods are evaluated on a combined Cast (C), Wrought (W), and Stainless Steel (SS) image dataset using state-of-the-art machine learning and deep learning algorithms for mass prediction. The best mass prediction framework is then combined with a DenseNet classifier, resulting in a multiple-output model that performs both object classification and object mass prediction. The proposed architecture consists of a DenseNet neural network for classification and a backpropagation neural network (BPNN) for mass prediction, which uses up to 24 features extracted from depth images. The proposed method obtained an R2 of 0.82, an RMSE of 0.2, and an MAE of 0.28 for the mass prediction regression, with a classification performance of 95% on the C&W test dataset using the DenseNet+BPNN+PCA model. The DenseNet+BPNN+None model, without feature selection (None), had lower performance on the CW&SS test data for both classification (80%) and regression (R2 of 0.71, RMSE of 0.31, and MAE of 0.32).
    The presented method has the potential to improve the monitoring of the mass composition of waste streams and to optimize robotic and pneumatic sorting systems by providing a better understanding of the physical properties of the objects being sorted.
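The PCA-plus-regression stage of the mass-prediction pipeline can be sketched as follows. This is a minimal stand-in, not the paper's system: a linear least-squares model replaces the BPNN, the data are synthetic, and the number of retained components (10) is an assumption:

```python
# Sketch: reduce 24 depth-image features with PCA, then fit a regressor for
# object mass. Synthetic data; a least-squares model stands in for the BPNN.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 24))                    # 24 features per object
w_true = rng.normal(size=24)
y = X @ w_true + rng.normal(scale=0.1, size=200)  # synthetic object masses

# PCA via SVD on the centered feature matrix, keeping the top 10 components.
Xc = X - X.mean(axis=0)
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
Z = Xc @ Vt[:10].T                                # (200, 10) reduced features

# Least-squares regression (with intercept) on the reduced features.
A = np.c_[np.ones(len(Z)), Z]
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
pred = A @ coef
r2 = 1 - np.sum((y - pred) ** 2) / np.sum((y - y.mean()) ** 2)
print(round(r2, 2))
```

Because the synthetic target depends on all 24 features while only 10 principal components are kept, the fit is deliberately partial; in the paper, PCA is instead evaluated against using the full feature set.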

    Estimating mass of harvested Asian seabass Lates calcarifer from images

    A total of 1072 Asian seabass or barramundi (Lates calcarifer) were harvested at two different locations in Queensland, Australia. Each fish was digitally photographed and weighed. A subsample of 200 images (100 from each location) was manually segmented to extract the fish-body area (S, in cm^2), excluding all fins. After scaling the segmented images to 1 mm per pixel, the fish mass values (M, in grams) were fitted by a single-factor model (M = aS^1.5, a = 0.1695), achieving a coefficient of determination (R2) of 0.9819 and a Mean Absolute Relative Error (MARE) of 5.1%. A segmentation Convolutional Neural Network (CNN) was trained on the 200 hand-segmented images and then applied to the rest of the available images. The CNN-predicted fish-body areas were used to fit the mass-area estimation models: the single-factor model, M = aS^1.5 (a = 0.170, R2 = 0.9819, MARE = 5.1%); and the two-factor model, M = aS^b (a = 0.124, b = 1.55, R2 = 0.9834, MARE = 4.5%).
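Fitting the single-factor model M = aS^1.5 reduces to a one-parameter least-squares problem with a closed-form solution. The sketch below uses synthetic, noise-free areas generated from the paper's reported coefficient, so the fit recovers it exactly:

```python
# Worked sketch of the single-factor mass-area model M = a * S**1.5.
# The areas are synthetic; masses are generated from the paper's a = 0.1695.
import numpy as np

S = np.array([150.0, 220.0, 310.0, 450.0])   # fish-body areas, cm^2
M = 0.1695 * S**1.5                          # masses in grams (noise-free)

# With x = S**1.5, the model M = a*x is linear in a; least squares gives:
x = S**1.5
a = np.sum(x * M) / np.sum(x * x)
print(round(a, 4))  # 0.1695
```

On real, noisy data the same formula gives the least-squares estimate of a; the two-factor model M = aS^b additionally fits the exponent, e.g. by linear regression of log M on log S.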

    Automated identification of Fos expression

    The concentration of Fos, a protein encoded by the immediate-early gene c-fos, provides a measure of synaptic activity that may not parallel the electrical activity of neurons. Such a measure is important for the difficult problem of identifying dynamic properties of neuronal circuitries activated by a variety of stimuli and behaviours. We employ two-stage statistical pattern recognition to identify cellular nuclei that express Fos in two-dimensional sections of rat forebrain after administration of antipsychotic drugs. In stage one, we distinguish dark-stained candidate nuclei from image background by a thresholding algorithm and record size and shape measurements of these objects. In stage two, we compare the performance of linear and quadratic discriminants, nearest-neighbour, and artificial neural network classifiers that use functions of these measurements to label candidate objects as either Fos nuclei, two touching Fos nuclei, or irrelevant background material. New images of neighbouring brain tissue serve as test sets to assess the generalizability of the best derived classification rule, as determined by the lowest cross-validation misclassification rate. Three experts, two internal and one external, compare manual and automated results for accuracy assessment. Analyses of a subset of images on two separate occasions provide quantitative measures of inter- and intra-expert consistency. We conclude that our automated procedure yields results that compare favourably with those of the experts and thus has the potential to remove much of the tedium, subjectivity, and irreproducibility of current Fos identification methods in digital microscopy.
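Stage one of the procedure, thresholding followed by size and shape measurements, can be sketched as follows; the toy image, threshold value, and choice of features are assumptions for illustration, not the paper's parameters:

```python
# Sketch of stage one: threshold dark-stained candidate nuclei and record
# simple size/shape features. The 8x8 image and threshold are toy examples.
import numpy as np

img = np.full((8, 8), 200, dtype=np.uint8)   # bright background
img[2:5, 2:5] = 40                           # one dark "nucleus"

mask = img < 100                             # thresholding: dark pixels
area = mask.sum()                            # size feature: pixel count
ys, xs = np.nonzero(mask)
bbox_h = np.ptp(ys) + 1                      # bounding-box height
bbox_w = np.ptp(xs) + 1                      # bounding-box width
extent = area / (bbox_h * bbox_w)            # shape feature: bbox fill ratio
print(area, extent)  # 9 1.0
```

Features like area and extent help stage two separate single nuclei from pairs of touching nuclei (which tend to be larger and less compact) and from irregular background material.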