56 research outputs found

    An Integrated Method for Optimizing Bridge Maintenance Plans

    Bridges are vital civil infrastructure assets, essential for economic development and public welfare. Their large numbers, deteriorating condition, public demands for safe and efficient transportation networks, and limited maintenance and intervention budgets pose a challenge, particularly when coupled with the need to respect environmental constraints. This state of affairs creates a wide gap between critical intervention needs and tight maintenance and rehabilitation funds. To meet this challenge, a newly developed integrated method for optimized maintenance and intervention plans for reinforced concrete bridge decks is introduced. The method encompasses five models: surface defects evaluation, corrosion severity evaluation, deterioration modeling, integrated condition assessment, and optimized maintenance planning. These models were automated in a set of standalone computer applications, coded using C#.net in a Matlab environment, and subsequently combined to form an integrated method for optimized maintenance and intervention plans. Four bridges and a dataset of bridge images were used to test and validate the developed optimization method and its five models. The developed models have unique features and demonstrated noticeable gains in performance and accuracy over methods used in practice and those reported in the literature. For example, the surface defects detection and evaluation model outperforms widely recognized machine learning and deep learning models, reducing surface defect detection, recognition and evaluation errors by 56.08%, 20.2% and 64.23%, respectively. The corrosion evaluation model incorporates a standardized amplitude rating system that circumvents the limitations of numerical amplitude-based corrosion maps. For integrated condition assessment, the developed model achieved consistent improvement over the visual inspection procedures in use by the Ministry of Transportation in Quebec. Similarly, the deterioration model improved prediction accuracy by an average of 60% compared with the most commonly utilized Weibull distribution. The developed multi-objective optimization model yielded 49% and 25% improvements over a genetic algorithm for five-year and twenty-five-year study periods, respectively. For a thirty-five-year study period, classical meta-heuristics, unlike the developed model, failed to find feasible solutions within the assigned constraints. The developed integrated platform is expected to provide an efficient tool that enables decision makers to formulate sustainable maintenance plans that optimize budget allocations and ensure efficient utilization of resources.
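    As a point of reference for the deterioration comparison above, the sketch below fits a Weibull distribution to hypothetical sojourn times in a condition state and uses its survival function as a deterioration baseline; the data, parameter choices and function names are illustrative assumptions, not the study's models or values.

# Hedged sketch of a Weibull deterioration baseline (placeholder data, not study values).
import numpy as np
from scipy.stats import weibull_min

sojourn_years = np.array([4.0, 6.5, 5.2, 7.8, 6.1, 5.9])   # hypothetical inspection records
shape, loc, scale = weibull_min.fit(sojourn_years, floc=0)  # fix the location parameter at zero

def p_still_in_state(t_years):
    # Probability the deck element remains in its current condition state after t years.
    return weibull_min.sf(t_years, shape, loc=loc, scale=scale)

# Example: p_still_in_state(5.0) gives the 5-year "no further deterioration" probability.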

    Effective plant discrimination based on the combination of local binary pattern operators and multiclass support vector machine methods

    Accurate crop and weed discrimination plays a critical role in addressing the challenges of weed management in agriculture. The use of herbicides is currently the most common approach to weed control. However, herbicide-resistant plants have long been recognised as a major concern arising from the excessive use of herbicides. Effective weed detection techniques can reduce the cost of weed management and improve crop quality and yield. A computationally efficient and robust plant classification algorithm is developed and applied to the classification of three crops: Brassica napus (canola), Zea mays (maize/corn), and radish. The developed algorithm combines Local Binary Pattern (LBP) operators, for the extraction of crop leaf textural features, with a Support Vector Machine (SVM) method for multiclass plant classification. This paper presents the first investigation of the accuracy of the combined LBP algorithms, trained using a large dataset of canola, corn and radish leaf images captured by a testing facility under simulated field conditions. The dataset has four subclasses, background, canola, corn, and radish, with 24,000 images used for training and 6,000 images for validation. The dataset is referred to herein as “bccr-segset” and published online. In each subclass, plant images are collected at four crop growth stages. Experimentally, the algorithm demonstrates plant classification accuracy as high as 91.85% for the four classes. © 2018 China Agricultural University
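    A minimal sketch of the kind of LBP-plus-SVM pipeline described above is given below, assuming scikit-image and scikit-learn; the operator radii, histogram setup and SVM settings are illustrative assumptions rather than the paper's configuration.

# Hypothetical sketch: multi-scale LBP texture histograms feeding a multiclass SVM.
import numpy as np
from skimage.feature import local_binary_pattern
from sklearn.svm import SVC

def lbp_histogram(gray_image, radius=1, n_points=8):
    # Uniform LBP codes summarised as a normalised histogram feature vector.
    codes = local_binary_pattern(gray_image, n_points, radius, method="uniform")
    n_bins = n_points + 2  # uniform patterns plus one "non-uniform" bin
    hist, _ = np.histogram(codes, bins=n_bins, range=(0, n_bins), density=True)
    return hist

def extract_features(gray_images):
    # Concatenate histograms from several LBP operators (different radii/point counts).
    return np.array([np.concatenate([lbp_histogram(img, r, 8 * r) for r in (1, 2, 3)])
                     for img in gray_images])

# Assumed usage: X_train is a list of grayscale leaf images, y_train holds labels in
# {"background", "canola", "corn", "radish"}.
# clf = SVC(kernel="rbf", gamma="scale").fit(extract_features(X_train), y_train)
# predictions = clf.predict(extract_features(X_test))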

    Towards Automated Weed Detection Through Two-Stage Semantic Segmentation of Tobacco and Weed Pixels in Aerial Imagery

    In precision farming, weed detection is required for precise weedicide application, and the detection of tobacco crops is necessary for pesticide application on tobacco leaves. Automated, accurate detection of tobacco and weeds through aerial visual cues holds promise. Precise weed detection in crop field imagery can be treated as a semantic segmentation problem. Many image processing, classical machine learning, and deep learning-based approaches have been devised in the past, of which deep learning-based techniques promise better accuracies for semantic segmentation, i.e., pixel-level classification. We present a new method that improves the precision of pixel-level inter-class classification of crop and weed pixels. The technique applies semantic segmentation in two stages. In stage I, a binary pixel-level classifier is developed to segment background and vegetation. In stage II, a three-class pixel-level classifier is designed to classify background, weeds, and tobacco. The output of the first stage is the input of the second stage. To test the designed classifier, a new tobacco crop aerial dataset was captured and manually labeled pixel-wise. The two-stage semantic segmentation architecture has shown better tobacco and weed pixel-level classification precision: the intersection over union (IOU) for the tobacco crop improved from 0.67 to 0.85, and the IOU for weeds improved from 0.76 to 0.91, compared to the traditional single-stage semantic segmentation approach. We observe that stage I requires only a shallower, smaller semantic segmentation model, whereas stage II needs a segmentation network with more neurons to achieve good detection.
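    The cascade described above can be pictured with the following sketch, in which the two trained segmentation models appear as stand-in callables; the masking and IOU details are assumptions, not the authors' implementation.

# Illustrative sketch of a two-stage segmentation cascade (not the authors' code).
import numpy as np

def two_stage_segment(image, stage1_predict, stage2_predict):
    """image: HxWx3 array; stage1_predict/stage2_predict: callables returning per-pixel
    class maps (assumed trained elsewhere, e.g. as encoder-decoder CNNs)."""
    veg_mask = np.array(stage1_predict(image)) == 1   # 0 = background, 1 = vegetation
    masked = image * veg_mask[..., None]              # suppress background pixels
    labels = np.array(stage2_predict(masked))         # 0 = background, 1 = weed, 2 = tobacco
    labels[~veg_mask] = 0                             # force non-vegetation back to background
    return labels

def iou(pred, target, cls):
    # Per-class intersection-over-union, the metric behind the 0.85 / 0.91 figures.
    inter = np.logical_and(pred == cls, target == cls).sum()
    union = np.logical_or(pred == cls, target == cls).sum()
    return inter / union if union else float("nan")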

    Local Binary Pattern based algorithms for the discrimination and detection of crops and weeds with similar morphologies

    In cultivated agricultural fields, weeds are unwanted species that compete with the crop plants for nutrients, water, sunlight and soil, thus constraining their growth. Applying new real-time weed detection and spraying technologies to agriculture would enhance current farming practices, leading to higher crop yields and lower production costs. Various weed detection methods have been developed for Site-Specific Weed Management (SSWM) aimed at maximising the crop yield through efficient control of weeds. Blanket application of herbicide chemicals is currently the most popular weed eradication practice in weed management. However, the excessive use of herbicides has a detrimental impact on human health, the economy and the environment. Because weeds respond better to control strategies before they become resistant to herbicides, it is necessary to control them in the fallow, pre-sowing, early post-emergent and pasture phases. Moreover, the development of herbicide resistance in weeds is the driving force for inventing precision and automated weed treatments. Various weed detection techniques have been developed to identify weed species in crop fields, aimed at improving crop quality, reducing herbicide and water usage and minimising environmental impacts. In this thesis, Local Binary Pattern (LBP)-based algorithms are developed and tested experimentally; they are based on extracting dominant plant features from camera images to precisely detect weeds among crops in real time. Building on the efficient computation and robustness of the first LBP method, an improved LBP-based method is developed that uses three different LBP operators for plant feature extraction in conjunction with a Support Vector Machine (SVM) method for multiclass plant classification. A 24,000-image dataset, collected using a testing facility under simulated field conditions (Testbed system), is used for algorithm training, validation and testing. The dataset, which is published online under the name “bccr-segset”, consists of four subclasses: background, Canola (Brassica napus), Corn (Zea mays), and Wild radish (Raphanus raphanistrum). In addition, the dataset comprises plant images collected at four crop growth stages for each subclass. The computer-controlled Testbed is designed to rapidly label plant images and generate the “bccr-segset” dataset. Experimental results show that the classification accuracy of the improved LBP-based algorithm is 91.85% for the four classes. Due to the similarity of the morphologies of the canola (crop) and wild radish (weed) leaves, the conventional LBP-based method has a limited ability to discriminate broadleaf crops from weeds. To overcome this limitation and to handle complex field conditions (illumination variation, poses, viewpoints, and occlusions), a novel LBP-based method (denoted k-FLBPCM) is developed to enhance the classification accuracy of crops and weeds with similar morphologies. Our contributions include (i) the use of opening and closing morphological operators in the pre-processing of plant images, (ii) the development of the k-FLBPCM method by combining two methods, namely the filtered LBP method and the contour-based masking method with a coefficient k, and (iii) the optimal use of an SVM with the radial basis function (RBF) kernel to precisely identify broadleaf plants based on their distinctive features. 
The high performance of this k-FLBPCM method is demonstrated by experimentally attaining up to 98.63% classification accuracy at four different growth stages for all classes of the “bccr-segset” dataset. To evaluate the real-time performance of the k-FLBPCM algorithm, a comparative analysis between the novel method and deep convolutional neural networks (DCNNs) is conducted on morphologically similar crops and weeds. Various DCNN models, namely VGG-16, VGG-19, ResNet50 and InceptionV3, are optimised by fine-tuning their hyper-parameters and then tested. Based on the experimental results on the “bccr-segset” dataset collected in the laboratory and the “fieldtrip_can_weeds” dataset collected in the field under practical conditions, the classification accuracies of the DCNN models and the k-FLBPCM method are almost similar. Another experiment is conducted by training the algorithms with plant images obtained at mature stages and testing them at early stages. In this case, the new k-FLBPCM method outperforms the state-of-the-art CNN models in identifying the small leaf shapes of canola and radish (crop and weed) at early growth stages, with error rates an order of magnitude lower than those of the DCNN models. Furthermore, the execution time of the k-FLBPCM method during the training and test phases is faster than that of the DCNN counterparts, with an identification time difference of approximately 0.224 ms per image for the laboratory dataset and 0.346 ms per image for the field dataset. These results demonstrate the ability of the k-FLBPCM method to rapidly detect weeds among crops of similar appearance in real time with less data, and to generalize to plants of different sizes better than the CNN-based methods.
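    The sketch below illustrates, under stated assumptions, the main k-FLBPCM ingredients named above (morphological opening/closing, a filtered LBP descriptor, a contour-based mask and a weighting coefficient k feeding an RBF-kernel SVM); the exact filtering, masking and combination rules of the thesis are not reproduced, so the steps are simplified stand-ins.

# Hedged sketch of a k-FLBPCM-style feature pipeline (simplified stand-in, not the thesis code).
import numpy as np
import cv2
from skimage.feature import local_binary_pattern
from sklearn.svm import SVC

def preprocess(gray_u8):
    # Morphological opening then closing to remove speckle noise and fill small gaps.
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
    opened = cv2.morphologyEx(gray_u8, cv2.MORPH_OPEN, kernel)
    return cv2.morphologyEx(opened, cv2.MORPH_CLOSE, kernel)

def contour_mask(gray_u8):
    # Largest-contour mask approximating the leaf silhouette (assumes an 8-bit grayscale image).
    _, binary = cv2.threshold(gray_u8, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    mask = np.zeros_like(gray_u8)
    if contours:
        cv2.drawContours(mask, [max(contours, key=cv2.contourArea)], -1, 255, -1)
    return mask

def k_flbpcm_features(gray_u8, k=0.5, n_points=8, radius=2):
    # Blend LBP histograms of the filtered image and the contour-masked image with weight k.
    filtered = preprocess(gray_u8)
    masked = cv2.bitwise_and(filtered, contour_mask(gray_u8))
    def hist(img):
        codes = local_binary_pattern(img, n_points, radius, method="uniform")
        h, _ = np.histogram(codes, bins=n_points + 2, range=(0, n_points + 2), density=True)
        return h
    return np.concatenate([hist(filtered), k * hist(masked)])

# Assumed usage:
# clf = SVC(kernel="rbf").fit([k_flbpcm_features(g) for g in train_grays], train_labels)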

    3D crop monitoring and weed mapping using unmanned aerial vehicles for the sustainable use of plant protection products

    In this doctoral thesis, images acquired from a UAV were used to address the sustainable application of plant protection products by generating maps that enable their site-specific application. Two different and complementary approaches were developed to achieve this objective: 1) reducing early post-emergence herbicide application by designing treatments targeted at the weed-infested zones of several herbaceous crops; and 2) three-dimensional characterization (architecture and volume) of woody crops for the design of site-specific plant protection treatments aimed at their canopies. To address site-specific herbicide control, the configuration and technical specifications of a UAV and its on-board sensors were studied for application to early weed detection, contributing to the generation of maps for site-specific control in three herbaceous crops: maize, wheat and sunflower. Next, the most accurate spectral indices for discriminating bare soil from vegetation (crop and weeds) in UAV images of these crops at an early stage were evaluated. To automate this discrimination, a threshold-calculation method was implemented in an OBIA environment. Finally, an automatic and robust OBIA methodology was developed for discriminating crop, bare soil and weeds in the three crops studied, and the influence on its performance of different parameters related to UAV image acquisition (overlap, sensor type, flight altitude and flight scheduling, among others) was evaluated. Furthermore, to facilitate the design of plant protection treatments adjusted to the needs of woody crops, an automatic and robust OBIA methodology was developed for the three-dimensional characterization (architecture and volume) of woody crops using images and digital surface models generated from UAV imagery. The influence of different image-acquisition parameters (overlap, sensor type, flight altitude) on the performance of the designed OBIA algorithm was also evaluated.
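    As a pixel-level illustration of the soil/vegetation discrimination step described above (the thesis performs it within an OBIA workflow, which is not reproduced here), the following sketch computes the Excess Green index and applies an automatic Otsu threshold; the index and thresholding choices are assumptions.

# Illustrative pixel-level sketch: spectral index plus automatic threshold for soil/vegetation.
import numpy as np
from skimage.filters import threshold_otsu

def excess_green(rgb):
    # Excess Green index on chromatic coordinates: ExG = 2g - r - b.
    rgb = rgb.astype(float)
    s = rgb.sum(axis=2) + 1e-9
    r, g, b = rgb[..., 0] / s, rgb[..., 1] / s, rgb[..., 2] / s
    return 2 * g - r - b

def vegetation_mask(rgb):
    # True = vegetation (crop or weed), False = bare soil.
    exg = excess_green(rgb)
    return exg > threshold_otsu(exg)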

    Sustainable Agriculture and Advances of Remote Sensing (Volume 1)

    Agriculture, as the main source of food and the most important economic activity globally, is being affected by the impacts of climate change. To maintain and increase global food production, reduce biodiversity loss and preserve our natural ecosystems, new practices and technologies are required. This book focuses on the latest advances in remote sensing technology and agricultural engineering leading to sustainable agriculture practices. Earth observation data, together with in situ and proxy-remote sensing data, are the main sources of information for monitoring and analyzing agricultural activities. Particular attention is given to Earth observation satellites and the Internet of Things for data collection, to multispectral and hyperspectral data analysis using machine learning and deep learning, and to WebGIS and the Internet of Things for sharing and publishing the results, among others.

    A novel segmentation approach for crop modeling using a plenoptic light-field camera: going from 2D to 3D

    Crop phenotyping is a desirable task in crop characterization since it allows the farmer to make early decisions and therefore be more productive. This research is motivated by the generation of tools for rice crop phenotyping within the OMICAS research ecosystem framework. It proposes implementing image processing technologies and artificial intelligence techniques through a multisensory approach with multispectral information. Three main stages are covered: (i) a segmentation approach that identifies the biological material associated with plants, whose main contribution is the GFKuts segmentation approach; (ii) a strategy for sensory fusion between three different cameras, a 3D camera, an infrared multispectral camera, and a thermal multispectral camera, developed through a complex object detection approach; and (iii) the characterization of a 4D model that generates topological relationships from the point cloud information, whose main contribution is the improvement of the point cloud captured by the 3D sensor; in this sense, this stage improves the acquisition of any 3D sensor. This research presents a development that receives information from multiple sensors, especially infrared 2D, and generates a single 4D model in geometric space [X, Y, Z]. This model integrates the color information of 5 channels and topological information relating the points in space. Overall, the research allows the integration of 3D information from any sensor technology and the multispectral channels from any multispectral camera, to generate direct non-invasive measurements on the plant.
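    A minimal sketch of the fusion idea described above, attaching multispectral values to a 3D point cloud and building a simple neighbourhood structure, is shown below; the registration model, the channel handling and the k-nearest-neighbour choice are assumptions, and the GFKuts segmentation is not reproduced.

# Hedged sketch: fuse multispectral channels with a point cloud and build neighbour relations.
import numpy as np
from scipy.spatial import cKDTree

def fuse_point_cloud(points_xyz, pixel_uv, channels):
    """points_xyz: (N, 3) points; pixel_uv: (N, 2) integer pixel coordinates of each point in
    a registered multispectral image; channels: HxWx5 multispectral image (assumed aligned)."""
    colors = channels[pixel_uv[:, 1], pixel_uv[:, 0], :]   # 5 channel values per point
    return np.hstack([points_xyz, colors])                 # (N, 8) rows of the fused model

def neighbourhood_graph(points_xyz, k=6):
    # k-nearest-neighbour indices per point, a simple stand-in for topological relations.
    tree = cKDTree(points_xyz)
    _, idx = tree.query(points_xyz, k=k + 1)
    return idx[:, 1:]                                       # drop each point itself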

    A Multi-Sensor Phenotyping System: Applications on Wheat Height Estimation and Soybean Trait Early Prediction

    Phenotyping is an essential aspect of plant breeding research since it is the foundation of the plant selection process. Traditional plant phenotyping methods, such as measuring and recording plant traits manually, can be inefficient, laborious and prone to error. With the help of modern sensing technologies, high-throughput field phenotyping has become popular due to its ability to sense various crop traits non-destructively with high efficiency. A multi-sensor phenotyping system equipped with red-green-blue (RGB) cameras, radiometers, ultrasonic sensors, spectrometers, a global positioning system (GPS) receiver, a pyranometer, a temperature and relative humidity probe and a light detection and ranging (LiDAR) unit was first constructed, and a LabVIEW program was developed for sensor control and data acquisition. Two studies were conducted, focusing on system performance examination and data exploration respectively. The first study compared wheat height measurements from the ultrasonic sensor and the LiDAR. Canopy heights of 100 wheat plots were estimated five times over the season by the ground phenotyping system, and the results were compared to manual measurements. Overall, LiDAR provided the better estimates, with a root mean square error (RMSE) of 0.05 m and an R2 of 0.97. The ultrasonic sensor did not perform well due to the style of our application. In conclusion, LiDAR was recommended as a reliable method for wheat height evaluation. The second study explored the possibility of early prediction of soybean traits from color and texture features of canopy images. A total of 6,383 RGB images were captured at the V4/V5 growth stage over 5,667 soybean plots growing at four locations. One hundred and forty color features and 315 gray-level co-occurrence matrix (GLCM)-based texture features were derived from each image. Another two variables were also introduced to account for the location and timing differences between images. Cubist and Random Forests were used for regression and classification modelling, respectively. Yield (RMSE=9.82, R2=0.68), maturity (RMSE=3.70, R2=0.76) and seed size (RMSE=1.63, R2=0.53) were identified as potential soybean traits that might be early-predictable. Advisor: Yufeng G
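    The image-feature step described above can be sketched as follows, assuming scikit-image and scikit-learn; the GLCM distances, angles, properties and Random Forests settings are illustrative assumptions, and the Cubist regression step is not reproduced.

# Hedged sketch: GLCM texture descriptors from a canopy image feeding a Random Forests model.
import numpy as np
from skimage.feature import graycomatrix, graycoprops
from sklearn.ensemble import RandomForestClassifier

def glcm_features(gray_u8, distances=(1, 2), angles=(0, np.pi / 4, np.pi / 2)):
    # Gray-level co-occurrence matrix statistics (contrast, homogeneity, energy, correlation)
    # computed from an 8-bit grayscale canopy image.
    glcm = graycomatrix(gray_u8, distances=distances, angles=angles,
                        levels=256, symmetric=True, normed=True)
    props = ("contrast", "homogeneity", "energy", "correlation")
    return np.concatenate([graycoprops(glcm, p).ravel() for p in props])

# Assumed usage (color features would be appended alongside these texture features):
# X = np.array([glcm_features(img) for img in canopy_grays])
# model = RandomForestClassifier(n_estimators=300).fit(X, maturity_classes)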
