1,081 research outputs found

    Selecting features for object detection using an AdaBoost-compatible evaluation function

    This paper addresses the problem of selecting features in a visual object detection setup, where a detection algorithm is applied to an input image represented by a set of features. The set of features employed in the test stage is prepared in two training-stage steps. In the first step, a feature extraction algorithm produces a (possibly large) initial set of features. In the second step, on which this paper focuses, the initial set is reduced by a selection procedure. The proposed selection procedure is based on a novel evaluation function that measures the utility of individual features for a given detection task. Owing to its design, the evaluation function can be seamlessly embedded into an AdaBoost selection framework. The developed selection procedure is integrated with state-of-the-art feature extraction and object detection methods. The presented system was tested on five challenging detection setups; in three of them, a fairly high detection accuracy was achieved with as few as six features selected out of several hundred initial candidates.
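
    A minimal sketch of how an AdaBoost-style selection loop can rank features, assuming each feature is wrapped as a decision stump and scored by its weighted training error; the paper's actual utility-based evaluation function is not reproduced here, and the function name and threshold choices below are illustrative.

```python
import numpy as np

def adaboost_select(X, y, n_select=6):
    """Greedy AdaBoost-style feature selection (illustrative sketch).

    X : (n_samples, n_features) feature matrix
    y : (n_samples,) labels in {-1, +1}
    Each round fits a decision stump per remaining feature on the current
    sample weights and keeps the feature whose stump has the lowest
    weighted error (a stand-in for the paper's utility measure).
    """
    n, d = X.shape
    w = np.full(n, 1.0 / n)                      # sample weights
    selected = []
    for _ in range(n_select):
        best = None                              # (error, feature, threshold, sign)
        for j in range(d):
            if j in selected:
                continue
            for thr in np.percentile(X[:, j], [25, 50, 75]):
                for sign in (1, -1):
                    pred = np.where(sign * (X[:, j] - thr) > 0, 1, -1)
                    err = w[pred != y].sum()
                    if best is None or err < best[0]:
                        best = (err, j, thr, sign)
        err, j, thr, sign = best
        err = max(err, 1e-12)
        alpha = 0.5 * np.log((1.0 - err) / err)  # stump weight
        pred = np.where(sign * (X[:, j] - thr) > 0, 1, -1)
        w *= np.exp(-alpha * y * pred)           # re-weight samples
        w /= w.sum()
        selected.append(j)
    return selected
```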

    Plants Detection, Localization and Discrimination using 3D Machine Vision for Robotic Intra-row Weed Control

    Weed management is vitally important in crop production systems. However, conventional herbicide-based weed control can lead to negative environmental impacts, and manual weed control is laborious and impractical for large-scale production. Robotic weeding offers the possibility of controlling weeds precisely, particularly weeds growing close to or within crop rows. The fusion of two-dimensional textural images and three-dimensional spatial images to recognize and localize crop plants at different growth stages was investigated. Images of different crop plants at different growth stages, together with weeds, were acquired. Feature extraction algorithms were developed, and the extracted features were used to train plant and background classifiers that also address the problems of canopy occlusion and leaf damage. The efficacy and accuracy of the proposed classification methods were then demonstrated by experiments. So far, the algorithms have been developed and tested only for broccoli and lettuce. For broccoli plants, the crop-plant detection true positive rate was 93.1% and the false discovery rate was 1.1%, with an average crop-plant localization error of 15.9 mm. For lettuce plants, the true positive rate was 92.3% and the false discovery rate was 4.0%, with an average localization error of 8.5 mm. The results show that 3D-imaging-based plant recognition algorithms are effective and reliable for crop/weed differentiation.
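
    As a rough illustration of how the reported figures could be computed, the sketch below scores predicted plant centres against ground-truth centres with a greedy nearest-neighbour match and returns the true positive rate, false discovery rate, and mean localization error. The matching radius and data layout are assumptions for illustration, not the authors' evaluation protocol.

```python
import numpy as np

def detection_metrics(pred_xy, gt_xy, match_dist_mm=50.0):
    """Match predicted plant centres to ground-truth centres (both in mm).

    Returns (true_positive_rate, false_discovery_rate, mean_localization_error_mm).
    The 50 mm matching radius is an assumed value for illustration.
    """
    pred = np.asarray(pred_xy, dtype=float)
    gt = np.asarray(gt_xy, dtype=float)
    unmatched_gt = set(range(len(gt)))
    tp, errors = 0, []
    for p in pred:
        if not unmatched_gt:
            break                                 # remaining predictions count as false positives
        idx = min(unmatched_gt, key=lambda i: np.linalg.norm(p - gt[i]))
        dist = np.linalg.norm(p - gt[idx])
        if dist <= match_dist_mm:
            tp += 1
            errors.append(dist)
            unmatched_gt.remove(idx)
    fp = len(pred) - tp
    tpr = tp / len(gt) if len(gt) else 0.0
    fdr = fp / len(pred) if len(pred) else 0.0
    return tpr, fdr, float(np.mean(errors)) if errors else float("nan")
```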

    Bounding Box-Free Instance Segmentation Using Semi-Supervised Learning for Generating a City-Scale Vehicle Dataset

    Vehicle classification is a hot computer vision topic, with studies ranging from ground-view to top-view imagery. In remote sensing, top-view images allow for understanding city patterns, vehicle concentration, traffic management, and more. However, pixel-wise classification faces several difficulties: (a) most vehicle classification studies use object detection methods, and most publicly available datasets are designed for that task; (b) creating instance segmentation datasets is laborious; and (c) traditional instance segmentation methods underperform on this task since the objects are small. Thus, the present research objectives are to: (1) propose a novel semi-supervised iterative learning approach using GIS software, (2) propose a box-free instance segmentation approach, and (3) provide a city-scale vehicle dataset. The iterative learning procedure consists of: (1) labeling a small number of vehicles, (2) training on those samples, (3) using the model to classify the entire image, (4) converting the image prediction into a polygon shapefile, (5) correcting some areas with errors and including them in the training data, and (6) repeating until the results are satisfactory. To separate instances, we considered vehicle interiors and vehicle borders, and the DL model was a U-Net with an EfficientNet-B7 backbone. When the borders are removed, the vehicle interiors become isolated, allowing for unique object identification. To recover the deleted 1-pixel borders, we propose a simple method to expand each prediction. The results show better pixel-wise metrics than Mask R-CNN (82% against 67% IoU). In the per-object analysis, the overall accuracy, precision, and recall were all greater than 90%. This pipeline applies to any remote sensing target and is very efficient for segmentation and dataset generation.
    Comment: 38 pages, 10 figures, submitted to journal
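
    The border-removal idea can be sketched with standard image-processing calls: once the semantic model has labelled vehicle interiors and borders separately, the isolated interiors are given unique IDs by connected-component labelling, and a small label expansion grows each instance back over the removed 1-pixel border. The skimage functions used here exist as written, but the overall pipeline is an interpretation of the abstract, not the authors' code.

```python
import numpy as np
from skimage.measure import label
from skimage.segmentation import expand_labels

def interiors_to_instances(interior_mask, expand_px=1):
    """Turn a binary 'vehicle interior' mask (borders already removed by the
    semantic model) into an instance map with one integer ID per vehicle.

    interior_mask : 2D bool/0-1 array where 1 = vehicle interior.
    expand_px     : how far to grow each label to recover the deleted border.
    """
    instances = label(interior_mask, connectivity=1)      # unique ID per isolated blob
    return expand_labels(instances, distance=expand_px)   # grow back over the border
```

    expand_labels grows each label outward without letting neighbouring instances merge, which is why a small expansion can restore the removed borders while keeping adjacent vehicles separate.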

    Plant Localization and Discrimination using 2D+3D Computer Vision for Robotic Intra-row Weed Control

    Weed management is vitally important in crop production systems. However, conventional herbicide-based weed control can lead to negative environmental impacts, and manual weed control is laborious and impractical for large-scale production. Robotic weed control offers the possibility of controlling weeds precisely, particularly weeds growing near or within crop rows. A computer vision system based on the Kinect V2 sensor was developed, using the fusion of two-dimensional textural data and three-dimensional spatial data to recognize and localize crop plants at different growth stages. Images of different plant species, such as broccoli, lettuce, and corn, were acquired at different growth stages, and a database system was developed to organize them. Several feature extraction algorithms were developed that address the problems of canopy occlusion and damaged leaves. With the proposed algorithms, different features were extracted and used to train plant and background classifiers. Finally, the efficiency and accuracy of the proposed classification methods were demonstrated and validated by experiments.
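
    A minimal sketch of the 2D+3D fusion step, assuming registered colour and depth patches (for example, cropped from aligned Kinect V2 frames): colour/greenness statistics from the 2D data and height statistics from the 3D data are concatenated into one feature vector that could feed any plant-versus-background classifier. The specific features are illustrative, not those developed in the dissertation.

```python
import numpy as np

def fuse_2d_3d_features(rgb_patch, depth_patch):
    """Build one feature vector from an aligned RGB patch and depth patch.

    rgb_patch   : (H, W, 3) uint8 colour data
    depth_patch : (H, W) float depth in metres (0 = missing reading)
    Returns a small vector of colour and height statistics.
    """
    rgb = rgb_patch.astype(float)
    # Excess-green index, a common greenness cue in plant imaging
    exg = 2.0 * rgb[..., 1] - rgb[..., 0] - rgb[..., 2]
    valid = depth_patch > 0
    z = depth_patch[valid] if valid.any() else np.zeros(1)
    return np.array([
        exg.mean(), exg.std(),                  # 2D colour/texture cues
        rgb[..., 1].mean(),                     # mean green channel
        z.mean(), z.std(), z.max() - z.min(),   # 3D height cues
    ])
```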

    Real-Time Automatic Linear Feature Detection in Images

    Linear feature detection in digital images is an important low-level operation in computer vision with many applications. In remote sensing, it can be used to extract roads, railroads, and rivers from satellite or low-resolution aerial images, supporting the capture or update of data for geographic information and navigation systems. In medical imaging, it is useful for extracting blood vessels from X-ray angiography or skull bones from CT or MR images, and in horticulture it can be applied to underground plant root detection in minirhizotron images. In this dissertation, a fast and automatic algorithm for linear feature extraction from images is presented. Under the assumption that a linear feature is a sequence of contiguous pixels at which the image intensity is locally maximal in the direction of the gradient, linear features are extracted as non-overlapping connected line segments consisting of these contiguous pixels. To perform this task, a point process is used to model the network of line segments in an image. Specific properties of the line segments are described by an intensity energy model; aligned segments are favored while superposition is penalized, and these constraints are enforced by an interaction energy model. Linear features are extracted from the line-segment network by minimizing a modified Candy-model energy function with a greedy algorithm whose parameters are determined in a data-driven manner. Experimental results on a collection of different types of linear features in images (underground plant roots, blood vessels, and urban roads) demonstrate the effectiveness of the approach.
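
    A minimal sketch of the candidate-pixel assumption stated above: keep pixels whose smoothed intensity is a local maximum across the line. The cross-line direction is taken here from the Hessian's principal curvature as a practical stand-in for the gradient-direction wording in the abstract, and grouping the candidates into non-overlapping segments via the modified Candy-model energy is not covered.

```python
import numpy as np
from scipy import ndimage

def line_pixel_candidates(image, sigma=1.5, min_curvature=1.0):
    """Flag candidate linear-feature pixels in a 2D image.

    A pixel is kept if the Gaussian-smoothed intensity is locally maximal
    along the direction of strongest negative second derivative (the
    cross-line direction) and the curvature there is strong enough.
    Returns a boolean mask of candidates for a later segment-grouping step.
    """
    s = ndimage.gaussian_filter(np.asarray(image, dtype=float), sigma)
    ixx = ndimage.gaussian_filter(s, sigma, order=(0, 2))   # d2/dx2
    iyy = ndimage.gaussian_filter(s, sigma, order=(2, 0))   # d2/dy2
    ixy = ndimage.gaussian_filter(s, sigma, order=(1, 1))   # d2/dxdy
    # Smaller (more negative) Hessian eigenvalue and its eigenvector
    disc = np.sqrt((ixx - iyy) ** 2 + 4.0 * ixy ** 2)
    lam = 0.5 * (ixx + iyy - disc)               # cross-line curvature
    vx, vy = ixy, lam - ixx                      # unnormalised eigenvector
    norm = np.hypot(vx, vy) + 1e-9
    vx, vy = vx / norm, vy / norm
    h, w = s.shape
    yy, xx = np.mgrid[0:h, 0:w].astype(float)
    # Sample one pixel forward and backward along the cross-line direction
    fwd = ndimage.map_coordinates(s, [yy + vy, xx + vx], order=1)
    bwd = ndimage.map_coordinates(s, [yy - vy, xx - vx], order=1)
    return (s >= fwd) & (s >= bwd) & (lam < -min_curvature)
```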