11 research outputs found

    Automated Signal Processing Applied to Volatile-Based Inspection of Greenhouse Crops

    Gas chromatograph–mass spectrometers (GC-MS) have proven useful for volatile-based inspection of greenhouse crops. However, a widely recognised difficulty with GC-MS is the large and complex datasets the instrument generates. As a consequence, experienced analysts are often required to process these data in order to determine the concentrations of the volatile organic compounds (VOCs) of interest. Manual processing is time-consuming, labour-intensive and prone to errors due to fatigue. The objective of this study was to assess whether GC-MS data can instead be processed automatically to determine the concentrations of crop-health-associated VOCs in a greenhouse. An experimental dataset consisting of twelve data files was processed both manually and automatically to address this question. Manual processing was based on simple peak integration, while automatic processing relied on the algorithms implemented in the MetAlign™ software package. Automatic processing of the experimental dataset yielded concentrations similar to those obtained by manual processing. These results demonstrate that GC-MS data can be processed automatically to accurately determine the concentrations of crop-health-associated VOCs in a greenhouse. When processing GC-MS data automatically, noise reduction, alignment, baseline correction and normalisation are required.
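
    The closing sentence names four preprocessing steps. The sketch below illustrates three of them (noise reduction, baseline correction, normalisation) on a single intensity trace using generic signal-processing operations; it is an illustration with hypothetical window sizes, not the algorithms implemented in MetAlign™, and retention-time alignment across files is omitted.

    import numpy as np
    from scipy.ndimage import median_filter, minimum_filter1d, uniform_filter1d

    def preprocess_chromatogram(intensity: np.ndarray,
                                noise_window: int = 5,
                                baseline_window: int = 201) -> np.ndarray:
        """Denoise, baseline-correct and normalise one GC-MS intensity trace."""
        # Noise reduction: a small median filter suppresses spike noise.
        smoothed = median_filter(intensity, size=noise_window)
        # Baseline correction: a rolling minimum, smoothed, gives a crude
        # baseline estimate that is subtracted from the signal.
        baseline = uniform_filter1d(
            minimum_filter1d(smoothed, size=baseline_window),
            size=baseline_window)
        corrected = np.clip(smoothed - baseline, 0.0, None)
        # Normalisation: scale to the total signal so runs are comparable.
        total = corrected.sum()
        return corrected / total if total > 0 else corrected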

    Investigation on combinations of colour indices and threshold techniques in vegetation segmentation for volunteer potato control in sugar beet

    No full text
    Robust vegetation segmentation is required for a vision-based weed control robot operating in an agricultural field. The output of vegetation segmentation is a fundamental element in the subsequent process of weed/crop discrimination as well as weed control actuation. Given the abundance of colour indices and thresholding techniques, it is still far from clear how to choose a threshold technique in combination with a colour index for vegetation segmentation under agricultural field conditions. In this research, the performance of 40 combinations of eight colour indices and five thresholding techniques from the literature was assessed to identify which combination works best under varying field conditions in terms of illumination intensity, shadow presence and plant size. It was also assessed whether it is better to use one specific combination at all times or to adapt the combination to the field conditions at hand. A clear difference in performance, expressed as MA (Modified Accuracy), the harmonic mean of the relative vegetation area error and the balanced accuracy, was observed among the combinations under the given conditions. On the image dataset used in this study, CIVE+Kapur (Colour Index of Vegetation Extraction + Max Entropy threshold) showed the best performance, while VEG+Kapur (Vegetative Index + Max Entropy threshold) showed the worst. Adapting the combination to the given conditions yielded slightly higher performance than using a single combination throughout (in this case CIVE+Kapur). Consistent results were obtained when validating on an independent image dataset. Although slightly higher performance was achieved when adapting the combination to the field conditions, this improvement does not seem to outweigh the investment in sensor technology and software needed in practice to accurately determine the conditions in the field. Therefore, the expected advantage of adapting the combination to the field conditions is not large.
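
    As a sketch of the best-performing combination reported here, the code below computes the CIVE index (with the published Kataoka et al. coefficients) and a Kapur maximum-entropy threshold. It is a minimal illustration; the study's exact normalisation and implementation details may differ.

    import numpy as np

    def cive(rgb: np.ndarray) -> np.ndarray:
        """Colour Index of Vegetation Extraction; low values indicate vegetation."""
        r, g, b = (rgb[..., i].astype(float) for i in range(3))
        return 0.441 * r - 0.811 * g + 0.385 * b + 18.78745

    def kapur_threshold(values: np.ndarray, bins: int = 256) -> float:
        """Kapur's maximum-entropy threshold over a histogram of index values."""
        hist, edges = np.histogram(values, bins=bins)
        p = hist / hist.sum()
        cum = np.cumsum(p)
        eps = 1e-12
        best_t, best_h = 0, -np.inf
        for t in range(1, bins - 1):
            w0, w1 = cum[t], 1.0 - cum[t]
            if w0 < eps or w1 < eps:
                continue
            p0, p1 = p[: t + 1] / w0, p[t + 1 :] / w1
            # Maximise the sum of the entropies of the two classes split at t.
            h = -(p0 * np.log(p0 + eps)).sum() - (p1 * np.log(p1 + eps)).sum()
            if h > best_h:
                best_h, best_t = h, t
        return edges[best_t + 1]

    def segment_vegetation(rgb: np.ndarray) -> np.ndarray:
        """Binary vegetation mask from an RGB image (CIVE + Kapur)."""
        index = cive(rgb)
        return index < kapur_threshold(index)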

    Fruit Detectability Analysis for Different Camera Positions in Sweet-Pepper

    No full text
    For robotic harvesting of sweet-pepper fruits in greenhouses, a sensor system is required to detect and localise the fruits on the plants. Due to the complex structure of the plant, most fruits are (partially) occluded when an image is taken from one viewpoint only. In this research, the effect of multiple camera positions and viewing angles on fruit visibility and detectability was investigated. A recording device was built that allowed the camera to be placed at different azimuth and zenith angles and to be moved horizontally along the crop row. Fourteen camera positions were chosen, and the fruit visibility in the recorded images was determined manually for each position. For images taken from a single position, with the criterion of at most 50% occlusion per fruit, the fruit detectability (FD) was never higher than 69%. The best single positions were the front views and views looking upwards at a zenith angle of 60°. The FD increased when multiple viewpoint positions were combined: with a combination of the five best positions the maximum FD was 90%.
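
    A minimal sketch of how single-view and combined-view detectability can be computed from such manual annotations; the visibility matrix here is random placeholder data, not the study's measurements.

    import numpy as np

    def detectability(visible: np.ndarray, positions: list[int]) -> float:
        """Fraction of fruits visible (<=50% occluded) from at least one position."""
        return visible[:, positions].any(axis=1).mean()

    # Hypothetical data: rows are fruits, columns are the 14 camera positions.
    rng = np.random.default_rng(0)
    visible = rng.random((100, 14)) < 0.5
    best_single = max(detectability(visible, [p]) for p in range(14))
    combined = detectability(visible, [0, 3, 6, 9, 12])  # any five positions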

    Sugar beet and volunteer potato classification using Bag-of-Visual-Words model, Scale-Invariant Feature Transform, or Speeded Up Robust Feature descriptors and crop row information

    No full text
    One of the most important steps in vision-based weed detection systems is the classification of weeds growing amongst crops. The EU SmartBot project required effective control of more than 95% of volunteer potatoes while keeping damage to sugar beet below 5%. Classification features such as colour, shape and texture have been used individually or in combination, but they have proved unable to reach the required classification accuracy under natural, varying daylight conditions. A classification algorithm was therefore developed using a Bag-of-Visual-Words (BoVW) model based on Scale-Invariant Feature Transform (SIFT) or Speeded Up Robust Feature (SURF) descriptors, combined with crop row information in the form of the Out-of-Row Regional Index (ORRI). The highest classification accuracy (96.5%, with zero false negatives) was obtained using SIFT and ORRI with a Support Vector Machine (SVM), which is considerably better than previously reported results, although the 7% false-positive rate deviated from the requirements. The average classification time of 0.10–0.11 s met the real-time requirements. The SIFT descriptor showed better classification accuracy than SURF, while classification time did not differ significantly. Adding location information (ORRI) significantly improved overall classification accuracy, and the SVM outperformed random forest and neural network classifiers. The proposed approach proved its potential under varying natural light conditions, but implementing a practical system, including vegetation segmentation and weed removal, may reduce the overall performance, and more research is needed.
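
    The pipeline lends itself to a compact sketch: cluster SIFT descriptors into a visual vocabulary, represent each image as a word histogram with the ORRI appended, and train an SVM. This assumes OpenCV >= 4.4 (cv2.SIFT_create) and scikit-learn; the ORRI values are treated as given inputs, since the paper's exact definition is not reproduced here.

    import cv2
    import numpy as np
    from sklearn.cluster import KMeans
    from sklearn.svm import SVC

    def bovw_histogram(gray: np.ndarray, vocab: KMeans) -> np.ndarray:
        """Normalised visual-word histogram of one greyscale image."""
        _, desc = cv2.SIFT_create().detectAndCompute(gray, None)
        k = vocab.n_clusters
        if desc is None:
            return np.zeros(k)
        words = vocab.predict(desc.astype(np.float32))
        hist = np.bincount(words, minlength=k).astype(float)
        return hist / hist.sum()

    def train(images, orri_values, labels, k: int = 100):
        """Fit a visual vocabulary and an SVM on histogram + ORRI features."""
        sift = cv2.SIFT_create()
        descs = []
        for img in images:  # greyscale uint8 images
            _, d = sift.detectAndCompute(img, None)
            if d is not None:
                descs.append(d)
        vocab = KMeans(n_clusters=k, n_init=10).fit(
            np.vstack(descs).astype(np.float32))
        X = np.array([np.append(bovw_histogram(img, vocab), orri)
                      for img, orri in zip(images, orri_values)])
        return vocab, SVC(kernel="rbf").fit(X, labels)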

    Quantification of simulated cow urine puddle areas using a thermal IR camera

    No full text
    In Europe, National Emission Ceilings (NEC) have been set to regulate the emission of harmful gases such as ammonia (NH3). From NH3 emission models and a sensitivity analysis, it is known that one of the major variables determining NH3 emission from dairy cow houses is the urine puddle area on the floor. However, puddle area data from cow houses are scarce, owing to the lack of appropriate measurement methods and the challenging measurement circumstances in the houses. In a preliminary study inside commercial dairy cow houses, an IR camera was successfully tested for distinguishing a fresh urine puddle from its background in order to determine the puddle's area. The objective of this study was to further develop, improve and validate the IR camera method to determine the area of a warm fluid layer with a measurement uncertainty of 2%. In a laboratory set-up, 90 artificial, warm, blue puddles were created, and both an IR and a colour image of each puddle were taken within 5 s of puddle application. For the colour images, three annotators determined the ground-truth puddle areas (Ap,GT). For the IR images, an adaptive IR threshold algorithm was developed, based on the mean background temperature and the standard deviation of all temperature values in an image. This algorithm was able to automatically determine the IR puddle area (Ap,IR) in each IR image. The agreement between the two methods was assessed. Ap,IR underestimated Ap,GT by 2.53%, which is compensated for by the model Ap,GT = 1.0253·Ap,IR. This regression model had a zero intercept and a noise level of only 0.0651 m², so the measurement uncertainty was 2%. In addition, Ap,IR was not affected by the mean background temperature.
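
    A minimal sketch of the adaptive threshold idea described above: pixels sufficiently warmer than the image mean, with the margin scaled by the temperature standard deviation, are counted as puddle. The factor k and the use of the whole-image mean as a stand-in for the background mean are assumptions; the regression correction is the one reported in the abstract.

    import numpy as np

    def puddle_area(temps: np.ndarray, pixel_area_m2: float, k: float = 2.0) -> float:
        """Estimate puddle area (m^2) from a thermal image of temperatures (deg C)."""
        # Adaptive threshold: mean plus k standard deviations; k is a
        # hypothetical tuning constant, not the study's value.
        threshold = temps.mean() + k * temps.std()
        a_ir = (temps > threshold).sum() * pixel_area_m2
        # Compensate the reported 2.53% underestimation: Ap,GT = 1.0253 * Ap,IR.
        return 1.0253 * a_ir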