    Efficient Weed Segmentation with Reduced Residual U-Net using Depth-wise Separable Convolution Network

    Selective weed treatment is a cost-effective method that reduces manpower and agrochemical usage; at the same time, it requires an effective computer vision system that can identify weeds and is small enough to run on resource-constrained devices. To accomplish this, a convolutional neural network named the Reduced Residual U-Net using Depth-wise separable Convolution (RRUDC) network is proposed in this paper. A Residual Depth-wise separable Convolution Block (RDCB) is introduced as the functional unit in both the contracting and expanding paths, with a residual connection incorporated inside every RDCB unit. The network performs semantic segmentation, analyzing crop-field images pixel-wise. To reduce the parameter count, depth-wise separable convolution is used, which cuts the number of parameters generated by the model to roughly 1/9 of the original count with a negligible drop in accuracy. The model is trained on the Crop Weed Field Image Dataset (CWFID), and the trained model is then pruned to shrink it further, compressing the final model by around 70% without affecting performance. The network achieves a segmentation accuracy of ~96% and a low error rate with a model size of less than 3 MB, making the proposed deep learning model suitable for conversion into a real-time computer vision application that farmers can run on resource-constrained devices in the field.
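
    As a rough illustration of the two ideas the abstract combines, a residual block built from depth-wise separable convolutions can be sketched as follows. The exact RDCB layout (kernel sizes, normalization, activations, channel widths) is not given in the abstract, so every such detail below is an assumption for illustration, not the authors' implementation.

    import torch.nn as nn

    class DepthwiseSeparableConv(nn.Module):
        """A 3x3 depth-wise convolution followed by a 1x1 point-wise convolution."""
        def __init__(self, in_ch, out_ch):
            super().__init__()
            self.depthwise = nn.Conv2d(in_ch, in_ch, kernel_size=3, padding=1, groups=in_ch)
            self.pointwise = nn.Conv2d(in_ch, out_ch, kernel_size=1)

        def forward(self, x):
            return self.pointwise(self.depthwise(x))

    class ResidualDSCBlock(nn.Module):
        """RDCB-like unit (assumed layout): two separable convolutions plus a residual skip."""
        def __init__(self, in_ch, out_ch):
            super().__init__()
            self.conv1 = DepthwiseSeparableConv(in_ch, out_ch)
            self.conv2 = DepthwiseSeparableConv(out_ch, out_ch)
            self.relu = nn.ReLU(inplace=True)
            # 1x1 projection so the skip path matches the block's output channels
            self.skip = nn.Conv2d(in_ch, out_ch, kernel_size=1) if in_ch != out_ch else nn.Identity()

        def forward(self, x):
            out = self.relu(self.conv1(x))
            out = self.conv2(out)
            return self.relu(out + self.skip(x))

    # Parameter comparison: a standard 3x3 convolution vs. its depth-wise separable counterpart.
    def count(m):
        return sum(p.numel() for p in m.parameters())

    std = nn.Conv2d(64, 128, kernel_size=3, padding=1)
    sep = DepthwiseSeparableConv(64, 128)
    # The separable version needs roughly 1/8 of the parameters at these widths;
    # for 3x3 kernels and wider layers the ratio tends toward ~1/9.
    print(count(std), count(sep))

    A full RRUDC network would stack such blocks along a U-Net's contracting and expanding paths and then prune the trained weights; neither of those steps is shown in this sketch.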

    Enhancement and Detection of Objects in Underwater Images using Image Super-resolution and Effective Object Detection Model

    An automatic underwater object recognition system is needed to reduce the costs of underwater inspections as well as the associated risks. This study proposes an effective method of detecting objects in underwater images after enhancing them with an image super-resolution technique. The proposed approach comprises two major stages: underwater image enhancement and object detection. To enhance the underwater images, a lightweight Reduced Cascading Residual Network (RCARN) that applies the image super-resolution technique is proposed. The enhanced images generated by the RCARN model are then supplied to the object detection stage, where the YOLOv3 detector is employed. To improve its performance, YOLOv3 is first trained on one of the largest datasets, COCO, and then fine-tuned on the enhanced underwater images. The dataset used in this work contains six classes of underwater objects, namely dolphin, jellyfish, octopus, seahorse, starfish, and turtle; all of these are real field images collected from various sources. With the proposed approach, an overall ACS of 95.44% and an mAP of 75.33% are achieved, improvements of ~8.75% and ~15%, respectively, compared with the originally collected low-resolution images.
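
    The two-stage pipeline the abstract describes, super-resolution enhancement followed by detection, can be sketched as follows. The RCARN and fine-tuned YOLOv3 models are not specified beyond the abstract, so they are assumed here to be already-constructed PyTorch modules; only the loop structure reflects the description above.

    from pathlib import Path

    import torch
    from torchvision.io import read_image

    # The six object classes listed in the abstract's dataset.
    CLASSES = ["dolphin", "jellyfish", "octopus", "seahorse", "starfish", "turtle"]

    def enhance_and_detect(image_dir, sr_model, detector):
        """Stage 1: super-resolve each low-resolution image; stage 2: run the detector on it."""
        results = {}
        sr_model.eval()
        detector.eval()
        with torch.no_grad():
            for path in sorted(Path(image_dir).glob("*.jpg")):
                lr = read_image(str(path)).float().unsqueeze(0) / 255.0  # 1 x C x H x W, scaled to [0, 1]
                hr = sr_model(lr)                  # enhancement (RCARN-style super-resolution, assumed interface)
                results[path.name] = detector(hr)  # detection (YOLOv3-style model fine-tuned on enhanced images)
        return results

    COCO pre-training, fine-tuning on the enhanced images, and the ACS/mAP evaluation are separate steps not shown in this sketch.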
