
    Catfish Fry Detection and Counting Using YOLO Algorithm

    The development of computer vision technology is advancing rapidly and penetrating all sectors, including fisheries. This research focuses on detecting and counting catfish fry. It aims to apply deep learning to detect catfish fry objects and count them accurately, helping farmers and buyers reduce the risk of loss. The detection system in this research uses digital image processing techniques to obtain information from the detected objects. The research method uses YOLO object detection, which can identify objects very quickly. Each detected catfish fry is given a bounding box, and the detection label displays the class name and confidence value. The dataset comprised 321 images of catfish fry from internet and photographic sources, which were trained to produce a new digital image model. The training, validation, and testing splits contained 831 annotated images, 83 validation images, and 83 testing images. The trained model achieved mAP 50.39%, precision 61.17%, and recall 58%. Detection tests based on the YOLO method obtained an accuracy rate of 65.7%. The average loss of the final model built with YOLO is 4.6%. In video tests with 50 to 500 fry of 2-8 cm in size, objects in the image were successfully recognized with an accuracy of 63% to 70%. Counting using the YOLO algorithm shows quite good results.
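The counting step described above amounts to filtering the detector's boxes by confidence and suppressing duplicates before counting. A minimal pure-Python sketch, assuming detections arrive as `(x1, y1, x2, y2, score)` tuples; the 0.5 thresholds and the function names are illustrative assumptions, not values from the paper:

```python
def iou(a, b):
    # Intersection-over-union of two (x1, y1, x2, y2) boxes.
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    iw, ih = max(0, ix2 - ix1), max(0, iy2 - iy1)
    inter = iw * ih
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter) if inter else 0.0

def count_fry(detections, conf_thresh=0.5, iou_thresh=0.5):
    # Keep confident detections, then greedily drop boxes that
    # overlap an already-kept box (non-maximum suppression).
    boxes = sorted((d for d in detections if d[4] >= conf_thresh),
                   key=lambda d: d[4], reverse=True)
    kept = []
    for box in boxes:
        if all(iou(box[:4], k[:4]) < iou_thresh for k in kept):
            kept.append(box)
    return len(kept)
```

With two heavily overlapping boxes, one separate box, and one low-confidence box, `count_fry` returns 2: the duplicate and the weak detection are discarded, which is what keeps the count from inflating when fry are densely packed.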

    Building extraction from satellite imagery using a digital surface model

    In this paper, two approaches to building extraction from satellite imagery and height data obtained from stereo images or LIDAR are compared. The first approach consists of detecting high-rise objects in a digital surface model and then improving recognition accuracy using segmentation of spectral information. The second approach uses the U-Net convolutional neural network, which has shown the best results for extracting objects from aerospace images on a number of large datasets. Extensive experiments were carried out to evaluate how the quality of U-Net-based building extraction depends on the data type (including high-resolution satellite images and digital surface model data). Building extraction quality of the trained network was also evaluated on satellite images with different spatial resolutions. © 2018 CEUR-WS. All rights reserved.
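The first approach above, detecting high-rise objects in a digital surface model, can be sketched as thresholding height above ground and grouping adjacent cells into candidate footprints. A minimal pure-Python sketch; the 3 m threshold and the tiny grid are illustrative assumptions, and a real pipeline would first subtract a terrain model to obtain the normalised DSM:

```python
def high_rise_mask(ndsm, min_height=3.0):
    # Mark cells whose height above ground exceeds the threshold.
    return [[h >= min_height for h in row] for row in ndsm]

def label_buildings(mask):
    # 4-connected flood fill: group adjacent above-threshold cells
    # into candidate building footprints.
    rows, cols = len(mask), len(mask[0])
    seen = [[False] * cols for _ in range(rows)]
    blobs = []
    for r in range(rows):
        for c in range(cols):
            if mask[r][c] and not seen[r][c]:
                stack, blob = [(r, c)], []
                seen[r][c] = True
                while stack:
                    y, x = stack.pop()
                    blob.append((y, x))
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if (0 <= ny < rows and 0 <= nx < cols
                                and mask[ny][nx] and not seen[ny][nx]):
                            seen[ny][nx] = True
                            stack.append((ny, nx))
                blobs.append(blob)
    return blobs
```

Each blob is one candidate building; the paper's refinement step would then check the spectral segmentation inside each footprint to reject trees and other tall non-building objects.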

    A method to improve corner detectors (Harris, Shi-Tomasi & FAST) using adaptive contrast enhancement filter

    A method to improve interest-point detectors in images that suffer from poor illumination is presented in this paper. Three algorithms, based on Harris, Shi-Tomasi, and FAST, are adopted to identify the interest points required to match, recognize, and track objects in digital images. Detecting interest points in badly illuminated images is one of the most challenging tasks in image processing: illumination is one of the main causes of degradation of natural images during acquisition and transmission, and interest-point detection on such images does not give the desired results, so handling this problem is very important. The Adaptive Contrast Enhancement Filter approach is applied to solve this problem.

    Classifying Aspect Ratios Of Images

    Disclosed herein is a mechanism for detecting and/or adjusting aspect ratio in images and videos. For example, the mechanism can include analyzing a digital image by identifying sets of pixels corresponding to one or more objects in the digital image, analyzing the set of pixels corresponding to at least one object to determine an estimated image aspect ratio, and applying an image transform to the digital image based on the estimated image aspect ratio to generate a modified digital image. In a more particular example, the one or more objects can correspond to faces, animals, logos, signs, or vehicles. In another example, the mechanism can include generating a quality rating for the digital image based on the estimated image aspect ratio
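The estimation step described above can be sketched as taking the bounding box of an object's pixels and snapping its width-to-height ratio to the nearest standard ratio. The candidate ratio list and function names are illustrative assumptions, not the disclosure's actual set:

```python
from fractions import Fraction

# Candidate standard ratios; an illustrative assumption, not the
# full set the disclosure may consider.
STANDARD_RATIOS = [Fraction(1, 1), Fraction(4, 3), Fraction(3, 2),
                   Fraction(16, 9)]

def estimate_aspect_ratio(object_pixels):
    # Bounding box of the detected object's (x, y) pixel coordinates
    # gives an estimated width:height ratio.
    xs = [x for x, _ in object_pixels]
    ys = [y for _, y in object_pixels]
    width = max(xs) - min(xs) + 1
    height = max(ys) - min(ys) + 1
    return width / height

def classify_aspect_ratio(object_pixels):
    # Snap the estimate to the nearest standard ratio.
    est = estimate_aspect_ratio(object_pixels)
    return min(STANDARD_RATIOS, key=lambda r: abs(float(r) - est))
```

The gap between the raw estimate and the snapped ratio could also serve as the basis for the quality rating mentioned in the abstract: a large gap suggests the image was cropped or stretched away from a standard format.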

    DESIGN OF DIGITAL IMAGE EDGE DETECTION APPLICATIONS USING FREI-CHEN ALGORITHM

    One commonly used image processing technique is edge detection. Edge detection is fundamental in digital image processing because it is one of the first steps in image segmentation, which aims to delineate the objects contained in an image. Edge detection identifies the boundary lines of an object against overlapping backgrounds. Several methods can currently be used for edge detection, for example the Sobel, Canny, Prewitt, Frei-Chen, and SUSAN methods. This research examines one method, the Frei-Chen algorithm. The results of this study indicate that the operator successfully detects edges in an image, and that the Frei-Chen algorithm handles edge detection in images containing noise comparatively well.

    Perimeter detection in sketched drawings of polyhedral shapes

    Paper presented at STAG17: Smart Tools and Applications in Graphics, held in Catania (Italy), 11-12 September 2017. This paper describes a new "envelope" approach for detecting object perimeters in line-drawings vectorised from sketches of polyhedral objects. Existing approaches for extracting contours from digital images are unsuitable for Sketch-Based Modelling, as they calculate where the contour is, but not which elements of the line-drawing belong to it. In our approach, the perimeter is described in terms of lines and junctions (including intersections and T-junctions) of the original line drawing
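The junction vocabulary used above can be sketched by counting how many line segments meet at each point of a vectorised drawing. The degree-based labels below are an illustrative simplification, not the paper's actual algorithm (a true T-junction test would also check whether a point lies in another segment's interior):

```python
from collections import defaultdict

def classify_junctions(segments):
    # Count how many segment endpoints coincide at each point and
    # assign a simple degree-based label.
    degree = defaultdict(int)
    for p, q in segments:
        degree[p] += 1
        degree[q] += 1
    labels = {1: "endpoint", 2: "L-junction", 3: "T-junction"}
    return {pt: labels.get(d, "intersection")
            for pt, d in degree.items()}
```

Labelling junctions this way is the prerequisite for the envelope traversal: the perimeter is then the cycle of lines and junctions that encloses all others.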

    Object Detection from a Vehicle Using Deep Learning Network and Future Integration with Multi-Sensor Fusion Algorithm

    Accuracy in detecting a moving object is critical to autonomous driving and advanced driver assistance systems (ADAS). By including object classifications from multiple sensor detections, the model of the object or environment can be identified more accurately. The critical parameters for improving accuracy are the size and speed of the moving object. All sensor data are to be used in defining a composite object representation that supplies the class information in the core object's description. This composite data can then be used by a deep learning network for complete perception fusion, in order to solve the detection and tracking of moving objects. Camera image data from subsequent frames along the time axis, in conjunction with the speed and size of the object, will further contribute to developing better recognition algorithms. In this paper, we present preliminary results using only camera images for detecting various objects with a deep learning network, as a first step toward multi-sensor fusion algorithm development. The simulation experiments based on camera images show encouraging results: the proposed deep learning detection algorithm was able to detect various objects with a certain degree of confidence. A laboratory experimental setup is being commissioned in which three different types of sensors, a digital camera with 8-megapixel resolution, a LIDAR with 40 m range, and ultrasonic distance transducers, will be used for multi-sensor fusion to identify objects in real time.
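The planned fusion step can be sketched as late fusion: each sensor produces per-class confidence scores, and a weighted average picks the composite class. This scheme, the sensor weights, and the score values are illustrative assumptions; the paper's preliminary results use the camera alone:

```python
def fuse_class_scores(sensor_scores, weights):
    # Late fusion: combine per-sensor class confidence scores with a
    # weighted average, then pick the highest-scoring class.
    classes = set()
    for scores in sensor_scores.values():
        classes.update(scores)
    total_w = sum(weights[s] for s in sensor_scores)
    fused = {}
    for cls in classes:
        fused[cls] = sum(weights[s] * scores.get(cls, 0.0)
                         for s, scores in sensor_scores.items()) / total_w
    best = max(fused, key=fused.get)
    return best, fused
```

For example, a camera weighted 0.7 scoring "car" at 0.8 and a LIDAR weighted 0.3 scoring it at 0.6 fuse to 0.74 for "car", illustrating how a strong sensor dominates without silencing the others.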