
    Robust deep learning based shrimp counting in an industrial farm setting

    Shrimp production is one of the fastest growing sectors in the aquaculture industry. Despite extensive research in recent years, stocking densities in shrimp systems still depend on manual sampling, which is neither time- nor cost-efficient and additionally compromises shrimp welfare. This paper compares the performance of automatic shrimp counting solutions for commercial Recirculating Aquaculture System (RAS) based farming systems, using eight Deep Learning based methods. The dataset comprises 1379 images of shrimps in RAS farming tanks, taken at a distance with an iPhone 11 mini and manually annotated with bounding boxes for every clearly visible shrimp. The dataset was partitioned into training (60 %, 828 samples), validation (20 %, 276 samples) and test (20 %, 275 samples) splits for training and evaluating the models. The present work demonstrates that state-of-the-art object detection models outperform manual counting and achieve high performance across the entire production range and under various conditions known to be challenging for object detection (dim light, overlapping and small animals, varying acquisition devices, image resolutions and camera-to-object distances). The highest counting performance was obtained with models based on YOLOv5m6 and Faster R-CNN (as opposed to a neural-network autoencoder architecture that estimates a density map). The best model generalizes well to an independent test set and even shows promising performance when tested on different taxa. The model performs best at densities below 200 shrimps per image, with an overall error of 5.97 %. It is assumed that this performance can be improved by enlarging the dataset, especially with images at high shrimp stocking density, and it is strongly believed that an error below the 5 % threshold is within reach, which would allow deployment of the model in an industrial setting.
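    As a rough illustration of the counting-by-detection approach described above, the minimal sketch below counts thresholded detections from a YOLOv5m6 model and compares the result to a manual count. The weights, confidence threshold, image path, and helper names are illustrative assumptions, not the authors' code; in the paper's setting the model would be fine-tuned on the annotated shrimp dataset.

```python
# Hypothetical sketch: counting shrimp as the number of detections above a
# confidence threshold, then computing the percentage error vs. a manual count.
import torch

# Pretrained YOLOv5m6 from the Ultralytics hub (assumed stand-in for the
# fine-tuned shrimp detector).
model = torch.hub.load("ultralytics/yolov5", "yolov5m6", pretrained=True)

CONF_THRESHOLD = 0.25  # assumed cutoff for a "clearly visible" shrimp


def count_shrimp(image_path: str) -> int:
    """Run detection on one image and count boxes above the threshold."""
    results = model(image_path)
    detections = results.xyxy[0]  # columns: x1, y1, x2, y2, confidence, class
    return int((detections[:, 4] >= CONF_THRESHOLD).sum())


def counting_error(predicted: int, manual: int) -> float:
    """Absolute percentage error of the automatic count vs. the manual count."""
    return abs(predicted - manual) / manual * 100.0


if __name__ == "__main__":
    pred = count_shrimp("tank_image.jpg")  # hypothetical image path
    print(f"predicted count: {pred}")
    print(f"error vs. a manual count of 180: {counting_error(pred, 180):.2f} %")
```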

    Comparison of Three Off-the-Shelf Visual Odometry Systems

    Positioning is an essential aspect of robot navigation, and visual odometry is an important technique for continuously updating a robot's internal estimate of its position, especially indoors where GPS (Global Positioning System) is unavailable. Visual odometry uses one or more cameras to find visual cues and estimate relative robot motion in 3D. Recent progress has been made, especially with fully integrated systems such as the RealSense T265 from Intel, which is the focus of this article. We compare three visual odometry systems (and one wheel odometry system, as a known baseline) on a ground robot. We do so in eight scenarios, varying the speed, the number of visual features, and the presence of humans walking in the field of view. We continuously measure the position error in translation and rotation using a ground-truth positioning system. Our results show that all odometry systems are challenged, but in different ways. The RealSense T265 and the ZED Mini have comparable performance, better than our baseline ORB-SLAM2 (mono-lens, without an inertial measurement unit (IMU)) but not excellent. In conclusion, a single odometry system may still not be sufficient, so using multiple instances and sensor-fusion approaches is necessary while waiting for additional research and further improved products.
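    The abstract does not spell out how the position error is computed, but a common way to express it is a Euclidean distance for translation and the angle of the relative rotation for orientation. The sketch below shows one such formulation; the pose format (position plus quaternion) and the sample values are assumptions, not the authors' evaluation code.

```python
# Hypothetical sketch: per-timestamp translation and rotation error between an
# odometry estimate and a ground-truth pose.
import numpy as np
from scipy.spatial.transform import Rotation as R


def translation_error(p_est: np.ndarray, p_gt: np.ndarray) -> float:
    """Euclidean distance between estimated and ground-truth positions (metres)."""
    return float(np.linalg.norm(p_est - p_gt))


def rotation_error(q_est: np.ndarray, q_gt: np.ndarray) -> float:
    """Angle (degrees) of the relative rotation between the two orientations."""
    r_rel = R.from_quat(q_gt).inv() * R.from_quat(q_est)
    return float(np.degrees(np.linalg.norm(r_rel.as_rotvec())))


if __name__ == "__main__":
    # Example: the odometry estimate drifted 5 cm along x and 2 deg about z.
    p_est, p_gt = np.array([1.05, 0.0, 0.0]), np.array([1.00, 0.0, 0.0])
    q_est = R.from_euler("z", 2.0, degrees=True).as_quat()  # x, y, z, w order
    q_gt = R.from_euler("z", 0.0, degrees=True).as_quat()
    print(f"translation error: {translation_error(p_est, p_gt):.3f} m")
    print(f"rotation error:    {rotation_error(q_est, q_gt):.2f} deg")
```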