Semantic Segmentation for Posidonia Oceanica Coverage Estimation

Abstract

One method of assessing the ecological status of seagrass is the analysis of videographic images for variables such as total aerial cover. Georeferenced images can be collected and matched by location over time, and any changes in coverage can be compared statistically against the expected null hypothesis. Since manual analysis of large datasets approaching a million images is not feasible, automated methods are necessary. Because underwater conditions affecting light transmission and reflection, including biological conditions, vary widely, deep learning methods are needed to distinguish seagrass from non-seagrass regions of images. Using deep semantic segmentation, we evaluated several deep neural network architectures and found that the best performer is the DeepLabv3+ network, achieving close to 88% intersection over union (IoU). We conclude that the deep learning method is more accurate and many times faster than human annotation. This method can now be used to score large image datasets for seagrass discrimination and cover estimation. Our code is available on GitHub: https://enviewfulda.github.io/LookingForSeagrassSematicSegmentatio
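
To illustrate the reported metric, the following minimal sketch (our own illustration, not the authors' released code) computes intersection over union for binary seagrass/background masks with NumPy; the function and variable names are hypothetical.

    import numpy as np

    def binary_iou(pred_mask, true_mask):
        """IoU for boolean masks where True marks pixels classified as seagrass."""
        pred = np.asarray(pred_mask, dtype=bool)
        truth = np.asarray(true_mask, dtype=bool)
        intersection = np.logical_and(pred, truth).sum()
        union = np.logical_or(pred, truth).sum()
        if union == 0:
            # Neither mask contains seagrass; treat as perfect agreement.
            return 1.0
        return float(intersection) / float(union)

    # Hypothetical usage with a thresholded network output and a ground-truth mask:
    # iou = binary_iou(model_output > 0.5, ground_truth)
    # print("IoU: {:.1%}".format(iou))

In practice this per-image score would be averaged over a held-out test set to obtain a figure comparable to the roughly 88% IoU reported above.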
