
    Long-term underwater camera surveillance for monitoring and analysis of fish populations

    Long-term monitoring of the underwater environment is still labour-intensive work. Using underwater surveillance cameras to monitor this environment can make the task less labour-intensive, and the recorded data can be stored, making the research reproducible. In this work, a system for analysing long-term underwater camera footage (more than 3 years of footage, 12 hours a day, from 10 cameras) is described. The system uses video processing software to detect and recognise fish species. The footage is processed on supercomputers: marine biologists can request automatic processing of the videos and afterwards analyse the results through a web interface that displays counts of fish species in the camera footage.
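    The abstract does not describe the project's detection software, but a common starting point for detecting moving fish in fixed-camera footage is frame differencing. The sketch below is purely illustrative (the function name, threshold, and toy frames are assumptions, not the system's actual code):

```python
import numpy as np

def detect_motion(prev_frame, frame, threshold=25):
    """Flag pixels that changed notably between two greyscale frames.

    Returns a boolean foreground mask; a real system would clean this mask
    (morphological filtering) and group pixels into per-fish detections.
    """
    diff = np.abs(frame.astype(np.int16) - prev_frame.astype(np.int16))
    return diff > threshold

# Toy frames: a bright "fish" blob moves two pixels to the right.
prev = np.zeros((32, 32), dtype=np.uint8)
prev[10:14, 5:12] = 200
curr = np.zeros((32, 32), dtype=np.uint8)
curr[10:14, 7:14] = 200

mask = detect_motion(prev, curr)
print(mask.sum())  # changed pixels at the blob's leading and trailing edges
```

    Only the pixels the blob entered or left are flagged; the overlap region is unchanged, which is why background subtraction alone undercounts slow-moving fish and is usually combined with tracking.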

    Exploring the potential of deep learning models for fish classification

    In this thesis we studied and applied one of the recently proposed deep learning architectures, the Vision Transformer (ViT). We evaluated the ViT model with and without transfer learning, and with and without image augmentation, on three different publicly available datasets. We also evaluated two other popular deep neural network models, VGG16 and Inception V3, under the same conditions and on the same three datasets. In the overall comparison, ViT showed excellent performance and can be recommended for fish image classification.
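    The core idea that distinguishes ViT from the convolutional baselines above is tokenization: the image is split into fixed-size patches, each patch is linearly projected to an embedding, a class token is prepended, and positional embeddings are added before the transformer encoder. A minimal numpy sketch of that input pipeline (sizes and the random projection are assumptions, not the thesis's configuration):

```python
import numpy as np

rng = np.random.default_rng(0)

def patchify(image, patch):
    """Split an HxWxC image into flattened non-overlapping patches."""
    h, w, c = image.shape
    rows, cols = h // patch, w // patch
    p = image[:rows * patch, :cols * patch].reshape(rows, patch, cols, patch, c)
    return p.transpose(0, 2, 1, 3, 4).reshape(rows * cols, patch * patch * c)

# Toy input: 32x32 RGB fish crop, 8x8 patches -> 16 tokens of 192 values.
image = rng.random((32, 32, 3))
patch_size, embed_dim = 8, 64
tokens = patchify(image, patch_size)                # (16, 192)

W = rng.normal(size=(tokens.shape[1], embed_dim))   # learned projection (random here)
embedded = tokens @ W                               # (16, 64)

cls = np.zeros((1, embed_dim))                      # learnable [CLS] token
seq = np.vstack([cls, embedded])                    # (17, 64)
seq = seq + rng.normal(size=seq.shape) * 0.02       # stand-in for positional embeddings

print(seq.shape)  # sequence fed to the transformer encoder
```

    Transfer learning then amounts to loading pretrained weights for the projection and encoder and fine-tuning them on the fish datasets, which is why it helps so much when annotated fish images are scarce.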

    Field Programmable Gate Array (FPGA) Based Fish Detection Using Haar Classifiers

    The quantification of abundance, size, and distribution of fish is critical to properly manage and protect marine ecosystems and regulate marine fisheries. Currently, fish surveys are conducted using fish tagging, scientific diving, and/or capture-and-release methods (i.e., net trawls), all of which are costly and time-consuming. Providing an automated way to conduct fish surveys could therefore be a real benefit to marine managers. To provide automated fish counts and classification, we propose an automated fish species classification system using computer vision. This system can count and classify fish in underwater video images using a classification method known as Haar classification. We have partnered with the Birch Aquarium to obtain underwater images of a variety of fish species, and we present in this paper the implementation of our vision system and its detection results for our first test species, the Scythe Butterfly fish, subject of the Birch Aquarium logo.
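    Haar classifiers are built from rectangle features that can be evaluated in constant time via an integral image (summed-area table), which is what makes them attractive for FPGA implementation. A self-contained sketch of one such feature (the toy image and window sizes are assumptions, not the paper's cascade):

```python
import numpy as np

def integral_image(img):
    """Summed-area table with a zero row/column prepended for easy indexing."""
    ii = np.zeros((img.shape[0] + 1, img.shape[1] + 1), dtype=np.int64)
    ii[1:, 1:] = img.cumsum(0).cumsum(1)
    return ii

def box_sum(ii, r, c, h, w):
    """Sum of pixels in the h x w box with top-left corner (r, c), in O(1)."""
    return ii[r + h, c + w] - ii[r, c + w] - ii[r + h, c] + ii[r, c]

def haar_two_rect_vertical(ii, r, c, h, w):
    """Two-rectangle Haar feature: left half minus right half of the window."""
    half = w // 2
    return box_sum(ii, r, c, h, half) - box_sum(ii, r, c + half, h, half)

# Toy image: bright left half, dark right half -> strong vertical-edge response.
img = np.zeros((8, 8), dtype=np.int64)
img[:, :4] = 10
ii = integral_image(img)
resp = haar_two_rect_vertical(ii, 0, 0, 8, 8)
print(resp)  # 8 rows * 4 cols * 10 = 320
```

    A trained cascade thresholds thousands of such features in sequence, rejecting non-fish windows early; only four table lookups per rectangle are needed regardless of window size.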

    Fish species recognition using transfer learning techniques

    Marine species recognition is the process of identifying various species, which helps in population estimation and in identifying endangered types so that remedies and actions can be taken. The superior performance of deep learning for classification comes from estimating millions of parameters, which requires large annotated datasets. However, many fish species are becoming extinct, which reduces the number of available samples. The unavailability of a large dataset is a significant hurdle to applying a deep neural network, and it can be overcome using transfer learning. We propose a method that takes underwater fish images as input and applies transfer learning with a pre-trained Google Inception-v3 model to detect the fish species. We evaluated the proposed method on the Fish4Knowledge (F4K) dataset and obtained an accuracy of 95.37%. The research would help marine biologists identify fish presence and quantity, understand the underwater environment, encourage its preservation, and study the behaviour and interactions of marine animals.
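    The usual transfer-learning recipe here is to freeze the pre-trained backbone and train only a small classification head on its features. The sketch below simulates frozen backbone features with synthetic clusters so it stays self-contained (the feature dimension, class count, and learning rate are assumptions; in the real pipeline the features would come from Inception-v3's penultimate layer):

```python
import numpy as np

rng = np.random.default_rng(42)

# Stand-in for frozen backbone features: three fish classes, 50 images each.
n_per_class, n_classes, feat_dim = 50, 3, 32
centers = rng.normal(size=(n_classes, feat_dim)) * 3
X = np.vstack([centers[k] + rng.normal(size=(n_per_class, feat_dim))
               for k in range(n_classes)])
y = np.repeat(np.arange(n_classes), n_per_class)

# Trainable softmax head: only these weights are learned.
W = np.zeros((feat_dim, n_classes))
onehot = np.eye(n_classes)[y]
for _ in range(200):
    logits = X @ W
    p = np.exp(logits - logits.max(axis=1, keepdims=True))
    p /= p.sum(axis=1, keepdims=True)
    W -= 0.1 * X.T @ (p - onehot) / len(X)   # cross-entropy gradient step

acc = (np.argmax(X @ W, axis=1) == y).mean()
print(f"training accuracy: {acc:.2f}")
```

    Because only the small head is trained, a few hundred labelled images per species can suffice, which is exactly the data-scarcity situation the abstract describes.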

    Multifactorial Uncertainty Assessment for Monitoring Population Abundance using Computer Vision

    Computer vision enables in-situ monitoring of animal populations at a lower cost and with less ecosystem disturbance than with human observers. However, computer vision uncertainty may not be fully understood by end-users, and the uncertainty assessments performed by technology experts may not fully address end-user needs. This knowledge gap can yield misinterpretations of computer vision data, and trust issues impeding the transfer of valuable technologies. We bridge this gap with a user-centered analysis of the uncertainty issues. Key uncertainty factors, and their interactions, are identified from the perspective of a core task in ecology research and beyond: counting individuals from different classes. We highlight factors for which uncertainty assessment methods are currently unavailable. The remaining uncertainty assessment methods are not interoperable. Hence it is currently difficult to assess the combined results of multiple uncertainty factors, and their impact on end-user counting tasks. We propose a framework for assessing the multifactorial uncertainty propagation along the data processing pipeline. It integrates methods from both computer vision and ecology domains, and aims at supporting the statistical analysis of abundance trends for population monitoring. Our typology of uncertainty factors and our assessment methods were drawn from interviews with marine ecology and computer vision experts, and from prior work for a fish monitoring application. Our findings contribute to enabling scientific research based on computer vision
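    One concrete way to propagate several uncertainty factors into an end-user counting task is Monte Carlo sampling: draw plausible values for each factor and push them through the count correction. This toy sketch is not the paper's framework; the calibration distributions and correction formula are assumptions for illustration:

```python
import numpy as np

rng = np.random.default_rng(7)

def corrected_count(raw_count, recall_samples, fp_rate_samples):
    """Monte Carlo propagation of detector uncertainty into an abundance estimate.

    raw_count: detections reported by the vision system.
    recall_samples / fp_rate_samples: draws from assumed uncertainty
    distributions for the detector's recall and false-positive fraction.
    """
    true_positives = raw_count * (1.0 - fp_rate_samples)
    return true_positives / recall_samples

# Hypothetical calibration: recall ~ Beta(80, 20), FP fraction ~ Beta(5, 95).
recall = rng.beta(80, 20, size=10_000)    # mean about 0.80
fp_rate = rng.beta(5, 95, size=10_000)    # mean about 0.05
samples = corrected_count(raw_count=120, recall_samples=recall,
                          fp_rate_samples=fp_rate)

lo, hi = np.percentile(samples, [2.5, 97.5])
print(f"estimated abundance: {samples.mean():.0f} (95% interval {lo:.0f}-{hi:.0f})")
```

    Reporting the interval rather than a single corrected count is what lets ecologists judge whether an apparent abundance trend exceeds the vision system's uncertainty.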

    Fish4Knowledge: Collecting and Analyzing Massive Coral Reef Fish Video Data

    This book gives a start-to-finish overview of the whole Fish4Knowledge project, in 18 short chapters, each describing one aspect of the project. The Fish4Knowledge project explored the possibilities of big video data, in this case from undersea video. Recording and analyzing 90 thousand hours of video from ten camera locations, the project gives a 3 year view of fish abundance in several tropical coral reefs off the coast of Taiwan. The research system built a remote recording network, over 100 TB of storage, supercomputer processing, video target detection and

    LifeCLEF 2016: Multimedia Life Species Identification Challenges

    Using multimedia identification tools is considered one of the most promising solutions to help bridge the taxonomic gap and build accurate knowledge of the identity, the geographic distribution and the evolution of living species. Large and structured communities of nature observers (e.g., iSpot, Xeno-canto, Tela Botanica) as well as large-scale monitoring equipment have started to produce outstanding collections of multimedia records. Unfortunately, the performance of state-of-the-art analysis techniques on such data is still not well understood and is far from reaching real-world requirements. The LifeCLEF lab proposes to evaluate these challenges through 3 tasks related to multimedia information retrieval and fine-grained classification problems in 3 domains. Each task is based on large volumes of real-world data, and the measured challenges are defined in collaboration with biologists and environmental stakeholders to reflect realistic usage scenarios. For each task, we report the methodology, the data sets, the results, and the main outcomes.

    Vision-based discrimination of tuna individuals in grow-out cages through a fish bending model

    This paper proposes a robust deformable adaptive 2D model, based on computer vision methods, that automatically fits the body (ventral silhouette) of Bluefin tuna while swimming. Our model adjusts, without human intervention, to fish shape and size, obtaining fish orientation, bending to fit their flexion motion, and has proved robust enough to overcome possible segmentation inaccuracies. Once the model has been successfully fitted to the fish, it can ensure that the detected object is a tuna and not parts of fish or other objects. Automatic requirements of the fishing industry, such as biometric measurement, specimen counting, or catch biomass estimation, could then be addressed using a stereoscopic system and meaningful information extracted from our model. We also introduce a fitting procedure based on a fitting parameter, the Fitting Error Index (FEI), which lets us assess the quality of the results. In the experiments our model achieved very high success rates (up to 90%) discriminating individuals in highly complex images acquired for us in real conditions in the Mediterranean Sea. Conclusions and future improvements to the proposed model are also discussed.

    This work was partially supported by the EU Commission [2013/410/EU] (BIACOP project). We acknowledge funding of the ACUSTUNA project, ref. CTM2015-70446-R (MINECO/FEDER, UE).

    Atienza-Vanacloig, V.; Andreu García, G.; López García, F.; Valiente González, J. M.; Puig Pons, V. (2016). Vision-based discrimination of tuna individuals in grow-out cages through a fish bending model. Computers and Electronics in Agriculture, 130:142-150. https://doi.org/10.1016/j.compag.2016.10.009
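    The abstract does not give the FEI formula, but the general pattern of "score the model-to-silhouette fit, then accept or reject the detection" can be illustrated with a simple overlap-based error. Everything below (the 1 - IoU definition, the masks, the 0.2 threshold) is a hypothetical stand-in, not the authors' index:

```python
import numpy as np

def fitting_error_index(model_mask, silhouette_mask):
    """Toy fitting error: 1 - IoU between the fitted model and the silhouette.

    0 means a perfect fit; values near 1 mean the model does not match the
    segmented blob (e.g. it is only part of a fish). NOT the paper's FEI formula.
    """
    inter = np.logical_and(model_mask, silhouette_mask).sum()
    union = np.logical_or(model_mask, silhouette_mask).sum()
    return 1.0 - inter / union

# Toy silhouette: an elongated blob; the fitted model is shifted by 2 pixels.
sil = np.zeros((20, 60), dtype=bool)
sil[6:14, 5:55] = True                  # 8 x 50 "fish body"
model = np.zeros_like(sil)
model[6:14, 7:57] = True                # same shape, offset by 2 columns

fei = fitting_error_index(model, sil)
print(f"FEI = {fei:.3f}")
accept = fei < 0.2                      # hypothetical acceptance threshold
```

    Gating downstream measurements (length, counts, biomass) on such a fitness score is what lets the system discard segmentation fragments instead of measuring them as fish.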

    Automated Image Analysis for the Detection of Benthic Crustaceans and Bacterial Mat Coverage Using the VENUS Undersea Cabled Network

    The development and deployment of sensors for undersea cabled observatories is presently biased toward the measurement of habitat variables, while sensor technologies for biological community characterization through species identification and individual counting are less common. The VENUS cabled multisensory network (Vancouver Island, Canada) deploys seafloor camera systems at several sites. Our objective in this study was to implement new automated image analysis protocols for the recognition and counting of benthic decapods (i.e., the galatheid squat lobster, Munida quadrispina), as well as for the evaluation of changes in bacterial mat coverage (i.e., Beggiatoa spp.), using a camera deployed in Saanich Inlet (103 m depth). For the counting of Munida we remotely acquired 100 digital photos at hourly intervals from 2 to 6 December 2009. For bacterial mat coverage estimation, images were taken from 2 to 8 December 2009 at the same frequency. The automated image analysis protocols for both study cases were created in MatLab 7.1. Automation for Munida counting combined filtering and background correction (Median and Top-Hat filters) with Euclidean Distances (ED) on Red-Green-Blue (RGB) channels. The Scale-Invariant Feature Transform (SIFT) features and Fourier Descriptors (FD) of tracked objects were then extracted. Animal classification was carried out with morphometric multivariate statistics (i.e., Partial Least Squares Discriminant Analysis, PLSDA) on the mean RGB value (RGBv) of each object plus Fourier Descriptors (RGBv+FD), as well as with SIFT and ED. The SIFT approach returned the best results: a higher percentage of images was correctly classified and fewer misclassification errors (an animal present but not detected) occurred. In contrast, RGBv+FD and ED produced a high incidence of records for animals that were not present.
    Bacterial mat coverage was estimated in terms of Percent Coverage and Fractal Dimension. A constant Region of Interest (ROI) was defined and background extraction by a Gaussian Blurring Filter was performed. Image subtraction within the ROI was followed by summing the RGB channel matrices. Percent Coverage was calculated on the resulting image. Fractal Dimension was estimated using the box-counting method: the images were resized to a dimension in pixels equal to a power of 2, allowing subdivision into sub-multiple quadrants. In comparisons of manual and automated Percent Coverage and Fractal Dimension estimates, the manual estimates showed an overestimation tendency for both parameters. The primary limitations on the automatic analysis of benthic images were habitat variations in sediment texture and water column turbidity. The application of filters for background correction is a required preliminary step for the efficient recognition of animals and bacterial mat patches.
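    Both mat-coverage metrics are straightforward to compute once a binary mask of mat pixels is available. A minimal sketch of Percent Coverage and the box-counting dimension on a power-of-2 mask, as described above (this is a generic reimplementation in Python, not the study's MatLab code):

```python
import numpy as np

def percent_coverage(mask):
    """Fraction of the ROI covered by mat pixels, as a percentage."""
    return 100.0 * mask.mean()

def box_counting_dimension(mask):
    """Estimate fractal dimension of a square binary mask with power-of-2 side."""
    n = mask.shape[0]
    sizes, counts = [], []
    size = n
    while size >= 1:
        blocks = mask.reshape(n // size, size, n // size, size)
        counts.append(blocks.any(axis=(1, 3)).sum())  # occupied boxes at this scale
        sizes.append(size)
        size //= 2
    # Slope of log(count) vs log(1/size) estimates the dimension.
    slope, _ = np.polyfit(np.log(1.0 / np.array(sizes)), np.log(counts), 1)
    return slope

# Sanity check: a fully covered 128x128 ROI has 100% coverage and dimension ~2.
mask = np.ones((128, 128), dtype=bool)
print(percent_coverage(mask))         # 100.0
print(box_counting_dimension(mask))   # ~2.0
```

    The power-of-2 requirement mentioned in the abstract exists precisely so the mask reshapes cleanly into sub-multiple quadrants at every halving of the box size.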