
    Classification of bird species from video using appearance and motion features

    The monitoring of bird populations can provide important information on the state of sensitive ecosystems; however, the manual collection of reliable population data is labour-intensive, time-consuming, and potentially error-prone. Automated monitoring using computer vision is therefore an attractive proposition, which could facilitate the collection of detailed data on a much larger scale than is currently possible. A number of existing algorithms can classify bird species from individual high-quality, detailed images, often relying on manual inputs (such as a priori parts labelling). However, deployment in the field necessitates fully automated in-flight classification, which remains an open challenge due to poor image quality, high and rapid variation in pose, and the similar appearance of some species. We address this as a fine-grained classification problem, and have collected a video dataset of thirteen bird classes (ten species, plus one species represented by three colour-variant classes) for training and evaluation. We present our proposed algorithm, which selects effective features from a large pool of appearance and motion features. We compare our method to others that use appearance features only, including image classification using state-of-the-art deep Convolutional Neural Networks (CNNs). Using our algorithm we achieved a 90% correct classification rate, and we show that effectively selected motion and appearance features used together can outperform state-of-the-art single-image classifiers. We also show that the most significant motion features improve correct classification rates by 7% compared to using appearance features alone.
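The step of selecting effective features from a large pool can be illustrated as a greedy forward search. This is a generic sketch of the idea, not the paper's actual selection criterion; `centroid_accuracy` is a toy scoring function introduced here purely for the example.

```python
import numpy as np

def centroid_accuracy(X, y):
    """Toy scoring function (an assumption for this sketch): nearest
    class-centroid accuracy measured on the training data itself."""
    classes = np.unique(y)
    centroids = np.array([X[y == c].mean(axis=0) for c in classes])
    d = np.linalg.norm(X[:, None] - centroids[None], axis=2)
    return float(np.mean(classes[np.argmin(d, axis=1)] == y))

def forward_select(X, y, score, k):
    """Greedy forward selection: repeatedly add the single feature from
    the pool that most improves the score of the chosen subset."""
    chosen, remaining = [], list(range(X.shape[1]))
    while len(chosen) < k and remaining:
        best = max(remaining, key=lambda f: score(X[:, chosen + [f]], y))
        chosen.append(best)
        remaining.remove(best)
    return chosen
```

Any other subset-scoring function (e.g. cross-validated classifier accuracy) can be dropped in for `score` without changing the search.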

    Automatic classification of flying bird species using computer vision techniques [forthcoming]

    Bird populations are recognised as important biodiversity indicators, so collecting reliable population data is important to ecologists and scientists. However, existing manual monitoring methods are labour-intensive, time-consuming, and potentially error-prone. The aim of our work is to develop a reliable automated system, capable of classifying the species of individual birds during flight, using video data. This is challenging, but appropriate for use in the field, since there is often a requirement to identify birds in flight rather than while stationary. We present our work, which uses a new and rich set of appearance features for classification from video. We also introduce motion features, including curvature and wing beat frequency. Combined with a Normal Bayes classifier and a Support Vector Machine (SVM) classifier, we present experimental evaluations of our appearance and motion features across a dataset comprising 7 species. Using our appearance feature set alone we achieved classification rates of 92% and 89% (using the Normal Bayes and SVM classifiers respectively), which significantly outperforms a recent comparable state-of-the-art system. Using motion features alone we achieved a lower classification rate, but this motivates our ongoing work, which seeks to combine appearance and motion features for even more robust classification.
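Of the motion features mentioned, wing beat frequency is simple to illustrate: given a per-frame vertical-position signal for the tracked bird, the dominant frequency can be read off a Fourier spectrum. A minimal sketch, assuming such a signal is available (the paper's actual feature computation may differ):

```python
import numpy as np

def wing_beat_frequency(y, fps):
    """Estimate the dominant oscillation frequency (Hz) of a 1-D
    per-frame vertical-position signal sampled at `fps` frames/second."""
    y = np.asarray(y, dtype=float)
    y = y - y.mean()                       # remove the DC component
    spectrum = np.abs(np.fft.rfft(y))
    freqs = np.fft.rfftfreq(len(y), d=1.0 / fps)
    spectrum[0] = 0.0                      # ignore any residual DC
    return freqs[np.argmax(spectrum)]

# A synthetic 4 Hz flap signal sampled at 30 fps
t = np.arange(0, 3, 1.0 / 30)
print(wing_beat_frequency(np.sin(2 * np.pi * 4.0 * t), fps=30))  # 4.0
```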

    Detecting abnormal fish trajectories using clustered and labeled data

    We propose an approach for the analysis of fish trajectories in unconstrained underwater videos. Trajectories are classified into two classes: normal trajectories, which contain the usual behaviour of fish, and abnormal trajectories, which indicate behaviours that are not as common as the normal class. The paper presents two innovations: 1) a novel approach to abnormal trajectory detection and 2) improved performance on video-based abnormal trajectory analysis of fish in unconstrained conditions. First we extract a set of features from trajectories and apply PCA. We then perform clustering on a subset of features. Based on the clustering, outlier detection is applied to each cluster. The improved results are significant considering the challenges of underwater environments, low video quality, and the erratic movement of fish.
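The pipeline described (trajectory features, then PCA, then clustering, then per-cluster outlier detection) can be sketched as follows. The farthest-point initialisation, parameter values, and the mean-plus-three-standard-deviations outlier rule are all illustrative assumptions, not the paper's:

```python
import numpy as np

def pca(X, k):
    # Project centred data onto the top-k principal directions (via SVD).
    Xc = X - X.mean(axis=0)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:k].T

def kmeans(X, k, iters=50):
    # Plain Lloyd's algorithm with deterministic farthest-point
    # initialisation; assumes no cluster goes empty.
    centres = [X[0]]
    for _ in range(1, k):
        d = np.min([np.linalg.norm(X - c, axis=1) for c in centres], axis=0)
        centres.append(X[np.argmax(d)])
    centres = np.array(centres)
    for _ in range(iters):
        labels = np.argmin(
            np.linalg.norm(X[:, None] - centres[None], axis=2), axis=1)
        centres = np.array([X[labels == c].mean(axis=0) for c in range(k)])
    return labels, centres

def flag_abnormal(features, n_clusters=2, n_components=2, z=3.0):
    """Flag a trajectory as abnormal if its distance to its cluster centre
    is more than `z` standard deviations above the cluster's mean distance."""
    X = pca(np.asarray(features, dtype=float), n_components)
    labels, centres = kmeans(X, n_clusters)
    dist = np.linalg.norm(X - centres[labels], axis=1)
    abnormal = np.zeros(len(X), dtype=bool)
    for c in range(n_clusters):
        m = labels == c
        abnormal[m] = dist[m] > dist[m].mean() + z * dist[m].std()
    return abnormal
```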

    Detection of Abnormal Fish Trajectories Using a Clustering Based Hierarchical Classifier

    We address the analysis of fish trajectories in unconstrained underwater videos, to help marine biologists detect new/rare fish behaviours and to detect environmental changes that can be observed from the abnormal behaviour of fish. The fish trajectories are separated into normal and abnormal classes, which indicate the common behaviour of fish and behaviours that are rare/unusual, respectively. The proposed solution is based on a novel type of hierarchical classifier which builds its tree from clustered and labelled data based on data similarity, using different feature sets at different levels of the hierarchy. The paper presents a new method for fish trajectory analysis with better performance than state-of-the-art techniques; the results are significant considering the challenges of underwater environments, low video quality, erratic movement of fish, and the highly imbalanced trajectory data that we used. Moreover, the proposed method is also powerful enough to classify highly imbalanced real-world datasets.

    2019 Undergraduate Research Competition Program

    11th Annual Undergraduate Research Competition, April 16-17, 2019.

    Multimodal headpose estimation and applications

    This thesis presents new research into human headpose estimation and its applications in multi-modal data. We develop new methods for head pose estimation spanning RGB-D Human Computer Interaction (HCI) to distant "in the wild" surveillance-quality data. We present the state-of-the-art solution in both head detection and head pose estimation through a new end-to-end Convolutional Neural Network architecture that reuses all of the computation for detection and pose estimation. In contrast to prior work, our method successfully spans close-up HCI to low-resolution surveillance data and is cross-modality, operating on both RGB and RGB-D data. We further address the problems of limited standard data and varying annotation quality through semi-supervised learning and novel data augmentation. (This latter contribution also finds application in the domain of life sciences.) We report the highest accuracy by a large margin, a 60% improvement, and demonstrate leading performance on multiple standardized datasets. In HCI we reduce the angular error by 40% relative to previously reported results. Furthermore, by defining a probabilistic spatial gaze model from the head pose, we show application in human-human and human-scene interaction understanding, and we present state-of-the-art results on the standard interaction datasets. A new metric to model "social mimicry" through the temporal correlation of the headpose signal is contributed and shown to be valid qualitatively and intuitively. As an application in surveillance, it is shown that with the robust headpose signal as a prior, state-of-the-art results in tracking under occlusion using a Kalman filter can be achieved. This model is named the Intentional Tracker, and it improves visual tracking metrics by up to 15%. We also apply the ALICE loss that was developed for the end-to-end detection and classification to dense classification of underwater coral reef imagery.
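A temporal-correlation mimicry metric in the spirit of the one described could take, for example, the maximum lagged Pearson correlation between two head-yaw signals; this is a hypothetical sketch and the thesis' exact definition may differ:

```python
import numpy as np

def mimicry_score(yaw_a, yaw_b, max_lag=15):
    """Maximum Pearson correlation between two equal-length head-yaw
    signals over lags 0..max_lag (with b lagging a), capturing one person
    echoing the other's head movement after a short delay."""
    best = -1.0
    for lag in range(max_lag + 1):
        n = len(yaw_a) - lag
        r = np.corrcoef(yaw_a[:n], yaw_b[lag:lag + n])[0, 1]
        best = max(best, float(r))
    return best
```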
The objective of this work is to solve the challenging task of recognizing and segmenting underwater coral imagery in the wild with sparse point-based ground-truth labelling. To achieve this, we propose an integrated Fully Convolutional Neural Network (FCNN) and Fully-Connected Conditional Random Field (CRF) based classification and segmentation algorithm. Our contributions lie in four major areas. First, we show that multi-scale crop-based training is useful in learning the initial weights in the canonical one-class classification problem. Second, we propose a modified ALICE loss for training the FCNN on sparse labels with class imbalance and establish its significance empirically. Third, we show that by artificially enhancing the point labels to small regions based on a class distance transform, we can improve the classification accuracy further. Fourth, we improve the segmentation results using fully connected CRFs with a bilateral message-passing prior. We improve upon state-of-the-art results on all publicly available datasets by a significant margin.
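The third contribution, growing sparse point annotations into small labelled regions via a distance transform, can be sketched with SciPy's Euclidean distance transform. The radius and the point representation here are illustrative assumptions, not the paper's:

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def expand_point_labels(points, shape, radius=3):
    """Grow sparse point annotations into small labelled discs.

    `points` is a list of (row, col, class_id); returns a label map of
    the given `shape`, where 0 means unlabelled."""
    labels = np.zeros(shape, dtype=int)
    mask = np.ones(shape, dtype=bool)        # True where unannotated
    for r, c, cls in points:
        labels[r, c] = cls
        mask[r, c] = False
    # For each pixel, find the nearest annotated pixel and its distance...
    dist, (ir, ic) = distance_transform_edt(mask, return_indices=True)
    grown = labels[ir, ic]
    grown[dist > radius] = 0                 # ...keep its label only within radius
    return grown
```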

    A review on the use of computer vision and artificial intelligence for fish recognition, monitoring, and management.

    Abstract: Computer vision has been applied to fish recognition for at least three decades. With the inception of deep learning techniques in the early 2010s, the use of digital images grew strongly, and this trend is likely to continue. As the number of articles published grows, it becomes harder to keep track of the current state of the art and to determine the best course of action for new studies. In this context, this article characterizes the current state of the art by identifying the main studies on the subject and briefly describing their approach. In contrast with most previous reviews related to technology applied to fish recognition, monitoring, and management, rather than providing a detailed overview of the techniques being proposed, this work focuses heavily on the main challenges and research gaps that still remain. Emphasis is given to prevalent weaknesses that prevent more widespread use of this type of technology in practical operations under real-world conditions. Some possible solutions and potential directions for future research are suggested, as an effort to bring the techniques developed in academia closer to meeting the requirements found in practice.

    Fish4Knowledge: Collecting and Analyzing Massive Coral Reef Fish Video Data

    This book gives a start-to-finish overview of the whole Fish4Knowledge project, in 18 short chapters, each describing one aspect of the project. The Fish4Knowledge project explored the possibilities of big video data, in this case from undersea video. Recording and analyzing 90 thousand hours of video from ten camera locations, the project gives a 3-year view of fish abundance in several tropical coral reefs off the coast of Taiwan. The research system built a remote recording network, over 100 TB of storage, supercomputer processing, video target detection and