15 research outputs found

    Automated zooplankton size measurement using deep learning: Overcoming the limitations of traditional methods

    Zooplankton size is a crucial indicator in marine ecosystems, reflecting demographic structure, species diversity, and trophic status. Traditional methods for measuring zooplankton size, which involve direct sampling and microscopic analysis, are laborious and time-consuming. In situ imaging systems are useful sampling tools; however, variation in angles, orientations, and image quality has posed considerable challenges to early machine learning models tasked with measuring size. Our study introduces a novel, efficient, and precise deep learning-based method for zooplankton size measurement. The method employs a deep residual network with one adaptation: the fully connected layer is replaced with a convolutional layer, which allows the network to generate an accurate predictive heat map for size determination. We validated this automated approach against manual sizing with ImageJ, using in situ images from the PlanktonScope. The focus was on three zooplankton groups: copepods, appendicularians, and shrimps, with 200 individuals analyzed from each group. The automated method's measurements closely matched the manual ones, with a minimal average discrepancy of just 1.84%. This advancement provides a rapid and reliable tool for zooplankton size measurement. By enabling immediate and informed ecosystem-based management decisions, our deep learning-based method addresses previous challenges and opens new avenues for zooplankton research and monitoring.
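    The fully convolutional adaptation described above can be illustrated in a few lines: a 1×1 convolution applied across a feature map acts as a per-pixel linear layer and yields a dense heat map, from which a size estimate can be read off. The following is a minimal NumPy sketch under assumed shapes; `fcn_head`, `size_from_heatmap`, the threshold, and the pixel-to-millimetre scale are all illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def fcn_head(features, w, b):
    # 1x1 convolution: a per-pixel linear map replacing the fully
    # connected layer, so the output keeps spatial resolution.
    # features: (H, W, C), w: (C,), b: scalar -> heat map (H, W)
    return features @ w + b

def size_from_heatmap(heat, thresh=0.5, px_per_mm=10.0):
    # Hypothetical post-processing: body length taken as the maximum
    # extent of above-threshold pixels, converted to millimetres.
    ys, xs = np.nonzero(heat > thresh)
    if len(xs) == 0:
        return 0.0
    span_px = np.hypot(xs.max() - xs.min(), ys.max() - ys.min())
    return span_px / px_per_mm
```

    Because the head is convolutional, the same weights apply to images of any spatial size, which is what makes a dense size-prediction heat map possible.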

    Jointly Feature Learning and Selection for Robust Tracking via a Gating Mechanism.

    To achieve effective visual tracking, a robust feature representation composed of two separate components (i.e., feature learning and selection) for an object is one of the key issues. A common assumption in visual tracking is that the raw video sequences are clean, whereas real-world data contains significant noise and irrelevant patterns; consequently, the learned features may be noisy and not all relevant. To address this problem, we propose a novel visual tracking method via a point-wise gated convolutional deep network (CPGDN) that jointly performs feature learning and feature selection in a unified framework. The proposed method performs dynamic feature selection on raw features through a gating mechanism, and can therefore adaptively focus on task-relevant patterns (i.e., the target object) while ignoring task-irrelevant patterns (i.e., the background surrounding the target). Specifically, inspired by transfer learning, we first pre-train an object appearance model offline to learn generic image features and then transfer rich feature hierarchies from the offline pre-trained CPGDN into online tracking, where the model is fine-tuned to adapt to the specific tracked objects. Finally, to alleviate the tracker drifting problem, motivated by the observation that a visual target should be an object rather than an arbitrary image region, we incorporate an edge box-based object proposal method to further improve tracking accuracy. Extensive evaluation on the widely used CVPR2013 tracking benchmark validates the robustness and effectiveness of the proposed method.
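    The point-wise gating idea can be sketched with a toy example: each feature unit receives a gate value in (0, 1), computed from the feature itself, and task-irrelevant units are driven toward zero. This is a minimal illustrative sketch, not the CPGDN itself; `pointwise_gate` and its parameters are hypothetical.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def pointwise_gate(features, w_gate, b_gate):
    """Point-wise gating: z is a per-unit gate in (0, 1); output is z * f.

    Units whose gate pre-activation is strongly negative are suppressed,
    mimicking dynamic selection of task-relevant features.
    """
    z = sigmoid(features * w_gate + b_gate)
    return z * features, z
```

    In the paper's setting the gate values are binary; the sigmoid here is a smooth stand-in that makes the selection differentiable.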

    Robust Individual-Cell/Object Tracking via PCANet Deep Network in Biomedicine and Computer Vision

    Tracking an individual cell or object over time is important for understanding drug treatment effects on cancer cells and for video surveillance. A fundamental problem of individual-cell/object tracking is to simultaneously address the appearance variations caused by intrinsic and extrinsic factors. In this paper, inspired by the architecture of deep learning, we propose a robust feature learning method for constructing discriminative appearance models without large-scale pretraining. Specifically, in the initial frames, an unsupervised method is first used to learn an abstract feature representation of the target by combining classic principal component analysis (PCA) with recent deep learning representation architectures. We use the learned PCA eigenvectors as filters and develop a novel algorithm that represents a target through a PCA-based filter bank layer, a nonlinear layer, and a patch-based pooling layer. Then, based on this feature representation, a neural network with one hidden layer is trained in a supervised mode to construct a discriminative appearance model. Finally, to alleviate the tracker drifting problem, a sample update scheme is carefully designed to keep track of the most representative and diverse samples during tracking. We test the proposed tracking method on two standard individual cell/object tracking benchmarks to demonstrate its state-of-the-art performance.
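    The "PCA eigenvectors as filters" step can be sketched as follows: flatten image patches, take the leading eigenvectors of their covariance, and treat those directions as filters. A minimal NumPy sketch; `pca_filters` and the shapes are assumptions for illustration, not the paper's code.

```python
import numpy as np

def pca_filters(patches, k):
    """Learn k PCA filters from flattened image patches.

    patches: (N, d) array of d-dimensional flattened patches.
    Returns (k, d): the top-k principal directions, usable as filters.
    """
    X = patches - patches.mean(axis=0)   # remove the mean patch
    cov = X.T @ X / len(X)               # sample covariance
    vals, vecs = np.linalg.eigh(cov)     # eigenvalues in ascending order
    return vecs[:, ::-1][:, :k].T        # leading components first
```

    Because the filters come from an eigendecomposition rather than gradient descent, this layer needs no large-scale pretraining, which matches the motivation above.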

    Study on ammonium-promoted phenol-formaldehyde-based gel for enhanced oil recovery and its seepage characteristics in porous media

    The phenol-formaldehyde crosslinker has low reactivity at low temperature and does not readily undergo crosslinking reactions. To solve this problem, an ammonium-promoted phenol-formaldehyde-based gel profile control agent that remains reactive at low temperature was developed. The basic formula is 0.30%–0.60% polymer, 0.01% resorcinol, 0.60% phenol-formaldehyde crosslinker, and 0.10% ammonium salt. The results showed that the final gelling time was 6 days at 65°C, and the final gelling strength could reach grade G. Because the plugging characteristics of the ammonium-promoted phenol-formaldehyde-based gel were still unclear, the seepage characteristics of the gel in porous media were revealed by physical simulation with sand packs. The research shows that the plugging rate in porous media with a permeability of 2000 mD reaches 98%, and the breakthrough pressure gradient is greater than 1.35 MPa/m.
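    The composition percentages translate directly into batch amounts. As a worked example, a hypothetical 1 kg batch at the mid-point of the polymer range works out as below; the batch size and the 0.45% mid-point are illustrative assumptions, not values from the study.

```python
# Hypothetical 1 kg batch of the gel formulation; percentages taken from
# the text, with the polymer at the mid-point of its 0.30%-0.60% range.
batch_g = 1000.0
recipe_pct = {
    "polymer": 0.45,
    "resorcinol": 0.01,
    "phenol-formaldehyde crosslinker": 0.60,
    "ammonium salt": 0.10,
}
masses_g = {name: batch_g * pct / 100.0 for name, pct in recipe_pct.items()}
# polymer 4.5 g, resorcinol 0.1 g, crosslinker 6.0 g, ammonium salt 1.0 g
```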

    The success plots for eight challenge attributes: illumination variation, deformation, out-of-plane rotation, in-plane rotation, scale variation, low resolution, occlusion, motion blur.


    Overview of the proposed CPGDN-based tracking method.


    Architecture of the two-layer CPGDN model [55] with a full connection layer.

    <p>The first layer is a CRBM, and the second layer is a CPGBM with two mixture components. <i>z</i> is a gating mechanism whose value is a binary variable; <i>z</i> and its complement are complementary, i.e., exactly one of them is active. We use the first component of the CPGBM as the input of a fully connected layer.</p>

    Quantitative comparison on the center distance error per frame for the four image sequences from [30].


    Structured partial least squares for simultaneous object tracking and segmentation

    Natural Science Foundation of China [61202299, 61373076, 61201359, 61202297, 61202468]; Natural Science Foundation of Fujian Province [2013J05092, 2013J01239]; China Postdoctoral Science Foundation [2011M501081]; Scientific Research Foundation of Huaqiao University [11BS109, 11BS213]; Fundamental Research Funds for the Central Universities [2013121026]

    Segmentation-based tracking methods are a class of powerful tracking methods that have been highly successful in alleviating model drift during online learning of the trackers. These methods typically include a detection component and a segmentation component: the tracked objects are first located by detection, and the detection results are then used to guide the segmentation process to reduce noise in the training data. However, one limitation is that detection and segmentation are treated entirely separately, so drift from detection may corrupt the segmentation results, which in turn aggravates the tracker's drift. In this paper, we propose a novel method that addresses this limitation by incorporating structured labeling information into partial least squares analysis for simultaneous object tracking and segmentation. This allows novel structured labeling constraints to be placed directly on the tracked objects, providing a useful contour constraint that alleviates the drifting problem. We show through both visual results and quantitative measurements on challenging sequences that our method produces more robust tracking results while obtaining accurate object segmentation. Crown Copyright (C) 2014 Published by Elsevier B.V. All rights reserved.
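    At its core, partial least squares finds projection directions that maximize the covariance between a feature matrix X and a label matrix Y; a single component of a NIPALS-style iteration can be sketched as below. This is a generic PLS sketch under assumed shapes, not the paper's structured variant; `pls_component` is a hypothetical name.

```python
import numpy as np

def pls_component(X, Y, n_iter=50):
    """One PLS component via NIPALS: weights w, c maximizing cov(Xw, Yc).

    X: (n, p) feature matrix, Y: (n, q) label matrix, both assumed centered.
    Returns the X-weight vector w (p,) and the score vector t = Xw.
    """
    u = Y[:, 0]                      # initialize Y-scores from first label column
    for _ in range(n_iter):
        w = X.T @ u
        w /= np.linalg.norm(w)       # X-weights
        t = X @ w                    # X-scores
        c = Y.T @ t
        c /= np.linalg.norm(c)       # Y-weights
        u = Y @ c                    # updated Y-scores
    return w, X @ w
```

    In the structured setting described above, Y would carry the labeling constraints on the tracked object's contour rather than plain class labels.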