
    Typical vial images illustrating varying lighting conditions and irregularities in media surface.

    <p>Original vial image (A) and egg density representation from QuantiFly (B) for transparent defined medium (DM). Original vial image (C) and egg density representation (D) for opaque sugar/yeast (SY) medium. Red arrows highlight artefacts present in the images: (1) clumped eggs; (2) marks on the vial base; (3) bubble artefacts in the media; (4) specular reflection from the vial surface; and (5) clumped eggs on the food surface. The colour bar represents the pixel-density estimate of an egg.</p>

    Performance of algorithm on defined media and SY-media datasets.

    <p>Each dataset contains 8 vial images. A leave-one-out cross-validation strategy was performed for each vial (7 in and 1 out). Standard error was generated from n = 5 trials (statistical replicates). QuantiFly accuracy represents the baseline algorithm with the bias correction.</p>
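The leave-one-out scheme described in the caption (for each of the 8 vials, train on the other 7 and evaluate on the held-out one) can be sketched as follows. This is a minimal illustration of the split logic only, not QuantiFly's implementation; `leave_one_out_splits` is a hypothetical helper name.

```python
def leave_one_out_splits(n_vials):
    """Yield (train_indices, held_out_index) pairs for leave-one-out CV."""
    for held_out in range(n_vials):
        train = [i for i in range(n_vials) if i != held_out]
        yield train, held_out

# 8 vials -> 8 folds, each training on 7 vials and testing on 1
splits = list(leave_one_out_splits(8))
```

Each fold's accuracy would then be averaged across the 8 held-out vials, with the n = 5 statistical replicates supplying the standard error.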

    Schematic illustration of training and evaluation modes of QuantiFly software.

    <p>(Left) Training mode: the steps required to train a QuantiFly model to recognise eggs in an image scene. (Right) Evaluation mode: the steps involved in evaluating a bulk set of images with a pre-trained model. Blue circles depict points at which a user must provide information to the system, either by specifying input/output locations of files or by labelling eggs in images.</p>

    Comparison of QuantiFly performance with a human counter.

    <p>Digital images were captured for four nutritionally different transparent media (A; C1–C4) and four different opaque media (B; D1–D4). Estimates of the eggs in each vial were compared for the following methods: automated counts from the QuantiFly algorithm; manual counts from a human; and a digital on-screen ground-truth count (grey). (C) Image of an opaque-media vial with densely clustered eggs; red arrow 2 marks a region with a high level of clustering. Error bars represent the standard error of differences in vial densities in each condition (C1–4, n = 8; D1–4, n = 5 vials per condition).</p>

    Performance of QuantiFly on transparent and opaque media compared to human manual counts and digital ground-truth counts for each dataset.

    <p>Q: QuantiFly prediction; M: manual human counts; D: digital ground-truth counts. Pairwise: Tukey's pairwise comparison; Fold difference: the fold difference between counts; Correlation: Pearson's correlation coefficient for each comparison.</p>
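The correlation reported in the table is Pearson's coefficient between pairs of count methods. A self-contained sketch of that computation is below; the counts used are made up for illustration and are not data from the paper.

```python
import math

def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Illustrative (made-up) per-vial counts for two methods:
automated = [102, 87, 150, 60]
manual = [98, 90, 145, 65]
r = pearson_r(automated, manual)  # close to 1 when the methods agree well
```

In practice `scipy.stats.pearsonr` would give the same coefficient plus a p-value; the fold difference in the table is simply the ratio of total counts between two methods.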

    Sample size calculation.

    <p>(A) The number of replicates required to achieve a confidence interval below 0.05 was calculated for the manual human count and for the QuantiFly software, using the population standard deviation calculated from Fig <a href="http://www.plosone.org/article/info:doi/10.1371/journal.pone.0127659#pone.0127659.g005" target="_blank">5A</a> and <a href="http://www.plosone.org/article/info:doi/10.1371/journal.pone.0127659#pone.0127659.g005" target="_blank">5C</a>. The plot represents the number of replicates required to separate conditions that differ by 1.1-fold. (B) Projected time requirements for counting vials for a single condition using the existing manual approach or the QuantiFly software.</p>
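A normal-approximation two-sample formula is one common way to estimate the replicates needed to resolve a 1.1-fold difference between condition means. The sketch below assumes 80% power and a two-sided alpha of 0.05; it is not the paper's exact calculation, and the `sigma`/`mean` values in the usage line are illustrative.

```python
import math

def replicates_needed(sigma, mean, fold=1.1):
    """Per-group sample size to detect a `fold` difference between two
    condition means, by the standard normal-approximation formula
    n = 2 * (z_alpha/2 + z_beta)^2 * sigma^2 / delta^2."""
    z_a = 1.96   # two-sided z for alpha = 0.05
    z_b = 0.84   # z for 80% power
    delta = mean * (fold - 1.0)          # absolute difference to detect
    n = 2 * (z_a + z_b) ** 2 * sigma ** 2 / delta ** 2
    return math.ceil(n)

# Illustrative numbers: mean count 100 eggs/vial, SD 10
n_per_group = replicates_needed(sigma=10, mean=100)
```

The key point the figure makes survives in the formula: `n` scales with `sigma**2`, so the lower variance of the automated counts translates directly into fewer replicate vials per condition.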

    Characterisation of the quantity of training material required to achieve high prediction accuracy with QuantiFly software.

    <p>The QuantiFly software's accuracy was compared on the transparent and opaque media datasets after training with 1–7 training images. (A) Average accuracy of the algorithm when compared to the digital ground-truth for the transparent media datasets (a–e). (B) Average number of eggs labelled for each level of training for the transparent media dataset. (C) Average accuracy of the algorithm when compared to the digital ground-truth for the opaque media datasets (f–j). (D) Average number of eggs labelled for each level of training for the opaque media dataset. Each level of training was performed on every image in the dataset and repeated five times for each dataset, and the accuracy was averaged across all data. Error bars are SE. A paired two-tailed Student's t-test was performed on the data (p < 0.05).</p>

    Detection pipeline for search-phase bat echolocation calls.

    <p>(a) Raw audio files are converted into a spectrogram using a Fast Fourier Transform (b). Files are de-noised (c), and a sliding-window Convolutional Neural Network (CNN) classifier (d, yellow box) produces a probability for each time step. Individual call detections are then produced using non-maximum suppression (e, green boxes), and the time in file of each prediction, along with the classifier probability, is exported as a text file.</p>
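The non-maximum suppression step (e) keeps only the strongest detection within each local time window, collapsing the per-time-step probabilities into individual calls. Below is a minimal 1-D greedy sketch of that idea; it is not the BatDetect implementation, and the `window`/`threshold` values are illustrative.

```python
def non_max_suppression_1d(probs, times, window, threshold):
    """Greedy 1-D non-maximum suppression: repeatedly keep the
    highest-probability detection above `threshold` and suppress any
    other detection within `window` seconds of a kept one."""
    order = sorted(range(len(probs)), key=lambda i: probs[i], reverse=True)
    kept = []
    for i in order:
        if probs[i] < threshold:
            break  # remaining candidates are all weaker; stop
        if all(abs(times[i] - times[j]) > window for j in kept):
            kept.append(i)
    return sorted(kept)

# Toy example: three overlapping peaks near t=0, one isolated peak at t=1
probs = [0.2, 0.9, 0.85, 0.1, 0.7]
times = [0.00, 0.01, 0.02, 0.50, 1.00]
calls = non_max_suppression_1d(probs, times, window=0.05, threshold=0.5)
# keeps the peak at t=0.01 and the one at t=1.00
```

Each kept index would then be exported as one line of the text file: the time in file plus the classifier probability, as the caption describes.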

    Spatial distribution of the BatDetect CNN's training and testing datasets.

    <p>(a) Location of training data for all experiments and one test dataset in Romania and Bulgaria (2006–2011) from time-expanded (TE) data recorded along road transects by the Indicator Bats Programme (iBats) [<a href="http://www.ploscompbiol.org/article/info:doi/10.1371/journal.pcbi.1005995#pcbi.1005995.ref007" target="_blank">7</a>], where red and black points represent training and test data, respectively. (b) Locations of additional test datasets from TE data recorded as part of iBats car transects in the UK (2005–2011), and from real-time recordings from static recorders from the Norfolk Bat Survey from 2015 (inset). Points represent the start location of each snapshot recording for each iBats transect or locations of static detectors for the Norfolk Bat Survey.</p>