
    Underwater Fish Detection with Weak Multi-Domain Supervision

    Given a sufficiently large training dataset, it is relatively easy to train a modern convolutional neural network (CNN) as a required image classifier. However, for the task of fish classification and/or fish detection, a CNN trained to detect or classify particular fish species in particular background habitats exhibits much lower accuracy when applied to new/unseen fish species and/or fish habitats. Therefore, in practice, the CNN needs to be continuously fine-tuned to improve its classification accuracy for new project-specific fish species or habitats. In this work we present a labelling-efficient method of training a CNN-based fish detector (the Xception CNN was used as the base) on a relatively small number (4,000) of project-domain underwater fish/no-fish images from 20 different habitats. Additionally, 17,000 known-negative (that is, fish-free) general-domain (VOC2012) above-water images were used. Two publicly available fish-domain datasets supplied an additional 27,000 above-water and underwater positive (fish) images. Using this multi-domain collection of images, the trained Xception-based binary (fish/no-fish) classifier achieved 0.17% false positives and 0.61% false negatives on the project's 20,000 negative and 16,000 positive holdout test images, respectively. The area under the ROC curve (AUC) was 99.94%.
    Comment: Published in the 2019 International Joint Conference on Neural Networks (IJCNN-2019), Budapest, Hungary, July 14-19, 2019, https://www.ijcnn.org/ , https://ieeexplore.ieee.org/document/885190
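
    The abstract contains no code, but a minimal Keras-style sketch of an Xception-based binary fish/no-fish classifier of the kind described might look as follows (layer choices, dropout rate, learning rate and dataset names are illustrative assumptions, not the authors' settings):

```python
# Minimal sketch, assuming a Keras/TensorFlow setup; hyperparameters are illustrative.
import tensorflow as tf
from tensorflow.keras import layers, models

def build_fish_classifier(input_shape=(299, 299, 3)):
    # ImageNet-pretrained Xception backbone with the classification head removed
    base = tf.keras.applications.Xception(
        include_top=False, weights="imagenet",
        input_shape=input_shape, pooling="avg")
    base.trainable = True                            # fine-tune the whole backbone
    x = layers.Dropout(0.5)(base.output)             # illustrative regularization
    out = layers.Dense(1, activation="sigmoid")(x)   # fish vs. no-fish
    model = models.Model(base.input, out)
    model.compile(optimizer=tf.keras.optimizers.Adam(1e-4),
                  loss="binary_crossentropy",
                  metrics=[tf.keras.metrics.AUC(name="auc")])
    return model

# model = build_fish_classifier()
# model.fit(mixed_domain_train_ds, validation_data=holdout_ds)  # datasets assumed to exist
```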

    Understanding High Resolution Aerial Imagery Using Computer Vision Techniques

    Computer vision can make important contributions to the analysis of remote sensing satellite or aerial imagery. However, the resolution of early satellite imagery was not sufficient to provide useful spatial features. The situation is changing with the advent of very-high-spatial-resolution (VHR) imaging sensors, which make it possible to use computer vision techniques to analyse man-made structures. Meanwhile, the development of multi-view imaging techniques allows the generation of accurate point clouds as ancillary knowledge. This dissertation aims at developing computer vision and machine learning algorithms for high resolution aerial imagery analysis in the context of debris detection, building detection and roof condition assessment. High resolution aerial imagery and point clouds were provided by Pictometry International for this study.
    Debris detection after natural disasters such as tornadoes, hurricanes or tsunamis is needed for effective debris removal and allocation of limited resources. Significant advances in aerial image acquisition have greatly enabled rapid and automated detection of debris. In this dissertation, a robust debris detection algorithm is proposed: large-scale aerial images are partitioned into homogeneous regions by interactive segmentation, and debris areas are identified based on extracted texture features.
    Robust building detection is another important part of high resolution aerial imagery understanding. This dissertation develops a 3D scene classification algorithm for building detection using point clouds derived from multi-view imagery. Point clouds are divided into point clusters using Euclidean clustering, and individual point clusters are identified based on extracted spectral and 3D structural features.
    The inspection of roof condition is an important step in damage claim processing in the insurance industry. Automated roof condition assessment from remotely sensed images is proposed in this dissertation. Initially, texture classification and a bag-of-words model were applied to assess the roof condition using features derived from the whole rooftop. However, considering the complexity of residential rooftops, a more sophisticated method is proposed that divides the task into two stages: 1) roof segmentation, followed by 2) classification of segmented roof regions. Deep learning techniques are investigated for both segmentation and classification: a deep-learned feature is proposed and applied in a region-merging segmentation algorithm, and a fine-tuned deep network is adopted for roof segment classification, achieving higher accuracy than traditional methods using hand-crafted features.
    Contributions of this study include the development of algorithms for debris detection using 2D images and building detection using 3D point clouds. For roof condition assessment, solutions are explored in two directions: features derived from the whole rooftop and features extracted from each roof segment. Roof segmentation followed by segment classification was found to be the more promising approach, and the corresponding processing workflow was developed and tested. More unsupervised feature extraction techniques using deep learning can be explored in future work.
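
    As a rough illustration of the point-cloud stage described above, the following sketch (not the dissertation's code) splits a point cloud into candidate building clusters and computes simple 3D structural descriptors; DBSCAN from scikit-learn stands in for Euclidean clustering, and all thresholds and variable names are assumptions:

```python
# Illustrative sketch only: DBSCAN approximates Euclidean clustering; eps/min_points are guesses.
import numpy as np
from sklearn.cluster import DBSCAN

def cluster_points(points, eps=1.0, min_points=50):
    """points: (N, 3) array of x, y, z coordinates (units assumed to be metres)."""
    labels = DBSCAN(eps=eps, min_samples=min_points).fit_predict(points)
    return [points[labels == k] for k in np.unique(labels) if k != -1]  # -1 = noise

def structural_features(cluster):
    # Simple per-cluster 3D descriptors: mean height, height range, footprint extent
    z = cluster[:, 2]
    extent = cluster[:, :2].max(axis=0) - cluster[:, :2].min(axis=0)
    return np.array([z.mean(), z.max() - z.min(), extent[0], extent[1]])

# clusters = cluster_points(point_cloud)                     # point_cloud assumed to exist
# X = np.vstack([structural_features(c) for c in clusters])  # features fed to a classifier
```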

    Dual-wavelength thulium fluoride fiber laser based on SMF-TMSIF-SMF interferometer as potential source for microwave generation in 100-GHz region

    A dual-wavelength thulium-doped fluoride fiber (TDFF) laser is presented. The generation of the TDFF laser is achieved by incorporating a single mode-multimode-single mode (SMS) interferometer in the laser cavity. The simple SMS interferometer is fabricated from a combination of two-mode step index fiber and single-mode fiber. With this proposed design, as many as eight stable laser lines are experimentally demonstrated. Moreover, when a tunable bandpass filter is inserted in the laser cavity, a dual-wavelength TDFF laser can be achieved in the 1.5-μm region. By heterodyning the dual-wavelength laser output, simulation results suggest that the generated microwave signals can be tuned from 105.678 to 106.524 GHz with a constant step of ~0.14 GHz. The presented photonics-based microwave generation method could provide an alternative solution for 5G signal sources in the 100-GHz region.
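
    As a quick sanity check of the heterodyne relation behind the reported tuning range, the beat frequency between two laser lines separated by Δλ around wavelength λ is f_beat ≈ cΔλ/λ²; the numbers below are illustrative values chosen to land near 106 GHz, not the paper's measured spacing:

```python
# Back-of-the-envelope check of f_beat = c * d_lambda / lambda**2 (illustrative values).
c = 299_792_458.0      # speed of light, m/s
lam = 1.55e-6          # nominal wavelength in the 1.5-um region, m (assumed)
d_lam = 0.85e-9        # spacing between the two laser lines, m (assumed)

f_beat = c * d_lam / lam**2
print(f"beat frequency ~ {f_beat / 1e9:.1f} GHz")   # ~106 GHz
```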

    Quantifying marine macro litter abundance on a sandy beach using unmanned aerial systems and object-oriented machine learning methods

    Unmanned aerial systems (UASs) have recently been proven to be valuable remote sensing tools for detecting marine macro litter (MML), with the potential of supporting pollution monitoring programs on coasts. Very low altitude images, acquired with a low-cost RGB camera onboard a UAS on a sandy beach, were used to characterize the abundance of stranded macro litter. We developed an object-oriented classification strategy for automatically identifying the marine macro litter items on a UAS-based orthomosaic. A comparison is presented among three automated object-oriented machine learning (OOML) techniques, namely random forest (RF), support vector machine (SVM), and k-nearest neighbor (KNN). Overall, the detection was satisfactory for the three techniques, with mean F-scores of 65% for KNN, 68% for SVM, and 72% for RF. A comparison with manual detection showed that the RF technique was the most accurate OOML macro litter detector, as it returned the best overall detection quality (F-score) with the lowest number of false positives. Because the number of tuning parameters varied among the three automated machine learning techniques, and considering that the three generated abundance maps correlated similarly with the abundance map produced manually, the simplest KNN classifier was preferred to the more complex RF. This work contributes to advances in remote sensing marine litter surveys on coasts, optimizing automated detection on UAS-derived orthomosaics. MML abundance maps produced by UAS surveys assist coastal managers and authorities in environmental pollution monitoring programs. In addition, they contribute to the search for and evaluation of mitigation measures and improve clean-up operations in coastal environments.
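
    A minimal scikit-learn sketch of the kind of three-way classifier comparison described above is shown below; the synthetic features stand in for the per-object spectral/shape features extracted from the orthomosaic, and all hyperparameters are assumptions rather than the study's settings:

```python
# Illustrative comparison only; synthetic data stands in for the real per-object features.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.svm import SVC
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import cross_val_score

X, y = make_classification(n_samples=500, n_features=10, random_state=0)  # stand-in data

classifiers = {
    "RF":  RandomForestClassifier(n_estimators=200, random_state=0),
    "SVM": SVC(kernel="rbf", C=1.0, gamma="scale"),
    "KNN": KNeighborsClassifier(n_neighbors=5),
}

# Compare litter/no-litter detection quality by cross-validated F-score
for name, clf in classifiers.items():
    f1 = cross_val_score(clf, X, y, cv=5, scoring="f1")
    print(f"{name}: mean F-score = {f1.mean():.2f}")
```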

    Impact of image segmentation on high-content screening data quality for SK-BR-3 cells

    Background: High content screening (HCS) is a powerful method for the exploration of cellular signalling and morphology that is rapidly being adopted in cancer research. HCS uses automated microscopy to collect images of cultured cells. The images are subjected to segmentation algorithms to identify cellular structures and quantitate their morphology, for hundreds to millions of individual cells. However, image analysis may be imperfect, especially for "HCS-unfriendly" cell lines whose morphology is not well handled by current image segmentation algorithms. We asked if segmentation errors were common for a clinically relevant cell line, if such errors had measurable effects on the data, and if HCS data could be improved by automated identification of well-segmented cells.
    Results: Cases of poor cell body segmentation occurred frequently for the SK-BR-3 cell line. We trained classifiers to identify SK-BR-3 cells that were well segmented. On an independent test set created by human review of cell images, our optimal support-vector machine classifier identified well-segmented cells with 81% accuracy. The dose responses of morphological features were measurably different in well- and poorly-segmented populations. Elimination of the poorly-segmented cell population increased the purity of DNA content distributions, while appropriately retaining biological heterogeneity, and simultaneously increasing our ability to resolve specific morphological changes in perturbed cells.
    Conclusion: Image segmentation has a measurable impact on HCS data. The application of a multivariate shape-based filter to identify well-segmented cells improved HCS data quality for an HCS-unfriendly cell line, and could be a valuable post-processing step for some HCS datasets.
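
    A minimal sketch of a shape-based "well segmented?" filter of the kind described above might look as follows (the feature set, kernel choice and random stand-in data are illustrative assumptions, not the study's own):

```python
# Minimal sketch; random numbers stand in for measured per-cell shape features.
import numpy as np
from sklearn.svm import SVC
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
features = rng.normal(size=(1000, 4))        # e.g. area, perimeter, form factor, solidity
labels = (features[:, 3] > 0).astype(int)    # stand-in for human "well segmented" review

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
clf.fit(features, labels)

# Keep only cells the classifier calls well segmented before downstream analysis
keep = clf.predict(features).astype(bool)
well_segmented_cells = features[keep]
```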

    Accurate detection of dysmorphic nuclei using dynamic programming and supervised classification

    A vast array of pathologies is typified by the presence of nuclei with an abnormal morphology. Dysmorphic nuclear phenotypes feature dramatic size changes or foldings, but also entail much subtler deviations such as nuclear protrusions called blebs. Due to their unpredictable size, shape and intensity, dysmorphic nuclei are often not accurately detected in standard image analysis routines. To enable accurate detection of dysmorphic nuclei in confocal and widefield fluorescence microscopy images, we have developed an automated segmentation algorithm, called Blebbed Nuclei Detector (BleND), which relies on two-pass thresholding for initial nuclear contour detection, and an optimal path finding algorithm, based on dynamic programming, for refining these contours. Using a robust error metric, we show that our method matches manual segmentation in terms of precision and outperforms state-of-the-art nuclear segmentation methods. Its high performance allowed for building and integrating a robust classifier that recognizes dysmorphic nuclei with an accuracy above 95%. The combined segmentation-classification routine is bound to facilitate nucleus-based diagnostics and enable real-time recognition of dysmorphic nuclei in intelligent microscopy workflows.
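
    The dynamic-programming contour refinement is beyond a short snippet, but a rough scikit-image sketch of a two-pass thresholding step for initial nuclear contours, in the spirit of the approach described above, could look like this (the sample image and the use of Otsu thresholds in both passes are assumptions):

```python
# Rough sketch of two-pass thresholding for initial nuclear contours (DP refinement omitted).
import numpy as np
from skimage import data, filters, measure

img = data.human_mitosis()                     # stand-in fluorescence image

# Pass 1: coarse global threshold to find candidate nuclei
coarse = img > filters.threshold_otsu(img)
labels = measure.label(coarse)

# Pass 2: re-threshold each candidate locally to tighten its contour
refined = np.zeros_like(coarse)
for region in measure.regionprops(labels):
    minr, minc, maxr, maxc = region.bbox
    patch = img[minr:maxr, minc:maxc]
    if patch.max() > patch.min():              # skip flat patches
        refined[minr:maxr, minc:maxc] |= patch > filters.threshold_otsu(patch)
```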