8 research outputs found

    Lidar–camera semi-supervised learning for semantic segmentation

    In this work, we investigated two issues: (1) how the fusion of lidar and camera data can improve semantic segmentation performance compared with the individual sensor modalities in a supervised learning context; and (2) how fusion can also be leveraged for semi-supervised learning in order to further improve performance and to adapt to new domains without requiring any additional labelled data. A comparative study was carried out by experimentally evaluating networks trained in different setups on various scenarios, from sunny days to rainy night scenes. The networks were tested on challenging, and less common, scenarios in which cameras or lidars alone would not provide a reliable prediction. Our results suggest that semi-supervised learning and fusion techniques increase the overall performance of the network in challenging scenarios while using fewer data annotations.
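
    A minimal sketch of the two ideas the abstract combines: feature-level fusion of camera and lidar inputs in one segmentation network, and a pseudo-labelling step for unlabelled pairs. This is not the authors' code; the network sizes, names, and the confidence threshold are illustrative assumptions on top of PyTorch.

```python
# Hedged sketch: lidar-camera fusion + pseudo-label self-training.
# All layer sizes and names are assumptions, not the paper's architecture.
import torch
import torch.nn as nn
import torch.nn.functional as F

class FusionSegNet(nn.Module):
    def __init__(self, num_classes=10):
        super().__init__()
        # Separate encoders for the RGB image and the lidar depth projection.
        self.cam_enc = nn.Sequential(nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
                                     nn.Conv2d(32, 32, 3, padding=1), nn.ReLU())
        self.lidar_enc = nn.Sequential(nn.Conv2d(1, 32, 3, padding=1), nn.ReLU())
        # Fuse by channel concatenation, then a per-pixel classifier head.
        self.head = nn.Conv2d(64, num_classes, 1)

    def forward(self, rgb, depth):
        feats = torch.cat([self.cam_enc(rgb), self.lidar_enc(depth)], dim=1)
        return self.head(feats)  # (B, num_classes, H, W) logits

def pseudo_label_step(model, optimizer, rgb_u, depth_u, threshold=0.9):
    """One semi-supervised step on an unlabelled lidar-camera pair:
    train on the model's own high-confidence predictions (a common
    self-training recipe; the threshold is an assumed hyperparameter)."""
    model.eval()
    with torch.no_grad():
        probs = F.softmax(model(rgb_u, depth_u), dim=1)
        conf, pseudo = probs.max(dim=1)          # per-pixel confidence / label
    model.train()
    logits = model(rgb_u, depth_u)
    loss = F.cross_entropy(logits, pseudo, reduction='none')
    loss = (loss * (conf > threshold)).mean()    # zero out low-confidence pixels
    optimizer.zero_grad(); loss.backward(); optimizer.step()
    return loss.item()
```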

    Using self-training to reduce annotation costs for image segmentation in digital pathology

    Data scarcity is a common issue when training deep learning models for digital pathology, as large exhaustively-annotated image datasets are difficult to obtain. In this paper, we propose a self-training based approach that can exploit both (few) exhaustively annotated images and (very) sparsely-annotated images to improve the training of deep learning models for image segmentation tasks. The approach is evaluated on three public and one in-house datasets, representing a diverse set of segmentation tasks in digital pathology. The experimental results show that self-training yields significant model improvements by incorporating sparsely annotated images, and proves to be a good strategy for relieving labelling effort in the digital pathology domain.
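
    To make the sparse-annotation idea concrete, here is a hedged sketch of one common way to train on both kinds of masks: unlabelled pixels in the sparse masks carry an ignore index so they contribute no gradient, and a self-training round fills them with the model's confident predictions. The ignore value, threshold, and function names are assumptions, not the paper's implementation.

```python
# Illustrative sketch: mixing dense and sparse masks via an ignore index,
# then densifying sparse masks with pseudo-labels (self-training).
import torch
import torch.nn.functional as F

IGNORE = 255  # assumed marker for "unlabelled pixel" in sparse masks

def supervised_loss(logits, target):
    # Works for both dense and sparse masks: ignored pixels are skipped.
    return F.cross_entropy(logits, target, ignore_index=IGNORE)

def densify_with_self_training(model, image, sparse_target, threshold=0.95):
    """Fill unlabelled pixels of a sparse mask with confident model
    predictions, keeping the original human annotations untouched."""
    model.eval()
    with torch.no_grad():
        probs = F.softmax(model(image), dim=1)   # model returns (B, C, H, W) logits
        conf, pred = probs.max(dim=1)
    target = sparse_target.clone()
    fill = (target == IGNORE) & (conf > threshold)
    target[fill] = pred[fill]
    return target  # mix of human labels and pseudo-labels
```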

    Semantic segmentation of explosive volcanic plumes through deep learning

    Tracking explosive volcanic phenomena can provide important information for hazard monitoring and volcano research. Perhaps the simplest forms of monitoring instruments are visible-wavelength cameras, which are routinely deployed on volcanoes around the globe. Here, we present the development of deep learning models, based on convolutional neural networks (CNNs), to perform semantic segmentation of explosive volcanic plumes on visible imagery, classifying each pixel of an image as either explosive plume or not explosive plume. We developed three models, each with average validation accuracies of >97% under 10-fold cross-validation; we do highlight, however, that due to the limited training and validation dataset this value is likely an overestimate of real-world performance. We then present model deployment for automated retrieval of plume height, rise speed and propagation direction, all parameters that can be of great utility, particularly in ash dispersion modelling and associated aviation hazard identification. The three trained models are freely available for download at https://doi.org/10.15131/shef.data.17061509.
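
    The downstream retrieval step can be sketched as follows: given a binary "plume" mask from the CNN, estimate plume-top height above the vent and rise speed between two frames. The pixel-to-metre scale, vent row, and frame interval below are hypothetical values; a real conversion depends on camera geometry and distance to the vent, and this is not the authors' released code.

```python
# Hedged sketch: plume height and rise speed from binary segmentation masks.
import numpy as np

def plume_top_row(mask):
    """Image row of the highest plume pixel (row 0 is the top of the frame)."""
    rows = np.where(mask.any(axis=1))[0]
    return rows.min() if rows.size else None

def plume_height_m(mask, vent_row, metres_per_pixel):
    top = plume_top_row(mask)
    if top is None:
        return 0.0
    return (vent_row - top) * metres_per_pixel  # height above the vent

def rise_speed_m_s(mask_t0, mask_t1, vent_row, metres_per_pixel, dt_s):
    h0 = plume_height_m(mask_t0, vent_row, metres_per_pixel)
    h1 = plume_height_m(mask_t1, vent_row, metres_per_pixel)
    return (h1 - h0) / dt_s

# Example with hypothetical numbers: 480x640 masks, vent at row 400,
# 2 m per pixel, frames 10 s apart.
m0 = np.zeros((480, 640), dtype=bool); m0[300:400, 200:260] = True
m1 = np.zeros((480, 640), dtype=bool); m1[250:400, 200:260] = True
print(rise_speed_m_s(m0, m1, vent_row=400, metres_per_pixel=2.0, dt_s=10.0))
# -> 10.0 m/s (plume top rose 100 m in 10 s)
```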