
    Compensating for visibility artefacts in photoacoustic imaging with a deep learning approach providing prediction uncertainties

    Conventional photoacoustic imaging may suffer from the limited view and bandwidth of ultrasound transducers. A deep learning approach is proposed to handle these problems and is demonstrated both in simulations and in experiments on a multi-scale model of a leaf skeleton. We employed an experimental approach to build the training and test sets, using photographs of the samples as ground-truth images. Reconstructions produced by the neural network show greatly improved image quality compared to conventional approaches. In addition, this work aims at quantifying the reliability of the neural network predictions. To achieve this, the Monte Carlo dropout procedure is applied to estimate a pixel-wise degree of confidence for each predicted image. Lastly, we address the possibility of using transfer learning with simulated data in order to drastically limit the size of the experimental dataset. Comment: main text 10 pages + supplementary materials 6 pages
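    The uncertainty mechanism named here, Monte Carlo dropout, amounts to keeping dropout stochastic at inference time, running many forward passes, and reading the per-pixel spread as a confidence map. Below is a minimal PyTorch sketch of that procedure; the network, dropout rate, and input shape are placeholders for illustration, not the paper's actual architecture.

```python
import torch
import torch.nn as nn

class ReconstructionNet(nn.Module):
    """Toy stand-in (hypothetical) for a photoacoustic reconstruction network."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.Dropout2d(p=0.2),                      # kept stochastic at test time
            nn.Conv2d(16, 1, 3, padding=1),
        )

    def forward(self, x):
        return self.net(x)

def mc_dropout_predict(model, x, n_samples=50):
    """Run n_samples stochastic forward passes with dropout enabled; return
    the pixel-wise mean (prediction) and std (per-pixel confidence map)."""
    model.train()  # keeps dropout active; safe here since there is no batch norm
    with torch.no_grad():
        samples = torch.stack([model(x) for _ in range(n_samples)])
    return samples.mean(dim=0), samples.std(dim=0)

model = ReconstructionNet()
measurement = torch.randn(1, 1, 64, 64)     # placeholder input data
prediction, confidence = mc_dropout_predict(model, measurement)
print(prediction.shape, confidence.shape)   # both (1, 1, 64, 64)
```

    High values in the std map flag pixels where the stochastic reconstructions disagree, which is exactly the per-pixel degree of confidence the abstract describes.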

    Bayesian Input Attribution for Uncertainty-Aware Interpretation of Deep Learning Models

    As deep learning has grown rapidly, so has the desire to interpret its black boxes, and many analysis tools have emerged to do so. Interpretability has in fact popularized the use of deep learning in many areas, including research, manufacturing, finance, and healthcare, which need relatively accurate and reliable decision-making processes. However, there is something we should not overlook: uncertainty. Model uncertainty is directly reflected in the results of interpreting model decisions, since explanation tools depend on the model. Therefore, the uncertainty of interpretations of deep learning outputs should also be taken into account, just as quality and cost are directly impacted by measurement uncertainty. This attempt has not been made yet. In this paper, we therefore propose Bayesian input attribution, rather than discrete input attribution, by applying dropout-based approximate Bayesian inference in deep Gaussian processes to input attribution. We then extract candidate inputs that can sufficiently affect the output of the model, taking into account both the input attribution itself and its uncertainty.
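    As a rough illustration of the idea, the sketch below draws several dropout samples of a gradient-times-input attribution and aggregates their mean and standard deviation; the model, the attribution rule, and the candidate-selection score are all assumptions made for this sketch, since the abstract does not fix them.

```python
import torch
import torch.nn as nn

# Hypothetical classifier with dropout, standing in for the paper's model.
model = nn.Sequential(
    nn.Linear(10, 32), nn.ReLU(), nn.Dropout(p=0.3), nn.Linear(32, 2)
)

def bayesian_input_attribution(model, x, target, n_samples=100):
    """Gradient-x-input attribution sampled under different dropout masks;
    returns per-feature mean attribution and its uncertainty (std)."""
    model.train()  # keep dropout stochastic to approximate Bayesian sampling
    attributions = []
    for _ in range(n_samples):
        x_ = x.clone().requires_grad_(True)
        score = model(x_)[0, target]
        score.backward()
        attributions.append((x_.grad * x_).detach())
    samples = torch.stack(attributions)
    return samples.mean(dim=0), samples.std(dim=0)

x = torch.randn(1, 10)
mean_attr, attr_std = bayesian_input_attribution(model, x, target=1)
# Candidate inputs: features whose attribution is large relative to its
# uncertainty (one possible way to combine the two quantities).
z_score = mean_attr.abs() / (attr_std + 1e-8)
print(z_score.topk(3).indices)
```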

    On the Limitations of Model Stealing with Uncertainty Quantification Models

    Model stealing aims at inferring a victim model's functionality at a fraction of the original training cost. While the goal is clear, in practice the model's architecture, weight dimensions, and original training data cannot be determined exactly, leading to mutual uncertainty during stealing. In this work, we explicitly tackle this uncertainty by generating multiple possible networks and combining their predictions to improve the quality of the stolen model. For this, we compare five popular uncertainty quantification models on a model stealing task. Surprisingly, our results indicate that the considered models only lead to marginal improvements in terms of label agreement (i.e., fidelity) to the stolen model. To find the cause of this, we inspect the diversity of the models' predictions by looking at the prediction variance as a function of training iterations. We find that during training the models tend to make similar predictions, indicating that the network diversity we wanted to leverage using uncertainty quantification models is not high enough for improvements on the model stealing task. Comment: 6 pages, 1 figure, 2 tables; paper submitted to the European Symposium on Artificial Neural Networks, Computational Intelligence and Machine Learning
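    A toy version of the combination step can be sketched as follows: several surrogate networks answer the same queries, their softmax outputs are averaged into a combined stolen model, and both fidelity (label agreement with the victim) and member-prediction variance are measured. Architectures and sizes are made up for illustration, and a real attack would train the surrogates on the victim's query responses first.

```python
import torch
import torch.nn as nn

def make_surrogate():
    """Hypothetical surrogate architecture guessed for the victim model."""
    return nn.Sequential(nn.Linear(20, 64), nn.ReLU(), nn.Linear(64, 5))

victim = make_surrogate()                     # stands in for the black box
surrogates = [make_surrogate() for _ in range(5)]
queries = torch.randn(1000, 20)

with torch.no_grad():
    victim_labels = victim(queries).argmax(dim=1)
    # Combine the ensemble by averaging the members' softmax outputs.
    probs = torch.stack([s(queries).softmax(dim=1) for s in surrogates])
    ensemble_labels = probs.mean(dim=0).argmax(dim=1)

# Fidelity: label agreement between the combined stolen model and the victim.
fidelity = (ensemble_labels == victim_labels).float().mean()

# Diversity check from the paper's analysis: variance across the member
# predictions; low variance means the ensemble adds little over one model.
diversity = probs.var(dim=0).mean()
print(f"fidelity={fidelity:.3f}, prediction variance={diversity:.5f}")
```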

    Active learning for reducing labeling effort in text classification tasks

    Labeling data can be an expensive task, as it is usually performed manually by domain experts. This is cumbersome for deep learning, which depends on large labeled datasets. Active learning (AL) is a paradigm that aims to reduce labeling effort by only labeling the data which the model deems most informative. Little research has been done on AL in a text classification setting, and next to none has involved the more recent, state-of-the-art Natural Language Processing (NLP) models. Here, we present an empirical study that compares different uncertainty-based algorithms with BERT-base as the classifier. We evaluate the algorithms on two NLP classification datasets: Stanford Sentiment Treebank and KvK-Frontpages. Additionally, we explore heuristics that aim to solve presupposed problems of uncertainty-based AL, namely that it is unscalable and that it is prone to selecting outliers. Furthermore, we explore the influence of the query-pool size on the performance of AL. While the proposed heuristics did not improve the performance of AL, our results show that using uncertainty-based AL with BERT-base outperforms random sampling of data; this difference in performance can decrease as the query-pool size gets larger. Comment: Accepted as a conference paper at the joint 33rd Benelux Conference on Artificial Intelligence and the 30th Belgian Dutch Conference on Machine Learning (BNAIC/BENELEARN 2021). This camera-ready version adds several improvements, including a more thorough discussion of related work and an extended discussion section. 28 pages including references and appendices
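    Uncertainty-based AL of the kind evaluated here boils down to scoring each unlabeled example by predictive uncertainty and querying the top of the ranking. Below is a small sketch using entropy as the uncertainty score; the classifier is a hypothetical head over fixed 768-dimensional sentence embeddings rather than a real fine-tuned BERT-base, and the query-pool size is the tunable knob the abstract discusses.

```python
import torch
import torch.nn as nn

# Hypothetical stand-in for a fine-tuned BERT-base classifier; in the paper's
# setting a real BERT-base model would produce these logits from raw text.
classifier = nn.Sequential(nn.Linear(768, 256), nn.ReLU(), nn.Linear(256, 2))

def select_uncertain(classifier, pool_embeddings, query_pool_size=32):
    """Entropy-based uncertainty sampling: pick the pool examples whose
    predictive distribution is closest to uniform and send them for labeling."""
    classifier.eval()
    with torch.no_grad():
        probs = classifier(pool_embeddings).softmax(dim=1)
    entropy = -(probs * probs.clamp_min(1e-12).log()).sum(dim=1)
    return entropy.topk(query_pool_size).indices

pool = torch.randn(5000, 768)   # placeholder embeddings of unlabeled texts
to_label = select_uncertain(classifier, pool)
print(to_label.shape)           # 32 indices to hand to the annotators
```

    Random sampling, the baseline these papers compare against, would simply replace the entropy ranking with a random permutation of the pool.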