
    Epistemic Uncertainty Quantification in Scientific Models

    In the field of uncertainty quantification (UQ), epistemic uncertainty often refers to the kind of uncertainty whose complete probabilistic description is not available, largely due to our lack of knowledge about the uncertainty. Quantifying the impact of epistemic uncertainty is naturally difficult, because most existing stochastic tools rely on the specification of probability distributions and thus do not readily apply to epistemic uncertainty; consequently, few studies and methods address it. A recent work can be found in [J. Jakeman, M. Eldred, D. Xiu, Numerical approach for quantification of epistemic uncertainty, J. Comput. Phys. 229 (2010) 4648-4663], which proposed a framework for the numerical treatment of epistemic uncertainty. In this paper, we first present a new method, similar to that of Jakeman et al. but significantly extending its capabilities. Most notably, the new method (1) does not require the encapsulation problem to be posed in a bounded domain such as a hypercube, and (2) does not require the solution of the encapsulation problem to converge point-wise. In the current formulation, the encapsulation problem may reside in an unbounded domain and, more importantly, its numerical approximation may be sought in the Lp norm. These features make the new approach more flexible and amenable to practical implementation. Both the mathematical framework and the numerical analysis are presented to demonstrate the effectiveness of the new approach. We then apply this method to one of the more restrictive uncertainty models, namely fuzzy logic, where the p-distance, the weighted expected value, and the variance are defined to assess the accuracy of the solutions. Finally, we give a brief outline of our future work on epistemic uncertainty quantification using evidence theory.
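    Since the abstract only names the fuzzy-logic quantities, a small numerical illustration may help. The Python sketch below is one plausible reading of a membership-weighted expected value and variance computed over an encapsulating grid; the triangular membership function and all names are illustrative assumptions, not the paper's actual definitions of the p-distance or the weighted moments.

        import numpy as np

        def membership_triangular(z, a=0.0, c=1.0, b=2.0):
            # Hypothetical triangular membership mu(z) on [a, b] with peak at c.
            return np.where(z < c,
                            np.clip((z - a) / (c - a), 0.0, 1.0),
                            np.clip((b - z) / (b - c), 0.0, 1.0))

        def weighted_moments(f, mu, grid):
            # Membership-weighted expected value and variance of f(z), with
            # mu normalized to act as a weight density over the grid.
            w = mu(grid)
            w = w / np.trapz(w, grid)
            mean = np.trapz(f(grid) * w, grid)
            var = np.trapz((f(grid) - mean) ** 2 * w, grid)
            return mean, var

        # Encapsulation in the spirit of the paper: sample the epistemic
        # parameter on a superset interval that contains the support of mu.
        grid = np.linspace(-1.0, 3.0, 2001)
        mean, var = weighted_moments(np.sin, membership_triangular, grid)
        print(mean, var)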

    Semantic Image Segmentation via Deep Parsing Network

    This paper addresses semantic image segmentation by incorporating rich information into a Markov Random Field (MRF), including high-order relations and a mixture of label contexts. Unlike previous works that optimize MRFs with iterative algorithms, we solve the MRF with a proposed Convolutional Neural Network (CNN), namely the Deep Parsing Network (DPN), which enables deterministic end-to-end computation in a single forward pass. Specifically, DPN extends a contemporary CNN architecture to model unary terms, and additional layers are carefully devised to approximate the mean field (MF) algorithm for pairwise terms. It has several appealing properties. First, unlike recent works that combined CNN and MRF, where many MF iterations were required for each training image during back-propagation, DPN achieves high performance by approximating a single iteration of MF. Second, DPN represents various types of pairwise terms, subsuming many existing works as special cases. Third, DPN makes MF easier to parallelize and speed up on a Graphics Processing Unit (GPU). DPN is thoroughly evaluated on the PASCAL VOC 2012 dataset, where a single DPN model yields a new state-of-the-art segmentation accuracy.
    Comment: To appear in the International Conference on Computer Vision (ICCV) 2015
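    To make the "one iteration of MF" idea concrete, here is a minimal NumPy sketch of a single mean-field update on a grid MRF, combining unary scores with a label-compatibility transform of smoothed neighbour beliefs. The box filter and the Potts-style compatibility matrix are simple stand-ins for DPN's learned pairwise and context layers, not the paper's actual architecture.

        import numpy as np
        from scipy.ndimage import uniform_filter

        def softmax(x, axis=-1):
            e = np.exp(x - x.max(axis=axis, keepdims=True))
            return e / e.sum(axis=axis, keepdims=True)

        def mean_field_step(unary, compat, size=5):
            # unary:  (H, W, L) per-pixel label scores.
            # compat: (L, L) label-compatibility (penalty) matrix.
            q = softmax(unary)                      # initialize beliefs
            # Message passing: average neighbouring beliefs per label
            # (a box filter stands in for learned pairwise filters).
            msg = np.stack([uniform_filter(q[..., l], size)
                            for l in range(q.shape[-1])], axis=-1)
            pairwise = msg @ compat.T               # compatibility transform
            return softmax(unary - pairwise)        # combine and renormalize

        H, W, L = 16, 16, 3
        q = mean_field_step(np.random.randn(H, W, L), 1.0 - np.eye(L))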

    Forgettable Federated Linear Learning with Certified Data Removal

    Federated learning (FL) is a trending distributed learning framework that enables collaborative model training without data sharing. Machine learning models trained on a dataset can nevertheless expose private information, revealing details of individual training records. In this study, we focus on an FL paradigm that grants clients the "right to be forgotten": a forgettable FL framework should bleach its global model weights so that, as if it had never seen a given client, the model reveals no information about that client. To this end, we propose the Forgettable Federated Linear Learning (2F2L) framework, featuring novel training and data-removal strategies. The training pipeline, named federated linear training, employs a linear approximation on the model parameter space to make the 2F2L framework work for deep neural networks while achieving results comparable to canonical neural network training. We also introduce FedRemoval, an efficient and effective removal strategy that tackles the computational challenges in FL by approximating the Hessian matrix using public server data from the pretrained model. Unlike previous uncertified and heuristic machine-unlearning methods in FL, we provide theoretical guarantees by bounding the difference between the model weights obtained by FedRemoval and those obtained by retraining from scratch. Experimental results on the MNIST and Fashion-MNIST datasets demonstrate the effectiveness of our method in balancing model accuracy against information removal, outperforming baseline strategies and approaching retraining from scratch.
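    As a rough illustration of the kind of Hessian-based removal the abstract describes, the following Python sketch performs one Newton-style unlearning step for a ridge-regression model, approximating the Hessian with public server data. It is a hedged reconstruction under simplifying assumptions (a linear model with squared loss, and the helper name fed_removal_step is hypothetical), not the paper's exact FedRemoval update.

        import numpy as np

        def fed_removal_step(w, X_client, y_client, X_public, lam=1e-2):
            # Gradient contribution of the client whose data must be forgotten.
            grad_client = X_client.T @ (X_client @ w - y_client)
            # Hessian surrogate built from public server data plus ridge term,
            # since the full federated dataset is inaccessible to the server.
            d = w.shape[0]
            hessian = X_public.T @ X_public + lam * np.eye(d)
            # Removing the client's loss flips the sign of its gradient at the
            # current optimum, so the Newton correction is +H^{-1} grad_client.
            return w + np.linalg.solve(hessian, grad_client)

        # Toy usage: fit on pooled data, then "forget" one client's share.
        rng = np.random.default_rng(0)
        X = rng.normal(size=(200, 5)); y = X @ rng.normal(size=5)
        w = np.linalg.solve(X.T @ X + 1e-2 * np.eye(5), X.T @ y)
        w_forgotten = fed_removal_step(w, X[:40], y[:40], X[100:], lam=1e-2)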