Fundamental Tradeoffs in Learning with Prior Information
We seek to understand fundamental tradeoffs between the accuracy of prior
information that a learner has on a given problem and its learning performance.
We introduce the notion of prioritized risk, which differs from traditional
notions of minimax and Bayes risk by allowing us to study such fundamental
tradeoffs in settings where reality does not necessarily conform to the
learner's prior. We present a general reduction-based approach for extending
classical minimax lower-bound techniques in order to lower bound the
prioritized risk for statistical estimation problems. We also introduce a novel
generalization of Fano's inequality (which may be of independent interest) for
lower bounding the prioritized risk in more general settings involving
unbounded losses. We illustrate the ability of our framework to provide
insights into tradeoffs between prior information and learning performance for
problems in estimation, regression, and reinforcement learning.
Comment: Proceedings of the 40th International Conference on Machine Learning, Honolulu, Hawaii, USA. PMLR 202, 2023.
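For reference, a sketch of the classical risks that the prioritized risk departs from, in generic notation (not the paper's): for an estimator \hat\theta of a parameter \theta \in \Theta under a loss \ell and a prior \pi over \Theta,

    R_minimax = inf_{\hat\theta} sup_{\theta \in \Theta} E_\theta[ \ell(\hat\theta, \theta) ],
    R_Bayes(\pi) = inf_{\hat\theta} \int_\Theta E_\theta[ \ell(\hat\theta, \theta) ] \pi(d\theta).

The minimax risk assumes nothing about \theta, while the Bayes risk assumes \theta is drawn from \pi; the prioritized risk is introduced to cover the intermediate case where the learner holds a prior that reality need not follow.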
Personalized Federated Learning with Hidden Information on Personalized Prior
Federated learning (FL for short) is a distributed machine learning
technique that utilizes global servers and collaborative clients to achieve
privacy-preserving global model training without direct data sharing. However,
the heterogeneous data problem, one of the main challenges in FL, makes it
difficult for the global model to perform effectively on each client's local
data. Thus, personalized federated learning (PFL for short) aims to improve the
performance of the model on local data as much as possible. Bayesian learning,
where the parameters of the model are seen as random variables with a prior
assumption, is a feasible solution to the heterogeneous data problem: the more
local data the model uses, the more it focuses on that data, and otherwise it
falls back on the prior. When Bayesian learning is applied
to PFL, the global model provides global knowledge as a prior to the local
training process. In this paper, we employ Bayesian learning to model PFL by
assuming a prior in the scaled exponential family, and therefore propose
pFedBreD, a framework to solve the problem we model using Bregman divergence
regularization. Empirically, our experiments show that, under the prior
assumption of the spherical Gaussian and the first order strategy of mean
selection, our proposal significantly outperforms other PFL algorithms on
multiple public benchmarks.
Comment: 19 pages, 6 figures, 3 tables.
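A minimal sketch of what a Bregman-regularized local objective can look like under the spherical-Gaussian prior assumption, where the Bregman divergence to the prior mean reduces to a squared L2 (proximal) penalty; the names local_objective, prior_mean, and lam are illustrative, not pFedBreD's actual interface:

import torch
import torch.nn.functional as F

def local_objective(model, batch, prior_mean, lam=0.1):
    # Task loss on the client's local data.
    x, y = batch
    task_loss = F.cross_entropy(model(x), y)
    # Under a spherical-Gaussian prior, the Bregman divergence between the
    # local parameters and the prior (global) mean is a squared L2 distance.
    breg = sum(((p - prior_mean[name]) ** 2).sum()
               for name, p in model.named_parameters())
    return task_loss + 0.5 * lam * breg

Each client would minimize this objective locally, with prior_mean set to the global-model parameters broadcast by the server.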
MR image reconstruction using deep density priors
Algorithms for Magnetic Resonance (MR) image reconstruction from undersampled
measurements exploit prior information to compensate for missing k-space data.
Deep learning (DL) provides a powerful framework for extracting such
information from existing image datasets, through learning, and then using it
for reconstruction. Leveraging this, recent methods employed DL to learn
mappings from undersampled to fully sampled images using paired datasets of
undersampled and corresponding fully sampled images, thereby integrating prior
knowledge implicitly. In this article, we propose an alternative approach
that learns the probability distribution of fully sampled MR images using
unsupervised DL, specifically Variational Autoencoders (VAE), and use this as
an explicit prior term in reconstruction, completely decoupling the encoding
operation from the prior. The resulting reconstruction algorithm enjoys a
powerful image prior to compensate for missing k-space data without requiring
paired datasets for training and without the associated sensitivities, such as
mismatches between the undersampling patterns or coil settings used at training
and test time. We evaluated the proposed method on T1-weighted images from a
publicly available dataset, multi-coil complex images acquired from healthy
volunteers (N=8) and images with white matter lesions. The proposed algorithm,
using the VAE prior, produced visually high quality reconstructions and
achieved low RMSE values, outperforming most of the alternative methods on the
same dataset. On multi-coil complex data, the algorithm yielded accurate
magnitude and phase reconstruction results. In the experiments on images with
white matter lesions, the method faithfully reconstructed the lesions.
Keywords: Reconstruction, MRI, prior probability, machine learning, deep
learning, unsupervised learning, density estimation
Comment: Published in IEEE TMI. Main text and supplementary material, 19 pages total.
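A minimal sketch of how such an explicit, decoupled prior can enter reconstruction, assuming a single-coil Cartesian setup and a pretrained VAE exposing a hypothetical vae.elbo(x) that approximates log p(x); the published algorithm's exact formulation may differ:

import torch

def reconstruct(y, mask, vae, n_iters=200, lr=0.05, lam=1.0):
    # y: undersampled k-space measurements; mask: binary sampling mask.
    # Start from the magnitude of the zero-filled inverse FFT.
    x = torch.fft.ifft2(y).abs().clone().requires_grad_(True)
    opt = torch.optim.Adam([x], lr=lr)
    for _ in range(n_iters):
        opt.zero_grad()
        k = torch.fft.fft2(x) * mask               # re-apply the sampling pattern
        data_fit = (k - y).abs().pow(2).sum()      # data-consistency term
        prior = -vae.elbo(x)                       # approximate negative log-prior
        (data_fit + lam * prior).backward()
        opt.step()
    return x.detach()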
Signaling Without Common Prior: An Experiment
The common prior assumption is pervasive in game-theoretic models with incomplete information. This paper investigates experimentally the importance of inducing a common prior in a two-person signaling game. For a specific probability distribution of the sender’s type, the long-run behavior without an induced common prior is shown to be different from the behavior when a common prior is induced, while for other distributions behavior is similar under both regimes. We also present a learning model that allows players to learn about the other players’ strategies and the prior distribution of the sender’s type. We show that this learning model accurately accounts for all main features of the data.
Keywords: common prior; signaling; experiment; learning
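As an illustrative stand-in (not the estimated model from the paper), beliefs about the prior distribution of the sender's type can be tracked with smoothed empirical counts, in the spirit of fictitious play:

from collections import Counter

class TypeBeliefs:
    # Toy belief tracker over the sender's type; names and smoothing are illustrative.
    def __init__(self, types, smoothing=1.0):
        self.counts = Counter({t: smoothing for t in types})

    def observe(self, revealed_type):
        # Update after a round in which the sender's type is revealed.
        self.counts[revealed_type] += 1

    def belief(self):
        total = sum(self.counts.values())
        return {t: c / total for t, c in self.counts.items()}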
Learning Using Privileged Information: SVM+ and Weighted SVM
Prior knowledge can be used to improve predictive performance of learning
algorithms or reduce the amount of data required for training. The same goal is
pursued within the learning using privileged information paradigm which was
recently introduced by Vapnik et al. and is aimed at utilizing additional
information available only at training time -- a framework implemented by SVM+.
We relate the privileged information to importance weighting and show that the
prior knowledge expressible with privileged features can also be encoded by
weights associated with every training example. We show that a weighted SVM can
always replicate an SVM+ solution, while the converse is not true and we
construct a counterexample highlighting the limitations of SVM+. Finally, we
touch on the problem of choosing weights for weighted SVMs when privileged
features are not available.
Comment: 18 pages, 8 figures; integrated reviewer comments, improved typesetting.
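A minimal sketch of the weighted-SVM side of this correspondence, using scikit-learn's per-example sample_weight; the weighting rule derived from the privileged signal here is purely illustrative:

import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))                      # regular features (available at test time)
y = np.where(X[:, 0] + 0.3 * rng.normal(size=200) > 0, 1, -1)
priv_difficulty = np.abs(rng.normal(size=200))     # stand-in for a privileged-feature signal

weights = 1.0 / (1.0 + priv_difficulty)            # one simple heuristic: easier examples weigh more
clf = SVC(kernel="rbf", C=1.0)
clf.fit(X, y, sample_weight=weights)               # weighted SVM trained on regular features only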