Generative Supervised Classification Using Dirichlet Process Priors
Choosing the appropriate parameter prior distributions associated with a given Bayesian model is a challenging problem. Conjugate priors can be selected for simplicity. However, conjugate priors can be too restrictive to accurately model the available prior information. This paper studies a new generative supervised classifier which assumes that the parameter prior distributions conditioned on each class are mixtures of Dirichlet processes. The motivation for using mixtures of Dirichlet processes is their known ability to accurately model a large class of probability distributions. A Monte Carlo method allowing one to sample from the resulting class-conditional posterior distributions is then studied. The parameters appearing in the class-conditional densities can then be estimated from these generated samples (following Bayesian learning). The proposed supervised classifier is applied to the classification of altimetric waveforms backscattered from different surfaces (oceans, ice, forests, and deserts). This classification is a first step toward developing tools for the extraction of useful geophysical information from altimetric waveforms backscattered from non-oceanic surfaces.
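The stick-breaking construction gives a concrete sense of why Dirichlet processes can model a large class of distributions. The sketch below is an illustrative aside, not the paper's sampler; the function names and the standard-normal base measure are assumptions:

```python
import numpy as np

def stick_breaking_dp(alpha, base_sampler, n_atoms, rng):
    """Draw a truncated sample from a Dirichlet process DP(alpha, G0)
    via stick-breaking: v_k ~ Beta(1, alpha), atom locations ~ G0,
    and weight w_k = v_k * prod_{j<k}(1 - v_j)."""
    v = rng.beta(1.0, alpha, size=n_atoms)
    remaining = np.concatenate(([1.0], np.cumprod(1.0 - v)[:-1]))
    weights = v * remaining
    atoms = base_sampler(n_atoms, rng)
    return weights, atoms

rng = np.random.default_rng(0)
# Base measure G0 = standard normal (an illustrative choice).
weights, atoms = stick_breaking_dp(
    alpha=2.0,
    base_sampler=lambda n, r: r.normal(size=n),
    n_atoms=200,
    rng=rng)
# The truncated weights sum to nearly 1 for a long enough truncation.
```

The resulting discrete measure (weights over random atoms) is one draw from the DP; mixing over such draws is what yields the flexible class-conditional priors the abstract refers to.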
Investment in a Monopoly with Bayesian Learning
We study how learning affects an uninformed monopolist's supply and investment decisions under multiplicative demand uncertainty. The monopolist is uninformed because it does not know one of the parameters defining the distribution of random demand; observing prices reveals this information slowly. We first show how to incorporate Bayesian learning into dynamic programming by focusing on sufficient statistics and conjugate families of distributions, and we show that these are necessary for solving the dynamic program either analytically or numerically: for arbitrary distributions, the infinite-horizon program generally admits no analytical or numerical solution. We then use specific distributions to study the monopolist's behavior. Specifically, we rely on the fact that the family of normal distributions with an unknown mean is conjugate for samples from a normal distribution, which yields closed-form solutions for the optimal supply and investment decisions. This enables us to study the effect of learning on supply and investment decisions, as well as on the steady-state level of capital. Our findings are as follows. Learning affects the monopolist's behavior: the higher the expected mean of the demand shock given its beliefs, the higher the supply and the lower the investment. Although learning does not affect the steady-state level of capital, since the uninformed monopolist becomes informed in the limit, it reduces the speed of convergence to the steady state.
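The normal-with-unknown-mean conjugate update that the abstract relies on has a simple closed form. A minimal sketch, with hypothetical function and parameter names and a known observation variance assumed:

```python
import numpy as np

def normal_update(mu0, tau0_sq, sigma_sq, observations):
    """Conjugate Bayesian update: prior N(mu0, tau0_sq) on the unknown
    mean of demand shocks; observations ~ N(mu, sigma_sq) with known
    sigma_sq. Returns the posterior mean and variance, which remain in
    the normal family (the conjugacy property)."""
    n = len(observations)
    xbar = float(np.mean(observations))
    post_var = 1.0 / (1.0 / tau0_sq + n / sigma_sq)
    post_mean = post_var * (mu0 / tau0_sq + n * xbar / sigma_sq)
    return post_mean, post_var

# Beliefs sharpen as price observations accumulate: the posterior
# variance shrinks and the mean moves toward the sample average.
mu, var = normal_update(mu0=0.0, tau0_sq=4.0, sigma_sq=1.0,
                        observations=[1.1, 0.9, 1.0])
```

Because the posterior stays normal, the pair (posterior mean, posterior variance) is a sufficient statistic for beliefs, which is exactly what makes the dynamic program tractable.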
Deep Adaptive Feature Embedding with Local Sample Distributions for Person Re-identification
Person re-identification (re-id) aims to match pedestrians observed by
disjoint camera views. It has attracted increasing attention in computer vision
due to its importance to surveillance systems. To combat the major challenge of
cross-view visual variations, deep embedding approaches have been proposed that
learn a compact feature space from images such that Euclidean distances
correspond to a cross-view similarity metric. However, the global Euclidean
distance cannot faithfully characterize the ideal similarity in a complex
visual feature space because features of pedestrian images exhibit unknown
distributions due to large variations in poses, illumination and occlusion.
Moreover, intra-personal training samples within a local range can robustly
guide the deep embedding against uncontrolled variations, yet this local
structure cannot be captured by a global Euclidean distance. In this paper, we
study the problem of person re-id by proposing a novel sampling strategy to
mine suitable \textit{positives}
(i.e. intra-class) within a local range to improve the deep embedding in the
context of large intra-class variations. Our method is capable of learning a
deep similarity metric adaptive to local sample structure by minimizing each
sample's local distances while propagating through the relationship between
samples to attain the whole intra-class minimization. To this end, a novel
objective function is proposed to jointly optimize similarity metric learning,
local positive mining and robust deep embedding. This yields local
discriminations by selecting local-ranged positive samples, and the learned
features are robust to dramatic intra-class variations. Experiments on
benchmarks show that our method achieves state-of-the-art results.
Comment: Published in Pattern Recognition
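As a rough illustration of mining positives within a local range, the helper below restricts intra-class positives to an anchor's nearest neighbors. This is a hypothetical sketch, not the paper's objective or sampling procedure:

```python
import numpy as np

def mine_local_positives(embeddings, labels, anchor_idx, k=5):
    """Illustrative local positive mining (hypothetical helper): among
    the k nearest neighbors of the anchor in embedding space, keep only
    the same-class (intra-class) samples."""
    diffs = embeddings - embeddings[anchor_idx]
    dists = np.linalg.norm(diffs, axis=1)
    dists[anchor_idx] = np.inf          # exclude the anchor itself
    neighbors = np.argsort(dists)[:k]   # the local range: k nearest
    return [i for i in neighbors if labels[i] == labels[anchor_idx]]
```

Minimizing distances only to such locally mined positives, rather than to all intra-class samples, is what keeps the learned metric adaptive to local sample structure.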
FRAug: Tackling Federated Learning with Non-IID Features via Representation Augmentation
Federated Learning (FL) is a decentralized learning paradigm, in which
multiple clients collaboratively train deep learning models without
centralizing their local data, and hence preserve data privacy. Real-world
applications usually involve a distribution shift across the datasets of the
different clients, which hurts the generalization ability of the clients to
unseen samples from their respective data distributions. In this work, we
address the recently proposed feature shift problem where the clients have
different feature distributions, while the label distribution is the same. We
propose Federated Representation Augmentation (FRAug) to tackle this practical
and challenging problem. Our approach generates synthetic client-specific
samples in the embedding space to augment the usually small client datasets.
For that, we train a shared generative model to fuse the clients' knowledge
learned from their different feature distributions. This generator synthesizes
client-agnostic embeddings, which are then locally transformed into
client-specific embeddings by Representation Transformation Networks (RTNets).
By transferring knowledge across the clients, the generated embeddings act as a
regularizer for the client models and reduce overfitting to the local original
datasets, hence improving generalization. Our empirical evaluation on public
benchmarks and a real-world medical dataset demonstrates the effectiveness of
the proposed method, which substantially outperforms the current
state-of-the-art FL methods for non-IID features, including PartialFed and
FedBN.
Comment: ICCV 202
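A minimal sketch of the data flow described above, with a linear stand-in generator and a per-client RTNet. All names, shapes, and the residual form are assumptions; in FRAug both components are trained networks:

```python
import numpy as np

rng = np.random.default_rng(0)
EMB = 16  # embedding dimension (illustrative)

# Shared generator (hypothetical stand-in): maps noise to
# client-agnostic embeddings; in FRAug it is trained collaboratively
# so that it fuses knowledge from all clients' feature distributions.
W_gen = rng.normal(scale=0.1, size=(8, EMB))

def generator(z):
    return np.tanh(z @ W_gen)

# Per-client Representation Transformation Network (RTNet), here a
# single residual linear map per client for illustration.
rtnet_weights = {c: rng.normal(scale=0.1, size=(EMB, EMB))
                 for c in ["client_a", "client_b"]}

def rtnet(client, e):
    return e + e @ rtnet_weights[client]

z = rng.normal(size=(4, 8))
shared = generator(z)                  # client-agnostic embeddings
augmented = rtnet("client_a", shared)  # client-specific embeddings
```

The augmented embeddings would then be mixed into each client's local training batches, acting as the regularizer the abstract describes.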