Predictions with dynamic Bayesian predictive synthesis are exact minimax
We analyze the combination of multiple predictive distributions for time
series data when all forecasts are misspecified. We show that a specific
dynamic form of Bayesian predictive synthesis -- a general and coherent
Bayesian framework for ensemble methods -- produces exact minimax predictive
densities with regard to Kullback-Leibler loss, providing theoretical support
for finite sample predictive performance over existing ensemble methods. A
simulation study that highlights this theoretical result is presented, showing
that dynamic Bayesian predictive synthesis outperforms other ensemble
methods across multiple metrics.
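
For concreteness, a hedged sketch of the criterion in play (the notation here is assumed for illustration, not taken from the paper): writing p for the data density and q for a combined predictive density, the Kullback-Leibler loss is

    KL(p \| q) = \mathbb{E}_p[\log p(y) - \log q(y)],

and a minimax predictive density q* attains \min_q \max_{p \in \mathcal{P}} KL(p \| q) over a class \mathcal{P} of possible (misspecified) data densities. The general Bayesian predictive synthesis construction combines J agent forecast densities h_{jt} through a synthesis function \alpha_t,

    p(y_t \mid \Phi_t) = \int \alpha_t(y_t \mid \mathbf{x}_t) \prod_{j=1}^{J} h_{jt}(x_{jt}) \, d\mathbf{x}_t,

where the specific dynamic choice of \alpha_t analyzed in the paper is not spelled out in this abstract.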
Denoising Cosine Similarity: A Theory-Driven Approach for Efficient Representation Learning
Representation learning has had a growing impact on the research and
practice of machine learning, since it makes it possible to learn
representations that transfer efficiently to various downstream tasks.
However, recent works pay little attention to the fact that real-world
datasets used during representation learning are commonly contaminated by
noise, which can degrade the quality of the learned representations. This
paper tackles the problem of learning representations that are robust to
noise in the raw dataset. To this end, inspired by
recent works on denoising and the success of the cosine-similarity-based
objective functions in representation learning, we propose the denoising
Cosine-Similarity (dCS) loss. The dCS loss is a modified cosine-similarity loss
and incorporates a denoising property, which is supported by both our
theoretical and empirical findings. To make the dCS loss implementable, we
also construct estimators of the dCS loss with statistical guarantees.
Finally, we empirically demonstrate the effectiveness of the dCS loss over
baseline objective functions in the vision and speech domains.
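
To illustrate the kind of objective described here, below is a minimal PyTorch sketch of a cosine-similarity loss with a simple denoising correction. The noise model (zero-mean isotropic noise with known per-coordinate variance), the function names, and the norm-debiasing step are all assumptions made for this sketch; the abstract does not specify the actual form of the dCS loss or its estimators.

    import torch
    import torch.nn.functional as F

    def cosine_similarity_loss(z1: torch.Tensor, z2: torch.Tensor) -> torch.Tensor:
        # Standard negative cosine-similarity objective between paired embeddings.
        return -F.cosine_similarity(z1, z2, dim=-1).mean()

    def denoising_cosine_loss(z_noisy: torch.Tensor, z_target: torch.Tensor,
                              noise_var: float = 0.1) -> torch.Tensor:
        # Hypothetical denoising variant (NOT the paper's dCS estimator):
        # assumes z_noisy = z_clean + eps, with eps zero-mean and isotropic
        # with variance noise_var per coordinate. Under this model the inner
        # product with the target is already unbiased; only the noisy norm
        # needs debiasing, since E||z_clean + eps||^2 = ||z_clean||^2 + d * noise_var.
        d = z_noisy.shape[-1]
        inner = (z_noisy * z_target).sum(dim=-1)
        debiased_sq_norm = (z_noisy ** 2).sum(dim=-1) - d * noise_var
        noisy_norm = debiased_sq_norm.clamp_min(1e-8).sqrt()
        target_norm = z_target.norm(dim=-1).clamp_min(1e-8)
        return -(inner / (noisy_norm * target_norm)).mean()

In practice such a loss would be dropped into a Siamese-style training loop in place of the plain cosine objective, with the noise variance either known from the sensing process or estimated from data.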