A review of domain adaptation without target labels
Domain adaptation has become a prominent problem setting in machine learning
and related fields. This review asks the question: how can a classifier learn
from a source domain and generalize to a target domain? We present a
categorization of approaches, divided into what we refer to as sample-based,
feature-based and inference-based methods. Sample-based methods focus on
weighting individual observations during training based on their importance to
the target domain. Feature-based methods revolve around mapping, projecting
and representing features such that a source classifier performs well on the
target domain. Inference-based methods incorporate adaptation into the
parameter estimation procedure, for instance through constraints on the
optimization procedure. Additionally, we review a number of conditions that
allow for formulating bounds on the cross-domain generalization error. Our
categorization highlights recurring ideas and raises questions important to
further research.
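As an illustration of the sample-based idea, the following is a minimal Python
sketch (not taken from the review) of importance weighting with a
source-vs-target domain classifier; the data arrays, classifier choice and
function names are assumptions for illustration only.

import numpy as np
from sklearn.linear_model import LogisticRegression

def importance_weights(X_source, X_target):
    """Estimate p_target(x) / p_source(x) with a source-vs-target domain classifier."""
    X = np.vstack([X_source, X_target])
    d = np.concatenate([np.zeros(len(X_source)), np.ones(len(X_target))])
    domain_clf = LogisticRegression(max_iter=1000).fit(X, d)
    p_target = domain_clf.predict_proba(X_source)[:, 1]
    # The odds of a source point looking target-like approximate the density ratio.
    return p_target / np.clip(1.0 - p_target, 1e-6, None)

def fit_weighted_source_classifier(X_source, y_source, X_target):
    """Train a source classifier, upweighting source samples relevant to the target."""
    w = importance_weights(X_source, X_target)
    clf = LogisticRegression(max_iter=1000)
    clf.fit(X_source, y_source, sample_weight=w)
    return clf

Source samples that resemble the (unlabeled) target data receive larger
weights, so the resulting classifier is tilted toward the target domain.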
Bayesian inference for optimal dynamic treatment regimes in practice
In this work, we examine recently developed methods for Bayesian inference of
optimal dynamic treatment regimes (DTRs). DTRs are a set of treatment decision
rules aimed at tailoring patient care to patient-specific characteristics,
thereby falling within the realm of precision medicine. In this field,
researchers seek to tailor therapy with the intention of improving health
outcomes; therefore, they are most interested in identifying optimal DTRs.
Recent work has developed Bayesian methods for identifying optimal DTRs in a
family indexed by rule parameters via Bayesian dynamic marginal structural models (MSMs)
(Rodriguez Duque et al., 2022a); we review the proposed estimation procedure
and illustrate its use via the new BayesDTR R package. Although methods in
Rodriguez Duque et al. (2022a) can estimate optimal DTRs well, they may lead
to biased estimators when the model for the value function, that is, the
expected outcome if everyone in a population were to follow a given treatment
strategy, is misspecified, or when a grid search for the optimum is employed. We
describe recent work that uses a Gaussian process (GP) prior on the value
function as a means to robustly identify optimal DTRs (Rodriguez Duque et al.,
2022b). We demonstrate how a GP approach may be implemented with the BayesDTR
package and contrast it with other value-search approaches to identifying
optimal DTRs. We use data from an HIV therapeutic trial in order to illustrate
a standard analysis with these methods, using both the original observed trial
data and an additional simulated component to showcase a longitudinal
(two-stage DTR) analysis.
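As a rough illustration of a GP-based value search (a generic Python sketch,
not the BayesDTR R package), the code below assumes a user-supplied
value_estimate() function, e.g. an MSM-based estimator, that returns a noisy
estimate of the value of a rule indexed by a scalar parameter.

import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

def gp_value_search(value_estimate, candidate_params, n_init=5, n_iter=20, seed=0):
    """Search for the rule parameter maximizing a noisily estimated value function.

    value_estimate : callable returning a noisy value estimate for a scalar parameter
    candidate_params : 1-D numpy array of candidate rule parameters (e.g. thresholds)
    """
    rng = np.random.default_rng(seed)
    X = rng.choice(candidate_params, size=n_init, replace=False).reshape(-1, 1)
    y = np.array([value_estimate(x[0]) for x in X])
    gp = GaussianProcessRegressor(kernel=RBF() + WhiteKernel(), normalize_y=True)
    grid = candidate_params.reshape(-1, 1)
    for _ in range(n_iter):
        gp.fit(X, y)
        mu, sd = gp.predict(grid, return_std=True)
        # Upper-confidence-bound rule: favour high predicted value and high uncertainty.
        x_next = grid[np.argmax(mu + 1.96 * sd)]
        X = np.vstack([X, x_next])
        y = np.append(y, value_estimate(x_next[0]))
    return X[np.argmax(y), 0]   # parameter of the estimated optimal rule

Unlike a grid search over noisy value estimates, the GP surrogate smooths
across neighbouring rule parameters before picking the next candidate to
evaluate.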
Implementing Loss Distribution Approach for Operational Risk
To quantify the operational risk capital charge under the current regulatory
framework for banking supervision, referred to as Basel II, many banks adopt
the Loss Distribution Approach. There are many modeling issues that should be
resolved to use the approach in practice. In this paper we review the
quantitative methods suggested in the literature for implementing the
approach. In particular, we discuss the use of Bayesian inference, which
allows expert judgement and parameter uncertainty to be taken into account, as
well as the modeling of dependence and the inclusion of insurance.
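For intuition, here is a small, illustrative Monte Carlo sketch of the Loss
Distribution Approach in Python; the Poisson frequency and lognormal severity
parameters are invented, and in a Bayesian implementation they would be drawn
from a posterior that combines internal data with expert judgement.

import numpy as np

def annual_loss_quantile(lam=10.0, mu=8.0, sigma=2.0, n_sims=100_000, q=0.999, seed=0):
    """Simulate compound Poisson-lognormal annual losses and return the q-quantile."""
    rng = np.random.default_rng(seed)
    counts = rng.poisson(lam, size=n_sims)                # loss frequency per year
    losses = np.array([rng.lognormal(mu, sigma, size=n).sum() for n in counts])
    # In a Bayesian treatment, (lam, mu, sigma) would be sampled from their posterior
    # before each simulated year, propagating parameter uncertainty into the quantile.
    return np.quantile(losses, q)                         # e.g. a 99.9% VaR-style figure

print(annual_loss_quantile())

The annual loss is a compound sum of a random number of severities, and the
capital charge is read off as a high quantile of the simulated distribution.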
A Tutorial on Bayesian Optimization of Expensive Cost Functions, with Application to Active User Modeling and Hierarchical Reinforcement Learning
We present a tutorial on Bayesian optimization, a method of finding the
maximum of expensive cost functions. Bayesian optimization employs the Bayesian
technique of setting a prior over the objective function and combining it with
evidence to get a posterior function. This permits a utility-based selection of
the next observation to make on the objective function, which must take into
account both exploration (sampling from areas of high uncertainty) and
exploitation (sampling areas likely to offer improvement over the current best
observation). We also present two detailed extensions of Bayesian optimization,
with experiments---active user modelling with preferences, and hierarchical
reinforcement learning---and a discussion of the pros and cons of Bayesian
optimization based on our experiences.
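To make the loop concrete, below is a hedged Python sketch of Bayesian
optimization with a GP surrogate and an expected-improvement acquisition; the
objective f, its bounds and the grid-based acquisition maximization are
illustrative choices, not the tutorial's implementation.

import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

def bayes_opt(f, bounds=(0.0, 1.0), n_init=3, n_iter=15, seed=0):
    """Maximize an expensive black-box function f on an interval with a GP surrogate."""
    rng = np.random.default_rng(seed)
    X = rng.uniform(bounds[0], bounds[1], size=(n_init, 1))
    y = np.array([f(x[0]) for x in X])
    gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True)
    grid = np.linspace(bounds[0], bounds[1], 500).reshape(-1, 1)
    for _ in range(n_iter):
        gp.fit(X, y)
        mu, sd = gp.predict(grid, return_std=True)
        best = y.max()
        # Expected improvement trades off exploitation (high mu) and exploration (high sd).
        z = (mu - best) / np.maximum(sd, 1e-9)
        ei = (mu - best) * norm.cdf(z) + sd * norm.pdf(z)
        x_next = grid[np.argmax(ei)]
        X = np.vstack([X, x_next])
        y = np.append(y, f(x_next[0]))
    return X[np.argmax(y), 0], y.max()

Each iteration fits the posterior GP to the observations so far and then
queries the point whose expected improvement over the current best is largest,
which is the exploration-exploitation balance described in the abstract.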