Differential evolution with an evolution path: a DEEP evolutionary algorithm
Utilizing cumulative correlation information already present in an evolutionary process, this paper proposes a predictive approach to the reproduction mechanism of new individuals for differential evolution (DE) algorithms. DE uses a distributed model (DM) to generate new individuals, which is relatively explorative, whilst evolution strategy (ES) uses a centralized model (CM) to generate offspring, which through adaptation retains a convergence momentum. This paper adopts a key feature of the CM of a covariance matrix adaptation ES, the cumulatively learned evolution path (EP), to formulate a new evolutionary algorithm (EA) framework, termed DEEP, standing for DE with an EP. Rather than mechanistically combining a CM-based and a DM-based algorithm, the DEEP framework offers the advantages of both a DM and a CM and hence substantially enhances performance. Under this architecture, a self-adaptation mechanism can be built inherently into a DEEP algorithm, easing the task of predetermining algorithm control parameters. Two DEEP variants are developed and illustrated in the paper. Experiments on the CEC'13 test suite and two practical problems demonstrate that the DEEP algorithms offer promising results compared with the original DEs and other relevant state-of-the-art EAs.
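The core idea above can be sketched in a few lines: augment the classic DE/rand/1/bin mutant with a fraction of a cumulatively learned evolution path, updated CMA-ES style from successive shifts of the population mean. This is a minimal illustrative sketch, not either of the paper's actual DEEP variants; the objective function, the constants `F`, `CR`, and the path learning rate `c`, and the way the path enters the mutant are all assumptions.

```python
import numpy as np

def sphere(x):
    """Toy objective: global minimum 0 at the origin."""
    return float(np.sum(x * x))

def deep_de(f, dim=5, pop_size=20, iters=300, F=0.5, CR=0.9, c=0.3, seed=0):
    """DE/rand/1/bin plus an evolution-path momentum term (illustrative)."""
    rng = np.random.default_rng(seed)
    pop = rng.uniform(-5.0, 5.0, (pop_size, dim))
    fit = np.array([f(x) for x in pop])
    path = np.zeros(dim)                    # cumulatively learned evolution path
    mean_prev = pop.mean(axis=0)
    for _ in range(iters):
        for i in range(pop_size):
            # three distinct indices != i for DE/rand/1 mutation
            r1, r2, r3 = rng.choice([j for j in range(pop_size) if j != i],
                                    size=3, replace=False)
            # mutant carries convergence momentum via the evolution path
            mutant = pop[r1] + F * (pop[r2] - pop[r3]) + F * path
            # binomial crossover with one guaranteed mutant coordinate
            cross = rng.random(dim) < CR
            cross[rng.integers(dim)] = True
            trial = np.where(cross, mutant, pop[i])
            f_trial = f(trial)
            if f_trial <= fit[i]:           # greedy one-to-one selection
                pop[i], fit[i] = trial, f_trial
        # accumulate the shift of the population mean into the path
        mean_now = pop.mean(axis=0)
        path = (1.0 - c) * path + c * (mean_now - mean_prev)
        mean_prev = mean_now
    best = int(np.argmin(fit))
    return pop[best], float(fit[best])

x_best, f_best = deep_de(sphere)
```

With `c = 0` the sketch reduces to plain DE/rand/1/bin, which makes the path term an easy ablation to test.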
Transfer Learning for Contextual Multi-armed Bandits
Motivated by a range of applications, we study in this paper the problem of transfer learning for nonparametric contextual multi-armed bandits under the covariate shift model, where we have data collected on source bandits before the start of the target bandit learning. The minimax rate of convergence for the cumulative regret is established and a novel transfer learning algorithm that attains the minimax regret is proposed. The results quantify the contribution of the data from the source domains for learning in the target domain in the context of nonparametric contextual multi-armed bandits. In view of the general impossibility of adaptation to unknown smoothness, we develop a data-driven algorithm that achieves near-optimal statistical guarantees (up to a logarithmic factor) while automatically adapting to the unknown parameters over a large collection of parameter spaces under an additional self-similarity assumption. A simulation study is carried out to illustrate the benefits of utilizing the data from the auxiliary source domains for learning in the target domain.
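The setting above can be illustrated with a toy simulation: a two-armed nonparametric contextual bandit where source data, collected under a shifted covariate distribution but the same mean-reward functions, is pooled with target observations. This is a minimal sketch, not the paper's minimax-optimal algorithm; the k-NN reward estimator, epsilon-greedy exploration, the reward functions, and the beta-distributed source covariates are all illustrative assumptions.

```python
import numpy as np

def knn_predict(X, y, x0, k=5):
    """Plain k-NN regression estimate of an arm's mean reward at context x0."""
    if len(X) == 0:
        return 0.0
    d = np.abs(np.asarray(X) - x0)
    idx = np.argsort(d)[:k]
    return float(np.mean(np.asarray(y)[idx]))

def transfer_bandit(T=2000, n_source=500, eps=0.05, noise=0.1, seed=0):
    rng = np.random.default_rng(seed)
    # True mean-reward functions of the two arms (unknown to the learner).
    f = [lambda x: 0.5 + 0.3 * x, lambda x: 0.8 - 0.3 * x]
    # Source data: covariate shift (beta-distributed contexts) but identical
    # reward functions, so the source sample is informative for the target.
    X, Y = [[], []], [[], []]
    for _ in range(n_source):
        x = rng.beta(2, 1)
        a = int(rng.integers(2))
        X[a].append(x)
        Y[a].append(f[a](x) + noise * rng.standard_normal())
    regret = 0.0
    for _ in range(T):
        x = rng.random()                  # target covariates: uniform on [0, 1]
        if rng.random() < eps:
            a = int(rng.integers(2))      # explore
        else:
            est = [knn_predict(X[j], Y[j], x) for j in range(2)]
            a = int(np.argmax(est))       # exploit pooled source + target data
        r = f[a](x) + noise * rng.standard_normal()
        X[a].append(x)
        Y[a].append(r)
        regret += max(f[0](x), f[1](x)) - f[a](x)
    return regret / T

avg_regret = transfer_bandit()
```

Rerunning the sketch with `n_source = 0` gives a cold-start baseline, so the gap between the two average regrets is a crude proxy for the value of the source data.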