ASBART: Accelerated Soft Bayes Additive Regression Trees
Bayesian additive regression trees (BART) is a nonparametric regression model
that has gained widespread popularity in recent years due to its flexibility
and high estimation accuracy. Soft BART, a variant of BART, improves on
existing Bayesian sum-of-trees models both practically and theoretically. One
bottleneck for Soft BART is the slow speed of its long MCMC loop: with the
default settings, it takes roughly 20 times longer than BART to complete. We
propose a variant of Soft BART named accelerated Soft BART (ASBART).
Simulation studies show that the new method is about 10 times faster than Soft
BART with comparable accuracy. Our code is open-source and available at
https://github.com/richael008/XSBART.
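To make the sum-of-trees idea behind BART concrete, here is a minimal, purely illustrative sketch. It is not BART or ASBART: those are Bayesian models fit by MCMC with priors over tree structure, while this sketch fits depth-1 "stumps" greedily on residuals just to show how a sum of small trees approximates a regression function. All function names and data here are our own.

```python
# Illustrative sketch of a sum-of-trees regressor (NOT the Bayesian BART
# sampler): each depth-1 stump is fit to the residuals left by the trees
# before it, and predictions are the sum over all stumps.

def fit_stump(x, r):
    """Find the split threshold on x minimizing squared error of residuals r."""
    best = None
    for t in sorted(set(x)):
        left = [ri for xi, ri in zip(x, r) if xi <= t]
        right = [ri for xi, ri in zip(x, r) if xi > t]
        if not left or not right:
            continue
        ml, mr = sum(left) / len(left), sum(right) / len(right)
        sse = sum((ri - ml) ** 2 for ri in left) + \
              sum((ri - mr) ** 2 for ri in right)
        if best is None or sse < best[0]:
            best = (sse, t, ml, mr)
    _, t, ml, mr = best
    return lambda xi: ml if xi <= t else mr

def sum_of_trees(x, y, n_trees=20):
    """Fit n_trees stumps sequentially on residuals; predict with their sum."""
    trees, r = [], list(y)
    for _ in range(n_trees):
        tree = fit_stump(x, r)
        trees.append(tree)
        r = [ri - tree(xi) for xi, ri in zip(x, r)]
    return lambda xi: sum(t(xi) for t in trees)

# Toy 1-D data, roughly y = x
x = [0.0, 1.0, 2.0, 3.0, 4.0, 5.0]
y = [0.1, 0.9, 2.1, 2.9, 4.2, 5.0]
f = sum_of_trees(x, y)
print(f(2.0))
```

The Bayesian versions replace this greedy residual fitting with posterior sampling over tree structures and leaf values; Soft BART additionally makes the split decisions probabilistic ("soft"), which is what ASBART accelerates.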
Evolutionary Reinforcement Learning: A Survey
Reinforcement learning (RL) is a machine learning approach that trains agents
to maximize cumulative rewards through interactions with environments. The
integration of RL with deep learning has recently resulted in impressive
achievements in a wide range of challenging tasks, including board games,
arcade games, and robot control. Despite these successes, there remain several
crucial challenges, including brittle convergence properties caused by
sensitive hyperparameters, difficulties in temporal credit assignment with long
time horizons and sparse rewards, a lack of diverse exploration, especially in
continuous search space scenarios, difficulties in credit assignment in
multi-agent reinforcement learning, and conflicting objectives for rewards.
Evolutionary computation (EC), which maintains a population of learning agents,
has demonstrated promising performance in addressing these limitations. This
article presents a comprehensive survey of state-of-the-art methods for
integrating EC into RL, referred to as evolutionary reinforcement learning
(EvoRL). We categorize EvoRL methods according to key research fields in RL,
including hyperparameter optimization, policy search, exploration, reward
shaping, meta-RL, and multi-objective RL. We then discuss future research
directions in terms of efficient methods, benchmarks, and scalable platforms.
This survey serves as a resource for researchers and practitioners interested
in the field of EvoRL, highlighting the important challenges and opportunities
for future research. With the help of this survey, researchers and
practitioners can develop more efficient methods and tailored benchmarks for
EvoRL, further advancing this promising cross-disciplinary research field.
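One of the simplest EvoRL ingredients the survey covers is evolutionary policy search: maintain candidate policy parameters, perturb them, and keep the best performers by episodic return, with no gradients or temporal credit assignment needed. The toy sketch below runs a (1+lambda) evolution strategy on a stateless one-parameter "environment" whose reward peaks at theta = 3.0; all names and the reward function are our own illustrative choices, not from any EvoRL method in the survey.

```python
import random

# Toy (1+lambda) evolution strategy for policy search: Gaussian-perturb the
# current parameter, evaluate each offspring's return, and keep the best
# (elitism retains the parent, so fitness never decreases).

def reward(theta):
    """Stateless stand-in for an RL episode return; maximized at theta = 3.0."""
    return -(theta - 3.0) ** 2

def evolve(generations=200, offspring=10, sigma=0.3, seed=0):
    rng = random.Random(seed)
    theta = 0.0
    for _ in range(generations):
        candidates = [theta + rng.gauss(0.0, sigma) for _ in range(offspring)]
        candidates.append(theta)          # elitism: parent competes too
        theta = max(candidates, key=reward)
    return theta

best = evolve()
print(best)
```

Real EvoRL methods apply the same loop to neural-network policy weights or hyperparameters, evaluate fitness by rolling out episodes in an environment, and often combine the population with gradient-based RL updates.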