Regularizing soft decision trees
Recently, we proposed a new decision tree family called soft decision trees, in which a node chooses both its left and right children with probabilities given by a gating function, unlike a hard decision node, which chooses exactly one of the two. In this paper, we extend the original algorithm by introducing local dimension reduction via L1 and L2 regularization for feature selection and smoother fitting. We compare our novel approach with standard decision tree algorithms on 27 classification data sets. Both regularized versions achieve similar generalization ability with less complexity in terms of the number of nodes, with L2 regularization working slightly better than L1.
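The gating idea in the abstract above can be sketched in a few lines: each internal node routes an input to both subtrees with sigmoid-gated probabilities, and the training loss adds L1/L2 penalties on the gating weights. This is a minimal illustrative sketch, not the paper's implementation; the function names, the dictionary tree layout, and the squared-error loss are assumptions made for the example.

```python
import numpy as np

def soft_gate(x, w, b):
    """Probability of routing x to the left child (sigmoid gating function)."""
    return 1.0 / (1.0 + np.exp(-(x @ w + b)))

def soft_tree_predict(x, tree):
    """Blend left/right subtree predictions by the gate probability,
    instead of hard-routing x down a single path."""
    if "leaf" in tree:
        return tree["leaf"]
    p = soft_gate(x, tree["w"], tree["b"])
    return p * soft_tree_predict(x, tree["left"]) + \
        (1 - p) * soft_tree_predict(x, tree["right"])

def regularized_loss(y_true, y_pred, gate_weights, l1=0.0, l2=0.0):
    """Squared error plus L1/L2 penalties on the gating weight vectors;
    L1 drives feature selection, L2 gives smoother fits."""
    penalty = sum(l1 * np.abs(w).sum() + l2 * (w ** 2).sum()
                  for w in gate_weights)
    return np.mean((np.asarray(y_true) - np.asarray(y_pred)) ** 2) + penalty
```

With zero gating weights the gate outputs 0.5, so the prediction is the plain average of the two leaves, which makes the soft-routing behavior easy to verify by hand.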
Heuristic Tree Search for Detection and Decoding of Uncoded and Linear Block Coded Communication Systems
A heuristic tree search algorithm is developed for the maximum likelihood detection and decoding problem in a general communication system. We propose several "cheap" heuristic functions using constrained linear detectors and the minimum mean square error (MMSE) detector. Even though the MMSE heuristic function does not guarantee the optimal solution, it incurs negligible performance loss and provides a good complexity-performance tradeoff. For linear block coded systems, the heuristic tree search is modified for soft decision decoding: high-rate codes are decoded via the minimum state trellis, and low-rate codes via the minimum complexity tree. Preprocessing is also discussed to further speed up the algorithms.
MIXRTs: Toward Interpretable Multi-Agent Reinforcement Learning via Mixing Recurrent Soft Decision Trees
While achieving tremendous success in various fields, existing multi-agent reinforcement learning (MARL) with black-box neural network architectures makes decisions in an opaque manner that hinders humans from understanding the learned knowledge and how input observations influence decisions. Conversely, existing interpretable approaches, such as traditional linear models and decision trees, usually suffer from weak expressivity and low accuracy. To address this apparent dichotomy between performance and interpretability, our solution, MIXing Recurrent soft decision Trees (MIXRTs), is a novel interpretable architecture that can represent explicit decision processes via root-to-leaf paths and reflect each agent's contribution to the team. Specifically, we construct a novel soft decision tree that addresses partial observability by leveraging advances in recurrent neural networks, and demonstrate which features influence the decision-making process through the tree-based model. Then, based on the value decomposition framework, we linearly assign credit to each agent by explicitly mixing individual action values to estimate the joint action value using only local observations, providing new insights into how agents cooperate to accomplish the task. Theoretical analysis shows that MIXRTs guarantees the structural constraints of additivity and monotonicity in the factorization of joint action values. Evaluations on the challenging Spread and StarCraft II tasks show that MIXRTs achieves competitive performance compared to widely investigated methods and delivers more straightforward explanations of the decision processes. We explore a promising path toward developing learning algorithms with both high performance and interpretability, potentially shedding light on new interpretable paradigms for MARL.
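The additivity and monotonicity constraints mentioned in the abstract above can be illustrated with a minimal value-decomposition mixer: the joint action value is a non-negatively weighted sum of per-agent action values, so increasing any agent's value can only increase the joint value, and credit decomposes linearly. This is a generic sketch of that constraint in value-decomposition methods, not the MIXRTs architecture itself; the function and parameter names are assumptions.

```python
import numpy as np

def mix_joint_q(agent_qs, weights, bias=0.0):
    """Linear monotonic mixing of per-agent action values.

    Taking the absolute value of the mixing weights enforces
    non-negativity, which guarantees monotonicity: the joint value is
    non-decreasing in every individual agent's value. Additivity holds
    because the mix is a weighted sum."""
    w = np.abs(np.asarray(weights))
    return float(w @ np.asarray(agent_qs) + bias)
```

Because the weights are forced non-negative, the argmax of the joint value coincides with each agent greedily maximizing its own value, which is what makes decentralized execution consistent with the centralized objective.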