Graph Value Iteration
In recent years, deep Reinforcement Learning (RL) has been successful in
various combinatorial search domains, such as two-player games and scientific
discovery. However, directly applying deep RL to planning domains remains
challenging. One major difficulty is that, without a human-crafted heuristic
function, reward signals remain zero unless the learning framework discovers
a solution plan. The search space grows \emph{exponentially} as the minimum
plan length increases, a serious limitation for planning instances whose
minimum plans run to hundreds or thousands of steps.
Previous learning frameworks that augment graph search with deep neural
networks and extra generated subgoals have achieved success in various
challenging planning domains. However, generating useful subgoals requires
extensive domain knowledge. We propose a domain-independent method that
augments graph search with graph value iteration to solve hard planning
instances that are out of reach for domain-specialized solvers. In particular,
instead of receiving learning signals only from discovered plans, our approach
also learns from failed search attempts where no goal state has been reached.
The graph value iteration component can exploit the graph structure of local
search space and provide more informative learning signals. We also show how
a curriculum strategy smooths the learning process, and we present a full
analysis of how graph value iteration scales and enables learning.
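The core idea of propagating values over an explored local search graph can be sketched as follows. This is a minimal, hedged illustration: the function name, the goal reward of 1, and the discount factor are illustrative assumptions, not the authors' exact formulation.

```python
# Minimal sketch of value iteration over an explored local search graph.
# Goal states receive value 1; values propagate backward through the graph,
# so even partial search trees yield graded learning signals.
def graph_value_iteration(successors, goals, gamma=0.99, iters=100):
    """successors: dict mapping state -> list of explored successor states.
    goals: set of goal states. Returns a dict of state -> value estimate."""
    values = {s: 0.0 for s in successors}
    for g in goals:
        values[g] = 1.0
    for _ in range(iters):
        for s in successors:
            if s in goals:
                continue
            succ = successors[s]
            if succ:
                # Bellman backup: discounted best successor value.
                values[s] = gamma * max(values.get(n, 0.0) for n in succ)
    return values

# Toy chain: s0 -> s1 -> {s2 (goal), s3 (dead end)}.
succ = {"s0": ["s1"], "s1": ["s2", "s3"], "s2": [], "s3": []}
vals = graph_value_iteration(succ, goals={"s2"})
```

In this toy graph, `s1` receives a discounted value from the goal and `s0` a doubly discounted one, while the dead end `s3` stays at zero, giving the learner a ranking over states even before a full plan is found.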
A new perspective on building efficient and expressive 3D equivariant graph neural networks
Geometric deep learning enables the encoding of physical symmetries in
modeling 3D objects. Despite rapid progress in encoding 3D symmetries into
Graph Neural Networks (GNNs), a comprehensive evaluation of the expressiveness
of these networks through a local-to-global analysis is still lacking. In this
paper, we propose a local hierarchy of 3D isomorphism to evaluate the
expressive power of equivariant GNNs and investigate the process of
representing global geometric information from local patches. Our work leads to
two crucial modules for designing expressive and efficient geometric GNNs:
namely local substructure encoding (LSE) and frame transition encoding (FTE).
To demonstrate the applicability of our theory, we propose LEFTNet which
effectively implements these modules and achieves state-of-the-art performance
on both scalar-valued and vector-valued molecular property prediction tasks. We
further point out the design space for future developments of equivariant graph
neural networks. Our code is available at
\url{https://github.com/yuanqidu/LeftNet}.
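The symmetry property at stake can be illustrated with a much simpler relative of the modules above: message passing whose weights depend only on pairwise distances, which makes the output invariant to rotations and translations. This sketch is an assumption-laden toy, not the LEFTNet implementation; the function name and the exponential distance kernel are illustrative choices.

```python
import numpy as np

def invariant_message_passing(pos, feats, cutoff=2.0):
    """pos: (N, 3) coordinates; feats: (N,) scalar features.
    Aggregates neighbor features weighted by a distance-only kernel,
    so the result is E(3)-invariant."""
    n = len(pos)
    out = feats.astype(float).copy()
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            d = np.linalg.norm(pos[i] - pos[j])
            if d < cutoff:
                out[i] += np.exp(-d) * feats[j]  # weight depends only on distance
    return out

pos = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [0.0, 1.0, 0.0]])
feats = np.array([1.0, 2.0, 3.0])
h = invariant_message_passing(pos, feats)

# Rotating all coordinates leaves the output unchanged (invariance check).
R = np.array([[0.0, -1.0, 0.0], [1.0, 0.0, 0.0], [0.0, 0.0, 1.0]])
h_rot = invariant_message_passing(pos @ R.T, feats)
```

Because distances are preserved under any rotation `R`, `h` and `h_rot` coincide; expressive architectures such as those built from LSE and FTE go beyond this distance-only scheme while preserving the same symmetry guarantees.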