SMART: A Situation Model for Algebra Story Problems via Attributed Grammar
Solving algebra story problems remains a challenging task in artificial
intelligence, as it requires a detailed understanding of real-world situations
and strong mathematical reasoning capability. Previous neural solvers of math
word problems translate problem texts directly into equations without an
explicit interpretation of the underlying situations, and often fail on more
sophisticated situations. To address these limitations of neural solvers, we
introduce the concept of a \emph{situation model}, which originates in
psychology studies as a representation of humans' mental states during problem solving,
and propose \emph{SMART}, which adopts attributed grammar as the representation
of situation models for algebra story problems. Specifically, we first train an
information extraction module to extract nodes, attributes, and relations from
problem texts and then generate a parse graph based on a pre-defined attributed
grammar. An iterative learning strategy is also proposed to improve the
performance of SMART further. To rigorously study this task, we carefully
curate a new dataset named \emph{ASP6.6k}. Experimental results on ASP6.6k show
that the proposed model outperforms all previous neural solvers by a large
margin while preserving much better interpretability. To test these models'
generalization capability, we also design an out-of-distribution (OOD)
evaluation, in which problems are more complex than those in the training set.
Our model outperforms state-of-the-art models by 17\% in the OOD evaluation,
demonstrating its superior generalization ability.
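As a rough illustration of the idea (not the authors' implementation), the Python sketch below shows one possible way to represent such a parse graph for a situation model: nodes carry attributes, edges carry relations, and a small pre-defined grammar restricts which relations may hold between node types. The class names, node types, and grammar rules here are illustrative assumptions, not the paper's actual attributed grammar.

# Minimal sketch of an attributed parse graph for a toy story problem.
# All node types ("agent", "quantity", "object") and relation labels are
# assumptions made for this example.
from dataclasses import dataclass, field

@dataclass
class Node:
    name: str                 # e.g. "Alice", "apples"
    node_type: str            # e.g. "agent", "object", "quantity"
    attributes: dict = field(default_factory=dict)

@dataclass
class ParseGraph:
    nodes: dict = field(default_factory=dict)
    relations: list = field(default_factory=list)   # (head, relation, tail)

    def add_node(self, node: Node) -> None:
        self.nodes[node.name] = node

    def add_relation(self, head: str, relation: str, tail: str) -> None:
        self.relations.append((head, relation, tail))

# A toy "grammar": which relations are permitted between which node types.
GRAMMAR = {
    ("agent", "has", "quantity"),
    ("quantity", "of", "object"),
}

def is_valid(graph: ParseGraph) -> bool:
    """Check every relation in the graph against the pre-defined grammar."""
    for head, rel, tail in graph.relations:
        triple = (graph.nodes[head].node_type, rel, graph.nodes[tail].node_type)
        if triple not in GRAMMAR:
            return False
    return True

# Example situation: "Alice has 5 apples."
g = ParseGraph()
g.add_node(Node("Alice", "agent"))
g.add_node(Node("5", "quantity", {"value": 5}))
g.add_node(Node("apples", "object"))
g.add_relation("Alice", "has", "5")
g.add_relation("5", "of", "apples")
print(is_valid(g))  # True

In this toy version, validating relations against the grammar plays the role that the pre-defined attributed grammar plays in SMART's parse-graph generation; the actual model additionally learns an information extraction module and an iterative learning strategy on top of such a representation.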
Solving Math Word Problems by Scoring Equations with Recursive Neural Networks
Solving math word problems is a cornerstone task in assessing language
understanding and reasoning capabilities in NLP systems. Recent works use
automatic extraction and ranking of candidate solution equations providing the
answer to math word problems. In this work, we explore novel approaches to
score such candidate solution equations using tree-structured recursive neural
network (Tree-RNN) configurations. The advantage of this Tree-RNN approach over
more established sequential representations is that it can naturally
capture the structure of the equations. Our proposed method consists of
transforming the mathematical expression of the equation into an expression
tree. Further, we encode this tree into a Tree-RNN by using different Tree-LSTM
architectures. Experimental results show that our proposed method (i) improves
overall performance by more than 3 percentage points in accuracy compared to the
previous state of the art, and by over 18 points on a subset of problems that
require more complex reasoning, and (ii) outperforms sequential LSTMs by 4
percentage points on such more complex problems.
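As a rough illustration (not the paper's implementation), the Python sketch below performs the two steps described above on a toy scale: it parses a candidate equation's right-hand side into an expression tree with the standard ast module and scores it with a small recursive composition over that tree, a simplified stand-in for the Tree-LSTM encoders used in the paper. The embedding dimension, operator matrices, and scoring head are all assumptions made for the example.

# Sketch: expression tree + recursive (Tree-RNN-style) scoring of a candidate
# equation. This is a toy substitute for the paper's Tree-LSTM architectures.
import ast
import numpy as np

rng = np.random.default_rng(0)
DIM = 8
# One composition matrix per arithmetic operator (assumed dimensions).
W = {op: rng.normal(scale=0.1, size=(DIM, 2 * DIM)) for op in ("+", "-", "*", "/")}
w_score = rng.normal(scale=0.1, size=DIM)

OPS = {ast.Add: "+", ast.Sub: "-", ast.Mult: "*", ast.Div: "/"}

def encode(node) -> np.ndarray:
    """Recursively encode an expression tree bottom-up into a vector."""
    if isinstance(node, ast.BinOp):
        left, right = encode(node.left), encode(node.right)
        return np.tanh(W[OPS[type(node.op)]] @ np.concatenate([left, right]))
    if isinstance(node, ast.Constant):   # numeric leaf
        return np.tanh(np.full(DIM, float(node.value)))
    if isinstance(node, ast.Name):       # unknown/variable leaf
        return np.zeros(DIM)
    raise ValueError(f"unsupported node: {ast.dump(node)}")

def score(equation_rhs: str) -> float:
    """Score a candidate equation's right-hand-side expression."""
    tree = ast.parse(equation_rhs, mode="eval").body
    return float(w_score @ encode(tree))

print(score("(3 + 5) * 2"))   # real-valued score for this candidate equation

Ranking candidate equations then amounts to computing such a score for each extracted candidate and selecting the highest-scoring one; the tree-structured encoding is what lets the scorer reflect operator precedence and nesting rather than a flat token sequence.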