Maximizing influence in complex networks is a practically important but computationally challenging task in social network analysis due to its NP-hard nature. Most existing approximation or heuristic methods either require tremendous manual design effort or strike an unsatisfactory balance between effectiveness and efficiency. Recent machine learning attempts focus only on speed and offer no improvement in solution quality. In this paper, unlike previous
attempts, we propose an effective deep reinforcement learning model that achieves superior performance over the best traditional influence maximization algorithms. Specifically, we design an end-to-end learning framework, named DREIM, that combines a graph neural network as the encoder with reinforcement learning as the decoder. Through extensive training on small synthetic graphs,
DREIM outperforms state-of-the-art baseline methods in solution quality on very large synthetic and real-world networks. We also empirically show its linear scalability with respect to network size, which demonstrates its superiority in solving this problem.