We study a social learning model in which agents iteratively update their
beliefs about the true state of the world using private signals and the beliefs
of other agents in a non-Bayesian manner. Some agents are stubborn, meaning
they attempt to convince others of an erroneous state of the world (modeling
fake news). We show that while agents learn the true state on short timescales,
they "forget" it and believe the erroneous state to be true on longer timescales.
Using these results, we devise strategies for seeding stubborn agents so as to
disrupt learning; these strategies outperform intuitive heuristics and yield
novel insights into vulnerabilities in social learning.
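The short-term learning and long-term "forgetting" dynamic described above can be illustrated with a minimal sketch. The setup below is purely illustrative and not the paper's actual model: it assumes a complete network, a scalar belief in [0, 1], DeGroot-style averaging mixed with a private signal whose weight decays as 1/t, stubborn agents fixed at the erroneous state 0, and noiseless signals so the run is deterministic.

```python
# Illustrative sketch (assumed dynamics, not the paper's model): regular agents
# average the beliefs of all agents, then mix in a private signal pointing at
# the true state; the signal's weight decays over time, so the pull of the
# stubborn agents eventually dominates.

def simulate(n_regular=18, n_stubborn=2, steps=20000):
    true_state = 1.0       # private signals point to the true state
    stubborn_belief = 0.0  # stubborn agents never update; they push state 0
    b = 0.0                # shared belief of the (symmetric) regular agents
    history = []
    for t in range(steps):
        alpha = 10.0 / (10.0 + t)  # decaying weight on the private signal
        avg = (n_regular * b + n_stubborn * stubborn_belief) / (n_regular + n_stubborn)
        b = (1 - alpha) * avg + alpha * true_state
        history.append(b)
    return history

hist = simulate()
# Short timescale: beliefs are near the true state 1.0.
# Long timescale: beliefs drift toward the stubborn agents' erroneous state 0.0.
print(hist[10], hist[-1])
```

In this toy run the regular agents' belief first rises close to the true state, then decays toward the stubborn agents' state once the signal weight has faded, mirroring the timescale separation the abstract describes.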