Despite the success of physics-informed neural networks (PINNs) in
approximating partial differential equations (PDEs), it is known that PINNs can
sometimes fail to converge to the correct solution in problems involving
complicated PDEs. This is reflected in several recent studies on characterizing
and mitigating the ``failure modes'' of PINNs. While most of these studies have
focused on balancing loss functions or adaptively tuning PDE coefficients, what
is missing is a thorough understanding of the connection between failure modes
of PINNs and sampling strategies used for training PINNs. In this paper, we
provide a novel perspective on the failure modes of PINNs by hypothesizing that
the training of PINNs relies on the successful ``propagation'' of the solution
from initial and/or boundary condition points to interior points. We show that
PINNs with
poor sampling strategies can get stuck at trivial solutions if there are
propagation failures. We additionally demonstrate that propagation failures are
characterized by highly imbalanced PDE residual fields where very high
residuals are observed over very narrow regions. To mitigate propagation
failures, we propose a novel evolutionary sampling (Evo) method that can
incrementally accumulate collocation points in regions of high PDE residuals
with little to no computational overhead. We provide an extension of Evo to
respect the principle of causality while solving time-dependent PDEs. We
theoretically analyze the behavior of Evo and empirically demonstrate its
efficacy and efficiency in comparison with baselines on a variety of PDE
problems.

Comment: 34 pages, 46 figures, 2 tables