Physics-Informed Neural Networks (PINNs) are neural network architectures
trained to emulate solutions of differential equations without requiring
solution data. They are currently ubiquitous in the scientific literature
owing to their flexible and promising setting. However, very little of the
available research offers practical studies aimed at a better quantitative
understanding of these architectures and how they function. In this paper, we
analyze the performance of PINNs for various architectural hyperparameters and
algorithmic settings based on a novel error metric and other factors such as
training time. The proposed metric and approach are tailored to evaluate how
well a PINN generalizes to points outside its training domain. In addition, we
investigate the effect of the algorithmic setup on a PINN's predictions, both
inside and outside its training domain, to isolate the influence of each
hyperparameter. Through our study, we assess how the algorithmic setup of PINNs
affects their capacity to generalize and deduce the settings that maximize a
PINN's potential for accurate generalization. The study we present yields
insightful and at times counterintuitive results on PINNs, which can be useful
in PINN applications when defining and evaluating a model.
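
To make the setting of the opening sentence concrete, the following is a minimal sketch, not taken from the paper, of the PINN idea: a network is trained so that the residual of a differential equation vanishes at collocation points, with no solution data involved. The toy ODE u'(x) = -u(x) with u(0) = 1, the architecture, and all hyperparameters are assumptions chosen purely for illustration; the final lines evaluate the network outside its training interval, the kind of generalization the study examines.

```python
# Minimal PINN sketch (illustrative assumption, not the paper's setup):
# train u_theta(x) so the residual of u'(x) = -u(x) vanishes on [0, 2],
# with the initial condition u(0) = 1 enforced as a penalty term.
import torch

torch.manual_seed(0)
net = torch.nn.Sequential(
    torch.nn.Linear(1, 32), torch.nn.Tanh(),
    torch.nn.Linear(32, 32), torch.nn.Tanh(),
    torch.nn.Linear(32, 1),
)
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

for step in range(5000):
    # Collocation points sampled inside the training domain [0, 2]
    x = 2.0 * torch.rand(128, 1, requires_grad=True)
    u = net(x)
    # du/dx via automatic differentiation
    du = torch.autograd.grad(u, x, torch.ones_like(u), create_graph=True)[0]
    residual = du + u  # residual of u' = -u
    x0 = torch.zeros(1, 1)
    loss = (residual ** 2).mean() + (net(x0) - 1.0).pow(2).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()

# Evaluate inside and beyond the training interval (exact solution: exp(-x))
x_test = torch.linspace(0.0, 3.0, 7).unsqueeze(1)
print(torch.hstack([x_test, net(x_test), torch.exp(-x_test)]))
```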