In recent years, there has been increasing interest in explanation methods
for neural model predictions that offer precise formal guarantees. These
include abductive (respectively, contrastive) methods, which aim to compute
minimal subsets of input features that are sufficient for a given prediction to
hold (respectively, to change a given prediction). The corresponding decision
problems are, however, known to be intractable. In this paper, we investigate
whether tractability can be regained by focusing on neural models implementing
a monotonic function. Although the relevant decision problems remain
intractable in this setting, we show that they become solvable in polynomial
time by means of greedy algorithms if we additionally assume that the
activation functions are continuous everywhere and differentiable almost
everywhere. Our experiments suggest favourable performance of our algorithms.
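The greedy idea for the abductive case can be sketched as follows. This is a minimal illustration under stated assumptions, not the paper's implementation: `f` is assumed to be a monotone non-decreasing binary classifier, and `lower_bounds` gives each feature's minimum value, so that releasing a feature to its lower bound is the worst case for a positive prediction. The sketch greedily drops features whose removal preserves sufficiency, yielding a subset-minimal explanation with one model call per feature.

```python
def greedy_abductive(f, x, lower_bounds):
    """Greedily compute a subset-minimal sufficient subset of features
    for the positive prediction f(x) of a monotone (non-decreasing)
    classifier f. Features outside the subset are set to their lower
    bounds, the worst case under monotonicity."""
    n = len(x)
    assert f(x), "abductive explanations are defined for the given prediction"

    def sufficient(subset):
        # Fix features in `subset` to their values in x; free the rest.
        z = [x[i] if i in subset else lower_bounds[i] for i in range(n)]
        return f(z)

    explanation = set(range(n))
    for i in range(n):
        # By monotonicity, checking the lower-bound completion alone
        # suffices to decide whether feature i can be dropped.
        if sufficient(explanation - {i}):
            explanation.remove(i)
    return explanation
```

For example, for the monotone threshold classifier `f(z) = z[0] + z[1] + z[2] >= 2` on input `[1, 1, 0]`, the greedy pass retains features 0 and 1, which together already guarantee the positive prediction.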