Wider adoption of neural networks in many critical domains such as finance
and healthcare is being hindered by the need to explain their predictions and
to impose additional constraints on them. The monotonicity constraint is one of the
most requested properties in real-world scenarios and is the focus of this
paper. One of the oldest ways to construct a monotonic fully connected neural
network is to constrain the signs of its weights. Unfortunately, this construction
does not work with popular non-saturated activation functions as it can only
approximate convex functions. We show this shortcoming can be fixed by
constructing two additional activation functions from a typical non-saturated
monotonic activation function and employing each of them on a part of the
neurons. Our experiments show that this approach to building monotonic neural
networks achieves better accuracy than other state-of-the-art methods,
while being the simplest one in the sense of having the fewest
parameters and not requiring any modifications to the learning procedure or
post-learning steps. Finally, we prove it can approximate any continuous
monotone function on a compact subset of $\mathbb{R}^n$.
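
As a rough sketch of the idea (not part of the abstract; the choice of ReLU, the particular form of the saturated variant, and all names below are illustrative assumptions), two additional activations can be derived from an unsaturated monotonic activation as a concave counterpart and a bounded variant, while the weight-sign constraint is enforced by taking absolute values of the weights:

```python
import torch
import torch.nn.functional as F


def rho(x):
    # Base unsaturated monotonic activation; ReLU is an assumed example.
    return F.relu(x)


def rho_hat(x):
    # Point reflection of rho: a concave, still monotonically increasing counterpart.
    return -rho(-x)


def rho_tilde(x):
    # A bounded (saturated) activation assembled from the convex and concave halves.
    one = torch.ones_like(x)
    return torch.where(x < 0, rho(x + 1) - rho(one), rho_hat(x - 1) + rho(one))


class MonotonicLinear(torch.nn.Module):
    """Fully connected layer whose effective weights are non-negative,
    making it monotonically non-decreasing in every input."""

    def __init__(self, in_features, out_features):
        super().__init__()
        self.weight = torch.nn.Parameter(0.01 * torch.randn(out_features, in_features))
        self.bias = torch.nn.Parameter(torch.zeros(out_features))

    def forward(self, x):
        # The absolute value constrains the weight signs without a projection step.
        return F.linear(x, self.weight.abs(), self.bias)
```

In a hidden layer built this way, the pre-activations would be split into groups, with rho applied to one group, rho_hat to another, and rho_tilde to the rest, so the network can represent non-convex monotone functions.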