Formal certification of Neural Networks (NNs) is crucial for ensuring their
safety, fairness, and robustness. Unfortunately, on the one hand, sound and
complete certification algorithms for ReLU-based NNs do not scale to large
networks. On the other hand, incomplete certification algorithms are easier to
compute, but they result in loose bounds that deteriorate with network depth,
which diminishes their effectiveness. In this paper, we ask the following
question: can we replace the ReLU activation function with one that opens the
door to incomplete certification algorithms that are easy to compute but can
produce tight bounds on the NN's outputs? We introduce DeepBern-Nets, a class
of NNs with activation functions based on Bernstein polynomials instead of the
commonly used ReLU activation. Bernstein polynomials are smooth and
differentiable functions with desirable properties such as the so-called range
enclosure and subdivision properties. We design a novel algorithm, called
Bern-IBP, to efficiently compute tight bounds on the outputs of DeepBern-Nets. Our
approach leverages the properties of Bernstein polynomials to improve the
tractability of neural network certification tasks while maintaining the
accuracy of the trained networks. We conduct comprehensive experiments in
adversarial robustness and reachability analysis settings to assess the
effectiveness of the proposed Bernstein polynomial activation in enhancing the
certification process. Our proposed framework achieves high certified accuracy
for adversarially trained NNs, a task that is often challenging for certifiers
of ReLU-based NNs. Moreover, using Bern-IBP bounds for certified training
results in NNs with state-of-the-art certified accuracy compared to ReLU
networks. This work establishes Bernstein polynomial activation as a promising
alternative for improving NN certification tasks across various applications.
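
To make the range enclosure property concrete, the sketch below (our own
illustration, not the authors' implementation; the coefficients, interval,
and function names are hypothetical) shows why the Bernstein coefficients of
a polynomial directly bound its range on an interval: the Bernstein basis is
nonnegative and sums to one, so the polynomial is a convex combination of its
coefficients.

import numpy as np
from math import comb

def bernstein_eval(coeffs, l, u, x):
    # Degree-n Bernstein polynomial on [l, u] with the given coefficients.
    n = len(coeffs) - 1
    t = (x - l) / (u - l)  # map x into [0, 1]
    basis = [comb(n, k) * t**k * (1 - t)**(n - k) for k in range(n + 1)]
    return float(np.dot(coeffs, basis))

def enclosure_bounds(coeffs):
    # Range enclosure: the basis functions are nonnegative and sum to 1,
    # so the polynomial is a convex combination of its coefficients; the
    # min/max coefficients therefore bound it everywhere on [l, u].
    return min(coeffs), max(coeffs)

# Hypothetical Bernstein coefficients of one learned activation on [-1, 1].
coeffs = [0.2, -1.0, 0.5, 1.5]
l, u = -1.0, 1.0
lb, ub = enclosure_bounds(coeffs)
ys = [bernstein_eval(coeffs, l, u, x) for x in np.linspace(l, u, 1001)]
assert lb <= min(ys) and max(ys) <= ub  # enclosure holds on a dense grid
print(f"enclosure [{lb}, {ub}] vs. empirical range [{min(ys):.3f}, {max(ys):.3f}]")

Propagating such per-neuron coefficient bounds layer by layer is, at a high
level, how an interval method like Bern-IBP can obtain tight output bounds at
low computational cost.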