The proliferation of deep neural networks has made machine learning systems
increasingly present in real-world applications. Consequently, there is a
growing demand for highly reliable models in these domains, making uncertainty
calibration a pivotal problem for the future of deep learning. This is
especially true for object detection systems, which are commonly deployed in
safety-critical applications such as autonomous driving and robotics. For this
reason, this work presents a novel theoretical and practical framework for
evaluating object detection systems in the context of uncertainty calibration.
The robustness of the proposed uncertainty calibration metrics is demonstrated
through a series of representative experiments. Code for the proposed
uncertainty calibration metrics is available at:
https://github.com/pedrormconde/Uncertainty_Calibration_Object_Detection