When the training and test distributions are well aligned, object relations serve as a context prior that facilitates object detection. Yet this prior turns into a harmful but inevitable training-set bias once test distributions shift differently across space and time. However, existing detectors cannot incorporate a deployment context prior during the test phase without a parameter update. Such a capability requires the model to explicitly learn representations disentangled with respect to the context prior. To achieve this, we
introduce an additional graph input to the detector: the graph represents the deployment context prior, and its edge values encode object relations.
The detector's behavior is then bound to the graph through a modified training objective. As a result, during the test phase, any suitable deployment
context prior can be injected into the detector via graph edits, thereby calibrating, or "re-biasing", the detector towards the given prior at run time without any parameter update. Even when the deployment prior is unknown, the detector can self-calibrate using a prior approximated from its own predictions. Comprehensive experiments on the COCO dataset, together with cross-dataset testing on Objects365, demonstrate the effectiveness of the proposed run-time calibratable detector.