Current state-of-the-art autonomous driving vehicles mainly rely on each
individual sensor system to perform perception tasks. The reliability of such
a framework can be limited by occlusion or sensor failure. To address this
issue, more recent research proposes using vehicle-to-vehicle (V2V)
communication to share perception information. However, most related works
focus only on cooperative detection, leaving cooperative tracking an
underexplored research field. A few recent datasets, such as
V2V4Real, provide 3D multi-object cooperative tracking benchmarks. However,
their proposed methods mainly feed cooperative detection results into a
standard single-sensor Kalman Filter-based tracking algorithm. In this
approach, the measurement uncertainties of different sensors on different
connected autonomous vehicles (CAVs) are not properly estimated, so the
theoretical optimality of Kalman Filter-based tracking algorithms cannot be
fully exploited.
In this paper, we propose a novel 3D multi-object cooperative tracking
algorithm for autonomous driving via a differentiable multi-sensor Kalman
Filter. Our algorithm learns to estimate the measurement uncertainty of each
detection, which allows it to better exploit the theoretical optimality of
Kalman Filter-based tracking methods. Experimental results show that our
algorithm improves tracking accuracy by 17% while incurring only 0.037x the
communication cost of the state-of-the-art method on V2V4Real.
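
The core idea above can be sketched as a standard Kalman Filter measurement update in which the measurement covariance R is supplied per detection rather than hand-tuned, and detections from multiple CAVs are fused by applying the update once per sensor. This is a minimal illustration, not the paper's implementation: the `predict_measurement_covariance` helper is a hypothetical stand-in for a learned covariance head (in the differentiable setting it would be a network trained end-to-end through the filter), and the constant-position observation model is an assumption for brevity.

```python
import numpy as np

def kalman_update(x, P, z, H, R):
    """One Kalman Filter measurement update.

    x: (n,) state mean, P: (n, n) state covariance,
    z: (m,) measurement, H: (m, n) observation matrix,
    R: (m, m) measurement covariance (here predicted per detection).
    """
    S = H @ P @ H.T + R               # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)    # Kalman gain
    x_new = x + K @ (z - H @ x)       # corrected state mean
    P_new = (np.eye(len(x)) - K @ H) @ P  # corrected state covariance
    return x_new, P_new

def predict_measurement_covariance(confidence):
    # Hypothetical stand-in for the learned covariance head: a less
    # confident detection is assigned a larger measurement noise, so the
    # filter automatically down-weights it in the fused estimate.
    return np.diag(np.full(2, 1.0 / max(confidence, 1e-3)))

# State: [px, py, vx, vy]; each CAV observes position only.
x = np.zeros(4)
P = np.eye(4)
H = np.eye(2, 4)

# Fuse two CAVs' detections of the same object by sequential updates
# (a multi-sensor update is equivalent to repeated single-sensor updates).
detections = [
    (np.array([1.0, 0.5]), 0.9),  # ego CAV detection, high confidence
    (np.array([1.2, 0.4]), 0.5),  # cooperating CAV detection, lower confidence
]
for z, conf in detections:
    R = predict_measurement_covariance(conf)
    x, P = kalman_update(x, P, z, H, R)

print(np.round(x[:2], 3))  # fused position, pulled closer to the confident detection
```

Because R enters the update only through differentiable matrix operations, gradients of a tracking loss can flow back into whatever module predicts R, which is what makes the covariance learnable in the paper's setting.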