Multispectral pedestrian detection can adapt to insufficient illumination
by leveraging color-thermal modalities. However, in-depth insights into how
to fuse the two modalities effectively are still lacking. Compared with
traditional pedestrian detection, we find that multispectral pedestrian
detection suffers from a modality imbalance problem, which hinders the
optimization of the dual-modality network and degrades the detector's
performance. Motivated by this observation, we propose the Modality Balance
Network (MBNet), which facilitates the optimization process in a much more
flexible and balanced manner. First, we design a novel Differential
Modality Aware Fusion (DMAF) module to make the two modalities complement each
other. Second, an illumination-aware feature alignment module selects
complementary features according to the illumination conditions and
adaptively aligns the features of the two modalities. Extensive experiments
demonstrate that MBNet outperforms state-of-the-art methods on the
challenging KAIST and CVC-14 multispectral pedestrian datasets in terms of
both accuracy and computational efficiency. Code is available at
https://github.com/CalayZhou/MBNet
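
To make the fusion idea concrete, here is a minimal NumPy sketch of the differential gating behind a DMAF-style step. This is an illustrative simplification under assumed details (the function name `dmaf_fuse`, the use of a signed tanh gate, and the sign convention for the thermal stream are ours), not the exact MBNet implementation, which is in the linked repository.

```python
import numpy as np

def dmaf_fuse(f_rgb, f_th):
    """Illustrative sketch of differential modality-aware fusion.

    f_rgb, f_th: feature maps of shape (C, H, W) from the color and
    thermal streams. The modality difference is squeezed by global
    average pooling into channel-wise weights, passed through tanh,
    and used to gate how much each stream borrows from the other.
    (Hypothetical simplification; not the exact MBNet module.)
    """
    diff = f_rgb - f_th                                 # differential features
    w = np.tanh(diff.mean(axis=(1, 2), keepdims=True))  # GAP + tanh gate, shape (C, 1, 1)
    f_rgb_out = f_rgb + f_th * w   # thermal features complement the RGB stream
    f_th_out = f_th - f_rgb * w    # RGB features complement the thermal stream
    return f_rgb_out, f_th_out
```

Channel-wise gating in (-1, 1) lets each channel decide, per image, which modality contributes the stronger signal, so neither stream dominates the shared representation.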