Semantic scene understanding with Minimalist Optical Systems (MOS) in mobile
and wearable applications remains challenging due to the degraded imaging
quality induced by optical aberrations. However, previous works focus only on
improving subjective imaging quality through computational optics, i.e., the
Computational Imaging (CI) technique, ignoring its feasibility for semantic
segmentation. In this paper, we pioneer the investigation of Semantic
Segmentation under Optical Aberrations (SSOA) of MOS. To benchmark SSOA, we construct
Virtual Prototype Lens (VPL) groups through optical simulation, generating
Cityscapes-ab and KITTI-360-ab datasets under different behaviors and levels of
aberrations. We approach SSOA from an unsupervised domain adaptation
perspective to address the scarcity of labeled aberration data in real-world
scenarios. Further, we propose Computational Imaging Assisted Domain Adaptation
(CIADA) to leverage prior knowledge of CI for robust performance in SSOA. Based
on our benchmark, we conduct experiments on the robustness of state-of-the-art
segmenters against aberrations. In addition, extensive evaluations of possible
solutions to SSOA reveal that CIADA achieves superior performance under all
aberration distributions, paving the way for the application of MOS in
semantic scene understanding. Code and dataset will be made publicly available
at https://github.com/zju-jiangqi/CIADA.