Sky Segmentation of Fisheye Images for Identifying Non-Line-of-Sight Satellites

Abstract

GNSS (global navigation satellite system) receivers are often deployed in environments where some satellite signals are blocked by buildings and other obstructions. This non-line-of-sight situation is challenging for GNSS positioning because the signals can still be received via indirect paths, which introduces errors into the calculated position. Knowledge of the blocked satellites would help in mitigating these errors. A sky-pointing fisheye camera can be used to gather information about the receiver's surroundings in order to detect non-line-of-sight situations. Using semantic segmentation of the sky, the image can be divided into line-of-sight and non-line-of-sight regions. By projecting the satellite locations onto the image, each satellite can be classified according to the segmentation. The objective of this thesis is to study the use of neural networks for segmenting the sky from fisheye images and to classify the potentially visible satellites as line-of-sight or non-line-of-sight based on the segmentation. Several popular segmentation networks were trained and evaluated to compare their performance on the task. A small, manually labeled dataset was prepared, containing images from different weather conditions and environments, including tunnels. The results were validated on a larger test set using GNSS data. The study shows that neural networks can segment the sky from fisheye images very precisely, reaching almost 99% intersection over union and over 99% F1-score. The best-performing model was a U-Net with an EfficientNetB6 encoder, but there was little difference between the tested models. The satellite classification performed after the segmentation was also accurate and consistent with the observed signal strengths. On the basis of the study, it can be concluded that fisheye sky segmentation with neural networks is an effective and useful method for line-of-sight detection.
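The classification step described above can be sketched in a few lines: a satellite's azimuth and elevation are projected onto the fisheye image, and the segmentation mask is sampled at that pixel. This is a minimal illustration only; it assumes an idealized equidistant fisheye model (radius proportional to zenith angle) and a north-up image orientation, whereas a real system would use the camera's calibrated projection model. The function names and parameters are hypothetical, not taken from the thesis.

```python
import numpy as np

def satellite_to_pixel(az_deg, el_deg, cx, cy, radius):
    """Project a satellite's azimuth/elevation onto a sky-pointing fisheye
    image, assuming an equidistant projection (r proportional to zenith
    angle) with azimuth 0 pointing 'up' in the image, increasing clockwise."""
    zenith = 90.0 - el_deg              # angle from the optical axis
    r = radius * zenith / 90.0          # equidistant mapping to image radius
    az = np.deg2rad(az_deg)
    x = cx + r * np.sin(az)
    y = cy - r * np.cos(az)
    return int(round(x)), int(round(y))

def classify_satellite(mask, az_deg, el_deg, cx, cy, radius):
    """Classify a satellite as LOS/NLOS from a binary sky mask
    (1 = sky, 0 = obstruction)."""
    x, y = satellite_to_pixel(az_deg, el_deg, cx, cy, radius)
    if not (0 <= y < mask.shape[0] and 0 <= x < mask.shape[1]):
        return "NLOS"                   # outside the image circle: treat as blocked
    return "LOS" if mask[y, x] == 1 else "NLOS"

# Toy 200x200 mask: left half sky, right half obstructed (e.g. a building).
mask = np.zeros((200, 200), dtype=np.uint8)
mask[:, :100] = 1
# A satellite to the west at 45 degrees elevation lands in the sky region.
print(classify_satellite(mask, az_deg=270, el_deg=45, cx=100, cy=100, radius=100))
```

In practice the mask would come from the trained segmentation network, and the projection would use the lens calibration rather than the idealized equidistant model assumed here.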
