Sky-GVINS: a Sky-segmentation Aided GNSS-Visual-Inertial System for Robust Navigation in Urban Canyons
Integrating Global Navigation Satellite Systems (GNSS) into Simultaneous
Localization and Mapping (SLAM) systems has drawn increasing attention as a
route to globally referenced, continuous localization. In dense urban
environments, however, GNSS-based SLAM suffers from Non-Line-Of-Sight (NLOS)
measurements, which can sharply degrade localization accuracy. In this paper,
we propose to detect the sky area in images from the up-looking
camera to improve GNSS measurement reliability for more accurate position
estimation. We present Sky-GVINS: a sky-aware GNSS-Visual-Inertial system based
on a recent work called GVINS. Specifically, we adopt a global threshold method
to segment the sky regions and non-sky regions in the fish-eye sky-pointing
image and then project satellites to the image using the geometric relationship
between satellites and the camera. After that, we reject satellites in non-sky
regions to eliminate NLOS signals. We investigated several segmentation
algorithms for sky detection and found that, despite its simplicity and ease
of implementation, the Otsu algorithm achieved the highest classification
rate and the best computational efficiency. To evaluate the
effectiveness of Sky-GVINS, we built a ground robot and conducted extensive
real-world experiments on campus. Experimental results show that our method
improves localization accuracy in both open areas and dense urban environments
compared to the baseline method. Finally, we present a detailed analysis and
point out directions for future research. For more information, visit our
project website at
https://github.com/SJTU-ViSYS/Sky-GVINS
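The NLOS-rejection step described above can be sketched as follows. This is an illustrative outline, not the Sky-GVINS implementation: Otsu's threshold is computed from the image histogram to produce a sky mask, and satellites whose projected pixel falls outside the mask are discarded. The satellite-to-pixel projection through the fish-eye model is omitted, and the function and variable names (`otsu_threshold`, `reject_nlos`, `sat_pixels`) are hypothetical.

```python
import numpy as np

def otsu_threshold(gray):
    """Compute Otsu's global threshold for a uint8 grayscale image.

    Otsu's method picks the threshold that maximizes the between-class
    variance of the resulting two-class (sky / non-sky) split.
    """
    hist = np.bincount(gray.ravel(), minlength=256).astype(np.float64)
    prob = hist / hist.sum()
    cum_p = np.cumsum(prob)                       # class-0 probability up to t
    cum_mean = np.cumsum(prob * np.arange(256))   # cumulative intensity mean
    mean_total = cum_mean[-1]
    best_t, best_var = 0, -1.0
    for t in range(256):
        w0, w1 = cum_p[t], 1.0 - cum_p[t]
        if w0 == 0.0 or w1 == 0.0:
            continue
        mu0 = cum_mean[t] / w0
        mu1 = (mean_total - cum_mean[t]) / w1
        var_between = w0 * w1 * (mu0 - mu1) ** 2
        if var_between > best_var:
            best_var, best_t = var_between, t
    return best_t

def reject_nlos(sat_pixels, sky_mask):
    """Keep only satellites whose projected pixel lies in the sky mask.

    sat_pixels maps a satellite id to its (u, v) pixel coordinates, already
    projected through the fish-eye camera model (projection not shown here).
    """
    kept = []
    h, w = sky_mask.shape
    for sat_id, (u, v) in sat_pixels.items():
        if 0 <= v < h and 0 <= u < w and sky_mask[v, u]:
            kept.append(sat_id)
    return kept
```

In use, the mask would come from the up-looking image, e.g. `sky_mask = gray > otsu_threshold(gray)`, relying on the assumption that sky pixels are brighter than building pixels in the fish-eye view.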
Collaborative SLAM based on WiFi fingerprint similarity and motion information
Abstract
Simultaneous localization and mapping (SLAM) has been extensively researched in recent years, particularly with range-based or vision-based sensors. Rather than deploying dedicated devices that rely on visual features, it is more pragmatic to exploit radio features for this task, given their ubiquity and the widespread deployment of Wi-Fi wireless networks. This article presents a novel approach for collaborative simultaneous localization and radio fingerprint mapping (C-SLAM-RF) in large unknown indoor environments. The proposed system uses received signal strengths (RSS) from Wi-Fi access points (APs) in the existing infrastructure and pedestrian dead reckoning (PDR) from a smartphone, without prior knowledge of the map or of the AP distribution in the environment. A loop closure is declared based on the similarity of two radio fingerprints. To further improve performance, we incorporate turning motion and assign a small uncertainty value to a loop closure when a matched turning is identified. The experiment was conducted in a 130 m by 70 m area, and the results show that the proposed system estimates the tracks of four users with an accuracy of 0.6 m using Tango-based PDR and 4.76 m using step-counter-based PDR.
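The fingerprint-similarity loop-closure test described above can be sketched as follows. This is a minimal illustration, not the metric used in C-SLAM-RF: a fingerprint is modeled as a dict mapping AP MAC addresses to RSS readings in dBm, APs heard in only one fingerprint are filled with a weak floor value, and similarity is derived from the mean absolute RSS difference. All names and the constants (`rss_floor`, `scale`, the uncertainty values) are illustrative assumptions.

```python
def fingerprint_similarity(fp_a, fp_b, rss_floor=-100.0, scale=20.0):
    """Similarity in [0, 1] between two Wi-Fi fingerprints.

    fp_a, fp_b map AP MAC address -> RSS in dBm. An AP missing from one
    fingerprint is treated as heard at the weak rss_floor level, so
    disjoint AP sets yield a low score.
    """
    aps = set(fp_a) | set(fp_b)
    if not aps:
        return 0.0
    diffs = [abs(fp_a.get(ap, rss_floor) - fp_b.get(ap, rss_floor))
             for ap in aps]
    mean_diff = sum(diffs) / len(diffs)
    return max(0.0, 1.0 - mean_diff / scale)

def is_loop_closure(fp_a, fp_b, threshold=0.8, matched_turning=False):
    """Return a loop-closure constraint if the fingerprints match.

    Mirrors the idea in the abstract: when a matched turning motion is
    also identified, the constraint is assigned a smaller uncertainty.
    """
    sim = fingerprint_similarity(fp_a, fp_b)
    if sim < threshold:
        return None
    uncertainty = 0.5 if matched_turning else 2.0  # meters, illustrative
    return {"similarity": sim, "uncertainty": uncertainty}
```

In a pose-graph back end, the returned uncertainty would set the covariance of the loop-closure edge, so turn-confirmed closures pull the graph together more strongly.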