An Adaptive Pose Fusion Method for Indoor Map Construction
Vision-based robot pose estimation and mapping systems suffer from low pose estimation accuracy and poor local map detail, particularly in environments with weak features, high dynamics, low light, and heavy shadows. To address these issues, we propose an adaptive pose fusion (APF) method that fuses the robot's pose estimates and uses the optimized pose to construct an indoor map. First, the proposed method computes the robot's pose from the camera and the inertial measurement unit (IMU) separately, then adaptively selects the pose fusion method according to the robot's motion state. When the robot is static, the method fuses camera and IMU data directly with an extended Kalman filter (EKF). When the robot is moving, a weighting coefficient is determined from the feature-point matching success rate, and the weighted pose fusion (WPF) method fuses the camera and IMU data. In this way, a series of new robot poses is obtained across the different motion states. Second, the fused, optimized pose is used to correct the range and azimuth angle of the laser points obtained by LiDAR, and a Gauss–Newton iterative matching process aligns the corresponding laser points to construct the indoor map. Finally, a pose fusion experiment is designed, and the EuRoC dataset and measured data are used to verify the effectiveness of the method. The experimental results confirm that this method achieves higher pose estimation accuracy than the robust visual-inertial odometry (ROVIO) and visual-inertial ORB-SLAM (VI ORB-SLAM) algorithms, and higher two-dimensional map modeling accuracy and performance than the Cartographer algorithm.
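To make the state-dependent fusion step concrete, the sketch below shows one possible reading of the APF selection logic: an EKF correction when the robot is static, and a match-rate-weighted blend when it is moving. The state layout ([x, y, yaw]), the identity measurement model, the linear weighting, and all names (`ekf_update`, `fuse_pose`, `match_rate`) are illustrative assumptions; the abstract does not give the paper's exact EKF formulation or weight computation.

```python
import numpy as np

def ekf_update(x, P, z, R):
    """One EKF correction step under an assumed identity measurement
    model: the camera pose z directly observes the IMU-predicted pose x."""
    H = np.eye(len(x))               # measurement Jacobian (direct observation)
    S = H @ P @ H.T + R              # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)   # Kalman gain
    x_new = x + K @ (z - H @ x)      # corrected pose
    P_new = (np.eye(len(x)) - K @ H) @ P
    return x_new, P_new

def fuse_pose(imu_pose, cam_pose, P, R, is_static, match_rate):
    """Select the fusion strategy by motion state, as the abstract
    describes: EKF when static, weighted fusion when moving."""
    if is_static:
        return ekf_update(imu_pose, P, cam_pose, R)
    # Weighting coefficient taken from the feature-point matching
    # success rate (assumed here to be used directly as the weight).
    w = match_rate
    return w * cam_pose + (1.0 - w) * imu_pose, P

# Hypothetical usage with a planar [x, y, yaw] state.
pose_imu = np.array([1.02, 0.48, 0.10])
pose_cam = np.array([1.00, 0.50, 0.09])
P = np.diag([0.05, 0.05, 0.01])      # state covariance
R = np.diag([0.02, 0.02, 0.005])     # camera measurement noise
fused, P = fuse_pose(pose_imu, pose_cam, P, R, is_static=False, match_rate=0.8)
```

Blending toward the camera pose when feature matching succeeds, and toward the IMU pose when it degrades, is one plausible design rationale for tying the weight to the matching success rate.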
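The Gauss–Newton matching step can likewise be sketched generically. The snippet below aligns 2D laser points with assumed one-to-one correspondences by estimating a rigid transform (tx, ty, theta); the paper's actual correspondence search, cost function, and map representation are not specified in the abstract, so this is a minimal stand-in for the iterative matching idea.

```python
import numpy as np

def gauss_newton_align(src, dst, iters=20):
    """Estimate the rigid 2D transform mapping src points onto dst points
    (correspondences assumed known) by Gauss-Newton iteration."""
    tx, ty, th = 0.0, 0.0, 0.0
    for _ in range(iters):
        c, s = np.cos(th), np.sin(th)
        R = np.array([[c, -s], [s, c]])
        dR = np.array([[-s, -c], [c, -s]])       # dR/dtheta
        H = np.zeros((3, 3))
        b = np.zeros(3)
        for p, q in zip(src, dst):
            r = R @ p + np.array([tx, ty]) - q   # point residual
            J = np.hstack([np.eye(2), (dR @ p).reshape(2, 1)])  # 2x3 Jacobian
            H += J.T @ J
            b += J.T @ r
        dx = np.linalg.solve(H, -b)              # Gauss-Newton step
        tx, ty, th = tx + dx[0], ty + dx[1], th + dx[2]
    return tx, ty, th
```

In a full pipeline, the fused pose would first correct each laser point's range and azimuth before the points are passed to a matcher of this kind.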