
    A Robust Vehicle Localization Approach Based on GNSS/IMU/DMI/LiDAR Sensor Fusion for Autonomous Vehicles

    Precise and robust localization in large-scale outdoor environments is essential for an autonomous vehicle. To improve the fusion of GNSS (Global Navigation Satellite System), IMU (Inertial Measurement Unit), and DMI (Distance-Measuring Instrument) data, a multi-constraint fault detection approach is proposed to smooth the vehicle's location estimates despite GNSS jumps. Furthermore, the lateral localization error is compensated by a point cloud-based lateral localization method also proposed in this paper. Experimental results verify the proposed algorithms and show that they provide precise and robust vehicle localization.
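    As a rough illustration of the kind of fault detection this abstract describes, the sketch below gates a GNSS position fix with a chi-square test on the Kalman filter innovation. It is a generic textbook check under assumed noise values, not the paper's multi-constraint approach; all names and thresholds are illustrative.

        # Minimal sketch (not the paper's method): reject GNSS "jumps" before
        # fusing a position fix, via chi-square gating on the filter innovation.
        import numpy as np

        def gnss_fix_is_consistent(z, x_pred, P_pred, H, R, gate=7.81):
            """True if fix z is statistically consistent with the predicted
            state; gate=7.81 is the 95% chi-square bound for 3 DOF."""
            y = z - H @ x_pred                       # innovation
            S = H @ P_pred @ H.T + R                 # innovation covariance
            d2 = float(y @ np.linalg.solve(S, y))    # squared Mahalanobis distance
            return d2 <= gate

        # Example: a 10 m jump against a sub-meter-sigma prediction is rejected.
        H, x_pred = np.eye(3), np.zeros(3)
        P_pred, R = np.eye(3) * 0.5, np.eye(3) * 0.5
        print(gnss_fix_is_consistent(np.array([0.5, -0.3, 0.1]), x_pred, P_pred, H, R))  # True
        print(gnss_fix_is_consistent(np.array([10.0, 0.0, 0.0]), x_pred, P_pred, H, R))  # False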

    Autonomous Driving Segway Robots

    In this thesis, an autonomous driving robot is proposed and built on a two-wheel Segway self-balancing scooter. Sensors including LiDAR, a camera, wheel encoders, and an IMU were integrated, with digital servos as actuators. The robot was tested on several functional features: obstacle avoidance based on fuzzy logic and a 2D grid map, data fusion based on co-calibration, 2D simultaneous localization and mapping (SLAM), and path planning, under different indoor and outdoor scenarios. As a result, the robot is capable of self-exploration, avoiding obstacles while constructing a 2D grid map simultaneously. A simulation of the robot with the same functionalities except data fusion was also built and tested on the Robot Operating System (ROS) and Gazebo as a simple comparison with the real-world robot.
    MSE, Electrical Engineering, College of Engineering & Computer Science, University of Michigan-Dearborn
    http://deepblue.lib.umich.edu/bitstream/2027.42/167349/1/Jiaming Liu - Final Thesis.pd
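    For context, the sketch below shows a common way to build the kind of 2D occupancy grid map mentioned above: log-odds updates along a single LiDAR beam. The cell size and odds increments are illustrative assumptions, not values from the thesis.

        # Minimal sketch (not the thesis implementation) of a log-odds
        # occupancy grid update for one LiDAR beam.
        import numpy as np

        L_OCC, L_FREE, CELL = 0.85, -0.4, 0.05   # assumed log-odds steps, 5 cm cells

        def update_beam(grid, x, y, angle, rng, max_rng=10.0):
            """Mark cells along a beam as free and its endpoint as occupied."""
            for i in range(int(rng / CELL)):
                cx = int((x + i * CELL * np.cos(angle)) / CELL)
                cy = int((y + i * CELL * np.sin(angle)) / CELL)
                grid[cy, cx] += L_FREE           # beam passed through: more free
            if rng < max_rng:                    # a real hit, not a max-range miss
                ex = int((x + rng * np.cos(angle)) / CELL)
                ey = int((y + rng * np.sin(angle)) / CELL)
                grid[ey, ex] += L_OCC            # endpoint cell: more occupied

        grid = np.zeros((200, 200))              # 10 m x 10 m map
        update_beam(grid, 1.0, 1.0, 0.0, 2.5)    # one beam: 2.5 m hit straight ahead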

    Exploring the challenges and opportunities of image processing and sensor fusion in autonomous vehicles: A comprehensive review

    Autonomous vehicles are at the forefront of future transportation solutions, but their success hinges on reliable perception. This review paper surveys image processing and sensor fusion techniques vital for ensuring vehicle safety and efficiency. The paper focuses on object detection, recognition, tracking, and scene comprehension via computer vision and machine learning methodologies. In addition, the paper explores challenges within the field, such as robustness in adverse weather conditions, the demand for real-time processing, and the integration of complex sensor data. Furthermore, we examine localization techniques specific to autonomous vehicles. The results show that while substantial progress has been made in each subfield, persistent limitations remain: a shortage of comprehensive large-scale testing, the absence of diverse and robust datasets, and occasional inaccuracies in certain studies. These issues impede the seamless deployment of this technology in real-world scenarios. This comprehensive literature review contributes to a deeper understanding of the current state and future directions of image processing and sensor fusion in autonomous vehicles, aiding researchers and practitioners in advancing the development of reliable autonomous driving systems.
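    To make one of the surveyed techniques concrete, the sketch below tracks a detected object with a constant-velocity Kalman filter. It is a generic textbook example with assumed rates and noise parameters, not drawn from any specific study in the review.

        # Minimal sketch: constant-velocity Kalman tracking of object detections.
        import numpy as np

        dt = 0.1                                 # assumed 10 Hz detection rate
        F = np.array([[1, 0, dt, 0],             # state: [x, y, vx, vy]
                      [0, 1, 0, dt],
                      [0, 0, 1, 0],
                      [0, 0, 0, 1]], float)
        H = np.array([[1, 0, 0, 0],              # detector measures position only
                      [0, 1, 0, 0]], float)
        Q, R = np.eye(4) * 0.01, np.eye(2) * 0.25

        x, P = np.zeros(4), np.eye(4)
        for z in [np.array([1.0, 0.5]), np.array([1.1, 0.6]), np.array([1.2, 0.7])]:
            x, P = F @ x, F @ P @ F.T + Q        # predict to the detection time
            K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)   # Kalman gain
            x = x + K @ (z - H @ x)              # correct with the new detection
            P = (np.eye(4) - K @ H) @ P
        print(x[:2], x[2:])                      # smoothed position and velocity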

    3D Indoor State Estimation for RFID-Based Motion-Capture Systems

    The objective of this research is to realize 3D indoor state estimation for RFID-based motion-capture systems. The state estimation is based on sensor fusion, combining the RF signal with IMU data. A 3D state-space model for the sensor fusion and 3D nonlinear state estimation in the NLE, with both asynchronous and synchronous models to handle different sensor sampling rates, are proposed. For 3D motion with indoor multipath, the RMS error before estimation is 71.99 cm, of which 34.99 cm lies in the xy-plane and 62.92 cm along the z-axis. After NLE estimation using the RF signal combined with IMU data, the RMS error of the 3D coordinates decreases to 31.90 cm, with 22.50 cm in the xy-plane and 22.61 cm along the z-axis, a factor-of-2 improvement similar to that achieved in the 2D estimation. In addition, using the RF signal alone yields estimation results similar to using both RF and IMU, i.e., a 3D RMS error of 31.90 cm, with 22.48 cm in the xy-plane and 22.62 cm along the z-axis. Hence, the RF signal alone is sufficient for fine-scale RFID-based motion capture in 3D, consistent with the conclusion reached in the 2D estimation. In this way, RFID-based motion-capture systems can be simplified by omitting embedded inertial sensors. The EKF yields close results, with an RMS error about 2 cm larger. In addition, under the multipath simulation model, a ToF-based position sensor achieves comparable or higher tracking accuracy than an RSS-based position sensor, enabling ToF to be applied to fine-scale motion capture and tracking.
    Ph.D.
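    The error figures quoted above decompose consistently: the 3D RMS error is the root-sum-square of the in-plane and vertical components, as the short check below confirms.

        # Check: 3D RMS error = sqrt(RMS_xy^2 + RMS_z^2), using the reported values.
        import math

        for label, xy, z in [("before estimation", 34.99, 62.92),
                             ("after NLE (RF+IMU)", 22.50, 22.61),
                             ("RF signal only",     22.48, 22.62)]:
            print(f"{label}: {math.hypot(xy, z):.2f} cm")
        # -> 71.99, 31.90, and 31.89 cm: the reported figures, to rounding.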