3 research outputs found

    Novel technique for Multi Sensor Calibration of a UAV

    No full text
    A mobile mapping system (MMS) is an integrated system consisting of various sensors, such as cameras, LIDAR, an IMU and GPS modules, fitted on top of a mobile platform (usually a car) and used to obtain accurate situational awareness of the environment around the vehicle. It acquires 3D information about road surfaces, signage, guard rails, lettering on road surfaces, manholes, and buildings surrounding the road while the vehicle is travelling. The data is also used to build 3D GIS (Geographic Information System) map data and subsequently to run vehicle motion model simulations, supporting safe driving and the generation of maps for development purposes. In the case of a UAV, however, acquiring sensor information becomes more complex because the pitch, roll and yaw motion of the drone is involved (unlike a car). So, in order to accurately understand, interpret and visualise the data, we perform calibration (boresight, lever arm and time delay) of all the sensors on board. This reduces the biases present in the data to a minimum and yields accurate information about the environment. In this paper, we present lever-arm calibration and a novel technique for boresight calibration of the sensors on board a UAV. We also show the difference between the estimated and ground-truth trajectories that arises due to improper calibration.
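    The lever-arm correction the abstract refers to can be sketched as a frame transformation: the GPS antenna reports a position, and the body-frame offset of a sensor is rotated by the UAV's attitude and added to it. A minimal sketch follows; the function names, Euler-angle convention and numbers are our illustrative assumptions, not the paper's method.

```python
import math

def rotation_matrix(roll, pitch, yaw):
    """Body-to-navigation rotation matrix (Z-Y-X Euler convention; an assumed convention)."""
    cr, sr = math.cos(roll), math.sin(roll)
    cp, sp = math.cos(pitch), math.sin(pitch)
    cy, sy = math.cos(yaw), math.sin(yaw)
    return [
        [cy * cp, cy * sp * sr - sy * cr, cy * sp * cr + sy * sr],
        [sy * cp, sy * sp * sr + cy * cr, sy * sp * cr - cy * sr],
        [-sp,     cp * sr,                cp * cr],
    ]

def apply_lever_arm(gps_pos, attitude, lever_arm):
    """Shift the GPS antenna position to the sensor location.

    gps_pos:   (x, y, z) of the GPS antenna in the navigation frame
    attitude:  (roll, pitch, yaw) of the UAV in radians
    lever_arm: sensor offset from the antenna, expressed in the body frame
    """
    R = rotation_matrix(*attitude)
    return tuple(
        gps_pos[i] + sum(R[i][j] * lever_arm[j] for j in range(3))
        for i in range(3)
    )

# With zero attitude the body-frame offset passes through unchanged.
print(apply_lever_arm((10.0, 5.0, 2.0), (0.0, 0.0, 0.0), (0.1, 0.0, -0.05)))
```

    With a nonzero attitude the same body-frame offset maps to a different navigation-frame shift, which is exactly why an uncalibrated lever arm biases every georeferenced measurement.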

    A Comparative analysis of Algorithms for Pedestrian Tracking using Drone Vision

    No full text
    In recent years, there has been an upsurge in the use of drones for various applications such as intelligent transportation, smart agriculture, military operations, product delivery, etc. With the advancement of high-computational edge devices that can support machine learning and deep learning algorithms, functions such as object detection and object tracking can be performed in real time. Though many algorithms are available for object tracking, there is always a tradeoff between their accuracy and run time, and executing computationally expensive algorithms is largely bottlenecked by hardware constraints. This paper compares different object tracking algorithms (both conventional and deep-learning-based) on tracking accuracy, tracking speed, and the computational complexity of each algorithm. The comparison is based on the accuracy of detection and tracking of an object at the beginning and end of the drone's video and during any occlusion scenario. © 2021 IEEE
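    Tracking accuracy in comparisons like this is commonly scored by the overlap between a predicted bounding box and the ground-truth box, i.e. Intersection-over-Union. The abstract does not state its exact metric, so the following is only a generic sketch of that standard measure; the box format (x1, y1, x2, y2) is our assumption.

```python
def iou(box_a, box_b):
    """Intersection-over-Union of two axis-aligned boxes given as (x1, y1, x2, y2)."""
    # Corners of the intersection rectangle.
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    # Clamp to zero when the boxes do not overlap.
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union else 0.0

# Identical boxes overlap perfectly; disjoint boxes score zero.
print(iou((0, 0, 10, 10), (0, 0, 10, 10)))   # → 1.0
print(iou((0, 0, 10, 10), (20, 20, 30, 30))) # → 0.0
```

    Averaging this score over the frames of a sequence, including frames around occlusions, gives one simple per-tracker accuracy number against which run time can be traded off.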

    Comparative Analysis of Depth Detection Algorithms using Stereo Vision

    No full text
    In recent years, the use of unmanned aerial vehicles in various domains has increased exponentially. Drones are being used extensively in fields such as agriculture, transportation, and the military, with different sensors integrated into the drone depending on the application. Lately, LIDAR sensors have been integrated on drones to acquire depth-related information. Though these sensors have advantages, they are very costly and do not perform well under high sun angles. Stereo cameras, which are cheaper and efficient, can instead be mounted on drones to obtain depth perception of obstacles. In this paper, we develop and compare two different algorithms (conventional and deep-learning-based) for real-time depth detection of obstacles using stereo vision, with the intention of mounting the stereo camera on a drone in the future. The comparison is based on accuracy, range of operation, and the load incurred by the algorithm on the system. The coefficient of determination (R²) and the correlation coefficient have also been calculated, showing that Algorithm 1 exhibits a correlation coefficient and R² value of 0.9985 and 0.9971 respectively, considerably higher than Algorithm 2, whose values are around 0.8779 and 0.7707 respectively.
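    The two goodness-of-fit figures quoted above have standard definitions: the Pearson correlation coefficient between estimated and true depths, and R² computed from residual and total sums of squares. A minimal sketch of both follows; the depth values are invented for illustration and are not the paper's data.

```python
import math

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    syy = sum((y - my) ** 2 for y in ys)
    return sxy / math.sqrt(sxx * syy)

def r_squared(measured, predicted):
    """Coefficient of determination: 1 - SS_res / SS_tot."""
    mean = sum(measured) / len(measured)
    ss_res = sum((m - p) ** 2 for m, p in zip(measured, predicted))
    ss_tot = sum((m - mean) ** 2 for m in measured)
    return 1.0 - ss_res / ss_tot

# Hypothetical ground-truth vs. estimated depths (metres).
depth_true = [1.0, 2.0, 3.0, 4.0, 5.0]
depth_est = [1.1, 1.9, 3.2, 3.9, 5.1]
print(round(pearson_r(depth_true, depth_est), 4))  # → 0.9964
print(round(r_squared(depth_true, depth_est), 4))  # → 0.992
```

    Values close to 1 for both metrics, as reported for Algorithm 1, indicate that the estimated depths track the ground truth almost linearly with small residuals.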