2 research outputs found

    Mobile based augmented reality for flexible human height estimation using touch and motion gesture interaction

    Human height measurement can be achieved using contact or non-contact techniques. The contact technique is the traditional measuring method, which requires human resources to perform the measurement. For non-contact techniques, several studies have been conducted, mostly with image-processing methods and only a few with an Augmented Reality (AR) approach. Current measuring approaches mostly require external hardware such as a laser pointer, or artificial fiducials such as 2D markers. In this paper, world tracking with Visual Inertial Odometry (VIO) is the method used to estimate human height. The main aim of this paper is to accurately estimate human height using augmented reality (non-contact measurement). The methodology uses the Apple ARKit plugin, a software development kit for building augmented reality applications for iOS devices. An algorithm was designed using Golden Ratio rules to estimate human height from the lower part of the knee; the estimation result is displayed using AR technology so that the accuracy of the result can be judged. The application was tested with four different measuring methods. The normal full-height measurement had a 1.13 cm (0.73%) bias and a 1.34 cm (0.88%) Root Mean Square Error (RMSE); the self full-height measurement had a 0.89 cm (0.58%) bias and a 1.27 cm (0.83%) RMSE; the normal height estimation from the lower part of the knee had a 0.12 cm (0.06%) bias and a 1.34 cm (0.89%) RMSE; the self height estimation from the lower part of the knee had a 0.15 cm (0.09%) bias and a 1.04 cm (0.66%) RMSE. The results show that a mobile phone with VIO can be a potential tool for obtaining accurate measurements of human height.
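    The bias and RMSE figures reported above are standard error metrics and can be reproduced with a short script. The sketch below is a minimal illustration, not the paper's evaluation code; the height values in it are invented example data, not the paper's dataset.

```python
import math

def bias_and_rmse(estimates, references):
    """Mean signed error (bias) and root-mean-square error, in the same units."""
    errors = [e - r for e, r in zip(estimates, references)]
    bias = sum(errors) / len(errors)
    rmse = math.sqrt(sum(err ** 2 for err in errors) / len(errors))
    return bias, rmse

# Illustrative values (cm): AR estimates vs. tape-measured ground truth.
ar_estimates = [172.1, 168.4, 181.0, 159.7]
tape_measured = [171.0, 167.8, 179.5, 160.2]

bias, rmse = bias_and_rmse(ar_estimates, tape_measured)
print(f"bias = {bias:.2f} cm, RMSE = {rmse:.2f} cm")
```

    Note that bias is a signed mean, so positive and negative errors cancel, while RMSE penalizes every deviation; this is why the paper can report a near-zero bias (0.12 cm) alongside a larger RMSE (1.34 cm) for the same method.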

    Indoor Positioning and Navigation

    In recent years, rapid development in robotics, mobile, and communication technologies has encouraged many studies in the field of localization and navigation in indoor environments. An accurate localization system that can operate in an indoor environment has considerable practical value, because it can be built into autonomous mobile systems or into a personal navigation system on a smartphone for guiding people through airports, shopping malls, museums, and other public institutions. Such a system would be particularly useful for blind people. Modern smartphones are equipped with numerous sensors (such as inertial sensors, cameras, and barometers) and communication modules (such as WiFi, Bluetooth, NFC, LTE/5G, and UWB capabilities), which enable the implementation of various localization algorithms, namely visual localization, inertial navigation systems, and radio localization. For mapping indoor environments and localizing autonomous mobile systems, LIDAR sensors are also frequently used in addition to smartphone sensors. Visual localization and inertial navigation systems are sensitive to external disturbances; therefore, sensor fusion approaches can be used to implement robust localization algorithms. These have to be optimized to be computationally efficient, which is essential for real-time processing and low energy consumption on a smartphone or robot.
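    As a rough illustration of the sensor-fusion idea mentioned above, the sketch below blends a drift-prone inertial (dead-reckoning) track with noisy but absolute radio fixes (e.g. WiFi-based positions) using a simple 1D complementary filter. This is a generic textbook technique, not the method of any particular study; the drift and noise values are invented for the example.

```python
def complementary_fuse(inertial_positions, radio_fixes, alpha=0.9):
    """Blend a drift-prone inertial track with noisy absolute radio fixes.

    alpha close to 1 trusts the smooth relative inertial increments;
    (1 - alpha) pulls the estimate toward each absolute radio fix,
    which bounds the accumulated inertial drift.
    """
    fused = [radio_fixes[0]]  # initialize from the first absolute fix
    for k in range(1, len(inertial_positions)):
        # Relative step from the inertial navigation system (dead reckoning).
        step = inertial_positions[k] - inertial_positions[k - 1]
        predicted = fused[-1] + step
        # Correct with the absolute, but noisy, radio measurement.
        fused.append(alpha * predicted + (1 - alpha) * radio_fixes[k])
    return fused

# Toy 1D corridor walk: true position advances 1 m per step.
true_path = [float(k) for k in range(10)]
inertial = [p + 0.05 * k for k, p in enumerate(true_path)]               # growing drift
radio = [p + (0.5 if k % 2 else -0.5) for k, p in enumerate(true_path)]  # noisy fixes

fused = complementary_fuse(inertial, radio)
```

    The key property is qualitative: the inertial error grows without bound over time, the radio error is bounded but jumpy, and the fused estimate inherits the smoothness of the former with the bounded error of the latter. Practical systems typically replace this blend with a Kalman or particle filter, but the trade-off is the same.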