3 research outputs found

    Visualisation of a three-dimensional (3D) object’s optimal reality in a 3D map on a mobile device

    Prior research on the visualisation of three-dimensional (3D) objects through coordinate systems has established that all objects are first translated so that the eye sits at the origin (eye space). Multiplying a point in eye space yields perspective space, and dividing perspective-space coordinates yields screen space. Building on these findings, this paper investigates the key factor(s) in the visualisation of 3D objects within 3D maps on mobile devices. The study is motivated by the disparity between 3D objects within a 3D map on a mobile device and those on other devices; this difference may undermine the capabilities of a mobile 3D map view, a concern that arises while interacting with one. It is also unclear whether more users will be able to identify the real world as the mobile 3D map view becomes more realistic. We used regression analysis to rigorously explain the participants' responses, and the Decision Making Trial and Evaluation Laboratory (DEMATEL) method to select the key factor(s) that caused, or were affected by, 3D object views. The regression analyses revealed that eye space, perspective space and screen space were all associated with 3D viewing of 3D objects in 3D maps on mobile devices, with eye space having the strongest impact. DEMATEL, applied in both its original and revised forms, showed that prolonged viewing of 3D objects was the most important factor for eye space, a long viewing distance was the most significant factor for perspective space, and a large screen size was the most important factor for screen space. In conclusion, a 3D map view on a mobile device allows for the visualisation of a more realistic environment.
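    The eye space, perspective space and screen space the abstract refers to correspond to the standard graphics projection chain: multiply an eye-space point by a projection matrix, then perform the perspective divide and viewport mapping. A minimal sketch in Python with NumPy, assuming an OpenGL-style symmetric frustum (the matrix layout and screen dimensions here are illustrative, not taken from the paper):

    ```python
    import numpy as np

    def perspective_matrix(fov_y_deg, aspect, near, far):
        """OpenGL-style symmetric perspective projection matrix."""
        f = 1.0 / np.tan(np.radians(fov_y_deg) / 2.0)
        return np.array([
            [f / aspect, 0.0,  0.0,                          0.0],
            [0.0,        f,    0.0,                          0.0],
            [0.0,        0.0,  (far + near) / (near - far),  2 * far * near / (near - far)],
            [0.0,        0.0, -1.0,                          0.0],
        ])

    def eye_to_screen(p_eye, proj, width, height):
        """Eye space -> perspective space -> screen space.

        The matrix multiplication is the 'multiplication' step the
        abstract mentions; the divide by w is the 'division' step.
        """
        p_clip = proj @ np.append(p_eye, 1.0)   # eye -> perspective (clip) space
        p_ndc = p_clip[:3] / p_clip[3]          # perspective divide -> NDC
        x = (p_ndc[0] + 1.0) * 0.5 * width      # map [-1, 1] to pixel columns
        y = (1.0 - p_ndc[1]) * 0.5 * height     # flip y: screen origin is top-left
        return x, y

    # Example: a point 5 units in front of the eye on a 390x844 phone screen.
    proj = perspective_matrix(60.0, 390 / 844, 0.1, 100.0)
    print(eye_to_screen(np.array([0.5, 0.25, -5.0]), proj, 390, 844))
    ```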

    Vision based 3D Gesture Tracking using Augmented Reality and Virtual Reality for Improved Learning Applications

    Augmented reality (AR) and virtual reality (VR) based on 3D gesture recognition and tracking have attracted strong research interest owing to advances in smartphone technology. By interacting with 3D objects in AR and VR, users gain a better understanding of the subject matter, although this has typically required customised hardware support and satisfactory overall experimental performance. This research surveys current vision-based 3D gestural architectures for AR and VR. Its core goal is to analyse the methods and frameworks, and then the experimental performance, of hand-gesture recognition and tracking and of interaction with virtual objects on smartphones. The experimental evaluation of existing methods is organised into three categories: hardware requirements, documentation prepared before the actual experiment, and datasets. These categories are expected to ensure robust validation of 3D gesture tracking for practical AR and VR use. The hardware setup covers types of gloves, fingerprint sensing and types of sensors. Documentation covers classroom setup manuals, questionnaires, recordings for improvement and stress-test applications. The last part of the experimental section covers the datasets used by existing research. This comprehensive illustration of methods, frameworks and experimental aspects can contribute significantly to AR and VR based on 3D gesture recognition and tracking. Peer reviewed.
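    The survey's three evaluation categories (hardware requirements, pre-experiment documentation, datasets) map naturally onto a small data structure for comparing surveyed methods. A hypothetical sketch in Python; all field names, the example method and its dataset are invented here for illustration, not taken from the paper:

    ```python
    from dataclasses import dataclass, field

    @dataclass
    class HardwareSetup:
        """First category: hardware requirements named in the survey."""
        glove_type: str | None = None          # e.g. data glove vs. bare hand
        uses_fingerprint: bool = False
        sensors: list[str] = field(default_factory=list)

    @dataclass
    class Documentation:
        """Second category: documentation prepared before the experiment."""
        classroom_setup_manual: bool = False
        questionnaires: list[str] = field(default_factory=list)
        improvement_recordings: bool = False
        stress_test_app: str | None = None

    @dataclass
    class EvaluationProfile:
        """One surveyed method's experimental-evaluation profile."""
        method_name: str
        hardware: HardwareSetup
        documentation: Documentation
        datasets: list[str] = field(default_factory=list)  # third category

    # Hypothetical example entry for a bare-hand, smartphone-camera tracker.
    profile = EvaluationProfile(
        method_name="bare-hand RGB tracker",
        hardware=HardwareSetup(sensors=["smartphone RGB camera"]),
        documentation=Documentation(questionnaires=["post-session usability"]),
        datasets=["EgoHands"],
    )
    print(profile)
    ```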

    Diminished reality using appearance and 3D geometry of internet photo collections

    No full text
    2013 IEEE International Symposium on Mixed and Augmented Reality (ISMAR 2013), 11-1. DOI: 10.1109/ISMAR.2013.6671759