
    The development of the Canadian Mobile Servicing System Kinematic Simulation Facility

    Canada will develop a Mobile Servicing System (MSS) as its contribution to the U.S./International Space Station Freedom. Components of the MSS will include a remote manipulator (SSRMS), a Special Purpose Dexterous Manipulator (SPDM), and a mobile base (MRS). To support requirements analysis and the evaluation of operational concepts related to the use of the MSS, a graphics-based kinematic simulation/human-computer interface facility has been created. The facility consists of the following elements: (1) a two-dimensional graphics editor allowing the rapid development of virtual control stations; (2) kinematic simulations of the space station remote manipulators (SSRMS and SPDM) and mobile base; and (3) a three-dimensional graphics model of the space station, MSS, orbiter, and payloads. These software elements, combined with state-of-the-art computer graphics hardware, provide the capability to prototype MSS workstations, evaluate MSS operational capabilities, and investigate the human-computer interface in an interactive simulation environment. The graphics technology involved in the development and use of this facility is described.
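The kinematic (as opposed to dynamic) simulations mentioned in element (2) compute manipulator poses from joint angles alone. As a rough illustration of the kind of computation involved, here is a minimal forward-kinematics sketch for a hypothetical 2-link planar arm; the actual SSRMS and SPDM are 7-degree-of-freedom manipulators, and none of this code comes from the facility described:

```python
# Minimal 2-link planar forward-kinematics sketch. Link lengths and joint
# angles are invented for illustration; a real manipulator simulation uses
# full 3D transforms and many more joints.
import math

def forward_kinematics(lengths, angles):
    """Return the end-effector (x, y) of a planar serial arm.

    `angles` are joint angles in radians, each relative to the previous link.
    """
    x = y = 0.0
    total = 0.0
    for length, angle in zip(lengths, angles):
        total += angle           # accumulate absolute link orientation
        x += length * math.cos(total)
        y += length * math.sin(total)
    return x, y

# Two 1 m links, both joints at 45 degrees.
x, y = forward_kinematics([1.0, 1.0], [math.pi / 4, math.pi / 4])
print(round(x, 3), round(y, 3))  # 0.707 1.707
```

A graphics front end like the one described would evaluate such a chain every frame to redraw the arm as operators command joint motions.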

    Context-Aware Mobile Augmented Reality Visualization in Construction Engineering Education

    Recent studies suggest that the number of students pursuing science, technology, engineering, and mathematics (STEM) degrees has been generally decreasing. An extensive body of research cites the lack of motivation and engagement in the learning process as a major underlying reason for this decline. It has been argued that, if properly implemented, instructional technology can enhance student engagement and the quality of learning. Therefore, the main goal of this research is to implement and assess the effectiveness of augmented reality (AR)-based pedagogical tools on student learning. For this purpose, two sets of experiments were designed and implemented in two different construction and civil engineering undergraduate-level courses at the University of Central Florida (UCF). The first experiment was designed to systematically assess the effectiveness of a context-aware mobile AR tool (CAM-ART) in a real classroom-scale environment. This tool was used to enhance traditional lecture-based instruction and information delivery by augmenting the contents of an ordinary textbook with computer-generated three-dimensional (3D) objects and other virtual multimedia (e.g., sound, video, graphs). The experiment was conducted on separate control and test groups, and pre- and post-performance data, as well as student perception of using CAM-ART, were collected through several feedback questionnaires. In the second experiment, a building design and assembly task competition was designed and conducted using a mobile AR platform. The pedagogical value of mobile AR-based instruction and information delivery to student learning in a large-scale classroom setting was also assessed and investigated. As in the first experiment, students were divided into control and test groups. Students' performance data, as well as their feedback, suggestions, and workload, were systematically collected and analyzed.
Data analysis showed that the mobile AR framework had a measurable and positive impact on students' learning. In particular, it was found that students in the test group (who used the AR tool) performed slightly better with respect to certain measures and spent more time on collaboration, communication, and exchanging ideas in both experiments. Overall, students ranked the effectiveness of the AR tool very high and stated that it has good potential to reform traditional teaching methods.
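The pre-/post-test comparison between control and test groups described above is commonly summarized with a normalized learning gain. The following sketch uses invented scores and a plain gain metric purely for illustration; it does not reproduce the study's data or its statistical analysis:

```python
# Hypothetical sketch of a pre-/post-test comparison. All scores are
# invented; the study's real data and significance tests are not shown.

def normalized_gain(pre, post, max_score=100.0):
    """Hake's normalized gain: fraction of the possible improvement
    (from pre-test score up to the maximum) actually achieved."""
    if max_score == pre:
        return 0.0
    return (post - pre) / (max_score - pre)

def mean_gain(pairs):
    gains = [normalized_gain(pre, post) for pre, post in pairs]
    return sum(gains) / len(gains)

# Invented example scores (pre, post) out of 100.
control = [(55, 65), (60, 68), (50, 62), (58, 66)]
test_ar = [(54, 72), (61, 78), (52, 70), (57, 75)]  # hypothetical AR group

print(f"control mean gain: {mean_gain(control):.2f}")
print(f"AR group mean gain: {mean_gain(test_ar):.2f}")
```

In a real analysis the per-group gains would then be compared with an appropriate significance test rather than by their means alone.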

    An evaluation of the Microsoft HoloLens for a manufacturing-guided assembly task

    Many studies have confirmed the benefits of using Augmented Reality (AR) work instructions over traditional digital or paper instructions, but few have compared the effects of different AR hardware for complex assembly tasks. For this research, previously published data using Desktop Model Based Instructions (MBI), Tablet MBI, and Tablet AR instructions were compared to new assembly data collected using AR instructions on the Microsoft HoloLens Head Mounted Display (HMD). Participants completed a mock wing assembly task, and measures such as completion time, error count, Net Promoter Score, and qualitative feedback were recorded. The HoloLens condition yielded faster completion times than all other conditions. HoloLens users also had lower error rates than those who used the non-AR conditions. Despite the performance benefits of the HoloLens AR instructions, users of this condition reported lower Net Promoter Scores than users of the Tablet AR instructions. The qualitative data showed that some users found the HoloLens device uncomfortable and its tracking not always exact. Although the user feedback favored the Tablet AR condition, the HoloLens condition resulted in significantly faster assembly times. As a result, it is recommended to use the HoloLens for complex guided assembly instructions with minor changes, such as allowing the user to toggle the AR instructions on and off at will. The results of this paper can help manufacturing stakeholders better understand the benefits of different AR technologies for manual assembly tasks.
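The Net Promoter Score used as one of the measures above is computed from 0-10 "likelihood to recommend" ratings: the percentage of promoters (9-10) minus the percentage of detractors (0-6). A minimal sketch with invented ratings:

```python
# Net Promoter Score (NPS) sketch. The ratings below are invented and do
# not come from the study; only the standard NPS formula is illustrated.

def net_promoter_score(ratings):
    """NPS = %promoters (9-10) - %detractors (0-6) on a 0-10 scale."""
    promoters = sum(1 for r in ratings if r >= 9)
    detractors = sum(1 for r in ratings if r <= 6)
    return 100.0 * (promoters - detractors) / len(ratings)

# Invented "would you recommend these instructions?" ratings.
tablet_ar = [9, 10, 9, 8, 10, 9]
hololens = [9, 7, 6, 8, 10, 5]

print(net_promoter_score(tablet_ar))  # ~83.3: 5 promoters, 0 detractors of 6
print(net_promoter_score(hololens))   # 0.0: 2 promoters, 2 detractors of 6
```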

    Reducing redundancy of real time computer graphics in mobile systems

    The goal of this thesis is to propose novel and effective techniques to eliminate the redundant, energy-wasting computations performed in real-time computer graphics applications, with special focus on mobile GPU micro-architecture. Improving the energy efficiency of CPU/GPU systems is key not only to extending battery life but also to increasing performance, because SoCs tend to be throttled when the load remains high for long periods, to avoid overheating above thermal limits. Prior studies pointed out that the CPU and especially the GPU are the principal energy consumers in the graphics subsystem, with off-chip main-memory accesses and the processors inside the GPU being the primary consumers. First, we focus on reducing redundant fragment-processing computations by improving the culling of hidden surfaces. During real-time graphics rendering, objects are processed by the GPU in the order they are submitted by the CPU, and occluded surfaces are often processed even though they will not end up being part of the final image. By the time the GPU realizes that an object, or part of it, is not going to be visible, all the activity required to compute and store its color has already been performed. We propose a novel architectural technique for mobile GPUs, Visibility Rendering Order (VRO), which reorders objects front-to-back entirely in hardware to maximize the culling effectiveness of the GPU and minimize overshading, hence reducing execution time and energy consumption. VRO exploits the fact that objects in animated graphics applications tend to keep their relative depth order across consecutive frames (temporal coherence) in order to provide the feeling of smooth transitions. VRO records the visibility information of one frame and uses it to reorder the objects of the following frame.
VRO only requires adding a small hardware unit to capture the visibility information and use it later to guide the rendering of the following frame. Moreover, VRO works in parallel with the graphics pipeline, so negligible performance overheads are incurred. We illustrate the benefits of VRO using various unmodified commercial 3D applications, for which VRO achieves a 27% speed-up and a 14.8% energy reduction on average. Second, we focus on avoiding redundant computations related to CPU-based Collision Detection (CD). Graphics applications such as 3D games represent a large percentage of downloaded applications for mobile devices, and the trend is towards more complex and realistic scenes with accurate 3D physics simulations. CD is one of the most important algorithms in any physics kernel, since it identifies the contact points between the objects of a scene and determines when they collide. However, real-time accurate CD is very expensive in terms of energy consumption. We propose Render-Based Collision Detection (RBCD), a novel energy-efficient high-fidelity CD scheme that leverages intermediate results of the rendering pipeline to perform CD, so that redundant tasks are done just once. Comparing RBCD with a conventional CD executed entirely on the CPU, we show that its execution time is reduced by almost three orders of magnitude (a 600x speedup), because most of the CD work in our model comes for free by reusing intermediate image-rendering results. Such a dramatic time improvement may translate into a higher frame rate when the physics simulation is on the critical path, although this is not necessarily the case. However, the most important advantage of our technique is the enormous energy saving that results from eliminating a long and costly CPU computation and converting it into a few simple operations executed by specialized hardware within the GPU. Our results show that the energy consumed by CD is reduced on average by a factor of 448x (i.e., by 99.8%).
These dramatic benefits are accompanied by a higher-fidelity CD analysis (i.e., with finer granularity), which improves the quality and realism of the application.
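The core idea behind VRO, reordering this frame's draw list using the depth order observed in the previous frame, can be sketched at a high level as follows. The helper names and scene data are invented, and the real VRO is a hardware mechanism inside the GPU rather than application code:

```python
# Illustrative sketch of the temporal-coherence idea behind Visibility
# Rendering Order (VRO): sort frame N+1's draw list front-to-back using
# the per-object depths observed in frame N, so early depth testing can
# reject occluded fragments. Everything here is invented for illustration.

def render(objects, prev_frame_depth):
    """Draw objects front-to-back using last frame's per-object depth."""
    # Objects with no recorded depth yet are conservatively drawn last.
    order = sorted(objects, key=lambda o: prev_frame_depth.get(o, float("inf")))
    new_depths = {}
    for obj in order:
        new_depths[obj] = draw_and_measure_depth(obj)  # hypothetical helper
    return order, new_depths  # new_depths guides the next frame

def draw_and_measure_depth(obj):
    # Stand-in for rasterization; returns the object's nearest depth.
    return SCENE_DEPTHS[obj]

SCENE_DEPTHS = {"player": 1.0, "wall": 5.0, "sky": 100.0}
order, depths = render(["sky", "wall", "player"], {})      # frame 0: no info
order, depths = render(["sky", "wall", "player"], depths)  # frame 1: sorted
print(order)  # ['player', 'wall', 'sky'] -- front-to-back
```

Because depth order is merely a hint, a wrong guess after sudden motion only costs some overshading; correctness is still guaranteed by the depth test.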

    Exploring the Multi-touch Interaction Design Space for 3D Virtual Objects to Support Procedural Training Tasks

    Multi-touch interaction has the potential to be an important input method for realistic training in 3D environments. However, multi-touch interaction has not been explored much for 3D tasks, especially when trying to leverage realistic, real-world interaction paradigms. A systematic inquiry into what realistic gestures look like for 3D environments is required to understand how users translate real-world motions to multi-touch motions. Once those gestures are defined, it is important to see how we can leverage them to enhance training tasks. To explore the interaction design space for 3D virtual objects, we began by conducting a first study of user-defined gestures. From this work we identified a taxonomy and design guidelines for 3D multi-touch gestures, and how perspective view plays a role in the chosen gesture. We also identified a desire to use pressure on capacitive touch screens. Since the best way to implement pressure still required investigation, our second study evaluated two different pressure-estimation techniques in two different scenarios. Once we had a taxonomy of gestures, we wanted to examine whether implementing these realistic multi-touch interactions in a training environment provided training benefits. Our third study compared multi-touch interaction to standard 2D mouse interaction and to actual physical training, and found that multi-touch interaction performed better than the 2D mouse and as well as physical training. This study showed us that multi-touch training using a realistic gesture set can perform as well as training on the actual apparatus. One limitation of the first training study was that the user's perspective was constrained, to allow us to focus on isolating the gestures. Since users can change their perspective in a real-life training scenario, and thereby gain spatial knowledge of components, we wanted to see whether allowing users to alter their perspective helped or hindered training.
Our final study compared training with Unconstrained multi-touch interaction, Constrained multi-touch interaction, or training on the actual physical apparatus. Results show that the Unconstrained multi-touch and Physical groups had significantly better performance scores than the Constrained multi-touch group, with no significant difference between the Unconstrained multi-touch and Physical groups. Our results demonstrate that allowing users more freedom to manipulate objects as they would in the real world benefits training. In addition to the research already performed, we propose several avenues for future research into the interaction design space for 3D virtual objects that we believe will be of value to researchers and designers of 3D multi-touch training environments.

    Comparative study of AR versus video tutorials for minor maintenance operations

    Augmented Reality (AR) has become a mainstream technology in the development of solutions for repair and maintenance operations. Although most AR solutions are still limited to specific industrial contexts, some consumer electronics companies have started to offer pre-packaged AR solutions as an alternative to video-based tutorials (VT) for minor maintenance operations. In this paper, we present a comparative study of the acquired knowledge and user perception achieved with AR and VT solutions in some maintenance tasks of IT equipment. The results indicate that both systems help users to acquire knowledge in various aspects of equipment maintenance. Although no statistically significant differences were found between the AR and VT solutions, users scored higher on the AR version in all cases. Moreover, users explicitly preferred the AR version when evaluating three different usability and satisfaction criteria. For the AR version, a strong and significant correlation was found between satisfaction and the achieved knowledge. Since the AR solution achieved similar learning results with higher usability scores than the video-based tutorials, these results suggest that AR solutions are the most effective approach to substituting the typical paper-based instructions in consumer electronics. This work has been supported by Spanish MINECO and EU ERDF programs under grant RTI2018-098156-B-C55. Morillo, P.; García García, I.; Orduña, J. M.; Fernández, M.; Juan, M. (2020). Comparative study of AR versus video tutorials for minor maintenance operations. Multimedia Tools and Applications 79(11-12):7073-7100. https://doi.org/10.1007/s11042-019-08437-9
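The satisfaction-knowledge correlation reported for the AR version can be illustrated with a plain Pearson correlation coefficient; the scores below are invented, and the paper's actual data and statistical procedure are not reproduced here:

```python
# Pearson correlation sketch for a satisfaction-vs-knowledge analysis.
# All per-user scores are invented for illustration only.
import math

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length samples."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Invented per-user satisfaction (1-5 Likert) and knowledge-test (0-10) scores.
satisfaction = [4, 5, 3, 4, 5, 2, 4]
knowledge = [7, 9, 6, 7, 8, 4, 8]

print(f"r = {pearson_r(satisfaction, knowledge):.2f}")
```

With Likert-scale data a rank-based statistic such as Spearman's rho is often preferred; Pearson is used here only because it is the simplest to state.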