200 research outputs found
A comparative study using an autostereoscopic display with augmented and virtual reality
Advances in display devices are facilitating the integration of stereoscopic visualization into our daily lives. However, autostereoscopic visualization has not been extensively exploited. In this paper, we present a system that combines Augmented Reality (AR) and autostereoscopic visualization, and we report the first study comparing different aspects of using an autostereoscopic display with AR and with Virtual Reality (VR), in which 39 children aged 8 to 10 participated. No statistically significant differences were found between AR and VR. However, scores were very high on nearly all of the questions, and the children scored the AR version higher in all cases. Moreover, the children explicitly preferred the AR version (81%). For the AR version, a strong, significant correlation was found between the use of the autostereoscopic screen in games and seeing the virtual object on the marker. For the VR version, two strong, significant correlations were found: the first between ease of play and the use of the rotatory controller, and the second between depth perception and the overall game score. Therefore, combining AR or VR with autostereoscopic visualization is a viable option for developing edutainment systems for children.

This work was funded by the Spanish APRENDRA project (TIN2009-14319-C02). We would like to thank the following for their contributions: AIJU, the "Escola d'Estiu", and especially Ignacio Segui, Juan Cano, Miguelon Gimenez, and Javier Irimia; this work would not have been possible without their collaboration. We also thank the ALF3D project (TIN2009-14103-03) for the autostereoscopic display; Roberto Vivo, Rafa Gaitan, Severino Gonzalez, and M. Jose Vicent for their help; the children's parents, who signed the agreement allowing their children to participate in the study; the children who took part; and the ETSInf for letting us use its facilities during the testing phase.

Arino, J.; Juan Lizandra, M. C.; Gil Gómez, J. A.; Mollá Vayá, R. P. (2014). A comparative study using an autostereoscopic display with augmented and virtual reality. Behaviour and Information Technology, 33(6), 646-655. https://doi.org/10.1080/0144929X.2013.815277
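The abstract reports strong, significant correlations between questionnaire items but does not detail the statistical procedure. A minimal sketch of such a check, assuming ordinal Likert-style scores and hypothetical item names (Spearman's rank correlation is a common choice for this kind of data; the paper may have used a different test):

```python
# Rank-correlation check between two questionnaire items.
# Column names and scores are hypothetical, for illustration only.
import numpy as np
from scipy.stats import spearmanr

# Hypothetical 1-5 Likert scores from the AR version of the questionnaire.
screen_use_in_games = np.array([5, 4, 5, 3, 4, 5, 4, 5, 3, 4])
object_seen_on_marker = np.array([5, 4, 4, 3, 4, 5, 5, 5, 3, 4])

rho, p_value = spearmanr(screen_use_in_games, object_seen_on_marker)
print(f"Spearman rho = {rho:.2f}, p = {p_value:.3f}")
```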
Visuohaptic augmented feedback for enhancing motor skills acquisition
Serious games are accepted as an effective approach to deliver augmented feedback in motor (re-)learning processes. The multi-modal nature of conventional computer games (e.g. audiovisual representation), plus the ability to interact via haptic-enabled inputs, provides a more immersive experience. Thus, particular disciplines such as medical education, in which frequent hands-on rehearsal plays a key role in learning core motor skills (e.g. physical palpation), may benefit from this technique. Challenges such as the impracticality of tutors verbalising the palpation experience, as well as ethical considerations, may prevent medical students from correctly learning core palpation skills. This work presents a new data glove, built from off-the-shelf components, which captures pressure sensitivity and is designed to provide feedback for palpation tasks. The data glove is used to control a serious game, adapted from the infinite-runner genre, to improve motor skill acquisition. A comparative evaluation of the usability and effectiveness of the method using multimodal visualisations, as part of a larger study on enhancing pressure sensitivity, is presented. Thirty participants, divided into a game-playing group (n = 15) and a control group (n = 15), were invited to perform a simple palpation task. In a blind-folded transfer test, the game-playing group significantly outperformed the control group, in which an abstract visualisation of force was provided to the users. The game-based training approach was positively described by the game-playing group as enjoyable and engaging.
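The abstract does not specify how the glove's pressure readings reach the game. A minimal sketch, assuming the off-the-shelf sensor streams one integer reading per line over a serial link (a hypothetical protocol, not the paper's), could map readings to coarse game commands like this:

```python
# Poll a pressure-sensing glove over a serial link and map the reading to a
# coarse game command. Port, baud rate, ADC range, and the one-integer-per-line
# message format are assumptions for illustration.
import serial  # pyserial

PORT = "/dev/ttyUSB0"   # hypothetical device path
BAUD = 115200
MAX_RAW = 1023          # assumed 10-bit ADC range

def read_pressure(conn: serial.Serial) -> float:
    """Return the latest pressure reading normalised to [0, 1]."""
    line = conn.readline().decode("ascii", errors="ignore").strip()
    raw = int(line) if line.isdigit() else 0
    return min(raw / MAX_RAW, 1.0)

def pressure_to_command(pressure: float, target: float = 0.5, tol: float = 0.1) -> str:
    """Map the applied pressure to a command for the runner game."""
    if pressure < target - tol:
        return "press_harder"
    if pressure > target + tol:
        return "press_softer"
    return "on_target"

if __name__ == "__main__":
    with serial.Serial(PORT, BAUD, timeout=0.1) as conn:
        while True:
            print(pressure_to_command(read_pressure(conn)))
```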
Building an Open Source Framework for Virtual Medical Training
This paper presents a framework for building medical training applications using virtual reality, together with a tool that helps instantiate the framework's classes. The main purpose is to make it easier to build virtual reality applications in the medical training area, targeting systems that simulate biopsy exams and providing deformation, collision detection, and stereoscopy functionalities. Instantiating the classes allows quick implementation of tools for such purposes, thus reducing errors and keeping costs low thanks to the use of open source tools. Using the instantiation tool, the process of building applications is fast and easy, so computer programmers can obtain an initial application and adapt it to their needs. The tool allows the user to include, delete, and edit parameters in the chosen functionalities, as well as to store these parameters for future use. In order to verify the efficiency of the framework, some case studies are presented.
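As an illustration of what such a class instantiation might produce, the sketch below encodes a hypothetical configuration that selects the deformation, collision detection, and stereoscopy functionalities and stores its parameters for later reuse; all class, field, and file names are invented, not taken from the framework:

```python
# Hypothetical instantiation of a biopsy-simulation configuration with
# selectable functionalities, persisted for future use.
import json
from dataclasses import dataclass, asdict, field

@dataclass
class BiopsySimulationConfig:
    deformation: bool = True
    collision_detection: bool = True
    stereoscopy: bool = False
    parameters: dict = field(default_factory=dict)

config = BiopsySimulationConfig(
    stereoscopy=True,
    parameters={"needle_stiffness": 0.8, "mesh": "liver_lowres.obj"},
)

# Store the chosen parameters so the instantiation can be reloaded later.
with open("biopsy_config.json", "w") as fh:
    json.dump(asdict(config), fh, indent=2)
```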
From Physical to Virtual: Widening the Perspective on Multi-Agent Environments
The final publication is available at Springer via http://dx.doi.org/10.1007/978-3-319-23850-0_9

For more than a decade, the environment has been seen as a key element when analyzing, developing, or deploying Multi-Agent System (MAS) applications. In particular, it has become a key concept for the development of multi-agent platforms, as it has for many applications in the area of location-based, distributed systems. An emerging, prominent application area for MAS is Virtual Environments. The underlying technology has evolved to the point that such applications have moved from science fiction novels into research papers and even real deployments. Moreover, current technologies enable MAS to be key components of these virtual environments.
In this paper, we widen the concept of the environment of a MAS to encompass new and mixed forms of environment: physical, virtual, simulated, and so on. We analyze the currently most interesting application domains along three dimensions: the way different "realities" are mixed via the environment, the underlying natures of the agents, and the possible forms and sophistication of their interactions. In addition to this characterization, we discuss how this widened concept of possible environments influences the support it can give for developing applications in the respective domains.

Carrascosa Casamayor, C.; Klugl, F.; Ricci, A.; Boissier, O. (2015). From Physical to Virtual: Widening the Perspective on Multi-Agent Environments. In Agent Environments for Multi-Agent Systems IV: 4th International Workshop, E4MAS 2014 - 10 Years Later, Paris, France, May 6, 2014, 133-146. https://doi.org/10.1007/978-3-319-23850-0_9
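To make the three-dimensional characterization concrete, the following sketch encodes a domain profile along the dimensions named in the abstract (how realities are mixed, the nature of the agents, the form of interaction); the enum values and the example profile are illustrative simplifications, not the paper's taxonomy:

```python
# Illustrative encoding of the three dimensions used to characterise MAS
# application domains; values are simplified examples.
from dataclasses import dataclass
from enum import Enum

class RealityMix(Enum):
    PHYSICAL = "physical"
    VIRTUAL = "virtual"
    MIXED = "mixed"

class AgentNature(Enum):
    SOFTWARE = "software"
    HUMAN = "human"
    ROBOT = "robot"

class InteractionForm(Enum):
    DIRECT_MESSAGING = "direct messaging"
    STIGMERGY = "stigmergy"  # indirect, via traces left in the environment
    EMBODIED = "embodied interaction"

@dataclass
class DomainProfile:
    name: str
    reality: RealityMix
    agents: tuple[AgentNature, ...]
    interaction: InteractionForm

mirror_world = DomainProfile(
    name="agent-based mirror world",
    reality=RealityMix.MIXED,
    agents=(AgentNature.SOFTWARE, AgentNature.HUMAN),
    interaction=InteractionForm.STIGMERGY,
)
```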
Structured floral arrangement programme for improving visuospatial working memory in schizophrenia
Several cognitive therapies have been developed for patients with schizophrenia. However, little is known about the outcomes of these therapies in terms of non-verbal/visuospatial working memory, even though it may affect patients' social outcomes. In the present pilot study, we investigated the effect of a structured floral arrangement (SFA) programme in which participants were required to create symmetrical floral arrangements. In this programme, the arrangement pattern and the order in which each natural material is placed are predetermined. Participants have to identify where to place each material and memorise the position temporarily in order to complete the floral arrangement. The patients with schizophrenia who participated in this programme showed significant improvement in their scores on the backward version of a block-tapping task, whereas non-treated control patients showed no such improvement. These results suggest that the SFA programme may positively stimulate visuospatial working memory in these patients.
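The outcome measure is the backward version of a block-tapping (Corsi-type) task, in which the participant reproduces a presented block sequence in reverse order. The abstract does not give the scoring rule, so the sketch below assumes a simple span score (the longest sequence reproduced correctly in reverse):

```python
# Score backward block-tapping trials: a trial is correct if the response
# reproduces the presented block sequence in reverse order. The span rule
# (longest correctly reversed sequence) is an assumption for illustration.
def backward_trial_correct(presented: list[int], response: list[int]) -> bool:
    return response == presented[::-1]

def backward_span(trials: list[tuple[list[int], list[int]]]) -> int:
    """Return the longest sequence length reproduced correctly in reverse."""
    return max(
        (len(presented) for presented, response in trials
         if backward_trial_correct(presented, response)),
        default=0,
    )

trials = [
    ([3, 7, 1], [1, 7, 3]),              # correct reversal, length 3
    ([2, 5, 8, 4], [4, 8, 5, 2]),        # correct reversal, length 4
    ([6, 1, 9, 3, 7], [7, 3, 1, 9, 6]),  # incorrect reversal
]
print(backward_span(trials))  # -> 4
```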
- …