1,423 research outputs found

    Exploring individual user differences in the 2D/3D interaction with medical image data

    User-centered design is often performed without regard to individual user differences. In this paper, we report the results of an empirical study aimed at evaluating whether computer experience and demographic user characteristics have an effect on the way people interact with visualized medical data in a 3D virtual environment using 2D and 3D input devices. We analyzed the interaction through performance data, questionnaires and observations. The results suggest that differences in gender, age and game experience have an effect on people's behavior and task performance, as well as on subjective user preferences.

    Brain Analysis While Playing 2D and 3D Video Games of Nintendo 3DS Using Electroencephalogram (EEG)

    To gain knowledge of the human brain and study human perception of stimulated events, emotions and senses, scientists have been using a few main methods: Electroencephalography (EEG), Computerized Axial Tomography (CAT) scans, Magnetic Resonance Imaging (MRI), Functional Magnetic Resonance Imaging (fMRI) and Magnetoencephalography (MEG). To date, these technologies have helped scientists, researchers and doctors understand how the brain works and perform analyses of it [1]. Meanwhile, this project focuses on the use of EEG to analyse the human brain. The EEG shows the electrical impulses of the brain, which can be recorded in the form of waves. Recently, the emergence of the autostereoscopic 3D technology of the Nintendo 3DS has brought a new gaming experience, as players can see in 3D. The objective of this project is to use EEG equipment to analyse the activity of the human brain when playing a Nintendo 3DS console game in 2-dimensional (2D) mode and 3-dimensional (3D) mode. The purpose of this project is also to study and compare human brain perception of 2D and 3D gaming. Our brain perceives 2D and 3D moving images of video games differently, and we want to study how different they are. In the end, this project will explain and conclude how the human brain responds to 2D and 3D gaming on the Nintendo 3DS console and what difference they make in the visual system of the brain.
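A typical first step in this kind of EEG comparison is computing relative band power from a recorded channel, for example alpha-band power contrasted between 2D and 3D play sessions. The sketch below is illustrative only, not the project's actual pipeline; the sampling rate, band limits, and synthetic signal are assumptions.

```python
# Illustrative sketch (not from the project): relative alpha-band power
# of an EEG channel. Sampling rate and band limits are assumptions.
import numpy as np

def band_power(signal, fs, lo=8.0, hi=13.0):
    """Fraction of total spectral power falling in [lo, hi] Hz."""
    spectrum = np.abs(np.fft.rfft(signal)) ** 2
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    band = (freqs >= lo) & (freqs <= hi)
    return spectrum[band].sum() / spectrum.sum()

fs = 256.0                                 # assumed sampling rate (Hz)
t = np.arange(0, 2.0, 1.0 / fs)            # 2 s of samples
alpha_wave = np.sin(2 * np.pi * 10.0 * t)  # synthetic 10 Hz alpha rhythm
```

A session comparison would then reduce to comparing `band_power` values computed over windows of the 2D and 3D recordings.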

    Analyzing autostereoscopic environment configurations for the design of videogames

    Stereoscopic devices are becoming more popular every day. The 3D visualization that these displays offer is being used by videogame designers to enhance the user's game experience. Autostereoscopic monitors offer the possibility of obtaining this 3D visualization without the need for an extra device. This fact makes them more attractive to videogame developers. However, the configuration of the cameras that makes it possible to obtain an immersive 3D visualization inside the game is still an open problem. In this paper, several system configurations that create autostereoscopic visualization in a 3D game engine were evaluated to obtain a good accommodation of the user experience with the game. To achieve this, user tests that take into account the movement of the player were carried out to evaluate different camera configurations, namely, dynamic and static converging optical axes and parallel optical axes. The purpose of these tests is to evaluate the user experience regarding visual discomfort resulting from the movement of the objects, in order to assess the preference for one configuration or another. The results show that users tend to prefer the parallel optical axis configuration. This configuration seems to be optimal because the area where the moving objects remain in focus is deeper than in the other configurations.
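The parallel optical axis configuration the users preferred is commonly realized as two cameras sharing a view direction, each with an asymmetric (off-axis) frustum skewed toward a common screen plane. The function and all parameter values below are illustrative assumptions, not taken from the paper.

```python
# Hedged sketch: off-axis frustum bounds for a parallel-axis stereo pair.
# Eye separation and screen dimensions are illustrative assumptions.

def off_axis_frustum(eye_x, near, far, screen_w, screen_h, screen_dist):
    """Asymmetric frustum for one eye of a parallel-axis stereo camera.

    The camera looks straight ahead (parallel optical axes); the frustum
    is skewed so both eyes image the same physical screen plane.
    """
    scale = near / screen_dist             # project screen onto near plane
    left   = (-screen_w / 2 - eye_x) * scale
    right  = ( screen_w / 2 - eye_x) * scale
    bottom = -screen_h / 2 * scale
    top    =  screen_h / 2 * scale
    return (left, right, bottom, top, near, far)

eye_sep = 0.065                            # ~65 mm interocular distance
left_f  = off_axis_frustum(-eye_sep / 2, 0.1, 100.0, 0.6, 0.34, 0.7)
right_f = off_axis_frustum(+eye_sep / 2, 0.1, 100.0, 0.6, 0.34, 0.7)
```

The two frusta are mirror images of each other, which is what keeps zero-disparity content on the screen plane without converging the optical axes.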

    Development of Immersive and Interactive Virtual Reality Environment for Two-Player Table Tennis

    Although the history of Virtual Reality (VR) is only about half a century old, all kinds of technologies in the VR field are developing rapidly. VR is a computer-generated simulation that replaces or augments the real world through various media. In a VR environment, participants have a perception of "presence", which can be described by the sense of immersion and intuitive interaction. One of the major VR applications is in the field of sports, in which a life-like sports environment is simulated and the body actions of players can be tracked and represented using VR tracking and visualisation technology. In the entertainment field, exergaming, which merges video games with physical exercise by employing tracking or even 3D display technology, can be considered a small-scale VR. For the research presented in this thesis, a novel realistic real-time table tennis game combining immersive, interactive and competitive features is developed. The implemented system integrates the InterSense tracking system, a SwissRanger 3D camera and a three-wall rear-projection stereoscopic screen. The InterSense tracking system is based on ultrasonic and inertial sensing techniques, which provide fast and accurate 6-DOF (i.e. six degrees of freedom) tracking information for four trackers. Two trackers are placed on the two players' heads to provide the players' viewing positions. The other two trackers are held by the players as the racquets. The SwissRanger 3D camera is mounted on top of the screen to capture the player’

    A comparative study using an autostereoscopic display with augmented and virtual reality

    Advances in display devices are facilitating the integration of stereoscopic visualization into our daily lives. However, autostereoscopic visualization has not been extensively exploited. In this paper, we present a system that combines Augmented Reality (AR) and autostereoscopic visualization. We also present the first study that compares different aspects of using an autostereoscopic display with AR and VR, in which 39 children from 8 to 10 years old participated. In our study, no statistically significant differences were found between AR and VR. However, the scores were very high in nearly all of the questions, and the children also scored the AR version higher in all cases. Moreover, the children explicitly preferred the AR version (81%). For the AR version, a strong and significant correlation was found between the use of the autostereoscopic screen in games and seeing the virtual object on the marker. For the VR version, two strong and significant correlations were found: the first between the ease of play and the use of the rotatory controller, and the second between depth perception and the game's global score. Therefore, combinations of AR and VR with autostereoscopic visualization are possibilities for developing edutainment systems for children. This work was funded by the Spanish APRENDRA project (TIN2009-14319-C02). We would like to thank the following for their contributions: AIJU, the "Escola d'Estiu", and especially Ignacio Segui, Juan Cano, Miguelon Gimenez, and Javier Irimia; the ALF3D project (TIN2009-14103-03) for the autostereoscopic display; Roberto Vivo, Rafa Gaitan, Severino Gonzalez, and M. Jose Vicent for their help; the children's parents who signed the agreement to allow their children to participate in the study; the children who participated in the study; and the ETSInf for letting us use its facilities during the testing phase. This work would not have been possible without their collaboration.
    Arino, J.; Juan Lizandra, MC.; Gil Gómez, JA.; Mollá Vayá, RP. (2014). A comparative study using an autostereoscopic display with augmented and virtual reality. Behaviour and Information Technology, 33(6), 646-655. https://doi.org/10.1080/0144929X.2013.815277

    PhysioVR: a novel mobile virtual reality framework for physiological computing

    Virtual Reality (VR) is morphing into a ubiquitous technology by leveraging smartphones and screenless cases to provide highly immersive experiences at a low price point. The result of this shift in paradigm is now known as mobile VR (mVR). Although mVR offers numerous advantages over conventional immersive VR methods, one of its biggest limitations is the set of interaction pathways available for mVR experiences. Using physiological computing principles, we created the PhysioVR framework, an open-source software tool developed to facilitate the integration of physiological signals measured through wearable devices into mVR applications. PhysioVR includes heart rate (HR) signals from Android wearables, electroencephalography (EEG) signals from a low-cost brain-computer interface, and electromyography (EMG) signals from a wireless armband. The physiological sensors are connected to a smartphone via Bluetooth, and PhysioVR streams the data using the UDP communication protocol, allowing multicast transmission to a third-party application such as the Unity3D game engine. Furthermore, the framework provides bidirectional communication with the VR content, allowing external event triggering with real-time control as well as data recording options. We developed a demo game project called EmoCat Rescue, which encourages players to modulate HR levels in order to successfully complete the in-game mission. EmoCat Rescue is included in the PhysioVR project, which can be freely downloaded. This framework simplifies the acquisition, streaming and recording of multiple physiological signals and parameters from wearable consumer devices, providing a single and efficient interface to create novel physiologically-responsive mVR applications.
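The UDP streaming step described above can be sketched as a small receiver on the game-engine side. The port number and the "SIGNAL,VALUE" datagram format are assumptions for illustration; the actual PhysioVR wire format may differ.

```python
# Hedged sketch of a consumer for a PhysioVR-style UDP stream.
# Port and packet layout ("HR,72.0") are illustrative assumptions.
import socket

def parse_packet(data: bytes):
    """Parse an assumed 'SIGNAL,VALUE' datagram into (signal, value)."""
    signal, value = data.decode("ascii").split(",")
    return signal, float(value)

def make_listener(port=5005):
    """Bind a UDP socket that receives datagrams on the given port."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(("", port))          # accept datagrams from any sender
    return sock

# Example: decode a heart-rate datagram as a Unity3D-side consumer might.
signal, bpm = parse_packet(b"HR,72.0")
```

A game-engine script would poll such a socket each frame and feed the decoded values into the gameplay logic (e.g. the HR-modulation mission of EmoCat Rescue).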

    Stereoscopic Depth Perception Through Foliage

    Both humans and computational methods struggle to discriminate the depths of objects hidden beneath foliage. However, such discrimination becomes feasible when we combine computational optical synthetic aperture sensing with the human ability to fuse stereoscopic images. For object identification tasks, as required in search and rescue, wildlife observation, surveillance, and early wildfire detection, depth assists in differentiating true from false findings, such as people, animals, or vehicles vs. sun-heated patches at the ground level or in the tree crowns, or ground fires vs. tree trunks. We used video captured by a drone above dense woodland to test users' ability to discriminate depth. We found that this is impossible when viewing monoscopic video and relying on motion parallax. The same was true with stereoscopic video because of the occlusions caused by foliage. However, when synthetic aperture sensing was used to reduce occlusions and disparity-scaled stereoscopic video was presented, human observers successfully discriminated depth, whereas computational (stereoscopic matching) methods remained unsuccessful. This shows the potential of systems that exploit the synergy between computational methods and human vision to perform tasks that neither can perform alone.
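The core idea of synthetic aperture sensing, integrating many registered drone images focused on a common plane so that occluding foliage blurs away, can be sketched as a shift-and-average over frames. The scheme and data below are illustrative assumptions, not the authors' implementation.

```python
# Hedged sketch of synthetic aperture integration: each frame is shifted
# by its per-frame disparity for an assumed focus plane, then averaged.
# Content on the focus plane aligns and stays sharp; occluders captured
# from different viewpoints do not align and are averaged away.
import numpy as np

def integrate_frames(frames, shifts):
    """Average frames after shifting each by its (dy, dx) disparity.

    frames: list of 2D arrays (grayscale images from drone positions)
    shifts: list of integer (dy, dx) shifts aligning the focus plane
    """
    acc = np.zeros_like(frames[0], dtype=float)
    for frame, (dy, dx) in zip(frames, shifts):
        acc += np.roll(frame, (dy, dx), axis=(0, 1))
    return acc / len(frames)
```

With many viewpoints, an occluder that covers the target in one frame is absent in most others, which is what restores visibility of the ground plane.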

    Interactive natural user interfaces

    For many years, science fiction entertainment has showcased holographic technology and futuristic user interfaces that have stimulated the world's imagination. Movies such as Star Wars and Minority Report portray characters interacting with free-floating 3D displays and manipulating virtual objects as though they were tangible. While these futuristic concepts are intriguing, it's difficult to find a commercial, interactive holographic video solution in an everyday electronics store. As used in this work, the term holography refers to artificially created, free-floating objects, whereas the traditional term refers to the recording and reconstruction of 3D image data from 2D mediums. This research addresses the need for a feasible technological solution that allows users to work with projected, interactive and touch-sensitive 3D virtual environments. It aims to construct an interactive holographic user interface system by consolidating existing commodity hardware and interaction algorithms. In addition, this work studies best design practices for human-centric factors related to 3D user interfaces. The problem of 3D user interfaces has been well researched. When portrayed in science fiction, futuristic user interfaces usually consist of a holographic display, interaction controls and feedback mechanisms. In reality, holographic displays are usually realized with volumetric or multi-parallax technology. In this work, a novel holographic display is presented which leverages a mini-projector to produce a free-floating image on a fog-like surface. The holographic user interface system consists of a display component, which projects the free-floating image; a tracking component, which allows the user to interact with the 3D display via gestures; and a software component, which drives the complete hardware system. After examining this research, readers will be well-informed on how to build an intuitive, eye-catching holographic user interface system for various application arenas.

    GPS-MIV: The General Purpose System for Multi-display Interactive Visualization

    The new age of information has created opportunities for inventions like the Internet, which give us access to tremendous quantities of data. But with this increase in information comes the need to make sense of it by manipulating it to reveal hidden patterns. Data visualization systems provide the tools to reveal patterns and filter information, aiding the processes of insight and decision making. The purpose of this thesis is to develop and test a data visualization system, the General Purpose System for Multi-display Interactive Visualization (GPS-MIV). GPS-MIV is a software system allowing the user to visualize data graphically and interact with it. At the core of the system is a graphics system that displays different computer-generated scenes from multiple perspectives and with multiple views. Additionally, GPS-MIV provides interaction for the user to explore the scene.

    Recent evidence on visual-spatial abilities for surgical training: a scoping review

    Background: Understanding the relationships between structures is critical for surgical trainees. However, the heterogeneity of the literature on visual-spatial ability (VSA) in surgery makes it challenging for educators to make informed decisions on incorporating VSA into their programs. We conducted a scoping review of the literature on VSA in surgery to provide a map of the literature and identify where gaps still exist for future research. Methods: We searched databases until December 2019 using keywords related to VSA and surgery. The resulting articles were independently screened by two researchers for inclusion in our review. Results: We included 117 articles in the final review. Fifty-nine articles reported significant correlations between VSA tests and surgical performance, and this association is supported by neuroimaging studies. However, it remains unclear whether VSA should be incorporated into trainee selection and whether there is a benefit of three-dimensional (3D) over two-dimensional (2D) training. Conclusions: It appears that VSA correlates with surgical performance in the simulated environment, particularly for novice learners. Based on our findings, we make suggestions for how surgical educators may use VSA to support novice learners. Further research should determine whether VSA remains correlated to surgical performance when trainees move into the operative environment.