32 research outputs found

    Parallax Motion Effect Generation Through Instance Segmentation And Depth Estimation

    Full text link
    Stereo vision is a growing topic in computer vision due to the many opportunities and applications this technology offers for the development of modern solutions, such as virtual and augmented reality applications. Motion parallax estimation is a promising technique for enhancing the user's experience in three-dimensional virtual environments. In this paper, we propose an algorithm for generating parallax motion effects from a single image, taking advantage of state-of-the-art instance segmentation and depth estimation approaches. This work also presents a comparison among such approaches to investigate the trade-off between efficiency and quality of the parallax motion effects, taking into consideration a multi-task learning network capable of estimating instance segmentation and depth estimation at once. Experimental results and visual quality assessment indicate that the PyD-Net network (depth estimation) combined with the Mask R-CNN or FBNet networks (instance segmentation) can produce parallax motion effects with good visual quality. Comment: 2020 IEEE International Conference on Image Processing (ICIP), Abu Dhabi, United Arab Emirates.
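    The layering idea behind such a parallax effect can be sketched in a few lines: split the image into instance layers and shift each layer in proportion to its inverse depth, so near objects move more than far ones. This is only a minimal illustration under stated assumptions (the paper's actual pipeline uses PyD-Net for depth and Mask R-CNN or FBNet for segmentation; `parallax_frame` and its shift rule are hypothetical, and vacated pixels simply keep their background values):

```python
import numpy as np

def parallax_frame(image, depth, masks, shift_px=8):
    """Render one parallax frame: each segmented instance is shifted
    horizontally in proportion to its inverse depth, so nearer objects
    move more than farther ones (an illustrative sketch, not the
    paper's full pipeline)."""
    h, w, _ = image.shape
    frame = image.copy()  # background layer stays fixed
    # Composite far-to-near so closer instances end up on top.
    order = sorted(masks, key=lambda m: depth[m].mean(), reverse=True)
    for mask in order:
        d = depth[mask].mean()                    # mean depth of the instance
        dx = int(round(shift_px / max(d, 1e-6)))  # nearer => larger shift
        ys, xs = np.nonzero(mask)
        xs_new = np.clip(xs + dx, 0, w - 1)
        frame[ys, xs_new] = image[ys, xs]
    return frame
```

    Rendering a sequence of frames with a varying `shift_px` would produce the animated motion-parallax effect described in the abstract.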

    The effects of belongingness on the Simultaneous Lightness Contrast: A virtual reality study

    Get PDF
    Simultaneous Lightness Contrast (SLC) is the phenomenon whereby a grey patch on a dark background appears lighter than an equal patch on a light background. Interestingly, the lightness difference between these patches undergoes substantial augmentation when the two backgrounds are patterned, thereby forming the articulated-SLC display. There are two main interpretations of these phenomena: the midlevel interpretation maintains that the visual system groups the luminance within a set of contiguous frameworks, whilst the high-level one claims that the visual system splits the luminance into separate overlapping layers corresponding to separate physical contributions. This research aimed to test these two interpretations by systematically manipulating the viewing distance and the horizontal distance between the backgrounds of both the articulated and plain SLC displays. An immersive 3D Virtual Reality system was employed to reproduce identical alignment and distances, as well as to isolate participants from interfering luminance. Results showed that reducing the viewing distance increased contrast in both the plain- and articulated-SLC displays, and that increasing the horizontal distance between the backgrounds decreased contrast in the articulated condition but increased contrast in the plain condition. These results suggest that a comprehensive lightness theory should combine the two interpretations.

    Building educational environments with augmented reality: a user-centered process

    Get PDF
    In recent years, a proliferation of new technologies has emerged to improve and facilitate teaching and the assimilation of information. One of these technologies is Augmented Reality (AR), which enriches the physical (real) environment with virtual objects. However, designing and using such systems is considerably more complex than designing and using 2D interfaces and software, with which users are usually already familiar. Designers of AR educational environments need guidelines to make these systems more usable and thus reduce cognitive load, since conventional guidelines for software construction are only available for 2D environments. This work therefore presents an investigation that resulted in a Requirements Analysis methodology specific to the construction of educational objects with AR, which studies how users behave with the new technologies used in AR (cameras, head-mounted displays, markers) and with the blending of the real and the virtual, yielding a fully user-centered methodology.

    Tactile and kinesthetic feedback improve distance perception in virtual reality

    Get PDF
    Research spanning psychology, neuroscience, and HCI has found that depth perception distortion is a common problem in virtual reality. This distortion results in depth compression, where users perceive objects as closer than their intended distance. Studies have suggested that cues such as audio and haptics help to solve this issue. We focus on haptic feedback and investigate how force feedback compares to tactile feedback within peripersonal space in reducing depth perception distortion. Our study (N=12) compares the use of haptic force feedback, vibration haptic feedback, a combination of both, or no feedback. Our results show that both vibration and force feedback improve depth perception over no feedback: distance estimation was 8.3 times better with combined feedback than with no haptic feedback, versus 1.4 to 1.5 times better with either vibration or force feedback on its own. Participants also subjectively preferred using force feedback, or a combination of force and vibration feedback, over no feedback.

    The Effects Of Differing Optical Stimuli On Depth Perception In Virtual Reality

    Get PDF
    It is well documented that egocentric depth perception is more often than not underestimated in virtual reality. Many studies have tried to understand why this underestimation happens and what variables affect it. While the underestimation can be shown consistently, its degree differs strongly from study to study, from as much as 68% to as little as 6% (Jones et al., 2011, 2008; Knapp, 1999; Richardson and Waller, 2007). Many of these same studies use blind walking as a tool to measure depth perception. Because no standardized blind walking method for virtual reality exists, differing blind walking methods may cause differing results. This thesis explores how small changes in the blind walking procedure affect depth perception. Specifically, we examine procedures that alter the amount of ambient light visible to an observer after performing a blind walk.

    A comparative study using an autostereoscopic display with augmented and virtual reality

    Full text link
    Advances in display devices are facilitating the integration of stereoscopic visualization into our daily lives. However, autostereoscopic visualization has not been extensively exploited. In this paper, we present a system that combines Augmented Reality (AR) and autostereoscopic visualization. We also present the first study to compare different aspects of using an autostereoscopic display with AR and VR, in which 39 children from 8 to 10 years old participated. In our study, no statistically significant differences were found between AR and VR. However, the scores were very high in nearly all of the questions, and the children also scored the AR version higher in all cases. Moreover, the children explicitly preferred the AR version (81%). For the AR version, a strong and significant correlation was found between the use of the autostereoscopic screen in games and seeing the virtual object on the marker. For the VR version, two strong and significant correlations were found. The first was between the ease of play and the use of the rotatory controller. The second was between depth perception and the game's global score. Therefore, combinations of AR and VR with autostereoscopic visualization are possibilities for developing edutainment systems for children.
    This work was funded by the Spanish APRENDRA project (TIN2009-14319-C02). We would like to thank the following for their contributions: AIJU, the "Escola d'Estiu", and especially Ignacio Segui, Juan Cano, Miguelon Gimenez, and Javier Irimia; this work would not have been possible without their collaboration. The ALF3D project (TIN2009-14103-03) for the autostereoscopic display. Roberto Vivo, Rafa Gaitan, Severino Gonzalez, and M. Jose Vicent, for their help. The children's parents who signed the agreement to allow their children to participate in the study. The children who participated in the study. The ETSInf for letting us use its facilities during the testing phase.
    Arino, J., Juan Lizandra, M. C., Gil Gómez, J. A., & Mollá Vayá, R. P. (2014). A comparative study using an autostereoscopic display with augmented and virtual reality. Behaviour and Information Technology, 33(6), 646-655. https://doi.org/10.1080/0144929X.2013.815277

    Recalibration of Perceived Distance in Virtual Environments Occurs Rapidly and Transfers Asymmetrically Across Scale

    Get PDF
    Distance in immersive virtual reality is commonly underperceived relative to intended distance, causing virtual environments to appear smaller than they actually are. However, a brief period of interaction by walking through the virtual environment with visual feedback can cause dramatic improvement in perceived distance. The goal of the current project was to determine how quickly improvement occurs as a result of walking interaction (Experiment 1) and whether improvement is specific to the distances experienced during interaction, or whether improvement transfers across scales of space (Experiment 2). The results show that five interaction trials resulted in a large improvement in perceived distance, and that subsequent walking interactions showed continued but diminished improvement. Furthermore, interaction with near objects (1-2 m) improved distance perception for near but not far (4-5 m) objects, whereas interaction with far objects broadly improved distance perception for both near and far objects. These results have practical implications for ameliorating distance underperception in immersive virtual reality, as well as theoretical implications for distinguishing between theories of how walking interaction influences perceived distance.

    The Plausibility of a String Quartet Performance in Virtual Reality

    Get PDF
    We describe an experiment that explores the contribution of auditory and other features to the illusion of plausibility in a virtual environment that depicts the performance of a string quartet. ‘Plausibility’ refers to the component of presence that is the illusion that the perceived events in the virtual environment are really happening. The features studied were: Gaze (the musicians ignored the participant, or sometimes looked towards and followed the participant’s movements), Sound Spatialization (Mono, Stereo, Spatial), Auralization (no sound reflections, reflections corresponding to a room larger than the one perceived, or reflections that exactly matched the virtual room), and Environment (no sound from outside the room, or birdsong and wind corresponding to the outside scene). We adopted a methodology based on color matching theory, in which 20 participants first assessed their feeling of plausibility in the environment with each of the four features at its highest setting. Then, on five occasions, participants started from a low setting on all features and made transitions from one system configuration to another until they matched their original feeling of plausibility. From these transitions a Markov transition matrix was constructed, along with the probability of a match conditional on each feature configuration. The results show that Environment and Gaze were individually the most important factors influencing the level of plausibility. The highest-probability transitions were to improve Environment and Gaze, and then Auralization and Spatialization. We present this work both as a contribution to the methodology of assessing presence without questionnaires and as a demonstration of how various aspects of a musical performance can influence plausibility.
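    The core of the analysis described above, estimating a Markov transition matrix from participants' observed configuration-to-configuration moves, can be sketched as follows. This is a generic illustration, not the study's actual code; states are assumed to be integer-coded feature configurations, and the `transition_matrix` helper is hypothetical:

```python
import numpy as np

def transition_matrix(sequences, n_states):
    """Estimate a row-stochastic Markov transition matrix from observed
    state sequences (e.g. the successive system configurations each
    participant visited). Rows for unvisited states default to uniform
    so the matrix stays stochastic."""
    counts = np.zeros((n_states, n_states))
    for seq in sequences:
        for a, b in zip(seq, seq[1:]):  # count each observed transition a -> b
            counts[a, b] += 1
    row_sums = counts.sum(axis=1, keepdims=True)
    probs = np.where(row_sums > 0,
                     counts / np.maximum(row_sums, 1),  # normalize visited rows
                     1.0 / n_states)                    # uniform for unvisited rows
    return probs
```

    Each row of the result gives the empirical probability of moving from one configuration to each other configuration, which is the object the abstract's transition analysis is built on.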

    CaveUDK: a VR game engine middleware

    Get PDF
    Previous attempts at developing immersive versions of game engines have faced difficulties in achieving both high overall performance and preserving the reusability of software developments. In this paper, we present a high-level VR middleware based on one of the most successful commercial game engines: the Unreal® Engine 3.0 (UE3). We describe a VR framework implemented as an extension to the Unreal® Development Kit (UDK) supporting CAVE-like installations. Our approach relies on a distributed architecture reinforced by specific replication patterns to synchronize the user's point of view and interactions within a multi-screen installation. Our performance benchmarks indicate that our immersive port does not affect the game engine's performance, even with complex real-time applications such as fast-paced multiplayer First Person Shooter (FPS) games or high-resolution graphical environments with 2M+ polygons. A user study also demonstrated the capacity of our VR middleware to elicit high spatial presence while maintaining low cybersickness effects. With free distribution, we believe such a platform can support future entertainment and VR research.

    Drumming in immersive virtual reality: the body shapes the way we play

    Get PDF
    It has been shown that it is possible to generate perceptual illusions of ownership in immersive virtual reality (IVR) over a virtual body seen from a first-person perspective, in other words over a body that visually substitutes the person's real body. This can occur even when the virtual body is quite different in appearance from the person's real body. However, investigation of the psychological, behavioral, and attitudinal consequences of such body transformations remains an interesting problem with much to be discovered. Thirty-six Caucasian people participated in a between-groups experiment in which they played a West African djembe hand drum while immersed in IVR with a virtual body that substituted their own. The virtual hand drum was registered with a physical drum. Participants were alongside a virtual character that played a drum in a supporting, accompanying role. In a baseline condition, participants were represented only by plainly shaded white hands, so that they were merely able to play. In the experimental condition, they were represented either by a casually dressed dark-skinned virtual body (Casual Dark-Skinned, CD) or by a formally suited light-skinned body (Formal Light-Skinned, FL). Although participants of both groups experienced a strong body ownership illusion towards the virtual body, only those with the CD representation showed significant increases in their movement patterns for drumming compared to the baseline condition and compared with those embodied in the FL body. Moreover, the stronger the illusion of body ownership in the CD condition, the greater this behavioral change. A path analysis showed that the observed behavioral changes were a function of the strength of the illusion of body ownership towards the virtual body and its perceived appropriateness for the drumming task. These results demonstrate that full-body ownership illusions can lead to substantial behavioral and possibly cognitive changes depending on the appearance of the virtual body. This could be important for many applications such as learning, education, training, psychotherapy, and rehabilitation using IVR.