
    Spatial updating in narratives.

    Across two experiments, we investigated spatial updating in environments encoded through narratives. In Experiment 1, in which participants were given visualization instructions to imagine the protagonist’s movement, they formed an initial representation during learning but did not update it during subsequent described movement. In Experiment 2, in which participants were instructed to physically move toward the directions of the described objects prior to testing, there was evidence for spatial updating. Overall, the findings indicate that physical movement can lead participants to link a spatial representation of a remote environment to a sensorimotor framework and to update the locations of remote objects as they move.

    Haptography: Capturing and Recreating the Rich Feel of Real Surfaces

    Haptic interfaces, which allow a user to touch virtual and remote environments through a hand-held tool, have opened up exciting new possibilities for applications such as computer-aided design and robot-assisted surgery. Unfortunately, the haptic renderings produced by these systems seldom feel like authentic re-creations of the richly varied surfaces one encounters in the real world. We have thus envisioned the new approach of haptography, or haptic photography, in which an individual quickly records a physical interaction with a real surface and then recreates that experience for a user at a different time and/or place. This paper presents an overview of the goals and methods of haptography, emphasizing the importance of accurately capturing and recreating the high-frequency accelerations that occur during tool-mediated interactions. In the capturing domain, we introduce a new texture modeling and synthesis method based on linear prediction applied to acceleration signals recorded from real tool interactions. For recreating, we show a new haptography handle prototype that enables the user of a Phantom Omni to feel fine surface features and textures.
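The capture-and-resynthesis step described in this abstract — fitting a linear-prediction (LP) model to a recorded acceleration signal, then generating new texture by driving the fitted model with noise — can be sketched generically as follows. This is a minimal illustration of standard LP texture synthesis under assumed details (function names, model order, noise-driven resynthesis), not the authors' actual implementation.

```python
import numpy as np

def lpc_coefficients(signal, order):
    """Fit an all-pole linear-prediction model s[n] ~ sum_k a[k]*s[n-1-k]
    via the autocorrelation (Yule-Walker) normal equations."""
    r = np.correlate(signal, signal, mode="full")[len(signal) - 1:]
    R = np.array([[r[abs(i - j)] for j in range(order)] for i in range(order)])
    return np.linalg.solve(R, r[1:order + 1])

def synthesize(a, n_samples, noise_std=1.0, seed=0):
    """Resynthesize a texture-like signal by driving the fitted all-pole
    model with white noise (the LP residual is modeled as noise)."""
    rng = np.random.default_rng(seed)
    p = len(a)
    out = np.zeros(p + n_samples)
    e = rng.normal(0.0, noise_std, n_samples)
    for n in range(p, p + n_samples):
        out[n] = a @ out[n - p:n][::-1] + e[n - p]
    return out[p:]

# Demo: fit a short "recorded" acceleration trace, then resynthesize.
recorded = np.random.default_rng(0).normal(size=2000)  # stand-in for real data
a = lpc_coefficients(recorded, order=8)
texture = synthesize(a, n_samples=1000)
```

Because the model only keeps the short-time spectral envelope of the recording, the synthesized signal can be made arbitrarily long while retaining the "feel" statistics of the original interaction.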

    A comparative study using an autostereoscopic display with augmented and virtual reality

    Advances in display devices are facilitating the integration of stereoscopic visualization in our daily lives. However, autostereoscopic visualization has not been extensively exploited. In this paper, we present a system that combines Augmented Reality (AR) and autostereoscopic visualization. We also present the first study that compares different aspects of using an autostereoscopic display with AR and VR, in which 39 children from 8 to 10 years old participated. In our study, no statistically significant differences were found between AR and VR. However, the scores were very high in nearly all of the questions, and the children also scored the AR version higher in all cases. Moreover, the children explicitly preferred the AR version (81%). For the AR version, a strong and significant correlation was found between the use of the autostereoscopic screen in games and seeing the virtual object on the marker. For the VR version, two strong and significant correlations were found: the first between the ease of play and the use of the rotatory controller, and the second between depth perception and the game global score. Therefore, combinations of AR and VR with autostereoscopic visualization are possibilities for developing edutainment systems for children.

    This work was funded by the Spanish APRENDRA project (TIN2009-14319-C02), with the autostereoscopic display provided by the ALF3D project (TIN2009-14103-03). The authors thank AIJU, the "Escola d'Estiu", and especially Ignacio Segui, Juan Cano, Miguelon Gimenez, and Javier Irimia; Roberto Vivo, Rafa Gaitan, Severino Gonzalez, and M. Jose Vicent for their help; the ETSInf for the use of its facilities during the testing phase; the children who participated in the study; and their parents, who signed the agreement allowing their children to participate.

    Arino, J.; Juan Lizandra, M. C.; Gil Gómez, J. A.; Mollá Vayá, R. P. (2014). A comparative study using an autostereoscopic display with augmented and virtual reality. Behaviour and Information Technology, 33(6), 646-655. https://doi.org/10.1080/0144929X.2013.815277

    A Comparison of Two Wearable Tactile Interfaces with a Complementary Display in Two Orientations

    Research has shown that two popular forms of wearable tactile display, a back array and a waist belt, can aid pedestrian navigation by indicating direction. Each type has its proponents, and each has been reported as successful in experimental trials; however, no direct experimental comparison of the two approaches has been reported. We therefore conducted a series of experiments directly comparing them on a range of measures. In this paper, we present results from a study in which we used a directional line-drawing task to compare user performance with these two popular forms of wearable tactile display. We also investigated whether user performance was affected by a match between the plane of the tactile interface and the plane in which users drew the perceived directions. Finally, we investigated the effect of adding a complementary visual display: the touch screen on which participants drew the perceived directions presented either a blank display or a map indicating eight directions from a central roundabout, corresponding to the eight directions indicated by the tactile stimuli. We found that participants performed significantly faster and more accurately with the belt than with the array, whether the screen was vertical or horizontal. We found no difference in performance between the map display and the blank display.

    Projects of devotion: energy exploration and moral ambition in the cosmoeconomy of oil and gas in the Western United States

    This project has received funding from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme under grant agreement No 715146. The authors also acknowledge the funding received to carry out this research from the Leverhulme Trust (ECF‐2013‐177) and the British Academy (EN150010). This article considers how people working in the oil and gas industry in Colorado perceive their involvement in energy exploration in relation to broader understandings of devotion, compassion, and outreach. I argue that although their energy projects may appear to merely echo companies’ formal promotional pitches, the oil field and corporate actors’ own moral ambitions reveal more-than-human cosmoeconomic visions of oil’s potentiality. This article thus demonstrates how multiple and diverging ethical registers intersect and inform the valuation of oil.

    Cross-ancestry genome-wide association analysis of corneal thickness strengthens link between complex and Mendelian eye diseases

    Central corneal thickness (CCT) is a highly heritable trait associated with complex eye diseases such as keratoconus and glaucoma. We perform a genome-wide association meta-analysis of CCT and identify 19 novel regions. In addition to adding support for known connective-tissue-related pathways, pathway analyses uncover previously unreported gene sets. Remarkably, >20% of the CCT loci are near or within Mendelian disorder genes. These include FBN1, ADAMTS2 and TGFB2, which are associated with connective tissue disorders (Marfan, Ehlers-Danlos and Loeys-Dietz syndromes), and the LUM-DCN-KERA gene complex involved in myopia, corneal dystrophies and cornea plana. Using index CCT-increasing variants, we find a significant inverse correlation in effect sizes between CCT and keratoconus (r = -0.62, P = 5.30 × 10⁻⁵) but not between CCT and primary open-angle glaucoma (r = -0.17, P = 0.2). Our findings provide evidence for shared genetic influences between CCT and keratoconus, and implicate candidate genes acting in collagen and extracellular matrix regulation.
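The effect-size comparison reported above amounts to correlating, across the index variants, each allele's effect on CCT with its effect on disease risk. A minimal sketch of that idea, using plain Pearson correlation and entirely made-up effect sizes (the variable names and values are illustrative, not the authors' data or pipeline):

```python
import numpy as np

def effect_size_correlation(beta_trait, beta_disease):
    """Pearson correlation between two aligned vectors of per-variant
    effect sizes (e.g. CCT-increasing allele effects vs. keratoconus
    log-odds for the same alleles)."""
    return np.corrcoef(beta_trait, beta_disease)[0, 1]

# Illustrative, fabricated effect sizes for a handful of index variants:
beta_cct = np.array([0.12, 0.08, 0.21, 0.05, 0.15, 0.09])
beta_keratoconus = np.array([-0.30, -0.10, -0.45, -0.02, -0.33, -0.18])

r = effect_size_correlation(beta_cct, beta_keratoconus)
# r is negative here: alleles that thicken the cornea lower keratoconus risk
```

A strongly negative r on such aligned vectors is what the abstract's r = -0.62 conveys, while an r near zero (as for glaucoma) indicates no shared directional effect.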