5 research outputs found

    Experiencing Augmented Reality as an Accessibility Resource in the UNESCO Heritage Site called "La Lonja", Valencia

    Get PDF
    This paper presents the design of an augmented reality application for the Gothic silk market building known as the "Lonja de la Seda" in Valencia (UNESCO World Heritage Site, 1996) and the results of experiments carried out in situ to observe its usability as an accessibility resource. The objective of this project was to use and validate augmented reality (AR) as a tool to increase the accessibility of the architectural elements of this monumental setting. The AR application aims to resolve perception issues arising from poor lighting, the distance to the many details, restricted access, etc. 145 individuals of different ages and diverse origins, visiting the monument with or without a guide, used the AR application. After visitors had used the application as they wished, individual interviews were conducted to collect their feedback on the AR experience. Users enjoyed identifying and selecting the motifs they were going to visualize with the AR tool. Overall, the experience was evaluated very positively by participants, promoting a favourable and renewed image of the monument. This project was carried out in collaboration between the Department of Architectural Graphic Expression and the Labhuman Institute at the Universitat Politècnica de València, funded by the Research Program PAID0511 "Nuevas Aplicaciones de las Tecnologías Gráficas para la Mejora de la Sostenibilidad, el Conocimiento y la Accesibilidad al Patrimonio" (Ref. 2786). The Spanish Ministry of Economy and Competitiveness also partially supported this work (Project ref. TIN2010-21296-C02-01). We are also grateful to the Town Hall of Valencia, which allowed us to carry out the experiment in this protected building.
    Puyuelo Cazorla, M.; Higón Calvet, J. L.; Merino Sanjuan, L.; Contero, M. (2013). Experiencing Augmented Reality as an Accessibility Resource in the UNESCO Heritage Site called "La Lonja", Valencia. Elsevier. 25:171-178. https://doi.org/10.1016/j.procs.2013.11.021

    A Survey of Augmented Reality

    Get PDF
    © 2015 M. Billinghurst, A. Clark, and G. Lee. This survey summarizes almost 50 years of research and development in the field of Augmented Reality (AR). From early research in the 1960s until widespread availability in the 2010s, there has been steady progress towards the goal of seamlessly combining real and virtual worlds. We provide an overview of the common definitions of AR and show how AR fits into taxonomies of related technologies. A history of important milestones in Augmented Reality is followed by sections on the key enabling technologies of tracking, display and input devices. We also review design guidelines and provide some examples of successful AR applications. Finally, we conclude with a summary of directions for future work and a review of some of the areas that are currently being researched.

    Feature regression for continuous pose estimation of object categories

    Get PDF
    [No abstract available]

    Scalable Real-time Planar Targets Tracking for Digilog Books

    Get PDF
    We propose a novel 3D tracking method that supports several hundred pre-trained planar targets without losing real-time performance. This goes well beyond the state of the art. To reach this level of performance, two threads run in parallel: the foreground thread tracks feature points from frame to frame to ensure real-time performance, while a background thread recognizes the visible targets and estimates their poses. The latter relies on a coarse-to-fine approach: assuming that one target is visible at a time, which is reasonable for digilog book applications, it first recognizes the visible target with an image retrieval algorithm, then matches feature points between the target and the input image to estimate the target pose. The background thread is more demanding than the foreground one and is therefore several times slower, so we propose a simple but effective mechanism for the background thread to communicate its results to the foreground thread without lag. Our implementation runs at more than 125 frames per second with 314 potential planar targets. Its applicability is demonstrated with an Augmented Reality book application.
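    Below is a minimal sketch of the two-thread pattern described in the abstract. It is not the authors' implementation: it assumes OpenCV (cv2) and NumPy, substitutes brute-force ORB matching plus RANSAC homography estimation for the paper's image-retrieval and pose-estimation steps in the background thread, and uses KLT optical flow for the frame-to-frame foreground tracking. The frame source, target database layout, and hand-off queue are hypothetical names introduced only for illustration.

    import queue

    import cv2
    import numpy as np

    pose_queue = queue.Queue(maxsize=1)  # background -> foreground hand-off

    def background_recognizer(get_latest_frame, targets):
        """Slow thread: recognize the visible planar target and estimate its pose."""
        orb = cv2.ORB_create(1000)
        matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
        while True:
            frame_id, gray = get_latest_frame()  # hypothetical frame source
            kp, des = orb.detectAndCompute(gray, None)
            if des is None:
                continue
            # crude "retrieval": pick the stored target with the most matches
            scored = [(len(matcher.match(t["des"], des)), t) for t in targets]
            n_matches, best = max(scored, key=lambda s: s[0])
            if n_matches < 15:
                continue
            matches = matcher.match(best["des"], des)
            src = np.float32([best["kp"][m.queryIdx].pt for m in matches])
            dst = np.float32([kp[m.trainIdx].pt for m in matches])
            H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
            if H is not None:
                # tag the pose with the frame it was computed on, so the fast
                # thread can propagate it up to the current frame without lag
                pose_queue.put((frame_id, best["name"], H))

    def foreground_step(prev_gray, gray, points, H):
        """Fast per-frame step: propagate tracked points and the pose with KLT."""
        new_pts, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, gray, points, None)
        ok = status.ravel() == 1
        good_old, good_new = points[ok], new_pts[ok]
        if len(good_new) >= 4:
            dH, _ = cv2.findHomography(good_old, good_new, cv2.RANSAC, 3.0)
            if dH is not None:
                H = dH @ H  # chain the inter-frame motion onto the current pose
        return good_new.reshape(-1, 1, 2), H

    In a full application, background_recognizer would run on its own thread (e.g. via threading.Thread) while foreground_step runs in the capture loop, re-seeding its tracked points whenever a fresh result appears in pose_queue.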

    HumanTop: a multi-object tracking tabletop

    Full text link
    In this paper, a computer-vision-based interactive multi-touch tabletop system called HumanTop is introduced. HumanTop implements a stereo camera vision subsystem that enables not only an accurate fingertip tracking algorithm but also a precise touch-over-the-working-surface detection method. Based on a pair of visible-spectrum cameras, a novel synchronization circuit makes camera capture and image projection independent of each other, providing the minimum basis for computer vision analysis with visible-spectrum cameras without any interference from the projector. The assembly of both cameras and the synchronization circuit is not only capable of acting as an ad-hoc depth camera, but also enables the recognition and tracking of textured planar objects, even when content is projected over them. In addition, HumanTop supports the tracking of sheets of paper and ID-code markers. This set of features makes HumanTop a comprehensive, intuitive and versatile augmented tabletop that provides multi-touch interaction with projective augmented reality on any flat surface. As an example exploiting all the capabilities of HumanTop, an educational application has been developed using an augmented book as a launcher for different didactic contents. A pilot study in which 28 fifth graders participated is presented, with results on efficiency, usability/satisfaction and motivation. These results suggest that HumanTop is an interesting platform for the development of educational content. © 2012 Springer Science+Business Media, LLC. This study was funded by the Spanish Ministerio de Educación y Ciencia through Project SALTET (TIN2010-21296-C02-01), Project Game Teen (TIN2010-20187), the Consolider-C project (SEJ2006-14301/PSIC), the "CIBER of Physiopathology of Obesity and Nutrition, an initiative of ISCIII", and the Excellence Research Program PROMETEO (Generalitat Valenciana, Conselleria de Educació, 2008-157).
    Soto Candela, E.; Ortega Pérez, M.; Marín Romero, C.; Pérez López, D. C.; Salvador Herranz, G. M.; Contero, M.; Alcañiz Raya, M. L. (2014). HumanTop: a multi-object tracking tabletop. Multimedia Tools and Applications. 70(3):1837-1868. https://doi.org/10.1007/s11042-012-1193-y
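    As an illustrative aside, here is a minimal sketch, not the HumanTop implementation, of one way a visible-spectrum stereo pair can decide whether a detected fingertip is touching the flat working surface: the disparity observed at the fingertip is compared with the disparity predicted by a plane fitted to the empty tabletop during calibration. The OpenCV calls are standard; the plane-fitting step, function names and tolerance value are assumptions made only for this example.

    import cv2
    import numpy as np

    def fit_surface_plane(empty_table_disparity):
        """Calibration: fit d = a*x + b*y + c to the disparity of the empty tabletop."""
        h, w = empty_table_disparity.shape
        xs, ys = np.meshgrid(np.arange(w), np.arange(h))
        valid = empty_table_disparity > 0
        A = np.column_stack([xs[valid], ys[valid], np.ones(valid.sum())])
        coeffs, *_ = np.linalg.lstsq(A, empty_table_disparity[valid], rcond=None)
        return coeffs  # (a, b, c)

    def is_touching(left_gray, right_gray, fingertip_xy, plane_coeffs, tol_px=1.5):
        """True if the fingertip (a pixel in the left image) lies on the surface plane."""
        stereo = cv2.StereoSGBM_create(minDisparity=0, numDisparities=64, blockSize=7)
        disparity = stereo.compute(left_gray, right_gray).astype(np.float32) / 16.0
        x, y = int(fingertip_xy[0]), int(fingertip_xy[1])
        observed = disparity[y, x]
        a, b, c = plane_coeffs
        expected = a * x + b * y + c
        return observed > 0 and abs(observed - expected) < tol_px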