12,317 research outputs found

    Augmented Reality: Application to In-Vehicle Navigation

    Even with today’s technically advanced navigation systems, users experience situations where announcements are difficult to understand or even misleading. Augmented reality – the integration of computer-generated content into the vehicle’s surroundings – can provide an intuitive and unambiguous way to communicate navigation information; it can even serve as a novel user interface that allows interaction with the surroundings. In this paper, challenges, constraints, and possible solutions for AR in-vehicle applications are discussed. Details of the technical and design decisions of the “first in-vehicle augmented video system” are explained, as well as its features and possible future upgrades.
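    A minimal sketch of the core overlay step such a system relies on (not the paper’s implementation; the camera intrinsics and waypoint coordinates are assumed values): a navigation waypoint expressed in the camera frame is projected onto the video image with a pinhole model so a route marker can be drawn at that pixel.

    import numpy as np

    # Assumed intrinsics of the in-vehicle camera (focal lengths and principal point in pixels).
    K = np.array([[800.0,   0.0, 640.0],
                  [  0.0, 800.0, 360.0],
                  [  0.0,   0.0,   1.0]])

    def project_waypoint(p_cam):
        """Project a 3D point given in camera coordinates to pixel coordinates."""
        x, y, z = p_cam
        if z <= 0.0:                        # behind the camera: nothing to draw
            return None
        u = K[0, 0] * x / z + K[0, 2]
        v = K[1, 1] * y / z + K[1, 2]
        return int(round(u)), int(round(v))

    # Example: a turn marker 2 m to the right of and 20 m ahead of the vehicle.
    print(project_waypoint((2.0, 0.0, 20.0)))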

    Optical techniques for 3D surface reconstruction in computer-assisted laparoscopic surgery

    One of the main challenges for computer-assisted surgery (CAS) is to determine the intra-operative morphology and motion of soft tissues. This information is a prerequisite for the registration of multi-modal patient-specific data, both for enhancing the surgeon’s navigation capabilities by observing beyond exposed tissue surfaces and for providing intelligent control of robotic-assisted instruments. In minimally invasive surgery (MIS), optical techniques are an increasingly attractive approach for in vivo 3D reconstruction of the soft-tissue surface geometry. This paper reviews the state-of-the-art methods for optical intra-operative 3D reconstruction in laparoscopic surgery and discusses the technical challenges and future perspectives towards clinical translation. With the recent paradigm shift of surgical practice towards MIS and new developments in 3D optical imaging, this is a timely discussion about technologies that could facilitate complex CAS procedures in dynamic and deformable anatomical regions.
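    As an illustration of one such optical technique (a generic stereo approach, not a specific method from the review; the file names, focal length and baseline are assumptions), a rectified stereo laparoscope pair can be turned into a dense depth map with semi-global block matching:

    import cv2
    import numpy as np

    left = cv2.imread("left_rectified.png", cv2.IMREAD_GRAYSCALE)
    right = cv2.imread("right_rectified.png", cv2.IMREAD_GRAYSCALE)

    # Semi-global block matching on the rectified pair.
    matcher = cv2.StereoSGBM_create(minDisparity=0,
                                    numDisparities=64,   # must be a multiple of 16
                                    blockSize=5)
    disparity = matcher.compute(left, right).astype(np.float32) / 16.0  # output is fixed-point (x16)

    focal_px = 520.0    # assumed focal length in pixels
    baseline_m = 0.004  # assumed 4 mm stereo baseline of the laparoscope

    # Depth from disparity: Z = f * B / d, valid only where the disparity is positive.
    depth_m = np.where(disparity > 0, focal_px * baseline_m / disparity, 0.0)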

    Fully Automated Texture Tracking Based on Natural Features Extraction and Template Matching

    In this work we propose a novel approach to real-time texture tracking and registration based on natural feature extraction from planar objects and template matching. Our method is oriented towards planar objects with arbitrary textures but with rectangular topologies and well-contrasted contours, and does not require any external fiducial marker, either for the set-up or the tracking phase. Once the initial pose is obtained, previous planar object information is used to compute the subsequent pose, so that the time coherence of the input video stream is exploited. Our system is completely automated and produces efficient real-time tracking that can be applied to entertainment AR applications and others. The paper also discusses the novelty of the approach in relation to other existing texture tracking algorithms.
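    A hedged sketch in the same spirit, built from standard OpenCV components rather than the authors’ pipeline (the template file name and thresholds are assumptions): natural features are extracted once from the planar template, matched in each frame, and a RANSAC homography yields the template’s current corner positions in the video.

    import cv2
    import numpy as np

    template = cv2.imread("template.png", cv2.IMREAD_GRAYSCALE)  # assumed planar, well-contrasted texture
    orb = cv2.ORB_create(nfeatures=1000)
    kp_t, des_t = orb.detectAndCompute(template, None)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)

    h, w = template.shape
    corners = np.float32([[0, 0], [w, 0], [w, h], [0, h]]).reshape(-1, 1, 2)

    def track(frame_gray):
        """Return the template's corner positions in the frame, or None if tracking fails."""
        kp_f, des_f = orb.detectAndCompute(frame_gray, None)
        if des_f is None:
            return None
        matches = matcher.match(des_t, des_f)
        if len(matches) < 10:
            return None
        src = np.float32([kp_t[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
        dst = np.float32([kp_f[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
        H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
        return None if H is None else cv2.perspectiveTransform(corners, H)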

    Internet of robotic things: converging sensing/actuating, hyperconnectivity, artificial intelligence and IoT platforms

    The Internet of Things (IoT) concept is evolving rapidly and influencing new developments in various application domains, such as the Internet of Mobile Things (IoMT), Autonomous Internet of Things (A-IoT), Autonomous System of Things (ASoT), Internet of Autonomous Things (IoAT), Internet of Things Clouds (IoT-C) and the Internet of Robotic Things (IoRT), all of which are advancing by using IoT technology. The IoT influence brings new development and deployment challenges in different areas, such as seamless platform integration, context-based cognitive network integration, new mobile sensor/actuator network paradigms, things identification (addressing and naming in IoT), dynamic things discoverability and many others. The IoRT represents new convergence challenges that need to be addressed, covering on one side the programmability and communication of multiple heterogeneous mobile/autonomous/robotic things for cooperation, coordination, configuration, exchange of information, security, safety and protection. Developments in IoT heterogeneous parallel processing/communication and dynamic systems based on parallelism and concurrency require new ideas for integrating intelligent “devices”, i.e. collaborative robots (COBOTs), into IoT applications. Dynamic maintainability, self-healing, self-repair of resources, changing resource state, (re-)configuration and context-based IoT systems for service implementation and integration with IoT network service composition are of paramount importance when new “cognitive devices” become active participants in IoT applications. This chapter aims to give an overview of the IoRT concept, technologies, architectures and applications and to provide comprehensive coverage of future challenges, developments and applications.

    A 4D information system for the exploration of multitemporal images and maps using photogrammetry, web technologies and VR/AR

    This contribution shows the comparison, investigation, and implementation of different access strategies for multimodal data. The first part of the research is a theoretical part contrasting and explaining the terms conventional access, virtual archival access, and virtual museums, while additionally referencing related work. In particular, issues that still persist in repositories, such as ambiguous or missing metadata, are pointed out. The second part explains the practical implementation of a workflow from a large image repository to various four-dimensional applications: mainly the filtering of images and, subsequently, their orientation. Selection of the relevant images is partly done manually, but also with the use of deep convolutional neural networks for image classification. Photogrammetric methods are then used for finding the relative orientation between image pairs in a projective frame. For this purpose, an adapted Structure from Motion (SfM) workflow is presented, in which the step of feature detection and matching is replaced by the Radiation-Invariant Feature Transform (RIFT) and Matching On Demand with View Synthesis (MODS). Both methods have been evaluated on a benchmark dataset and performed better than other approaches. Subsequently, the oriented images are placed interactively, and in the future automatically, in a 4D browser application showing images, maps, and building models. Further usage scenarios are presented in several Virtual Reality (VR) and Augmented Reality (AR) applications. The new representation of the archival data enables spatial and temporal browsing of repositories, allowing research from innovative perspectives and the uncovering of historical details.
    Highlights:
    - Strategies for a completely automated workflow from image repositories to four-dimensional (4D) access approaches.
    - The orientation of historical images using adapted and evaluated feature matching methods.
    - 4D access methods for historical images and 3D models using web technologies and Virtual Reality (VR)/Augmented Reality (AR).
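    The relative orientation of an image pair in a projective frame corresponds to the fundamental matrix; below is a minimal sketch of that estimation step, assuming the correspondences (from RIFT, MODS or any other matcher) are already available as Nx2 pixel-coordinate arrays stored in the hypothetical files named here.

    import cv2
    import numpy as np

    # pts1 and pts2 are assumed Nx2 arrays of corresponding pixel coordinates in the two images.
    pts1 = np.load("matches_img1.npy").astype(np.float32)
    pts2 = np.load("matches_img2.npy").astype(np.float32)

    # Robust estimation of the fundamental matrix (projective relative orientation) with RANSAC.
    F, inlier_mask = cv2.findFundamentalMat(pts1, pts2, cv2.FM_RANSAC,
                                            ransacReprojThreshold=1.0,
                                            confidence=0.999)
    inliers = int(inlier_mask.sum()) if inlier_mask is not None else 0
    print("fundamental matrix:\n", F, "\ninlier matches:", inliers)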

    AMRA: Augmented Reality Assistance for Train Maintenance Tasks

    The AMRA project, carried out by a consortium of industrial and research partners, aims at implementing an Augmented Reality (AR) system for mobile use in industrial applications such as train maintenance and repair on industrial sites. The adopted solution is a video see-through system in which a tablet PC is used as an augmented window. The overall architecture of a prototype is presented and its key points are detailed. For instance, a visual registration system has been developed to accurately overlay a video stream with information; robust, real-time registration is performed using a single camera attached to the tablet PC. Besides, a hierarchical description of the maintenance procedure is set up and enriched with new media such as photos, videos and/or 3D models. These 3D models have been specially tailored to meet the requirements of maintenance tasks. The resulting multimedia contents allow easy access to technical documentation through a man-machine interface managing a multimedia engine. All these features have been combined in the AMRA prototype, which has been evaluated by a maintenance operator.
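    A minimal sketch of the registration idea, not the AMRA implementation (the 3D/2D point lists and the camera matrix are assumed values): with a few known 3D points of the train part identified in the current frame, the camera pose follows from PnP and can then be used to project annotations onto the video stream.

    import cv2
    import numpy as np

    # Assumed correspondences: 3D points on the train part (metres) and where they appear in the frame (pixels).
    object_pts = np.array([[0.0, 0.0, 0.0], [0.2, 0.0, 0.0], [0.2, 0.1, 0.0], [0.0, 0.1, 0.0]], dtype=np.float32)
    image_pts = np.array([[410, 300], [620, 305], [615, 420], [405, 415]], dtype=np.float32)
    K = np.array([[900.0, 0.0, 512.0], [0.0, 900.0, 384.0], [0.0, 0.0, 1.0]])  # assumed tablet camera intrinsics

    # Recover the camera pose relative to the part (no lens distortion assumed).
    ok, rvec, tvec = cv2.solvePnP(object_pts, image_pts, K, None)
    if ok:
        # Project an annotation anchored 5 cm above the centre of the part into the current frame.
        anchor = np.array([[0.1, 0.05, -0.05]], dtype=np.float32)
        pixel, _ = cv2.projectPoints(anchor, rvec, tvec, K, None)
        print("draw label at", pixel.ravel())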