
    Web-based indoor positioning system using QR-codes as markers

    Location tracking has become an important tool in daily life. Outdoor location tracking is readily supported by GPS, but the technology for tracking the indoor position of smart-device users has not reached the same level of maturity. AR technology can enable indoor location tracking by having users scan an AR marker with their smart devices; however, due to several limitations (capacity, error tolerance, etc.), AR markers are not widely adopted and therefore do not serve as good tracking markers. This paper investigates the research question of whether the QR code can replace the AR marker as the tracking marker for detecting a smart-device user's indoor position. The question is examined by reviewing the background of QR code and AR technology. According to this review, the QR code is a suitable choice for a tracking marker: compared with the AR marker, it has higher capacity, better error tolerance, and wider adoption. Moreover, a web application was implemented as an experiment to support the research question. It uses a QR code as a tracking marker for AR technology, which builds a 3D model on the QR code; the user's position can then be estimated from the 3D model. The paper discusses the experimental results by comparing a pre-fixed target position of the user with the measured position for three different QR code samples. The limitations of the experiment and ideas for improvement are also discussed. According to the experiment, the research question is answered: a combination of QR code and AR technology can deliver satisfactory indoor location results for a smart-device user.
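
    A minimal sketch of the kind of marker-based pose estimation described above, assuming OpenCV, a calibrated camera matrix, and a printed QR code of known physical side length; the intrinsic values, the 0.15 m side length, and the function name are illustrative assumptions, not details from the paper.

```python
import cv2
import numpy as np

# Illustrative values; in practice these come from camera calibration
# and the printed marker specification.
CAMERA_MATRIX = np.array([[800.0, 0.0, 320.0],
                          [0.0, 800.0, 240.0],
                          [0.0, 0.0, 1.0]])
DIST_COEFFS = np.zeros(5)
QR_SIDE_M = 0.15  # assumed physical side length of the printed QR code (metres)

def estimate_camera_position(frame):
    """Detect a QR code and estimate the camera position in the marker's frame."""
    detector = cv2.QRCodeDetector()
    data, corners, _ = detector.detectAndDecode(frame)
    if corners is None:
        return None, None

    # 3D corners of the QR code in its own coordinate system (z = 0 plane).
    s = QR_SIDE_M
    object_pts = np.array([[0, 0, 0], [s, 0, 0], [s, s, 0], [0, s, 0]],
                          dtype=np.float32)
    image_pts = corners.reshape(-1, 2).astype(np.float32)

    ok, rvec, tvec = cv2.solvePnP(object_pts, image_pts,
                                  CAMERA_MATRIX, DIST_COEFFS)
    if not ok:
        return None, None

    # Camera position expressed in the marker's coordinate frame.
    R, _ = cv2.Rodrigues(rvec)
    cam_pos = (-R.T @ tvec).ravel()
    return data, cam_pos
```

    Since the paper implements its experiment as a web application, a production version would more likely use a browser-side AR library; the sketch above only illustrates the underlying marker-to-position geometry.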

    Smart Shopping Assistant: A Multimedia and Social Media Augmented System with Mobile Devices to Enhance Customers’ Experience and Interaction

    Multimedia, social media content, and interaction are common means of attracting customers. However, these features are not always fully available when customers shop in physical shopping centers. The authors propose Smart Shopping Assistant, a multimedia and social media augmented system for mobile devices that enhances users' experience and interaction while shopping. Smart Shopping turns a regular mobile device into a special prism through which a customer can enjoy multimedia, obtain useful social media content related to a product, and give feedback or act on a product while shopping. The system is specified as a flexible framework that can take advantage of different visual descriptors and web information extraction modules. Experimental results show that Smart Shopping can process and provide augmented data in real time. Smart Shopping can be used to attract more customers and to build an online social community in which customers share their shopping interests.
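
    A minimal sketch of the kind of pluggable framework the abstract describes, assuming Python; the class and method names (VisualDescriptor, WebExtractor, SmartShoppingPipeline) are hypothetical and only illustrate how descriptor and web-extraction modules could be swapped in and out.

```python
from abc import ABC, abstractmethod

class VisualDescriptor(ABC):
    """Pluggable module that maps a camera frame to a product identifier."""
    @abstractmethod
    def recognize(self, frame) -> str | None: ...

class WebExtractor(ABC):
    """Pluggable module that gathers multimedia/social content for a product."""
    @abstractmethod
    def fetch(self, product_id: str) -> dict: ...

class SmartShoppingPipeline:
    """Composes one visual descriptor with one or more web extraction modules."""
    def __init__(self, descriptor: VisualDescriptor, extractors: list[WebExtractor]):
        self.descriptor = descriptor
        self.extractors = extractors

    def augment(self, frame) -> dict | None:
        product_id = self.descriptor.recognize(frame)
        if product_id is None:
            return None
        augmentation = {"product": product_id}
        for extractor in self.extractors:
            augmentation.update(extractor.fetch(product_id))
        return augmentation
```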

    A Smart Assistant for Visual Recognition of Painted Scenes

    Nowadays, smart devices allow people to easily interact with the surrounding environment thanks to existing communication infrastructures, e.g., 3G/4G/5G or WiFi. In the context of a smart museum, data shared by visitors can be used to provide innovative services aimed at improving their cultural experience. In this paper, we consider as a case study the painted wooden ceiling of the Sala Magna of Palazzo Chiaramonte in Palermo, Italy, and we present an intelligent system that visitors can use to automatically obtain a description of the scenes they are interested in by simply pointing their smartphones at them. Compared with traditional applications, this system completely eliminates the need for indoor positioning technologies, which are unfeasible in many scenarios because they can only be employed when museum items are physically distinguishable. An experimental analysis evaluates the performance of the system in terms of the accuracy of the recognition process, and the results show its effectiveness in a real-world application scenario.
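
    A minimal sketch of one way such image-based scene recognition could work, assuming OpenCV with SIFT and a small set of reference photos of the painted panels; the specific descriptor, matcher, and ratio-test threshold are assumptions, not details taken from the paper.

```python
import cv2

def best_matching_scene(query_img, reference_imgs):
    """Return the index of the reference scene with the most good SIFT matches."""
    sift = cv2.SIFT_create()
    matcher = cv2.BFMatcher(cv2.NORM_L2)
    _, q_desc = sift.detectAndCompute(query_img, None)
    if q_desc is None:
        return None

    best_idx, best_score = None, 0
    for idx, ref in enumerate(reference_imgs):
        _, r_desc = sift.detectAndCompute(ref, None)
        if r_desc is None:
            continue
        # Lowe's ratio test keeps only distinctive correspondences.
        good = 0
        for pair in matcher.knnMatch(q_desc, r_desc, k=2):
            if len(pair) == 2 and pair[0].distance < 0.75 * pair[1].distance:
                good += 1
        if good > best_score:
            best_idx, best_score = idx, good
    return best_idx
```

    The scene with the highest count of good matches is then mapped to its stored description; a real deployment would precompute and index the reference descriptors rather than recomputing them per query.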

    A Framework for Mobile Augmented Reality in Urban Maintenance

    Mobile handheld devices such as smartphones have become increasingly powerful in modern times. Because of this, there has been a surge in 3D graphics-heavy mobile applications that aim to provide immersive experiences. An example of this phenomenon is Augmented Reality (AR) applications, which have become increasingly popular and offer a wide array of use cases. The ability to seamlessly merge the real world with the virtual world using the built-in camera of a smartphone opens up a whole new world of possibilities, which makes it interesting to explore how such a technology could be used to solve real-world problems. This dissertation focuses on applying this technology in the field of urban maintenance. To do so, a mobile AR application was developed, designed to be used by urban maintenance workers as a field-assistance tool. Using any standard smartphone camera, the developed system can accurately detect equipment and augment it with relevant information and step-by-step instructions on how to perform any required maintenance job. Alongside this mobile application, a desktop application, called Archer, was also developed for creating and authoring the data and augmentations to be displayed during a given job. Lastly, this dissertation proposes a novel approach to automatically detect and minimize the number of points (checkpoints) at which the application asks the user to perform a new equipment recognition; these checkpoints help maintain tracking stability as the user modifies the real-world object during the course of the job. The experiments and user tests conducted during the final stages of this dissertation demonstrate the accuracy and practicality of the developed systems, showing that they can effectively be used to greatly improve the workflow of urban maintenance workers.
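
    The checkpoint-minimization idea can be illustrated with a simple greedy sketch, assuming each maintenance step is annotated with a score for how strongly it alters the tracked object's appearance; the scores, the threshold, and the function name are hypothetical and only convey the general idea of re-recognizing the equipment only when accumulated change risks breaking tracking, not the dissertation's actual method.

```python
def place_checkpoints(step_change_scores, threshold=0.5):
    """Greedily insert a re-recognition checkpoint before a step whenever the
    appearance change accumulated since the last checkpoint would exceed the
    threshold at which tracking is assumed to become unreliable."""
    checkpoints = []
    accumulated = 0.0
    for step, change in enumerate(step_change_scores):
        if accumulated + change > threshold:
            checkpoints.append(step)   # ask the user to re-scan the equipment here
            accumulated = 0.0
        accumulated += change
    return checkpoints

# Example: five steps with varying appearance change.
# place_checkpoints([0.1, 0.3, 0.2, 0.4, 0.1]) -> [2, 3]
# i.e. re-recognition is requested before the third and fourth steps.
```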

    Application of augmented reality and robotic technology in broadcasting: A survey

    As an innovative technique, Augmented Reality (AR) has been gradually deployed in the broadcast, videography, and cinematography industries. Virtual graphics generated by AR are dynamic and are overlaid on surfaces in the environment, so the original appearance can be greatly enhanced compared with traditional broadcasting. In addition, AR enables broadcasters to interact with augmented virtual 3D models in a broadcast scene, further enhancing the broadcast. Recently, advanced robotic technologies have been deployed in camera shooting systems to create a robotic cameraman so that the performance of AR broadcasting can be further improved, which this survey highlights.
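
    A minimal sketch of the geometric core of such overlays, assuming OpenCV and a camera whose intrinsics and pose (e.g., as reported by a robotic camera mount) are known; the intrinsic values, the drawing style, and the function name are placeholders for illustration only.

```python
import cv2
import numpy as np

# Placeholder intrinsics; a real broadcast rig would be calibrated.
K = np.array([[1400.0, 0.0, 960.0],
              [0.0, 1400.0, 540.0],
              [0.0, 0.0, 1.0]])
DIST = np.zeros(5)

def overlay_model(frame, model_points_3d, rvec, tvec):
    """Project virtual 3D model points (Nx3 float32, studio/world coordinates)
    into the broadcast frame and draw them. rvec/tvec describe the camera pose,
    e.g. derived from a robotic camera mount's encoders."""
    pts_2d, _ = cv2.projectPoints(model_points_3d, rvec, tvec, K, DIST)
    for p in pts_2d.reshape(-1, 2):
        cv2.circle(frame, (int(p[0]), int(p[1])), 4, (0, 255, 0), -1)
    return frame
```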

    The Effect of Marker-less Augmented Reality on Task and Learning Performance

    Augmented Reality (AR) technologies have evolved rapidly in recent years, particularly with regard to user interfaces, input devices, and the cameras used in mobile devices for object and gesture recognition. While early AR systems relied on pre-defined trigger images or QR code markers, modern AR applications leverage machine learning techniques to identify objects in their physical environment. So far, only a few empirical studies have investigated AR's potential for supporting learning and task assistance using such marker-less AR. To address this research gap, we implemented an AR application (app) with the aim of analyzing the effectiveness of marker-less AR applied in a mundane setting, suitable for on-the-job training as well as more formal educational settings. The results of our laboratory experiment show that while participants working with AR needed significantly more time to complete the given task, the participants who were supported by AR learned significantly more.

    FPGA Accelerated Discrete-SURF for Real-Time Homography Estimation

    This paper describes our hardware-accelerated FPGA implementation of SURF, named Discrete SURF, to support real-time homography estimation for close-range aerial navigation. The SURF algorithm provides feature matches between a model and a scene, which can be used to find the transformation between the camera and the model. Previous implementations of SURF have partially employed FPGAs to accelerate the feature-detection stage of upright-only image comparisons. We extend this work by providing an FPGA implementation that allows rotation during image comparisons in order to facilitate aerial navigation. We also expand beyond feature detection, as the complete Discrete SURF algorithm runs on the FPGA rather than being piped into processors. This not only minimizes overhead and increases the parallelization of the algorithm but also allows the algorithm to be easily ported to different FPGAs. Furthermore, the Discrete SURF module is a logic-only implementation that does not rely on external hardware, which decreases the overall size, weight, and power of the device while also allowing for easy FPGA-to-ASIC conversion. We evaluate the Discrete SURF algorithm in terms of performance against the original SURF and upright SURF algorithms implemented in OpenCV. Finally, we show how Discrete SURF is more compatible with an aerial navigation scenario than previous works, since rotation invariance must be considered in addition to scale invariance.
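
    A minimal sketch of the homography-estimation step that such a feature pipeline feeds, shown with OpenCV on a CPU purely as a reference point (the paper's contribution is doing the feature stage in FPGA logic); ORB is used here as a stand-in because OpenCV's SURF lives in the non-free contrib module, and the feature count and RANSAC threshold are assumptions.

```python
import cv2
import numpy as np

def estimate_homography(model_img, scene_img):
    """Match features between a model and a scene and fit a homography with RANSAC."""
    orb = cv2.ORB_create(nfeatures=1000)
    kp1, des1 = orb.detectAndCompute(model_img, None)
    kp2, des2 = orb.detectAndCompute(scene_img, None)
    if des1 is None or des2 is None:
        return None

    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)
    if len(matches) < 4:
        return None  # a homography needs at least four correspondences

    src = np.float32([kp1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([kp2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
    return H
```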