
    Framework based on Mobile Augmented Reality for Translating Food Menu in Thai Language to Malay Language

    Augmented reality (AR) is a technology that digitally combines the real world with the virtual world on mobile devices. Mobile AR is expected to help Malaysian tourists who have difficulty understanding the Thai language when visiting Thailand. Hence, a prototype called ARThaiMalay translator was developed to translate printed Thai food menus into Malay. The objectives of this study are to design a Thai-to-Malay food menu translation framework based on mobile AR, to develop a translator application, and to test the application's effectiveness. The prototype consists of three main components: translation based on optical character recognition (OCR) technology, dictionary development using an SQLite database, and display of data from the local database. Evaluation of the developed application shows that it effectively translates Thai text with certain features into Malay.
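The pipeline described above (OCR output looked up in an SQLite dictionary, with the result shown from the local database) can be sketched roughly as follows. This is an illustrative sketch only, not the ARThaiMalay implementation: the table schema and the sample Thai/Malay entries are hypothetical, and real Thai text is not space-delimited, so the actual application would need proper Thai word segmentation before lookup.

```python
import sqlite3

def build_dictionary(entries):
    """Create an in-memory SQLite dictionary mapping Thai terms to Malay.

    Schema is hypothetical; the real application persists its database locally.
    """
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE dictionary (thai TEXT PRIMARY KEY, malay TEXT)")
    conn.executemany("INSERT INTO dictionary VALUES (?, ?)", entries)
    conn.commit()
    return conn

def translate(conn, recognized_text):
    """Look up each OCR-recognized token; fall back to the original token.

    Naive whitespace tokenization stands in for real Thai word segmentation.
    """
    out = []
    for token in recognized_text.split():
        row = conn.execute(
            "SELECT malay FROM dictionary WHERE thai = ?", (token,)
        ).fetchone()
        out.append(row[0] if row else token)
    return " ".join(out)

# Hypothetical sample entries for illustration
conn = build_dictionary([("ข้าวผัด", "nasi goreng"), ("ต้มยำ", "tom yam")])
print(translate(conn, "ข้าวผัด ต้มยำ"))
```

In practice the OCR stage (e.g. a Thai-capable OCR engine) would produce `recognized_text` from the camera frame before this lookup stage runs.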

    Self-Localization Method by Integrating Sensors


    The usability attributes and evaluation measurements of mobile media AR (augmented reality)

    This research aims to develop a tool for creating user-based design interfaces in mobile augmented reality (MAR) education. To develop a design-interface evaluation tool, previous literature was examined for key design elements in educational uses of MAR. The evaluation criteria identified were presence, affordance, and usability. The research used a focus group interview with 7 AR experts to develop a basic usability evaluation checklist, which was submitted to factor analysis for reliability by 122 experts in practice and academia. Based on this checklist, a MAR usability design-interface test was conducted with seven fourth-grade elementary students, followed by structured interviews and questionnaires. This resulted in 29 questions for the MAR interface design checklist.
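The reliability step above can be illustrated with a minimal Cronbach's alpha computation, a standard internal-consistency statistic for checklist items. The abstract does not state which reliability statistic was used, so treating it as Cronbach's alpha is an assumption for illustration.

```python
import statistics

def cronbach_alpha(item_scores):
    """Cronbach's alpha for internal consistency.

    item_scores: one list of respondent scores per checklist item
    (items x respondents). Alpha approaches 1.0 as items covary.
    """
    k = len(item_scores)
    item_vars = [statistics.pvariance(scores) for scores in item_scores]
    # Total score per respondent across all items
    totals = [sum(per_respondent) for per_respondent in zip(*item_scores)]
    total_var = statistics.pvariance(totals)
    return (k / (k - 1)) * (1 - sum(item_vars) / total_var)
```

For perfectly correlated items, alpha is 1.0; values above roughly 0.7 are conventionally taken to indicate acceptable reliability.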

    Video-Based In Situ Tagging on Mobile Phones

    We propose a novel way to augment a real-world scene with minimal user intervention on a mobile phone; the user only has to point the phone camera at the desired location of the augmentation. Our method is valid for horizontal or vertical surfaces only, but this is not a restriction in practice in man-made environments, and it avoids any reconstruction of the 3-D scene, which is still a delicate process on a resource-limited system like a mobile phone. Our approach is inspired by recent work on perspective patch recognition, but we adapt it for better performance on mobile phones. We reduce user interaction with real scenes by exploiting the phone accelerometers to relax the need for fronto-parallel views. As a result, we can learn a planar target in situ from arbitrary viewpoints and augment it with virtual objects in real time on a mobile phone.
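The accelerometer trick above — using the measured gravity direction to warp the view of a horizontal surface toward a fronto-parallel one — can be sketched as below. This is a generic textbook formulation, not the authors' implementation: the intrinsic matrix `K` and the rotation construction are assumptions, and a real system would also handle vertical surfaces and noise in the accelerometer reading.

```python
import numpy as np

def gravity_rectifying_homography(K, g_cam):
    """Homography that warps the view of a horizontal plane toward a
    fronto-parallel view, given the gravity direction g_cam measured by
    the accelerometer in the camera frame. K is the camera intrinsics."""
    g = np.asarray(g_cam, dtype=float)
    g /= np.linalg.norm(g)
    target = np.array([0.0, 0.0, 1.0])  # optical axis after rectification
    v = np.cross(g, target)
    c = float(g @ target)
    if np.linalg.norm(v) < 1e-12:
        # Gravity already (anti-)parallel to the optical axis;
        # the anti-parallel degenerate case is ignored for brevity.
        R = np.eye(3)
    else:
        # Rodrigues-style rotation taking g onto the optical axis
        vx = np.array([[0, -v[2], v[1]],
                       [v[2], 0, -v[0]],
                       [-v[1], v[0], 0]])
        R = np.eye(3) + vx + vx @ vx * (1.0 / (1.0 + c))
    # Pure-rotation homography in pixel coordinates
    return K @ R @ np.linalg.inv(K)
```

Warping the camera image with this homography lets the patch recognizer train on an approximately fronto-parallel view of the target, which is the point of exploiting the accelerometer.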

    Augmented Reality and Health Informatics: A Study based on Bibliometric and Content Analysis of Scholarly Communication and Social Media

    Healthcare outcomes have been shown to improve when technology is used as part of patient care. Health Informatics (HI) is a multidisciplinary study of the design, development, adoption, and application of IT-based innovations in healthcare services delivery, management, and planning. Augmented Reality (AR) is an emerging technology that enhances the user's perception of and interaction with the real world. This study aims to illuminate the intersection of the fields of AR and HI. The domains of AR and HI by themselves are areas of significant research; however, there is a scarcity of research on augmented reality as it applies to health informatics. Given that both scholarly research and social media communication have contributed to the domains of AR and HI, bibliometric and content analysis of scholarly research and social media communication were employed to investigate the salient features and research fronts of the field. The study used Scopus data (7360 scholarly publications) to identify the bibliometric features and to perform content analysis of the identified research. The Altmetric database (an aggregator of data sources) was used to determine the social media communication for this field. The findings from this study included publication volumes, top authors, affiliations, subject areas, and geographical locations from scholarly publications as well as from a social media perspective. The 200 most highly cited documents were used to determine the research fronts in scholarly publications. Content analysis techniques were employed on the publication abstracts as a secondary technique to determine the research themes of the field. The study found that the research frontiers in the scholarly communication included emerging AR technologies such as tracking and computer vision, along with surgical and learning applications. There was a commonality between social media and scholarly communication themes from an applications perspective.
In addition, social media themes included applications of AR in healthcare delivery, clinical studies, and mental disorders. Europe as a geographic region dominates the research field with 50% of the articles, and North America and Asia tie for second with 20% each. Publication volumes show a steep upward slope, indicating continued research. Social media communication is still in its infancy in terms of data extraction; however, aggregators like Altmetric are helping to enhance the outcomes. The findings from the study revealed that frontier research in AR has made an impact in the surgical and learning applications of HI and has potential for other applications as new technologies are adopted.
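The bibliometric counts reported above (top authors, regions, publication volumes) come down to frequency tallies over exported metadata records. A minimal sketch, with hypothetical record fields standing in for a Scopus or Altmetric export:

```python
from collections import Counter

def top_items(records, field, n=3):
    """Tally a (possibly multi-valued) metadata field across records and
    return the n most frequent values with their counts."""
    counts = Counter()
    for rec in records:
        values = rec.get(field, [])
        counts.update(values if isinstance(values, list) else [values])
    return counts.most_common(n)

# Hypothetical records mimicking an export of publication metadata
records = [
    {"authors": ["Smith", "Lee"], "region": "Europe"},
    {"authors": ["Lee"], "region": "Europe"},
    {"authors": ["Kim"], "region": "Asia"},
]
print(top_items(records, "authors"))
print(top_items(records, "region"))
```

The same tally applied per publication year yields the publication-volume trend the study reports.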

    Augmented Reality via Browser (Realidade aumentada via browser)

    Technologies can help and inform us, or entertain and dazzle us. Augmented Reality is a technology that enriches the real environment with virtual objects. It can be used in many areas and for many purposes. Its use is currently growing, for example on the Internet and on mobile devices. But mass adoption requires that it become ever simpler, so that most people can use it. This thesis presents a study of Augmented Reality and its implementation in the browser, identifying the phases and essential concepts for building an Augmented Reality system. Several test applications were developed for the Web using the PaperVision3D and FLARToolKit tools. An Augmented Reality application was also created that does not rely on traditional fiducial markers to manipulate the virtual models. This required implementing other methods for positioning and manipulating the virtual models, such as detecting the user's face and detecting motion. This new approach yielded very satisfactory results, and a simple game was developed that illustrates its potential. The use of models with several levels of detail was also tested in this type of application with a view to increasing its performance.

    Video See-Through Augmented Reality Application on a Mobile Computing Platform Using Position Based Visual POSE Estimation

    A technique for real-time object tracking in a mobile computing environment and its application to video see-through Augmented Reality (AR) has been designed, verified through simulation, and implemented and validated on a mobile computing device. Using position-based visual position and orientation (POSE) methods and the Extended Kalman Filter (EKF), it is shown how this technique can flexibly track multiple objects and multiple object models using a single monocular camera on different mobile computing devices. Using the monocular camera of the mobile computing device, feature points of the object(s) are located through image processing on the display. The relative position and orientation between the device and the object(s) is determined recursively by an EKF process. Once the relative position and orientation is determined for each object, three-dimensional AR image(s) are rendered onto the display as if the device were looking at the virtual object(s) in the real world. This application and the framework presented could be used in the future to overlay additional information onto displays in mobile computing devices. Example applications include robot-aided surgery, where animations could be overlaid to assist the surgeon; training applications that could aid in the operation of equipment; or search and rescue operations, where critical information such as floor plans and directions could be virtually placed onto the display. Current approaches in the field of real-time object tracking are discussed, along with the methods used for video see-through AR applications on mobile computing devices. The mathematical framework for the real-time object tracking and video see-through AR rendering is discussed in detail, with some consideration of extension to the handling of multiple AR objects. A physical implementation for a mobile computing device is proposed, detailing the algorithmic approach along with design decisions.
The proposed real-time object tracking and video see-through AR system is verified through simulation, and details on its accuracy, robustness, constraints, and an extension to multiple object tracking are presented. The system is then validated using a ground-truth measurement system, and its accuracy, robustness, and limitations are reviewed. A detailed validation analysis is also presented showing the feasibility of extending this approach to multiple objects. Finally, conclusions from this research are presented based on the findings of this work, and further areas of study are proposed.
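The recursive estimation at the core of the system is the standard Kalman predict/update cycle; an EKF additionally linearizes nonlinear motion and measurement models around the current estimate. A minimal sketch of one cycle for the linear case — the thesis's actual state vector, camera measurement model, and Jacobians are not reproduced here, so all matrices below are generic placeholders:

```python
import numpy as np

def kf_step(x, P, z, F, H, Q, R):
    """One predict/update cycle of a Kalman recursion (linear case).

    x, P : prior state estimate and covariance
    z    : new measurement (e.g. tracked feature-point coordinates)
    F, H : motion and measurement models (Jacobians in the EKF case)
    Q, R : process and measurement noise covariances
    """
    # Predict: propagate the state and grow the uncertainty
    x_pred = F @ x
    P_pred = F @ P @ F.T + Q
    # Update: weigh the measurement residual by the Kalman gain
    y = z - H @ x_pred
    S = H @ P_pred @ H.T + R
    K = P_pred @ H.T @ np.linalg.inv(S)
    x_new = x_pred + K @ y
    P_new = (np.eye(len(x)) - K @ H) @ P_pred
    return x_new, P_new
```

In the pose-tracking setting this cycle runs once per camera frame, with the 2-D feature-point detections as `z` and the relative device-to-object pose embedded in the state `x`.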

    Interactive Remote Collaboration Using Augmented Reality

    With the widespread deployment of fast data connections and the availability of a variety of sensors for different modalities, the potential of remote collaboration has greatly increased. While the now-ubiquitous video conferencing applications take advantage of some of these capabilities, the use of video between remote users is limited to passively watching disjoint video feeds and provides no means for interaction with the remote environment. However, collaboration often involves sharing, exploring, referencing, or even manipulating the physical world, and thus tools should support these interactions. We suggest that augmented reality is an intuitive and user-friendly paradigm to communicate information about the physical environment, and that the integration of computer vision and augmented reality facilitates more immersive and more direct interaction with the remote environment than is possible with today's tools. In this dissertation, we present contributions to realizing this vision on several levels. First, we describe a conceptual framework for unobtrusive mobile video-mediated communication in which the remote user can explore the live scene independent of the local user's current camera movement, and can communicate information by creating spatial annotations that are immediately visible to the local user in augmented reality. Second, we describe the design and implementation of several increasingly flexible and immersive user interfaces and system prototypes that implement this concept. Our systems do not require any preparation or instrumentation of the environment; instead, the physical scene is tracked and modeled incrementally using monocular computer vision. The emerging model then supports anchoring of annotations, virtual navigation, and synthesis of novel views of the scene.
Third, we describe the design, execution, and analysis of three user studies comparing our prototype implementations with more conventional interfaces and/or evaluating specific design elements. Study participants overwhelmingly preferred our technology, and their task performance was significantly better compared with a video-only interface, though no task performance difference was observed compared with a "static marker" interface. Last, we address a particular technical limitation of current monocular tracking and mapping systems that was found to be an impediment, and present a conceptual solution; namely, we describe a concept and proof-of-concept implementation for automatic model selection, which allows tracking and modeling to cope with both parallax-inducing and rotation-only camera movements. We suggest that our results demonstrate the maturity and usability of our systems and, more importantly, the potential of our approach to improve video-mediated communication and broaden its applicability.
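The model-selection problem described above — deciding between a homography (appropriate for rotation-only camera motion or a planar scene) and a full epipolar model (appropriate for parallax-inducing motion) — is commonly addressed in the multiple-view-geometry literature with an information criterion such as Torr's GRIC. The dissertation's own mechanism may differ, so the following is a generic sketch of that criterion, not the author's implementation:

```python
import math

def gric(residuals_sq, sigma, model_dim, n_params, n_points, r=4):
    """Torr's Geometric Robust Information Criterion; lower is better.

    residuals_sq : squared geometric residuals of the fitted model
    sigma        : assumed noise standard deviation (pixels)
    model_dim    : manifold dimension (2 for a homography, 3 for F)
    n_params     : parameter count (8 for a homography, 7 for F)
    r            : dimension of the data (4 for point correspondences)
    """
    lam1, lam2, lam3 = math.log(r), math.log(r * n_points), 2.0
    # Robustified residual term: outliers are capped at lam3 * (r - d)
    rho = sum(min(e / sigma**2, lam3 * (r - model_dim)) for e in residuals_sq)
    return rho + lam1 * model_dim * n_points + lam2 * n_params
```

Fitting both models to the same correspondences and keeping the one with the lower score lets the tracker switch between rotation-only and parallax-capable mapping automatically; with tiny residuals for both models, the criterion's complexity terms favor the simpler homography, and large homography residuals (parallax) tip the choice to the epipolar model.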