280 research outputs found
User Experience of Markerless Augmented Reality Applications in Cultural Heritage Museums: ‘MuseumEye’ as a Case Study
This paper explores the User Experience (UX) of Augmented Reality (AR) applications in museums. UX as a concept is vital to effective visual communication and interpretation in museums, and to enhancing usability during a museum tour. In the project ‘MuseumEye’, the generated augmentations were localized by a hybrid system that combines markerless Simultaneous Localisation and Mapping (SLAM) tracking with indoor beacons using Bluetooth Low Energy (BLE). These augmentations comprise a combination of multimedia content and the different levels of visual information required by museum visitors. Using mobile devices to pilot this application, we developed a UX design model capable of evaluating the user experience and usability of the application. This paper focuses on the multidisciplinary outcomes of the project from both a technical and a museological perspective, based on public responses. A field evaluation of the AR system was conducted after the UX model was applied. Twenty-six participants were recruited in the Leeds museum and another twenty in the Egyptian Museum in Cairo. Results showed positive responses to the system after adopting the UX design model. This study contributes by synthesizing a UX design model for AR applications that reaches the level of user interaction required, which ultimately reflects on the entire museum experience.
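The hybrid localization the abstract describes pairs coarse BLE beacon proximity with fine SLAM tracking. The abstract gives no implementation details, so the sketch below only illustrates the beacon side, using the standard log-distance path-loss model to rank beacons by estimated distance; the calibration constants and beacon names are illustrative assumptions, not values from the paper:

```python
import math

def beacon_distance(rssi_dbm, tx_power_dbm=-59, path_loss_exponent=2.0):
    """Estimate distance (metres) to a BLE beacon from its RSSI using the
    log-distance path-loss model; tx_power_dbm is the calibrated RSSI at 1 m."""
    return 10 ** ((tx_power_dbm - rssi_dbm) / (10 * path_loss_exponent))

def nearest_beacon(readings, **kwargs):
    """Pick the beacon with the smallest estimated distance. In a hybrid
    system this coarse zone fix would select which content set the
    markerless SLAM tracker then anchors precisely."""
    return min(readings, key=lambda r: beacon_distance(r[1], **kwargs))

readings = [("entrance", -75), ("gallery-3", -62), ("atrium", -81)]
print(nearest_beacon(readings)[0])  # gallery-3
```

In practice RSSI is noisy, so a real system would smooth readings over a window before ranking; the principle of zone selection stays the same.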
Ambient Intelligence for Next-Generation AR
Next-generation augmented reality (AR) promises a high degree of
context-awareness - a detailed knowledge of the environmental, user, social and
system conditions in which an AR experience takes place. This will facilitate
both the closer integration of the real and virtual worlds, and the provision
of context-specific content or adaptations. However, environmental awareness in
particular is challenging to achieve using AR devices alone; not only are these
mobile devices' view of an environment spatially and temporally limited, but
the data obtained by onboard sensors is frequently inaccurate and incomplete.
This, combined with the fact that many aspects of core AR functionality and
user experiences are impacted by properties of the real environment, motivates
the use of ambient IoT devices, wireless sensors and actuators placed in the
surrounding environment, for the measurement and optimization of environment
properties. In this book chapter we categorize and examine the wide variety of
ways in which these IoT sensors and actuators can support or enhance AR
experiences, including quantitative insights and proof-of-concept systems that
will inform the development of future solutions. We outline the challenges and
opportunities associated with several important research directions which must
be addressed to realize the full potential of next-generation AR.
Comment: This is a preprint of a book chapter which will appear in the Springer Handbook of the Metaverse.
Augmented reality (AR) for surgical robotic and autonomous systems: State of the art, challenges, and solutions
Despite the substantial progress achieved in the development and integration of augmented reality (AR) in surgical robotic and autonomous systems (RAS), the center of focus in most devices remains on improving end-effector dexterity and precision, as well as improving access to minimally invasive surgeries. This paper aims to provide a systematic review of different types of state-of-the-art surgical robotic platforms while identifying areas for technological improvement. We associate specific control features, such as haptic feedback, sensory stimuli, and human-robot collaboration, with AR technology to perform complex surgical interventions for increased user perception of the augmented world. Researchers in the field have long faced innumerable issues with low accuracy in tool placement around complex trajectories, pose estimation, and difficulty in depth perception during two-dimensional medical imaging. A number of robots described in this review, such as Novarad and SpineAssist, are analyzed in terms of their hardware features, computer vision systems (such as deep learning algorithms), and the clinical relevance of the literature. We attempt to outline the shortcomings in current optimization algorithms for surgical robots (such as YOLO and LSTM) whilst providing mitigating solutions for internal tool-to-organ collision detection and image reconstruction. The accuracy of results in robot end-effector collisions and reduced occlusion remains promising within the scope of our research, validating the propositions made for the surgical clearance of ever-expanding AR technology in the future.
Recent Developments and Future Challenges in Medical Mixed Reality
As AR technology matures, we have seen many applications emerge in entertainment, education and training. However, the use of AR is not yet common in medical practice, despite the great potential of this technology to help not only learning and training in medicine, but also in assisting diagnosis and surgical guidance. In this paper, we present recent trends in the use of AR across all medical specialties and identify challenges that must be overcome to narrow the gap between academic research and practical use of AR in medicine. A database of 1403 relevant research papers published over the last two decades has been reviewed using a novel research trend analysis method based on a text mining algorithm. We semantically identified 10 topics, covering a variety of technologies and applications, based on the unbiased and impersonal clustering results from the Latent Dirichlet Allocation (LDA) model, and analysed the trend of each topic from 1995 to 2015. The statistical results reveal a taxonomy that best describes the development of medical AR research during the two decades, and the trend analysis provides a higher-level view of how the taxonomy has changed and where the focus is heading. Finally, based on these results, we provide an insightful discussion of the current limitations, challenges and future directions in the field. Our objective is to help researchers focus on the application areas in medical AR that are most needed, as well as providing medical practitioners with the latest technology advancements.
Mobile Augmented Reality: User Interfaces, Frameworks, and Intelligence
Mobile Augmented Reality (MAR) integrates computer-generated virtual objects with physical environments for mobile devices. MAR systems enable users to interact with MAR devices, such as smartphones and head-worn wearables, and perform seamless transitions from the physical world to a mixed world with digital entities. These MAR systems support user experiences using MAR devices to provide universal access to digital content. Over the past 20 years, several MAR systems have been developed; however, the studies and design of MAR frameworks have not yet been systematically reviewed from the perspective of user-centric design. This article presents the first effort to survey existing MAR frameworks (count: 37) and further discusses the latest studies on MAR through a top-down approach: (1) MAR applications; (2) MAR visualisation techniques adaptive to user mobility and contexts; (3) systematic evaluation of MAR frameworks, including supported platforms and corresponding features such as tracking, feature extraction, and sensing capabilities; and (4) underlying machine learning approaches supporting intelligent operations within MAR systems. Finally, we summarise the development of emerging research fields and the current state-of-the-art, and discuss the important open challenges and possible theoretical and technical directions. This survey aims to benefit both researchers and MAR system developers alike.
Peer reviewed
Object Registration in Semi-cluttered and Partial-occluded Scenes for Augmented Reality
This paper proposes a stable and accurate object registration pipeline for markerless augmented reality applications. We present two novel algorithms for object recognition and matching to improve the registration accuracy of the model-to-scene transformation via point cloud fusion. Whilst the first algorithm effectively deals with simple scenes with few object occlusions, the second algorithm handles cluttered scenes with partial occlusions for robust real-time object recognition and matching. The computational framework includes a locally supported Gaussian weight function to enable repeatable detection of 3D descriptors. We apply bilateral filtering and outlier removal to preserve the edges of the point cloud and remove interference points, in order to increase matching accuracy. Extensive experiments have been carried out to compare the proposed algorithms with the four most-used methods. Results show improved performance of the algorithms in terms of computational speed, camera tracking and object matching errors in semi-cluttered and partially occluded scenes.
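The outlier-removal step the abstract mentions is commonly done by statistical filtering on nearest-neighbour distances. The paper's exact filter is not specified, so the following is only a minimal NumPy sketch of that standard technique (brute-force distances, fine for small clouds; a KD-tree would be used in practice):

```python
import numpy as np

def remove_statistical_outliers(points, k=8, std_ratio=2.0):
    """Drop points whose mean distance to their k nearest neighbours exceeds
    the global mean of that statistic by std_ratio standard deviations."""
    d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    knn = np.sort(d, axis=1)[:, 1:k + 1]        # skip the zero self-distance
    mean_knn = knn.mean(axis=1)
    thresh = mean_knn.mean() + std_ratio * mean_knn.std()
    return points[mean_knn <= thresh]

rng = np.random.default_rng(0)
cloud = rng.normal(size=(200, 3))                   # dense cluster
cloud = np.vstack([cloud, [[50.0, 50.0, 50.0]]])    # one far interference point
clean = remove_statistical_outliers(cloud)
print(len(cloud), "->", len(clean))
```

Removing such interference points before descriptor matching is what keeps the model-to-scene transformation estimate from being skewed by spurious correspondences.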
Keyframe Tagging: Unambiguous Content Delivery for Augmented Reality Environments
Context: When considering the use of Augmented Reality to provide navigation cues in a completely unknown environment, the content must be delivered into the environment with a repeatable level of accuracy such that the navigation cues can be understood and interpreted correctly by the user.
Aims: This thesis aims to investigate whether a still-image-based reconstruction of an Augmented Reality environment can be used to develop a content delivery system that provides a repeatable level of accuracy for content placement. It will also investigate whether manipulation of the properties of a Spatial Marker object is sufficient to reduce object selection ambiguity in an Augmented Reality environment.
Methods: A series of experiments were conducted to test the separate aspects of these aims. Participants were required to use the developed Keyframe Tagging tool to introduce virtual navigation markers into an Augmented Reality environment, and also to identify objects within an Augmented Reality environment that was signposted using different Virtual Spatial Markers. This tested the accuracy and repeatability of the approach's content placement, while also testing participants' ability to reliably interpret virtual signposts within an Augmented Reality environment. Finally, the Keyframe Tagging tool was tested by an expert user against a pre-existing solution to evaluate the time savings offered by this approach against the overall accuracy of content placement.
Results: The average accuracy score for content placement across 20 participants was 64%, categorised as ‘Good’ when compared with an expert benchmark result, while no tags were considered ‘incorrect’ and only 8 of 200 tags were considered to have ‘Poor’ accuracy, supporting the Keyframe Tagging approach. In terms of object identification from virtual cues, some of the predicted cognitive links between virtual marker property and target object did not surface, though participants reliably identified the correct objects across several trials.
Conclusions: This thesis has demonstrated that accurate content delivery can be achieved through the use of a still-image-based reconstruction of an Augmented Reality environment. By using the Keyframe Tagging approach, content can be placed quickly and with a sufficient level of accuracy to demonstrate its utility in the scenarios outlined within this thesis. There are some observable limitations to the approach, which are discussed alongside proposals for further work in this area.
Industrial Augmented Reality As An Approach For Device Identification Within A Manufacturing Plant For Property Alteration Purposes
Thesis
The introduction of 3D computer graphics has led to a monumental increase in the processing capacity of computational units, along with speed, memory and transmission bandwidth. Augmented Reality (AR) has made remarkable progress towards real-world consumer applications. Mass production occurs daily in manufacturing plants, and large volumes of wastage are observed, caused by human error, load-shedding (power outages), machine malfunction, or the time it takes engineers to identify and fix a problem.
Therefore, strategies and solutions need to be developed and implemented to reduce such problems on-site with accurate data, rather than outsourcing or depending solely on Supervisory Control and Data Acquisition (SCADA) system data, which might damage the integrity and economy of the manufacturing plant.
In a controlled network, identification and detection of a component in the process are difficult without prior knowledge and background in the design and implementation process.
Thus, the concept of device identification with the aid of augmented reality, utilising markerless identifiers such as machine vision rather than Quick Response codes (QR codes) or Radio Frequency Identification (RFID), needs to be investigated.
It is for such reasons that the deployment of new types of technologies, such as ‘augmented reality’ and ‘machine vision’, needs to be investigated further to obtain device details, based on their positions and features within the indoor manufacturing plant, and to procure and commercialise this solution technology.
This study proposes an optimal and efficient model that utilises a machine vision application to detect and identify devices, based on their positions and features within the manufacturing plant, with the aid of an augmented reality application for presenting the device details.
The study outlined a machine vision application developed for object detection based on colour and shape. Additionally, another method, based on the augmented reality application, was developed for the identification and augmentation of device details, based on the feature and position of the device within the indoor manufacturing plant. The study proved very successful in the identification and detection of objects using machine vision algorithms, namely colour, shape and Canny edge detection, and in the identification of devices (a robotic arm and motors) based on their features and position within an indoor manufacturing environment set-up.
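The colour stage of such a detection pipeline can be sketched in a few lines. The thesis gives no code, so the following is a hypothetical NumPy stand-in that thresholds an RGB image for a red device and returns its bounding box; in a real deployment OpenCV's HSV thresholding and Canny edge detection would follow, and the threshold values here are illustrative only:

```python
import numpy as np

def detect_red_region(img):
    """Return the bounding box (x0, y0, x1, y1) of red pixels in an RGB
    uint8 image, or None if no red region is present. Illustrative
    thresholds; a production pipeline would tune these per lighting."""
    r, g, b = img[..., 0], img[..., 1], img[..., 2]
    mask = (r > 150) & (g < 80) & (b < 80)
    if not mask.any():
        return None
    ys, xs = np.nonzero(mask)
    return int(xs.min()), int(ys.min()), int(xs.max()), int(ys.max())

img = np.zeros((100, 100, 3), dtype=np.uint8)
img[20:40, 30:60] = (200, 10, 10)    # synthetic red "device"
print(detect_red_region(img))        # (30, 20, 59, 39)
```

The sensitivity of fixed colour thresholds to lighting is exactly why the thesis reports day and night studies to find workable light settings.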
For the optimal efficiency of this model, the Simultaneous Localisation and Mapping (SLAM) algorithm ORB-SLAM was used in conjunction with the bundle adjustment algorithm, as an alternative solution in the absence of user-built maps, to calculate device positions given the uncertainty of their exact locations within the indoor manufacturing environment set-up.
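Bundle adjustment, as used here alongside ORB-SLAM, minimises reprojection error over camera poses and map points. As a minimal sketch (not the thesis's implementation), the quantity being minimised can be written as the RMS pixel error of a pinhole projection; the intrinsics and points below are made-up values:

```python
import numpy as np

def project(K, R, t, X):
    """Pinhole projection of Nx3 world points X into pixel coordinates."""
    Xc = X @ R.T + t                 # world frame -> camera frame
    uv = Xc[:, :2] / Xc[:, 2:3]      # perspective divide
    return uv @ K[:2, :2].T + K[:2, 2]

def reprojection_rmse(K, R, t, X, observed_uv):
    """RMS reprojection error: the cost bundle adjustment minimises over
    camera poses (R, t) and map points X after feature matching."""
    err = project(K, R, t, X) - observed_uv
    return float(np.sqrt((err ** 2).sum(axis=1).mean()))

K = np.array([[500.0, 0, 320], [0, 500.0, 240], [0, 0, 1]])
R, t = np.eye(3), np.zeros(3)
X = np.array([[0.0, 0.0, 2.0], [0.5, -0.2, 3.0]])
obs = project(K, R, t, X)                  # perfect observations
print(reprojection_rmse(K, R, t, X, obs))  # 0.0
```

A solver such as Levenberg-Marquardt iterates on R, t and X to drive this residual down; the device positions the thesis computes fall out of the optimised map.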
However, some shortcomings were identified and addressed, such as the communication speed and the room's light conditions, which impaired the camera's ability to detect the correct objects. These shortcomings were addressed by conducting two studies, a day study and a night study, to compare the best light settings, and by reducing the distance between the devices and the AR application to compensate for the communication speed issues.
The scientific contribution of this study is the recognition of components by means of vision identification within such a process in an indoor manufacturing set-up. By means of identification, the user will have the capability to view and adjust the parameters of the process in a scaled plant. This contribution makes use of a modelled JPEG image. An AR image, from which the user can identify the devices without relying on the SCADA system alone, was modelled in Blender3D for use in Unity3D, as opposed to utilising an arbitrary image and referencing it, which would make the process tedious and reduce the processing speed. Subsequently, as part of the new knowledge contribution, it has been shown that identification of the devices can be achieved by placing the smartphone at any angle relative to the device (robotic arm or motor), and detection and augmentation will be achieved without any change in the settings.
As part of result validation, a video was taken and uploaded to YouTube to gather user perspectives on the developed AR application. After the video upload, a survey, together with the YouTube link, was shared with 20 individuals to provide a broader evaluation base. The results came back positive, with the majority of the sample recommending the adoption of the application and its utilisation in the scaled manufacturing plant.
In addition to the results verification, a SCADA model was developed in National Instruments™ LabVIEW™ and integrated with the AR application for evaluation purposes. The results showed that the AR application doesn't require any alteration, despite utilising a different SCADA model in different software applications, provided that the array index is the same. Only when the array index differs are alterations to the AR application necessary, in order to have the same array elements and avoid a null index that might cause the application to crash or fail to debug. It is therefore noted that the AR application is compatible and reliable for integration with other SCADA models without alteration requirements.
The entire work outlined in this thesis was validated by two sets of physical experiments, namely GPS-based detection and ORB-SLAM integrated with the bundle adjustment algorithm for feature and position detection. Despite prior knowledge of the GPS's inconsistent operation within a scaled indoor environment, it was necessary to perform the test to obtain more insight into this inconsistency and the inaccurate data results.
- …