Protecting Visual Information in Augmented Reality from Malicious Application Developers
abstract: Visual applications – those that use camera frames as part of the application – provide a rich, context-aware experience. The continued development of mixed and augmented reality (MR/AR) computing environments furthers the richness of this experience by providing applications a continuous vision experience, where visual information continuously provides context for applications and the real world is augmented by the virtual. To understand user privacy concerns in continuous vision computing environments, this work studies three MR/AR applications (augmented markers, augmented faces, and text capture). The study shows that in a modern mobile system, the typical user is exposed to potential mass collection of sensitive information, revealing privacy and security deficiencies that must be addressed in future systems.
To address such deficiencies, a development framework is proposed that isolates user information contained in camera frames from application access to the network. The design is implemented using existing system utilities as a proof of concept on the Android operating system, and its viability is demonstrated with a modern state-of-the-art augmented reality library and several augmented reality applications. The design is evaluated on a Samsung Galaxy S8 phone by comparing the applications from the case study with modified versions that better protect user privacy. Early results show that the new design efficiently protects users against data collection in MR/AR applications with less than 0.7% performance overhead.
Dissertation/Thesis, Masters Thesis, Computer Engineering, 201
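The isolation idea described in the abstract can be sketched in a few lines. This is a hypothetical illustration (the class names, `MarkerResult` fields, and API are my own, not the thesis's actual framework): a vision service holds the raw camera frames and exposes only abstracted AR results, so application code that has network access never sees pixel data.

```python
# Hypothetical sketch of camera/network isolation (names are illustrative,
# not the thesis's actual framework).

class MarkerResult:
    """Abstracted detection output: a marker id and 2D position, no pixels."""
    def __init__(self, marker_id: int, x: int, y: int):
        self.marker_id = marker_id
        self.x = x
        self.y = y

class VisionService:
    """Stands in for an isolated process with camera access but no network."""
    def __init__(self):
        self._frame = None  # raw pixels stay inside this boundary

    def capture(self, frame):
        self._frame = frame

    def detect_markers(self):
        # Placeholder: a real service would run an AR library on self._frame.
        return [MarkerResult(42, 120, 80)] if self._frame else []

class ARApplication:
    """Stands in for app code with network access; sees sanitized data only."""
    def __init__(self, vision: VisionService):
        self.vision = vision

    def update(self):
        # Only abstract data (ids, coordinates) could ever be uploaded.
        return [(r.marker_id, r.x, r.y) for r in self.vision.detect_markers()]

service = VisionService()
service.capture([[0] * 640 for _ in range(480)])  # fake 640x480 frame
app = ARApplication(service)
sanitized = app.update()  # abstracted results only, e.g. [(42, 120, 80)]
```

On Android, the same boundary would be enforced with process separation and permissions (the vision process holds CAMERA but not INTERNET) rather than Python objects.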
Adaptive User Perspective Rendering for Handheld Augmented Reality
Handheld Augmented Reality commonly implements some variant of magic lens
rendering, which turns only a fraction of the user's real environment into AR
while the rest of the environment remains unaffected. Since handheld AR devices
are commonly equipped with video see-through capabilities, AR magic lens
applications often suffer from spatial distortions, because the AR environment
is presented from the perspective of the camera of the mobile device. Recent
approaches counteract this distortion based on estimations of the user's head
position, rendering the scene from the user's perspective. To this end,
approaches usually apply face-tracking algorithms on the front camera of the
mobile device. However, this demands high computational resources and therefore
commonly affects the performance of the application beyond the already high
computational load of AR applications. In this paper, we present a method to
reduce the computational demands for user perspective rendering by applying
lightweight optical flow tracking and an estimation of the user's motion before
head tracking is started. We demonstrate the suitability of our approach for
computationally limited mobile devices and we compare it to device perspective
rendering, to head-tracked user perspective rendering, as well as to fixed
point-of-view user perspective rendering.
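The lightweight motion-estimation step described above can be sketched as follows. This is my illustration under stated assumptions, not the paper's actual algorithm (which uses optical flow tracking): a cheap brute-force translation search estimates inter-frame motion, and expensive face/head tracking starts only once that motion exceeds a threshold.

```python
# Minimal sketch (illustrative, not the paper's optical-flow method):
# estimate inter-frame motion cheaply, then gate the expensive tracker.

def shift_cost(a, b, dx, dy):
    """Sum of absolute differences between a and b shifted by (dx, dy)."""
    h, w = len(a), len(a[0])
    cost = 0
    for y in range(h):
        for x in range(w):
            sy, sx = y + dy, x + dx
            if 0 <= sy < h and 0 <= sx < w:
                cost += abs(a[y][x] - b[sy][sx])
    return cost

def estimate_translation(a, b, search=2):
    """Return the (dx, dy) shift of b that best aligns it with a."""
    return min(((dx, dy) for dx in range(-search, search + 1)
                         for dy in range(-search, search + 1)),
               key=lambda s: shift_cost(a, b, *s))

# A single bright pixel moves from (x=2, y=2) in frame_a to (x=1, y=2)
# in frame_b, i.e. one pixel to the left.
frame_a = [[0] * 5 for _ in range(5)]
frame_a[2][2] = 9
frame_b = [[0] * 5 for _ in range(5)]
frame_b[2][1] = 9

dx, dy = estimate_translation(frame_a, frame_b)  # (-1, 0)
MOTION_THRESHOLD = 0  # assumed threshold, purely illustrative
start_head_tracking = abs(dx) + abs(dy) > MOTION_THRESHOLD
```

A production implementation would use a pyramidal optical-flow tracker (e.g. Lucas-Kanade) on the front camera rather than exhaustive search.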
The Limited Effect of Graphic Elements in Video and Augmented Reality on Children’s Listening Comprehension
There is currently significant interest in the use of instructional strategies in learning environments, thanks to the emergence of new multimedia systems that combine text, audio, graphics and video, such as augmented reality (AR). In this light, this study compares the effectiveness of AR and video for listening comprehension tasks. The sample consisted of thirty-two elementary school students with different reading comprehension levels. First, the experience, instructions and objectives were introduced to all the students. Next, they were divided into two groups to perform activities: one group watched an educational video story of the dog Laika and her space journey, available through the mobile app Blue Planet Tales, while the other used AR, in which the same story was visualized by means of the app Augment Sales. Once the activities were completed, participants answered a comprehension test. Results (p = 0.180) indicate that there are no meaningful differences between lesson format and test performance. However, there are differences among the participants of the AR group according to their reading comprehension level. With respect to the time taken to complete the comprehension test, there is no significant difference between the two groups, but there is a difference between participants with high and low comprehension levels. Finally, the SUS (System Usability Scale) questionnaire was used to measure the usability of the AR app on a smartphone. An average score of 77.5 out of 100 was obtained, which indicates that the app has a fairly good user-centered design.
Securing Interactive Sessions Using Mobile Device through Visual Channel and Visual Inspection
A communication channel established from a display to a device's camera is
known as a visual channel, and it is helpful in securing key exchange protocols.
In this paper, we study how a visual channel can be exploited by a network
terminal and a mobile device to jointly verify information in an interactive
session, and how such information can be jointly presented in a user-friendly
manner, taking into account that the mobile device can only capture and display
a small region and that the user may only want to authenticate selective
regions of interest. Motivated by applications in kiosk computing and
multi-factor authentication, we consider three security models: (1) the mobile
device is trusted, (2) at most one of the terminal or the mobile device is
dishonest, and (3) both the terminal and device are dishonest but they do not
collude or communicate. We give two protocols and investigate them under the
abovementioned models. We point out a form of replay attack that renders some
other straightforward implementations cumbersome to use. To enhance
user-friendliness, we propose a solution using visual cues embedded into the 2D
barcodes and incorporate the framework of "augmented reality" for easy
verifications through visual inspection. We give a proof-of-concept
implementation to show that our scheme is feasible in practice.
Comment: 16 pages, 10 figures
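The joint-verification idea can be sketched in miniature. This is my own construction, not the paper's protocol: both the terminal and the device derive a short code from the shared session transcript, and the user compares the two codes visually (the paper renders such cues as 2D barcodes inspected through an AR overlay).

```python
# Hedged sketch (my construction, not the paper's protocol): short
# human-comparable codes derived from a session transcript.
import hashlib

def verification_code(transcript: bytes, n_digits: int = 6) -> str:
    """Short digest of the session transcript for visual comparison."""
    digest = hashlib.sha256(transcript).digest()
    return str(int.from_bytes(digest[:4], "big") % 10 ** n_digits).zfill(n_digits)

# In an honest session both sides see the same transcript, so codes match.
transcript = b"terminal-pubkey|device-pubkey|nonce"
code_on_terminal = verification_code(transcript)
code_on_device = verification_code(transcript)

# A man in the middle who substitutes keys changes the transcript, so the
# two codes would (with overwhelming probability) no longer match.
tampered_code = verification_code(b"attacker-pubkey|device-pubkey|nonce")
```

The replay attack noted in the abstract is precisely why a real protocol must bind such codes to fresh session data (the nonce above) rather than to static identities.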