Adaptive User Perspective Rendering for Handheld Augmented Reality
Handheld Augmented Reality commonly implements some variant of magic lens
rendering, which turns only a fraction of the user's real environment into AR
while the rest of the environment remains unaffected. Since handheld AR devices
are commonly equipped with video see-through capabilities, AR magic lens
applications often suffer from spatial distortions, because the AR environment
is presented from the perspective of the camera of the mobile device. Recent
approaches counteract this distortion based on estimations of the user's head
position, rendering the scene from the user's perspective. To this end,
approaches usually apply face-tracking algorithms on the front camera of the
mobile device. However, this demands high computational resources and therefore
commonly affects the performance of the application beyond the already high
computational load of AR applications. In this paper, we present a method to
reduce the computational demands for user perspective rendering by applying
lightweight optical flow tracking and an estimation of the user's motion before
head tracking is started. We demonstrate the suitability of our approach for
computationally limited mobile devices, and we compare it to device perspective
rendering, to head-tracked user perspective rendering, and to fixed-point-of-view
user perspective rendering.
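The core idea, running a cheap motion check before committing to heavyweight face tracking, can be sketched as follows. This is a hypothetical illustration using OpenCV sparse optical flow; the threshold, feature parameters, and gating logic are our assumptions, not the authors' implementation.

```python
import cv2
import numpy as np

MOTION_THRESHOLD = 2.0  # px/frame; hypothetical tuning value

def median_flow_magnitude(prev_gray, gray, prev_pts):
    """Median sparse-flow displacement as a cheap proxy for user/device motion."""
    next_pts, status, _err = cv2.calcOpticalFlowPyrLK(prev_gray, gray, prev_pts, None)
    good = status.ravel() == 1
    if not good.any():
        return 0.0
    disp = (next_pts[good] - prev_pts[good]).reshape(-1, 2)
    return float(np.median(np.linalg.norm(disp, axis=1)))

cap = cv2.VideoCapture(0)  # stand-in for the device's front camera
ok, frame = cap.read()
prev_gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    pts = cv2.goodFeaturesToTrack(prev_gray, maxCorners=50,
                                  qualityLevel=0.01, minDistance=10)
    if pts is not None:
        motion = median_flow_magnitude(prev_gray, gray, pts)
        if motion < MOTION_THRESHOLD:
            # Motion has settled: this is where the expensive face/head
            # tracker would be started to estimate the user's perspective.
            pass
    prev_gray = gray
```

The point of the gate is that pyramidal Lucas-Kanade flow over a few dozen corners costs far less per frame than a full face detector, so the expensive tracker only runs once motion has settled.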
Digital impact, crossover technologies and gambling practices
At this juncture it is instructive to review convergent media forms as a starting point for a wider debate about the pervasiveness of games technologies and gambling practices. In particular, the aim is to focus upon convergences between gambling and gaming and, while highlighting the advantages, to examine some of the potential concerns that may arise. It is notable that gaming is becoming a powerful and popular media form, to the extent that some games are being considered as interfaces to a wide range of digital and multimedia content.
Factors influencing visual attention switch in multi-display user interfaces: a survey
Multi-display User Interfaces (MDUIs) enable people to take advantage of the different characteristics of different display categories. For example, combining mobile and large displays within the same system enables users to interact with user interface elements locally while simultaneously having a large display space to show data. Although there is a large potential gain in performance and comfort, there is at least one main drawback that can override the benefits of MDUIs: the visual and physical separation between displays requires that users perform visual attention switches between displays. In this paper, we present a survey and analysis of existing data and classifications to identify factors that can affect visual attention switch in MDUIs. Our analysis and taxonomy bring attention to the often-ignored implications of visual attention switch and collect existing evidence to facilitate research and implementation of effective MDUIs.
Comparative Analysis of Mobile 3D Scanning Technologies for Design, Manufacture of Interior and Exterior Tensile Material Structures and Canvasman Ltd. Case Study
This report investigates mobile 3D scanning technologies to improve the efficiency of 3D data capture within Canvasman's CAD design and manufacturing processes, with a focus on accurate resolution. The Santander-funded Collaborative Venture Fund (CVF) project has provided research, survey data, evaluation and analysis for Canvasman Ltd. on portable 3D scanning hardware and software. The solutions recommended in this report offer impartial product information on current, appropriate 3D scanning technology that could potentially improve the efficiency of data capture, design and manufacture of interior and exterior spaces, boats, vehicles and other similar constructions for creating and installing flexible coverings and indoor and outdoor structures.
The threshold of the real: A site for participatory resistance in Blast Theory's Uncle Roy all around you (2003)
This article examines the collision of virtual and real spaces through simultaneous live and online play in Uncle Roy All Around You, and how this disruption of immersion is used to expose the habitual engagements associated with the digital interface. The nature of the participants' immersion and their subsequent reintegration into the real will be explored, before attempting to articulate what defines this piece as politically resistant, through a discussion of self-reflexive participation, which undermines what Baudrillard terms the 'simulated response' (Baudrillard 1985/1988, p. 216).
It’s not the model that doesn’t fit, it’s the controller! The role of cognitive skills in understanding the links between natural mapping, performance, and enjoyment of console video games
This study examines differences in performance, frustration, and game ratings of individuals playing first-person shooter video games using two different controllers (a motion controller and a traditional push-button controller) in a within-subjects, randomized-order design. Structural equation modeling was used to demonstrate that cognitive skills such as mental rotation ability and eye/hand coordination predicted performance for both controllers, but the motion controller was significantly more frustrating. Moreover, increased performance was related to game ratings only for the traditional controller input. We interpret these data as evidence that, contrary to the assumption that motion-controlled interfaces are more naturally mapped than traditional push-button controllers, the traditional controller was more naturally mapped as an interface for gameplay.
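The analysis strategy, a path model linking cognitive skills, performance, frustration, and ratings, could be specified along these lines. This is a hypothetical sketch using the semopy package with lavaan-style syntax; the variable names, paths, and dataset are stand-ins for the study's actual measures and model.

```python
import pandas as pd
from semopy import Model

# Variable names below are assumptions, not the study's actual measures.
spec = """
performance ~ mental_rotation + eye_hand_coordination
frustration ~ performance + motion_controller
game_rating ~ performance
"""

data = pd.read_csv("controller_study.csv")  # hypothetical dataset
model = Model(spec)
model.fit(data)
print(model.inspect())  # path coefficients with standard errors and p-values
```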
Reflectance Hashing for Material Recognition
We introduce a novel method for using reflectance to identify materials.
Reflectance offers a unique signature of the material but is challenging to
measure and use for recognizing materials due to its high dimensionality. In
this work, one-shot reflectance is captured using a unique optical camera
measuring reflectance disks, where the pixel coordinates correspond to
surface viewing angles. The reflectance has class-specific structure, and angular
gradients computed in this reflectance space reveal the material class.
These reflectance disks encode discriminative information for efficient and
accurate material recognition. We introduce a framework called reflectance
hashing that models the reflectance disks with dictionary learning and binary
hashing. We demonstrate the effectiveness of reflectance hashing for material
recognition with a number of real-world materials.
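The pipeline the abstract describes, sparse coding over a learned dictionary followed by binary hashing, might look roughly like this. The hashing step shown (random-hyperplane LSH over the sparse codes) is a generic stand-in rather than the paper's exact formulation, and the data shapes are invented.

```python
import numpy as np
from sklearn.decomposition import MiniBatchDictionaryLearning

rng = np.random.default_rng(0)

# Stand-in data: each row is a flattened "reflectance disk".
X_train = rng.standard_normal((500, 1024))
X_query = rng.standard_normal((10, 1024))

# 1. Dictionary learning: sparse codes over a learned basis.
dico = MiniBatchDictionaryLearning(n_components=128, alpha=1.0, random_state=0)
codes_train = dico.fit(X_train).transform(X_train)
codes_query = dico.transform(X_query)

# 2. Binary hashing: sign of random hyperplane projections of the codes.
H = rng.standard_normal((128, 64))          # 64-bit codes
bits_train = codes_train @ H > 0
bits_query = codes_query @ H > 0

# 3. Recognition: nearest neighbor in Hamming space.
hamming = (bits_query[:, None, :] != bits_train[None, :, :]).sum(axis=2)
nearest = hamming.argmin(axis=1)            # best-matching training disk per query
```

Matching in Hamming space is what keeps recognition efficient: comparing 64-bit codes is essentially a popcount rather than a dense high-dimensional distance computation.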
PainDroid: An android-based virtual reality application for pain assessment
Earlier studies in the field of pain research suggest that few efficient interventions currently exist in response to the exponential increase in the prevalence of pain. In this paper, we present an Android application (PainDroid) with multimodal functionality that could be enhanced with Virtual Reality (VR) technology, designed for the purpose of improving the assessment of this notoriously difficult medical concern. PainDroid has been evaluated for its usability and acceptability with a pilot group of potential users and clinicians, with initial results suggesting that it can be an effective and usable tool for improving the assessment of pain. Participant experiences indicated that the application was easy to use, and its potential was similarly appreciated by the clinicians involved in the evaluation. Our findings may be of considerable interest to healthcare providers, policy makers, and other parties actively involved in the area of pain and VR research.
Dynamic Illumination for Augmented Reality with Real-Time Interaction
Current augmented and mixed reality systems suffer from a lack of correct illumination modeling, in which virtual objects are rendered under the same lighting conditions as the real environment. While the entertainment industry achieves astonishing results in multiple media forms, the procedure is mostly accomplished offline. In our approach, the illumination information extracted from the physical scene is used to interactively render the virtual objects, producing more realistic output in real time. In this paper, we present a method that detects the physical illumination of a dynamic scene and then uses the extracted illumination to render the virtual objects added to the scene. The method has three steps that are assumed to work concurrently in real time. The first is the estimation of direct illumination (incident light) from the physical scene, using computer vision techniques on a 360° live-feed camera connected to the AR device. The second is the simulation of indirect illumination (light reflected from real-world surfaces onto virtual objects), using region capture of 2D texture from the AR camera view. The third is rendering the virtual objects with proper lighting and shadowing characteristics, using shader language in multiple passes. Finally, we tested our work under multiple lighting conditions, evaluating accuracy by checking that the shadows cast by virtual objects are consistent with those cast by real objects, at a reduced performance cost.
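As a concrete illustration of the first step, here is one plausible way to pull a dominant light direction out of an equirectangular 360° frame. The single-light assumption, blur kernel, and coordinate convention are our choices, not necessarily the paper's.

```python
import cv2
import numpy as np

def dominant_light_direction(equirect_bgr):
    """Return a unit vector toward the brightest blob in a 360-degree frame."""
    gray = cv2.cvtColor(equirect_bgr, cv2.COLOR_BGR2GRAY).astype(np.float32)
    smooth = cv2.GaussianBlur(gray, (31, 31), 0)     # suppress pixel noise
    y, x = np.unravel_index(np.argmax(smooth), smooth.shape)
    h, w = smooth.shape
    # Equirectangular pixel -> spherical angles -> Cartesian direction.
    theta = (x / w) * 2.0 * np.pi - np.pi            # azimuth in [-pi, pi]
    phi = (0.5 - y / h) * np.pi                      # elevation in [-pi/2, pi/2]
    return np.array([np.cos(phi) * np.sin(theta),
                     np.sin(phi),
                     np.cos(phi) * np.cos(theta)])
```

The returned vector could then drive a directional light in the renderer, so that shadows cast by virtual objects fall in the same direction as those cast by real ones, which is exactly the consistency criterion the evaluation checks.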