Emotional Qualities of VR Space
The emotional response a person has to a living space is predominantly
affected by light, color and texture as space-making elements. In order to
verify whether this phenomenon could be replicated in a simulated environment,
we conducted a user study in a six-sided projected immersive display that
utilized equivalent design attributes of brightness, color and texture in order
to assess to which extent the emotional response in a simulated environment is
affected by the same parameters affecting real environments. Since emotional
response depends upon the context, we evaluated the emotional responses of two
groups of users: inactive (passive) and active (performing a typical daily
activity). The results from the perceptual study generated data from which
design principles for a virtual living space are articulated. Such a space, as
an alternative to expensive built dwellings, could potentially support the new,
minimalist lifestyles of occupants, defined as neo-nomads, whose work is rooted
in the digital domain, by generating emotional experiences of spaces. Data from
the experiments confirmed the hypothesis that
perceivable emotional aspects of real-world spaces could be successfully
generated through simulation of design attributes in the virtual space. The
subjective response to the virtual space was consistent with corresponding
responses from real-world color and brightness emotional perception. Our data
could serve the virtual reality (VR) community in its attempt to conceive of
further applications of virtual spaces for well-defined activities.
Cue combination for 3D location judgements
Cue combination rules have often been applied to the perception of surface shape but not to judgements of object location. Here, we used immersive virtual reality to explore the relationship between different cues to distance. Participants viewed a virtual scene and judged the change in distance of an object presented in two intervals, where the scene changed in size between intervals (by a factor of between 0.25 and 4). We measured thresholds for detecting a change in object distance when there were only 'physical' (stereo and motion parallax) or 'texture-based' cues (independent of the scale of the scene) and used these to predict biases in a distance matching task. Under a range of conditions, in which the viewing distance and the position of the target relative to other objects were varied, the ratio of 'physical' to 'texture-based' thresholds was a good predictor of biases in the distance matching task. The cue combination approach, which successfully accounts for our data, relies on quite different principles from those underlying geometric reconstruction.
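The standard cue-combination rule the abstract appeals to can be sketched numerically. This is a generic reliability-weighted model (weights inversely proportional to squared discrimination thresholds), not the authors' exact formulation; the estimates and thresholds below are illustrative values.

```python
import numpy as np

def combine_cues(estimates, thresholds):
    """Combine distance estimates, weighting each cue by 1/threshold^2.

    A lower discrimination threshold means a more reliable cue, so it
    receives proportionally more weight in the combined estimate.
    """
    estimates = np.asarray(estimates, dtype=float)
    weights = 1.0 / np.asarray(thresholds, dtype=float) ** 2
    weights /= weights.sum()               # normalize weights to sum to 1
    return float(np.dot(weights, estimates))

# Example: 'physical' cues (stereo/motion parallax) suggest 2.0 m while
# 'texture-based' cues suggest 3.0 m; the physical cues are twice as precise.
combined = combine_cues([2.0, 3.0], [0.1, 0.2])  # → 2.2
```

Because the weights depend only on the ratio of the two thresholds, measuring that ratio is enough to predict the bias in the matching task, which is the logic the study exploits.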
GPU-based Image Analysis on Mobile Devices
With the rapid advances in mobile technology many mobile devices are capable
of capturing high quality images and video with their embedded camera. This
paper investigates techniques for real-time processing of the resulting images,
particularly on-device using a graphics processing unit. Issues and
limitations of image processing on mobile devices are discussed, and the
performance of graphics processing units on a range of devices is measured
through a programmable shader implementation of Canny edge detection.
Proceedings of Image and Vision Computing New Zealand 201
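The first stage of the Canny pipeline the paper ports to a shader is a Sobel gradient pass. A small CPU sketch of that stage (plain NumPy, not the paper's GPU code; the later non-maximum-suppression and hysteresis stages are omitted):

```python
import numpy as np

# Sobel kernels for horizontal and vertical gradients.
SOBEL_X = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], float)
SOBEL_Y = SOBEL_X.T

def gradient_magnitude(img):
    """Gradient-magnitude map, the input to Canny's edge thinning."""
    h, w = img.shape
    out = np.zeros((h, w))
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            patch = img[y - 1:y + 2, x - 1:x + 2]
            gx = (patch * SOBEL_X).sum()   # horizontal gradient
            gy = (patch * SOBEL_Y).sum()   # vertical gradient
            out[y, x] = np.hypot(gx, gy)
    return out

img = np.zeros((8, 8))
img[:, 4:] = 1.0                           # vertical step edge at column 4
mag = gradient_magnitude(img)
# response peaks along the edge and is zero in flat regions
```

In the shader version each pixel's inner loop body runs as an independent fragment, which is what makes the algorithm a good fit for a mobile GPU.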
Evaluation of optimisation techniques for multiscopic rendering
A thesis submitted to the University of Bedfordshire in fulfilment of the requirements for the degree of Master of Science by Research.
This project evaluates different performance optimisation techniques applied to stereoscopic and multiscopic rendering for interactive applications. The artefact
features a robust plug-in package for the Unity game engine. The thesis provides background information for the performance optimisations, outlines all the findings, evaluates the optimisations and provides suggestions for future work.
Scrum development methodology is used to develop the artefact and quantitative research methodology is used to evaluate the findings by measuring performance.
This project concludes that each performance optimisation has specific use-case scenarios in which it benefits performance. Foveated rendering provides the
greatest performance increase for both stereoscopic and multiscopic rendering, but is also more computationally intensive, as it requires an eye-tracking solution.
Dynamic resolution is very beneficial when overall frame-rate smoothness is needed and frame drops are present. Depth optimisation is beneficial for vast open environments but can lead to decreased performance if used inappropriately.
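A dynamic-resolution controller of the kind evaluated here can be sketched as a simple feedback loop. This is a hypothetical illustration (the budget, step size, and bounds are assumed values, not the thesis's parameters):

```python
def update_render_scale(scale, frame_ms, budget_ms=11.1,
                        step=0.05, lo=0.5, hi=1.0):
    """Return a new resolution scale factor based on the last frame time.

    budget_ms=11.1 corresponds to a 90 Hz VR target; scale multiplies
    the render-target width and height.
    """
    if frame_ms > budget_ms:               # frame drop: render fewer pixels
        scale -= step
    elif frame_ms < 0.9 * budget_ms:       # comfortable headroom: sharpen
        scale += step
    return min(hi, max(lo, scale))

scale = 1.0
for ms in (14.0, 13.0, 9.0):               # two slow frames, then a fast one
    scale = update_render_scale(scale, ms)
```

Trading resolution for stable frame times in this way is why the technique helps most when frame drops are present rather than when the application already meets its budget.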
Haptic Hybrid Prototyping (HHP): An AR Application for Texture Evaluation with Semantic Content in Product Design
The manufacture of prototypes is costly in both economic and temporal terms, and carrying it out requires accepting certain deviations from the final finishes. This article proposes Haptic Hybrid Prototyping (HHP), a haptic-visual product prototyping method created to help product design teams evaluate and select the semantic information conveyed between product and user through the texturing and ribs of a product in the early stages of conceptualization. To evaluate this tool, an experiment was conducted in which the haptic experience of interacting with final products was compared with that obtained through the HHP. The interviewees' answers coincided in both situations in 81% of the cases. It was concluded that the HHP reveals the semantic information transmitted through haptic-visual means between product and user, and quantifies the clarity with which this information is transmitted. This new tool therefore makes it possible to shorten both the manufacturing lead time of prototypes and the conceptualization phase of the product, providing information on the future success of the product in the market and its economic return.
A surgical system for automatic registration, stiffness mapping and dynamic image overlay
In this paper we develop a surgical system using the da Vinci research kit
(dVRK) that is capable of autonomously searching for tumors and dynamically
displaying the tumor location using augmented reality. Such a system has the
potential to quickly reveal the location and shape of tumors and visually
overlay that information to reduce the cognitive overload of the surgeon. We
believe that our approach is one of the first to incorporate state-of-the-art
methods in registration, force sensing and tumor localization into a unified
surgical system. First, the preoperative model is registered to the
intra-operative scene using a Bingham distribution-based filtering approach. An
active level set estimation is then used to find the location and the shape of
the tumors. We use a recently developed miniature force sensor to perform the
palpation. The estimated stiffness map is then dynamically overlaid onto the
registered preoperative model of the organ. We demonstrate the efficacy of our
system by performing experiments on phantom prostate models with embedded stiff
inclusions.
International Symposium on Medical Robotics (ISMR 2018)
De/construction sites: Romans and the digital playground
The Roman world as attested to archaeologically and as interacted with today has its expression in a great many computational and other media. The place of visualisation within this has been paramount. This paper argues that the process of digitally constructing the Roman world and the exploration of the resultant models are useful methods for interpretation and influential factors in the creation of a popular Roman aesthetic. Furthermore, it suggests ways in which novel computational techniques enable the systematic deconstruction of such models, in turn re-purposing the many extant representations of Roman architecture and material culture
Dynamic Illumination for Augmented Reality with Real-Time Interaction
Current augmented and mixed reality systems suffer from a lack of correct illumination modeling, in which virtual objects are rendered under the same lighting conditions as the real environment. While the entertainment industry achieves astonishing results across multiple media forms, the procedure is mostly accomplished offline. The illumination information extracted from the physical scene is used to interactively render the virtual objects, which results in more realistic output in real time. In this paper, we present a method that detects the physical illumination of a dynamic scene and then uses the extracted illumination to render the virtual objects added to the scene. The method has three steps that are assumed to work concurrently in real time. The first is the estimation of direct illumination (incident light) from the physical scene using computer vision techniques through a 360° live-feed camera connected to the AR device. The second is the simulation of indirect illumination (reflected light) from real-world surfaces onto the rendered virtual objects, using region capture of 2D texture from the AR camera view. The third is defining the virtual objects with proper lighting and shadowing characteristics using a shader language through multiple passes. Finally, we tested our work under multiple lighting conditions and evaluated the accuracy of the results based on the shadows cast by the virtual objects, which should be consistent with the shadows cast by the real objects, at a reduced performance cost.
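The final shading step can be illustrated with a minimal Lambertian model: once a dominant light direction has been estimated from the physical scene, virtual surfaces are lit with a diffuse term so their brightness matches the real illumination. This is a generic sketch, not the paper's shader code; the albedo and ambient values are assumptions.

```python
import numpy as np

def lambert_shade(normals, light_dir, albedo=0.8, ambient=0.1):
    """Diffuse intensity per surface normal for an estimated light direction."""
    L = np.asarray(light_dir, float)
    L = L / np.linalg.norm(L)              # unit light vector
    # Clamp n·L at zero so surfaces facing away receive no direct light.
    n_dot_l = np.clip(np.asarray(normals, float) @ L, 0.0, 1.0)
    return np.clip(ambient + albedo * n_dot_l, 0.0, 1.0)

# A surface facing the light is bright; one facing away gets only ambient.
shade = lambert_shade([[0, 0, 1], [0, 0, -1]], light_dir=[0, 0, 1])
# → [0.9, 0.1]
```

In the actual system this computation runs per-fragment in a shader pass, with `light_dir` updated each frame from the 360° camera feed.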
Degraded reality: Using VR/AR to simulate visual impairments
The effects of eye disease cannot be depicted accurately using traditional media. Consequently, public understanding of eye disease is often poor. We present a VR/AR system for simulating common visual impairments, including disability glare, spatial distortions (metamorphopsia), the selective blurring and filling-in of information across the visual field, and color vision deficits. Unlike most existing simulators, the simulations are informed by patients' self-reported symptoms, can be quantitatively manipulated to provide custom disease profiles, and support gaze-contingent presentation (i.e., when using a VR/AR headset that contains eye-tracking technology, such as the Fove0). Such a simulator could be used as a teaching/empathy aid, or as a tool for evaluating the accessibility of new products and environments.
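Gaze-contingent degradation of the kind described can be sketched as a per-pixel attenuation that falls off with eccentricity from the tracked gaze point. This is a toy model (the Gaussian falloff and `sigma` are assumptions, not the authors' disease profiles):

```python
import numpy as np

def eccentricity_mask(shape, gaze, sigma=20.0):
    """Attenuation factor per pixel: 1 at the gaze point, falling off
    with distance, mimicking selective loss across the visual field."""
    ys, xs = np.indices(shape)
    dist2 = (ys - gaze[0]) ** 2 + (xs - gaze[1]) ** 2
    return np.exp(-dist2 / (2 * sigma ** 2))

# Centered gaze: full fidelity at fixation, strong attenuation peripherally.
mask = eccentricity_mask((64, 64), gaze=(32, 32))
```

A real simulator would re-evaluate `gaze` every frame from the headset's eye tracker and use the mask to drive blurring or filling-in rather than simple dimming.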