Exploration of multiple pathways for the development of immersive virtual reality environments
The focus of this thesis is the study and recommendation of optimal techniques for developing immersive virtual environments for generic applications. The overarching objective is to ensure that virtual environments can be created and deployed, rapidly and accurately, using commercial off-the-shelf software. Specific subjective and objective criteria have been employed to determine trade-offs between multiple pathways for designing such environments, and specific recommendations are made for the applicability of each. The efficacy of the techniques developed in this research has been demonstrated by applying them to three widely differing areas: visualizing arbitrary 2D surface data, synthesizing particle aggregate models from computed tomography, and simulating NASA rocket engine test stands.
The objectives of this thesis were met by examining the algorithms and software currently used to develop virtual environments. From these, general methods were defined. Expanding these general methods to cover the inputs and situations of common applications allowed methods for real-world examples to be developed. Results were obtained by evaluating these methods against defined measurement criteria, which gauged how effectively each method increased the value of virtual reality while reducing its cost.
In this thesis, two virtual environment platforms (vGeo® and Vizard®) were used to develop three applications: a surface plot, particle visualizations, and test stand simulations. In most cases, the results found the open-ended Vizard® to be the better platform. vGeo®, a platform designed for data visualization, worked well for basic data visualization but was not as effective as Vizard® for developing more complex visualizations. This thesis found that, in most cases, an open-ended development platform with functionality for rapid development is ideal. These methods and evaluations can be applied to a more diverse set of applications and datasets to build development platforms that are even more efficient.
You're the Camera! Mapping Physical Movements to Transitioning Between Environments in Virtual Reality
Virtual reality gives users the ability to have amazing and unusual experiences in an immersive environment. As new media are developed, designers tend to remediate design aspects from the media that came before: video games remediated techniques from cinema, and cinema remediated techniques from plays and musicals. However, not everything remediated from past digital media is appropriate for virtual reality, so several areas of virtual reality design warrant scientific investigation. The research question asked in this thesis specifically addresses transitioning between environments: when transitioning in a virtual world, will camera movements made simultaneously with the user's physical movements produce a more preferred scene-transition experience than virtual camera movement alone? Multiple research studies have shown that physical movement in a virtual environment supports a strong sense of immersion and presence. This thesis uses a within-subject experimental design in which participants were tasked with transitioning between two different environments: once using physical motion to drive the transition directly, and once using physical movement to pass a predetermined threshold and allowing the game to finish the transition for them. Results showed an overwhelming preference for the Non-Direct version of the thesis game in nearly every regard.
M.S., Digital Media -- Drexel University, 201
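The two transition conditions compared in the study can be sketched as a simple progress function. The function name and the exact hand-over behaviour are illustrative assumptions, not details taken from the thesis:

```python
def transition_progress(displacement: float, threshold: float, direct: bool) -> float:
    """Fraction of the scene transition completed, given how far the user
    has physically moved (a hypothetical model of the two conditions).

    direct=True  : the user's own movement drives the camera all the way.
    direct=False : the "Non-Direct" condition; the user only has to cross
                   `threshold`, and the game then completes the transition
                   on their behalf.
    """
    frac = max(0.0, min(displacement / threshold, 1.0))
    if direct:
        # Progress tracks the user's physical movement one-to-one.
        return frac
    # Non-Direct: nothing visible happens until the threshold is crossed,
    # after which the engine finishes the transition in one step.
    return 1.0 if frac >= 1.0 else 0.0
```

Under this model, the Non-Direct condition decouples the camera from the user's body as soon as the threshold is passed, which is one plausible reading of why it was preferred.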
Computational immersive displays
Thesis (S.M.)--Massachusetts Institute of Technology, School of Architecture and Planning, Program in Media Arts and Sciences, 2013. Cataloged from PDF version of thesis. Includes bibliographical references (p. 77-79).
Immersion is an oft-quoted but ill-defined term used to describe a viewer or participant's sense of engagement with a visual display system or participatory media. Traditionally, advances in immersive quality came at the high price of ever-escalating hardware requirements and computational budgets. But what if one could increase a participant's sense of immersion, instead, by taking advantage of perceptual cues, neuroprocessing, and emotional engagement while adding only a small, yet distinctly targeted, set of advancements to the display hardware? This thesis describes three systems that introduce small amounts of computation to the visual display of information in order to increase the viewer's sense of immersion and participation. It also describes the types of content used to evaluate the systems, as well as the results and conclusions gained from small user studies. The first system, Infinity-by-Nine, takes advantage of the drop-off in peripheral visual acuity to surround the viewer with an extended lightfield generated in real time from existing video content. The system analyzes an input video stream and outpaints a low-resolution, pattern-matched lightfield that simulates a fully immersive environment in a computationally efficient way. The second system, the Narratarium, is a context-aware projector that applies pattern recognition and natural language processing to an input such as an audio stream or electronic text to generate images, colors, and textures appropriate to the narrative or emotional content. The system outputs interactive illustrations and audio projected into spaces such as children's rooms, retail settings, or entertainment venues.
The final system, the 3D Telepresence Chair, combines a 19th-century stage illusion known as Pepper's Ghost with an array of micro projectors and a holographic diffuser to create an autostereoscopic representation of a remote subject with full horizontal parallax. The 3D Telepresence Chair is a portable, self-contained apparatus meant to enhance the experience of teleconferencing.
by Daniel E. Novy. S.M.
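The peripheral extension that Infinity-by-Nine performs can be crudely sketched as edge replication followed by coarse block averaging: peripheral vision has low acuity, so the surround only needs to match colour and coarse texture. This is an illustrative stand-in under that assumption, not the pattern-matched lightfield algorithm itself:

```python
import numpy as np

def outpaint_periphery(frame: np.ndarray, pad: int, block: int = 8) -> np.ndarray:
    """Surround `frame` (H x W x 3, uint8) with a coarse, low-resolution
    extrapolation of its border colours, leaving the original frame
    untouched in the centre of the returned canvas."""
    h, w, _ = frame.shape
    # Edge-replicate the frame into the padded canvas...
    out = np.pad(frame, ((pad, pad), (pad, pad), (0, 0)), mode="edge")
    # ...then quantise the periphery into coarse blocks so it reads as a
    # low-resolution surround rather than smeared streaks.
    mask = np.ones(out.shape[:2], dtype=bool)
    mask[pad:pad + h, pad:pad + w] = False  # protect the original frame
    coarse = out.copy()
    for y in range(0, out.shape[0], block):
        for x in range(0, out.shape[1], block):
            tile = coarse[y:y + block, x:x + block]
            tile[...] = tile.mean(axis=(0, 1), keepdims=True).astype(out.dtype)
    out[mask] = coarse[mask]  # apply the coarse version only outside the frame
    return out
```

A real implementation would update the surround per video frame and match patterns from the content rather than averaging, but the shape of the computation (cheap, low-resolution, periphery-only) is the point.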
Phobi: Combating stage fright through a virtual reality game
Stage fright is often considered to be the most common fear in the Western world. Therapeutic treatments that make use of virtual reality have been successful at alleviating stage fright, because virtual reality can simulate an artificial environment that allows users to feel as if they are on a stage. However, many of these treatments are expensive and not easily accessible to the average person. Virtual reality games, on the other hand, are easier to download while also helping their players practice and motivating them to overcome their obstacles. Using this approach, I design a virtual reality game titled Phobi that is aimed at alleviating stage fright and improving its players' public speaking skills. Phobi makes use of design guidelines established over the course of this research. Finally, Phobi is tested with a set of participants, and its advantages as well as its limitations are discerned.
Technological framework for ubiquitous interactions using context-aware mobile devices
This report presents the research and development of a dedicated system architecture designed to enable its users to interact with each other as well as to access information on Points of Interest that exist in their immediate environment. This is accomplished by managing personal preferences and contextual information in a distributed manner and in real time. The advantage of this system architecture is that it uses mobile devices, heterogeneous sensors and a selection of user interface paradigms to produce a sociotechnical framework that enhances perception of the environment and promotes intuitive interactions. The thrust of the work has been on software development and component integration. Iterative prototyping was adopted as the development method in order to incorporate the users' feedback effectively and establish a platform for collaboration that closely meets their requirements and aids their decision-making process. Requirement acquisition was followed by a system-modelling phase in order to produce a robust software prototype. The implementation includes component-based development and extensive use of design patterns over native programming. Ultimately, the software product has become the means to evaluate differences in the use of mixed reality technologies in a ubiquitous scenario.
The prototype can query a number of context sources, such as sensors or details of the personal profile, to acquire relevant data. The data (and metadata) are stored in open-source structures, so that they are accessible at every layer of the system architecture and at any time. By proactively processing the acquired context, the system can assist users in their tasks (e.g. navigation) without explicit input, e.g. by simply making a gesture with the device. However, more advanced interaction with the application via the user interface is available for more complex requests.
Representations of real-world objects, their spatial relations and other captured features of interest are visualised on scalable interfaces, ranging from 2D to 3D models and from photorealism to stylised cues and symbols. Two principal modes of operation have been implemented: the first uses geo-referenced virtual reality models of the environment, updated in real time; the second overlays descriptive annotations and graphics on video images of the surroundings, captured by a video camera. The latter is referred to as augmented reality.
The continuous feed of device position and orientation data, from the GPS receiver and the digital compass, into the application makes the framework fit for use in unknown environments and therefore suitable for ubiquitous operation. This is one of the novelties of the proposed framework, because it enables a whole range of social, peer-to-peer interactions to take place. Scenarios of how the system could be employed to pursue these remote interactions and collaborative efforts on mobile devices are addressed in the context of urban navigation. The conceptual design and implementation of the novel location- and orientation-based algorithm for mobile AR are presented in detail. The system is, however, multifaceted and capable of supporting peer-to-peer exchange of information in a pervasive fashion, usable in various contexts. The modalities of these interactions are explored and laid out in several scenarios, particularly in the context of user adoption. Two evaluation tasks took place: a preliminary evaluation examined certain aspects that influence user interaction while immersed in a virtual environment, and a second, summative evaluation compared the utility and certain usability aspects of the AR and VR interfaces.
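The overlay step described above (deciding where a Point of Interest should appear on the camera image, given the device's GPS position and compass heading) can be sketched as a flat-earth approximation. The function name, parameters and linear field-of-view mapping are illustrative assumptions, not the thesis's actual algorithm:

```python
import math

def poi_screen_x(dev_lat, dev_lon, heading_deg, poi_lat, poi_lon,
                 fov_deg=60.0, screen_w=800):
    """Map a Point of Interest to a horizontal pixel column on the camera
    image. Returns None when the POI lies outside the camera's field of view.
    """
    # Bearing from device to POI (degrees clockwise from north); the
    # cos(latitude) factor corrects longitude spacing at this latitude.
    d_east = (poi_lon - dev_lon) * math.cos(math.radians(dev_lat))
    d_north = poi_lat - dev_lat
    bearing = math.degrees(math.atan2(d_east, d_north)) % 360.0
    # Angular offset relative to where the camera points, folded into (-180, 180].
    offset = (bearing - heading_deg + 180.0) % 360.0 - 180.0
    if abs(offset) > fov_deg / 2:
        return None  # behind or beside the user: no annotation drawn
    # Linear mapping of the angular offset across the image width.
    return int((offset / fov_deg + 0.5) * screen_w)
```

A production system would use a proper map projection and the camera's intrinsic calibration, but this captures the core of the location- and orientation-based overlay: bearing minus heading, clipped to the field of view.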