Interactive Narrative in Virtual Reality
Interactive fiction is a literary genre that is rapidly gaining popularity.
In this genre, readers are able to explicitly take actions in order to guide
the course of the story. With the recent popularity of narrative-focused games,
we propose to design and develop an interactive narrative tool for content
creators. In this extended abstract, we show how we leverage this interactive
medium to present a tool for interactive storytelling in virtual reality. Using
a simple markup language, content creators and researchers are now able to
create interactive narratives in a virtual reality environment. We further
discuss the potential future directions for a virtual reality storytelling
engine.
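The abstract mentions "a simple markup language" for authoring but gives no details. Purely as an illustration, a hypothetical branching-story structure and a tiny traversal function (all names invented here, not the authors' actual language) might look like:

```python
# Hypothetical branching-narrative data and interpreter. This is an
# illustrative stand-in for an authoring markup, NOT the paper's format.
story = {
    "start": {
        "text": "You wake in a dim virtual chamber.",
        "choices": {"open the door": "hall", "stay put": "end"},
    },
    "hall": {
        "text": "A long hall stretches ahead.",
        "choices": {"walk on": "end"},
    },
    "end": {"text": "The story ends.", "choices": {}},
}

def step(node_id, choice=None):
    """Return the next node id after taking `choice` from `node_id`."""
    node = story[node_id]
    if choice is None or choice not in node["choices"]:
        return node_id  # invalid or missing choice: stay at current node
    return node["choices"][choice]

node = step("start", "open the door")  # -> "hall"
```

A VR front end would render `text` and map reader actions (gaze, gestures) onto the `choices` keys; the data stays engine-agnostic.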
Handheld Augmented Reality in education
In this thesis we conduct research in Augmented Reality (AR) aimed at learning environments, where interaction with students is carried out using handheld devices. Through three studies we explore the learning outcomes that can be obtained using handheld AR in a game that we developed for children. We explored the influence of AR in Virtual Reality Learning Environments (VRLE) and the advantages it can offer, as well as its limits. We also tested the game on two different handheld devices (a smartphone and a Tablet PC) and present conclusions comparing them with respect to satisfaction and interaction. Finally, we compare tactile and tangible user interfaces in AR applications for children from a Human-Computer Interaction perspective.

González Gancedo, S. (2012). Handheld Augmented Reality in education. http://hdl.handle.net/10251/17973
A Utility Framework for Selecting Immersive Interactive Capability and Technology for Virtual Laboratories
There has been an increase in the use of virtual reality (VR) technology in the education community, since VR is emerging as a potent educational tool that offers students a rich source of educational material and makes learning exciting and interactive. With the rise in popularity and market expansion of VR technology in the past few years, a variety of consumer VR electronics have boosted educators' and researchers' interest in using these devices for practicing engineering and science laboratory experiments. However, little is known about how well such devices are suited for active learning in a laboratory environment. This research aims to address this gap by formulating a utility framework to help educators and decision-makers efficiently select the type of VR device that matches the design and capability requirements of their virtual laboratory blueprint. Furthermore, a framework use case is demonstrated by not only surveying five types of VR devices, ranging from low-immersive to fully immersive, along with their capabilities (i.e., hardware specifications, cost, and availability), but also considering the interaction techniques of each VR device based on the desired laboratory task. To validate the framework, a research study is carried out to compare these five VR devices and investigate which device provides the overall best fit for the 3D virtual laboratory content that we implemented, based on interaction level, usability, and performance effectiveness.
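The abstract does not detail how the utility framework scores devices. A common way to realize such a framework is a weighted-sum utility over normalized criteria; the sketch below uses invented criteria, weights, and device scores purely for illustration:

```python
# Illustrative weighted-utility scoring for VR device selection.
# Criteria names, weights, and scores are assumptions for this sketch,
# not values from the paper.
def utility(device_scores, weights):
    """Weighted sum of per-criterion scores, each normalized to 0..1."""
    return sum(weights[c] * device_scores[c] for c in weights)

weights = {"immersion": 0.4, "cost": 0.3, "availability": 0.3}
devices = {
    "cardboard_viewer": {"immersion": 0.2, "cost": 0.9, "availability": 0.9},
    "tethered_hmd":     {"immersion": 0.9, "cost": 0.3, "availability": 0.5},
}

# Pick the device whose weighted utility is highest for this blueprint.
best = max(devices, key=lambda d: utility(devices[d], weights))
```

In practice an educator would adjust the weights to reflect the laboratory task (e.g., weighting immersion higher for spatial manipulation tasks) and re-rank the candidate devices.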
Computational interaction techniques for 3D selection, manipulation and navigation in immersive VR
3D interaction provides a natural interplay for HCI. Many techniques involving diverse sets of hardware and software components have been proposed, which has generated an explosion of Interaction Techniques (ITes), Interactive Tasks (ITas) and input devices, thus increasing the heterogeneity of tools in 3D User Interfaces (3DUIs). Moreover, most of those techniques are based on general formulations that fail to fully exploit human capabilities for interaction. This is because while 3D interaction enables naturalness, it also produces complexity and limitations when using 3DUIs.
In this thesis, we aim to generate approaches that better exploit human capabilities for interaction by combining human factors, mathematical formalizations and computational methods. Our approach focuses on exploring the close coupling between specific ITes and ITas while addressing common issues of 3D interaction.
We specifically focused on the stages of interaction within Basic Interaction Tasks (BITas), i.e., data input, manipulation, navigation and selection. Common limitations of these tasks are: (1) the complexity of mapping generation for input devices, (2) fatigue in mid-air object manipulation, (3) space constraints in VR navigation; and (4) low accuracy in 3D mid-air selection.
Along with two chapters of introduction and background, this thesis presents five main works. Chapter 3 focuses on the design of mid-air gesture mappings based on human tacit knowledge. Chapter 4 presents a solution to address user fatigue in mid-air object manipulation. Chapter 5 focuses on addressing space limitations in VR navigation. Chapter 6 describes an analysis and a correction method to address drift effects involved in scale-adaptive VR navigation; and Chapter 7 presents a hybrid 3D/2D technique that allows for precise selection of virtual objects in highly dense environments (e.g., point clouds). Finally, we conclude by discussing how the contributions obtained from this exploration provide techniques and guidelines to design more natural 3DUIs.
A new method for interacting with multi-window applications on large, high resolution displays
Physically large display walls can now be constructed using off-the-shelf computer hardware. The high resolution
of these displays (e.g., 50 million pixels) means that a large quantity of data can be presented to users, so the
displays are well suited to visualization applications. However, current methods of interacting with display walls
are somewhat time consuming. We have analyzed how users solve real visualization problems using three desktop
applications (XmdvTool, IRIS Explorer and ArcView), and used a new taxonomy to classify users' actions and
illustrate the deficiencies of current display wall interaction methods. Following this, we designed a novel method for interacting with display walls, which aims to let users interact as quickly as when a visualization application is used on a desktop system. Informal feedback gathered from our working prototype shows that interaction is both fast and fluid.
Enhanced Virtuality: Increasing the Usability and Productivity of Virtual Environments
With steadily increasing display resolution, more accurate tracking, and falling prices, Virtual Reality (VR) systems are on the verge of establishing themselves successfully in the market. Various tools help developers create complex multi-user interactions within adaptive virtual environments. However, the spread of VR systems also brings additional challenges: diverse input devices with unfamiliar shapes and button layouts hinder intuitive interaction. Moreover, the limited feature set of existing software forces users to fall back on conventional PC- or touch-based systems. Collaborating with other users at the same location poses challenges regarding the calibration of different tracking systems and collision avoidance. In remote collaboration, interaction is further affected by latency and connection losses. Finally, users have different requirements for the visualization of content, e.g. size, orientation, color, or contrast, within the virtual worlds. Strictly replicating real environments in VR wastes potential and cannot accommodate users' individual needs.
To address these problems, this work presents solutions in the areas of input, collaboration, and augmentation of virtual worlds and users, aimed at increasing the usability and productivity of VR. First, PC-based hardware and software are transferred into the virtual world to preserve the familiarity and feature set of existing applications in VR. Virtual proxies of physical devices, e.g. keyboard and tablet, and a VR mode for applications allow the user to carry real-world skills into the virtual world. Furthermore, an algorithm is presented that enables the calibration of multiple co-located VR devices with high accuracy, low hardware requirements, and little effort. Since VR headsets block out the user's real surroundings, the relevance of a full-body avatar visualization for collision avoidance and remote collaboration is demonstrated. In addition, personalized spatial or temporal modifications are presented that increase users' usability, work performance, and social presence. Discrepancies between the virtual worlds arising from personal adaptations are compensated by avatar redirection methods. Finally, some of the methods and findings are integrated into an exemplary application to demonstrate their practical applicability.
This work shows that virtual environments can build on real-world skills and experiences to ensure familiar and easy interaction and collaboration among users. Moreover, individual augmentations of virtual content and avatars make it possible to overcome real-world limitations and enhance the experience of VR environments.
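The calibration of multiple co-located tracking systems mentioned above is commonly solved by rigid point-set alignment between corresponding tracked positions (e.g., one marker observed by both systems). The sketch below uses the standard Kabsch/SVD method; the thesis's own algorithm may differ:

```python
import numpy as np

# Rigid alignment between two tracking coordinate systems from N
# corresponding 3D points (rows of `src` and `dst`). Standard Kabsch
# method; illustrative only, not necessarily the thesis's algorithm.
def align(src, dst):
    """Return rotation R and translation t with dst ~= src @ R.T + t."""
    src_c = src - src.mean(axis=0)          # center both point clouds
    dst_c = dst - dst.mean(axis=0)
    U, _, Vt = np.linalg.svd(src_c.T @ dst_c)
    d = np.sign(np.linalg.det(Vt.T @ U.T))  # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = dst.mean(axis=0) - R @ src.mean(axis=0)
    return R, t
```

Once `R` and `t` are estimated, every pose reported by one tracking system can be mapped into the other system's frame, letting co-located users share a consistent virtual space.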
Breaking the Screen: Interaction Across Touchscreen Boundaries in Virtual Reality for Mobile Knowledge Workers.
Virtual Reality (VR) has the potential to transform knowledge work. One
advantage of VR knowledge work is that it allows extending 2D displays into the
third dimension, enabling new operations, such as selecting overlapping objects
or displaying additional layers of information. On the other hand, mobile
knowledge workers often work on established mobile devices, such as tablets,
limiting interaction with those devices to a small input space. This challenge
of a constrained input space is intensified in situations when VR knowledge
work is situated in cramped environments, such as airplanes and touchdown
spaces.
In this paper, we investigate the feasibility of interacting jointly between
an immersive VR head-mounted display and a tablet within the context of
knowledge work. Specifically, we 1) design, implement and study how to interact
with information that reaches beyond a single physical touchscreen in VR; 2)
design and evaluate a set of interaction concepts; and 3) build example
applications and gather user feedback on those applications.

Comment: 10 pages, 8 figures, ISMAR 202
Integrating Olfaction in a Robotic Telepresence Loop
In this work we propose enhancing a typical robotic telepresence architecture by considering olfactory and wind-flow information in addition to the common audio and video channels. The objective is to expand the range of applications where robotic telepresence can be applied, including those related to the detection of volatile chemical substances (e.g. land-mine detection, explosive deactivation, operations in noxious environments, etc.). Concretely, we analyze how the sense of smell can be integrated into the telepresence loop, covering the digitization of the gases and wind flow present in the remote environment, the transmission through the communication network, and their display at the user's location. Experiments under different environmental conditions are presented to validate the proposed telepresence system when localizing a gas emission leak in the remote environment.

Universidad de Málaga. Campus de Excelencia Internacional Andalucía Tech
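The digitization-transmission-display loop described above can be illustrated with a minimal sketch of the olfactory channel's message format; the field names below are assumptions for illustration, not the paper's actual protocol:

```python
import json
import time

# Minimal sketch of the olfactory side-channel in a telepresence loop:
# the remote robot samples gas concentration and wind, serializes one
# reading, and the operator side decodes it for display alongside the
# audio/video streams. Field names are illustrative assumptions.
def encode_reading(gas_ppm, wind_speed_ms, wind_dir_deg):
    """Serialize one remote sensor sample for network transmission."""
    return json.dumps({
        "t": time.time(),                 # sample timestamp (seconds)
        "gas_ppm": gas_ppm,               # e-nose concentration estimate
        "wind_speed_ms": wind_speed_ms,   # anemometer speed, m/s
        "wind_dir_deg": wind_dir_deg,     # wind direction, degrees
    })

def decode_reading(msg):
    """Decode a transmitted sample at the operator's station."""
    return json.loads(msg)

msg = encode_reading(12.5, 0.8, 270.0)
reading = decode_reading(msg)  # reading["gas_ppm"] == 12.5
```

On the operator side, successive readings combined with the wind vector can drive the display, e.g., rendering a concentration gauge or an on-screen wind arrow to help the user steer the robot toward the leak.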