Spatial Interaction for Immersive Mixed-Reality Visualizations
Growing amounts of data, both in personal and professional settings, have caused an increased interest in data visualization and visual analytics.
Especially for inherently three-dimensional data, immersive technologies such as virtual and augmented reality and advanced, natural interaction techniques have been shown to facilitate data analysis.
Furthermore, in such use cases, the physical environment often plays an important role, both by directly influencing the data and by serving as context for the analysis.
Therefore, there has been a trend to bring data visualization into new, immersive environments and to make use of the physical surroundings, leading to a surge in mixed-reality visualization research.
One of the resulting challenges, however, is the design of user interaction for these often complex systems.
In my thesis, I address this challenge by investigating interaction for immersive mixed-reality visualizations regarding three core research questions:
1) What are promising types of immersive mixed-reality visualizations, and how can advanced interaction concepts be applied to them?
2) How does spatial interaction benefit these visualizations and how should such interactions be designed?
3) How can spatial interaction in these immersive environments be analyzed and evaluated?
To address the first question, I examine how various visualizations such as 3D node-link diagrams and volume visualizations can be adapted for immersive mixed-reality settings and how they stand to benefit from advanced interaction concepts.
For the second question, I study how spatial interaction in particular can help to explore data in mixed reality.
There, I look into spatial device interaction in comparison to touch input, the use of additional mobile devices as input controllers, and the potential of transparent interaction panels.
Finally, to address the third question, I present my research on how user interaction in immersive mixed-reality environments can be analyzed directly in the original, real-world locations, and how this can provide new insights.
Overall, with my research, I contribute interaction and visualization concepts, software prototypes, and findings from several user studies on how spatial interaction techniques can support the exploration of immersive mixed-reality visualizations.
Social Virtual Reality Platform Comparison and Evaluation Using a Guided Group Walkthrough Method
As virtual reality (VR) headsets become more commercially accessible, a range of social platforms have been developed that exploit the immersive nature of these systems. There is a growing interest in using these platforms in social and work contexts, but relatively little work examining the usability choices that have been made. We developed a usability inspection method based on cognitive walkthrough that we call guided group walkthrough. Guided group walkthrough is applied to existing social VR platforms by having a guide walk the participants through a series of abstract social tasks that are common across the platforms. Using this method, we compared six social VR platforms for the Oculus Quest. After constructing an appropriate task hierarchy and walkthrough question structure for social VR, we ran several groups of participants through the walkthrough process. We uncover usability challenges that are common across the platforms, identify specific design considerations, and comment on the utility of the walkthrough method in this situation.
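The method above pairs a hierarchy of abstract social tasks with a fixed set of walkthrough questions asked at each step. A minimal sketch of that structure is shown below; the task names and questions are illustrative assumptions, since the abstract does not list the actual hierarchy used in the study.

```python
from dataclasses import dataclass, field

@dataclass
class Task:
    """One abstract social task in the walkthrough hierarchy."""
    name: str
    subtasks: list = field(default_factory=list)

# Hypothetical task hierarchy; the real tasks from the study are not
# given in the abstract, so these are illustrative placeholders.
hierarchy = Task("Join a social session", [
    Task("Find a public room"),
    Task("Enter the room", [Task("Accept safety prompt")]),
    Task("Locate and greet another avatar"),
])

# Cognitive-walkthrough-style questions the guide poses at each step
# (assumed wording, in the spirit of the original method).
QUESTIONS = [
    "Will users know what to try to do at this step?",
    "Will users notice that the correct action is available?",
    "Will users associate the action with their goal?",
]

def flatten(task):
    """Depth-first traversal, so every (sub)task receives the question set."""
    yield task
    for sub in task.subtasks:
        yield from flatten(sub)

# Each walkthrough session then iterates over (task, question) pairs.
steps = [(t.name, q) for t in flatten(hierarchy) for q in QUESTIONS]
```

Because the tasks are abstract rather than platform-specific, the same hierarchy can be reused across all six platforms, which is what makes the comparison possible.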
A novel interface for first person shooter games on personal digital assistant devices
The main aim of this study is to enhance the playability of games on current standard PDA devices. The newly designed interface leverages current well-established devices more effectively, solving the problem of rapidly and accurately executing a large number of gaming commands. The outcomes of this research are beneficial for the interface design of mobile applications.
MEVA - An interactive visualization application for validation of multifaceted meteorological data with multiple 3D devices
To achieve more realistic simulations, meteorologists develop and use models with increasing spatial and temporal resolution. Analyzing, comparing, and visualizing the resulting simulations becomes more and more challenging due to the growing volume and multifaceted character of the data. Various data sources, numerous variables, and multiple simulations lead to a complex database. Although a variety of software exists for the visualization of meteorological data, none of it fulfills all of the typical domain-specific requirements: support for quasi-standard data formats and different grid types, standard visualization techniques for scalar and vector data, visualization of the context (e.g., topography) and other static data, support for multiple presentation devices used in modern science (e.g., virtual reality), a user-friendly interface, and suitability for cooperative work.
The role of metaphor in user interface design
The thesis discusses the question of how unfamiliar computing systems, particularly those with graphical user interfaces, are learned and used. In particular, the approach of basing the design and behaviour of on-screen objects in the system's model world on a coherent theme and employing a metaphor is explored. The drawbacks, as well as the advantages, of this approach are reviewed and presented. The use of metaphors is also contrasted with other forms of users' mental models of interactive systems, and the need to provide a system image from which useful mental models can be developed is presented.
Metaphors are placed in the context of users' understanding of interactive systems, and novel application is made of the Qualitative Process Theory (QPT) qualitative reasoning model to reason about the behaviour of on-screen objects, the underlying system functionality, and the relationship between the two. This analysis supports a reevaluation of the domains between which user interface metaphors are said to form mappings. A novel user interface design, entitled Medusa, that adopts guidelines for the design of metaphor-based systems, and for helping the user develop successful mental models, based on the QPT analysis and an empirical study of a popular metaphor-based system, is described. The first Medusa design is critiqued using a well-founded usability inspection method.
Employing the Lakoff/Johnson theory, a revised version of the Medusa user interface is described that derives its application semantics and dialogue structures from the entailments of the knowledge structures that ground understanding of the interface metaphor, and that captures notions of embodiment in interaction with computing devices that QPT descriptions cannot. Design guidelines from influential existing work, and new methods of reasoning about metaphor-based designs, are presented with a number of novel graphical user interface designs intended to overcome the failings of existing systems and design approaches.
Designing Hybrid Interactions through an Understanding of the Affordances of Physical and Digital Technologies
Two recent technological advances have extended the diversity of domains and social contexts of Human-Computer Interaction: the embedding of computing capabilities into physical hand-held objects, and the emergence of large interactive surfaces, such as tabletops and wall boards. Both interactive surfaces and small computational devices usually allow for direct and space-multiplex input, i.e., for the spatial coincidence of physical action and digital output at multiple points simultaneously. Such a powerful combination opens novel opportunities for the design of what this work considers hybrid interactions.
This thesis explores the affordances of physical interaction as resources for interface design of such hybrid interactions. The hybrid systems that are elaborated in this work are envisioned to support specific social and physical contexts, such as collaborative cooking in a domestic kitchen, or collaborative creativity in a design process. In particular, different aspects of physicality characteristic of those specific domains are explored, with the aim of promoting skill transfer across domains.
First, different approaches to the design of space-multiplex, function-specific interfaces are considered and investigated. Such design approaches build on related work on Graspable User Interfaces and extend the design space to direct touch interfaces such as touch-sensitive surfaces, in different sizes and orientations (i.e., tablets, interactive tabletops, and walls).
These approaches are instantiated in the design of several experience prototypes, which are evaluated in different settings to assess the contextual implications of integrating aspects of physicality in the design of the interface. Such implications are observed both at the pragmatic level of interaction (i.e., patterns of users' behaviour on first contact with the interface) and in users' subjective responses. The results indicate that the context of interaction affects the perception of the affordances of the system, and that some qualities of physicality, such as the 3D space of manipulation and relative haptic feedback, can affect the feeling of engagement and control. Building on these findings, two controlled studies are conducted to observe more systematically the implications of integrating some of the qualities of physical interaction into the design of hybrid ones.
The results indicate that, although several aspects of physical interaction are mimicked in the interface, interaction with digital media is quite different and seems to reveal existing mental models and expectations resulting from previous experience with the WIMP paradigm on the desktop PC.
An Investigation Into The Accessibility Of Massive Open Online Courses (MOOCs)
Massive Open Online Courses (MOOCs) are an evolution of open online learning that enables people to study online and for little or no cost. MOOCs can provide learners with the flexibility to learn, opportunities for social learning, and the chance to gain new skills and knowledge. While MOOCs have the potential to also bring these benefits to disabled learners, there is little understanding of how accessibility is embedded in the creation of MOOCs. The goal of this research has been to understand the accessibility barriers in MOOCs and to develop processes to identify and address those barriers.
In the extant literature, the expectations of disabled learners when they take up MOOCs are not discussed and studies on MOOCs that report demographic data of learners do not consider disabled learners. However, disabled learners can face difficulties in accessing MOOCs, and certain learning designs of MOOCs may affect their engagement, causing them to miss out on opportunities offered by MOOCs. Technologies and the learning design approaches for MOOCs need to be as accessible as possible, so that learners can use MOOCs in a range of contexts, including via assistive technologies.
This research has investigated the current state of accessibility in MOOCs. It has involved the following:
Interviews with 26 MOOC providers, including software developers, accessibility managers, inclusion designers, instructional designers, course editors, and learning media developers;
Comparative quantitative survey data involving disabled and non-disabled learners participating in 14 MOOCs;
Interviews with 15 disabled learners which have captured their experiences; and
An accessibility audit was devised and then used to evaluate MOOCs from 4 major platforms: FutureLearn, edX, Coursera and Canvas. This audit comprises 4 components: technical accessibility, user experience (UX), quality and learning design; 10 experts were involved in its design and validation.
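The four audit components named above can be pictured as a simple scoring structure. The sketch below is an illustrative assumption, not the audit instrument itself: the component names come from the text, but the 0-5 rating scale, the equal weighting, and the function names are hypothetical.

```python
# Components of the accessibility audit described in the text.
AUDIT_COMPONENTS = [
    "technical accessibility",
    "user experience (UX)",
    "quality",
    "learning design",
]

def audit_summary(scores: dict) -> dict:
    """Validate that every component was rated, then average the ratings.

    The 0-5 scale and equal weighting are assumptions for illustration;
    the actual audit's scoring scheme is not given in the abstract.
    """
    missing = [c for c in AUDIT_COMPONENTS if c not in scores]
    if missing:
        raise ValueError(f"unrated components: {missing}")
    return {
        "per_component": scores,
        "overall": sum(scores.values()) / len(AUDIT_COMPONENTS),
    }

# Hypothetical ratings for one MOOC platform.
example = audit_summary({
    "technical accessibility": 4,
    "user experience (UX)": 3,
    "quality": 5,
    "learning design": 4,
})
```

Keeping per-component scores alongside the overall figure mirrors the audit's design: a platform can pass a technical check yet still fail learners through its learning design, and a single aggregate number would hide that.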
This research programme has yielded an understanding of how MOOC providers cater for disabled learners, the motivations of disabled learners when taking part in MOOCs, and how MOOCs should be designed to be accessible for disabled learners. A range of barriers to accessibility in MOOCs have been identified, and an accessibility audit for MOOCs has been proposed.
An open online learning environment should take into account learners' abilities, learning goals, where learning takes place, and the different devices learners use. The research outcomes will be beneficial to MOOC providers in supporting the accessible design of MOOCs, including the educational resources and the platforms where the MOOCs are hosted. The ultimate beneficiaries of this research project are MOOC learners, because accessible MOOCs will help support their lifelong learning and provide re-skilling opportunities.