26 research outputs found

    Spatial Interaction for Immersive Mixed-Reality Visualizations

    Growing amounts of data, both in personal and professional settings, have caused an increased interest in data visualization and visual analytics. Especially for inherently three-dimensional data, immersive technologies such as virtual and augmented reality and advanced, natural interaction techniques have been shown to facilitate data analysis. Furthermore, in such use cases, the physical environment often plays an important role, both by directly influencing the data and by serving as context for the analysis. Therefore, there has been a trend to bring data visualization into new, immersive environments and to make use of the physical surroundings, leading to a surge in mixed-reality visualization research. One of the resulting challenges, however, is the design of user interaction for these often complex systems. In my thesis, I address this challenge by investigating interaction for immersive mixed-reality visualizations regarding three core research questions: 1) What are promising types of immersive mixed-reality visualizations, and how can advanced interaction concepts be applied to them? 2) How does spatial interaction benefit these visualizations and how should such interactions be designed? 3) How can spatial interaction in these immersive environments be analyzed and evaluated? To address the first question, I examine how various visualizations such as 3D node-link diagrams and volume visualizations can be adapted for immersive mixed-reality settings and how they stand to benefit from advanced interaction concepts. For the second question, I study how spatial interaction in particular can help to explore data in mixed reality. There, I look into spatial device interaction in comparison to touch input, the use of additional mobile devices as input controllers, and the potential of transparent interaction panels. 
Finally, to address the third question, I present my research on how user interaction in immersive mixed-reality environments can be analyzed directly in the original, real-world locations, and how this can provide new insights. Overall, with my research, I contribute interaction and visualization concepts, software prototypes, and findings from several user studies on how spatial interaction techniques can support the exploration of immersive mixed-reality visualizations.

    Synchronized wayfinding on multiple consecutively situated public displays

    Our built environment is increasingly equipped with public displays, many of which are networked and share the same physical location. Despite their ubiquitous presence and inherently dynamic capabilities, the co-location of multiple public displays is often not exploited, for example to solve dynamic wayfinding challenges in crowded or complex spaces. Hence, we have studied how signage can be animated across multiple consecutively located public displays in combination with other content. This paper reports on an in-the-wild evaluation study in a real-world, metropolitan train station to identify the most promising design strategies to: 1) provide the notion of spatial directionality by way of animation; 2) support concurrent viewing of wayfinding and other content types; and 3) convey a sense of urgency. Our results indicate that spatially distributed animated patterns may be used to convey directions under specific spatial conditions and content combination strategies, yet their impact is limited and highly dependent on the visibility of the animated patterns on individual screens and across multiple displays.
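    The core idea above, a pattern that appears to travel in a given direction by animating consecutive displays with a time lag, can be illustrated with a minimal sketch. This is not the system evaluated in the paper; the function names, the 2 s cycle, and the triangular brightness pulse are assumptions for illustration only.

```python
# Minimal sketch of a direction-conveying animation across consecutive
# displays. All names and constants are illustrative assumptions,
# not the evaluated train-station deployment.

def phase_offsets(num_displays, cycle_s=2.0):
    """Per-display start delay so a pulse travels along the route."""
    return [i * cycle_s / num_displays for i in range(num_displays)]

def brightness(t, offset, cycle_s=2.0):
    """Triangular pulse in [0, 1]: each display peaks once per cycle,
    shifted by its offset, so the peak sweeps from display to display."""
    local = ((t - offset) % cycle_s) / cycle_s  # position within cycle, 0..1
    return 1.0 - abs(2.0 * local - 1.0)         # 0 at cycle edges, 1 mid-cycle
```

    Rendering each display's arrow or pattern with its own offset makes the brightness peak move in the order of the offsets, which viewers can perceive as directionality along the walking route.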

    User-centred design of smartphone augmented reality in urban tourism context.

    Exposure to new and unfamiliar environments is a necessary part of nearly everyone's life. Effective communication of location-based information through various location-based service interfaces (LBSIs) has become a key concern for cartographers, geographers, human-computer interaction (HCI) researchers and professional designers alike. Much attention is directed towards Augmented Reality (AR) interfaces. Smartphone AR browsers deliver information about physical objects through spatially registered virtual annotations and can function as an interface to (geo)spatial and attribute data. Such applications have considerable potential for tourism. Recently, the number of studies discussing the optimal placement and layout of AR content has increased. Results, however, do not scale well to the domain of urban tourism, because: 1) in any urban destination, many objects can be augmented with information; 2) each object can be a source of a substantial amount of information; 3) the incoming video feed is visually heterogeneous and complex; 4) the target user group is in an unfamiliar environment; and 5) tourists have different information needs than urban residents. Adopting a User-Centred Design (UCD) approach, the main aim of this research project was to make a theoretical contribution to design knowledge relevant to effective support for (geo)spatial knowledge acquisition in unfamiliar urban environments. The research activities were divided into four (iterative) stages: (1) theoretical, (2) requirements analysis, (3) design, and (4) evaluation. After a critical analysis of the existing literature on the design of AR, the theoretical stage involved the development of a theoretical user-centred design framework capturing current knowledge in several relevant disciplines. In the second stage, user requirements gathering was carried out through a field quasi-experiment in which tourists were asked to use AR browsers in an environment unfamiliar to them.
Qualitative and quantitative data were used to identify key relationships, extend the user-centred design framework and generate hypotheses about effective and efficient design. In the third stage, several design alternatives were developed and used to test the hypotheses through a laboratory-based quantitative study with 90 users. The results indicate that information acquisition through AR browsers is more effective and efficient if at least one element within the AR annotation matches the perceived visual characteristics or inferred non-visual attributes of target physical objects. Finally, in order to ensure that all major constructs and relationships are identified, qualitative evaluation of AR annotations was carried out by HCI and GIS domain-expert users in an unfamiliar urban tourism context. The results show that effective information acquisition in an urban tourism context will depend on the visual design and delivered content of AR annotations for both visible and non-visible points of interest. All results were later positioned within existing theory in order to develop a final conceptual user-centred design framework that shifts the perspective towards a more thorough understanding of the overall design space for mobile AR interfaces. The dissertation has theoretical, methodological and practical implications. The main theoretical contribution of this thesis is to Information Systems Design Theory. The developed framework provides knowledge regarding the design of mobile AR. It can be used for hypothesis generation and further empirical evaluations of AR interfaces that facilitate knowledge acquisition in different types of environments and for different user groups. From a methodological point of view, the described user-based studies showcase how a UCD approach could be applied to the design and evaluation of novel smartphone interfaces within the travel and tourism domain.
Within industry, the proposed framework could be used as a frame of reference by designers and developers who are not familiar with knowledge acquisition in urban environments and/or mobile AR interfaces.
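    Spatial registration of annotations, as mentioned above, typically reduces to placing a label at the screen position corresponding to a point of interest's bearing relative to the camera heading. The following is a minimal, hypothetical sketch of that placement step; the field of view, screen width and function names are assumptions for illustration, not part of the studied AR browsers.

```python
import math

def bearing_deg(lat1, lon1, lat2, lon2):
    """Initial great-circle bearing from the user to a POI, in degrees."""
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dlon = math.radians(lon2 - lon1)
    y = math.sin(dlon) * math.cos(phi2)
    x = math.cos(phi1) * math.sin(phi2) - math.sin(phi1) * math.cos(phi2) * math.cos(dlon)
    return math.degrees(math.atan2(y, x)) % 360.0

def annotation_x(bearing, heading, fov_deg=60.0, width_px=1080):
    """Horizontal pixel position of an annotation, or None when the POI
    lies outside the camera's horizontal field of view."""
    delta = (bearing - heading + 180.0) % 360.0 - 180.0  # signed angle to POI
    if abs(delta) > fov_deg / 2.0:
        return None
    return int(width_px * (delta / fov_deg + 0.5))
```

    A POI dead ahead lands in the screen centre; one outside the field of view is culled, which is one reason annotations for non-visible points of interest need a different design, as the evaluation above notes.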

    Accessibility of Health Data Representations for Older Adults: Challenges and Opportunities for Design

    Health data from consumer off-the-shelf wearable devices are often conveyed to users through visual data representations and analyses. However, these are not always accessible to people with disabilities or to older people due to low vision, cognitive impairments or literacy issues. Because of trade-offs between aesthetic predominance and information overload, real-time user feedback may not be conveyed easily from sensor devices through visual cues such as graphs and text. These difficulties may hinder critical data understanding. Additional auditory and tactile feedback can provide immediate and accessible cues from these wearable devices, but the limitations of existing data representations must first be understood. To avoid higher cognitive and visual overload, auditory and haptic cues can be designed to complement, replace or reinforce visual cues. In this paper, we outline the challenges in existing data representations and the evidence needed to enhance the accessibility of health information from personal sensing devices used to monitor health parameters such as blood pressure, sleep, activity, heart rate and more. Innovative and inclusive user feedback will make users more likely to engage and interact with new devices and their own data.
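    The idea of complementing or reinforcing visual cues with auditory and haptic ones can be sketched as a redundant encoding of the same reading. The zone thresholds, tone frequencies and vibration patterns below are illustrative assumptions, not recommendations from the paper.

```python
# Hypothetical redundant (visual + auditory + haptic) encoding of a
# heart-rate reading. Thresholds and cue values are illustrative only.

CUES = {
    "low":    {"color": "blue",  "tone_hz": 220, "vibration": "two short pulses"},
    "normal": {"color": "green", "tone_hz": 440, "vibration": "none"},
    "high":   {"color": "red",   "tone_hz": 880, "vibration": "one long pulse"},
}

def heart_rate_cues(bpm):
    """Map a heart-rate value to a zone and its multimodal cue set."""
    if bpm < 60:
        zone = "low"
    elif bpm <= 100:
        zone = "normal"
    else:
        zone = "high"
    return zone, CUES[zone]
```

    Encoding the same zone in colour, pitch and vibration means a user who cannot rely on one channel still receives the reading through another.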