
    Spatial Interaction for Immersive Mixed-Reality Visualizations

    Growing amounts of data, both in personal and professional settings, have caused an increased interest in data visualization and visual analytics. Especially for inherently three-dimensional data, immersive technologies such as virtual and augmented reality and advanced, natural interaction techniques have been shown to facilitate data analysis. Furthermore, in such use cases, the physical environment often plays an important role, both by directly influencing the data and by serving as context for the analysis. Therefore, there has been a trend to bring data visualization into new, immersive environments and to make use of the physical surroundings, leading to a surge in mixed-reality visualization research. One of the resulting challenges, however, is the design of user interaction for these often complex systems. In my thesis, I address this challenge by investigating interaction for immersive mixed-reality visualizations regarding three core research questions: 1) What are promising types of immersive mixed-reality visualizations, and how can advanced interaction concepts be applied to them? 2) How does spatial interaction benefit these visualizations and how should such interactions be designed? 3) How can spatial interaction in these immersive environments be analyzed and evaluated? To address the first question, I examine how various visualizations such as 3D node-link diagrams and volume visualizations can be adapted for immersive mixed-reality settings and how they stand to benefit from advanced interaction concepts. For the second question, I study how spatial interaction in particular can help to explore data in mixed reality. There, I look into spatial device interaction in comparison to touch input, the use of additional mobile devices as input controllers, and the potential of transparent interaction panels. Finally, to address the third question, I present my research on how user interaction in immersive mixed-reality environments can be analyzed directly in the original, real-world locations, and how this can provide new insights. Overall, with my research, I contribute interaction and visualization concepts, software prototypes, and findings from several user studies on how spatial interaction techniques can support the exploration of immersive mixed-reality visualizations.
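    As a rough illustration of the kind of spatial device interaction described in this abstract, the sketch below (my own illustration, not code from the thesis) maps the pose of a tracked handheld panel to a cutting plane that hides part of a volume visualization; the tracker interface, names, and data are assumptions.

```python
# Illustrative sketch (not from the thesis): using the pose of a tracked
# handheld panel as a cutting plane for a volume visualization.
# The "device" values below stand in for a hypothetical tracker that
# reports a position and a unit plane normal.
import numpy as np

def clip_volume(volume, plane_point, plane_normal):
    """Zero out all voxels on the positive side of the device-defined plane."""
    zs, ys, xs = np.indices(volume.shape)
    coords = np.stack([xs, ys, zs], axis=-1).astype(float)
    # Signed distance of every voxel centre from the plane.
    dist = (coords - plane_point) @ plane_normal
    clipped = volume.copy()
    clipped[dist > 0] = 0  # hide everything "in front of" the panel
    return clipped

# Example: a 64^3 volume sliced by a plane through its centre,
# oriented by a (hypothetical) tracked device normal.
volume = np.random.rand(64, 64, 64)
device_position = np.array([32.0, 32.0, 32.0])
device_normal = np.array([0.0, 0.0, 1.0])  # assumed already normalised
print(clip_volume(volume, device_position, device_normal).shape)
```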

    Interactive exploration of historic information via gesture recognition

    Developers of interactive exhibits often struggle to find appropriate input devices that enable intuitive control, permitting visitors to engage effectively with the content. Recently, motion-sensing input devices such as the Microsoft Kinect or the Panasonic D-Imager have become available, enabling gesture-based control of computer systems. These devices are an attractive input option for exhibits since users can interact with their hands and are not required to physically touch any part of the system. In this thesis we investigate techniques that enable the raw data coming from these types of devices to be used to control an interactive exhibit. Object recognition and tracking techniques are used to analyse the user's hand so that movement and clicks can be processed. To show the effectiveness of the techniques, the gesture system is used to control an interactive system designed to inform the public about iconic buildings in the centre of Norwich, UK. We evaluate two methods of making selections in the test environment. At the time of experimentation the technologies were relatively new to the image processing environment. As a result of the research presented in this thesis, the techniques and methods used have been detailed and published [3] at the VSMM (Virtual Systems and Multimedia) 2012 conference with the intention of advancing the area further.
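    The abstract does not specify how the two selection methods were implemented; purely as an illustration of turning tracked hand positions from a depth sensor into cursor movement and clicks, the following sketch implements a dwell-based selection, one common touch-free technique. The class name, thresholds, and the synthetic sample feed are assumptions, not details from the thesis.

```python
# Illustrative sketch (not from the thesis): turning a tracked hand position,
# as delivered by a depth sensor such as the Kinect, into a dwell-based "click".
# Thresholds and the sample feed are assumed values for demonstration only.
import math

class DwellClickDetector:
    def __init__(self, radius=20.0, dwell_seconds=1.0):
        self.radius = radius              # pixels the hand may drift while still "dwelling"
        self.dwell_seconds = dwell_seconds
        self._anchor = None               # (x, y, timestamp) where the current dwell started

    def update(self, x, y, timestamp):
        """Feed one tracked hand sample; returns True when a dwell click fires."""
        if self._anchor is None:
            self._anchor = (x, y, timestamp)
            return False
        ax, ay, t0 = self._anchor
        if math.hypot(x - ax, y - ay) > self.radius:
            self._anchor = (x, y, timestamp)  # hand moved: restart the dwell timer
            return False
        if timestamp - t0 >= self.dwell_seconds:
            self._anchor = None               # fire once, then reset
            return True
        return False

# Example with synthetic samples: the hand holds still for ~1.2 s.
detector = DwellClickDetector()
for i in range(13):
    if detector.update(300.0, 200.0, i * 0.1):
        print(f"click at t={i * 0.1:.1f}s")
```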

    Examining the use of visualisation methods for the design of interactive systems

    Human-Computer Interaction (HCI) design has historically involved people from different fields. Designing HCI systems with people of varying backgrounds and expertise can bring different perspectives and ideas, but discipline-specific language and design methods can hinder such collaborations. The application of visualisation methods is a way to overcome these challenges, but to date selection tools tend to focus on a single facet of HCI design methods, and no research has attempted to assemble a collection of HCI visualisation methods. To fill this gap, this research seeks to establish an inventory of HCI visualisation methods and identify ways of selecting amongst them. Creating the inventory of HCI methods enables designers to discover and learn about methods that they may not have used or been familiar with before. Categorising the methods provides a structure for new and experienced designers to determine appropriate methods for their design project. The aim of this research is to support designers in the development of HCI systems through better selection and application of visualisation methods. This is achieved through four phases. In the first phase, three case studies are conducted to investigate the challenges and obstacles that influence the choice of a design approach in the development of HCI systems. The findings from the three case studies form the design requirements for a visualisation methods selection and application guide. In the second phase, the Guide is developed. In the third phase, the Guide is evaluated: it is employed in the development of a serious training game to demonstrate its applicability. In the fourth phase, a user study is designed to evaluate the serious training game, and through this evaluation the Guide is validated. This research contributes to the knowledge surrounding visualisation tools used in the design of interactive systems. The compilation of HCI visualisation methods establishes an inventory of methods for interaction design. The identification of Selection Approaches brings together the ways in which visualisation methods are organised and grouped. By mapping visualisation methods to Selection Approaches, this study provides a way for practitioners to select a visualisation method to support their design practice. The Selection Guide provides five filters, which help designers to identify suitable visualisation methods based on the nature of the design challenge. The Application Guide presents the methodology of each visualisation method in a consistent format, which eases comparison between methods and ensures that there is comprehensive information for each method. A user study evaluating the serious training game is presented: two learning objectives were identified and mapped to Bloom’s Taxonomy to support a like-for-like comparison with future studies.
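    As a hedged illustration of the kind of faceted selection the Guide's filters suggest, the sketch below shows a minimal inventory of visualisation methods tagged with facets and a filter over it; the method names, tags, and filter values are invented examples, not the Guide's actual content.

```python
# Illustrative sketch (not from the thesis): a tiny inventory of visualisation
# methods tagged by facets, filtered the way a selection guide might filter them.
# All names, tags and filters here are invented for demonstration.
from dataclasses import dataclass, field

@dataclass
class VisualisationMethod:
    name: str
    tags: set = field(default_factory=set)  # e.g. design stage, fidelity, purpose

inventory = [
    VisualisationMethod("Storyboard", {"early-stage", "low-fidelity", "communication"}),
    VisualisationMethod("Wireframe", {"mid-stage", "low-fidelity", "structure"}),
    VisualisationMethod("Interactive prototype", {"late-stage", "high-fidelity", "evaluation"}),
]

def select_methods(methods, required_tags):
    """Return the methods whose tags satisfy every active filter."""
    return [m for m in methods if required_tags <= m.tags]

# Example: a designer filtering for low-fidelity methods.
print([m.name for m in select_methods(inventory, {"low-fidelity"})])
```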

    Eyes-Off Physically Grounded Mobile Interaction

    This thesis explores the possibilities, challenges and future scope for eyes-off, physically grounded mobile interaction. We argue that for interactions with digital content in physical spaces, our focus should not be constantly and solely on the device we are using, but fused with an experience of the places themselves, and the people who inhabit them. Through the design, development and evaluation of a series of novel prototypes we show the benefits of a more eyes-off mobile interaction style. Consequently, we are able to outline several important design recommendations for future devices in this area. The four key contributing chapters of this thesis each investigate separate elements within this design space. We begin by evaluating the need for screen-primary feedback during content discovery, showing how a more exploratory experience can be supported via a less-visual interaction style. We then demonstrate how tactile feedback can improve the experience and the accuracy of the approach. In our novel tactile hierarchy design we add a further layer of haptic interaction, and show how people can be supported in finding and filtering content types, eyes-off. We then turn to explore interactions that shape the ways people interact with a physical space. Our novel group and solo navigation prototypes use haptic feedback for a new approach to pedestrian navigation. We demonstrate how variations in this feedback can support exploration, giving users autonomy in their navigation behaviour, but with an underlying reassurance that they will reach the goal. Our final contributing chapter turns to consider how these advanced interactions might be provided for people who do not have the expensive mobile devices that are usually required. We extend an existing telephone-based information service to support remote back-of-device inputs on low-end mobiles. We conclude by establishing the current boundaries of these techniques, and suggesting where their usage could lead in the future.
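    To make the haptic navigation idea concrete, the following sketch (an assumption-laden illustration, not the prototypes from the thesis) scales vibration intensity with how far a pedestrian's heading deviates from the bearing to the next waypoint, staying silent within a dead zone so that users who are roughly on course are left undisturbed.

```python
# Illustrative sketch (not from this thesis): driving vibration feedback from
# the angular difference between a pedestrian's heading and the bearing to the
# next waypoint. The dead zone, scaling and example values are assumptions.

def bearing_error(heading_deg, bearing_to_goal_deg):
    """Smallest signed angle (degrees) from the current heading to the goal bearing."""
    return (bearing_to_goal_deg - heading_deg + 180.0) % 360.0 - 180.0

def vibration_intensity(heading_deg, bearing_to_goal_deg, dead_zone_deg=15.0):
    """0.0 when roughly on course (eyes-off reassurance), rising towards 1.0
    as the walker drifts further from the bearing to the goal."""
    error = abs(bearing_error(heading_deg, bearing_to_goal_deg))
    if error <= dead_zone_deg:
        return 0.0
    return min(1.0, (error - dead_zone_deg) / (180.0 - dead_zone_deg))

# Example: walking almost towards the goal vs. heading off at a right angle.
print(vibration_intensity(heading_deg=10.0, bearing_to_goal_deg=20.0))   # 0.0
print(vibration_intensity(heading_deg=10.0, bearing_to_goal_deg=100.0))  # ~0.45
```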

    Chatbot-Based Natural Language Interfaces for Data Visualisation: A Scoping Review

    Rapid growth in the generation of data from various sources has made data visualisation a valuable tool for analysing data. However, visual analysis can be a challenging task, not only due to intricate dashboards but also when dealing with complex and multidimensional data. In this context, advances in Natural Language Processing technologies have led to the development of Visualisation-oriented Natural Language Interfaces (V-NLIs). In this paper, we carry out a scoping review that analyses synergies between the fields of Data Visualisation and Natural Language Interaction. Specifically, we focus on chatbot-based V-NLI approaches and explore and discuss three research questions. The first two research questions focus on studying how chatbot-based V-NLIs contribute to interactions with the Data and Visual Spaces of the visualisation pipeline, while the third seeks to know how chatbot-based V-NLIs enhance users' interaction with visualisations. Our findings show that the works in the literature put a strong focus on exploring tabular data with basic visualisations, with visual mapping primarily reliant on fixed layouts. Moreover, V-NLIs provide users with restricted guidance strategies, and few of them support high-level and follow-up queries. We identify challenges and possible research opportunities for the V-NLI community such as supporting high-level queries with complex data, integrating V-NLIs with more advanced systems such as Augmented Reality (AR) or Virtual Reality (VR), particularly for advanced visualisations, expanding guidance strategies beyond current limitations, adopting intelligent visual mapping techniques, and incorporating more sophisticated interaction methods
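    As a toy illustration of the visual mapping step that chatbot-based V-NLIs perform, the sketch below maps an utterance about tabular data to a basic chart specification using keyword rules; the schema, rules, and spec format are assumptions, and real systems use NLP pipelines or large language models rather than keyword matching.

```python
# Illustrative sketch (not from the review): a toy, rule-based V-NLI step that
# maps a user's utterance about tabular data to a basic chart specification.
# The column names, keyword rules and spec format are assumptions.
import re

COLUMNS = ("month", "sales", "region", "profit")  # assumed tabular schema

def utterance_to_spec(utterance):
    tokens = set(re.findall(r"[a-z]+", utterance.lower()))
    fields = [c for c in COLUMNS if c in tokens]
    # Pick a mark type from simple keyword cues in the utterance.
    if "trend" in tokens or "over" in tokens:
        mark = "line"
    elif "share" in tokens or "proportion" in tokens:
        mark = "arc"  # pie-style chart
    else:
        mark = "bar"
    return {
        "mark": mark,
        "encoding": {
            "x": {"field": fields[0] if fields else None},
            "y": {"field": fields[1] if len(fields) > 1 else None},
        },
    }

print(utterance_to_spec("Show me the trend of sales over month"))
```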