
    Context-Aware Software

    With the advent of PDAs (Personal Digital Assistants), smartphones, and other forms of mobile and ubiquitous computers, our computing resources are increasingly moving off our desktops and into our everyday lives. However, the software and user interfaces for these devices are generally very similar to those of their desktop counterparts, despite the radically different and dynamic environments they face. We propose that, to better assist their users, such devices should be able to sense, react to, and utilise the user's current environment or context. That is, they should become context-aware. In this thesis we investigate context-awareness at three levels: user interfaces, applications, and supporting architectures/frameworks.

    To promote the use of context-awareness, and to aid its deployment in software, we have developed two supporting frameworks. The first is an application-oriented framework called stick-e notes. Based on an electronic version of the common Post-It Note, stick-e notes enable the attachment of any electronic resource (e.g. a text file, movie, Java program, etc.) to any type of context (e.g. location, temperature, time, etc.). The second framework we devised seeks to provide more universal support for the capture, manipulation, and representation of context information. We call it the Context Information Service (CIS). It fills a role in context-aware software development similar to that of GUI libraries in user interface development.

    Our applications research explored how context-awareness can be exploited in real environments with real users. In particular, we developed a suite of PDA-based context-aware tools for fieldworkers. These were used extensively by a group of ecologists in Africa to record observations of giraffes and rhinos in a remote Kenyan game reserve. These tools also provided the foundations for our HCI work, in which we developed the concept of the Minimal Attention User Interface (MAUI). The aim of the MAUI is to reduce the attention required of the user to operate a device by carefully selecting input/output modes that are harmonious with their tasks and environment.

    To evaluate our ideas and applications, a field study was conducted in which over forty volunteers used our system for data collection activities over the course of a summer season at the Kenyan game reserve. The PDA-based tools were unanimously preferred to the paper-based alternatives, and the context-aware features were cited as particular reasons for preferring them. In summary, this thesis presents two frameworks to support context-aware software, a set of applications demonstrating how context-awareness can be utilised in the "real world", and a set of HCI guidelines and principles that help in creating user interfaces that fit their context of use.
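
    As a purely illustrative sketch of the stick-e note idea described above (not the thesis's actual framework or API), the following Java fragment models a note as a pairing of a triggering context with an electronic resource; all class and member names (StickENote, Context, LocationContext, resourceUri, matches) and the example coordinates are hypothetical.

        // Illustrative sketch only: a hypothetical, minimal model of a "stick-e note"
        // that attaches an electronic resource to a triggering context.
        import java.util.ArrayList;
        import java.util.List;

        interface Context {
            boolean matches(Context current);   // is this trigger satisfied by the current context?
        }

        class LocationContext implements Context {
            final double lat, lon, radiusMetres;
            LocationContext(double lat, double lon, double radiusMetres) {
                this.lat = lat; this.lon = lon; this.radiusMetres = radiusMetres;
            }
            public boolean matches(Context current) {
                if (!(current instanceof LocationContext)) return false;
                LocationContext c = (LocationContext) current;
                // crude equirectangular distance check, adequate for a sketch
                double dLat = Math.toRadians(c.lat - lat);
                double dLon = Math.toRadians(c.lon - lon) * Math.cos(Math.toRadians(lat));
                double metres = 6371000 * Math.sqrt(dLat * dLat + dLon * dLon);
                return metres <= radiusMetres;
            }
        }

        class StickENote {
            final Context trigger;      // e.g. a location, a time, a temperature range
            final String resourceUri;   // any electronic resource: text file, movie, program...
            StickENote(Context trigger, String resourceUri) {
                this.trigger = trigger; this.resourceUri = resourceUri;
            }
        }

        public class StickENoteDemo {
            public static void main(String[] args) {
                List<StickENote> notes = new ArrayList<>();
                notes.add(new StickENote(
                        new LocationContext(0.536, 36.905, 200),   // hypothetical waterhole location
                        "file:///notes/observation-guide.txt"));

                Context here = new LocationContext(0.5362, 36.9051, 0);  // the user's current position
                for (StickENote note : notes) {
                    if (note.trigger.matches(here)) {
                        System.out.println("Triggered note: " + note.resourceUri);
                    }
                }
            }
        }

    In this reading, the framework's job is to evaluate each note's trigger against the continuously sensed current context and surface the attached resource when it matches.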

    Spatial Interaction for Immersive Mixed-Reality Visualizations

    Growing amounts of data, both in personal and professional settings, have caused an increased interest in data visualization and visual analytics. Especially for inherently three-dimensional data, immersive technologies such as virtual and augmented reality and advanced, natural interaction techniques have been shown to facilitate data analysis. Furthermore, in such use cases, the physical environment often plays an important role, both by directly influencing the data and by serving as context for the analysis. Therefore, there has been a trend to bring data visualization into new, immersive environments and to make use of the physical surroundings, leading to a surge in mixed-reality visualization research. One of the resulting challenges, however, is the design of user interaction for these often complex systems.

    In my thesis, I address this challenge by investigating interaction for immersive mixed-reality visualizations regarding three core research questions: 1) What are promising types of immersive mixed-reality visualizations, and how can advanced interaction concepts be applied to them? 2) How does spatial interaction benefit these visualizations and how should such interactions be designed? 3) How can spatial interaction in these immersive environments be analyzed and evaluated?

    To address the first question, I examine how various visualizations such as 3D node-link diagrams and volume visualizations can be adapted for immersive mixed-reality settings and how they stand to benefit from advanced interaction concepts. For the second question, I study how spatial interaction in particular can help to explore data in mixed reality. There, I look into spatial device interaction in comparison to touch input, the use of additional mobile devices as input controllers, and the potential of transparent interaction panels. Finally, to address the third question, I present my research on how user interaction in immersive mixed-reality environments can be analyzed directly in the original, real-world locations, and how this can provide new insights. Overall, with my research, I contribute interaction and visualization concepts, software prototypes, and findings from several user studies on how spatial interaction techniques can support the exploration of immersive mixed-reality visualizations.