
    Using activity transitions to trigger proactive messages

    Thesis (M.Eng.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2004. Includes bibliographical references (p. 113-116). The proliferation of mobile devices and their tendency to present information proactively has led to an increase in device-generated interruptions experienced by users. These interruptions are not confined to a particular physical space; they are omnipresent. One possible strategy to lower the perceived burden of these interruptions is to cluster non-time-sensitive interruptions and deliver them during a physical activity transition. Since a user is already "interrupting" the current activity to engage in a new activity, the user will be more receptive to an interruption at this moment. This work compares the user's receptivity to an interruption triggered by an activity transition against a randomly generated interruption. A mobile computer system detects an activity transition with the use of wireless accelerometers. The results demonstrate that using this strategy reduces the perceived burden of the interruption. By Joyce Ho, M.Eng.
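The cluster-and-trigger strategy this abstract describes can be sketched in a few lines. This is a hypothetical illustration only: the window size, threshold, and variance heuristic below are my assumptions, not the thesis's actual accelerometer-based detection method.

```python
# Sketch: queue non-time-sensitive notifications and release them when a
# shift in accelerometer activity suggests the user is changing activities.
from collections import deque

WINDOW = 50          # samples per comparison window (assumed)
THRESHOLD = 0.5      # variance jump treated as an activity transition (assumed)

def variance(xs):
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)

class TransitionNotifier:
    def __init__(self):
        self.samples = deque(maxlen=2 * WINDOW)
        self.pending = []          # clustered, non-urgent messages

    def queue(self, message):
        self.pending.append(message)

    def on_sample(self, magnitude):
        """Feed one accelerometer magnitude; return messages to deliver now."""
        self.samples.append(magnitude)
        if len(self.samples) < 2 * WINDOW:
            return []
        old = list(self.samples)[:WINDOW]
        new = list(self.samples)[WINDOW:]
        # A large shift in movement variance marks a likely transition,
        # e.g. from sitting still to walking.
        if abs(variance(new) - variance(old)) > THRESHOLD:
            delivered, self.pending = self.pending, []
            self.samples.clear()
            return delivered
        return []
```

In use, a steady signal keeps notifications queued, while the onset of a new movement pattern flushes them, approximating "deliver at the moment the user is already interrupting themselves".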

    Physiologically attentive user interface for robot teleoperation: real time emotional state estimation and interface modification using physiology, facial expressions and eye movements

    We developed a framework for Physiologically Attentive User Interfaces to reduce the interaction gap between humans and machines in life-critical robot teleoperation. Our system draws on the emotional state awareness enabled by psychophysiology and classifies three emotional states (Resting, Stress, and Workload) by analysing physiological data together with facial expressions and eye movements. This emotional state estimation is then used to create a dynamic interface that updates in real time with respect to the user's emotional state. The results of a preliminary evaluation of the developed emotional state classifier for robot teleoperation are presented, and its future possibilities are discussed.

    Surrogate in-vehicle information systems and driver behaviour: Effects of visual and cognitive load in simulated rural driving

    The underlying aim of HASTE, an EU FP5 project, is the development of a valid, cost-effective and reliable assessment protocol to evaluate the potential distraction of an in-vehicle information system on driving performance. As part of this development, the current study examined the systematic relationship between primary and secondary task complexity for a specific task modality in a particular driving environment. Two fundamentally distinct secondary tasks (or surrogate in-vehicle information systems, sIVIS) were developed: a visual search task, designed so that it required only visual processing, and an auditory continuous memory task, intended to cognitively load drivers without any visual stimulus. A high-fidelity, fixed-base driving simulator was used to test 48 participants on a car-following task. Virtual traffic scenarios varied in driving demand. Drivers compensated for both types of sIVIS by reducing their speed (this result was more prominent during interaction with the visual task). However, they seemed incapable of fully prioritising the primary driving task over either the visual or cognitive secondary tasks, as an increase in sIVIS demand was associated with a reduction in driving performance: drivers showed reduced anticipation of braking requirements and shorter time-to-collision. These results are of potential interest to designers of in-vehicle systems.

    Physiologically attentive user interface for improved robot teleoperation

    User interfaces (UIs) are shifting from being attention-hungry to being attentive to users' needs. Interfaces developed for robot teleoperation can be particularly complex, often displaying large amounts of information, which can increase the cognitive overload that degrades the operator's performance. This paper presents the development of a Physiologically Attentive User Interface (PAUI) prototype, preliminarily evaluated with six participants. A case study on Urban Search and Rescue (USAR) operations that teleoperate a robot was used, although the proposed approach aims to be generic. The robot considered provides an overly complex Graphical User Interface (GUI) that does not allow access to its source code. This represents a recurring and challenging scenario in which robots are still in use but technical updates are no longer offered, which usually leads to their abandonment. A major contribution of the approach is the possibility of recycling old systems while improving the UI made available to end users, taking their physiological data as input. The proposed PAUI analyses physiological data, facial expressions, and eye movements to classify three mental states (rest, workload, and stress). An Attentive User Interface (AUI) is then assembled by recycling a pre-existing GUI, which is dynamically modified according to the predicted mental state to improve the user's focus during mentally demanding situations. In addition to the novelty of the proposed PAUIs that take advantage of pre-existing GUIs, this work also contributes the design of a user experiment comprising mental state induction tasks that successfully trigger high and low cognitive overload states. Results from the preliminary user evaluation revealed a tendency towards improvement in the usefulness and ease of use of the PAUI, although without statistical significance due to the reduced number of subjects.

    Self-adaptive unobtrusive interactions of mobile computing systems

    [EN] In Pervasive Computing environments, people are surrounded by many embedded services. Since pervasive devices, such as mobile devices, have become a key part of our everyday life, they enable users to always be connected to the environment, making demands on one of users' most valuable resources: human attention. A challenge of mobile computing systems is regulating the requests for users' attention. In other words, service interactions should behave in a considerate manner by taking into account the degree to which each service intrudes on the user's mind (i.e., the degree of obtrusiveness). The main goal of this paper is to introduce self-adaptive capabilities in mobile computing systems in order to provide non-disturbing interactions. We achieve this by means of a software infrastructure that automatically adapts the service interaction obtrusiveness according to the user's context. This infrastructure works from a set of high-level models that define the unobtrusive adaptation behavior and its implications for the interaction resources in a technology-independent way. Our infrastructure has been validated through several experiments to assess its correctness, performance, and the achieved user experience through a user study. This work has been developed with the support of MINECO under the project SMART-ADAPT TIN2013-42981-P, and co-financed by the Generalitat Valenciana under the postdoctoral fellowship APOSTD/2016/042. Gil Pascual, M.; Pelechano Ferragud, V. (2017). Self-adaptive unobtrusive interactions of mobile computing systems. Journal of Ambient Intelligence and Smart Environments, 9(6), 659-688. https://doi.org/10.3233/AIS-170463
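The context-driven obtrusiveness adaptation this paper describes can be illustrated with a minimal sketch. The context keys, levels, and rules below are invented for illustration and do not reflect the authors' actual models or infrastructure.

```python
# Sketch: map the user's current context to an interaction obtrusiveness
# level, so services can demand attention in a considerate way.
from enum import Enum

class Obtrusiveness(Enum):
    INVISIBLE = 0   # defer the interaction, no output
    SLIGHT = 1      # unobtrusive visual cue only
    FULL = 2        # sound / vibration, demands attention

def adapt(context):
    """Choose an interaction level from a context dict (illustrative keys)."""
    if context.get("in_meeting"):
        return Obtrusiveness.INVISIBLE
    if context.get("driving"):
        # Only urgent interactions may surface, and only as a glanceable cue.
        return Obtrusiveness.SLIGHT if context.get("urgent") else Obtrusiveness.INVISIBLE
    return Obtrusiveness.FULL if context.get("urgent") else Obtrusiveness.SLIGHT
```

A real infrastructure of this kind would derive such rules from high-level, technology-independent models rather than hard-coded conditions, but the input-to-output shape is the same.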

    Interruptibility prediction for ubiquitous systems: conventions and new directions from a growing field

    When should a machine attempt to communicate with a user? This is a historical problem that has been studied since the rise of personal computing. More recently, the emergence of pervasive technologies such as the smartphone has extended the problem to be ever-present in our daily lives, opening up new opportunities for context awareness through data collection and reasoning. Complementary to this, there has been increasing interest in techniques to intelligently synchronise interruptions with human behaviour and cognition. However, it is increasingly challenging to categorise new developments, which are often scenario-specific or scope a problem with particular unique features. In this paper we present a meta-analysis of this area, decomposing and comparing historical and recent works that seek to understand and predict how users will perceive and respond to interruptions. In doing so we identify research gaps, questions and opportunities that characterise this important emerging field for pervasive technology.

    Virtual reality interfaces for seamless interaction with the physical reality

    In recent years, head-mounted displays (HMDs) for virtual reality (VR) have made the transition from research to consumer product, and are increasingly used for productive purposes such as 3D modeling in the automotive industry and teleconferencing. VR allows users to create and experience realistic models of products, and enables immersive social interaction with distant colleagues. These solutions are a promising alternative to physical prototypes and meetings, as they require less investment in time and material. VR uses our visual dominance to deliver these experiences, making users believe that they are in another reality. However, while their mind is present in VR, their body remains in the physical reality. From the user's perspective, this brings considerable uncertainty to the interaction. Currently, users are forced to take off their HMD in order to, for example, see who is observing them and to understand whether their physical integrity is at risk. This disrupts their interaction in VR, leading to a loss of presence, a main quality measure for the success of VR experiences. In this thesis, I address this uncertainty by developing interfaces that enable users to stay in VR while supporting their awareness of the physical reality. They maintain this awareness without having to take off the headset, which I refer to as seamless interaction with the physical reality. The overarching research vision that guides this thesis is, therefore, to reduce this disconnect between the virtual and physical reality. My research is motivated by a preliminary exploration of user uncertainty towards using VR in co-located, public places. This exploration revealed three main foci: (a) security and privacy, (b) communication with physical collaborators, and (c) managing presence in both the physical and virtual reality.
    Each theme represents a section in my dissertation, in which I identify central challenges and give directions towards overcoming them, as they have emerged from the work presented here. First, I investigate security and privacy in co-located situations by revealing to what extent bystanders are able to observe general tasks. In this context, I explicitly investigate the security considerations of authentication mechanisms. I review how existing authentication mechanisms can be transferred to VR and present novel approaches that are more usable and secure than existing solutions from prior work. Second, to support communication between VR users and physical collaborators, I add to the field design implications for VR interactions that enable observers to choose opportune moments to interrupt HMD users. Moreover, I contribute methods for displaying interruptions in VR and discuss their effect on presence and performance. I also found that different virtual presentations of co-located collaborators have an effect on social presence, performance, and trust. Third, I close my thesis by investigating methods to manage presence in both the physical and virtual realities. I propose systems and interfaces for transitioning between them that empower users to decide how much they want to be aware of the other reality. Finally, I discuss the opportunity to systematically allocate senses to these two realities: the visual one for VR, and the auditory and haptic ones for the physical reality. Moreover, I provide specific design guidelines on how to use these findings to alert VR users about physical borders and obstacles. (A German translation of this abstract, duplicating its content, accompanied the original record.)