
    Spatial Interaction for Immersive Mixed-Reality Visualizations

    Growing amounts of data, both in personal and professional settings, have caused an increased interest in data visualization and visual analytics. Especially for inherently three-dimensional data, immersive technologies such as virtual and augmented reality and advanced, natural interaction techniques have been shown to facilitate data analysis. Furthermore, in such use cases, the physical environment often plays an important role, both by directly influencing the data and by serving as context for the analysis. Therefore, there has been a trend to bring data visualization into new, immersive environments and to make use of the physical surroundings, leading to a surge in mixed-reality visualization research. One of the resulting challenges, however, is the design of user interaction for these often complex systems. In my thesis, I address this challenge by investigating interaction for immersive mixed-reality visualizations regarding three core research questions: 1) What are promising types of immersive mixed-reality visualizations, and how can advanced interaction concepts be applied to them? 2) How does spatial interaction benefit these visualizations and how should such interactions be designed? 3) How can spatial interaction in these immersive environments be analyzed and evaluated? To address the first question, I examine how various visualizations such as 3D node-link diagrams and volume visualizations can be adapted for immersive mixed-reality settings and how they stand to benefit from advanced interaction concepts. For the second question, I study how spatial interaction in particular can help to explore data in mixed reality. There, I look into spatial device interaction in comparison to touch input, the use of additional mobile devices as input controllers, and the potential of transparent interaction panels. 
Finally, to address the third question, I present my research on how user interaction in immersive mixed-reality environments can be analyzed directly in the original, real-world locations, and how this can provide new insights. Overall, with my research, I contribute interaction and visualization concepts, software prototypes, and findings from several user studies on how spatial interaction techniques can support the exploration of immersive mixed-reality visualizations.

    PainDroid: An android-based virtual reality application for pain assessment

    Earlier studies in the field of pain research suggest that few efficient interventions currently exist in response to the exponential increase in the prevalence of pain. In this paper, we present an Android application (PainDroid) with multimodal functionality that could be enhanced with Virtual Reality (VR) technology, designed for the purpose of improving the assessment of this notoriously difficult medical concern. PainDroid has been evaluated for its usability and acceptability with a pilot group of potential users and clinicians, with initial results suggesting that it can be an effective and usable tool for improving the assessment of pain. Participant experiences indicated that the application was easy to use, and its potential was similarly appreciated by the clinicians involved in the evaluation. Our findings may be of considerable interest to healthcare providers, policy makers, and other parties actively involved in the area of pain and VR research.

    The Work Avatar Face-Off: Knowledge Worker Preferences for Realism in Meetings

    While avatars have grown in popularity in social settings, their use in the workplace is still debatable. We conducted a large-scale survey to evaluate knowledge worker sentiment towards avatars, particularly the effects of realism on their acceptability for work meetings. Our survey of 2,509 knowledge workers from multiple countries rated five avatar styles for use by managers, known colleagues, and unknown colleagues. In all scenarios, participants favored higher realism, but fully realistic avatars were sometimes perceived as uncanny. Less realistic avatars were rated worse when interacting with an unknown colleague or manager than with a known colleague. Avatar acceptability varied by country, with participants from the United States and South Korea rating avatars more favorably. We supplemented our quantitative findings with a thematic analysis of open-ended responses to provide a comprehensive understanding of the factors influencing work avatar choices. In conclusion, our results show that realism had a significant positive correlation with acceptability. Non-realistic avatars were seen as fun and playful, but only suitable for occasional use. (10 pages; accepted at the ISMAR 2023 conference.)
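    The correlation reported in this abstract can be illustrated with a small sketch. The ratings below are made up for illustration (the survey data are not reproduced here); only the shape of the relationship, including the dip at full realism noted as "uncanny", mirrors the abstract's findings.

    ```python
    def pearson_r(xs, ys):
        """Pearson correlation coefficient between two equal-length sequences."""
        n = len(xs)
        mx, my = sum(xs) / n, sum(ys) / n
        cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
        sx = sum((x - mx) ** 2 for x in xs) ** 0.5
        sy = sum((y - my) ** 2 for y in ys) ** 0.5
        return cov / (sx * sy)

    # Hypothetical mean ratings for five avatar styles, least to most realistic.
    realism = [1, 2, 3, 4, 5]
    acceptability = [2.1, 2.8, 3.4, 4.0, 3.7]  # slight dip at full realism
    ```

    Even with the uncanny-valley dip at the most realistic style, `pearson_r(realism, acceptability)` comes out strongly positive, consistent with the abstract's conclusion.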

    Merging the Real and the Virtual: An Exploration of Interaction Methods to Blend Realities

    We investigate, build, and design interaction methods to merge the real with the virtual. An initial investigation looks at spatial augmented reality (SAR) and its effects on pointing with a real mobile phone. A study reveals a set of trade-offs between the raycast, viewport, and direct pointing techniques. To further investigate the manipulation of virtual content within a SAR environment, we design an interaction technique that utilizes the distance at which a user holds a mobile phone away from their body. Our technique enables users to push virtual content from a mobile phone to an external SAR environment, interact with that content, rotate, scale, and translate it, and pull the content back into the mobile phone. This is all done in a way that ensures seamless transitions between the real environment of the mobile phone and the virtual SAR environment. To investigate the issues that occur when the physical environment is hidden by a fully immersive virtual reality (VR) HMD, we design and investigate a system that merges a real-time 3D reconstruction of the real world with a virtual environment. This allows users to freely move, manipulate, observe, and communicate with people and objects situated in their physical reality without losing their sense of immersion or presence inside a virtual world. A study with VR users demonstrates the affordances provided by the system and how it can be used to enhance current VR experiences. We then move to AR, to investigate the limitations of optical see-through HMDs and the problem of communicating the internal state of the virtual world to unaugmented users. To address these issues and enable new ways to visualize, manipulate, and share virtual content, we propose a system that combines an optical see-through HMD with a wearable SAR projector. Demonstrations showcase ways to utilize the projected and head-mounted displays together, such as expanding the field of view, distributing content across depth surfaces, and enabling bystander collaboration.
We then turn to videogames to investigate how spectatorship of these virtual environments can be enhanced through expanded video rendering techniques. We extract and combine additional data to form a cumulative 3D representation of the live game environment for spectators, which enables each spectator to individually control a personal view into the stream while in VR. A study shows that users prefer spectating in VR compared with a comparable desktop rendering.
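    The push/pull technique described above can be sketched as a small state machine driven by the phone-to-body distance. All names and thresholds here are illustrative assumptions, not the authors' implementation; the hysteresis gap between the two thresholds prevents content from flickering between the phone and the SAR environment.

    ```python
    PUSH_THRESHOLD_M = 0.45  # extending the arm beyond this pushes content out
    PULL_THRESHOLD_M = 0.25  # retracting closer than this pulls content back

    class ContentTransfer:
        """Tracks whether content lives on the phone or in the SAR environment."""

        def __init__(self):
            self.location = "phone"  # "phone" or "sar"

        def update(self, phone_body_distance_m: float) -> str:
            """Update content location from the measured phone-to-body distance."""
            if self.location == "phone" and phone_body_distance_m > PUSH_THRESHOLD_M:
                self.location = "sar"    # hand content off to the projected space
            elif self.location == "sar" and phone_body_distance_m < PULL_THRESHOLD_M:
                self.location = "phone"  # content returns to the handheld display
            return self.location
    ```

    Between the two thresholds the content stays where it is, so small hand tremors during rotate-scale-translate manipulation do not trigger an unintended transfer.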

    Senseable Spaces: from a theoretical perspective to the application in augmented environments

    Given the tremendous growth of ubiquitous services in our daily lives, during the last few decades we have witnessed a definitive change in the way users experience their surroundings. At the current state of the art, devices are able to sense the environment and users' location, enabling improved digital services and creating a synergistic loop between the use of the technology and the use of the space itself. We coined the term Senseable Space to define the kinds of spaces able to provide users with contextual services, to measure and analyse their dynamics, and to react accordingly, in a seamless exchange of information. Following the paradigm of Senseable Spaces as the main thread, we selected a set of experiences carried out in different fields; central to this investigation is, of course, the user, placed in the dual roles of end-user and manager. The main contribution of this thesis lies in the definition of this new paradigm, realized in the following domains: Cultural Heritage, Public Open Spaces, Geosciences, and Retail. For the Cultural Heritage panorama, different pilot projects were constructed, from creating museum-based installations to developing mobile applications for archaeological settings. Dealing with urban areas, app-based services are designed to facilitate route finding in an urban park and to provide contextual information at a city festival. We also outlined a novel application to facilitate on-site inspection by risk managers through the use of Augmented Reality services. Finally, a robust indoor localization system has been developed, designed to ease customer profiling in the retail sector.
The thesis also demonstrates how Space Sensing and Geomatics are complementary to one another, given the assumption that the branches of Geomatics cover all the different scales of data collection, whilst Space Sensing gives one the possibility to provide the services at the correct location, at the correct time.
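    A common building block for indoor localization systems like the one mentioned above is converting beacon signal strength (RSSI) to distance and combining several beacons into a position estimate. The sketch below is illustrative only, not the thesis implementation: the log-distance path-loss model, the calibration power, and the weighted-centroid combination are standard assumed ingredients.

    ```python
    def rssi_to_distance(rssi_dbm, tx_power_dbm=-59, path_loss_exp=2.0):
        """Log-distance path-loss model: estimated distance in metres.

        tx_power_dbm is the calibrated RSSI at 1 m; path_loss_exp ~2 in free
        space, higher in cluttered retail interiors (assumed values).
        """
        return 10 ** ((tx_power_dbm - rssi_dbm) / (10 * path_loss_exp))

    def weighted_centroid(beacons):
        """Estimate (x, y) from [((x, y), rssi_dbm), ...]; nearer beacons weigh more."""
        weighted = [((x, y), 1.0 / rssi_to_distance(rssi)) for (x, y), rssi in beacons]
        total = sum(w for _, w in weighted)
        x = sum(p[0] * w for p, w in weighted) / total
        y = sum(p[1] * w for p, w in weighted) / total
        return x, y
    ```

    With a strong reading from a beacon at the origin and weaker readings from two beacons 10 m away, the estimate lands near, but not exactly at, the strongest beacon; production systems typically add filtering (e.g. a Kalman filter) on top of such raw estimates.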

    Grand Challenges in Immersive Analytics

    The definitive version will be published at CHI 2021, May 8–13, 2021, Yokohama, Japan. Immersive Analytics is a quickly evolving field that unites several areas, such as visualisation, immersive environments, and human-computer interaction, to support human data analysis with emerging technologies. This research has thrived over the past years with multiple workshops, seminars, and a growing body of publications spanning several conferences. Given the rapid advancement of interaction technologies and novel application domains, this paper aims toward a broader research agenda to enable widespread adoption. We present 17 key research challenges developed over multiple sessions by a diverse group of 24 international experts, initiated from a virtual scientific workshop at ACM CHI 2020. These challenges aim to coordinate future work by providing a systematic roadmap of current directions and impending hurdles to facilitate productive and effective applications for Immersive Analytics.

    Presence and agency in real and virtual spaces: The promise of extended reality for language learning

    Augmented and virtual realities (together, "extended reality") offer language learners the opportunity to communicate and interact in real and virtual spaces. In augmented reality (AR), users view computer-generated layers added to a phone camera's view of the world. Virtual reality (VR) immerses users in a 3D environment that might simulate aspects of the outside world or project an entirely imagined reality. This column looks at opportunities and challenges in the use of extended reality (XR) for second language learning. Opportunities include higher learner motivation and personal agency through XR uses that feature collaboration and open-ended interactions, particularly in simulations, games, and learner co-design. That direction aligns more closely with current theories of second language acquisition (SLA), which emphasize holistic language development and ecological frameworks, than most commercial VR apps currently available, which posit linear language development and focus largely on vocabulary learning and language practice within closed role-play scenarios. Offering both AR and VR access, mixed reality may present opportunities to combine the best features of each medium. Advances in generative artificial intelligence (AI) provide additional possibilities for personalized language learning in a flexible and dynamic VR environment.