
    Investigating Social Presence and Communication with Embodied Avatars in Room-Scale Virtual Reality

    Room-scale virtual reality (VR) holds great potential as a medium for communication and collaboration in remote and same-time, same-place settings. Related work has established that movement realism can create a strong sense of social presence, even in the absence of photorealism. Here, we explore the noteworthy attributes of communicative interaction using embodied minimal avatars in room-scale VR in the same-time, same-place setting. To our knowledge, our system is the first in the research community to enable this kind of interaction. We carried out an experiment in which pairs of users performed two activities in contrasting variants: VR vs. face-to-face (F2F), and 2D vs. 3D. Objective and subjective measures were used to compare these conditions, including motion analysis, electrodermal activity, questionnaires, a retrospective think-aloud protocol, and interviews. On the whole, participants communicated effectively in VR to complete their tasks and reported a strong sense of social presence. The system's high-fidelity capture and display of movement appears to have been a key factor in supporting this. Our results confirm some expected shortcomings of VR compared to F2F, but also some non-obvious advantages. The limited anthropomorphic properties of the avatars presented some difficulties, but the impact of these varied widely between the activities. In the 2D vs. 3D comparison, the basic affordance of freehand drawing in 3D was new to most participants, resulting in novel observations and open questions. We also present methodological observations across all conditions concerning the measures that did and did not reveal differences between conditions, including unanticipated properties of the think-aloud protocol applied to VR.
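    The abstract above does not detail the motion analysis, so the following is only an illustrative sketch of the kind of measure such a comparison might use: a windowed correlation of head-movement speed between two tracked users, computed with NumPy from synthetic 90 Hz tracking data. The function names, window length, and data here are assumptions, not the authors' pipeline.

```python
# Illustrative sketch (not the authors' analysis): windowed correlation of
# head-movement speed between two tracked users in a shared session.
import numpy as np

def movement_speed(positions: np.ndarray, dt: float) -> np.ndarray:
    """Frame-to-frame speed from an (N, 3) array of head positions."""
    return np.linalg.norm(np.diff(positions, axis=0), axis=1) / dt

def windowed_synchrony(pos_a: np.ndarray, pos_b: np.ndarray,
                       dt: float = 1 / 90, window: int = 180) -> np.ndarray:
    """Pearson correlation of the two users' speeds in non-overlapping windows."""
    speed_a = movement_speed(pos_a, dt)
    speed_b = movement_speed(pos_b, dt)
    n = min(len(speed_a), len(speed_b))
    scores = []
    for start in range(0, n - window + 1, window):
        a = speed_a[start:start + window]
        b = speed_b[start:start + window]
        scores.append(np.corrcoef(a, b)[0, 1])
    return np.array(scores)

# Example with synthetic 90 Hz tracking data for a 10-second clip.
rng = np.random.default_rng(0)
pos_a = np.cumsum(rng.normal(0, 0.002, (900, 3)), axis=0)  # random-walk head path
pos_b = pos_a + rng.normal(0, 0.001, (900, 3))             # a noisier "partner"
print(windowed_synchrony(pos_a, pos_b).round(2))
```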

    "When the Elephant Trumps": A Comparative Study on Spatial Audio for Orientation in 360° Videos

    Orientation is an emerging issue in cinematic Virtual Reality (VR), as viewers may fail to locate points of interest. Recent strategies to tackle this research problem have investigated the role of cues, specifically diegetic sound effects. In this paper, we examine the use of sound spatialization for orientation purposes, namely by studying different spatialization conditions ("none", "partial", and "full" spatial manipulation) of multitrack soundtracks. We performed a between-subject mixed-methods study with 36 participants, aided by Cue Control, a tool we developed for dynamic spatial sound editing and data collection/analysis. Based on existing literature on orientation cues in 360° video and theories on human listening, we discuss situations in which the spatialization was more effective (namely, "full" spatial manipulation both when using only music and when combining music and diegetic effects), and how this can be used by creators of 360° videos.
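    The Cue Control tool is not described in implementation terms here, so the snippet below is only a minimal, hedged illustration of what spatially manipulating a mono music track can involve in its simplest form: constant-power stereo panning toward the azimuth of a point of interest. The function name and parameters are hypothetical.

```python
# Minimal illustration (not the Cue Control implementation): constant-power
# panning of a mono track toward a target azimuth.
import numpy as np

def pan_constant_power(mono: np.ndarray, azimuth_deg: float) -> np.ndarray:
    """Return an (N, 2) stereo signal; azimuth -90 = hard left, +90 = hard right."""
    theta = (np.clip(azimuth_deg, -90.0, 90.0) + 90.0) / 180.0 * (np.pi / 2)
    left_gain, right_gain = np.cos(theta), np.sin(theta)
    return np.stack([mono * left_gain, mono * right_gain], axis=1)

# Example: pan a 440 Hz tone toward a point of interest 60 degrees to the right.
sr = 48_000
t = np.arange(sr) / sr
tone = 0.2 * np.sin(2 * np.pi * 440 * t)
stereo = pan_constant_power(tone, azimuth_deg=60)
print(stereo.shape, stereo[:3].round(4))
```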

    An Overview of Capturing Live Experience with Virtual and Augmented Reality

    In this paper, we review the use of virtual and augmented reality technologies for the capture and externalization of tacit knowledge from complex activity in knowledge-intensive professions. We focus on technologies for externalizing the experience hidden in activity, with the aim of boosting industry competitiveness and innovation and facilitating learning on the job. As such types of knowledge and experience are difficult to capture and represent in traditional media, we explore emerging technology along two lines of investigation. First, we look at applications of virtual reality; second, we focus on using sensors, augmented reality, and wearable technologies. We discuss existing and future applications of experience capturing with virtual and augmented reality technologies. This review provides a comprehensive overview for those interested in recording virtual, real, and augmented activities, methods for delivering the recorded data, and extracting knowledge.

    Seamful interweaving: heterogeneity in the theory and design of interactive systems

    Design experience and theoretical discussion suggest that a narrow design focus on one tool or medium as primary may clash with the way that everyday activity involves the interweaving and combination of many heterogeneous media. Interaction may become seamless and unproblematic, even if the differences, boundaries and 'seams' in media are objectively perceivable. People accommodate and take advantage of seams and heterogeneity, in and through the process of interaction. We use an experiment with a mixed reality system to ground and detail our discussion of seamful design, which takes account of this process, and theory that reflects and informs such design. We critique the 'disappearance' mentioned by Weiser as a goal for ubicomp, and Dourish's 'embodied interaction' approach to HCI, suggesting that these design ideals may be unachievable or incomplete because they underemphasise the interdependence of 'invisible' non-rationalising interaction and focused rationalising interaction within ongoing activity.

    Collaborative Interaction Techniques in Virtual Reality for Emergency Management

    Virtual Reality (VR) technology has found many interesting applications over the last decades. It can be seen in a multitude of industries, from entertainment, education, and tourism to crisis management, among others. Many of them feature collaborative uses of VR technology. This thesis presents the design, development, and evaluation of a multi-user VR system aimed at collaborative use in a crisis scenario, with a real-life wildfire as the use case. The system also features a dual-map interface to display geographical information, providing both two-dimensional and three-dimensional views of the region and of data relevant to the scenario. The main goals of this thesis are to understand how people can collaborate in VR, to test which interface is preferred, and to determine which kinds of notification mechanisms are more user-friendly. The Virtual Environment (VE) displays relevant geo-located information, such as roads, towns, vehicles, and the wildfire itself, in a dual-map setup, in two and three dimensions. Users are able to share the environment and, simultaneously, use the available tools to interact with the maps and communicate with each other, while controlling the wildfire playback time to understand how it propagates. Actions such as drawing, measuring distances, directing vehicles, and notifying other users are available. Users can also propose actions that can then be accepted or rejected. Eighteen subjects took part in a user study to evaluate the application. Participants were asked to perform several tasks, using the tools available, while sharing the environment with the researcher. Upon analyzing the data from the testing sessions, it is possible to state that most users agree they would be able to use the system to collaborate. The results also support the presence of both types of map interface, two-dimensional and three-dimensional, as they are objectively better suited for different tasks; subjectively, users expressed a preference for both, depending on the task at hand.
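    The abstract does not spell out the system's data model, so the sketch below is a hypothetical illustration of the propose/accept flow it mentions: one user proposes an action (for example, directing a vehicle), the shared session records it alongside the wildfire playback time, and another user accepts or rejects it. All class and field names are assumptions, not the thesis implementation.

```python
# Hypothetical sketch of the propose/accept flow in a shared crisis-management
# session; not taken from the thesis code.
from dataclasses import dataclass, field
from enum import Enum, auto

class ActionKind(Enum):
    DRAW = auto()
    MEASURE = auto()
    DIRECT_VEHICLE = auto()

class Status(Enum):
    PENDING = auto()
    ACCEPTED = auto()
    REJECTED = auto()

@dataclass
class ProposedAction:
    proposer: str
    kind: ActionKind
    target: str                 # e.g. a vehicle id or map annotation id
    status: Status = Status.PENDING

@dataclass
class SharedSession:
    playback_time_s: float = 0.0              # wildfire playback position
    proposals: list[ProposedAction] = field(default_factory=list)

    def propose(self, action: ProposedAction) -> None:
        self.proposals.append(action)          # would also notify the other user

    def resolve(self, index: int, accept: bool) -> None:
        self.proposals[index].status = Status.ACCEPTED if accept else Status.REJECTED

# Example: one user proposes directing a vehicle; the other accepts it.
session = SharedSession()
session.propose(ProposedAction("user_a", ActionKind.DIRECT_VEHICLE, "firetruck_02"))
session.playback_time_s = 3600.0               # scrub the wildfire playback to +1 h
session.resolve(0, accept=True)
print(session.proposals[0].status.name)
```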

    iTeleScope: Intelligent Video Telemetry and Classification in Real-Time using Software Defined Networking

    Video continues to dominate network traffic, yet operators today have poor visibility into the number, duration, and resolutions of the video streams traversing their domain. Current approaches are inaccurate, expensive, or unscalable, as they rely on statistical sampling, middle-box hardware, or packet inspection software. We present iTelescope, the first intelligent, inexpensive, and scalable SDN-based solution for identifying and classifying video flows in real-time. Our solution is novel in combining dynamic flow rules with telemetry and machine learning, and is built on commodity OpenFlow switches and open-source software. We develop a fully functional system, train it in the lab using multiple machine learning algorithms, and validate its performance, showing over 95% accuracy in identifying and classifying video streams from many providers, including YouTube and Netflix. Lastly, we conduct tests to demonstrate its scalability to tens of thousands of concurrent streams, and deploy it live on a campus network serving several hundred real users. Our system gives operators of enterprise and carrier networks unprecedented fine-grained real-time visibility into video streaming performance at very low cost.
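    The abstract names the ingredients (dynamic flow rules, telemetry, machine learning) but not the exact features or models, so the following is a hedged sketch of the classification step only: a scikit-learn classifier over hypothetical per-flow telemetry features such as bytes and packets per polling interval. It is not the iTelescope pipeline.

```python
# Hedged sketch: classifying flows as video vs. non-video from per-flow
# telemetry snapshots (byte/packet counters polled from OpenFlow flow rules).
# Feature names and the model choice here are assumptions.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Each row: [mean_bytes_per_interval, std_bytes_per_interval,
#            mean_packets_per_interval, active_intervals]
X_train = np.array([
    [410_000, 120_000, 340, 28],   # video-like: high, bursty downstream volume
    [380_000,  90_000, 310, 30],
    [  4_000,   1_500,  25, 12],   # non-video: light, steady traffic
    [  9_000,   3_000,  40,  9],
])
y_train = np.array([1, 1, 0, 0])   # 1 = video stream, 0 = other traffic

clf = RandomForestClassifier(n_estimators=50, random_state=0)
clf.fit(X_train, y_train)

# Telemetry snapshot for a new flow pulled from the switch's flow counters.
new_flow = np.array([[395_000, 110_000, 325, 26]])
print("video" if clf.predict(new_flow)[0] == 1 else "other")
```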

    See or Hear? Exploring the Effect of Visual and Audio Hints and Gaze-assisted Task Feedback for Visual Search Tasks in Augmented Reality

    Augmented reality (AR) is emerging in visual search tasks for increasingly immersive interactions with virtual objects. We propose an AR approach that provides visual and audio hints, along with gaze-assisted instant post-task feedback, for search tasks on a mobile head-mounted display (HMD). The target case was a book-searching task, in which we aimed to explore the effect of the hints together with the task feedback under two hypotheses. H1: since visual and audio hints can each positively affect AR search tasks, their combination outperforms either hint alone. H2: gaze-assisted instant post-task feedback can positively affect AR search tasks. The proof of concept was demonstrated by an AR app on the HMD and a comprehensive user study (n=96) consisting of two sub-studies: Study I (n=48) without task feedback and Study II (n=48) with task feedback. Following quantitative and qualitative analysis, our results partially verified H1 and completely verified H2, enabling us to conclude that the synthesis of visual and audio hints conditionally improves AR visual search task efficiency when coupled with task feedback.
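    Gaze-assisted feedback of this kind has to decide whether the wearer's gaze reached the target book; the snippet below is an illustrative ray-versus-bounding-box test for that sort of check, written with NumPy. The geometry, values, and function name are assumptions rather than the study's HMD implementation.

```python
# Illustrative sketch (not the study's HMD app): does the gaze ray hit the
# target book's axis-aligned bounding box? Geometry values are made up.
import numpy as np

def gaze_hits_aabb(origin: np.ndarray, direction: np.ndarray,
                   box_min: np.ndarray, box_max: np.ndarray) -> bool:
    """Slab test: does the gaze ray intersect the axis-aligned box?"""
    direction = direction / np.linalg.norm(direction)
    inv = 1.0 / np.where(direction == 0, 1e-9, direction)
    t1 = (box_min - origin) * inv
    t2 = (box_max - origin) * inv
    t_near = np.max(np.minimum(t1, t2))
    t_far = np.min(np.maximum(t1, t2))
    return t_far >= max(t_near, 0.0)

# Example: head at eye height, gazing slightly downward toward a shelf ahead.
head = np.array([0.0, 1.6, 0.0])
gaze = np.array([0.0, -0.1, 1.0])
book_min, book_max = np.array([-0.1, 1.4, 0.45]), np.array([0.1, 1.6, 0.55])
print(gaze_hits_aabb(head, gaze, book_min, book_max))
```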