
    Nintendo Wii Remote Controller in Higher Education: Development and Evaluation of a Demonstrator Kit for e-Teaching

    The increasing availability of game-based technologies, together with advances in Human-Computer Interaction (HCI) and usability engineering, presents new challenges and opportunities for virtual environments in the context of e-Teaching. Consequently, an evident trend is to offer learners the equivalent of practical learning experiences while supporting creativity for both teachers and learners. Market surveys have shown, perhaps surprisingly, that the Wii remote controller (Wiimote) is more widely distributed than standard PCs and is the most used computer input device worldwide; given its collection of sensors, accelerometers, and Bluetooth technology, this makes it of great interest for HCI experiments in e-Learning/e-Teaching. In this paper, we discuss the importance of gestures for teaching and describe the design and development of a low-cost Wiimote-based demonstrator kit that enhances the quality of lecturing with gestures.
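    Since the kit builds on the Wiimote's accelerometers, a minimal sketch of threshold-based gesture segmentation over accelerometer samples may help illustrate the idea. The sample format, gravity offset, and threshold below are assumptions for illustration; a real kit would read samples over Bluetooth rather than from a hard-coded list.

```python
import math

GRAVITY = 1.0        # Wiimote reports acceleration in units of g
THRESHOLD = 0.6      # assumed: net acceleration (in g) that marks a gesture

def magnitude(sample):
    """Euclidean magnitude of one (x, y, z) accelerometer sample."""
    x, y, z = sample
    return math.sqrt(x * x + y * y + z * z)

def detect_gesture_frames(samples):
    """Return indices of samples whose net (gravity-removed) acceleration
    exceeds the threshold, i.e. candidate gesture frames."""
    return [i for i, s in enumerate(samples)
            if abs(magnitude(s) - GRAVITY) > THRESHOLD]

# Simulated stream: at rest (about 1 g) with one energetic swing in the middle.
stream = [(0.0, 0.0, 1.0)] * 5 + [(1.4, 0.3, 1.1)] * 3 + [(0.0, 0.0, 1.0)] * 5
print(detect_gesture_frames(stream))   # -> [5, 6, 7]
```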

    Multisensory 360 videos under varying resolution levels enhance presence

    Omnidirectional videos have become a leading multimedia format for Virtual Reality applications. While live 360° videos offer a unique immersive experience, streaming omnidirectional content at high resolutions is not always feasible in bandwidth-limited networks. Whereas flat videos scale well to lower resolutions, 360° video quality is seriously degraded because of the short viewing distances involved in head-mounted displays. Hence, in this paper, we first investigate how quality degradation impacts the sense of presence in immersive Virtual Reality applications. Then, we push the boundaries of 360° technology through enhancement with multisensory stimuli. Forty-eight participants experienced both 360° scenarios (with and without multisensory content) and were divided randomly among four conditions characterised by different encoding qualities (HD, FullHD, 2.5K, 4K). The results showed that presence is not mediated by streaming at a higher bitrate. However, the trend we identified revealed that presence is positively and significantly impacted by the enhancement with multisensory content. This shows that multisensory technology is crucial in creating more immersive experiences.
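    A back-of-the-envelope sketch may clarify why downscaling hurts 360° video more than flat video: an equirectangular frame spreads its width over the full 360°, so only a fraction of the pixels land inside the headset's field of view. The resolution labels follow the paper's four conditions; the 100° field of view is an assumed, typical HMD value, not a figure from the study.

```python
CONDITIONS = {          # encoding quality -> horizontal pixels (equirectangular)
    "HD": 1280,
    "FullHD": 1920,
    "2.5K": 2560,
    "4K": 3840,
}
HMD_FOV_DEG = 100       # assumed horizontal field of view of the headset

for label, width in CONDITIONS.items():
    ppd = width / 360                   # pixels per degree of the panorama
    visible = ppd * HMD_FOV_DEG         # pixels actually inside the viewport
    print(f"{label:7s} {ppd:4.1f} px/deg, ~{visible:4.0f} px across the FOV")

# HD yields only ~3.6 px/deg (~356 px across 100°), far below what the same
# 1280-px frame delivers when viewed as a flat video at a normal distance.
```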

    Mutual Gaze Support in Videoconferencing Reviewed

    Videoconferencing allows geographically dispersed parties to communicate by simultaneous audio and video transmissions. It is used in a variety of application scenarios with a wide range of coordination needs and efforts, such as private chats, discussion meetings, and negotiation tasks. In particular, in scenarios requiring certain levels of trust and judgement, non-verbal communication cues are highly important for effective communication. Mutual gaze support plays a central role in these high-coordination-need scenarios but generally lacks adequate technical support from videoconferencing systems. In this paper, we review technical concepts and implementations for mutual gaze support in videoconferencing, classify them, evaluate them according to a defined set of criteria, and give recommendations for future developments. Our review gives decision makers, researchers, and developers a tool to systematically apply and further develop videoconferencing systems in serious settings requiring mutual gaze. This should lead to well-informed decisions regarding the use and development of this technology and to a more widespread exploitation of the benefits of videoconferencing in general. For example, if videoconferencing systems supported high-quality mutual gaze in an easy-to-set-up and easy-to-use way, we could hold more effective and efficient recruitment interviews, court hearings, or contract negotiations.
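    A hypothetical sketch of the kind of criteria-based comparison such a review enables: each approach is scored against a set of criteria and ranked by a weighted sum. The criteria names, weights, and scores below are illustrative placeholders, not the paper's actual evaluation data.

```python
CRITERIA_WEIGHTS = {"gaze_accuracy": 0.4, "setup_effort": 0.3, "cost": 0.3}

SYSTEMS = {  # approach -> score per criterion on a 0..1 scale (illustrative)
    "half-silvered mirror": {"gaze_accuracy": 0.9, "setup_effort": 0.3, "cost": 0.4},
    "software gaze correction": {"gaze_accuracy": 0.6, "setup_effort": 0.9, "cost": 0.9},
}

def weighted_score(scores):
    """Aggregate one system's per-criterion scores into a single ranking value."""
    return sum(CRITERIA_WEIGHTS[c] * v for c, v in scores.items())

for name, scores in sorted(SYSTEMS.items(), key=lambda kv: -weighted_score(kv[1])):
    print(f"{name:25s} {weighted_score(scores):.2f}")
```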

    Multi-party holomeetings: toward a new era of low-cost volumetric holographic meetings in virtual reality

    Fueled by advances in multi-party communications, the adoption of increasingly mature immersive technologies, and the COVID-19 pandemic, a new wave of social virtual reality (VR) platforms has emerged to support socialization, interaction, and collaboration among multiple remote users who are integrated into shared virtual environments. Social VR aims to increase levels of (co-)presence and interaction quality by overcoming the limitations of 2D windowed representations in traditional multi-party video conferencing tools, although most existing solutions rely on 3D avatars to represent users. This article presents a social VR platform that supports real-time volumetric holographic representations of users based on point clouds captured by off-the-shelf RGB-D sensors, and it analyzes the platform's potential for conducting interactive holomeetings (i.e., holoconferencing scenarios). This work evaluates the platform's performance and readiness for conducting meetings with up to four users, and it provides insights into aspects of the user experience when using single-camera and low-cost capture systems in scenarios with both frontal and side viewpoints. Overall, the obtained results confirm the platform's maturity and the potential of holographic communications for conducting interactive multi-party meetings, even when using low-cost, single-camera capture systems in scenarios where users are sitting or have limited translational movement along the X, Y, and Z axes within the 3D virtual environment (commonly known as 3 Degrees of Freedom plus, 3DoF+).
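    A minimal sketch of the core step behind RGB-D volumetric capture of this kind: back-projecting a depth image to a 3D point cloud with the pinhole camera model. The intrinsics below are placeholder values for a 640×480 sensor, not those of the platform's actual off-the-shelf cameras.

```python
import numpy as np

FX, FY = 525.0, 525.0      # assumed focal lengths in pixels
CX, CY = 319.5, 239.5      # assumed principal point for a 640x480 sensor

def depth_to_point_cloud(depth_m):
    """Convert an HxW depth map (metres) to an Nx3 array of XYZ points."""
    h, w = depth_m.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))   # pixel coordinates
    z = depth_m
    x = (u - CX) * z / FX                            # pinhole back-projection
    y = (v - CY) * z / FY
    points = np.stack([x, y, z], axis=-1).reshape(-1, 3)
    return points[points[:, 2] > 0]                  # drop pixels with no depth

depth = np.full((480, 640), 1.5)   # synthetic flat wall 1.5 m away
cloud = depth_to_point_cloud(depth)
print(cloud.shape)                 # (307200, 3)
```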

    Designing passenger experiences for in-car Mixed Reality

    In day-to-day life, people spend a considerable amount of their time on the road. People seek to invest travel time in work and well-being through interaction with mobile and multimedia applications on personal devices such as smartphones and tablets. However, for new computing paradigms such as mobile mixed reality (MR), usefulness in this everyday transport context, namely in-car MR, remains challenging. When future passengers immerse themselves in three-dimensional virtual environments, they become increasingly disconnected from the cabin space, vehicle motion, and other people around them. This degraded awareness of the real environment endangers the passenger experience on the road, which motivates this thesis to ask: can immersive technology become useful in the everyday transport context, such as in in-car scenarios? If so, how should we design in-car MR technology to foster passenger access and connectedness to both physical and virtual worlds, ensuring ride safety, comfort, and joy? To this aim, this thesis contributes in three aspects. 1) Understanding passenger use of in-car MR: first, I present a model for in-car MR interaction derived through user research. As interviews with daily commuters reveal, passengers are concerned about their physical integrity when facing spatial conflicts between borderless virtual environments and the confined cabin space. From this, the model aims to help researchers spatially organize information and understand how user interfaces vary with proximity to the user. Additionally, a field experiment reveals contextual feedback about motion sickness when using immersive technology on the road. This helps refine the model and informs the subsequent experiments. 2) Mixing realities in car rides: second, this thesis explores a series of prototypes and experiments to examine how in-car MR technology can enable passengers to feel present in virtual environments while maintaining awareness of the real environment. The results demonstrate technical solutions for physical integrity and situational awareness that incorporate essential elements of the real environment into virtual reality. The empirical evidence adds a set of dimensions to the in-car MR model, guiding design decisions for mixing realities. 3) Transcending the transport context: third, I extend the model to other everyday contexts beyond transport that share spatial and social constraints, such as the confined and shared living space at home. A literature review consolidates how daily physical objects can be leveraged as haptic feedback for MR interaction across spatial scales. A laboratory experiment shows how context-aware MR systems that consider physical configurations can support social interaction with copresent others in close shared spaces. These results substantiate the scalability of the in-car MR model to other contexts. Finally, I conclude with a holistic model for mobile MR interaction across everyday contexts, from home to on the road. With my user research, prototypes, empirical evaluation, and model, this thesis paves the way for understanding the future passenger use of immersive technology, addressing today's technical limitations of MR in mobile interaction, and ultimately fostering mobile users' ubiquitous access and close connectedness to MR anytime and anywhere in their daily lives.
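    A minimal sketch of the spatial-conflict idea the thesis raises: virtual content placed beyond the cabin's physical bounds risks the passenger's physical integrity, so a system can clamp it back inside the reachable volume. The cabin dimensions and safety margin below are assumed values for illustration, not measurements from the thesis.

```python
CABIN_MIN = (-0.6, -0.5, 0.2)   # assumed cabin-space bounds in metres
CABIN_MAX = (0.6, 0.5, 1.2)     # (x: left/right, y: down/up, z: forward)
MARGIN = 0.05                   # keep UI slightly clear of hard surfaces

def clamp_to_cabin(pos):
    """Clamp a requested virtual-UI position into the reachable cabin volume."""
    return tuple(
        min(max(p, lo + MARGIN), hi - MARGIN)
        for p, lo, hi in zip(pos, CABIN_MIN, CABIN_MAX)
    )

print(clamp_to_cabin((0.0, 0.1, 0.8)))   # already inside -> unchanged
print(clamp_to_cabin((1.5, 0.0, 2.0)))   # beyond the windshield -> clamped
```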

    Design and Development of a Multi-Sided Tabletop Augmented Reality 3D Display Coupled with Remote 3D Imaging Module

    This paper proposes a tabletop augmented reality (AR) 3D display paired with a remote 3D image capture setup that can provide three-dimensional AR visualization of remote objects or persons in real time. The front-side view is presented in stereo-3D format, while the left-side and right-side views are visualized in 2D format. Transparent glass surfaces are used to demonstrate the volumetric 3D augmentation of the captured object. The developed AR display prototype mainly consists of four 40 × 30 cm² LCD panels, 54% partially reflective glass, an in-house developed housing assembly, and a processing unit. The capture setup consists of four 720p cameras to capture the front-side stereo view and both the left- and right-side views. Real-time remote operation is demonstrated by connecting the display and imaging units through the Internet. Various system characteristics, such as range of viewing angle, stereo crosstalk, polarization perseverance, frame rate, and the amount of light reflected and transmitted through the partially reflective glass, were examined. The demonstrated system provided 35% optical transparency and less than 4% stereo crosstalk within a viewing angle of ±20 degrees. An average frame rate of 7.5 frames per second was achieved when the resolution per view was 240 × 240 pixels.
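    A back-of-the-envelope sketch of the uncompressed bandwidth the remote link would need at the reported operating point: four camera views at 240 × 240 pixels and 7.5 frames per second. The 3-bytes-per-pixel (24-bit RGB) figure is an assumption; the abstract does not state the pixel format.

```python
VIEWS = 4                      # front stereo pair plus left and right side views
WIDTH = HEIGHT = 240           # reported resolution per view
FPS = 7.5                      # reported average frame rate
BYTES_PER_PIXEL = 3            # assumed 24-bit RGB, no compression

bytes_per_sec = VIEWS * WIDTH * HEIGHT * BYTES_PER_PIXEL * FPS
print(f"{bytes_per_sec / 1e6:.1f} MB/s, {bytes_per_sec * 8 / 1e6:.1f} Mbit/s")
# ~5.2 MB/s (~41.5 Mbit/s) uncompressed, which suggests why practical
# operation over the Internet would rely on video compression.
```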