MIME: A mixed-space collaborative system with three immersion levels and multiple users
Shared spaces for remote collaboration are now feasible across a wide variety of users, devices, immersion systems, interaction capabilities and navigation paradigms. Substantial research has been done along this line, proposing different solutions; however, a more general solution that accounts for the heterogeneity of the actors and devices involved is still lacking. In this paper, we present MIME, a mixed-space tri-collaborative system. Unlike other mixed-space systems, MIME considers three different types of users (in different locations) according to their level of immersion in the system, all of whom can interact simultaneously, which is what we call tri-collaboration. For all three types, we provide a means to navigate, point at objects and locations, and make annotations, while each user can see a virtual representation of the other users. Additionally, the total number of users that can interact with the system simultaneously is restricted only by the available hardware, i.e., several users of the same type can be connected to the system at once. A preliminary study conducted at the laboratory level shows that MIME is a promising tool that can be applied in many real cases for different purposes.
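The abstract does not give implementation details, so the following is only a minimal sketch of the kind of session model a tri-collaborative system implies: three immersion levels, arbitrarily many users per level, shared annotations and avatar positions. All names are illustrative assumptions, not MIME's actual API.

```typescript
// Hypothetical session model for a tri-collaborative system.
// Names and structure are assumptions, not MIME's actual API.

// The three levels of immersion described in the abstract.
type ImmersionLevel = "fully-immersive" | "semi-immersive" | "non-immersive";

interface User {
  id: string;
  level: ImmersionLevel;
  position: [number, number, number]; // used to render this user's avatar for the others
}

interface Annotation {
  authorId: string;
  anchor: [number, number, number]; // world-space point being annotated or pointed at
  text: string;
}

// A session holds any number of users per type; the abstract notes the only
// limit on simultaneous users is the available hardware.
interface Session {
  users: Map<string, User>;
  annotations: Annotation[];
}
```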
On a First Evaluation of ROMOT – A RObotic 3D MOvie Theatre – For Driving Safety Awareness
In this paper, we introduce ROMOT, a RObotic 3D MOvie Theatre, and present a case study related to driving safety. ROMOT is built on a robotic motion platform, includes multimodal devices, and supports audience-film interaction. We show the versatility of the system by means of different system setups and generated content, including a first-person movie and others involving virtual, augmented, and mixed reality technologies. Finally, we present the results of preliminary user tests carried out at the laboratory level, including the System Usability Scale, which give satisfactory scores both for the usability of the system and for the individual's satisfaction.
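For reference, the System Usability Scale mentioned here (and in the TinajAR evaluation below) is scored with a fixed, well-known procedure (Brooke, 1996); the sketch below shows that standard computation and is not code from the paper.

```typescript
// Standard SUS scoring (Brooke, 1996): ten items, each rated 1-5.
// Odd-numbered items contribute (score - 1), even-numbered items (5 - score);
// the sum is multiplied by 2.5 to yield a 0-100 usability score.
function susScore(responses: number[]): number {
  if (responses.length !== 10) throw new Error("SUS has exactly 10 items");
  const sum = responses.reduce(
    (acc, score, i) => acc + (i % 2 === 0 ? score - 1 : 5 - score),
    0
  );
  return sum * 2.5;
}

// Example: a fairly positive response pattern scores 80 out of 100.
console.log(susScore([4, 2, 4, 2, 5, 1, 4, 2, 4, 2])); // 80
```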
A Collaborative Augmented Reality Annotation Tool for the Inspection of Prefabricated Buildings
The inspection of prefabricated buildings involves different stages and tasks, such as the collection of measurements, the visual inspection of components and the written annotation of defects. Traditionally, inspectors have documented the process, the kinds of defects and the proposed correction measures on paper, hindering collaboration with other experts (whether simultaneous or asynchronous) and the collection of other types of annotations (e.g. images, 3D elements). In this paper, we present an AR tool designed to aid inspectors during this process. The tool has many benefits: it allows a collaborative inspection to be performed simultaneously, multi-type and geolocated annotations to be taken, monitored and edited, and in situ augmented visualizations to be performed. The quantitative and qualitative user evaluation carried out with our tool in a real environment (including usability and satisfaction evaluations) shows the relevance that such a technology might bring to the field and proves that our tool is usable and fulfils most of the inspectors' expectations.
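As a rough illustration of what a multi-type, geolocated, editable annotation might look like, consider the hypothetical record below; the field names and status values are assumptions for the sketch, not the tool's actual schema.

```typescript
// Hypothetical shape of a multi-type, geolocated inspection annotation.
// Field names are illustrative assumptions, not the tool's actual schema.

// The abstract mentions text, image and 3D-element annotations.
type AnnotationPayload =
  | { kind: "text"; body: string }
  | { kind: "image"; uri: string }
  | { kind: "model3d"; uri: string };

interface InspectionAnnotation {
  id: string;
  author: string;
  createdAt: Date;
  location: { x: number; y: number; z: number }; // geolocated anchor within the building
  payload: AnnotationPayload;
  status: "open" | "in-review" | "resolved"; // enables later monitoring and editing
}
```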
TinajAR: An Edutainment Augmented Reality Mirror for the Dissemination and Reinterpretation of Cultural Heritage
The use of augmented reality (AR) in cultural heritage (CH) applications opens a whole set of possibilities, including the virtual transformation of CH elements. This paper presents TinajAR, a mirror-based AR application designed to serve both as an edutainment application in the field of CH and as an artistic expression. As an edutainment application, TinajAR features a multi-marker, video-based AR experience designed to show virtual ceramic pieces and explain the pottery process through virtual avatars. As an artistic expression, TinajAR seeks to reinterpret an ancient type of cellar called a calado, which was used in the past for storing wine in northern Spain; the reinterpretation consists of giving a different but meaningful use to the space. TinajAR was used by around 1800 people during a ceramics exhibition in La Rioja, Spain, and was assessed at the satisfaction level with 56 users by means of the System Usability Scale, yielding very satisfactory results.
Cross-Device Augmented Reality Annotations Method for Asynchronous Collaboration in Unprepared Environments
Augmented Reality (AR) annotations are a powerful means of communication when collaborators cannot be present in a given environment at the same time. However, this situation presents several challenges, for example: how to record the AR annotations for later consumption, how to align the virtual and real worlds in unprepared environments, and how to offer the annotations to users with different AR devices. In this paper, we present a cross-device AR annotation method that allows users to create and display annotations asynchronously in environments without the need for prior preparation (AR markers, point cloud capture, etc.). This is achieved through an easy user-assisted calibration process and a data model that allows any type of annotation to be stored on any device. The experimental study carried out with 40 participants verified our two hypotheses: AR annotations can be visualized in indoor environments without prior preparation, regardless of the device used, and the overall usability of the system is satisfactory.
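The abstract does not describe its data model or calibration procedure in detail, so the following is only a simplified sketch of the general idea: annotations are stored relative to a shared reference frame, and a user-assisted step tells each device where that frame sits in its own local coordinates. Rotation alignment is omitted for brevity, and all names are assumptions rather than the paper's actual model.

```typescript
// Simplified, hypothetical model for device-agnostic AR annotations.
// Names are assumptions, not the paper's actual data model.

interface CrossDeviceAnnotation {
  id: string;
  // Stored relative to a shared reference frame, so any calibrated
  // AR device can place the annotation correctly.
  position: [number, number, number];
  rotation: [number, number, number, number]; // quaternion
  content: unknown; // "any type of annotation", serialized per its own schema
}

// After user-assisted calibration, each device knows the origin of the
// shared frame in its own local coordinates. This helper maps a local
// point into the shared frame (ignoring rotation for brevity).
function toSharedFrame(
  localPoint: [number, number, number],
  sharedOriginInLocal: [number, number, number]
): [number, number, number] {
  return [
    localPoint[0] - sharedOriginInLocal[0],
    localPoint[1] - sharedOriginInLocal[1],
    localPoint[2] - sharedOriginInLocal[2],
  ];
}
```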