
    Model and Tools for Integrating IoT into Mixed Reality Environments: Towards a Virtual-Real Seamless Continuum

    This paper introduces a new software model and new tools for managing indoor smart environments (smart homes, smart buildings, smart factories, etc.) with MR technologies. Our fully integrated solution is based on a software model of connected objects that manages them independently of their actual nature: these objects can be simulated or real. Building on this model, our goal is to create a continuum between a real smart environment and its 3D digital twin in order to simulate and manipulate it. We therefore introduce two kinds of tools that leverage this model. First, two complementary tools, one AR and one VR, for creating the digital twin of a given smart environment. Second, 3D interactions and dedicated metaphors for creating automation scenarios in the same VR application. These scenarios are then converted into a Petri-net-based model that expert users can edit later. Adjusting the parameters of our model makes it possible to navigate along the continuum and use the digital twin for simulation, deployment, and real/virtual synchronization. We illustrate these contributions and their benefits through the automation configuration of a room in our lab.
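
    As an illustration of the object abstraction described above, here is a minimal Python sketch of how a connected object could be modeled independently of its real or simulated nature; the class and method names (ConnectedObject, read_state, apply_command) are ours, not the paper's.

```python
from abc import ABC, abstractmethod

class ConnectedObject(ABC):
    """Common abstraction for a smart object, whether real or simulated."""

    @abstractmethod
    def read_state(self) -> dict: ...

    @abstractmethod
    def apply_command(self, command: str, value) -> None: ...

class RealLamp(ConnectedObject):
    """Wraps a physical device behind some IoT protocol (stubbed here)."""
    def __init__(self, device_id: str):
        self.device_id = device_id
    def read_state(self) -> dict:
        return {"on": False}          # would query the real device
    def apply_command(self, command, value):
        pass                          # would send the command over the network

class SimulatedLamp(ConnectedObject):
    """Purely virtual twin living in the 3D scene."""
    def __init__(self):
        self.state = {"on": False}
    def read_state(self) -> dict:
        return dict(self.state)
    def apply_command(self, command, value):
        self.state[command] = value

def synchronize(real: ConnectedObject, twin: ConnectedObject) -> None:
    """Push the real state onto the digital twin (one direction of the continuum)."""
    for key, value in real.read_state().items():
        twin.apply_command(key, value)

lamp, twin = RealLamp("lamp-42"), SimulatedLamp()
synchronize(lamp, twin)   # the twin now mirrors the real lamp's state
```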

    Evaluating Usability and User Experience of AR Applications in VR Simulation

    Validating an augmented reality application in a virtual reality simulation can offer many advantages over testing in real conditions and can speed up development. With such a simulation, developers and designers do not need constant physical access to the real place: they can avoid physical navigation, experiment with different kinds of devices, and isolate testing parameters. While the validity of functional testing in virtual reality simulations is not particularly challenged, whether such simulations can evaluate user experience and usability as reliably as real conditions still needs to be assessed. We therefore conducted a user study to explore the validity of evaluating these criteria with a virtual reality simulation tool and the importance of simulation fidelity for that purpose. In particular, we sought to determine whether it is necessary to simulate the limited field of view of augmented reality glasses and whether the simulation can take place in a virtual world that is not a replica of the targeted real environment. To do so, we developed an augmented reality application for smart homes in which a user can interact with different connected objects. One group of users performed the experiment in the real place with augmented reality glasses, and three other groups performed the same experiment in virtual reality under various simulation conditions (field of view and environment). Users' subjective feedback and quantitative results highlight very few differences between real-world conditions and simulation in virtual reality, whatever the simulation parameters. These results support using virtual reality simulation to evaluate an augmented reality application, but should be confirmed on other use cases and interaction tasks.
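
    One of the simulation parameters studied above is the restricted field of view of AR glasses. Purely as an illustration, a simulator could cull augmented content outside a simulated FOV cone with a test like the following; the function and the angle value are our assumptions, not the paper's implementation.

```python
import math

def in_simulated_fov(cam_pos, cam_forward, target_pos, fov_deg: float) -> bool:
    """Return True if target lies inside a symmetric cone of `fov_deg` degrees
    around the camera's forward axis (a crude stand-in for AR glasses' FOV).
    `cam_forward` is assumed to be a unit vector."""
    dx, dy, dz = (t - c for t, c in zip(target_pos, cam_pos))
    norm = math.sqrt(dx * dx + dy * dy + dz * dz) or 1e-9
    direction = (dx / norm, dy / norm, dz / norm)
    cos_angle = sum(d * f for d, f in zip(direction, cam_forward))
    return cos_angle >= math.cos(math.radians(fov_deg / 2))

# Example: a narrow, roughly HoloLens-class FOV vs. an unrestricted headset.
visible = in_simulated_fov((0, 1.6, 0), (0, 0, 1), (0.3, 1.6, 2.0), fov_deg=35)
```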

    Redistribution et Plasticité pour les Interfaces Utilisateurs 3D : un Modèle Illustré

    In this paper we propose a model to handle redistribution for 3D user interfaces. Redistribution consists in changing the distribution of the components of an interactive system across different dimensions such as platform, display, and user. Our work builds on previous models that ease the creation of plastic 3D user interfaces, that is, interactive systems that can handle changes in the context of use while preserving usability. We extended these models to include redistribution capabilities. The resulting solution lets developers create applications where 3D content and interaction tasks can be automatically redistributed across the different dimensions at runtime. The proposed redistribution process includes automatic detection of the available platforms and a meta-user interface to control the redistribution granularity. To illustrate this model, we describe three redistribution scenarios between a tablet and a CAVE for a 3D application. We show how redistribution can be used at runtime to combine these platforms, to switch seamlessly from one platform to another, and finally to create a collaborative context of use.
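
    A minimal sketch of the redistribution idea, assuming invented names (Target, Component, Redistributor): each component is assigned a position along the platform/display/user dimensions, and redistribution reassigns it at runtime.

```python
from dataclasses import dataclass, field

@dataclass(frozen=True)
class Target:
    platform: str   # e.g. "tablet", "CAVE"
    display: str    # e.g. "touchscreen", "front-wall"
    user: str       # e.g. "alice"

@dataclass
class Component:
    name: str       # e.g. "3D view", "navigation task"
    target: Target

@dataclass
class Redistributor:
    components: list = field(default_factory=list)

    def redistribute(self, name: str, new_target: Target) -> None:
        """Move one component to another platform/display/user at runtime."""
        for c in self.components:
            if c.name == name:
                c.target = new_target   # a real system would also migrate state

# Switching the 3D view from a tablet to a CAVE while the task stays put:
r = Redistributor([Component("3D view", Target("tablet", "touchscreen", "alice"))])
r.redistribute("3D view", Target("CAVE", "front-wall", "alice"))
```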

    Plasticité pour les Interfaces de Réalité Mixte

    This PhD thesis focuses on plasticity for Mixed Reality (MR) user interfaces, which include Virtual Reality (VR), Augmented Reality (AR), and Augmented Virtuality (AV) applications. Plasticity refers to the capacity of an interactive system to withstand variations of both the system's physical characteristics and the environment while preserving its usability. The usability of a plastic interface is thus ensured whatever the context of use. We propose a set of software models, integrated in a software solution named 3DPlasticToolkit, that allow any developer to create plastic MR user interfaces. First, we propose three models for the adaptation sources: a model describing display and interaction devices, a model describing users and their preferences, and a model describing data structure and semantics. These adaptation sources are taken into account by an adaptation process that deploys application components suited to the context of use thanks to a scoring system. Deploying these components lets the system adapt both the application's interaction techniques and the presentation of its content. We also propose a redistribution process that allows the end user to change the distribution of application components across multiple dimensions: display, user, and platform. It thus allows the end user to switch platforms dynamically or to combine multiple platforms. The implementation of these models in 3DPlasticToolkit provides developers with a ready-to-use solution for developing plastic MR user interfaces: it already integrates different display and interaction devices and includes multiple interaction techniques, visual effects, and data visualization metaphors.
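
    The scoring system mentioned above can be pictured as follows; this is a toy Python sketch with invented feature names and weights, not the actual 3DPlasticToolkit logic.

```python
def score_component(component: dict, context: dict) -> float:
    """Toy scoring: sum the weights of the context features a component
    relies on that are actually available. Names and weights are invented."""
    weights = {"stereo_display": 2.0, "6dof_tracking": 1.5, "touch_input": 1.0}
    return sum(weights.get(feat, 0.0)
               for feat in component["requires"]
               if feat in context["available"])

def deploy(candidates: list, context: dict) -> dict:
    """Pick the interaction technique best suited to the current context of use."""
    return max(candidates, key=lambda c: score_component(c, context))

context = {"available": {"stereo_display", "6dof_tracking"}}
candidates = [
    {"name": "ray-casting selection", "requires": ["6dof_tracking"]},
    {"name": "touch selection",       "requires": ["touch_input"]},
]
chosen = deploy(candidates, context)   # -> ray-casting selection
```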

    Tone mapping high dynamic 3D scenes with global lightness coherency

    We propose a new real-time Tone Mapping Operator dedicated to High Dynamic Range rendering of interactive 3D scenes. The proposed method considers the lighting of the whole scene in order to preserve global coherency, which is the major contribution of our method. Indeed, most existing Tone Mapping Operators only consider the image rendered by the camera at the current frame and simulate the accommodation of the Human Visual System to bright and dark luminance. Consequently, after an adaptation time, the lighting design of the 3D scene is lost: for example, two rooms with a high contrast (one dark and one bright) can be perceived at the same luminance level after adaptation. To cope with this coherency issue, we adapt an existing Tone Mapping Operator so that it combines (1) a global operator that takes into account the High Dynamic Range of the whole scene and (2) a viewport-based operator that enhances the contrast of each rendered image. We thus preserve global lighting coherency while enhancing per-image contrast. Furthermore, we present a subjective evaluation showing that our method provides a better user experience than previous methods for visualization on a Head-Mounted Display.
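
    To make the combination concrete, here is a sketch based on the classic Reinhard global operator, blending a scene-wide key with a per-frame key; the blend weight and formulation are our reading of the approach, not the paper's exact operator.

```python
import math

def log_average(luminances):
    """Reinhard-style log-average ("key") of a set of luminance values."""
    eps = 1e-6
    return math.exp(sum(math.log(eps + lum) for lum in luminances) / len(luminances))

def reinhard(lum, lum_avg, key=0.18):
    """Classic global Reinhard operator: scale by the key, then compress."""
    scaled = key * lum / lum_avg
    return scaled / (1.0 + scaled)

def tone_map(lum, scene_lums, frame_lums, alpha=0.5):
    """Blend a scene-wide operator (global lighting coherency) with a
    frame-based one (per-image contrast). `alpha` is a free parameter here."""
    return (alpha * reinhard(lum, log_average(scene_lums))
            + (1 - alpha) * reinhard(lum, log_average(frame_lums)))
```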

    Virtual Reality Simulation for Multimodal and Ubiquitous System Deployment

    Multimodal IoT-based Systems (MIBS) are ubiquitous systems that use various connected devices as interaction interfaces. However, configuring and testing a MIBS to ensure it works correctly in one's own environment is still challenging for most users: the in-situ trial-and-error process is tedious and time-consuming. In this paper, we aim to simplify the installation process of MIBS. We propose a new VR methodology and a tool that allow the configuration and evaluation of MIBS through realistic simulation. With our approach, users can easily test various devices, device locations, and interaction techniques without prior knowledge of, or dependence on, the environment and the availability of devices. Contrary to on-the-field experiments, there is no need to access the real environment and all the desired connected devices. Moreover, our solution includes feedback features to better understand and assess the interactive capabilities of devices according to their locations. Users can also easily create, collect, and share their configurations and feedback to improve the MIBS and to help its installation in the real environment. To demonstrate the relevance of our VR-based methodology, we compared it in a smart home with a tool following the same configuration process but on a desktop setup and with real devices. We show that users reached comparable configurations in VR and in on-the-field experiments, but the whole configuration and evaluation process was performed faster in VR.
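
    A configuration produced by such a tool could be exported as a simple shareable record of device placements, modalities, and feedback, along these lines; the schema is entirely hypothetical.

```python
import json

# A hypothetical, shareable MIBS configuration built in the VR tool:
# device placements plus the interaction modality bound to each device.
configuration = {
    "environment": "smart-home-living-room",
    "devices": [
        {"type": "smart-speaker", "position": [1.2, 0.9, 3.4],
         "modality": "voice-command"},
        {"type": "motion-sensor", "position": [0.0, 2.4, 2.0],
         "modality": "presence-detection"},
    ],
    "feedback": {  # e.g. estimated coverage given the chosen locations
        "smart-speaker": {"voice_range_ok": True},
        "motion-sensor": {"coverage_ratio": 0.82},
    },
}

print(json.dumps(configuration, indent=2))   # exported for on-site installation
```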

    Interactive Multimodal System Characterization in the Internet of Things Context

    The Internet of Things (IoT) is an opportunity to provide users with pervasive environments in which they can interact naturally with their surroundings. Multimodal interaction is the domain that provides this naturalness by using different senses to interact. However, the IoT context requires a specific process to create such multimodal systems. In this article, we investigate the process of creating multimodal systems with connected devices as interaction media, and provide an analysis of the existing tools that support this process. We discuss tools that could be designed to support the creation process where the existing ones fall short.

    Managing Mutual Occlusions between Real and Virtual Entities in Virtual Reality

    This paper describes a mixed interactive system managing mutual occlusions between real and virtual objects displayed in virtual reality display wall environments. These displays are physically unable to manage mutual occlusions between real and virtual objects: a real occluder located between the user's eyes and the wall hides virtual objects regardless of their depth. This problem disrupts the user's stereopsis of the virtual environment and harms the user experience. For this reason, we present a mixed interactive system that combines a stereoscopic optical see-through head-mounted display with a static stereoscopic display in order to manage mutual occlusions and enhance direct user interactions with virtual content. We illustrate our solution with a use case and a proposed experiment.
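
    The routing decision at the heart of such a system can be sketched as follows: content that a real occluder would wrongly mask on the wall is drawn on the see-through HMD instead. This is a per-object policy sketch under our own assumptions, not the paper's algorithm.

```python
from typing import Optional

def choose_display(virtual_depth: float, occluder_depth: Optional[float]) -> str:
    """Decide where to render a virtual object, given the distance (from the
    user's eye) of the object and of the nearest real occluder in front of
    the wall; None means no occluder on this line of sight."""
    if occluder_depth is not None and virtual_depth < occluder_depth:
        # The virtual object is in front of the real occluder: the wall pixels
        # behind the occluder are blocked, so only the see-through HMD can
        # show the object at the correct depth.
        return "hmd"
    # Either there is no occluder, or the occluder is closer than the virtual
    # object, in which case hiding the object is the correct occlusion.
    return "wall"
```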

    Unified Model and Framework for Interactive Mixed Entity Systems

    Mixed reality, natural user interfaces, and the Internet of Things converge towards an advanced kind of interactive system. These systems enable new forms of interactivity, allowing intuitive user interactions with ubiquitous services in mixed environments. However, they require synchronizing multiple platforms and various technologies. Their heterogeneity makes them complex and only sparsely interoperable or extensible. Therefore, designers and developers need new models, tools, and methodologies to support their creation. We present a unified model of the entities composing these systems, breaking them down into graphs of mixed entities. This model decouples the real and the virtual while still describing their interplay, and it characterizes and classifies both the external and internal interactions of mixed entities. We also present a design and implementation framework based on our unified model, which takes advantage of the model to simplify, accelerate, and unify the production of these systems. We showcase the use of our framework by designers and developers in the case of a smart building management system.
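
    A minimal sketch of the graph-of-mixed-entities idea, with invented names: each entity may have a real part, a virtual part, or both (internal interactions), and entities are linked by external interactions.

```python
from dataclasses import dataclass, field

@dataclass
class MixedEntity:
    """An entity with a real part, a virtual part, or both (names are ours)."""
    name: str
    real: bool = False
    virtual: bool = False
    synchronized: bool = False  # internal interaction: real <-> virtual part

@dataclass
class MixedSystem:
    entities: dict = field(default_factory=dict)
    links: list = field(default_factory=list)  # external interactions (edges)

    def add(self, e: MixedEntity):
        self.entities[e.name] = e

    def link(self, a: str, b: str, kind: str):
        self.links.append((a, b, kind))

# A smart-building fragment: a physical lamp with a virtual twin,
# controlled by a purely virtual 3D widget.
s = MixedSystem()
s.add(MixedEntity("lamp", real=True, virtual=True, synchronized=True))
s.add(MixedEntity("3d-switch", virtual=True))
s.link("3d-switch", "lamp", kind="command")
```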

    D3PART: A new Model for Redistribution and Plasticity of 3D User Interfaces

    In this paper we propose D3PART (Dynamic 3D Plastic And Redistribuable Technology), a model to handle redistribution for 3D user interfaces. Redistribution consists in changing the distribution of the components of an interactive system across different dimensions such as platform, display, and user. We extend previous plasticity models with redistribution capabilities, letting developers create applications where 3D content and interaction tasks can be automatically redistributed across the different dimensions at runtime.