42 research outputs found

    A Survey on Augmented Reality Challenges and Tracking

    Get PDF
    This survey paper presents a classification of the challenges and tracking techniques in the field of augmented reality. The challenges are categorized into performance, alignment, interaction, mobility/portability, and visualization challenges. Augmented reality tracking techniques are mainly divided into sensor-based tracking, vision-based tracking, and hybrid tracking. Sensor-based tracking is further divided into optical, magnetic, acoustic, and inertial tracking, or any combination of these to form hybrid sensor tracking. Similarly, vision-based tracking is divided into marker-based tracking and markerless tracking. Each tracking technique has its advantages and limitations. Hybrid tracking provides robust and accurate tracking, but it involves financial and technical difficulties.
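    As a concrete illustration of the marker-based category above (not taken from the survey), the following minimal Python sketch detects a printed ArUco marker and estimates its camera-relative pose, which is the core alignment step in marker-based AR. It assumes the opencv-contrib-python package with the pre-4.7 aruco API, and placeholder camera intrinsics that would normally come from calibration.

        # Minimal marker-based tracking sketch (pre-4.7 cv2.aruco API,
        # opencv-contrib-python). Intrinsics below are placeholders; real
        # values would come from cv2.calibrateCamera.
        import cv2
        import numpy as np

        camera_matrix = np.array([[800.0, 0.0, 320.0],
                                  [0.0, 800.0, 240.0],
                                  [0.0, 0.0, 1.0]])
        dist_coeffs = np.zeros(5)
        marker_len = 0.05  # marker side length in metres (assumed)

        dictionary = cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_4X4_50)
        cap = cv2.VideoCapture(0)
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
            corners, ids, _ = cv2.aruco.detectMarkers(gray, dictionary)
            if ids is not None:
                # Pose of each detected marker relative to the camera; these
                # vectors would drive the virtual overlay's model-view transform.
                rvecs, tvecs, _ = cv2.aruco.estimatePoseSingleMarkers(
                    corners, marker_len, camera_matrix, dist_coeffs)
            if cv2.waitKey(1) == 27:  # Esc to quit
                break
        cap.release()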

    Digital-Twins-Based Internet of Robotic Things for Remote Health Monitoring of COVID-19 Patients

    Get PDF
    The deadly coronavirus disease (COVID-19) has highlighted the importance of remote health monitoring (RHM). The digital-twins (DTs) paradigm enables RHM by creating a virtual replica that receives data from the physical asset and represents its real-world behavior. However, DTs use passive Internet of Things (IoT) sensors, which limit their potential to a specific location or entity. This problem can be addressed by the Internet of Robotic Things (IoRT), which combines robotics and IoT, allowing robotic things (RTs) to navigate a particular environment and connect to IoT devices in the vicinity. Implementing DTs in IoRT creates a virtual replica [virtual twin (VT)] that receives real-time data from the physical RT [physical twin (PT)] to mirror its status. However, DTs require a user interface for real-time interaction and visualization. Virtual reality (VR) can serve as this interface owing to its natural ability to visualize and interact with DTs. This research proposes a real-time system for RHM of COVID-19 patients using DTs-based IoRT and a VR-based user interface. It also presents and evaluates robot navigation performance, which is vital for remote monitoring. The VT operates the PT in the real environment (RE); the PT collects data from the patient-mounted sensors and transmits it to the control service for visualization in VR for medical examination. The system prevents direct interaction of medical staff with infected patients, protecting them from infection and stress. The experimental results verify the quality of the monitoring data (accuracy, completeness, and timeliness) and the high accuracy of the PT's navigation. Funding: Qatar National Library and Qatar University Internal Grant.
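    The paper does not specify the transport between the physical and virtual twins; as a hypothetical sketch of that data path, the snippet below streams patient-sensor readings from the physical twin over MQTT (paho-mqtt 1.x API) to a topic that the virtual twin and the VR control service would subscribe to. The broker address, topic layout, and payload fields are invented for illustration.

        # Hypothetical PT -> VT telemetry link over MQTT (paho-mqtt 1.x).
        # Broker, topic, and payload schema are illustrative only.
        import json
        import time
        import paho.mqtt.client as mqtt

        BROKER = "broker.example.org"           # assumed broker address
        TOPIC = "iort/robot1/patient/vitals"    # assumed topic layout

        def read_sensors():
            # Stand-in for the patient-mounted sensors on the robotic thing.
            return {"heart_rate": 72, "spo2": 98, "temperature": 36.8,
                    "timestamp": time.time()}

        pt = mqtt.Client(client_id="physical-twin-robot1")
        pt.connect(BROKER, 1883)
        pt.loop_start()
        try:
            while True:
                pt.publish(TOPIC, json.dumps(read_sensors()), qos=1)
                time.sleep(1.0)  # 1 Hz keeps the virtual twin's mirror timely
        finally:
            pt.loop_stop()
            pt.disconnect()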

    A Subjective Study on the Effects of Dynamic Virtual Chemistry Laboratory in a Secondary School Education

    Get PDF
    Virtual chemistry laboratories (VCLs) are alternatives to physical laboratories in which students can conduct their experiments virtually, at lower cost and in a more efficient and safer way. Given the importance of technology-enhanced learning and of experimental study, several VCLs have been proposed. However, the existing VCLs are static: they only simulate pre-defined experiments, procedures, or safety procedures and cannot be adapted to the students' level or to new experimental tasks. In this paper, we propose a dynamic virtual chemistry laboratory (DVCL) in which instructors or experts can add a new chemical experiment by specifying its apparatus, chemicals, glassware, and mechanism, or extend the properties of an existing one. We conducted a subjective study with field experts to investigate the effect of the proposed DVCL on secondary school chemistry education. Twenty-seven field experts participated and evaluated the proposed DVCL with the System Usability Scale (SUS) questionnaire and a simple custom questionnaire. The results showed that the proposed DVCL is very helpful for students' performance and mental modeling, and for effortlessly building their knowledge of hands-on experiments.
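    The SUS score referred to above is computed with a fixed formula: each odd-numbered item contributes its response minus 1, each even-numbered item contributes 5 minus its response, and the sum is multiplied by 2.5 to reach a 0-100 scale. A minimal sketch:

        # System Usability Scale (SUS) scoring for one respondent.
        # responses: ten answers on a 1-5 Likert scale, item 1 first.
        def sus_score(responses):
            assert len(responses) == 10
            total = 0
            for i, r in enumerate(responses, start=1):
                total += (r - 1) if i % 2 == 1 else (5 - r)  # odd: r-1, even: 5-r
            return total * 2.5  # scale the 0-40 raw sum to 0-100

        # Example: a fairly positive respondent.
        print(sus_score([4, 2, 5, 1, 4, 2, 5, 2, 4, 1]))  # -> 85.0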

    Multimodal assistance for collaborative 3D interaction: study and performance analysis for collaborative work

    No full text
    Recent advances in high-quality computer graphics and the ability of inexpensive personal computers to render realistic 3D scenes have made it possible to develop virtual environments in which two or more users can co-exist and work collaboratively to achieve a common goal. Such environments are called Collaborative Virtual Environments (CVEs). The potential application domains of CVEs are many, such as military, medical, assembly, computer-aided design, teleoperation, education, games, and social networks. One of the problems related to CVEs is the user's low level of awareness of the status, actions, and intentions of his or her collaborators, which not only reduces the users' performance but also leads to unsatisfactory results. In addition, collaborative tasks performed without proper computer-generated assistance are very difficult and more prone to errors. The basic theme of this thesis is to provide assistance for collaborative 3D interaction in CVEs. In this context, we study and develop the concept of multimodal (audio, visual, and haptic) assistance for a user or group of users, focusing on how users can be assisted in collaboratively interacting with the entities of a CVE. We study and analyze the contribution of multimodal assistance to collaborative (synchronous and asynchronous) interaction with objects in the virtual environment, propose and implement various multimodal virtual guides, and examine their guidance quality and their influence on users' awareness, co-presence, and coordination during task execution. To this end, we developed a software architecture that enables the collaboration of two users (distributed or co-located) and can be extended to more users. Using this architecture, we developed applications that not only support collaborative work but also provide multimodal assistance to the users; the collaborative work they support includes peg-in-hole tasks, cooperative tele-manipulation via two robots, and tele-guidance for writing or drawing. The guides were evaluated through a series of experiments in which selection/manipulation tasks were carried out by users in both synchronous and asynchronous modes, at the LISA (Laboratoire d'Ingénierie et Systèmes Automatisés) lab at the University of Angers and the IBISC (Informatique, Biologie Intégrative et Systèmes Complexes) lab at the University of Evry. In these experiments, users were asked to perform tasks under various conditions (with and without guides). Analysis was based on task completion time, errors, and users' learning; questionnaires were used for subjective evaluation. The findings of this research can contribute to the development of collaborative systems for teleoperation, assembly tasks, e-learning, rehabilitation, computer-aided design, and entertainment.
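    To give a flavour of what a haptic virtual guide can look like (a generic virtual-fixture sketch, not the thesis's actual guides), the snippet below computes a spring force that pulls the user's proxy toward the nearest point on a straight guide path, as might be rendered on a force-feedback device. The stiffness value and segment endpoints are illustrative.

        # Generic haptic guide: spring force toward a line segment.
        import numpy as np

        def guide_force(proxy, a, b, k=200.0):
            """Force pulling `proxy` toward segment a-b with stiffness k (N/m)."""
            ab = b - a
            t = np.clip(np.dot(proxy - a, ab) / np.dot(ab, ab), 0.0, 1.0)
            closest = a + t * ab          # nearest point on the guide path
            return k * (closest - proxy)  # Hooke's law pull toward the path

        # Proxy 2 cm off a guide running along the x axis:
        a, b = np.array([0.0, 0.0, 0.0]), np.array([0.3, 0.0, 0.0])
        print(guide_force(np.array([0.1, 0.02, 0.0]), a, b))  # ~[0, -4, 0] N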

    Multimodal assistance for collaborative 3D interaction: a study of performance for collaborative work

    No full text
    Recent advances in high-quality computer graphics and the ability of inexpensive personal computers to render realistic 3D scenes have made it possible to develop virtual environments in which two or more users can co-exist and work collaboratively to achieve a common goal. Such environments are called Collaborative Virtual Environments (CVEs). The potential application domains of CVEs are many, such as military, medical, assembly, computer-aided design, teleoperation, education, games, and social networks. One of the problems related to CVEs is the user's low level of awareness of the status, actions, and intentions of his or her collaborators, which not only reduces the users' performance but also leads to unsatisfactory results. In addition, collaborative tasks performed without proper computer-generated assistance are very difficult and more prone to errors. The basic theme of this thesis is to provide assistance for collaborative 3D interaction in CVEs. In this context, we study and develop the concept of multimodal (audio, visual, and haptic) assistance for a user or group of users, focusing on how users can be assisted in collaboratively interacting with the entities of a CVE. We study and analyze the contribution of multimodal assistance to collaborative (synchronous and asynchronous) interaction with objects in the virtual environment, propose and implement various multimodal virtual guides, and examine their guidance quality and their influence on users' awareness, co-presence, and coordination during task execution. To this end, we developed a software architecture that enables the collaboration of two users (distributed or co-located) and can be extended to more users. Using this architecture, we developed applications that not only support collaborative work but also provide multimodal assistance to the users; the collaborative work they support includes peg-in-hole tasks, cooperative tele-manipulation via two robots, and tele-guidance for writing or drawing. The guides were evaluated through a series of experiments in which selection/manipulation tasks were carried out by users in both synchronous and asynchronous modes, at the LISA (Laboratoire d'Ingénierie et Systèmes Automatisés) lab at the University of Angers and the IBISC (Informatique, Biologie Intégrative et Systèmes Complexes) lab at the University of Evry. In these experiments, users were asked to perform tasks under various conditions (with and without guides). Analysis was based on task completion time, errors, and users' learning; questionnaires were used for subjective evaluation. The findings of this research can contribute to the development of collaborative systems for teleoperation, assembly tasks, e-learning, rehabilitation, computer-aided design, and entertainment.
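    The software architecture above must replicate shared-object state between the two users; the thesis does not publish its networking layer, so the following is only a hypothetical sketch of that replication step, with an invented message schema and UDP chosen purely for brevity.

        # Hypothetical object-state replication between two collaborating users.
        # Message schema and addresses are illustrative, not the thesis's protocol.
        import json
        import socket

        PEER = ("192.168.1.20", 9000)  # the other user's endpoint (assumed)

        sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        sock.bind(("0.0.0.0", 9000))
        sock.settimeout(0.0)           # non-blocking: poll inside the render loop

        def send_update(obj_id, position, rotation):
            # Broadcast our manipulation of a shared object to the peer.
            msg = {"id": obj_id, "pos": position, "rot": rotation}
            sock.sendto(json.dumps(msg).encode(), PEER)

        def poll_updates(scene):
            # Apply any pending updates from the collaborator to the local scene.
            try:
                while True:
                    data, _ = sock.recvfrom(4096)
                    msg = json.loads(data)
                    scene[msg["id"]] = (msg["pos"], msg["rot"])
            except BlockingIOError:
                pass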

    Study and evaluation of SPIDAR-type force-feedback interaction

    No full text
    Master's (M2) thesis in Virtual Reality and Intelligent Systems.

    A Generic Approach toward Indoor Navigation and Pathfinding with Robust Marker Tracking

    Get PDF
    Indoor navigation and localization have gained considerable attention from researchers in recent decades. Various technologies, such as WiFi, Bluetooth, Ultra-Wideband (UWB), and Radio-Frequency Identification (RFID), have been used for indoor navigation and localization. However, most existing methods fail to provide a reasonable solution to the key challenges of implementation cost, accuracy, and extendibility. In this paper, we propose a low-cost and extendable framework for indoor navigation. We use simple markers printed on paper and placed on the ceilings of the building. These markers are detected by a smartphone's camera, and the audio and visual information associated with them is used to guide the user. The system finds the shortest path between any two arbitrary nodes for user navigation. In addition, it is extendable: new sections can be covered by installing new nodes anywhere in the building. The system can be used to guide blind people, tourists, and new visitors in an indoor environment. The evaluation results reveal that the proposed system can guide users toward their destination in an efficient and accurate manner.
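    The shortest-path step described above can be realized with a standard Dijkstra search over the graph of marker nodes; the sketch below uses a made-up node graph and corridor distances, since the paper's actual graph is not given.

        # Dijkstra shortest path over a graph of ceiling-marker nodes.
        # Node names and corridor distances (metres) are invented.
        import heapq

        graph = {
            "entrance": {"hall": 5.0},
            "hall": {"entrance": 5.0, "lab": 12.0, "stairs": 7.0},
            "stairs": {"hall": 7.0, "lab": 4.0},
            "lab": {"hall": 12.0, "stairs": 4.0},
        }

        def shortest_path(graph, start, goal):
            frontier = [(0.0, start, [start])]  # (cost so far, node, path)
            seen = set()
            while frontier:
                cost, node, path = heapq.heappop(frontier)
                if node == goal:
                    return cost, path
                if node in seen:
                    continue
                seen.add(node)
                for nxt, w in graph[node].items():
                    if nxt not in seen:
                        heapq.heappush(frontier, (cost + w, nxt, path + [nxt]))
            return float("inf"), []

        print(shortest_path(graph, "entrance", "lab"))
        # -> (16.0, ['entrance', 'hall', 'stairs', 'lab'])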

    EVEN-VE: Eyes Visibility Based Egocentric Navigation for Virtual Environments

    No full text
    Navigation is one of the 3D interactions often needed to interact with a synthetic world. The latest advances in image processing have made gesture-based interaction with a virtual world possible. However, a 3D virtual world can respond to a user's gesture far faster than the gesture itself can be posed. To bring faster and more natural postures into the realm of the Virtual Environment (VE), this paper presents a novel eyes-based interaction technique for navigation and panning. Dynamic wavering and positioning of the eyes are interpreted by the system as interaction instructions. Opening the eyes after they have been closed for a distinct time threshold activates forward or backward navigation. Supporting two-degree-of-freedom head gestures (rolling and pitching), panning is performed over the xy-plane. The proposed technique was implemented in a case-study project, EWI (Eyes Wavering based Interaction). In EWI, real-time detection and tracking of the eyes are performed by OpenCV libraries at the back end, and dynamic mapping is performed in OpenGL to interactively follow the trajectories of both eyes. The technique was evaluated in two separate sessions by a total of 28 users to assess the accuracy, speed, and suitability of the system in Virtual Reality (VR). Using an ordinary camera, an average accuracy of 91% was achieved; an assessment with a high-quality camera showed that the accuracy of the system could be raised further, along with an increase in navigation speed. The results of the unbiased statistical evaluations demonstrate the applicability of the system in the emerging domains of virtual and augmented reality.
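    A rough sketch of the eye-closure trigger described above (eyes reopening after being closed past a time threshold activates navigation) can be built from OpenCV's bundled Haar eye cascade. The 0.5 s threshold and the navigate() stub are assumptions; the paper's own detector and thresholds may differ.

        # Eye-closure trigger sketch using OpenCV's stock Haar eye cascade.
        import time
        import cv2

        eye_cascade = cv2.CascadeClassifier(
            cv2.data.haarcascades + "haarcascade_eye.xml")

        def navigate(direction):
            print("navigate:", direction)  # stand-in for moving the VE camera

        CLOSE_THRESHOLD = 0.5  # seconds the eyes must stay closed (assumed)
        closed_since = None
        cap = cv2.VideoCapture(0)
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
            eyes = eye_cascade.detectMultiScale(gray, 1.3, 5)
            if len(eyes) == 0:  # no eyes detected: treat as closed
                if closed_since is None:
                    closed_since = time.time()
            else:
                if closed_since and time.time() - closed_since >= CLOSE_THRESHOLD:
                    navigate("forward")  # eyes reopened after a long closure
                closed_since = None
            if cv2.waitKey(1) == 27:  # Esc to quit
                break
        cap.release()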

    GIFT: Gesture-Based Interaction by Fingers Tracking, an Interaction Technique for Virtual Environment

    No full text
    Three-dimensional (3D) interaction is the natural form of human interaction inside a Virtual Environment (VE). The rise of Virtual Reality (VR) applications in various domains demands a feasible 3D interface. Ensuring immersion in a virtual space, this paper presents an interaction technique in which manipulation is performed by the perceptive gestures of the two dominant fingers: the thumb and the index finger. Two paper fingertip thimbles are used so that an ordinary camera can trace the states and positions of the fingers. Based on the positions of the fingers, the basic interaction tasks (selection, scaling, rotation, translation, and navigation) are performed by intuitive finger gestures. Because no gesture database is kept, the feature-free detection of the fingers guarantees speedier interaction. Moreover, the system is user-independent and depends neither on the size nor on the color of the user's hand. The technique is implemented for evaluation in a case-study project, Interactions by the Gestures of Fingers (IGF). The IGF application traces finger gestures using OpenCV libraries at the back end; at the front end, the objects of the VE are rendered accordingly using the Open Graphics Library (OpenGL). The system was assessed under moderate lighting conditions by a group of 15 users, and the usability of the technique was further investigated in games. The outcomes of the evaluations revealed that the approach is suitable for VR applications in terms of both cost and accuracy.
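    A sketch of the thimble-tracking idea is below: two colour-marked fingertips are segmented in HSV space, and the thumb-index distance drives a scale gesture. The colour ranges and the distance-to-scale mapping are assumptions, not the IGF project's calibrated values.

        # Fingertip-thimble tracking sketch: segment two coloured thimbles in
        # HSV and map the thumb-index pixel distance to a scale factor.
        import cv2
        import numpy as np

        RED_LO, RED_HI = np.array([0, 120, 70]), np.array([10, 255, 255])
        BLUE_LO, BLUE_HI = np.array([100, 120, 70]), np.array([130, 255, 255])

        def centroid(mask):
            m = cv2.moments(mask)
            if m["m00"] == 0:
                return None  # thimble not visible in this frame
            return np.array([m["m10"] / m["m00"], m["m01"] / m["m00"]])

        cap = cv2.VideoCapture(0)
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
            thumb = centroid(cv2.inRange(hsv, RED_LO, RED_HI))
            index = centroid(cv2.inRange(hsv, BLUE_LO, BLUE_HI))
            if thumb is not None and index is not None:
                pinch = np.linalg.norm(thumb - index)  # pixel distance
                scale = pinch / 100.0  # assumed mapping to object scale
                # `scale` would be applied to the selected VE object here.
            if cv2.waitKey(1) == 27:  # Esc to quit
                break
        cap.release()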