1,840 research outputs found
A Collaborative Augmented Reality Framework Based on Distributed Visual Slam
Visual Simultaneous Localization and Mapping (SLAM) has been used for markerless tracking in augmented reality applications. Distributed SLAM helps multiple agents to collaboratively explore and build a global map of the environment while estimating their locations in it. One of the main challenges in distributed SLAM is to identify local map overlaps of these agents, especially when their initial relative positions are not known. We developed a collaborative AR framework with freely moving agents that have no knowledge of their initial relative positions. Each agent in our framework uses a camera as the only input device for its SLAM process. Furthermore, the framework identifies map overlaps of agents using an appearance-based method.
Distributed monocular visual SLAM as a basis for a collaborative augmented reality framework
Visual Simultaneous Localization and Mapping (SLAM) has been used for markerless tracking in augmented reality applications. Distributed SLAM helps multiple agents to collaboratively explore and build a global map of the environment while estimating their locations in it. One of the main challenges in distributed SLAM is to identify local map overlaps of these agents, especially when their initial relative positions are not known. We developed a collaborative AR framework with freely moving agents that have no knowledge of their initial relative positions. Each agent in our framework uses a camera as the only input device for its SLAM process. Furthermore, the framework identifies map overlaps of agents using an appearance-based method. We also propose a quality measure to determine the best keypoint detector/descriptor combination for our framework.
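As an illustrative sketch only (not taken from the paper above), appearance-based overlap detection between agents is often approximated with a bag-of-visual-words comparison: each keyframe's local feature descriptors are quantized against a shared visual vocabulary, and two keyframes with highly similar word histograms are candidate overlaps. All names below (`bow_histogram`, `overlap_score`, the vocabulary size) are hypothetical, and real systems would use trained vocabularies over ORB/SIFT-style descriptors rather than random data.

```python
import numpy as np

def bow_histogram(descriptors, vocabulary):
    """Quantize local feature descriptors against a visual vocabulary and
    return an L2-normalized bag-of-words histogram (illustrative sketch)."""
    # assign each descriptor to its nearest visual word (centroid)
    dists = np.linalg.norm(descriptors[:, None, :] - vocabulary[None, :, :], axis=2)
    words = dists.argmin(axis=1)
    hist = np.bincount(words, minlength=len(vocabulary)).astype(float)
    # normalize so the dot product below acts as a cosine similarity
    return hist / (np.linalg.norm(hist) + 1e-12)

def overlap_score(desc_a, desc_b, vocabulary):
    """Cosine similarity between BoW histograms of two keyframes; scores
    near 1 suggest the two agents may be observing the same place."""
    return float(bow_histogram(desc_a, vocabulary) @ bow_histogram(desc_b, vocabulary))
```

In a distributed setting, agents would exchange these compact histograms (rather than raw images) and trigger relative-pose estimation only for keyframe pairs whose score exceeds a threshold.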
Affective Computing and Augmented Reality for Car Driving Simulators
Car simulators are essential for training and for analyzing the behavior, responses, and performance of the driver. Augmented Reality (AR) is the technology that enables virtual images to be overlaid on views of the real world. Affective Computing (AC) is the technology that helps computer systems read emotions by analyzing body gestures, facial expressions, speech, and physiological signals. The key aspect of the research lies in investigating novel interfaces that help build situational awareness and emotional awareness, to enable affect-driven remote collaboration in AR for car driving simulators. The problem addressed is how to build situational awareness (using AR technology) and emotional awareness (using AC technology), and how to integrate these two distinct technologies [4] into a unique affective framework for training in a car driving simulator.
Mobile Augmented Reality: User Interfaces, Frameworks, and Intelligence
Mobile Augmented Reality (MAR) integrates computer-generated virtual objects with physical environments for mobile devices. MAR systems enable users to interact with MAR devices, such as smartphones and head-worn wearables, and perform seamless transitions from the physical world to a mixed world with digital entities. These MAR systems support user experiences using MAR devices to provide universal access to digital content. Over the past 20 years, several MAR systems have been developed; however, the studies and design of MAR frameworks have not yet been systematically reviewed from the perspective of user-centric design. This article presents the first effort of surveying existing MAR frameworks (count: 37) and further discusses the latest studies on MAR through a top-down approach: (1) MAR applications; (2) MAR visualisation techniques adaptive to user mobility and contexts; (3) systematic evaluation of MAR frameworks, including supported platforms and corresponding features such as tracking, feature extraction, and sensing capabilities; and (4) underlying machine learning approaches supporting intelligent operations within MAR systems. Finally, we summarise the development of emerging research fields and the current state-of-the-art, and discuss the important open challenges and possible theoretical and technical directions. This survey aims to benefit both researchers and MAR system developers.
CP-SLAM: Collaborative Neural Point-based SLAM System
This paper presents a collaborative implicit neural simultaneous localization and mapping (SLAM) system with RGB-D image sequences, which consists of complete front-end and back-end modules including odometry, loop detection, sub-map fusion, and global refinement. To enable all these modules in a unified framework, we propose a novel neural point-based 3D scene representation in which each point maintains a learnable neural feature for scene encoding and is associated with a certain keyframe. Moreover, a distributed-to-centralized learning strategy is proposed for the collaborative implicit SLAM to improve consistency and cooperation. A novel global optimization framework is also proposed to improve system accuracy, analogous to traditional bundle adjustment. Experiments on various datasets demonstrate the superiority of the proposed method in both camera tracking and mapping.
Comment: Accepted at NeurIPS 202
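As a minimal sketch of the data structure the abstract describes (not the authors' implementation), a neural point map can be modeled as a point cloud where each point carries a learnable feature vector and an anchoring keyframe id. The class and method names below are hypothetical, and in a real system the features would be optimized jointly with the decoder during mapping.

```python
import numpy as np

class NeuralPointCloud:
    """Toy neural point map: each point stores a 3D position, a learnable
    feature vector for scene encoding, and the id of its anchor keyframe."""

    def __init__(self, feature_dim=32):
        self.positions = np.empty((0, 3))
        self.features = np.empty((0, feature_dim))
        self.keyframe_ids = np.empty((0,), dtype=int)

    def add_points(self, positions, keyframe_id, rng=None):
        """Insert newly unprojected points, anchored to one keyframe."""
        rng = rng or np.random.default_rng()
        n = len(positions)
        self.positions = np.vstack([self.positions, positions])
        # features start near zero and would be optimized during mapping
        new_feats = 0.01 * rng.normal(size=(n, self.features.shape[1]))
        self.features = np.vstack([self.features, new_feats])
        self.keyframe_ids = np.concatenate(
            [self.keyframe_ids, np.full(n, keyframe_id)])

    def query(self, point, radius=0.5):
        """Gather features of points near a query location, as a decoder
        would when reconstructing local geometry and appearance."""
        d = np.linalg.norm(self.positions - point, axis=1)
        return self.features[d < radius]
```

Anchoring each point to a keyframe is what makes sub-map fusion and global refinement tractable: correcting a keyframe pose after loop closure rigidly moves all points it anchors.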
Towards Collaborative Simultaneous Localization and Mapping: a Survey of the Current Research Landscape
Motivated by the tremendous progress we witnessed in recent years, this paper presents a survey of the scientific literature on the topic of Collaborative Simultaneous Localization and Mapping (C-SLAM), also known as multi-robot SLAM. With fleets of self-driving cars on the horizon and the rise of multi-robot systems in industrial applications, we believe that Collaborative SLAM will soon become a cornerstone of future robotic applications. In this survey, we introduce the basic concepts of C-SLAM and present a thorough literature review. We also outline the major challenges and limitations of C-SLAM in terms of robustness, communication, and resource management. We conclude by exploring the area's current trends and promising research avenues.
Comment: 44 pages, 3 figure