Internet of Robotic Things: Converging Sensing/Actuating, Hyperconnectivity, Artificial Intelligence and IoT Platforms
The Internet of Things (IoT) concept is evolving rapidly and influencing new developments in various application domains, such as the Internet of Mobile Things (IoMT), Autonomous Internet of Things (A-IoT), Autonomous System of Things (ASoT), Internet of Autonomous Things (IoAT), Internet of Things Clouds (IoT-C) and the Internet of Robotic Things (IoRT), which are advancing by using IoT technology. The IoT influence presents new development and deployment challenges in different areas, such as seamless platform integration, context-based cognitive network integration, new mobile sensor/actuator network paradigms, things identification (addressing, naming in IoT), dynamic things discoverability and many others. The IoRT brings new convergence challenges that need to be addressed: on one side, the programmability and the communication of multiple heterogeneous mobile/autonomous/robotic things for cooperation, as well as their coordination, configuration, exchange of information, security, safety and protection. Developments in IoT heterogeneous parallel processing/communication and dynamic systems based on parallelism and concurrency require new ideas for integrating intelligent "devices", such as collaborative robots (COBOTS), into IoT applications. Dynamic maintainability, self-healing, self-repair of resources, changing resource state, (re-)configuration and context-based IoT systems for service implementation and integration with IoT network service composition are of paramount importance as new "cognitive devices" become active participants in IoT applications. This chapter aims to provide an overview of the IoRT concept, technologies, architectures and applications, and comprehensive coverage of future challenges, developments and applications.
Long-Term Localization using Semantic Cues in Floor Plan Maps
Lifelong localization in a given map is an essential capability for
autonomous service robots. In this paper, we consider the task of long-term
localization in a changing indoor environment given sparse CAD floor plans. The
commonly used pre-built maps from the robot sensors may increase the cost and
time of deployment. Furthermore, their detailed nature requires that they are
updated when significant changes occur. We address the difficulty of
localization when the correspondence between the map and the observations is
low due to the sparsity of the CAD map and the changing environment. To
overcome both challenges, we propose to exploit semantic cues that are commonly
present in human-oriented spaces. These semantic cues can be detected using RGB
cameras by utilizing object detection, and are matched against an
easy-to-update, abstract semantic map. The semantic information is integrated
into a Monte Carlo localization framework using a particle filter that operates
on 2D LiDAR scans and camera data. We provide a long-term localization solution
and a semantic map format for environments that undergo changes to their
interior structure and for which detailed geometric maps are not available. We evaluate
our localization framework on multiple challenging indoor scenarios in an
office environment, taken weeks apart. The experiments suggest that our
approach is robust to structural changes and can run on an onboard computer. We
released the open source implementation of our approach written in C++ together
with a ROS wrapper.
Comment: Under review for RA-
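As a rough illustration of how semantic cues can enter a particle filter's measurement model, the sketch below reweights pose hypotheses by how close each detected object class lands to an instance of that class in an abstract semantic map. The map contents, class names, and Gaussian scoring are illustrative assumptions, not the authors' implementation:

```python
import math

# Hypothetical abstract semantic map: object class -> landmark positions (x, y).
SEMANTIC_MAP = {
    "door": [(0.0, 0.0), (5.0, 0.0)],
    "extinguisher": [(2.5, 1.0)],
}

def semantic_likelihood(particle, detections, sigma=0.5):
    """Score a particle pose (x, y, theta) by the distance from each detected
    object to the nearest map landmark of the same class."""
    x, y, theta = particle
    weight = 1.0
    for cls, (dx, dy) in detections:  # detection offset in the robot frame
        # Transform the detection into the map frame using the particle pose.
        mx = x + dx * math.cos(theta) - dy * math.sin(theta)
        my = y + dx * math.sin(theta) + dy * math.cos(theta)
        landmarks = SEMANTIC_MAP.get(cls, [])
        if not landmarks:
            weight *= 1e-3  # class absent from the map: heavy penalty
            continue
        d = min(math.hypot(mx - lx, my - ly) for lx, ly in landmarks)
        weight *= math.exp(-d * d / (2 * sigma * sigma))
    return weight

def reweight(particles, detections):
    """Normalized particle weights given one set of semantic detections."""
    weights = [semantic_likelihood(p, detections) for p in particles]
    total = sum(weights) or 1.0
    return [w / total for w in weights]
```

A particle standing near a mapped door that also sees a door nearby keeps most of the probability mass, which is the intended robustness to geometric change: semantic landmarks move far less often than furniture.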
S-Nav: Semantic-Geometric Planning for Mobile Robots
Path planning is a basic capability of autonomous mobile robots. Prior
approaches to path planning exploit only the given geometric information of
the environment, without leveraging its inherent semantics. The recently
presented S-Graphs construct 3D situational graphs incorporating geometric,
semantic, and relational aspects between the elements to improve the overall
scene understanding and the localization of the robot. However, these works do
not exploit the underlying semantic graphs to improve path planning for mobile
robots. To that aim, in this paper, we present S-Nav, a novel
semantic-geometric path planner for mobile robots. It leverages S-Graphs
to enable fast and robust hierarchical high-level planning in complex indoor
environments. The hierarchical architecture of S-Nav adds a novel semantic
search on top of a traditional geometric planner as well as precise map
reconstruction from S-Graphs to improve planning speed, robustness, and path
quality. We demonstrate improved results of S-Nav in a synthetic environment.
Comment: 6 pages, 4 figures
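The hierarchical idea of planning first over semantic entities and only then refining geometrically can be sketched as follows. The room graph, room centers, and the trivial refinement step are invented placeholders, not the actual S-Graphs interface:

```python
from collections import deque

# Hypothetical semantic layer: room-adjacency graph plus a representative
# point per room. A real system would extract these from the scene graph.
ROOM_GRAPH = {"R1": ["R2"], "R2": ["R1", "R3"], "R3": ["R2"]}
ROOM_CENTERS = {"R1": (0, 0), "R2": (5, 0), "R3": (10, 0)}

def semantic_plan(start_room, goal_room):
    """Stage 1: breadth-first search over the room graph."""
    queue, parents = deque([start_room]), {start_room: None}
    while queue:
        room = queue.popleft()
        if room == goal_room:
            path = []
            while room is not None:
                path.append(room)
                room = parents[room]
            return path[::-1]
        for nxt in ROOM_GRAPH[room]:
            if nxt not in parents:
                parents[nxt] = room
                queue.append(nxt)
    return None

def hierarchical_plan(start_room, goal_room):
    """Stage 2: refine the room sequence into geometric waypoints.
    A real planner would run A* inside each room; here we chain centers."""
    rooms = semantic_plan(start_room, goal_room)
    return [ROOM_CENTERS[r] for r in rooms] if rooms else None
```

Because the semantic search prunes the workspace to the rooms actually on the route, the expensive geometric search never explores unrelated parts of the building, which is the source of the speed and robustness gains the abstract describes.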
S-Graphs+: Real-time Localization and Mapping leveraging Hierarchical Representations
In this paper, we present an evolved version of the Situational Graphs, which
jointly models, in a single optimizable factor graph, a SLAM graph (a set of
robot keyframes containing their associated measurements and robot poses) and a
3D scene graph (a high-level representation of the environment that encodes
its different geometric elements with semantic attributes and the relational
information between those elements). Our proposed S-Graphs+ is a novel
four-layered factor graph that includes: (1) a keyframes layer with robot pose
estimates, (2) a walls layer representing wall surfaces, (3) a rooms layer
encompassing sets of wall planes, and (4) a floors layer gathering the rooms
within a given floor level. The above graph is optimized in real-time to obtain
a robust and accurate estimate of the robot's pose and its map, simultaneously
constructing and leveraging the high-level information of the environment. To
extract such high-level information, we present novel room and floor
segmentation algorithms utilizing the mapped wall planes and free-space
clusters. We tested S-Graphs+ on multiple datasets, including simulations of
distinct indoor environments, real datasets captured over several
construction sites and office environments, and a real public dataset of
indoor office environments. S-Graphs+ outperforms relevant baselines in the
majority of the datasets while extending the robot's situational awareness
with a four-layered scene model. Moreover, we make the algorithm available as
a Docker file.
Comment: 8 pages, 7 figures, 3 tables
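The four-layered scene model can be pictured with a minimal data structure. This is only an illustrative container; the real S-Graphs+ represents these layers as nodes and factors in a single optimizable factor graph, not as plain Python objects:

```python
from dataclasses import dataclass, field

@dataclass
class Keyframe:
    pose: tuple  # (x, y, theta) robot pose estimate (layer 1)

@dataclass
class Wall:
    plane: tuple  # plane coefficients (a, b, c, d) (layer 2)

@dataclass
class Room:
    walls: list = field(default_factory=list)  # set of wall planes (layer 3)

@dataclass
class Floor:
    rooms: list = field(default_factory=list)  # rooms on this floor (layer 4)

@dataclass
class SceneGraph:
    keyframes: list = field(default_factory=list)
    floors: list = field(default_factory=list)

    def add_room(self, floor, wall_planes):
        room = Room([Wall(p) for p in wall_planes])
        floor.rooms.append(room)
        return room

# Build a toy graph: one floor, one keyframe, one four-walled room.
g = SceneGraph()
f0 = Floor()
g.floors.append(f0)
g.keyframes.append(Keyframe((0.0, 0.0, 0.0)))
g.add_room(f0, [(1, 0, 0, -2), (1, 0, 0, 2), (0, 1, 0, -3), (0, 1, 0, 3)])
```

The key point of the layered design is that optimizing the whole structure jointly lets a room-level constraint (e.g. four walls belonging to one room) correct keyframe poses, and vice versa.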
Enriching BIM models with fire safety equipment using keypoint-based symbol detection in escape plans
In the context of fire safety inspections, Building Information Modeling (BIM) models enriched with Fire Safety Equipment (FSE) components can be used to complete compliance checks and other analyses. However, BIM models often lack the required FSE information. To address this issue, escape plans are a convenient source of data, as they show the position and type of FSE on floor plans. Therefore, this study proposes an automated method to analyze escape plans and extract FSE component information to enrich existing BIM models. The method employs the deep learning model Keypoint R-CNN for symbol detection. Symbol locations are then translated into physical positions within the BIM model. Through a real-building case study, the method demonstrates promising results. Future research may focus on improving the symbol detection performance and the registration between the BIM models and fire escape plans, as well as utilizing the extracted information for actual fire safety analyses.
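One way to picture the translation from detected symbols to model positions is a similarity transform from escape-plan pixel coordinates to BIM floor-plan coordinates. The transform parameters below are hypothetical; in practice they would come from registering the plan against the model:

```python
import math

def pixel_to_model(u, v, s, theta, tx, ty):
    """Map an image keypoint (u, v) to BIM floor-plan coordinates via a
    similarity transform: uniform scale s, rotation theta, translation (tx, ty)."""
    x = s * (u * math.cos(theta) - v * math.sin(theta)) + tx
    y = s * (u * math.sin(theta) + v * math.cos(theta)) + ty
    return (x, y)
```

For example, with a plan drawn at 2 cm per pixel, no rotation, and an offset of (1 m, 2 m), a symbol detected at pixel (100, 0) would land at (3 m, 2 m) in the model frame; each such point can then be attached to the corresponding FSE component.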
Automated 3D model generation for urban environments [online]
Abstract
In this thesis, we present a fast approach to automated
generation of textured 3D city models with both high details at
ground level and complete coverage for birds-eye view.
A ground-based facade model is acquired by driving a vehicle
equipped with two 2D laser scanners and a digital camera under
normal traffic conditions on public roads. One scanner is
mounted horizontally and is used to determine the approximate
component of relative motion along the movement of the
acquisition vehicle via scan matching; the obtained relative
motion estimates are concatenated to form an initial path.
Assuming that features such as buildings are visible from both
ground-based and airborne view, this initial path is globally
corrected by Monte-Carlo Localization techniques using an aerial
photograph or a Digital Surface Model as a global map. The
second scanner is mounted vertically and is used to capture the
3D shape of the building facades. Applying a series of automated
processing steps, a texture-mapped 3D facade model is
reconstructed from the vertical laser scans and the camera
images. In order to obtain an airborne model containing the roof
and terrain shape complementary to the facade model, a Digital
Surface Model is created from airborne laser scans, then
triangulated, and finally texture-mapped with aerial imagery.
Finally, the facade model and the airborne model are fused
into a single model usable for both walk-throughs and fly-throughs. The
developed algorithms are evaluated on a large data set acquired
in downtown Berkeley, and the results are shown and discussed.
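The first processing step, chaining the relative motion estimates from horizontal scan matching into an initial path, amounts to composing SE(2) increments. A minimal sketch, with poses as (x, y, theta) tuples (an assumption for illustration only):

```python
import math

def compose(pose, delta):
    """Compose an SE(2) pose with a relative motion, both given as
    (x, y, theta); the increment is expressed in the current robot frame."""
    x, y, th = pose
    dx, dy, dth = delta
    return (x + dx * math.cos(th) - dy * math.sin(th),
            y + dx * math.sin(th) + dy * math.cos(th),
            th + dth)

def integrate(deltas, start=(0.0, 0.0, 0.0)):
    """Chain scan-matching increments into an initial path estimate."""
    path = [start]
    for d in deltas:
        path.append(compose(path[-1], d))
    return path
```

Because each increment carries a small error, the concatenated path drifts over long drives, which is exactly why the thesis then corrects it globally with Monte Carlo localization against an aerial map.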