Robotic Wireless Sensor Networks
In this chapter, we present a literature survey of an emerging, cutting-edge,
and multi-disciplinary field of research at the intersection of Robotics and
Wireless Sensor Networks (WSN), which we refer to as Robotic Wireless Sensor
Networks (RWSN). We define an RWSN as an autonomous networked multi-robot system
that aims to achieve certain sensing goals while meeting and maintaining
certain communication performance requirements through cooperative control,
learning, and adaptation. While both component areas, i.e., Robotics and
WSN, are well-known and well-explored, there exists a whole set of new
opportunities and research directions at the intersection of these two fields
that are relatively or even completely unexplored. One such example is
the use of a set of robotic routers to set up a temporary communication path
between a sender and a receiver, exploiting controlled mobility to the
advantage of packet routing. We find that only a limited number of articles
can be directly categorized as RWSN-related work, whereas a range of articles
in the robotics and WSN literature are also relevant
to this new field of research. To connect the dots, we first identify the core
problems and research trends related to RWSN, such as connectivity,
localization, routing, and robust flow of information. Next, we classify the
existing research on RWSN, as well as the relevant state of the art from the
robotics and WSN communities, according to the problems and trends identified in
the first step. Lastly, we analyze what is missing in the existing literature
and identify topics that require more research attention in the future.
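The robotic-router example above can be made concrete with a minimal placement sketch. Assuming free-space line-of-sight and a fixed per-hop radio range (simplifying assumptions for illustration, not made in the surveyed work), relay robots are spaced evenly along the sender-receiver segment so that every hop stays within range:

```python
import math

def place_relays(sender, receiver, radio_range):
    """Place the minimum number of relay robots evenly on the
    sender-receiver segment so that every hop is within radio_range.
    Positions are 2D (x, y) tuples; obstacles are ignored."""
    dx, dy = receiver[0] - sender[0], receiver[1] - sender[1]
    dist = math.hypot(dx, dy)
    hops = max(1, math.ceil(dist / radio_range))   # hops needed end to end
    # hops - 1 relays split the segment into equal, in-range hops.
    return [(sender[0] + dx * k / hops, sender[1] + dy * k / hops)
            for k in range(1, hops)]

# A 100 m link with 30 m radio range needs 4 hops, i.e. 3 relay robots.
relays = place_relays((0, 0), (100, 0), 30)   # [(25,0), (50,0), (75,0)]
```

Controlled mobility is exactly what makes this placement actionable: unlike static WSN nodes, the relays can drive to the computed positions and re-space themselves if the endpoints move.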
Human Motion Trajectory Prediction: A Survey
With growing numbers of intelligent autonomous systems in human environments,
the ability of such systems to perceive, understand and anticipate human
behavior becomes increasingly important. Specifically, predicting future
positions of dynamic agents and planning considering such predictions are key
tasks for self-driving vehicles, service robots and advanced surveillance
systems. This paper provides a survey of human motion trajectory prediction. We
review, analyze and structure a large selection of work from different
communities and propose a taxonomy that categorizes existing methods based on
the motion modeling approach and level of contextual information used. We
provide an overview of the existing datasets and performance metrics. We
discuss limitations of the state of the art and outline directions for further
research.
Comment: Submitted to the International Journal of Robotics Research (IJRR), 37 pages
Robotics and Military Operations
In the wake of two extended wars, Western militaries find themselves looking to the future while confronting amorphous nonstate threats and shrinking defense budgets. The 2015 Kingston Conference on International Security (KCIS) examined how robotics and autonomous systems that enhance soldier effectiveness may offer attractive investment opportunities for developing a more efficient force capable of operating effectively in the future environment. This monograph offers three chapters derived from the KCIS, explores the drivers influencing strategic choices associated with these technologies, and offers preliminary policy recommendations geared to advance a comprehensive technology investment strategy. In addition, the publication offers insight into the ethical challenges and potential positive moral implications of using robots on the modern battlefield.
Towards edge robotics: the progress from cloud-based robotic systems to intelligent and context-aware robotic services
Current robotic systems handle a diverse range of applications such as video surveillance, delivery
of goods, cleaning, material handling, assembly, painting, or pick-and-place services. These systems
have been embraced not only by the general population but also by vertical industries to
help them perform daily activities. Traditionally, robotic systems were deployed as
standalone robots exclusively dedicated to performing a specific task, such as cleaning the
floor in indoor environments. In recent years, cloud providers have started to offer their infrastructures
to robotic systems for offloading some of the robot's functions. This distributed form of
robotic system was first introduced 10 years ago as cloud robotics, and nowadays many robotic solutions
appear in this form. As a result, standalone robots became software-enhanced objects
with increased reconfigurability as well as decreased complexity and cost. Moreover, by offloading
heavy processing from the robot to the cloud, it is easier to share services and information among
various robots or agents to achieve better cooperation and coordination.
Cloud robotics is suitable for human-scale responsive and delay-tolerant robotic functionalities
(e.g., monitoring, predictive maintenance). However, there is a whole set of real-time robotic applications
(e.g., remote control, motion planning, autonomous navigation) that cannot be executed with
cloud robotics solutions, mainly because cloud facilities traditionally reside far away from the robots.
While cloud providers can ensure certain performance within their infrastructure, very little can be
ensured in the network between the robots and the cloud, especially in the last hop, where wireless
radio access networks are involved. Over the last few years, advances in edge computing, fog computing,
5G NR, network slicing, Network Function Virtualization (NFV), and network orchestration have stimulated
the interest of the industrial sector in satisfying the stringent real-time requirements of their
applications. Robotic systems are a key piece of the industrial digital transformation, and their benefits
are very well studied in the literature. However, designing and implementing a robotic system
that integrates all the emerging technologies and meets the connectivity requirements (e.g., latency,
reliability) is an ambitious task.
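The latency argument above can be illustrated with a toy placement check (the site names, numbers, and decision rule are hypothetical, purely for illustration): an application runs at the nearest site, onboard, edge, or cloud, whose network round-trip plus compute time fits within its deadline:

```python
def choose_execution_site(app_deadline_ms, compute_ms, rtt_ms):
    """Pick the closest site whose round-trip plus compute time fits
    the application's latency budget. `compute_ms` and `rtt_ms` map a
    site name to its compute time and measured network round trip."""
    # Prefer sites in order of proximity to the robot.
    for site in ("onboard", "edge", "cloud"):
        if rtt_ms.get(site, 0.0) + compute_ms[site] <= app_deadline_ms:
            return site
    return None  # no site can meet the deadline

site = choose_execution_site(
    app_deadline_ms=20,                         # e.g., a control-loop step
    compute_ms={"onboard": 50, "edge": 8, "cloud": 2},
    rtt_ms={"onboard": 0, "edge": 5, "cloud": 60},
)
# onboard: 0 + 50 = 50 > 20; edge: 5 + 8 = 13 <= 20 -> "edge"
```

The cloud wins on raw compute time yet loses on the network round trip, which is precisely why real-time robotic functions gravitate to the edge.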
This thesis studies the integration of modern Information and Communication Technologies (ICTs)
in robotic systems and proposes several robotic enhancements that tackle the real-time constraints of
robotic services. To evaluate the performance of the proposed enhancements, this thesis starts
from the design and prototype implementation of an edge-native robotic system that embodies the concepts of edge computing, fog computing, orchestration, and virtualization. The proposed edge
robotics system serves to represent two exemplary robotic applications: autonomous
navigation of mobile robots and remote control of a robot manipulator, where the end-to-end robotic
system is distributed between the robots and the edge server. The open-source prototype implementation
of the designed edge-native robotic system resulted in the creation of two real-world testbeds
that are used in this thesis as baseline scenarios for the evaluation of new, innovative solutions in
robotic systems.
After detailing the design and prototype implementation of the end-to-end edge-native robotic
system, this thesis proposes several enhancements that can be offered to robotic systems by adopting
the concept of edge computing via the Multi-Access Edge Computing (MEC) framework. First, it
proposes exemplary network context-aware enhancements in which real-time information about
robot connectivity and location is used to dynamically adapt the end-to-end system behavior to
the actual status of the communication (e.g., radio channel). Three different exemplary context-aware
enhancements are proposed that aim to optimize the end-to-end edge-native robotic system. Later,
the thesis studies the capability of the edge-native robotic system to offer potential savings by means of
computation offloading for robot manipulators in different deployment configurations. Further, the
impact of different wireless channels (e.g., 5G, 4G, and Wi-Fi) on supporting the data exchange between a
robot manipulator and its remote controller is assessed.
In the following part of the thesis, the focus is set on how orchestration solutions can support
mobile robot systems in making high-quality decisions. The application of OKpi as an orchestration algorithm
and of DLT-based federation is studied to meet the KPIs that autonomously controlled mobile
robots must satisfy in order to provide uninterrupted connectivity over the radio access network. The elaborated
solutions present high compatibility with the designed edge robotics system, where the robot
driving range is extended without any interruption of the end-to-end edge robotics service. While the
DLT-based federation extends the robot driving range by deploying an access-point extension on top of
external domain infrastructure, OKpi selects the most suitable access point and computing resource
in the cloud-to-thing continuum in order to fulfill the latency requirements of autonomously controlled
mobile robots.
To conclude the thesis, the focus is set on how robotic systems can improve their performance by
leveraging Artificial Intelligence (AI) and Machine Learning (ML) algorithms to generate smart decisions.
To do so, the edge-native robotic system is presented as a true embodiment of a Cyber-Physical
System (CPS) in Industry 4.0, showing the mission of AI in such a concept. It presents the key enabling
technologies of the edge robotic system, such as edge, fog, and 5G, where the physical processes are
integrated with the computing and network domains. The role of AI in each technology domain is identified
by analyzing a set of AI agents at the application and infrastructure levels. In the last part of the
thesis, movement prediction is selected to study the feasibility of applying a forecast-based recovery
mechanism for real-time remote control of robotic manipulators (FoReCo) that uses ML to infer
lost commands caused by interference in the wireless channel. The obtained results showcase
its potential in simulation and real-world experimentation.
Programa de Doctorado en Ingeniería Telemática por la Universidad Carlos III de Madrid. Presidente: Karl Holger. Secretario: Joerg Widmer. Vocal: Claudio Cicconett
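FoReCo's core idea, inferring a lost command from the recent command history, can be sketched with the simplest possible forecaster. The thesis uses a trained ML model; plain linear extrapolation stands in here as an illustrative placeholder:

```python
def recover_command(history, steps_ahead=1):
    """Forecast a lost remote-control setpoint from recent history.
    FoReCo trains an ML model for this; linear extrapolation stands
    in here as the simplest possible forecaster."""
    if len(history) < 2:
        return history[-1]            # nothing to extrapolate from
    delta = history[-1] - history[-2]
    return history[-1] + steps_ahead * delta

# A joint-setpoint stream where the next packet is lost to interference:
received = [0.10, 0.12, 0.14, 0.16]
predicted = recover_command(received)  # extrapolates the trend to ~0.18
```

The filled-in value lets the manipulator controller keep its real-time cadence instead of stalling or repeating a stale command while waiting for a retransmission.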
Enabling Multi-LiDAR Sensing in GNSS-Denied Environments: SLAM Dataset, Benchmark, and UAV Tracking with LiDAR-as-a-camera
The rise of Light Detection and Ranging (LiDAR) sensors has profoundly impacted industries ranging from automotive to urban planning. As these sensors become increasingly affordable and compact, their applications are diversifying, driving precision and innovation. This thesis delves into LiDAR's advancements in autonomous robotic systems, with a focus on its role in simultaneous localization and mapping (SLAM) methodologies and LiDAR-as-a-camera-based tracking of Unmanned Aerial Vehicles (UAVs).
Our contributions span two primary domains: the Multi-Modal LiDAR SLAM Benchmark and LiDAR-as-a-camera UAV Tracking. In the former, we have expanded our previous multi-modal LiDAR dataset by adding more data sequences from various scenarios. In contrast to the previous dataset, we employ different ground-truth-generating approaches. We propose a new multi-modal, multi-LiDAR, SLAM-assisted, ICP-based sensor fusion method for generating ground truth maps. Additionally, we supplement our data with new open-road sequences with GNSS-RTK. This enriched dataset, supported by high-resolution LiDAR, provides detailed insights through an evaluation of ten configurations, pairing diverse LiDAR sensors with state-of-the-art SLAM algorithms. In the latter contribution, we leverage a custom YOLOv5 model trained on panoramic low-resolution images from LiDAR reflectivity (LiDAR-as-a-camera) to detect UAVs, demonstrating the superiority of this approach over point-cloud or image-only methods. Additionally, we evaluate the real-time performance of our approach on the Nvidia Jetson Nano, a popular mobile computing platform.
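The "LiDAR-as-a-camera" representation used for UAV detection can be sketched as a spherical projection of the point cloud's reflectivity channel onto a panoramic image. The resolution and field-of-view values below are illustrative, not those of any specific sensor:

```python
import numpy as np

def points_to_panorama(points, h=64, w=1024, fov_up=16.6, fov_down=-16.6):
    """Project 3D LiDAR points (N x 4: x, y, z, reflectivity) onto a
    panoramic image -- the 'LiDAR-as-a-camera' view. Fields of view
    are in degrees; points colliding on a pixel overwrite each other."""
    x, y, z, refl = points.T
    r = np.linalg.norm(points[:, :3], axis=1)
    yaw = np.arctan2(y, x)                               # azimuth in [-pi, pi]
    pitch = np.arcsin(np.clip(z / np.maximum(r, 1e-9), -1, 1))
    fu, fd = np.radians(fov_up), np.radians(fov_down)
    u = (0.5 * (1 - yaw / np.pi) * w).astype(int) % w    # image column
    v = ((fu - pitch) / (fu - fd) * h).clip(0, h - 1).astype(int)  # row
    img = np.zeros((h, w), dtype=np.float32)
    img[v, u] = refl
    return img
```

Once the cloud is rasterized this way, an off-the-shelf image detector such as YOLOv5 can be applied to it unchanged, which is the appeal of the approach.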
Overall, our research underscores the transformative potential of integrating advanced LiDAR sensors with autonomous robotics. By bridging the gaps between different technological approaches, we pave the way for more versatile and efficient applications in the future.
Vision-Based Control of Unmanned Aerial Vehicles for Automated Structural Monitoring and Geo-Structural Analysis of Civil Infrastructure Systems
The emergence of wireless sensors capable of sensing, embedded computing, and wireless communication has provided an affordable means of monitoring large-scale civil infrastructure systems with ease. To date, the majority of the existing monitoring systems, including those based on wireless sensors, are stationary with measurement nodes installed without an intention for relocation later. Many monitoring applications involving structural and geotechnical systems require a high density of sensors to provide sufficient spatial resolution to their assessment of system performance. While wireless sensors have made high density monitoring systems possible, an alternative approach would be to empower the mobility of the sensors themselves to transform wireless sensor networks (WSNs) into mobile sensor networks (MSNs). In doing so, many benefits would be derived including reducing the total number of sensors needed while introducing the ability to learn from the data obtained to improve the location of sensors installed. One approach to achieving MSNs is to integrate the use of unmanned aerial vehicles (UAVs) into the monitoring application. UAV-based MSNs have the potential to transform current monitoring practices by improving the speed and quality of data collected while reducing overall system costs. The efforts of this study have been chiefly focused upon using autonomous UAVs to deploy, operate, and reconfigure MSNs in a fully autonomous manner for field monitoring of civil infrastructure systems.
This study aims to overcome two main challenges pertaining to UAV-enabled wireless monitoring: the need for high-precision localization methods for outdoor UAV navigation and facilitating modes of direct interaction between UAVs and their built or natural environments. A vision-aided UAV positioning algorithm is first introduced to augment traditional inertial sensing techniques to enhance the ability of UAVs to accurately localize themselves in a civil infrastructure system for placement of wireless sensors. Multi-resolution fiducial markers indicating sensor placement locations are applied to the surface of a structure, serving as navigation guides and precision landing targets for a UAV carrying a wireless sensor. Visual-inertial fusion is implemented via a discrete-time Kalman filter to further increase the robustness of the relative position estimation algorithm, resulting in localization accuracies of 10 cm or smaller. The precision landing of UAVs that allows the MSN topology to change is validated on a simple beam, with the UAV-based MSN collecting ambient response data for extraction of global mode shapes of the structure. The work also explores the integration of a magnetic gripper with a UAV to drop defined weights from an elevation to provide a high-energy seismic source for MSNs engaged in seismic monitoring applications. Leveraging tailored visual detection and precise position control techniques for UAVs, the work illustrates the ability of UAVs to—in a repeated and autonomous fashion—deploy wireless geophones and to introduce an impulsive seismic source for in situ shear wave velocity profiling using the spectral analysis of surface waves (SASW) method. The dispersion curve of the shear wave profile of the geotechnical system is shown to be nearly equal between the autonomous UAV-based MSN architecture and that taken by a traditional wired and manually operated SASW data collection system.
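The visual-inertial fusion step described above can be illustrated with a one-dimensional discrete-time Kalman filter: IMU acceleration drives the prediction, and a fiducial-marker position fix drives the correction. This is a simplified sketch with illustrative noise levels, not the thesis's actual implementation:

```python
import numpy as np

def kf_step(x, P, accel, z_vision, dt, q=0.05, r=0.01):
    """One predict/update cycle of a 1D discrete-time Kalman filter.
    State x = [position, velocity]; IMU acceleration is the control
    input, and z_vision is a vision-based position measurement.
    q and r are illustrative process/measurement noise levels."""
    F = np.array([[1, dt], [0, 1]])            # constant-velocity model
    B = np.array([dt**2 / 2, dt])              # how acceleration enters
    H = np.array([[1.0, 0.0]])                 # vision measures position only
    # Predict from the inertial input.
    x = F @ x + B * accel
    P = F @ P @ F.T + q * np.eye(2)
    # Correct with the fiducial-marker position fix.
    y = z_vision - (H @ x)[0]                  # innovation
    S = (H @ P @ H.T)[0, 0] + r                # innovation variance
    K = (P @ H.T)[:, 0] / S                    # Kalman gain
    x = x + K * y
    P = (np.eye(2) - np.outer(K, H[0])) @ P
    return x, P
```

Between marker sightings the filter coasts on the inertial prediction; each vision fix then pulls the drifting inertial estimate back toward the true position, which is what yields the centimeter-level accuracies reported above.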
The developments and proof-of-concept systems advanced in this study will extend the body of knowledge of robot-deployed MSNs, with the hope of extending the capabilities of monitoring systems while eradicating the need for human intervention in their design and use.
PhD, Civil Engineering, University of Michigan, Horace H. Rackham School of Graduate Studies. http://deepblue.lib.umich.edu/bitstream/2027.42/169980/1/zhh_1.pd
Mobile Robots
The objective of this book is to cover advances in mobile robotics and related technologies applied to the design and development of multi-robot systems. The design of a control system is a complex issue, requiring the application of information technologies to link the robots into a single network. The human-robot interface becomes a demanding task, especially when we try to use sophisticated methods for brain signal processing. Generated electrophysiological signals can be used to command different devices, such as cars, wheelchairs, or even video games. A number of developments in navigation and path planning, including parallel programming, can be observed. Cooperative path planning, formation control of multi-robotic agents, and communication and distance measurement between agents are shown. Training of mobile robot operators is also a very difficult task because of several factors related to different task execution. The presented improvement is related to environment model generation based on autonomous mobile robot observations.
State-of-the-Art Sensors Technology in Spain 2015: Volume 1
This book provides a comprehensive overview of state-of-the-art sensors technology in specific leading areas. Industrial researchers, engineers and professionals can find information on the most advanced technologies and developments, together with data processing. Further research covers specific devices and technologies that capture and distribute data to be processed by applying dedicated techniques or procedures, which is where sensors play the most important role. The book provides insights and solutions for different problems covering a broad spectrum of possibilities, thanks to a set of applications and solutions based on sensory technologies. Topics include:
• Signal analysis for spectral power
• 3D precise measurements
• Electromagnetic propagation
• Drugs detection
• e-health environments based on social sensor networks
• Robots in wireless environments, navigation, teleoperation, object grasping, demining
• Wireless sensor networks
• Industrial IoT
• Insights in smart cities
• Voice recognition
• FPGA interfaces
• Flight mill device for measurements on insects
• Optical systems: UV, LEDs, lasers, fiber optics
• Machine vision
• Power dissipation
• Liquid level in fuel tanks
• Parabolic solar tracker
• Force sensors
• Control for a twin rotor