Urban Drone Navigation: Autoencoder Learning Fusion for Aerodynamics
Drones are vital for urban emergency search and rescue (SAR) due to the
challenges of navigating dynamic environments with obstacles like buildings and
wind. This paper presents a method that combines multi-objective reinforcement
learning (MORL) with a convolutional autoencoder to improve drone navigation in
urban SAR. The approach uses MORL to achieve multiple goals and the autoencoder
for cost-effective wind simulations. By utilizing imagery data of urban
layouts, the drone can autonomously make navigation decisions, optimize paths,
and counteract wind effects without traditional sensors. Tested on a New York
City model, this method enhances drone SAR operations in complex urban
settings.
Comment: 47 pages
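The multi-objective setup the abstract describes can be illustrated with a common MORL ingredient: scalarizing several per-step reward terms into one signal. The sketch below is a hypothetical illustration; the objective names and weights are assumptions, not taken from the paper.

```python
# Hypothetical MORL illustration: linear scalarization of competing
# navigation objectives. Objective names and weights are assumptions.

def scalarize(rewards: dict, weights: dict) -> float:
    """Collapse a vector of per-objective rewards into one scalar."""
    return sum(weights[k] * rewards[k] for k in rewards)

# Example: a drone step that makes progress toward the target but pays
# small penalties for wind drift and proximity to obstacles.
step_rewards = {"goal_progress": 1.0, "wind_drift": -0.3, "obstacle_proximity": -0.2}
step_weights = {"goal_progress": 0.6, "wind_drift": 0.2, "obstacle_proximity": 0.2}

print(round(scalarize(step_rewards, step_weights), 3))  # 0.5
```

Varying the weight vector traces out different trade-offs between reaching the target quickly and resisting wind, which is the kind of multi-goal balancing MORL is meant to handle.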
Drone Obstacle Avoidance and Navigation Using Artificial Intelligence
This thesis presents the implementation and integration of a robust obstacle avoidance and navigation module with ArduPilot. It examines the problems in the current obstacle avoidance solution and mitigates them with a new design. Given recent innovations in artificial intelligence, it also explores opportunities to enable and improve obstacle avoidance and navigation using AI techniques. Because implementing the design requires an understanding of the different types of sensors used for navigation and obstacle avoidance, a study of these sensors is presented as background. Autonomous cars are also studied to better understand autonomy and how they solve the problems of obstacle avoidance and navigation. The implementation part of the thesis focuses on the design of a robust obstacle avoidance module, which is tested with obstacle avoidance sensors such as the Garmin lidar and the Intel RealSense R200. Image segmentation is used to verify that a convolutional neural network can help characterize the nature of obstacles. Similarly, end-to-end control from a single camera input using a deep neural network is used to verify that AI can be applied to navigation. In the end, a robust obstacle avoidance library is developed and tested both in a simulator and on a real drone. Image segmentation is implemented, deployed, and tested, and the possibility of end-to-end control is verified with a proof of concept.
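The core decision in a distance-based obstacle avoidance module like the one described can be sketched as a mapping from a rangefinder reading to a high-level command. This is a minimal illustration; the margins and function name are assumptions, not ArduPilot APIs.

```python
# Hedged sketch: map a single forward range reading (e.g. from a
# Garmin lidar) to a high-level avoidance command. Thresholds and
# names are illustrative assumptions, not part of ArduPilot.

def avoidance_command(distance_m: float, stop_margin: float = 2.0,
                      slow_margin: float = 5.0) -> str:
    """Stop inside the safety margin, slow in the caution zone,
    otherwise continue."""
    if distance_m <= stop_margin:
        return "STOP"
    if distance_m <= slow_margin:
        return "SLOW"
    return "CONTINUE"

for d in (1.5, 4.0, 10.0):
    print(d, avoidance_command(d))
```

A real module would fuse multiple sensors and directions, but the threshold structure above is the usual starting point for proximity-based avoidance.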
Multi-Robot Systems: Challenges, Trends and Applications
This book is a printed edition of the Special Issue entitled “Multi-Robot Systems: Challenges, Trends, and Applications” that was published in Applied Sciences. This Special Issue collected seventeen high-quality papers that discuss the main challenges of multi-robot systems, present trends to address these issues, and report various relevant applications. Topics addressed by these papers include robot swarms, mission planning, robot teaming, machine learning, immersive technologies, search and rescue, and social robotics.
Hybrid Internal Model: Learning Agile Legged Locomotion with Simulated Robot Response
Robust locomotion control depends on accurate state estimations. However, the
sensors of most legged robots can only provide partial and noisy observations,
making the estimation particularly challenging, especially for external states
like terrain friction and elevation maps. Inspired by the classical Internal
Model Control principle, we consider these external states as disturbances and
introduce Hybrid Internal Model (HIM) to estimate them according to the
response of the robot. The response, which we refer to as the hybrid internal
embedding, contains the robot's explicit velocity and implicit stability
representation, corresponding to two primary goals for locomotion tasks:
explicitly tracking velocity and implicitly maintaining stability. We use
contrastive learning to optimize the embedding to be close to the robot's
successor state, in which the response is naturally embedded. HIM has several
appealing benefits: it needs only the robot's proprioception, i.e.,
observations from joint encoders and an IMU. It maintains consistent
observations between the simulation reference and reality, avoiding the
information loss of imitation learning. It exploits batch-level information,
which is more robust to noise and yields better sample efficiency. It requires only 1 hour of
training on an RTX 4090 to enable a quadruped robot to traverse any terrain
under any disturbances. A wealth of real-world experiments demonstrates its
agility, even in high-difficulty tasks and in cases that never occurred during
training, revealing remarkable open-world generalizability.
Comment: 1 hour of training suffices for a quadruped robot to traverse any
terrain under any disturbance in the open world. Project page:
https://github.com/OpenRobotLab/HIMLoc
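The contrastive objective the abstract describes, pulling each hybrid internal embedding toward the embedding of the robot's successor state while pushing it away from other samples in the batch, can be sketched as an InfoNCE-style loss. The shapes and temperature below are assumptions, not taken from the paper's code.

```python
import numpy as np

# Illustrative InfoNCE-style contrastive loss: row i of `embed` should
# match row i of `succ` (the successor-state embedding) and mismatch
# every other row in the batch. Temperature and shapes are assumptions.

def info_nce(embed: np.ndarray, succ: np.ndarray, temperature: float = 0.1) -> float:
    """embed, succ: (batch, dim), L2-normalized; positives are row-aligned."""
    logits = embed @ succ.T / temperature          # pairwise similarities
    logits -= logits.max(axis=1, keepdims=True)    # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return float(-np.mean(np.diag(log_probs)))     # -log p(correct positive)

rng = np.random.default_rng(0)
z = rng.normal(size=(8, 16))
z /= np.linalg.norm(z, axis=1, keepdims=True)
# When embeddings align with their successor embeddings, the loss is
# lower than when the pairing is scrambled.
print(info_nce(z, z) < info_nce(z, np.roll(z, 1, axis=0)))  # True
```

Batch-level losses of this form use every other sample as a negative, which is one reason the abstract notes robustness to noise and good sample efficiency.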
Core Challenges in Embodied Vision-Language Planning
Recent advances in the areas of multimodal machine learning and artificial
intelligence (AI) have led to the development of challenging tasks at the
intersection of Computer Vision, Natural Language Processing, and Embodied AI.
Whereas many approaches and previous survey pursuits have characterised one or
two of these dimensions, there has not been a holistic analysis at the center
of all three. Moreover, even when combinations of these topics are considered,
more focus is placed on describing, e.g., current architectural methods, as
opposed to also illustrating high-level challenges and opportunities for the
field. In this survey paper, we discuss Embodied Vision-Language Planning
(EVLP) tasks, a family of prominent embodied navigation and manipulation
problems that jointly use computer vision and natural language. We propose a
taxonomy to unify these tasks and provide an in-depth analysis and comparison
of the new and current algorithmic approaches, metrics, simulated environments,
as well as the datasets used for EVLP tasks. Finally, we present the core
challenges that we believe new EVLP works should seek to address, and we
advocate for task construction that enables model generalizability and furthers
real-world deployment.
Comment: 35 pages
An overview of robotics and autonomous systems for harsh environments
Across a wide range of industries and applications, robotics and autonomous systems can fulfil crucial and challenging tasks such as inspection, exploration, monitoring, drilling, sampling and mapping in areas of scientific discovery, disaster prevention, human rescue and infrastructure management. However, in many situations, the associated environment is either too dangerous or inaccessible to humans. Hence, a wide range of robots have been developed and deployed to replace or aid humans in these activities. A look at these harsh-environment applications of robotics demonstrates the diversity of the technologies developed. This paper reviews some key application areas of robotics that involve interactions with harsh environments (such as search and rescue, space exploration, and deep-sea operations), gives an overview of the developed technologies, and discusses the key trends and future directions common to many of these areas.