Unmanned Aerial Systems for Wildland and Forest Fires
Wildfires represent a major natural hazard, causing economic losses, human
deaths, and severe environmental damage. In recent years, fire intensity and
frequency have increased. Research has been conducted on dedicated solutions
for wildland and forest fire assistance and fighting, and systems have been
proposed for the remote detection and tracking of fires. These systems have
shown improvements in efficient data collection and fire characterization
within small-scale environments. However, wildfires cover large areas, making
some of the proposed ground-based systems unsuitable for optimal coverage. To
address this limitation, Unmanned Aerial Systems (UAS) have been proposed. UAS
have proven useful due to their maneuverability, which allows the
implementation of remote sensing, allocation strategies, and task planning.
They can provide a low-cost alternative for the prevention, detection, and
real-time support of firefighting. In this paper we review previous work on
the use of UAS in wildfires. Onboard sensor instruments, fire perception
algorithms, and coordination strategies are considered. In addition, we present
some recent frameworks proposing the use of both aerial vehicles and Unmanned
Ground Vehicles (UGV) for a more efficient wildland firefighting strategy at a
larger scale.
Comment: A recently published version of this paper is available at:
https://doi.org/10.3390/drones501001
Development of Multi-Robotic Arm System for Sorting System Using Computer Vision
This paper develops a multi-robotic arm system and a stereo vision system to sort objects into the correct position according to size and shape attributes. The robotic arm system consists of one master and three slave robots associated with three conveyor belts. Each robotic arm is controlled by a robot controller based on a microcontroller. A master controller is used for the vision system and for communicating with the slave robotic arms using the Modbus RTU protocol through an RS485 serial interface. The stereo vision system is built to determine the 3D coordinates of the object. Instead of rebuilding the entire disparity map, which is computationally expensive, the centroids of the objects in the two images are calculated to determine the depth value. After that, we can calculate the 3D coordinates of the object by using the formula of the pinhole camera model. Objects are picked up and placed on a conveyor branch according to their shape. The conveyor transports the object to the location of the slave robot. Based on the size attribute that the slave robot receives from the master, the object is picked and placed in the right position. Experimental results reveal the effectiveness of the system. The system can be used in industrial processes to reduce the required time and improve the performance of the production line.
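The depth-from-centroids idea described above can be sketched in a few lines; the focal lengths, principal point, and baseline below are illustrative assumptions, not the paper's calibration values:

```python
# Sketch of the centroid-based depth computation: instead of a full
# disparity map, use the horizontal offset between the object centroids
# in the left and right images, then back-project with the pinhole model.
# All intrinsic/extrinsic values are assumed examples.

FX, FY = 700.0, 700.0       # focal lengths [px]
CX, CY = 320.0, 240.0       # principal point [px]
BASELINE = 0.06             # distance between the two cameras [m]

def centroid_to_3d(left_uv, right_uv):
    (ul, vl), (ur, _) = left_uv, right_uv
    disparity = ul - ur                  # horizontal centroid offset [px]
    z = FX * BASELINE / disparity        # depth from the stereo pair
    x = (ul - CX) * z / FX               # pinhole back-projection
    y = (vl - CY) * z / FY
    return x, y, z

# An object whose centroids are 42 px apart lies at depth 1.0 m.
x, y, z = centroid_to_3d((362.0, 240.0), (320.0, 240.0))
print(round(z, 3))  # 1.0
```

Using only the centroids trades dense depth for speed, which is what makes this practical on a microcontroller-class vision pipeline.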
An Approach for Multi-Robot Opportunistic Coexistence in Shared Space
This thesis considers a situation in which multiple robots operate in the
same environment towards the achievement of different tasks. In this situation,
not only the tasks but also the robots themselves
are likely to be heterogeneous, i.e., different from each other in their
morphology, dynamics, sensors, capabilities, etc. As an example, think
about a "smart hotel": small wheeled robots are likely to be devoted to
cleaning floors, whereas a humanoid robot may be devoted to social interaction,
e.g., welcoming guests and providing relevant information to
them upon request.
Under these conditions, robots are required not only to coexist, but also
to coordinate their activities if we want them to exhibit coherent and
effective behavior: this may range from mutual collision avoidance
to more explicitly coordinated behaviors, e.g., task assignment or
cooperative localization.
The issues above have been investigated in depth in the literature. Among
the topics that may play a crucial role to design a successful system, this
thesis focuses on the following ones:
(i) An integrated approach for path following and obstacle avoidance is
applied to unicycle-type robots, by extending an existing algorithm [1],
initially developed for the single-robot case, to the multi-robot domain.
The approach is based on the definition of the path to be followed as a
curve f(x, y) in space, while obstacles are modeled as Gaussian functions
that modify the original function, generating a resulting safe path. The
attractiveness of this methodology, and what makes it look very simple,
is that it neither requires the computation of a projection of the robot
position on the path, nor does it need to consider a moving virtual target
to be tracked. The performance of the proposed approach is analyzed
by means of a series of experiments performed in dynamic environments
with unicycle-type robots, with the robot position estimated both through
odometry and in a motion-capture environment.
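The Gaussian-deformation idea can be illustrated with a minimal sketch: the nominal path is the zero level set of f(x, y), and each obstacle adds a Gaussian bump that bends that level set away from it. The straight-line path and the amplitude/width values below are assumed examples, not the thesis's actual curves.

```python
import numpy as np

def path(x, y):
    # Nominal path defined implicitly as the zero level set f(x, y) = 0;
    # here simply the line y = 0 (an assumed example).
    return y

def gaussian_obstacle(x, y, ox, oy, amplitude=1.0, sigma=0.3):
    # Gaussian bump centered on the obstacle at (ox, oy).
    return amplitude * np.exp(-((x - ox) ** 2 + (y - oy) ** 2) / (2 * sigma ** 2))

def safe_path(x, y, obstacles):
    # Deform the nominal function by adding one Gaussian per obstacle:
    # the zero level set (the path actually followed) bends around them.
    f = path(x, y)
    for ox, oy in obstacles:
        f += gaussian_obstacle(x, y, ox, oy)
    return f

obstacles = [(1.0, 0.0)]  # obstacle sitting directly on the nominal path
# Far from the obstacle the path is unchanged...
print(abs(safe_path(-2.0, 0.0, obstacles)) < 1e-4)  # True
# ...while at the obstacle the function is pushed away from zero,
# so a robot tracking f = 0 deviates around it.
print(safe_path(1.0, 0.0, obstacles) > 0.5)  # True
```

No projection onto the path and no virtual target are needed: the controller only evaluates the modified f at the robot's current position.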
(ii) We investigate the problem of multi-robot cooperative localization
in dynamic environments. Specifically, we propose an approach in which
wheeled robots are localized using the monocular camera embedded in
the head of a Pepper humanoid robot, with the aim of minimizing deviations
from their paths and of avoiding each other during navigation tasks.
Indeed, position estimation requires obtaining a linear relationship between
points in the image and points in the world frame: to this end, an
Inverse Perspective Mapping (IPM) approach has been adopted to transform
the acquired image into a bird's-eye view of the environment. The
scenario is made more complex by the fact that Pepper's head moves
dynamically while tracking the wheeled robots, which requires considering
a different IPM transformation matrix whenever the attitude (pitch
and yaw) of the camera changes. Finally, the IPM position estimate returned
by Pepper is merged with the estimate returned by the odometry
of the wheeled robots through an Extended Kalman Filter. Experiments
are shown with multiple robots moving along different paths in a shared
space, avoiding each other without onboard sensors, i.e., relying
only on mutual positioning information.
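The core IPM computation can be sketched as a ray-ground intersection: back-project a pixel through the intrinsics, rotate the ray by the camera attitude, and intersect it with the floor plane. The intrinsics, camera height, and pitch below are illustrative assumptions, not Pepper's actual calibration.

```python
import numpy as np

# Minimal Inverse Perspective Mapping sketch: project an image pixel onto
# the ground plane (z = 0) for a camera at a known height and pitch.
# All numeric values are assumed examples.

K = np.array([[600.0,   0.0, 320.0],
              [  0.0, 600.0, 240.0],
              [  0.0,   0.0,   1.0]])   # assumed pinhole intrinsics
cam_height = 1.2                         # camera height above the floor [m]
pitch = np.deg2rad(30.0)                 # downward tilt of the camera

def pixel_to_ground(u, v):
    # Back-project pixel (u, v) to a viewing ray in the camera frame
    # (x right, y down, z forward).
    d_cam = np.linalg.inv(K) @ np.array([u, v, 1.0])
    c, s = np.cos(pitch), np.sin(pitch)
    # Rotation taking camera axes into the world frame
    # (X forward, Y left, Z up), with the camera pitched down.
    R = np.array([[ 0.0,  -s,   c ],
                  [-1.0,  0.0,  0.0],
                  [ 0.0,  -c,  -s ]])
    d_world = R @ d_cam
    # Intersect the ray from (0, 0, cam_height) with the plane z = 0.
    t = -cam_height / d_world[2]
    return np.array([0.0, 0.0, cam_height]) + t * d_world

# The image center maps to a point straight ahead on the floor,
# at distance cam_height / tan(pitch).
X, Y, Z = pixel_to_ground(320.0, 240.0)
```

Note that the rotation R depends on the camera attitude, which is why a new IPM mapping must be computed every time Pepper's head pitch or yaw changes.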
The software implementing the theoretical models described above has
been developed in ROS and validated through real experiments
with two types of robots, namely: (i) a unicycle wheeled Roomba robot
(commercially available worldwide), and (ii) a Pepper humanoid robot
(commercially available in Japan and as a B2B model in Europe).
Learning and Searching Methods for Robust, Real-Time Visual Odometry.
Accurate position estimation provides a critical foundation for mobile robot perception and control. Although well studied, the problem of providing timely, precise, and robust position estimates remains difficult for applications that operate in uncontrolled environments, such as robotic exploration and autonomous driving. Continuous, high-rate egomotion estimation is possible using cameras and Visual Odometry (VO), which tracks the movement of sparse scene content known as image keypoints or features. However, high update rates, often 30 Hz or greater, leave little computation time per frame, while variability in scene content stresses robustness. Due to these challenges, implementing an accurate and robust visual odometry system remains difficult.
This thesis investigates fundamental improvements throughout all stages of a visual odometry system, and has three primary contributions. The first contribution is a machine learning method for feature detector design. This method considers end-to-end motion estimation accuracy during learning. Consequently, accuracy and robustness are improved across multiple challenging datasets in comparison to state-of-the-art alternatives. The second contribution is a proposed feature descriptor, TailoredBRIEF, that builds upon recent advances in fast, low-memory descriptor extraction and matching. TailoredBRIEF is an in-situ descriptor learning method that improves feature matching accuracy by efficiently customizing descriptor structures on a per-feature basis. Further, a common asymmetry in vision system design between reference and query images is described and exploited, enabling approaches that would otherwise exceed runtime constraints. The final contribution is a new algorithm for visual motion estimation: Perspective Alignment Search (PAS). Many vision systems depend on the unique appearance of features during matching, despite a large quantity of non-unique features in otherwise barren environments. A search-based method, PAS, is proposed to employ features that lack unique appearance through descriptorless matching. This method simplifies visual odometry pipelines, defining one method that subsumes feature matching, outlier rejection, and motion estimation.
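The matching primitive that BRIEF-style descriptors build on is Hamming distance over bit strings. The sketch below shows that primitive only; it is not TailoredBRIEF's per-feature customization, and the descriptor length and data are assumed examples.

```python
import numpy as np

# Binary-descriptor matching sketch: BRIEF-style descriptors are packed
# bit strings compared by Hamming distance (fast and low-memory, since
# XOR + popcount replaces floating-point distance computations).

rng = np.random.default_rng(0)

def hamming(a, b):
    # Number of differing bits between two packed uint8 descriptors.
    return int(np.unpackbits(a ^ b).sum())

def match(query, reference):
    # Brute-force nearest neighbor in Hamming space.
    return [min(range(len(reference)), key=lambda j: hamming(q, reference[j]))
            for q in query]

# Five random 256-bit (32-byte) reference descriptors.
reference = [rng.integers(0, 256, 32, dtype=np.uint8) for _ in range(5)]
# Queries are noisy copies of reference descriptors 3 and 1 (a few bits flipped).
q0 = reference[3].copy(); q0[0] ^= 0b11
q1 = reference[1].copy(); q1[10] ^= 0b1
print(match([q0, q1], reference))  # [3, 1]
```

The reference/query asymmetry mentioned above fits naturally here: reference descriptors can be precomputed or restructured offline, while query descriptors must be handled within the per-frame time budget.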
Throughout this work, evaluations of the proposed methods and systems are carried out on ground-truth datasets, often generated with custom experimental platforms in challenging environments. Particular focus is placed on preserving runtimes compatible with real-time operation, as is necessary for deployment in the field.
PhD thesis, Computer Science and Engineering, University of Michigan, Horace H. Rackham School of Graduate Studies. http://deepblue.lib.umich.edu/bitstream/2027.42/113365/1/chardson_1.pd
Model-Based Environmental Visual Perception for Humanoid Robots
The visual perception of a robot should answer two fundamental questions: What? and Where? To answer these questions properly and efficiently, it is essential to establish a bidirectional coupling between external stimuli and internal representations. This coupling links the physical world with inner abstraction models through sensor transformation, recognition, matching, and optimization algorithms. The objective of this PhD is to establish this sensor-model coupling.
Real-time trajectory generation for dynamic systems with nonholonomic constraints using Player/Stage and NTG.
This thesis will present various methods of trajectory generation for various types of mobile robots. It will then progress to evaluating robot operating systems (ROSs) that can be used to control and simulate mobile robots, and it will explain why Player/Stage was chosen as the ROS for this thesis. It will then discuss Nonlinear Trajectory Generation (NTG) as the main method for producing a path for mobile robots with dynamic and kinematic constraints. Finally, it will combine Player, Stage, and NTG into a system that produces a trajectory in real time for a mobile robot and simulates a differential drive robot being driven from the initial state to the goal state in the presence of obstacles. Experiments will include the following: blobfinding for physical and simulated camera systems, position control of physical and simulated differential drive robots, wall following using simulated range sensors, trajectory generation for omnidirectional and differential drive robots, and a combination of blobfinding, position control, and trajectory generation. Each experiment was a success, to varying degrees. The culmination of the thesis will present a real-time trajectory generation and position control method for a differential drive robot in the presence of obstacles.
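The position-control experiments above rest on the standard unicycle/differential-drive model; a minimal go-to-goal sketch is shown below. The proportional gains, step size, and tolerance are assumed values for illustration, not the thesis's controller or NTG itself.

```python
import math

# Differential-drive position control sketch: integrate the unicycle model
# under a simple proportional go-to-goal law. Gains and step size are
# illustrative assumptions.

def go_to_goal(x, y, theta, gx, gy, k_v=0.5, k_w=2.0, dt=0.05, steps=2000):
    for _ in range(steps):
        dx, dy = gx - x, gy - y
        rho = math.hypot(dx, dy)                              # distance to goal
        if rho < 0.01:
            break
        alpha = math.atan2(dy, dx) - theta                    # heading error
        alpha = math.atan2(math.sin(alpha), math.cos(alpha))  # wrap to [-pi, pi]
        v, w = k_v * rho, k_w * alpha                         # proportional control
        x += v * math.cos(theta) * dt                         # unicycle kinematics
        y += v * math.sin(theta) * dt
        theta += w * dt
    return x, y

print(go_to_goal(0.0, 0.0, 0.0, 1.0, 1.0))  # converges near (1, 1)
```

NTG replaces this reactive law with an optimized trajectory that respects the same nonholonomic constraint: the robot cannot translate sideways, only drive forward along its heading while turning.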
Improving the safety and efficiency of rail yard operations using robotics
Significant efforts have been expended by the railroad industry to make operations safer and more efficient through the intelligent use of sensor data. This work proposes to take the technology one step further and use this data for the control of physical systems designed to automate hazardous railroad operations, particularly those that require humans to interact with moving trains. To accomplish this, application-specific requirements must be established to design self-contained machine vision and robotic solutions that eliminate the risks associated with existing manual operations. Present-day rail yard operations have been identified as good candidates to begin development. Manual uncoupling, in particular, of rolling stock in classification yards has been investigated. To automate this process, an intelligent robotic system must be able to detect, track, approach, contact, and manipulate constrained objects on equipment in motion. This work presents multiple prototypes capable of autonomously uncoupling full-scale freight cars using feedback from the surrounding environment. Geometric image-processing algorithms and machine learning techniques were implemented to accurately identify cylindrical objects in point clouds generated in real time. Unique methods fusing velocity and vision data were developed to synchronize a pair of moving rigid bodies in real time. Multiple custom end-effectors with built-in compliance and fault tolerance were designed, fabricated, and tested for grasping and manipulating cylindrical objects. Finally, an event-driven robotic control application was developed to safely and reliably uncouple freight cars using data from 3D cameras, velocity sensors, force/torque transducers, and intelligent end-effector tooling. Experimental results in a lab setting confirm that modern robotic and sensing hardware can be used to reliably separate pairs of rolling stock moving at up to two miles per hour.
Additionally, subcomponents of the autonomous pin-pulling system (APPS) were designed to be modular to the point where they could be used to automate other hazardous, labor-intensive tasks found in U.S. classification yards. Overall, this work supports the deployment of autonomous robotic systems in semi-unstructured yard environments to increase the safety and efficiency of rail operations.
Mechanical Engineering
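One subproblem mentioned above, identifying cylindrical objects in point clouds, can be illustrated with a simplified 2D slice: a cylinder's cross-section perpendicular to its axis is a circle, so an algebraic (Kasa) least-squares circle fit locates it. This is an assumed simplification for illustration, not the thesis's actual geometric pipeline.

```python
import numpy as np

# Kasa algebraic circle fit: solve x^2 + y^2 + a*x + b*y + c = 0
# in least squares, then recover center and radius. Applied to a 2D
# slice of a point cloud, this locates a cylindrical object's cross-section.

def fit_circle(pts):
    x, y = pts[:, 0], pts[:, 1]
    A = np.column_stack([x, y, np.ones_like(x)])
    rhs = -(x ** 2 + y ** 2)
    a, b, c = np.linalg.lstsq(A, rhs, rcond=None)[0]
    cx, cy = -a / 2, -b / 2
    r = np.sqrt(cx ** 2 + cy ** 2 - c)
    return cx, cy, r

# Synthetic noisy points on a 4 cm radius circle centered at (1.0, 0.5) m,
# standing in for one scan slice across a cylindrical handle.
rng = np.random.default_rng(1)
t = rng.uniform(0, 2 * np.pi, 200)
pts = np.column_stack([1.0 + 0.04 * np.cos(t), 0.5 + 0.04 * np.sin(t)])
pts += rng.normal(0, 0.002, pts.shape)
cx, cy, r = fit_circle(pts)
```

Fitting slice by slice along the cylinder axis yields both the object's position and its orientation, which is the information an end-effector needs to approach and grasp it.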