Robot Localization Using Visual Image Mapping
One critical step in providing the Air Force the capability to explore unknown environments is for an autonomous agent to be able to determine its location. The calculation of the robot's pose is an optimization problem that uses the robot's internal navigation sensors and data fusion of range sensor readings to find the most likely pose. This data fusion process requires the simultaneous generation of a map, which the autonomous vehicle can then use to avoid obstacles, communicate with other agents in the same environment, and locate targets. Our solution entails mounting a Class 1 laser to an ERS-7 AIBO. The laser projects a horizontal line on obstacles in the AIBO camera's field of view. Range readings are determined by capturing and processing multiple image frames, resolving the laser line relative to the horizon, and extracting distance information to each obstacle. This range data is then used in conjunction with mapping and localization software to accurately navigate the AIBO.
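The laser-line ranging described above is a form of triangulation: the further an obstacle is, the closer the projected line appears to the image horizon. A minimal sketch of that geometry, assuming the laser is mounted a fixed vertical baseline below the camera and projects parallel to the optical axis (the focal length and baseline values here are illustrative, not the system's actual calibration):

```python
def laser_range(pixel_offset, focal_px=420.0, baseline_m=0.05):
    """Triangulate distance to an obstacle from the laser line's
    vertical pixel offset below the camera's optical axis.

    Assumes the laser sits baseline_m below the camera and projects
    a plane parallel to the optical axis, giving the classic
    relation distance = focal_length * baseline / offset.
    """
    if pixel_offset <= 0:
        raise ValueError("laser line must appear below the optical axis")
    return focal_px * baseline_m / pixel_offset


# Example: a line 21 px below the axis corresponds to an obstacle 1 m away
distance = laser_range(21.0)
```

Repeating this per image column along the detected laser line yields the range scan that the mapping and localization software consumes.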
Methods for the automatic alignment of colour histograms
Colour provides important information in many image processing tasks such as object identification and
tracking. Different images of the same object frequently yield different colour values due to undesired
variations in lighting and the camera. In practice, controlling the source of these fluctuations is difficult,
uneconomical or even impossible in a particular imaging environment. This thesis is concerned with the
question of how to best align the corresponding clusters of colour histograms to reduce or remove the
effect of these undesired variations.
We introduce feature based histogram alignment (FBHA) algorithms that enable flexible alignment
transformations to be applied. The FBHA approach has three steps: 1) feature detection in the colour
histograms, 2) feature association, and 3) feature alignment. We investigate the choices for these three
steps on two colour databases: 1) a structured and labeled database of RGB imagery acquired under controlled
camera, lighting and object variation and 2) grey-level video streams from an industrial inspection
application. The design and acquisition of the RGB image and grey-level video databases are a key contribution
of the thesis. The databases are used to quantitatively compare the FBHA approach against
existing methodologies and show it to be effective. FBHA is intended to provide a generic method for
aligning colour histograms, it only uses information from the histograms and therefore ignores spatial
information in the image. Spatial information and other context sensitive cues are deliberately avoided
to maintain the generic nature of the algorithm; by ignoring some of this important information we gain
useful insights into the performance limits of a colour alignment algorithm that works from the colour
histogram alone, this helps understand the limits of a generic approach to colour alignment
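The three FBHA steps can be sketched on 1-D histograms. This is only an illustration of the detect/associate/align structure, with deliberately naive choices (local maxima as features, in-order pairing as association, a linear least-squares fit as the alignment transform); the thesis's actual detectors and transformations differ:

```python
import numpy as np

def fbha_align(src_hist, ref_hist):
    """Sketch of feature-based histogram alignment (FBHA):
    1) detect features in each histogram (here: simple local maxima),
    2) associate features (here: naively paired in bin order),
    3) fit an alignment transform between matched bins
       (here: a linear map bin' = a * bin + b).
    Returns a callable mapping source bin positions to reference bins.
    """
    def peaks(h):
        return [i for i in range(1, len(h) - 1)
                if h[i] > h[i - 1] and h[i] > h[i + 1]]

    ps, pr = peaks(src_hist), peaks(ref_hist)
    n = min(len(ps), len(pr))
    if n < 2:
        raise ValueError("need at least two matched features")
    a, b = np.polyfit(ps[:n], pr[:n], 1)
    return lambda bin_idx: a * bin_idx + b
```

Applying the returned mapping to every bin of the source histogram aligns its clusters to the reference, using no spatial image information at all.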
World Robot Challenge 2020 -- Partner Robot: A Data-Driven Approach for Room Tidying with Mobile Manipulator
Tidying up a household environment using a mobile manipulator poses various
challenges in robotics, such as adaptation to large real-world environmental
variations, and safe and robust deployment in the presence of humans. The
Partner Robot Challenge in the World Robot Challenge (WRC) 2020, a global
competition held in September 2021, benchmarked tidying tasks in real home
environments and, importantly, tested for full system performance. For this
challenge, we developed an entire household service robot system, which
leverages a data-driven approach to adapt to numerous edge cases that occur
during the execution, instead of classical manual pre-programmed solutions. In
this paper, we describe the core ingredients of the proposed robot system,
including visual recognition, object manipulation, and motion planning. Our
robot system won the second prize, verifying the effectiveness and potential of
data-driven robot systems for mobile manipulation in home environments.
Omnidirectional Stereo Vision for Autonomous Vehicles
Environment perception with cameras is an important requirement for many applications for autonomous vehicles and robots. This work presents a stereoscopic omnidirectional camera system for autonomous vehicles which resolves the problem of a limited field of view and provides a 360° panoramic view of the environment. We present a new projection model for these cameras and show that the camera setup overcomes major drawbacks of traditional perspective cameras in many applications.
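A projection model for an omnidirectional camera maps each panorama pixel to a 3-D viewing ray. The paper proposes its own model, which the sketch below does not reproduce; instead it shows the commonly used equirectangular mapping, purely to illustrate what such a model computes:

```python
import math

def panorama_ray(u, v, width, height):
    """Map a pixel (u, v) in a 360° equirectangular panorama to a
    unit viewing ray (x, y, z). Longitude spans the full horizontal
    circle; latitude spans pole to pole vertically.
    """
    lon = (u / width) * 2.0 * math.pi - math.pi     # -pi .. +pi
    lat = math.pi / 2.0 - (v / height) * math.pi    # +pi/2 .. -pi/2
    x = math.cos(lat) * math.sin(lon)
    y = math.sin(lat)
    z = math.cos(lat) * math.cos(lon)
    return x, y, z
```

With two such panoramas at a known baseline, matching a pixel in one to a pixel in the other lets the corresponding rays be triangulated for 360° stereo depth, avoiding the limited field of view of a perspective camera pair.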