A bank of unscented Kalman filters for multimodal human perception with mobile service robots
A new generation of mobile service robots could soon be ready to operate in human environments if they can robustly estimate the position and identity of surrounding people. Researchers in this field face a number of challenging problems, among them sensor uncertainty and real-time constraints.
In this paper, we propose a novel and efficient solution for the simultaneous tracking and recognition of people within the observation range of a mobile robot. Multisensor techniques for leg and face detection are fused within a robust probabilistic framework with height, clothes and face recognition algorithms. The system is based on an efficient bank of Unscented Kalman Filters that maintains a multi-hypothesis estimate of the person being tracked, including the case where the latter is unknown to the robot.
Several experiments with real mobile robots are presented to validate the proposed approach. They show that our solution improves the robot's perception and recognition of humans, providing a useful contribution toward future applications of service robotics.
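The multi-hypothesis idea behind a filter bank can be illustrated with a minimal sketch: each identity hypothesis (including "unknown") carries a weight that is re-normalized with the likelihood of each new observation. The function, numbers, and hypothesis set below are illustrative assumptions, not the paper's actual implementation.

```python
# Hypothetical sketch: one hypothesis per known identity, plus "unknown".
# Each weight is updated with the likelihood of the latest observation,
# as in a bank of filters. All values are illustrative.

def update_hypotheses(weights, likelihoods):
    """Bayesian weight update for a bank of filters.

    weights: prior probability of each identity hypothesis
    likelihoods: p(observation | hypothesis), e.g. from height/face models
    """
    posterior = [w * l for w, l in zip(weights, likelihoods)]
    total = sum(posterior)
    if total == 0.0:
        return weights  # uninformative observation: keep the prior
    return [p / total for p in posterior]

# Example: three known identities plus an "unknown person" hypothesis.
prior = [0.25, 0.25, 0.25, 0.25]
obs_likelihood = [0.9, 0.1, 0.1, 0.2]  # a face match favours identity 0
post = update_hypotheses(prior, obs_likelihood)
print(post[0] > 0.6)  # identity 0 now dominates
```

Over successive observations the weights concentrate on one identity, or on the "unknown" hypothesis when no model fits.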
Sensor Network Based Collision-Free Navigation and Map Building for Mobile Robots
Safe robot navigation is a fundamental research field for autonomous robots
including ground mobile robots and flying robots. The primary objective of a
safe robot navigation algorithm is to guide an autonomous robot from its
initial position to a target or along a desired path with obstacle avoidance.
With the development of information and sensor technology, implementations
that combine robotics with sensor networks have become a focus of recent
research. One relevant implementation is sensor-network-based robot
navigation. Another important navigation problem in robotics is safe area
search and map building. In this report, a global collision-free path planning
algorithm for ground mobile robots in dynamic environments is presented first.
Building on the advantages of a sensor network, this path planning algorithm
is then developed into a sensor-network-based navigation algorithm for ground
mobile robots. A network of 2D range finder sensors is used to detect static
and dynamic obstacles, and the sensor network guides each ground mobile robot
through the detected safe area to its target. The navigation algorithm is then
extended to 3D environments: with the measurements of the sensor network, any
flying robot in the workspace is navigated by the presented algorithm from its
initial position to the target. Finally, another navigation problem, safe area
search and map building for ground mobile robots, is studied and two
algorithms are presented. In the first method, a ground mobile robot equipped
with a 2D range finder sensor searches a bounded 2D area without any collision
and builds a complete 2D map of the area. This map building algorithm is then
extended to an algorithm for 3D map building.
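As a minimal illustration of the kind of global collision-free planning described above, the sketch below runs a breadth-first search over a small occupancy grid, such as one a 2D range finder network could populate with detected obstacles. The grid, coordinates, and function name are illustrative, not the report's algorithm.

```python
from collections import deque

def bfs_path(grid, start, goal):
    """Shortest collision-free path on a 4-connected grid; 1 = obstacle."""
    rows, cols = len(grid), len(grid[0])
    parent = {start: None}
    queue = deque([start])
    while queue:
        cell = queue.popleft()
        if cell == goal:
            path = []
            while cell is not None:  # walk parents back to the start
                path.append(cell)
                cell = parent[cell]
            return path[::-1]
        r, c = cell
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols \
                    and grid[nr][nc] == 0 and (nr, nc) not in parent:
                parent[(nr, nc)] = cell
                queue.append((nr, nc))
    return None  # no collision-free path exists

grid = [[0, 0, 0],
        [1, 1, 0],  # a wall the robot must go around
        [0, 0, 0]]
path = bfs_path(grid, (0, 0), (2, 0))
print(path)
```

In a dynamic environment the grid would be refreshed from sensor-network detections and the search re-run as obstacles move.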
Online Mapping and Perception Algorithms for Multi-robot Teams Operating in Urban Environments.
This thesis investigates some of the sensing and perception challenges faced
by multi-robot teams equipped with LIDAR and camera
sensors. Multi-robot teams are ideal for deployment in large,
real-world environments due to their ability to parallelize exploration,
reconnaissance or mapping tasks.
However, such domains also impose additional requirements, including the
need for a) online algorithms (to eliminate stopping and waiting for
processing to finish before proceeding) and b) scalability (to handle
data from many robots distributed over a large area).
These general requirements give rise to specific algorithmic challenges, including 1) online maintenance of large, coherent
maps covering the explored area, 2) online estimation of communication properties
in the presence of buildings and other interfering structure, and 3)
online fusion and segmentation of multiple sensors to aid in object detection.
The contribution of this thesis is the introduction of novel
approaches that leverage grid maps and sparse multivariate Gaussian
inference to augment the capability of multi-robot teams operating in
urban, indoor-outdoor environments by improving the state of the art
of map rasterization, signal strength prediction, colored point cloud
segmentation, and reliable camera calibration.
In particular, we introduce a map rasterization technique for large
LIDAR-based occupancy grids that makes online updates possible when
data is arriving from many robots at once. We also introduce new
online techniques for robots to predict the signal strength to their
teammates by combining LIDAR measurements with signal strength
measurements from their radios. Processing fused LIDAR+camera point
clouds is also important for many object-detection pipelines. We
demonstrate a near linear-time online segmentation algorithm in this
domain. However, maintaining the calibration of a fleet of 14 robots
made this approach difficult to employ in practice.
Therefore, we introduce a robust and repeatable
camera calibration process that grounds the camera model uncertainty in pixel
error, allowing the system to guide novices and experts alike to reliably produce accurate calibrations.
PhD thesis, Computer Science and Engineering, University of Michigan, Horace H. Rackham School of Graduate Studies. http://deepblue.lib.umich.edu/bitstream/2027.42/113516/1/jhstrom_1.pd
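The LIDAR-based occupancy grids discussed above rest on the standard log-odds cell update, which a minimal sketch can illustrate; the inverse-sensor-model values below are illustrative rather than those used in the thesis.

```python
import math

# Log-odds occupancy-grid update: each cell stores the log odds of being
# occupied, incremented per LIDAR "hit" and decremented per "miss".
# The hit/miss probabilities (0.7 / 0.3) are illustrative.

L_OCC = math.log(0.7 / 0.3)   # increment for a beam endpoint ("hit")
L_FREE = math.log(0.3 / 0.7)  # decrement for a traversed cell ("miss")

def update_cell(logodds, hit):
    return logodds + (L_OCC if hit else L_FREE)

def probability(logodds):
    """Convert log odds back to an occupancy probability."""
    return 1.0 - 1.0 / (1.0 + math.exp(logodds))

l = 0.0  # unknown cell, p = 0.5
for _ in range(3):
    l = update_cell(l, hit=True)  # three consistent LIDAR hits
print(round(probability(l), 3))  # → 0.927
```

Because the update is a single addition per cell, it stays cheap enough for online use even with data arriving from many robots at once.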
Autonomous navigation for guide following in crowded indoor environments
The requirements for assisted living are rapidly changing as the number of elderly
patients over the age of 60 continues to increase. This rise places a high level of stress on
nurse practitioners, who must care for more patients than they can manage. As this trend is
expected to continue, new technology will be required to help care for patients. Mobile
robots present an opportunity to help alleviate the stress on nurse practitioners by
monitoring and performing remedial tasks for elderly patients. In order to produce
mobile robots with the ability to perform these tasks, however, many challenges must be
overcome.
The hospital environment requires a high level of safety to prevent patient injury. Any
facility that uses mobile robots, therefore, must be able to ensure that no harm will come
to patients whilst in a care environment. This requires the robot to build a high level of
understanding of the environment and the people in close proximity to the robot.
Hitherto, most mobile robots have used vision-based sensors or 2D laser range finders.
3D time-of-flight sensors have recently been introduced and provide dense 3D point
clouds of the environment at real-time frame rates. This provides mobile robots with
previously unavailable dense information in real-time. I investigate the use of time-of-flight
cameras for mobile robot navigation in crowded environments in this thesis. A
unified framework to allow the robot to follow a guide through an indoor environment
safely and efficiently is presented. Each component of the framework is analyzed in
detail, with real-world scenarios illustrating its practical use.
Time-of-flight cameras are relatively new sensors and therefore have inherent problems
that must be overcome to obtain consistent and accurate data. In this thesis, I propose a
novel and practical probabilistic framework to overcome many of these inherent
problems. The framework fuses multiple depth maps with color information, forming a
reliable and consistent view of the world. In order for the robot to interact with the
environment, contextual information is required. To this end, I propose a region-growing
segmentation algorithm to group points based on their surface characteristics, namely surface normal
and surface curvature. The segmentation process creates a distinct set of surfaces;
however, only a limited amount of contextual information is available to allow for
interaction. Therefore, a novel classifier is proposed using spherical harmonics to
differentiate people from all other objects.
The added ability to identify people allows the robot to find potential candidates to
follow. However, for safe navigation, the robot must continuously track all visible
objects to obtain positional and velocity information. A multi-object tracking system is
investigated to track visible objects reliably using multiple cues: shape and color. The
tracking system allows the robot to react to the dynamic nature of people by building an
estimate of the motion flow. This flow provides the robot with the necessary information
to determine where and at what speeds it is safe to drive. In addition, a novel search
strategy is proposed to allow the robot to recover a guide who has left the field-of-view.
To achieve this, a search map is constructed with areas of the environment ranked
according to how likely they are to reveal the guide’s true location. Then, the robot can
approach the most likely search area to recover the guide. Finally, all components
presented are joined to follow a guide through an indoor environment. The results
achieved demonstrate the efficacy of the proposed components.
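The region-growing segmentation step described above can be illustrated with a simplified 2D sketch, where cells are grouped by similarity of a scalar surface property standing in for the normal and curvature criteria; the data, threshold, and function name are illustrative, not the thesis's actual algorithm.

```python
# Simplified region growing: neighbouring cells join a region when their
# scalar surface values agree within a threshold. All values illustrative.

def region_grow(values, threshold):
    """Label connected cells whose values differ by less than `threshold`."""
    rows, cols = len(values), len(values[0])
    labels = [[-1] * cols for _ in range(rows)]
    current = 0
    for sr in range(rows):
        for sc in range(cols):
            if labels[sr][sc] != -1:
                continue  # already assigned to a region
            stack = [(sr, sc)]
            labels[sr][sc] = current
            while stack:  # flood-fill the region from this seed
                r, c = stack.pop()
                for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
                    if 0 <= nr < rows and 0 <= nc < cols \
                            and labels[nr][nc] == -1 \
                            and abs(values[nr][nc] - values[r][c]) < threshold:
                        labels[nr][nc] = current
                        stack.append((nr, nc))
            current += 1
    return labels, current

surface = [[0.0, 0.1, 0.9],
           [0.1, 0.1, 1.0],
           [0.0, 0.2, 0.9]]
labels, n = region_grow(surface, threshold=0.3)
print(n)  # two distinct surfaces: the flat region and the steep one
```

On real point clouds the comparison would use surface normals and curvature rather than a single scalar, but the growth mechanism is the same.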
Using a mobile robot for hazardous substances detection in a factory environment
Dual degree with UTFPR - Universidade Tecnológica Federal do Paraná.
Industries that work with toxic materials need extensive security protocols to avoid accidents.
Instead of having fixed sensors, the concept of assembling the sensors on a mobile
robot that performs the scanning through a defined path is cheaper, configurable and
adaptable. This work describes a mobile robot, equipped with several gas sensors and
LIDAR, that follows a trajectory based on waypoints, simulating a working Autonomous
Guided Vehicle (AGV). At the same time, the robot keeps measuring for toxic gases. In
other words, the robot follows the trajectory while the gas concentration is under a defined
value. Otherwise, it starts an autonomous leakage search based on a search algorithm
that finds the leakage position while avoiding obstacles in real time. The proposed
methodology is verified in simulation based on a model of the real robot. Three
path planning algorithms were developed and their performance compared. A Light Detection And
Ranging (LIDAR) device was integrated with the path planning to propose an obstacle
avoidance system with a dilation technique that enlarges the obstacles, thus accounting for the
robot’s dimensions. Moreover, if needed, the robot can be remotely operated with visual
feedback. In addition, a controller was developed for the robot. Gas sensors were embedded in
the robot with a Finite Impulse Response (FIR) filter to process the data. A low-cost AGV
was developed to compete in the Festival Nacional de Robótica (Portuguese Robotics Open)
2019 - Gondomar; the robot's control and the software solution for the competition are described.
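The FIR filtering of the gas-sensor data can be illustrated with the simplest such filter, an equal-weight moving average that suppresses isolated noise spikes before any leakage threshold is checked; the tap count and readings below are illustrative assumptions.

```python
def moving_average(samples, n):
    """n-tap FIR filter with equal coefficients 1/n (a simple low-pass).

    At start-up, fewer than n samples are available, so the window is
    shortened rather than zero-padded.
    """
    out = []
    for i in range(len(samples)):
        window = samples[max(0, i - n + 1):i + 1]
        out.append(sum(window) / len(window))
    return out

readings = [10, 11, 50, 12, 11, 10]  # gas concentration (ppm) with a noise spike
smooth = moving_average(readings, 3)
print(max(smooth) < 30)  # after filtering, the spike stays below an alarm level
```

A longer tap count smooths more aggressively but delays the detection of a real concentration rise, which is the usual trade-off when choosing the filter length.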
Autonomous navigation of a wheeled mobile robot in farm settings
This research is mainly about the autonomous navigation of an agricultural wheeled mobile robot in an unstructured outdoor setting. The project has four distinct phases: (i) navigation and control of a wheeled mobile robot for point-to-point motion; (ii) navigation and control of a wheeled mobile robot in following a given path (path following); (iii) navigation and control of a mobile robot keeping a constant proximity distance from given paths or plant rows (proximity-following); (iv) navigation of the mobile robot in rut following in farm fields. A rut is a long deep track formed by the repeated passage of wheeled vehicles in soft terrain such as mud, sand, and snow.
To develop reliable navigation approaches for each part of this project, three main steps were carried out: a literature review, modeling and computer simulation of wheeled mobile robots, and actual experimental tests in outdoor settings. First, point-to-point motion planning of a mobile robot is studied; a fuzzy-logic-based (FLB) approach is proposed for real-time autonomous path planning of the robot in unstructured environments. Simulation and experimental evaluations show that the FLB approach is able to cope with different dynamic and unforeseen situations by tuning a safety margin. Comparison of FLB results with the vector field histogram (VFH) and preference-based fuzzy (PBF) approaches reveals that the FLB approach produces shorter and smoother paths toward the goal in almost all of the test cases examined. Then, a novel human-inspired method (HIM) is introduced. HIM is inspired by human behavior in navigating from one point to a specified goal point. It gives the robot a human-like ability to reason about its situation in order to reach a predefined goal point while avoiding static, moving, and unforeseen obstacles. Comparison of HIM results with FLB suggests that HIM is more efficient and effective than FLB.
Afterward, navigation strategies are built up for path following, rut following, and proximity-following control of a wheeled mobile robot in outdoor (farm) settings and off-road terrain. The proposed system is composed of several modules: sensor data analysis, obstacle detection, obstacle avoidance, goal seeking, and path tracking. The capabilities of the proposed navigation strategies are evaluated in a variety of field experiments; the results show that the proposed approach is able to detect and follow rows of bushes robustly. This capability is used for spraying plant rows in farm fields.
Finally, obstacle detection and obstacle avoidance modules are developed for the navigation system. These modules enable the robot to detect holes or ground depressions (negative obstacles), which are inherent parts of farm settings, as well as above-ground obstacles (positive obstacles), in real time at a safe distance from the robot. Experimental tests are carried out on two mobile robots (PowerBot and Grizzly) in outdoor and real farm fields. Grizzly utilizes a 3D laser range finder to detect objects and perceive the environment, and an RTK-DGPS unit for localization. PowerBot uses sonar sensors and a laser range finder for obstacle detection. The experiments demonstrate the capability of the proposed technique in successfully detecting and avoiding different types of obstacles, both positive and negative, in a variety of scenarios.
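The distinction between negative and positive obstacles can be sketched with a simple range-residual test: on flat ground, a downward-looking range beam has a predictable expected range, so a much longer reading suggests a depression and a much shorter one an object. This is only an illustrative simplification of the detection modules above; the geometry and tolerance are assumed, not taken from the thesis.

```python
# Hypothetical sketch: classify one downward-looking range beam by
# comparing the measured range with the range expected on flat ground.
# The 20% tolerance is an illustrative assumption.

def classify_beam(measured, expected, tol=0.2):
    """Return 'negative', 'positive', or 'ground' for one beam (ranges in m)."""
    if measured > expected * (1 + tol):
        return "negative"   # beam fell into a hole or depression
    if measured < expected * (1 - tol):
        return "positive"   # beam hit something above ground level
    return "ground"

print(classify_beam(3.8, 3.0))  # → prints "negative"
print(classify_beam(2.0, 3.0))  # → prints "positive"
```

A real implementation would aggregate many beams and filter noise before declaring an obstacle, but the per-beam test is the core idea.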
Developing a person guidance module for hospital robots
This dissertation describes the design and implementation of the Person Guidance Module (PGM), which enables the IWARD (Intelligent Robot Swarm for Attendance, Recognition, Cleaning and Delivery) base robot to offer a route guidance service to patients and visitors inside a hospital. One common problem in large hospital buildings today is that newcomers cannot find their way around. Although a variety of guide robots currently exist on the market and offer a wide range of guidance and related activities, they do not fit the modular concept of the IWARD project. The PGM features a robust, non-hierarchical sensor fusion approach combining an active RFID sensor, stereo vision and a Cricket mote sensor for guiding a patient to the X-ray room, or a visitor to a patient's ward, in every possible scenario in a complex, dynamic and crowded hospital environment. Moreover, with this system the speed of the robot can be adjusted automatically according to the pace of the follower, for physical comfort. Furthermore, the module performs these tasks in any uninstrumented environment solely from the robot's onboard perceptual resources, limiting hardware installation costs and the required indoor infrastructure. A similarly comprehensive solution on a single platform has remained elusive in the existing literature. The finished module can be connected to any IWARD base robot using quick-change mechanical connections and standard electrical connections. The PGM module box is equipped with a Gumstix embedded computer for all module computing, which powers up automatically once the module box is inserted into the robot. In line with the general software architecture of the IWARD project, all software modules are developed as Orca2 components and cross-compiled for the Gumstix's XScale processor.
To support standardized communication between different software components, the Internet Communications Engine (Ice) has been used as middleware. Additionally, plug-and-play capabilities have been developed and incorporated so that the swarm system is aware at all times of which robot is equipped with the PGM. Finally, in several field trials in hospital environments, the person guidance module has shown its suitability for a challenging real-world application, as well as the necessary user acceptance.
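The automatic speed adjustment to the follower's pace can be sketched as a bounded proportional law: the robot slows as the measured distance to the follower grows beyond a desired gap. The gains and distances below are illustrative assumptions, not the PGM's actual controller.

```python
# Hypothetical sketch: scale the guide robot's speed with the distance
# to the person being guided. All gains and distances are illustrative.

def guide_speed(follower_distance, desired=1.2, gain=0.5,
                v_min=0.0, v_max=0.8):
    """Speed command (m/s) from the measured follower distance (m).

    If the follower lags (distance grows beyond `desired`), slow down;
    if they keep pace, move at the nominal maximum speed.
    """
    v = v_max - gain * max(0.0, follower_distance - desired)
    return min(v_max, max(v_min, v))

print(guide_speed(1.2))  # follower keeps pace: full speed, 0.8
print(guide_speed(2.4))  # follower lags: robot slows to 0.2
```

Clamping to `v_min` lets the robot stop entirely and wait when the follower falls far behind, which matches the comfort goal described above.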
Appearance and Geometry Assisted Visual Navigation in Urban Areas
Navigation is a fundamental task for mobile robots in applications such as exploration, surveillance, and search and rescue. The task involves solving the simultaneous localization and mapping (SLAM) problem, where a map of the environment is constructed. In order for this map to be useful for a given application, a suitable scene representation needs to be defined that allows spatial information sharing between robots and also between humans and robots. High-level scene representations have the benefit of being more robust and having higher exchangeability for interpretation. With the aim of higher level scene representation, in this work we explore high-level landmarks and their usage using geometric and appearance information to assist mobile robot navigation in urban areas.
In visual SLAM, image registration is a key problem. While feature-based methods such as scale-invariant feature transform (SIFT) matching are popular, they do not utilize appearance information as a whole and suffer from low-resolution images. We study appearance-based methods and propose a scale-space integrated Lucas-Kanade method that can estimate geometric transformations while taking into account image appearance at different resolutions. We compare our method against state-of-the-art methods and show that it can register images efficiently with high accuracy.
In urban areas, planar building facades (PBFs) are basic components of the quasirectilinear environment. Hence, segmentation and mapping of PBFs can increase a robot’s abilities of scene understanding and localization. We propose a vision-based PBF segmentation and mapping technique that combines both appearance and geometric constraints to segment out planar regions. Then, geometric constraints such as reprojection errors, orientation constraints, and coplanarity constraints are used in an optimization process to improve the mapping of PBFs.
A major issue in monocular visual SLAM is scale drift. While depth sensors, such as lidar, are free from scale drift, this type of sensor is usually more expensive than a camera. To enable low-cost mobile robots equipped with monocular cameras to obtain accurate position information, we use a 2D lidar map to rectify imprecise visual SLAM results using planar structures. We propose a two-step optimization approach, assisted by a penalty function, to improve on low-quality local-minimum results.
Robot paths for navigation can either be generated automatically by a motion planning algorithm or provided by a human. In both cases, a scene representation of the environment, i.e., a map, is useful for specifying meaningful tasks for the robot. However, SLAM usually produces a sparse scene representation consisting of low-level landmarks, such as point clouds, which are neither convenient nor intuitive for task specification. We present a system that allows users to program mobile robots using high-level landmarks from appearance data.
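The core of a Lucas-Kanade alignment step can be illustrated in one dimension: solve the brightness-constancy equation I_t + d * I_x ≈ 0 for the shift d in least squares. The full method in the text additionally integrates scale space and 2D geometric transformations; this sketch covers only the basic step, on illustrative data.

```python
# Minimal single-scale Lucas-Kanade step in 1D: estimate the translation
# between two signals from spatial and temporal gradients. Data illustrative.

def lk_translation(signal_a, signal_b):
    """Least-squares shift estimate (in samples) such that b(x) ≈ a(x - d)."""
    num = den = 0.0
    for i in range(1, len(signal_a) - 1):
        ix = (signal_a[i + 1] - signal_a[i - 1]) / 2.0  # spatial gradient
        it = signal_b[i] - signal_a[i]                   # temporal difference
        num += ix * it
        den += ix * ix
    return -num / den

a = [0.0, 1.0, 2.0, 3.0, 4.0, 5.0]   # a unit-slope ramp
b = [x - 0.5 for x in a]             # the same ramp shifted by 0.5 samples
print(lk_translation(a, b))  # → 0.5
```

The 2D image case solves the analogous 2x2 normal equations per window, and the scale-space variant repeats the step from coarse to fine resolution so larger motions remain within the linearization's validity.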
Application of a mobile robot to spatial mapping of radioactive substances in indoor environment
Nuclear medicine requires the use of radioactive substances that can contaminate critical
(dangerous or hazardous) areas where human presence must be reduced or avoided.
The present work uses a mobile robot, in a real environment and in 3D simulation, to develop
a method for the spatial mapping of radioactive substances. The robot must visit all
the waypoints arranged in a connectivity grid that represents the environment. The
work presents the methodology for path planning, control, and estimation
of the robot's location. For path planning, two methods are considered: a heuristic
method based on observation of the problem, and an adaptation of the operators
of a genetic algorithm. The control of the actuators was based on two
methodologies, the first following points and the second following trajectories. To
localize the real mobile robot, an extended Kalman filter was used to fuse an ultra-wide
band sensor with odometry, thus estimating the position and orientation of the mobile
agent. The results were validated using a low-cost system with a laser range finder.
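The odometry/UWB fusion can be illustrated with a scalar Kalman filter cycle, a simplified stand-in for the extended Kalman filter of the work above (which also estimates orientation); all noise values below are illustrative assumptions.

```python
# Simplified 1D Kalman filter: odometry drives the prediction, an
# ultra-wide band (UWB) position fix drives the update. Values illustrative.

def kf_step(x, p, odom_delta, q, z_uwb, r):
    """One predict (odometry) + update (UWB) cycle for a 1D position.

    x, p: current position estimate and its variance
    odom_delta, q: odometry displacement and its process noise
    z_uwb, r: UWB position measurement and its noise variance
    """
    # Predict: integrate odometry, inflate uncertainty.
    x_pred = x + odom_delta
    p_pred = p + q
    # Update: blend in the UWB measurement via the Kalman gain.
    k = p_pred / (p_pred + r)
    x_new = x_pred + k * (z_uwb - x_pred)
    p_new = (1 - k) * p_pred
    return x_new, p_new

x, p = 0.0, 1.0
# Odometry says we moved 1.0 m; UWB places us at 1.2 m.
x, p = kf_step(x, p, odom_delta=1.0, q=0.1, z_uwb=1.2, r=0.3)
print(round(x, 3), round(p, 3))  # → 1.157 0.236
```

The estimate lands between the odometry prediction and the UWB fix, weighted by their variances, and the posterior variance shrinks, which is exactly the behaviour the fused localizer relies on.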
Virtual reality based multi-modal teleoperation using mixed autonomy
This thesis presents a multi-modal teleoperation interface featuring an integrated virtual-reality-based simulation augmented by sensors and image processing capabilities onboard the remotely operated vehicle. The virtual reality interface fuses an existing VR model with a live video feed and prediction states, thereby creating a multi-modal control interface. Virtual reality addresses the typical limitations of video-based teleoperation caused by signal lag and limited field of view, thereby allowing the operator to navigate in a continuous fashion. The vehicle incorporates an onboard computer and a stereo vision system to facilitate obstacle detection. A vehicle adaptation system with a priori risk maps and a real-state tracking system enables temporary autonomous operation of the vehicle for local navigation around obstacles and automatic re-establishment of the vehicle's teleoperated state. As the vehicle and the operator share absolute autonomy in stages, the operation is referred to as mixed autonomy. Finally, the system provides real-time updates of the virtual environment based on anomalies encountered by the vehicle. The system effectively balances autonomy between the human operator and onboard vehicle intelligence. The reliability results of individual components, along with the overall system implementation and the results of the user study, help show that the VR-based multi-modal teleoperation interface is more adaptable and intuitive than other interfaces.