    Multisensor data fusion for joint people tracking and identification with a service robot

    Tracking and recognizing people are essential skills that modern service robots must be provided with. The two tasks are generally performed independently, using ad-hoc solutions that first estimate the location of humans and then proceed with their identification. The solution presented in this paper, instead, is a general framework for tracking and recognizing people simultaneously with a mobile robot, where the estimates of the human location and identity are fused using probabilistic techniques. Our approach takes inspiration from recent implementations of joint tracking and classification, where the considered targets are mainly vehicles and aircraft in military and civilian applications. We illustrate how people can be robustly tracked and recognized by a service robot using improved histogram-based detection and multisensor data fusion. Experiments in real, challenging scenarios show the good performance of our solution.
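
    The abstract above gives no equations, but the joint idea can be sketched in a few lines: a Kalman update refines the person's position while a discrete Bayes update refines the identity posterior from appearance likelihoods. A minimal illustration in plain NumPy (hypothetical values, not the authors' code):

        import numpy as np

        # Minimal sketch (not the authors' code) of joint tracking and
        # classification: a Kalman update refines the position while a
        # discrete Bayes update refines the identity posterior.

        def kalman_update(x, P, z, R):
            # Sensor observes the position directly (H = identity)
            K = P @ np.linalg.inv(P + R)
            return x + K @ (z - x), (np.eye(len(x)) - K) @ P

        def identity_update(prior, likelihood):
            # Discrete Bayes update over the known identities
            post = prior * likelihood
            return post / post.sum()

        x, P = np.array([1.0, 2.0]), np.eye(2) * 0.5       # position prior
        x, P = kalman_update(x, P, z=np.array([1.2, 1.9]), R=np.eye(2) * 0.1)

        ident = np.array([1/3, 1/3, 1/3])                  # three known people
        ident = identity_update(ident, np.array([0.7, 0.2, 0.1]))
        print(x, ident)                                    # fused estimates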

    A bank of unscented Kalman filters for multimodal human perception with mobile service robots

    A new generation of mobile service robots could soon be ready to operate in human environments if they can robustly estimate the position and identity of surrounding people. Researchers in this field face a number of challenging problems, among which are sensor uncertainties and real-time constraints. In this paper, we propose a novel and efficient solution for simultaneous tracking and recognition of people within the observation range of a mobile robot. Multisensor techniques for leg and face detection are fused in a robust probabilistic framework with height, clothing and face recognition algorithms. The system is based on an efficient bank of Unscented Kalman Filters that keeps a multi-hypothesis estimate of the person being tracked, including the case where the latter is unknown to the robot. Several experiments with real mobile robots are presented to validate the proposed approach. They show that our solutions can improve the robot's perception and recognition of humans, providing a useful contribution for the future application of service robotics.
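
    As an illustration of the filter-bank idea, the sketch below keeps one filter per identity hypothesis, including an "unknown" catch-all with a broad prior, and reweights the hypotheses by each filter's measurement likelihood. Plain 1D Kalman filters stand in for the UKFs, the feature is a single height measurement, and all names and numbers are assumptions:

        import numpy as np
        from scipy.stats import norm

        class Track1D:
            """1D Kalman filter standing in for one UKF of the bank."""
            def __init__(self, x0, p0):
                self.x, self.p = x0, p0
            def step(self, z, q=1e-4, r=2.5e-3):
                self.p += q                    # predict (random-walk model)
                s = self.p + r                 # innovation variance
                lik = norm.pdf(z, loc=self.x, scale=np.sqrt(s))
                k = self.p / s                 # Kalman gain
                self.x += k * (z - self.x)
                self.p *= 1.0 - k
                return lik                     # reweights this hypothesis

        hypotheses = ["person_A", "person_B", "unknown"]
        priors = {"person_A": 1.70, "person_B": 1.85, "unknown": 1.75}
        spread = {"person_A": 0.01, "person_B": 0.01, "unknown": 0.25}
        bank = {h: Track1D(priors[h], spread[h]) for h in hypotheses}
        w = np.ones(3) / 3                     # uniform hypothesis weights

        z = 1.71                               # measured height in metres
        w *= np.array([bank[h].step(z) for h in hypotheses])
        w /= w.sum()
        print(dict(zip(hypotheses, w.round(3))))   # person_A dominates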

    Orchard mapping and mobile robot localisation using on-board camera and laser scanner data fusion

    Agricultural mobile robots have great potential to effectively implement different agricultural tasks. They can save human labour costs, avoid the need for people to perform risky operations and increase productivity. Automation and advanced sensing technologies can provide up-to-date information that helps farmers in orchard management. Data collected from the on-board sensors of a mobile robot provide information that can help the farmer detect tree or fruit diseases or damage, measure tree canopy volume and monitor fruit development.

    In orchards, trees are natural landmarks providing suitable cues for mobile robot localisation and navigation, as trees are nominally planted in straight and parallel rows. This thesis presents a novel tree trunk detection algorithm that detects trees and discriminates between trees and non-tree objects in the orchard using camera and 2D laser scanner data fusion. A local orchard map of the individual trees was developed, allowing the mobile robot to navigate to a specific tree in the orchard to perform a specific task such as tree inspection. Furthermore, this thesis presents a localisation algorithm that does not rely on GPS positions and depends only on the on-board sensors of the mobile robot, without adding any artificial landmarks, reflective tapes or tags to the trees.

    The novel tree trunk detection algorithm combined features extracted from a low-cost camera's images and 2D laser scanner data to increase the robustness of the detection. The algorithm used a new method to detect the edge points and determine the width of the tree trunks and non-tree objects from the laser scan data. The edge points were then projected from the laser scanner coordinates to the image plane to construct a region of interest with the required features for tree trunk colour and edge detection. The camera images were used to verify the colour and the parallel edges of the tree trunks and non-tree objects. The algorithm automatically adjusted its colour detection parameters after each test, which was shown to increase the detection accuracy.

    The orchard map was constructed based on tree trunk detection and consisted of the 2D positions of the individual trees and non-tree objects. The map of the individual trees was used as an a priori map for mobile robot localisation. A data fusion algorithm based on an Extended Kalman Filter was used for pose estimation of the mobile robot on different paths (midway between rows, close to the rows and moving around trees in the row) and different turns (semi-circle and right-angle turns) required for tree inspection tasks. The 2D positions of the individual trees were used in the correction step of the Extended Kalman Filter to enhance localisation accuracy.

    Experimental tests were conducted in a simulated environment and in a real orchard to evaluate the performance of the developed algorithms. The tree trunk detection algorithm was evaluated under two broad illumination conditions (sunny and cloudy). It was able to detect the tree trunks (regular and thin) and discriminate between trees and non-tree objects with a detection accuracy of 97%, showing that the fusion of vision and 2D laser scanner data produced robust tree trunk detection. The mapping method successfully localised all the trees and non-tree objects of the tested tree rows in the orchard environment. The mapping results indicated that the constructed map can be reliably used for mobile robot localisation and navigation. The localisation algorithm was evaluated against logged RTK-GPS positions for different paths and headland turns. The average RMS position errors in the x and y coordinates and in Euclidean distance were 0.08 m, 0.07 m and 0.103 m respectively, whilst the average RMS heading error was 3.32°. These results were considered acceptable for driving along the rows and executing headland turns in the target application of autonomous mobile robot navigation and tree inspection in orchards.
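
    A hedged sketch of the kind of EKF correction step described above: a tree at a known 2D position from the trunk-detection map corrects the robot pose (x, y, theta) through a range-bearing measurement. The noise values and tree position are illustrative, not taken from the thesis:

        import numpy as np

        def wrap(a):
            """Wrap an angle to (-pi, pi]."""
            return (a + np.pi) % (2 * np.pi) - np.pi

        def ekf_tree_update(x, P, z, tree, R):
            """Correct pose x = (x, y, theta) with a range-bearing
            observation z of a tree at a known map position."""
            dx, dy = tree[0] - x[0], tree[1] - x[1]
            q = dx**2 + dy**2
            z_hat = np.array([np.sqrt(q), wrap(np.arctan2(dy, dx) - x[2])])
            # Jacobian of the range-bearing model w.r.t. the pose
            H = np.array([[-dx / np.sqrt(q), -dy / np.sqrt(q),  0.0],
                          [ dy / q,          -dx / q,          -1.0]])
            S = H @ P @ H.T + R
            K = P @ H.T @ np.linalg.inv(S)
            innov = z - z_hat
            innov[1] = wrap(innov[1])
            x = x + K @ innov
            x[2] = wrap(x[2])
            return x, (np.eye(3) - K @ H) @ P

        x = np.array([0.0, 0.0, 0.0])          # predicted pose
        P = np.diag([0.2, 0.2, 0.05])
        tree = np.array([4.0, 1.0])            # tree position from the map
        z = np.array([4.2, 0.26])              # measured range and bearing
        R = np.diag([0.1, 0.02])
        x, P = ekf_tree_update(x, P, z, tree, R)
        print(x)                               # corrected pose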

    Multisensor-based human detection and tracking for mobile service robots

    One of the fundamental issues for service robots is human-robot interaction. In order to perform such a task and provide the desired services, these robots need to detect and track people in their surroundings. In this paper, we propose a solution for human tracking with a mobile robot that implements multisensor data fusion techniques. The system utilizes a new algorithm for laser-based leg detection using the on-board laser range finder (LRF). The approach is based on the recognition of typical leg patterns extracted from laser scans, which are shown to be very discriminative even in cluttered environments. These patterns can be used to localize both static and walking persons, even while the robot moves. Furthermore, faces are detected using the robot's camera, and this information is fused with the legs' position using a sequential implementation of the Unscented Kalman Filter. The proposed solution is feasible for service robots with a similar device configuration and has been successfully implemented on two different mobile platforms. Several experiments illustrate the effectiveness of our approach, showing that robust human tracking can be performed within complex indoor environments.
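
    The sequential fusion step can be illustrated with a toy example: the leg and face detections are applied one after the other to the same position estimate, each with its own noise. Plain linear Kalman updates stand in for the paper's Unscented Kalman Filter, and all numbers are made up:

        import numpy as np

        def update(x, P, z, R):
            # Both detectors observe the 2D position directly (H = I)
            K = P @ np.linalg.inv(P + R)
            return x + K @ (z - x), (np.eye(2) - K) @ P

        x = np.array([2.0, 1.0])               # predicted person position
        P = np.diag([0.5, 0.5])
        z_legs = np.array([2.2, 0.9])          # laser leg detection
        z_face = np.array([2.1, 1.1])          # camera face detection
        x, P = update(x, P, z_legs, np.diag([0.05, 0.05]))
        x, P = update(x, P, z_face, np.diag([0.20, 0.20]))  # noisier sensor
        print(x, np.diag(P))                   # fused estimate and covariance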

    Computationally efficient solutions for tracking people with a mobile robot: an experimental evaluation of Bayesian filters

    Service robots will soon become an essential part of modern society. As they have to move and act in human environments, it is essential for them to be provided with a fast and reliable tracking system that localizes people in their neighbourhood. It is therefore important to select the most appropriate filter to estimate the position of these persons. This paper presents three efficient implementations of multisensor human tracking based on different Bayesian estimators: the Extended Kalman Filter (EKF), the Unscented Kalman Filter (UKF) and the Sampling Importance Resampling (SIR) particle filter. The system implemented on a mobile robot is explained, introducing the methods used to detect and estimate the position of multiple people. Then, the solutions based on the three filters are discussed in detail. Several real experiments are conducted to evaluate their performance, which is compared in terms of accuracy, robustness and execution time of the estimation. The results show that a solution based on the UKF can perform as well as particle filters and is often a better choice when computational efficiency is a key issue.
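
    Of the three estimators, the SIR particle filter is the least standard to implement; a bare-bones predict-weight-resample step for a 1D position (a toy model with illustrative noise, not the paper's implementation) looks like:

        import numpy as np

        rng = np.random.default_rng(0)
        N = 500
        particles = rng.normal(0.0, 1.0, N)    # initial position hypotheses
        weights = np.ones(N) / N

        def sir_step(particles, weights, z, q=0.1, r=0.3):
            # 1) predict: propagate particles through the motion model
            particles = particles + rng.normal(0.0, q, len(particles))
            # 2) weight: Gaussian measurement likelihood
            weights = weights * np.exp(-0.5 * ((z - particles) / r) ** 2)
            weights /= weights.sum()
            # 3) resample: draw particles in proportion to their weights
            idx = rng.choice(len(particles), len(particles), p=weights)
            return particles[idx], np.full(len(particles), 1.0 / len(particles))

        particles, weights = sir_step(particles, weights, z=0.5)
        print(particles.mean())                # estimate after one step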

    Viewfinder: final activity report

    The VIEW-FINDER project (2006-2009) is an 'Advanced Robotics' project that seeks to apply a semi-autonomous robotic system to inspect ground safety in the event of a fire. Its primary aim is to gather data (visual and chemical) in order to assist rescue personnel. A base station combines the gathered information with information retrieved from off-site sources. The project addresses key issues related to map building and reconstruction, interfacing local command information with external sources, human-robot interfaces and semi-autonomous robot navigation. The VIEW-FINDER system is semi-autonomous; the individual robot-sensors operate autonomously within the limits of the task assigned to them, that is, they autonomously navigate through and inspect an area. Human operators monitor their operations and send high-level task requests as well as low-level commands through the interface to any node in the entire system. The human interface has to ensure that the human supervisor and human interveners are provided with a reduced but relevant overview of the ground, and of the robots and human rescue workers therein.

    Conceptual spatial representations for indoor mobile robots

    We present an approach for creating conceptual representations of human-made indoor environments using mobile robots. The concepts refer to spatial and functional properties of typical indoor environments. Following findings in cognitive psychology, our model is composed of layers representing maps at different levels of abstraction. The complete system is integrated in a mobile robot endowed with laser and vision sensors for place and object recognition. The system also incorporates a linguistic framework that actively supports the map acquisition process, and which is used for situated dialogue. Finally, we discuss the capabilities of the integrated system.
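
    A hypothetical sketch of such a layered representation is given below; the layer names and fields follow common multi-layer spatial models and are assumptions, not taken from the paper:

        from dataclasses import dataclass, field

        @dataclass
        class MetricLayer:
            occupancy: dict = field(default_factory=dict)  # (x, y) -> P(occupied)

        @dataclass
        class TopologicalLayer:
            places: list = field(default_factory=list)     # place node ids
            edges: list = field(default_factory=list)      # traversable (a, b) pairs

        @dataclass
        class ConceptualLayer:
            labels: dict = field(default_factory=dict)     # place id -> "kitchen", ...
            objects: dict = field(default_factory=dict)    # place id -> detected objects

        @dataclass
        class SpatialModel:
            metric: MetricLayer = field(default_factory=MetricLayer)
            topo: TopologicalLayer = field(default_factory=TopologicalLayer)
            concepts: ConceptualLayer = field(default_factory=ConceptualLayer)

        m = SpatialModel()
        m.topo.places.append("place_1")
        m.concepts.labels["place_1"] = "kitchen"  # e.g. from object recognition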

    Fusion of aerial images and sensor data from a ground vehicle for improved semantic mapping

    This work investigates the use of semantic information to link ground-level occupancy maps and aerial images. A ground-level semantic map, which shows open ground and indicates the probability of cells being occupied by the walls of buildings, is obtained by a mobile robot equipped with an omnidirectional camera, GPS and a laser range finder. This semantic information is used for local and global segmentation of an aerial image. The result is a map in which the semantic information has been extended beyond the range of the robot's sensors, predicting where the mobile robot can find buildings and potentially drivable ground.
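
    The seeding idea can be sketched as follows: map cells that the ground-level semantic map labels as building or open ground pick out aerial-image pixels, whose colours then classify the remaining pixels. A nearest-mean colour rule stands in for a real segmentation method, and all data here is synthetic:

        import numpy as np

        rng = np.random.default_rng(1)
        aerial = rng.random((100, 100, 3))             # stand-in aerial image
        building_seeds = [(10, 12), (11, 12)]          # cells mapped as walls
        ground_seeds = [(50, 60), (51, 61)]            # cells mapped as open ground

        # Mean seed colour per class, then nearest-mean pixel labelling
        mu_b = np.mean([aerial[r, c] for r, c in building_seeds], axis=0)
        mu_g = np.mean([aerial[r, c] for r, c in ground_seeds], axis=0)
        d_b = np.linalg.norm(aerial - mu_b, axis=2)
        d_g = np.linalg.norm(aerial - mu_g, axis=2)
        label = np.where(d_b < d_g, 1, 0)              # 1 = building, 0 = ground
        print(label.sum(), "pixels labelled as building")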
    • ā€¦
    corecore