
    Infrastructure-Aided Localization and State Estimation for Autonomous Mobile Robots

    A slip-aware localization framework is proposed for mobile robots experiencing wheel slip in dynamic environments. The framework fuses infrastructure-aided visual tracking data (via fisheye lenses) with proprioceptive sensory data from a skid-steer mobile robot to improve the accuracy and reduce the variance of the estimated states. The framework comprises two threads: a visual thread that detects and tracks the robot in the stereo image through computationally efficient 3D point cloud generation over a region of interest, and an ego-motion thread that uses a slip-aware odometry mechanism to estimate the robot pose with a motion model that accounts for wheel slip. Covariance intersection is used to fuse the pose prediction (from proprioceptive data) with the visual thread, such that the updated estimate remains consistent. Experiments on a skid-steer mobile robot confirm that the designed localization framework addresses state estimation challenges for indoor/outdoor autonomous mobile robots that experience high slip, uneven torque distribution at each wheel (by the motion planner), or occlusion when observed by an infrastructure-mounted camera. The proposed system is real-time capable and scalable to multiple robots and multiple environmental cameras.
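
    The fusion rule named in the abstract, covariance intersection, can be sketched in a few lines: it fuses two estimates whose cross-correlation is unknown by mixing their information matrices with a weight omega. This is a minimal numpy sketch, not the authors' implementation; the grid search over omega and the example values are illustrative assumptions.

    ```python
    import numpy as np

    def covariance_intersection(a, A, b, B, n_grid=50):
        """Fuse estimates (a, A) and (b, B) with unknown cross-correlation.

        Picks the weight omega that minimizes the trace of the fused
        covariance, the usual covariance intersection criterion."""
        A_inv, B_inv = np.linalg.inv(A), np.linalg.inv(B)
        best = None
        for omega in np.linspace(1e-3, 1.0 - 1e-3, n_grid):
            C = np.linalg.inv(omega * A_inv + (1.0 - omega) * B_inv)
            if best is None or np.trace(C) < best[0]:
                c = C @ (omega * A_inv @ a + (1.0 - omega) * B_inv @ b)
                best = (np.trace(C), c, C)
        return best[1], best[2]

    # Hypothetical example: fuse a slip-aware odometry pose with a camera
    # estimate (planar position only; values are made up for illustration).
    odo_mean, odo_cov = np.array([1.0, 2.0]), np.diag([0.09, 0.04])
    cam_mean, cam_cov = np.array([1.1, 1.9]), np.diag([0.02, 0.08])
    fused_mean, fused_cov = covariance_intersection(odo_mean, odo_cov,
                                                    cam_mean, cam_cov)
    ```

    Unlike a standard Kalman update, covariance intersection never claims more information than either input, which is why the fused estimate stays consistent even when the odometry and camera errors are correlated in unknown ways.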

    An Intelligent Human-Tracking Robot Based on a Kinect Sensor

    This thesis presents an indoor human-tracking robot that can also control other electrical devices for the user. The experimental setup consists of a skid-steered mobile robot, a Kinect sensor, a laptop, a wide-angle camera and two lamps. The Kinect sensor is mounted on the mobile robot to collect position and skeleton data of the user in real time and send them to the laptop, which processes the data and sends commands to the robot and the lamps. The wide-angle camera is mounted on the ceiling to verify the tracking performance of the Kinect sensor. A C++ program runs the camera, and a Java program processes the data from the C++ program and the Kinect sensor and then sends the commands to the robot and the lamps. The human-tracking capability is realized by two decoupled feedback controllers for linear and rotational motion. Experimental results show small delays (0.5 s for linear motion and 1.5 s for rotational motion) and steady-state errors (0.1 m for linear motion and 1.5° for rotational motion); these are acceptable because they do not push the tracking distance or angle outside the desired range (±0.05 m and ±10° of the reference input), and the tracking algorithm is robust. Four gestures are designed for the user to control the robot: two switch-mode gestures, a lamp-create gesture, and a lamp-selection and color-change gesture. Gesture-recognition success rates exceed 90% within the detectable range of the Kinect sensor.
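
    The decoupled linear/rotational control described here amounts to two independent feedback loops. A minimal sketch, assuming the Kinect reports the user's range and bearing in the robot frame; the gains and reference values are illustrative, not taken from the thesis.

    ```python
    # Two decoupled proportional controllers for person following.
    REF_DISTANCE = 1.5   # desired following distance [m] (assumed)
    REF_BEARING = 0.0    # keep the user centred [rad]
    KP_LIN, KP_ROT = 0.8, 1.2  # illustrative gains

    def tracking_control(distance, bearing):
        """Return (linear, angular) velocity commands from the user's
        range [m] and bearing [rad] measured by the Kinect."""
        v = KP_LIN * (distance - REF_DISTANCE)  # drive forward if user is far
        w = KP_ROT * (bearing - REF_BEARING)    # turn toward the user
        return v, w
    ```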

    Multisensor-based human detection and tracking for mobile service robots

    One of the fundamental issues for service robots is human-robot interaction. In order to perform such a task and provide the desired services, these robots need to detect and track people in their surroundings. In this paper, we propose a solution for human tracking with a mobile robot that implements multisensor data fusion techniques. The system utilizes a new algorithm for laser-based leg detection using the on-board LRF. The approach is based on the recognition of typical leg patterns extracted from laser scans, which prove highly discriminative even in cluttered environments. These patterns can be used to localize both static and walking persons, even while the robot moves. Furthermore, faces are detected using the robot's camera, and this information is fused with the leg positions using a sequential implementation of the Unscented Kalman Filter. The proposed solution is feasible for service robots with a similar device configuration and has been successfully implemented on two different mobile platforms. Several experiments illustrate the effectiveness of our approach, showing that robust human tracking can be performed within complex indoor environments.
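
    The sequential UKF fusion described above can be sketched with a constant-velocity person model: predict once per cycle, then apply one update per available detection. This is a minimal illustration using the filterpy library; the state, measurement models and noise values are assumptions, and the paper's actual models (e.g. for the face measurement) likely differ.

    ```python
    import numpy as np
    from filterpy.kalman import UnscentedKalmanFilter, MerweScaledSigmaPoints

    dt = 0.1  # filter cycle period [s] (assumed)

    def fx(x, dt):
        # Constant-velocity person model: state [px, py, vx, vy].
        F = np.array([[1, 0, dt, 0],
                      [0, 1, 0, dt],
                      [0, 0, 1, 0],
                      [0, 0, 0, 1]], dtype=float)
        return F @ x

    def hx(x):
        # Both detectors are assumed to yield a planar position [px, py].
        return x[:2]

    points = MerweScaledSigmaPoints(n=4, alpha=0.1, beta=2.0, kappa=0.0)
    ukf = UnscentedKalmanFilter(dim_x=4, dim_z=2, dt=dt,
                                fx=fx, hx=hx, points=points)
    ukf.Q = np.eye(4) * 0.01

    R_LEGS = np.diag([0.05, 0.05])  # laser legs: tighter noise (assumed)
    R_FACE = np.diag([0.20, 0.20])  # camera face: coarser noise (assumed)

    def step(z_legs=None, z_face=None):
        """One cycle: predict, then sequentially update with whichever
        detections arrived this frame."""
        ukf.predict()
        if z_legs is not None:
            ukf.update(z_legs, R=R_LEGS)
        if z_face is not None:
            ukf.update(z_face, R=R_FACE)
        return ukf.x.copy()
    ```

    The sequential structure is what makes the fusion flexible: each sensor contributes an update only when it actually fires, so a missed face detection simply skips that correction.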

    Real-Time Visual Servo Control of a Two-Link, Three-DOF Robot Manipulator

    This project presents experimental results of a position-based visual servoing control process for a 3R robot using two fixed cameras. Visual servoing spans several fields of research, including vision systems, robotics and automatic control. The method deals with real-time changes in the relative position of the target object with respect to the robot; good accuracy, together with independence of the manipulator servo-control structure from the target pose coordinates, is an additional advantage. The applications of visually guided systems are many, from intelligent homes to the automotive industry, and visual servoing can be used to control many different systems (manipulator arms, mobile robots, aircraft, etc.). Visual servoing systems are generally classified according to the number of cameras, the position of the cameras with respect to the robot, and the design of the error function. This project presents an approach to visual robot control in which existing approaches are extended so that the depth and position of the target object are estimated during the motion of the robot; this is done by visually tracking the object throughout the trajectory. Vision-based robotics has been a major research area for some time; however, one of the open and common problems in the area is the need to exchange experiences and ideas, so we also include a number of real-time examples from our own research. Forward and inverse kinematics of the 3-DOF robot are derived; then experiments on image processing, object shape recognition and pose estimation, as well as localization of the target object in Cartesian space and visual control of the robot manipulator, are described. Experimental results are obtained from a real-time implementation of the visual servo controller and tests of the 3-DOF robot in the lab.
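
    As a taste of the kinematic groundwork mentioned above, here is the closed-form inverse kinematics of a planar two-link arm, the planar subproblem that remains in a 3R manipulator once the base rotation is decoupled. The link lengths and target point are illustrative assumptions, not the project's parameters.

    ```python
    import numpy as np

    L1, L2 = 0.30, 0.25  # link lengths [m] (assumed)

    def forward(q1, q2):
        """End-effector (x, y) for joint angles q1, q2 [rad]."""
        x = L1 * np.cos(q1) + L2 * np.cos(q1 + q2)
        y = L1 * np.sin(q1) + L2 * np.sin(q1 + q2)
        return x, y

    def inverse(x, y, elbow_up=True):
        """Joint angles reaching (x, y); raises if the target is out of reach."""
        c2 = (x**2 + y**2 - L1**2 - L2**2) / (2 * L1 * L2)
        if abs(c2) > 1:
            raise ValueError("target out of reach")
        q2 = np.arccos(c2) * (1 if elbow_up else -1)
        q1 = np.arctan2(y, x) - np.arctan2(L2 * np.sin(q2),
                                           L1 + L2 * np.cos(q2))
        return q1, q2

    # Round trip: IK followed by FK recovers the target point.
    q1, q2 = inverse(0.4, 0.2)
    assert np.allclose(forward(q1, q2), (0.4, 0.2))
    ```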

    Whole-Body MPC for a Dynamically Stable Mobile Manipulator

    Autonomous mobile manipulation offers the dual advantage of mobility provided by a mobile platform and dexterity afforded by the manipulator. In this paper, we present a whole-body optimal control framework that jointly solves the problems of manipulation, balancing and interaction as one optimization problem for an inherently unstable robot. The optimization is performed using a Model Predictive Control (MPC) approach: the optimal control problem is transcribed in end-effector space, the position and orientation tasks are treated in the MPC planner, and end-effector contact forces are planned explicitly. The proposed formulation evaluates how control decisions aimed at end-effector tracking and environment interaction will affect the balance of the system in the future. We showcase the advantages of the proposed MPC approach on the example of a ball-balancing robot with a robotic manipulator and validate our controller in hardware experiments on tasks such as end-effector pose tracking and door opening.
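
    The receding-horizon principle behind any MPC scheme: at each step, optimize an input sequence over a finite horizon, apply only the first input, and re-solve from the new state. Below is a minimal sketch on a toy double integrator rather than the paper's whole-body ball-balancing formulation; the dynamics, horizon and weights are illustrative assumptions.

    ```python
    import numpy as np
    from scipy.optimize import minimize

    dt, N = 0.05, 20  # step size [s] and horizon length (assumed)
    A = np.array([[1.0, dt], [0.0, 1.0]])          # double-integrator dynamics
    B = np.array([0.5 * dt**2, dt])

    def rollout_cost(u_seq, x0, x_ref):
        """Tracking cost of an input sequence simulated over the horizon."""
        x, cost = x0, 0.0
        for u in u_seq:
            x = A @ x + B * u
            cost += np.sum((x - x_ref) ** 2) + 0.01 * u**2
        return cost

    def mpc_step(x0, x_ref):
        """Solve the finite-horizon problem; apply only the first input."""
        res = minimize(rollout_cost, np.zeros(N), args=(x0, x_ref))
        return res.x[0]

    # Closed-loop simulation driving position to 1.0 with zero velocity.
    x = np.array([0.0, 0.0])
    for _ in range(50):
        u = mpc_step(x, x_ref=np.array([1.0, 0.0]))
        x = A @ x + B * u
    ```

    Because the optimizer simulates the full horizon before committing to an input, it anticipates how a control decision now affects the state later, which is the property the paper exploits for balance.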

    A bank of unscented Kalman filters for multimodal human perception with mobile service robots

    A new generation of mobile service robots could soon be ready to operate in human environments if they can robustly estimate the position and identity of surrounding people. Researchers in this field face a number of challenging problems, among which are sensor uncertainties and real-time constraints. In this paper, we propose a novel and efficient solution for simultaneous tracking and recognition of people within the observation range of a mobile robot. Multisensor techniques for leg and face detection are fused in a robust probabilistic framework with height, clothing and face recognition algorithms. The system is based on an efficient bank of Unscented Kalman Filters that maintains a multi-hypothesis estimate of the person being tracked, including the case where the latter is unknown to the robot. Several experiments with real mobile robots are presented to validate the proposed approach. They show that our solution can improve the robot's perception and recognition of humans, providing a useful contribution to the future application of service robotics.
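
    The multi-hypothesis bookkeeping behind a bank of filters can be illustrated by identity weights updated from measurement likelihoods: one hypothesis per known identity plus "unknown", each backed by its own filter. The names and likelihood values below are placeholders for the paper's height, clothing and face models, not its actual numbers.

    ```python
    import numpy as np

    # One hypothesis per known identity, plus one for an unknown person.
    hypotheses = ["alice", "bob", "unknown"]
    weights = np.full(len(hypotheses), 1.0 / len(hypotheses))

    def update_weights(weights, likelihoods):
        """Bayesian weight update: posterior ∝ prior × likelihood,
        renormalized so the hypothesis weights sum to one."""
        posterior = weights * likelihoods
        return posterior / posterior.sum()

    # E.g. a face match that strongly supports "alice" (illustrative values):
    weights = update_weights(weights, np.array([0.7, 0.2, 0.1]))
    ```

    Repeating this update as height, clothing and face measurements arrive concentrates the weight on one identity, or on "unknown" when no model explains the observations well.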

    Computationally efficient solutions for tracking people with a mobile robot: an experimental evaluation of Bayesian filters

    Service robots will soon become an essential part of modern society. As they have to move and act in human environments, it is essential for them to be provided with a fast and reliable tracking system that localizes people in the neighbourhood, so it is important to select the most appropriate filter for estimating the position of these persons. This paper presents three efficient implementations of multisensor human tracking based on different Bayesian estimators: the Extended Kalman Filter (EKF), the Unscented Kalman Filter (UKF) and the Sampling Importance Resampling (SIR) particle filter. The system implemented on a mobile robot is explained, introducing the methods used to detect and estimate the position of multiple people. The solutions based on the three filters are then discussed in detail. Several real experiments are conducted to evaluate their performance, which is compared in terms of accuracy, robustness and execution time. The results show that a solution based on the UKF can perform as well as particle filters and is often a better choice when computational efficiency is a key issue.
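
    For reference, the step that distinguishes the SIR particle filter from the two Kalman-based estimators compared above is resampling. This systematic-resampling sketch is a standard textbook variant, not code from the paper.

    ```python
    import numpy as np

    def sir_resample(particles, weights, rng=None):
        """Systematic resampling: draw len(particles) samples in proportion
        to their weights, then reset to uniform weights.

        particles: (N, d) array of state hypotheses; weights: (N,) array
        summing to one."""
        if rng is None:
            rng = np.random.default_rng()
        n = len(particles)
        # One random offset, then evenly spaced positions on [0, 1).
        positions = (rng.random() + np.arange(n)) / n
        indices = np.searchsorted(np.cumsum(weights), positions)
        return particles[indices], np.full(n, 1.0 / n)
    ```

    The per-step cost of maintaining and resampling many particles is exactly the computational overhead that makes the UKF attractive when efficiency matters.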