393 research outputs found

    Unmanned Ground Vehicles for Smart Farms

    Forecasts of world population growth in the coming decades demand new production processes that are more efficient, safer, and less destructive to the environment. Industry is working to fulfill this mission by developing the smart factory concept. The agricultural world should follow industry's lead and develop approaches to implement the smart farm concept. One of the most vital elements that must be configured to meet the requirements of the new smart farms is the unmanned ground vehicle (UGV). Thus, this chapter focuses on the characteristics that UGVs must have to function efficiently in this type of future farm. Two main approaches are discussed: automating conventional vehicles and developing specifically designed mobile platforms. The latter includes both wheeled and wheel-legged robots and an analysis of their adaptability to terrain and crops.

    Modeling and Control of the UGV Argo J5 with a Custom-Built Landing Platform

    This thesis aims to develop a detailed dynamic model and implement several navigation controllers for path tracking and dynamic self-leveling of the Argo J5 Unmanned Ground Vehicle (UGV) with a custom-built landing platform. The overall model is derived by combining the Argo J5 driveline system with the wheel-terrain interaction (using terramechanics theory and mobile robot kinetics), while the landing platform model follows the Euler-Lagrange formulation. Different controllers are then derived and implemented to demonstrate: (i) self-leveling accuracy of the landing platform, and (ii) trajectory tracking capabilities of the Argo J5 when moving on uneven terrain. The novelty of the Argo J5 model is the addition of a vertical load on each wheel through derivation of the shear stress as a function of each contact point's 3D position on the wheel. Static leveling of the landing platform to within one degree of the horizon is evaluated by implementing Proportional Derivative (PD), Proportional Integral Derivative (PID), Linear Quadratic Regulator (LQR), feedback linearization, and Passivity-Based Adaptive Controller (PBAC) techniques. A PD controller is used to evaluate the performance of the Argo J5 on different terrains. Further, for the Argo J5-landing platform ensemble, a PBAC and a Neural Network Based Adaptive Controller (NNBAC) are derived and implemented to demonstrate dynamic self-leveling. The emphasis is on implementing different controllers for complex real systems such as the Argo J5-landing platform. Results, obtained via extensive simulation studies in a Matlab/Simulink environment that consider real system parameters and hardware limitations, contribute to understanding navigation performance on a variety of terrains with unknown properties and illustrate the Argo J5 velocity, wheel rolling resistance, wheel turning resistance, and shear stress on different terrains.
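As a minimal illustration of the simplest of the leveling schemes above, the sketch below simulates a PD controller driving a platform tilt angle back to level on a unit-inertia joint; the gains, time step, and inertia are illustrative assumptions, not values from the thesis.

```python
import math

def pd_level_step(theta, theta_dot, kp, kd, dt):
    """One Euler step of a PD leveling loop on a unit-inertia joint.

    The commanded torque u = -kp*theta - kd*theta_dot pushes the
    platform tilt angle theta back toward level (theta = 0).
    """
    u = -kp * theta - kd * theta_dot
    theta_dot += u * dt
    theta += theta_dot * dt
    return theta, theta_dot

# Start 5 degrees off level and run 5 s at 100 Hz with assumed gains.
theta, theta_dot = math.radians(5.0), 0.0
for _ in range(500):
    theta, theta_dot = pd_level_step(theta, theta_dot, kp=40.0, kd=12.0, dt=0.01)

print(abs(math.degrees(theta)) < 1.0)  # True: within the one-degree target
```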

    Coordinated Landing and Mapping with Aerial and Ground Vehicle Teams

    Micro Unmanned Aerial Vehicle (UAV) and Unmanned Ground Vehicle (UGV) teams present tremendous opportunities for expanding the range of operations of these vehicles. Effective coordination can exploit the strengths of both vehicles while mitigating each other's weaknesses. In particular, a micro UAV typically has limited flight time due to its small payload capacity. To take advantage of the mobility and sensor coverage of a micro UAV in long-range, long-duration surveillance missions, a UGV can act as a mobile station for recharging or battery swaps, and the ability to perform autonomous docking is a prerequisite for such operations. This work presents an approach to coordinating autonomous docking between a quadrotor UAV and a skid-steered UGV. A joint controller is designed to eliminate the relative position error between the vehicles. The controller is validated in simulations, and successful landings are achieved in indoor environments as well as outdoor settings with standard sensors and real disturbances. Another goal of this work is to improve the autonomy of UAV-UGV teams in positioning-denied environments, a very common scenario for many robotics applications. In such environments, Simultaneous Localization and Mapping (SLAM) capability is the foundation for all autonomous operations. A successful SLAM algorithm generates maps for path planning and object recognition while providing localization information for position tracking. This work proposes a SLAM algorithm that is capable of generating a high-fidelity surface model of the surroundings while accurately estimating the camera pose in real time. This algorithm improves on a clear deficiency of its predecessor in its ability to perform dense reconstruction without strict volume limitations, enabling practical deployment on robotic systems.
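The joint-controller idea, commanding both vehicles so that the relative position error decays to zero, can be sketched with proportional velocity commands in the plane; the gains, time step, and simplified kinematics below are assumptions for illustration, not the controller derived in the work.

```python
import math

def docking_step(uav, ugv, k_uav, k_ugv, dt):
    """One step of a joint proportional controller: the UAV flies toward the
    UGV and the UGV drives toward the UAV, both at speeds proportional to
    the relative position error, so the error decays to zero."""
    ex, ey = ugv[0] - uav[0], ugv[1] - uav[1]
    uav = (uav[0] + k_uav * ex * dt, uav[1] + k_uav * ey * dt)
    ugv = (ugv[0] - k_ugv * ex * dt, ugv[1] - k_ugv * ey * dt)
    return uav, ugv

# Vehicles start 5 m apart; simulate 8 s at 50 Hz.
uav, ugv = (0.0, 0.0), (4.0, 3.0)
for _ in range(400):
    uav, ugv = docking_step(uav, ugv, k_uav=1.0, k_ugv=0.3, dt=0.02)

err = math.hypot(ugv[0] - uav[0], ugv[1] - uav[1])
print(err < 0.05)  # True: close enough to attempt touchdown
```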

    Quantitative Analysis of Non-Linear Probabilistic State Estimation Filters for Deployment on Dynamic Unmanned Systems

    The work conducted in this thesis is part of an EU Horizon 2020 research initiative known as DigiArt. This part of the DigiArt project presents and explores the design, formulation, and implementation of probabilistically oriented state estimation algorithms, with a focus on unmanned system positioning and three-dimensional (3D) mapping. State estimation algorithms are an influential aspect of any dynamic system with autonomous capabilities: the ability to predictively estimate future conditions enables effective decision making and anticipation of possible changes in the environment. Initial experimental procedures utilised a wireless ultra-wideband (UWB) communication network. This system functioned through statically situated beacon nodes used to localise a dynamically operating node. The simultaneous deployment of this UWB network, an unmanned system, and a Robotic Total Station (RTS) with active and remote tracking features enabled the characterisation of the range measurement errors associated with the UWB network. These range error metrics were then integrated into a Range-based Extended Kalman Filter (R-EKF) state estimation algorithm with active outlier identification, which outperformed the native approach used by the UWB system for two-dimensional (2D) pose estimation. The study was then expanded to focus on state estimation in 3D, where a Six Degree-of-Freedom EKF (6DOF-EKF) was designed using Light Detection and Ranging (LiDAR) as its primary observation source. A two-step method was proposed which extracted information between consecutive LiDAR scans. Firstly, motion estimation concerning the Cartesian states x and y and the unmanned system's heading (ψ) was achieved through a 2D feature-matching process. Secondly, the extraction and alignment of ground planes from the LiDAR scans enabled motion extraction for the Cartesian position z and the attitude angles roll (φ) and pitch (Ξ). Results showed that the ground plane alignment failed when two scans were at a 10.5° offset. Therefore, to overcome this limitation, an Error State Kalman Filter (ES-KF) was formulated and deployed as a sub-system within the 6DOF-EKF. This enabled the successful tracking of roll and pitch and the calculation of z. The 6DOF-EKF was seen to outperform the R-EKF and the native UWB approach, as it was much more stable, produced less noise in its position estimates, and provided 3D pose estimation.
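A minimal sketch of one R-EKF measurement update, assuming a 2D position state corrected by a single UWB range, with a chi-square innovation gate standing in for the active outlier identification; the gate threshold, covariances, and beacon position are illustrative assumptions, not values from the thesis.

```python
import math

def range_ekf_update(state, P, beacon, z, r_var, gate=9.0):
    """EKF update of a 2D position state (x, y) from one UWB range measurement.
    P is the 2x2 covariance as nested lists; measurements whose normalised
    innovation squared exceeds the gate are rejected as outliers."""
    dx = state[0] - beacon[0]
    dy = state[1] - beacon[1]
    pred = math.hypot(dx, dy)            # predicted range
    hx, hy = dx / pred, dy / pred        # measurement Jacobian H = [hx, hy]
    PHt = (P[0][0] * hx + P[0][1] * hy, P[1][0] * hx + P[1][1] * hy)
    S = hx * PHt[0] + hy * PHt[1] + r_var    # innovation covariance
    nu = z - pred                             # innovation
    if nu * nu / S > gate:                    # outlier: skip the update
        return state, P
    K = (PHt[0] / S, PHt[1] / S)              # Kalman gain
    state = (state[0] + K[0] * nu, state[1] + K[1] * nu)
    # Covariance update P = (I - K H) P
    P = [[P[0][0] - K[0] * (hx * P[0][0] + hy * P[1][0]),
          P[0][1] - K[0] * (hx * P[0][1] + hy * P[1][1])],
         [P[1][0] - K[1] * (hx * P[0][0] + hy * P[1][0]),
          P[1][1] - K[1] * (hx * P[0][1] + hy * P[1][1])]]
    return state, P

state, P = (0.0, 0.0), [[4.0, 0.0], [0.0, 4.0]]
state, P = range_ekf_update(state, P, beacon=(10.0, 0.0), z=9.0, r_var=0.04)
print(round(state[0], 2))  # → 0.99, pulled toward the 9 m measured range
```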

    An Intelligent Human-Tracking Robot Based-on Kinect Sensor

    This thesis presents an indoor human-tracking robot that can also control other electrical devices for the user. The overall experimental setup consists of a skid-steered mobile robot, a Kinect sensor, a laptop, a wide-angle camera, and two lamps. The Kinect sensor is mounted on the mobile robot to collect position and skeleton data of the user in real time and send them to the laptop. The laptop processes these data and then sends commands to the robot and the lamps. The wide-angle camera is mounted on the ceiling to verify the tracking performance of the Kinect sensor. A C++ program runs the camera, and a Java program processes the data from the C++ program and the Kinect sensor and then sends the commands to the robot and the lamps. The human-tracking capability is realized by two decoupled feedback controllers for linear and rotational motion. Experimental results show small delays (0.5 s for linear motion and 1.5 s for rotational motion) and steady-state errors (0.1 m for linear motion and 1.5° for rotational motion); these are acceptable, since they do not drive the tracking distance or angle outside the desired range (±0.05 m and ±10° of the reference input), and the tracking algorithm is robust. Four gestures are designed for the user to control the robot: two switch-mode gestures, a lamp-create gesture, and a lamp-selection and color-change gesture. Success rates of gesture recognition exceed 90% within the detectable range of the Kinect sensor.
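The two decoupled feedback controllers can be sketched as independent proportional laws on the following distance and the bearing to the user; the gains and the reference distance below are assumptions for illustration, not the tuned values from the thesis.

```python
import math

def tracking_commands(dist, angle, ref_dist=1.5, kv=0.8, kw=1.2):
    """Decoupled proportional controllers: linear velocity regulates the
    following distance, angular velocity regulates the bearing to the user.
    The gains and the 1.5 m reference distance are illustrative."""
    v = kv * (dist - ref_dist)   # m/s: drive forward if the user is too far
    w = kw * angle               # rad/s: rotate toward the user
    return v, w

# User 2.5 m away and 10 degrees off-axis.
v, w = tracking_commands(dist=2.5, angle=math.radians(10))
print(round(v, 2), round(w, 3))  # → 0.8 0.209
```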

    Kinematics Based Visual Localization for Skid-Steering Robots: Algorithm and Theory

    To build commercial robots, skid-steering mechanical designs are increasingly popular due to their manufacturing simplicity and unique mechanism. However, they also pose significant challenges for software and algorithm design, especially for pose estimation (i.e., determining the robot's rotation and position), which is the prerequisite of autonomous navigation. While general localization algorithms have been extensively studied in the research community, fundamental problems remain to be resolved for localizing skid-steering robots, which change their orientation with a skid. To tackle this problem, we propose a probabilistic sliding-window estimator dedicated to skid-steering robots, using measurements from a monocular camera, the wheel encoders, and optionally an inertial measurement unit (IMU). Specifically, we explicitly model the kinematics of skid-steering robots by both track instantaneous centers of rotation (ICRs) and correction factors, which are capable of compensating for the complexity of track-to-terrain interaction, imperfections in the mechanical design, terrain conditions and smoothness, and so on. To prevent performance degradation over a robot's lifelong missions, the time- and location-varying kinematic parameters are estimated online along with the pose estimation states in a tightly-coupled manner. More importantly, we conduct an in-depth observability analysis for different sensor and design configurations in this paper, which provides theoretical tools for making the correct choices when building real commercial robots. In our experiments, we validate the proposed method in both simulation tests and real-world experiments, which demonstrate that our method outperforms competing methods by wide margins.
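A sketch of an ICR-based kinematic map of the kind referred to above, taking left/right track speeds to a body twist; the fixed ICR parameters below are illustrative assumptions (in the paper they are time- and location-varying and estimated online, together with correction factors).

```python
def skid_steer_twist(v_l, v_r, y_l=0.35, y_r=-0.35, x_icr=0.05):
    """ICR-based kinematic model for a skid-steering robot: map left/right
    track speeds (m/s) to the body twist (vx, vy, wz). y_l and y_r are the
    lateral ICR coordinates of the two tracks; x_icr induces a lateral
    slip velocity during rotation. All parameter values are illustrative."""
    wz = (v_r - v_l) / (y_l - y_r)              # yaw rate
    vx = (v_r * y_l - v_l * y_r) / (y_l - y_r)  # forward velocity
    vy = x_icr * wz                              # lateral slip velocity
    return vx, vy, wz

# Equal track speeds: straight motion, no rotation, no lateral slip.
print(skid_steer_twist(1.0, 1.0))  # → (1.0, 0.0, 0.0)
```

In the symmetric case (y_l = -y_r, x_icr = 0) this reduces to the ideal differential-drive model; the offsets capture the track-to-terrain slip that makes skid-steering odometry hard.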

    Preliminary laboratory test on navigation accuracy of an autonomous robot for measuring air quality in livestock buildings

    Air quality in many poultry buildings is less than desirable. However, the measurement of concentrations of airborne pollutants in livestock buildings is generally quite difficult. To counter this, the development of an autonomous robot that could continuously collect key environmental data in livestock buildings was initiated. This research presents a specific part of the larger study, focused on preliminary laboratory tests evaluating the navigation precision of the robot being developed under different ground surface conditions and with different localization algorithms based on internal sensors. The robot was constructed so that each wheel was driven by an independent DC motor, with an odometer fixed on each motor. An inertial measurement unit (IMU) was rigidly fixed on the robot's vehicle platform. The research focused on using the internal sensors to calculate the robot position (x, y, Ξ) through three different methods: the first relied only on odometer dead reckoning (ODR), the second combined odometer and gyroscope data dead reckoning (OGDR), and the last was based on a Kalman filter data fusion algorithm (KFDF). A series of tests were completed to generate the robot's trajectory and analyse the localisation accuracy. These tests were conducted on different types of surfaces and path profiles. The results proved that the ODR calculation of the robot's position is inaccurate due to cumulative errors and the large deviation of the heading angle estimate. However, incorporating the gyroscope data from the IMU improved the accuracy of the robot's heading angle estimate. The KFDF calculation resulted in a better heading angle estimate than the ODR or OGDR calculations. The ground type was also found to be an influencing factor in localisation error.
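The OGDR scheme can be sketched as a dead-reckoning step that takes translation from the odometers but the heading change from the gyroscope; the wheel increments and sample counts below are illustrative assumptions.

```python
import math

def ogdr_step(x, y, theta, d_left, d_right, gyro_dtheta):
    """One dead-reckoning step of the OGDR scheme: translation comes from
    the mean odometer increment, while the heading change comes from the
    gyroscope (pure ODR would difference the wheel increments instead)."""
    d = 0.5 * (d_left + d_right)   # mean travelled distance this step
    theta += gyro_dtheta           # gyro-based heading update
    x += d * math.cos(theta)
    y += d * math.sin(theta)
    return x, y, theta

# 100 straight-line steps of 1 cm each with zero measured rotation.
x, y, th = 0.0, 0.0, 0.0
for _ in range(100):
    x, y, th = ogdr_step(x, y, th, 0.01, 0.01, 0.0)
print(round(x, 3), round(y, 3))  # → 1.0 0.0
```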

    A State Estimation Approach for a Skid-Steered Off-Road Mobile Robot

    This thesis presents a novel state estimation structure, a hybrid extended Kalman filter/Kalman filter (EKF/KF) developed for a skid-steered, six-wheeled ARGO all-terrain vehicle (ATV). The ARGO ATV is a teleoperated unmanned ground vehicle (UGV) custom-fitted with an inertial measurement unit, wheel encoders, and a GPS. To enable the ARGO for autonomous applications, the proposed hybrid EKF/KF state estimator is combined with the vehicle's sensor measurements to estimate key parameters of the vehicle. Field experiments in this thesis reveal that the proposed estimation structure is able to estimate the position, velocity, orientation, and longitudinal slip of the ARGO with reasonable accuracy. In addition, the proposed estimation structure is well-suited for online applications and can incorporate offline virtual GPS data to further improve the accuracy of the position estimates. It is also capable of estimating the longitudinal slip of every wheel of the ARGO, and the slip results align well with the motion estimate findings.
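Longitudinal slip of a wheel is typically defined from the encoder-derived wheel surface speed and the estimated body velocity; the sketch below uses that standard definition with illustrative numbers, not the thesis's estimator.

```python
def longitudinal_slip(wheel_omega, wheel_radius, v_body):
    """Longitudinal slip ratio from the encoder wheel speed (rad/s), the
    wheel radius (m), and the estimated body velocity (m/s). Positive slip
    means the wheel surface moves faster than the vehicle (traction slip)."""
    v_wheel = wheel_omega * wheel_radius
    if abs(v_wheel) < 1e-6:        # avoid dividing by a stationary wheel
        return 0.0
    return (v_wheel - v_body) / v_wheel

# Wheel surface at 2.0 m/s while the body moves at 1.8 m/s: 10% slip.
print(round(longitudinal_slip(8.0, 0.25, 1.8), 2))  # → 0.1
```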