2,766 research outputs found

    Where Am I? SLAM for Mobile Machines on a Smart Working Site

    Current optimization approaches for construction machinery are mainly based on internal sensors. However, a reasonable operating strategy is determined not only by the machine's intrinsic signals but also, very strongly, by environmental information, especially the terrain. Because a construction site changes dynamically and consequently lacks a high-definition map, Simultaneous Localization and Mapping (SLAM) that provides terrain information for construction machines remains challenging. Current SLAM technologies proposed for mobile machines depend strongly on costly or computationally expensive sensors, such as RTK-GPS and cameras, so commercial use is rare. In this study, we propose an affordable SLAM method that creates a multi-layer grid map of the construction site so that the machine has access to environmental information and can be optimized accordingly. Concretely, after the machine passes over a grid cell, we obtain the local information and record it. Combined with positioning technology, we then create a map of the places of interest on the construction site. Based on results gathered from Gazebo simulations, we show that a suitable sensor layout is the combination of one IMU and two differential GPS antennas fused with an unscented Kalman filter, which keeps the average distance error below 2 m and the mapping error below 1.3% in a harsh environment. As an outlook, our SLAM technology provides the cornerstone for many efficiency-improvement approaches.
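The per-cell recording step of such a multi-layer grid map can be sketched as follows. This is an illustrative Python sketch, not the paper's implementation; the layer name (`elevation`) and the running-average update are assumptions:

```python
import math

class MultiLayerGridMap:
    """Sparse multi-layer grid map: each visited cell stores per-layer values."""

    def __init__(self, cell_size=1.0):
        self.cell_size = cell_size
        self.cells = {}  # (ix, iy) -> {layer_name: (count, running_mean)}

    def _index(self, x, y):
        # world coordinates -> integer grid cell
        return (math.floor(x / self.cell_size), math.floor(y / self.cell_size))

    def record(self, x, y, **measurements):
        """Record local measurements for the cell the machine is passing over."""
        cell = self.cells.setdefault(self._index(x, y), {})
        for layer, value in measurements.items():
            # running average, so repeated passes refine the estimate
            n, mean = cell.get(layer, (0, 0.0))
            cell[layer] = (n + 1, mean + (value - mean) / (n + 1))

    def query(self, x, y, layer):
        entry = self.cells.get(self._index(x, y), {}).get(layer)
        return entry[1] if entry is not None else None

# two passes over the same 2 m cell refine its (hypothetical) elevation layer
m = MultiLayerGridMap(cell_size=2.0)
m.record(1.0, 1.0, elevation=0.5)
m.record(1.5, 0.5, elevation=0.7)
print(m.query(0.9, 0.9, "elevation"))  # ≈ 0.6
```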

    Use of Unmanned Aerial Systems in Civil Applications

    Interest in drones has grown exponentially over the last ten years, and these machines are often presented as the optimal solution for a huge number of civil applications (monitoring, agriculture, emergency management, etc.). However, the promises still do not match the data coming from the consumer market, suggesting that the only big field in which the use of small unmanned aerial vehicles is actually profitable is video production. This may be explained partly by the strong limits imposed by existing (and often "obsolete") national regulations, but also - and perhaps mainly - by the lack of real autonomy. The vast majority of vehicles on the market nowadays are in fact autonomous only in the sense that they are able to follow a pre-determined list of latitude-longitude-altitude coordinates. The aim of this thesis is to demonstrate that complete autonomy for UAVs can be achieved only with high-performance control, reliable and flexible planning platforms and strong perception capabilities. These topics are introduced and discussed by presenting the results of the main research activities performed by the candidate over the last three years, which have resulted in 1) the design, integration and control of a test bed for validating and benchmarking vision-based algorithms for space applications; 2) the implementation of a cloud-based platform for multi-agent mission planning; and 3) the on-board use of a multi-sensor fusion framework based on an Extended Kalman Filter architecture.

    Orchard mapping and mobile robot localisation using on-board camera and laser scanner data fusion

    Agricultural mobile robots have great potential to effectively implement different agricultural tasks. They can save human labour costs, avoid the need for people having to perform risky operations and increase productivity. Automation and advanced sensing technologies can provide up-to-date information that helps farmers in orchard management. Data collected from on-board sensors on a mobile robot provide information that can help the farmer detect tree or fruit diseases or damage, measure tree canopy volume and monitor fruit development. In orchards, trees are natural landmarks providing suitable cues for mobile robot localisation and navigation, as trees are nominally planted in straight and parallel rows. This thesis presents a novel tree trunk detection algorithm that detects trees and discriminates between trees and non-tree objects in the orchard using a camera and 2D laser scanner data fusion. A local orchard map of the individual trees was developed, allowing the mobile robot to navigate to a specific tree in the orchard to perform a specific task such as tree inspection. Furthermore, this thesis presents a localisation algorithm that does not rely on GPS positions and depends only on the on-board sensors of the mobile robot, without adding any artificial landmarks, reflective tapes or tags to the trees. The novel tree trunk detection algorithm combined the features extracted from a low-cost camera's images and 2D laser scanner data to increase the robustness of the detection. The developed algorithm used a new method to detect the edge points and determine the width of the tree trunks and non-tree objects from the laser scan data. Then a projection of the edge points from the laser scanner coordinates to the image plane was implemented to construct a region of interest with the required features for tree trunk colour and edge detection. The camera images were used to verify the colour and the parallel edges of the tree trunks and non-tree objects.
The algorithm automatically adjusted the colour detection parameters after each test, which was shown to increase the detection accuracy. The orchard map was constructed based on tree trunk detection and consisted of the 2D positions of the individual trees and non-tree objects. The map of the individual trees was used as an a priori map for mobile robot localisation. A data fusion algorithm based on an Extended Kalman filter was used for pose estimation of the mobile robot along different paths (midway between rows, close to the rows and moving around trees in the row) and different turns (semi-circle and right-angle turns) required for tree inspection tasks. The 2D positions of the individual trees were used in the correction step of the Extended Kalman filter to enhance localisation accuracy. Experimental tests were conducted in a simulated environment and a real orchard to evaluate the performance of the developed algorithms. The tree trunk detection algorithm was evaluated under two broad illumination conditions (sunny and cloudy). The algorithm was able to detect the tree trunks (regular and thin tree trunks) and discriminate between trees and non-tree objects with a detection accuracy of 97%, showing that the fusion of vision and 2D laser scanner technologies produced robust tree trunk detection. The mapping method successfully localised all the trees and non-tree objects of the tested tree rows in the orchard environment. The mapping results indicated that the constructed map can be reliably used for mobile robot localisation and navigation. The localisation algorithm was evaluated against logged RTK-GPS positions for different paths and headland turns. The averages of the RMS of the position error in the x and y coordinates and in Euclidean distance were 0.08 m, 0.07 m and 0.103 m respectively, whilst the average RMS heading error was 3.32°.
These results were considered acceptable for the target application of autonomous mobile robot navigation and tree inspection tasks in orchards, both while driving along the rows and when executing headland turns.
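The correction step described above — using the mapped 2D position of a detected tree as a landmark in an Extended Kalman filter — can be sketched with a standard range-bearing landmark update. The state layout, measurement model and noise values below are illustrative assumptions, not the thesis's actual filter:

```python
import numpy as np

def ekf_tree_correction(x, P, z, tree_xy, R):
    """EKF correction using a mapped tree trunk as a landmark.
    x: robot state [px, py, theta]; z: measured [range, bearing] to the tree."""
    px, py, th = x
    dx, dy = tree_xy[0] - px, tree_xy[1] - py
    q = dx * dx + dy * dy
    r = np.sqrt(q)
    z_hat = np.array([r, np.arctan2(dy, dx) - th])   # predicted measurement
    H = np.array([[-dx / r, -dy / r,  0.0],          # measurement Jacobian
                  [ dy / q, -dx / q, -1.0]])
    y = z - z_hat
    y[1] = (y[1] + np.pi) % (2 * np.pi) - np.pi      # wrap bearing residual
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)                   # Kalman gain
    return x + K @ y, (np.eye(3) - K @ H) @ P

# a perfect measurement of a tree at (4, 3) leaves the state unchanged
# but shrinks the covariance
x = np.array([0.0, 0.0, 0.0])
P = np.eye(3) * 0.5
z = np.array([5.0, np.arctan2(3.0, 4.0)])
x_new, P_new = ekf_tree_correction(x, P, z, (4.0, 3.0), np.eye(2) * 0.01)
```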

    A Robotic System for In-Situ Measurement of Soil Total Carbon and Nitrogen

    Surges in the cost of fertilizer in recent times, coupled with the environmental effects of its over-application, have driven the need for farmers to optimize the amount of fertilizer they apply on the farm. One of the key steps in determining the right amount of fertilizer to apply in a given field is measuring the amount of nutrients present in the soil. To ascertain nutrient deficiencies, most farmers perform wet chemistry analysis of soil samples, which requires a lot of time and is expensive. In this research project, a robotic system was designed and developed that could autonomously move to predetermined GPS waypoints and estimate total carbon (TC) and total nitrogen (TN) content in the soil in-situ using visible and near-infrared reflectance spectroscopy - a faster and cheaper method to determine soil nutrients in real-time. For the locomotion of the robotic system, a Husky robotic platform by Clearpath Robotics was used. A Gen2 robotic arm by Kinova Robotics was used for the precise positioning of the probe when taking soil spectral measurements. The probe was custom designed and built to be used in conjunction with the robotic arm as an end-effector. Two lightweight and inexpensive spectrometers by OceanInsight, namely, Flame VisNIR and Flame NIR+, were used to capture the spectral signatures of soil. The prediction was done with a spectroscopic calibration model, and External Parameter Orthogonalization (EPO) was applied to remove the moisture effect from the soil spectra. The robotic system was tested at the University of Nebraska-Lincoln (UNL) NU-Spidercam phenotyping facility. Two sets of spectra were obtained from the field campaign: in-situ and dry-ground spectra. The dry-ground spectra were used as library scans and Partial Least Squares Regression (PLSR) was used for the modeling. The in-situ spectra were randomly divided into EPO calibration and validation sets.
Satisfactory results were obtained from the initial prediction on the dry-ground validation set, with R2 (coefficient of determination) of 0.77 and RMSE (Root Mean Squared Error) of 0.15% for TC, and R2 of 0.64 and RMSE of 171 ppm for TN. There was a reduction in R2 and an increase in RMSE values for both TC and TN when prediction was done directly on the in-situ validation set. For TC, the R2 dropped to 0.25 and the RMSE increased to 0.29%, and for TN, the R2 dropped to 0.19 and the RMSE increased to 259 ppm. This was primarily due to the presence of moisture in the field samples. When EPO was applied to both the in-situ validation and dry-ground sets, the R2 increased to 0.62 and the RMSE decreased to 0.2% for TC, and the R2 increased to 0.51 and the RMSE decreased to 200 ppm for TN. These findings highlight the importance of accounting for moisture effects in the prediction of soil properties using the robotic system and demonstrate the potential of the system in enabling soil monitoring and analysis in-situ. Advisor: Yufeng G
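The EPO moisture correction can be sketched as follows: the difference between wet (in-situ) and dry-ground spectra of the same samples spans the moisture effect, and projecting spectra onto the orthogonal complement of its dominant singular directions removes it. This is a minimal NumPy sketch on assumed synthetic data, not the project's calibration code:

```python
import numpy as np

def epo_projection(wet, dry, n_components=2):
    """External Parameter Orthogonalization: build a projection matrix that
    removes the dominant moisture direction(s) from soil spectra."""
    D = wet - dry                            # difference spectra carry the moisture effect
    _, _, Vt = np.linalg.svd(D, full_matrices=False)
    Vc = Vt[:n_components].T                 # dominant moisture loadings
    return np.eye(wet.shape[1]) - Vc @ Vc.T  # project onto their orthogonal complement

# synthetic demo: 20 samples, 50 bands, with a rank-1 "moisture" perturbation
rng = np.random.default_rng(0)
dry = rng.normal(size=(20, 50))
moisture_dir = rng.normal(size=50)
wet = dry + rng.normal(size=(20, 1)) * moisture_dir

P = epo_projection(wet, dry, n_components=1)
corrected = wet @ P
# the moisture component of the wet spectra is (numerically) annihilated
print(np.linalg.norm((wet - dry) @ P))
```

The corrected spectra would then feed a PLSR model fitted on equally-projected library scans.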

    Assessment of simulated and real-world autonomy performance with small-scale unmanned ground vehicles

    Off-road autonomy is a challenging topic that requires robust systems to both understand and navigate complex environments. While on-road autonomy has seen a major expansion in recent years in the consumer space, off-road systems are mostly relegated to niche applications. However, these applications can provide safety and navigation in dangerous areas, which are the most suited for autonomy tasks. Traversability analysis is at the core of many of the algorithms employed in these topics. In this thesis, a Clearpath Robotics Jackal vehicle is equipped with a 3D Ouster laser scanner to characterize and traverse off-road environments. The Mississippi State University Autonomous Vehicle Simulator (MAVS) and the Navigating All Terrains Using Robotic Exploration (NATURE) autonomy stack are used in conjunction with the small-scale vehicle platform to traverse uneven terrain and collect data. Additionally, the NATURE stack is used as a point of comparison between a MAVS-simulated and a physical Clearpath Robotics Jackal vehicle in testing.

    INTEGRATION OF THE SIMULATION ENVIRONMENT FOR AUTONOMOUS ROBOTS WITH ROBOTICS MIDDLEWARE

    Robotic simulators have long been used to test code and designs before any actual hardware is tested, to ensure safety and efficiency. Many current robotics simulators are either closed source (calling into question the fidelity of their simulations) or very complicated to install and use. There is a need for software that provides good-quality simulation while being easy to use. Another issue arises when moving code from the simulator to actual hardware: in many cases, the code must be changed drastically to accommodate the final hardware on the robot, which can invalidate aspects of the simulation. This defense describes methods and techniques for developing high-fidelity graphical and physical simulation of autonomous robotic vehicles that is simple to use and maintains minimal distinction between simulated and actual hardware. These techniques and methods were proven by the development of the Simulation Environment for Autonomous Robots (SEAR) described here. SEAR is a 3-dimensional open-source robotics simulator written by Adam Harris in Java that provides high-fidelity graphical and physical simulations of user-designed vehicles running user-defined code in user-designed virtual terrain. Multiple simulated sensors are available, including a GPS, a triple-axis accelerometer, a triple-axis gyroscope, a compass with declination calculation, LIDAR, and a class of distance sensors that includes RADAR, SONAR, ultrasonic and infrared. Several of these sensors have been validated against real-world sensors and other simulation software.

    Robotic navigation and inspection of bridge bearings

    This thesis focuses on the development of a robotic platform for bridge bearing inspection. The existing literature on this topic highlights an aspiration for increased automation of bridge inspection, due to an increasing amount of ageing infrastructure and costly inspection. Furthermore, bridge bearings are highlighted as being among the most costly components of the bridge to maintain. However, although autonomous robotic inspection is often stated as an aspiration, the existing literature for robotic bridge inspection often neglects to include the requirement of autonomous navigation. To achieve autonomous inspection, methods for mapping and localising in the bridge structure are required. This thesis compares existing methods for simultaneous localisation and mapping (SLAM) with localisation-only methods. In addition, a method for using pre-existing data to create maps for localisation is proposed. A robotic platform was developed, and these methods for localisation and mapping were then compared first in a laboratory environment and then in a real bridge environment. The errors in the bridge environment were greater than in the laboratory environment, but remained within a defined error bound. A combined approach is suggested as an appropriate method for combining the lower errors of a SLAM approach with the advantages of a localisation approach for defining existing goals. Longer-term testing in a real bridge environment is still required. The use of existing inspection data is then extended to the creation of a simulation environment, with the goal of creating a methodology for testing different configurations of bridges or robots in a more realistic environment than laboratory testing or other existing simulation environments. Finally, the inspection of the structure surrounding the bridge bearing is considered, with a particular focus on the detection and segmentation of cracks in concrete.
A deep learning approach is used to segment cracks from an existing dataset and is compared to an existing machine learning approach, with the deep-learning approach achieving higher performance under a pixel-based evaluation. Other evaluation methods that take the structure of the crack, and other related datasets, into account were also compared. The generalisation of the crack-segmentation approach is evaluated by comparing the results of models trained on different datasets. Finally, recommendations for improving the datasets to allow better comparisons in future work are given.
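A pixel-based evaluation of a binary crack mask can be sketched as precision, recall and F1 computed over pixels; the toy masks below are assumptions for illustration, not the thesis's data:

```python
import numpy as np

def pixel_metrics(pred, truth):
    """Pixel-based evaluation of a binary crack mask against ground truth."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    tp = np.logical_and(pred, truth).sum()    # crack pixels found
    fp = np.logical_and(pred, ~truth).sum()   # false detections
    fn = np.logical_and(~pred, truth).sum()   # missed crack pixels
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1

# toy example: a 6-pixel horizontal crack, predicted shifted by one pixel
truth = np.zeros((8, 8), dtype=bool); truth[3, 1:7] = True
pred = np.zeros((8, 8), dtype=bool); pred[3, 2:8] = True
p, r, f = pixel_metrics(pred, truth)   # each 5/6: 5 overlapping pixels
```

Note that such per-pixel scores penalise small spatial offsets heavily, which is one motivation for the structure-aware evaluations mentioned above.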

    The MRS UAV System: Pushing the Frontiers of Reproducible Research, Real-world Deployment, and Education with Autonomous Unmanned Aerial Vehicles

    We present a multirotor Unmanned Aerial Vehicle (UAV) control and estimation system for supporting replicable research through realistic simulations and real-world experiments. We propose a unique multi-frame localization paradigm for estimating the states of a UAV in various frames of reference using multiple sensors simultaneously. The system enables complex missions in GNSS and GNSS-denied environments, including outdoor-indoor transitions and the execution of redundant estimators for backing up unreliable localization sources. Two feedback control designs are presented: one for precise and aggressive maneuvers, and the other for stable and smooth flight with a noisy state estimate. The proposed control and estimation pipeline is constructed without using the Euler/Tait-Bryan angle representation of orientation in 3D. Instead, we rely on rotation matrices and a novel heading-based convention to represent the one free rotational degree of freedom in 3D of a standard multirotor helicopter. We provide an actively maintained and well-documented open-source implementation, including realistic simulation of UAVs, sensors, and localization systems. The proposed system is the product of years of applied research on multi-robot systems, aerial swarms, aerial manipulation, motion planning, and remote sensing. All our results have been supported by real-world system deployment that shaped the system into the form presented here. In addition, the system was utilized during the participation of our team from the CTU in Prague in the prestigious MBZIRC 2017 and 2020 robotics competitions, and also in the DARPA SubT challenge. Each time, our team was able to secure top places among the best competitors from all over the world.
On each occasion, the challenges have motivated the team to improve the system and to gain a great amount of high-quality experience within tight deadlines.
Comment: 28 pages, 20 figures, submitted to the Journal of Intelligent & Robotic Systems (JINT); for the provided open-source software see http://github.com/ctu-mr
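The heading-based convention can be illustrated directly: the heading is the azimuth of the body x-axis projected onto the world xy-plane, read straight off the rotation matrix without any Euler-angle decomposition. The sketch below is a minimal illustration of that idea; details of the actual MRS convention may differ:

```python
import numpy as np

def heading(R):
    """Heading of a multirotor: azimuth of the body x-axis (first column of R)
    projected onto the world xy-plane. No Euler/Tait-Bryan decomposition is
    needed; it is undefined only when the body x-axis points straight up/down."""
    return np.arctan2(R[1, 0], R[0, 0])

def Rz(a):
    # rotation about the world z-axis
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

def Rx(a):
    # rotation about the x-axis (roll)
    c, s = np.cos(a), np.sin(a)
    return np.array([[1.0, 0.0, 0.0], [0.0, c, -s], [0.0, s, c]])

# a yawed-then-rolled attitude: rolling does not disturb the heading
R = Rz(0.7) @ Rx(0.3)
print(heading(R))  # ≈ 0.7
```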