3,975 research outputs found

    A path planning and path-following control framework for a general 2-trailer with a car-like tractor

    Maneuvering a general 2-trailer with a car-like tractor in backward motion is a task that requires significant skill to master and is unarguably one of the most complicated tasks a truck driver has to perform. This paper presents a path planning and path-following control solution that can be used to automatically plan and execute difficult parking and obstacle avoidance maneuvers by combining backward and forward motion. A lattice-based path planning framework is developed to generate kinematically feasible and collision-free paths, and a path-following controller is designed to stabilize the lateral and angular path-following error states during path execution. To estimate the vehicle state needed for control, a nonlinear observer is developed which utilizes only information from sensors mounted on the car-like tractor, making the system independent of additional trailer sensors. The proposed path planning and path-following control framework is implemented on a full-scale test vehicle, and results from simulations and real-world experiments are presented.
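
    As a rough illustration of why reversing with trailers is difficult, the following minimal sketch (not the paper's model) simulates a simplified kinematic tractor-trailer system, assuming on-axle hitching; the paper's general 2-trailer has an off-axle hitch, and all lengths, speeds, and the initial joint-angle perturbation here are assumed values.

        import math

        # Simplified kinematics of a car-like tractor towing two trailers with
        # on-axle hitching. Lengths (m), speed, and time step are illustrative.
        def derivatives(state, v, phi, L0=3.0, L1=4.0, L2=6.0):
            """state = (x, y, th0, th1, th2): tractor pose and trailer headings."""
            x, y, th0, th1, th2 = state
            v1 = v * math.cos(th0 - th1)            # speed propagated to trailer 1
            return (v * math.cos(th0),              # dx/dt
                    v * math.sin(th0),              # dy/dt
                    v * math.tan(phi) / L0,         # tractor yaw rate
                    v * math.sin(th0 - th1) / L1,   # trailer 1 yaw rate
                    v1 * math.sin(th1 - th2) / L2)  # trailer 2 yaw rate

        def step(state, v, phi, dt=0.01):
            """One forward-Euler integration step."""
            d = derivatives(state, v, phi)
            return tuple(s + dt * ds for s, ds in zip(state, d))

        # Reverse in a straight line with a small initial articulation error:
        # the joint angles drift away from zero, illustrating why backward
        # motion is unstable without a stabilizing path-following controller.
        state = (0.0, 0.0, 0.0, 0.05, 0.0)
        for _ in range(800):
            state = step(state, v=-1.0, phi=0.0)
        print("final headings:", [round(a, 3) for a in state[2:]])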

    Lidar-based Obstacle Detection and Recognition for Autonomous Agricultural Vehicles

    Today, agricultural vehicles are available that can drive autonomously and follow exact route plans more precisely than human operators. Combined with advancements in precision agriculture, autonomous agricultural robots can reduce manual labor, improve workflow, and optimize yield. However, as of today, human operators are still required for monitoring the environment and acting upon potential obstacles in front of the vehicle. To eliminate this need, safety must be ensured by accurate and reliable obstacle detection and avoidance systems. In this thesis, lidar-based obstacle detection and recognition in agricultural environments have been investigated. A rotating multi-beam lidar generating 3D point clouds was used for point-wise classification of agricultural scenes, while multi-modal fusion with cameras and radar was used to increase performance and robustness. Two research perception platforms were presented and used for data acquisition. The proposed methods were all evaluated on recorded datasets that represented a wide range of realistic agricultural environments and included both static and dynamic obstacles. For 3D point cloud classification, two methods were proposed for handling density variations during feature extraction. One method outperformed a frequently used generic 3D feature descriptor, whereas the other method showed promising preliminary results using deep learning on 2D range images. For multi-modal fusion, four methods were proposed for combining lidar with color camera, thermal camera, and radar. Gradual improvements in classification accuracy were seen as spatial, temporal, and multi-modal relationships were introduced in the models. Finally, occupancy grid mapping was used to fuse and map detections globally, and runtime obstacle detection was applied on mapped detections along the vehicle path, thus simulating an actual traversal. The proposed methods serve as a first step towards full autonomy for agricultural vehicles. The study has thus shown that recent advancements in autonomous driving can be transferred to the agricultural domain when accurate distinctions are made between obstacles and processable vegetation. Future research in the domain has further been facilitated with the release of the multi-modal obstacle dataset, FieldSAFE.
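
    The occupancy grid mapping step mentioned above can be illustrated with the following minimal sketch (not the thesis' pipeline): lidar returns are accumulated into a 2D grid with log-odds evidence, and cells with enough evidence are reported as obstacle candidates. The cell size, grid extent, increment, and threshold are assumed values.

        import numpy as np

        # Accumulate lidar returns into a 2D occupancy grid using log-odds.
        # Cell size, grid extent, and increments are illustrative values.
        CELL = 0.2                 # metres per cell
        SIZE = 200                 # grid is SIZE x SIZE cells, sensor at centre
        L_HIT = 0.85               # log-odds added to a cell containing a return

        log_odds = np.zeros((SIZE, SIZE))

        def update(points_xy):
            """points_xy: (N, 2) array of returns in the vehicle frame (metres)."""
            idx = np.floor(points_xy / CELL).astype(int) + SIZE // 2
            valid = (idx >= 0).all(axis=1) & (idx < SIZE).all(axis=1)
            for ix, iy in idx[valid]:
                log_odds[ix, iy] += L_HIT

        def occupied_cells(threshold=2.0):
            """Indices of cells whose accumulated evidence exceeds the threshold."""
            return np.argwhere(log_odds > threshold)

        # Example: three scans of a synthetic obstacle roughly 5 m ahead.
        scan = np.random.normal(loc=[5.0, 0.0], scale=0.05, size=(50, 2))
        for _ in range(3):
            update(scan)
        print(len(occupied_cells()), "cells flagged as occupied")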

    Outdoor navigation of mobile robots

    AGVs in the manufacturing industry currently constitute the largest application area for mobile robots. Other applications have been gradually emerging, including various transporting tasks in demanding environments, such as mines or harbours. Most of the new potential applications require a free-ranging navigation system, which means that the path of a robot is no longer bound to follow a buried inductive cable. Moreover, changing the route of a robot or taking a new working area into use must be as efficient as possible. These requirements set new challenges for the navigation systems of mobile robots. One of the basic methods of building a free-ranging navigation system is to combine dead reckoning navigation with the detection of beacons at known locations. This approach is the backbone of the navigation systems in this study. The study describes research and development work in the area of mobile robotics, including applications in forestry, agriculture, mining, and transportation in a factory yard. The focus is on describing navigation sensors and methods for position and heading estimation by fusing dead reckoning and beacon detection information. A Kalman filter is typically used here for sensor fusion. Both the use of artificial and of natural beacons have been covered. Artificial beacons used in the research and development projects include specially designed flat objects detected using a camera, the GPS satellite positioning system, and passive transponders buried in the ground along the route of a robot. The walls in a mine tunnel have been used as natural beacons; in this case, special attention has been paid to map building and to using the map for positioning. The main contribution of the study is in describing the structure of a working navigation system, including positioning and position control. The navigation system for the mining application, in particular, contains some unique features that provide an easy-to-use procedure for taking new production areas into use and make it possible to drive a heavy mining machine autonomously at speeds comparable to those of an experienced human driver.
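
    The dead-reckoning-plus-beacons idea can be illustrated with a minimal sketch of the sensor fusion step, not the navigation systems described in the study: a linear Kalman filter on the robot's 2D position that is shifted by odometry and corrected by absolute fixes derived from detected beacons. Heading is assumed known to keep the filter linear, and the noise values are assumed.

        import numpy as np

        # Linear Kalman filter on 2D position: predict with odometry, correct
        # with beacon-derived position fixes. Noise levels are illustrative.
        x = np.zeros(2)            # position estimate [x, y]
        P = np.eye(2) * 1.0        # estimate covariance
        Q = np.eye(2) * 0.02       # dead-reckoning (process) noise per step
        R = np.eye(2) * 0.25       # beacon measurement noise

        def predict(odom_delta):
            """Dead-reckoning step: shift the estimate by the measured displacement."""
            global x, P
            x = x + odom_delta
            P = P + Q

        def correct(beacon_fix):
            """Beacon step: absolute position fix from a detected beacon (H = I)."""
            global x, P
            K = P @ np.linalg.inv(P + R)        # Kalman gain
            x = x + K @ (beacon_fix - x)
            P = (np.eye(2) - K) @ P

        # Example: drive forward by odometry for ten steps, then see a beacon.
        for _ in range(10):
            predict(np.array([0.5, 0.0]))
        correct(np.array([4.6, 0.3]))
        print(x, np.diag(P))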

    2D laser-based probabilistic motion tracking in urban-like environments

    All over the world, traffic injury and fatality rates are increasing every year. The combination of negligent and imprudent drivers with adverse road and weather conditions produces tragic results and dramatic loss of life. In this scenario, the use of mobile robotics technology onboard vehicles could reduce casualties. Obstacle motion tracking is an essential ability for car-like mobile robots. However, this task is not trivial in urban environments, where a great quantity and variety of obstacles may lead the vehicle to make erroneous decisions. Unfortunately, obstacles close to the vehicle's sensors frequently cause blind zones behind them in which other obstacles may be hidden. In this situation, the robot may lose vital information about these occluded obstacles, which can lead to collisions. To overcome this problem, an obstacle motion tracking module based only on 2D laser scan data was developed. Its main parts are obstacle detection, obstacle classification, and obstacle tracking algorithms. A motion detection module using scan matching was developed to improve data quality for navigation purposes, and a probabilistic grid representation of the environment was also implemented. The research was initially conducted using a MATLAB simulator that reproduces a simple 2D urban-like environment; the algorithms were then validated on data collected in real urban environments. Overall, the results demonstrated the usefulness of considering obstacle paths and velocities during navigation, at a reasonable computational cost. This will allow future controllers to achieve better performance in highly dynamic environments.
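
    A minimal sketch of the obstacle detection part, not the paper's algorithms: a single 2D laser scan is split into clusters at range discontinuities, and each cluster becomes a candidate obstacle that a tracker could then follow over time. The angular layout and break distance are assumed values.

        import math

        # Segment one 2D laser scan into obstacle clusters by breaking the scan
        # wherever consecutive points are far apart. Parameters are illustrative.
        def scan_to_points(ranges, angle_min=-math.pi / 2, angle_inc=math.pi / 180):
            """Convert range readings to Cartesian points in the sensor frame."""
            return [(r * math.cos(angle_min + i * angle_inc),
                     r * math.sin(angle_min + i * angle_inc))
                    for i, r in enumerate(ranges)]

        def cluster(points, break_dist=0.3):
            """Group consecutive points; a jump larger than break_dist starts
            a new cluster, i.e. a new candidate obstacle."""
            clusters, current = [], [points[0]]
            for prev, cur in zip(points, points[1:]):
                if math.dist(prev, cur) > break_dist:
                    clusters.append(current)
                    current = []
                current.append(cur)
            clusters.append(current)
            return clusters

        # Example: a synthetic scan with two nearby objects in front of a wall.
        ranges = [5.0] * 30 + [2.0] * 10 + [5.0] * 20 + [1.5] * 15 + [5.0] * 15
        obstacles = cluster(scan_to_points(ranges))
        print(len(obstacles), "clusters of sizes", [len(c) for c in obstacles])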

    Long-Term Localization for Self-Driving Cars

    Long-term localization is hard due to changing conditions, while relative localization within time sequences is much easier. To achieve long-term localization in a sequential setting, such as for self-driving cars, relative localization should be used to the fullest extent whenever possible. This thesis presents solutions and insights both for long-term sequential visual localization and for localization using global navigation satellite systems (GNSS) that push us closer to the goal of accurate and reliable localization for self-driving cars. It addresses the question: how can accurate and robust, yet cost-effective, long-term localization for self-driving cars be achieved? Starting from this question, the thesis explores how existing sensor suites for advanced driver-assistance systems (ADAS) can be used most efficiently, and how landmarks in maps can be recognized and used for localization even after severe changes in appearance. The findings show that:
    * State-of-the-art ADAS sensors are insufficient to meet the requirements for localization of a self-driving car in less than ideal conditions; GNSS and visual localization are identified as areas to improve.
    * Highly accurate relative localization with no convergence delay is possible by using time-relative GNSS observations with a single-band receiver and no base stations.
    * Sequential semantic localization is identified as a promising focus point for further research, based on a benchmark study comparing state-of-the-art visual localization methods in challenging autonomous driving scenarios including day-to-night and seasonal changes.
    * A novel sequential semantic localization algorithm improves accuracy while significantly reducing map size compared to traditional methods based on matching of local image features.
    * Improvements for semantic segmentation in challenging conditions can be made efficiently by automatically generating pixel correspondences between images from a multitude of conditions and enforcing a consistency constraint during training.
    * A segmentation algorithm with automatically defined and more fine-grained classes improves localization performance.
    * The performance advantage of modern local image features over traditional ones in single-image localization is all but erased when considering sequential data with odometry, encouraging future research to focus more on sequential localization rather than pure single-image localization.
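
    The sequential flavour of the localization problem can be illustrated with a minimal sketch that is not the thesis' algorithm: a histogram filter over discrete positions along a mapped route, where each map cell stores a semantic class, the belief is shifted by odometry, and then weighted by how well the observed class matches the map. The map, classes, and probabilities are assumed, and the route is treated as a loop for simplicity.

        import numpy as np

        # Histogram filter over positions along a route map of semantic classes
        # (0 = building, 1 = vegetation, 2 = pole). All values are illustrative.
        route_map = np.array([0, 0, 1, 1, 2, 0, 1, 2, 2, 0])   # class per cell
        belief = np.full(len(route_map), 1.0 / len(route_map)) # uniform prior
        P_CORRECT = 0.8                 # probability the observed class matches

        def motion_update(cells_moved=1):
            """Odometry says we advanced cells_moved cells (route treated as a loop)."""
            global belief
            belief = np.roll(belief, cells_moved)

        def observation_update(observed_class):
            """Weight each position by how well the observation matches the map."""
            global belief
            likelihood = np.where(route_map == observed_class,
                                  P_CORRECT, (1 - P_CORRECT) / 2)
            belief = belief * likelihood
            belief /= belief.sum()

        # Example: advance three cells, observing classes 1, 2 and 0 on the way.
        for obs in [1, 2, 0]:
            motion_update(1)
            observation_update(obs)
        print(belief.round(3), "most likely cell:", int(belief.argmax()))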

    Magnetic suspension turbine flow meter

    Measurement of liquid flow in certain areas, such as industrial plants, is critical, and inaccurate measurement can have serious consequences. Most liquid flow meters are based on Bernoulli's principle, but a turbine flow meter determines the flow rate differently, from the kinetic energy of the fluid. The turbine flow meter is a flow rate transducer that is widely used in metallurgical, petroleum, chemical, and other industrial and agricultural areas. It provides high-precision flow measurement: when fluid flows through it, the impeller facing the fluid rotates due to the force of the flow, and the rotation speed is directly proportional to the speed of the fluid. During operation, the working states of the impeller and bearing are complicated by the interacting effects of fluid axial thrust, impeller rotation, and static and dynamic components. In current turbine flow meter designs, the common material used for the meter body is 1Cr18Ni9Ti, while 2Cr13 is used for the blades; the shaft and bearing are made from stainless steel or carbide alloy. The clearance between the shaft and bearing determines the minimum flow rate and life span, and also the measurement range (a minimum-to-maximum flow rate ratio of about 1:10 to 1:15). Since the turbine has moving parts, friction arises between the shaft and bearing during operation, which degrades measurement accuracy and can damage the impeller blades. In this research, friction is reduced by adopting the principle of magnetic suspension: the rotating shaft levitates in a magnetic field and rotates without abrasion or mechanical contact, reducing the friction coefficient.
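
    Since the rotation speed (and hence the pulse frequency from a pickup sensor) is proportional to the fluid speed, converting the meter's output to a flow rate is a simple scaling by a calibration constant. The sketch below is illustrative only; the K-factor value is assumed, not taken from this research.

        # Convert turbine flow meter pulses to a volumetric flow rate.
        # The K-factor (pulses per litre) is an assumed calibration constant.
        K_FACTOR = 450.0

        def flow_rate_lpm(pulse_frequency_hz):
            """Volumetric flow rate in litres per minute from the pulse frequency."""
            return pulse_frequency_hz / K_FACTOR * 60.0

        # Example: a 150 Hz pulse train gives 150 / 450 * 60 = 20 litres per minute.
        print(flow_rate_lpm(150.0))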