6 research outputs found

    Regresi linier berbasis clustering untuk deteksi dan estimasi halangan pada smart wheelchair (Clustering-based linear regression for obstacle detection and distance estimation on a smart wheelchair)

    Get PDF
     This research proposes an approach for detecting obstacles and estimating their distance, applied to a smart wheelchair equipped with a camera and a line laser. The camera captures the line laser projected in front of the wheelchair, and obstacles on the wheelchair's path are recognized from the shape of the laser line in the captured image. The obstacle distance is estimated using linear regression; the model used in this work is a stepwise linear regression with k-Means clustering, which represents the correlation between the distance of the laser line in the image and the actual distance of the obstacle. In the experiments, the stepwise linear regression model with k-Means clustering gave better results, with an RMSE of 3.541 cm, than simple linear regression, with an RMSE of 5.367 cm.
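The clustered regression idea described in the abstract can be sketched as follows: a 1-D k-Means pass splits the calibration points by their laser-line position in the image, and a separate line is fitted per cluster, so the piecewise model can track a nonlinear pixel-to-distance mapping better than a single line. All function names and the synthetic calibration data are illustrative assumptions, not the paper's actual setup:

```python
import numpy as np

def kmeans_1d(x, k, iters=50, seed=0):
    """Plain 1-D k-Means: assign to nearest center, recompute means."""
    rng = np.random.default_rng(seed)
    centers = rng.choice(x, size=k, replace=False)
    for _ in range(iters):
        labels = np.argmin(np.abs(x[:, None] - centers[None, :]), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = x[labels == j].mean()
    return labels, centers

def fit_clustered_regression(px, cm, k=3):
    """Fit one line per k-Means cluster of image distances."""
    labels, centers = kmeans_1d(px, k)
    keep = [j for j in range(k) if np.sum(labels == j) >= 2]
    models = [np.polyfit(px[labels == j], cm[labels == j], 1) for j in keep]
    return centers[keep], models

def predict(px, centers, models):
    labels = np.argmin(np.abs(px[:, None] - centers[None, :]), axis=1)
    return np.array([np.polyval(models[j], v) for j, v in zip(labels, px)])

def rmse(a, b):
    return float(np.sqrt(np.mean((a - b) ** 2)))

# Synthetic calibration data: laser-line position in the image (pixels)
# vs. true obstacle distance (cm); the mapping is nonlinear, which is
# what makes the piecewise (clustered) fit pay off.
px = np.linspace(50, 400, 60)
cm = 20000.0 / px + np.random.default_rng(1).normal(0.0, 1.0, px.size)

centers, models = fit_clustered_regression(px, cm, k=3)
err_clustered = rmse(predict(px, centers, models), cm)
err_simple = rmse(np.polyval(np.polyfit(px, cm, 1), px), cm)
print(err_clustered < err_simple)  # piecewise fit tracks the curve better
```

On this synthetic data the clustered model beats the single-line fit for the same reason reported in the abstract: the image-to-world mapping is curved, and each cluster only has to approximate a short, nearly linear piece of it.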

    Field Obstacle Identification for Autonomous Tractor Applications

    Get PDF
    New technologies are being developed to meet the growing demand for agricultural products. Autonomous tractors are one of many solutions to address this demand. Obstacle detection and avoidance is an important consideration for the safe operation of any autonomous machine. Three field obstacles were chosen for identification in this thesis work: tractors, round bales, and center pivots. Little prior research was found on center pivot detection. The feasibility of using low-cost LIDARs was considered for the detection of tractors, bales, and agricultural center pivots. LIDAR performance was evaluated under different lighting conditions and obstacle colors, along with accuracy and angular resolution. It was found that low-cost LIDARs do not have a fine enough angular resolution to detect pivots at a distance sufficient to avoid them. Formulas were derived to help find the distance between successive steps of the LIDAR. Obstacle identification is also important so that proper corrective actions can be taken to avoid the obstacle. RGB cameras were used to aid in the detection of center pivots. SURF feature extraction and matching, the Viola-Jones algorithm, and edge detection with a shape-identification algorithm were tried, but none of these could adapt to more than one orientation or class of object. Obstacle identification using Convolutional Neural Networks (CNNs) was then pursued. Each obstacle class was trained individually first, and then all classes were combined to create a single object detector. Faster Region-based CNN (Faster R-CNN) with GoogLeNet was used, achieving high mean Average Precision (mAP). Advisor: Santosh Pitl
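The angular-resolution limitation mentioned above comes down to simple geometry: the lateral spacing between two adjacent beams grows linearly with range. The exact formula the thesis derives is not quoted in the abstract, so the chord-length version below is an assumption, but it illustrates why a thin center-pivot leg can slip between beams:

```python
import math

def beam_spacing(range_m, resolution_deg):
    """Chord length between two adjacent LIDAR beams at a given range.

    For small angles this is approximately the arc length
    range_m * radians(resolution_deg).
    """
    return 2.0 * range_m * math.sin(math.radians(resolution_deg) / 2.0)

# A low-cost LIDAR with 1-degree angular resolution at 30 m leaves gaps
# of roughly half a meter between returns -- wider than a typical
# center-pivot leg, so the leg can fall entirely between two beams.
gap = beam_spacing(30.0, 1.0)
print(round(gap, 3))  # ~0.524 m
```

The takeaway matches the thesis finding: to guarantee a hit on an object of width w at range d, the angular step must satisfy roughly 2*d*sin(step/2) < w, which low-cost units cannot meet at braking distance.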

    TractorEYE: Vision-based Real-time Detection for Autonomous Vehicles in Agriculture

    Get PDF
    Agricultural vehicles such as tractors and harvesters have for decades been able to navigate automatically and more efficiently using commercially available products such as auto-steering and tractor-guidance systems. However, a human operator is still required inside the vehicle to ensure the safety of the vehicle and especially of its surroundings, such as humans and animals. To get fully autonomous vehicles certified for farming, computer vision algorithms and sensor technologies must detect obstacles with performance equivalent to or better than human level. Furthermore, detection must run in real time to allow vehicles to actuate and avoid collision. This thesis proposes a detection system (TractorEYE), a dataset (FieldSAFE), and procedures to fuse information from multiple sensor technologies to improve obstacle detection and to generate a map. TractorEYE is a multi-sensor detection system for autonomous vehicles in agriculture. The multi-sensor system consists of three hardware-synchronized and registered sensors (stereo camera, thermal camera, and multi-beam lidar) mounted on/in a ruggedized and water-resistant casing. Algorithms have been developed to run a total of six detection algorithms (four for the RGB camera, one for the thermal camera, and one for the multi-beam lidar) and to fuse detection information in a common format using either 3D positions or Inverse Sensor Models. A GPU-powered computational platform is able to run the detection algorithms online. For the RGB camera, a deep learning algorithm, DeepAnomaly, is proposed to perform real-time anomaly detection of distant, heavily occluded, and unknown obstacles in agriculture. Compared to a state-of-the-art object detector, Faster R-CNN, DeepAnomaly is able to detect humans better and at longer ranges (45-90 m) in an agricultural use case, using a smaller memory footprint and 7.3-times faster processing.
    Its low memory footprint and fast processing make DeepAnomaly suitable for real-time applications running on an embedded GPU. FieldSAFE is a multi-modal dataset for the detection of static and moving obstacles in agriculture. The dataset includes synchronized recordings from an RGB camera, stereo camera, thermal camera, 360-degree camera, lidar, and radar. Precise localization and pose are provided using IMU and GPS. Ground truth for static and moving obstacles (humans, mannequin dolls, barrels, buildings, vehicles, and vegetation) is available as an annotated orthophoto, with GPS coordinates for moving obstacles. Detection information from multiple detection algorithms and sensors is fused into a map using Inverse Sensor Models and occupancy grid maps. This thesis presents many scientific contributions to the state of the art in perception for autonomous tractors, including a dataset, a sensor platform, detection algorithms, and procedures to perform multi-sensor fusion. Furthermore, important engineering contributions to autonomous farming vehicles are presented, such as easily applicable, open-source software packages and algorithms that have been demonstrated in an end-to-end real-time detection system. The contributions of this thesis have demonstrated, addressed, and solved critical issues in utilizing camera-based perception systems that are essential to make autonomous vehicles in agriculture a reality.
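The fusion step described above, combining per-sensor detections in an occupancy grid via inverse-sensor-model log odds, can be sketched in a few lines. The grid size and the per-detector probabilities below are illustrative assumptions, not the thesis's tuned values; the point is only that log-odds addition lets any number of heterogeneous detectors contribute to one map:

```python
import math

def logit(p):
    """Convert a probability to log odds."""
    return math.log(p / (1.0 - p))

class OccupancyGrid:
    """Log-odds occupancy grid; 0.0 log odds == p = 0.5 (unknown)."""
    def __init__(self, width, height):
        self.logodds = [[0.0] * width for _ in range(height)]

    def update(self, x, y, p_occupied):
        # Bayesian fusion: each detector's inverse-sensor-model output
        # is added in log-odds form, regardless of which sensor made it.
        self.logodds[y][x] += logit(p_occupied)

    def probability(self, x, y):
        return 1.0 / (1.0 + math.exp(-self.logodds[y][x]))

grid = OccupancyGrid(10, 10)
grid.update(3, 4, 0.8)  # e.g. a camera-based detector: likely obstacle
grid.update(3, 4, 0.7)  # e.g. a lidar-based detector agrees
print(round(grid.probability(3, 4), 2))  # agreement pushes belief past 0.9
```

Two moderately confident, agreeing detections yield a combined belief higher than either alone, which is exactly the benefit the thesis reports from fusing its six detectors into a common map.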

    A Practical Obstacle Detection System for Autonomous Orchard Vehicles

    No full text
    Abstract — Safe robot navigation in tree fruit orchards requires that the vehicle be capable of robustly navigating between rows of trees and turning from one aisle to another; that the vehicle be dynamically stable, especially when carrying workers; and that the vehicle be able to detect obstacles in its way and adjust its speed accordingly. In this paper we address the latter, in particular the problem of detecting people and apple bins in the aisles between rows. One of our requirements is that the obstacle avoidance subsystem shouldn't add to the robot's hardware cost, so as to keep the acquisition cost to growers as low as possible. Therefore, we confine ourselves to solutions that use only the sensor suite already installed on the robot for navigation: in our case, a laser scanner, a low-cost inertial measurement unit, and steering and wheel encoders. Our methodology is based on the classification and clustering of registered 3D points as obstacles. In the current implementation, obstacle avoidance takes in 3D point clouds collected in apple orchards and generates an off-line assessment of obstacle position. Tests conducted in our experimental orchard-like environment in Pittsburgh and in an actual apple orchard in Washington state indicate that the method is able to detect people and bins located along the vehicle path. Stretch tests indicate that it is also capable of dealing with objects as small as 15 cm tall, as long as they aren't covered by grass, and of detecting people crossing the aisles at walking speed.
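The classify-then-cluster pipeline above can be sketched under simplifying assumptions: registered points higher than a ground threshold (the abstract's 15 cm figure) are obstacle candidates, and nearby candidates are grouped into obstacle clusters by a naive single-linkage pass. The paper's own classifier is more involved; thresholds and data here are illustrative:

```python
def detect_obstacles(points, ground_z=0.0, min_height=0.15, link_dist=0.5):
    """Group above-ground 3D points (x, y, z) into obstacle clusters.

    A point joins any cluster containing a member within link_dist in the
    ground plane; clusters bridged by a point are merged.
    """
    candidates = [p for p in points if p[2] - ground_z > min_height]
    clusters = []
    for p in candidates:
        merged = None
        for c in clusters:
            near = any((p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2
                       <= link_dist ** 2 for q in c)
            if near:
                if merged is None:
                    c.append(p)
                    merged = c
                else:           # p bridges two clusters: merge them
                    merged.extend(c)
                    c.clear()
        clusters = [c for c in clusters if c]
        if merged is None:
            clusters.append([p])
    return clusters

# Two obstacle-sized point groups plus low grass clutter below threshold:
pts = [(0.0, 0.0, 1.6), (0.1, 0.0, 1.2),   # person
       (5.0, 2.0, 0.9), (5.1, 2.1, 0.4),   # apple bin
       (2.0, 1.0, 0.05), (3.0, 0.5, 0.1)]  # grass, ignored
print(len(detect_obstacles(pts)))  # 2
```

The grass points fall below the height threshold, which mirrors the paper's caveat that small objects are only detected when not covered by grass.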

    Lidar-based Obstacle Detection and Recognition for Autonomous Agricultural Vehicles

    Get PDF
    Today, agricultural vehicles are available that can drive autonomously and follow exact route plans more precisely than human operators. Combined with advancements in precision agriculture, autonomous agricultural robots can reduce manual labor, improve workflow, and optimize yield. However, as of today, human operators are still required for monitoring the environment and acting upon potential obstacles in front of the vehicle. To eliminate this need, safety must be ensured by accurate and reliable obstacle detection and avoidance systems. In this thesis, lidar-based obstacle detection and recognition in agricultural environments has been investigated. A rotating multi-beam lidar generating 3D point clouds was used for point-wise classification of agricultural scenes, while multi-modal fusion with cameras and radar was used to increase performance and robustness. Two research perception platforms were presented and used for data acquisition. The proposed methods were all evaluated on recorded datasets representing a wide range of realistic agricultural environments, including both static and dynamic obstacles. For 3D point cloud classification, two methods were proposed for handling density variations during feature extraction. One method outperformed a frequently used generic 3D feature descriptor, whereas the other showed promising preliminary results using deep learning on 2D range images. For multi-modal fusion, four methods were proposed for combining lidar with a color camera, a thermal camera, and radar. Gradual improvements in classification accuracy were seen as spatial, temporal, and multi-modal relationships were introduced into the models. Finally, occupancy grid mapping was used to fuse and map detections globally, and runtime obstacle detection was applied to mapped detections along the vehicle path, thus simulating an actual traversal. The proposed methods serve as a first step towards full autonomy for agricultural vehicles.
    The study has thus shown that recent advancements in autonomous driving can be transferred to the agricultural domain when accurate distinctions are made between obstacles and processable vegetation. Future research in the domain has further been facilitated with the release of the multi-modal obstacle dataset, FieldSAFE.
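The 2D range-image representation mentioned above, which lets 2D deep-learning methods run on 3D lidar data, amounts to binning each point by azimuth and elevation and storing its range. The bin counts and field of view below are illustrative assumptions, not the thesis's sensor parameters:

```python
import math

def to_range_image(points, h_bins=360, v_bins=16, v_fov=(-15.0, 15.0)):
    """Project 3D points (x, y, z) onto an elevation-by-azimuth range image."""
    img = [[0.0] * h_bins for _ in range(v_bins)]
    v_lo, v_hi = v_fov
    for x, y, z in points:
        r = math.sqrt(x * x + y * y + z * z)
        if r == 0.0:
            continue
        az = math.degrees(math.atan2(y, x)) % 360.0
        el = math.degrees(math.asin(z / r))
        u = min(int(az / 360.0 * h_bins), h_bins - 1)
        v = min(max(int((el - v_lo) / (v_hi - v_lo) * v_bins), 0), v_bins - 1)
        img[v][u] = r  # last return wins; closest-wins is also common
    return img

# A point 10 m straight ahead lands in the middle elevation row, column 0:
img = to_range_image([(10.0, 0.0, 0.0), (0.0, 5.0, 0.5)])
print(img[8][0])  # 10.0
```

Once in this form, the scan is a dense 2D "image" whose pixel value is range, so convolutional architectures built for camera images can be applied with little modification, which is the appeal of the approach noted in the abstract.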