2 research outputs found

    Mobile Robot Localization Based on OMNI Camera Images Using SURF Features

    Self-location detection, or self-localization, is one of the capabilities a mobile robot must possess. Self-localization is used to determine the robot's position in an area and serves as a reference for choosing the next direction of travel. In this research, robot localization was vision based, using images captured by a catadioptric-type omnidirectional camera. The number of closest feature matches between the 360° image captured by the omni camera and a reference image was the basis for predicting the location. Image features were extracted with the Speeded-Up Robust Features (SURF) method. The first contribution of this research is the optimization of detection accuracy by selecting appropriate values for the Hessian threshold and the maximum feature distance. The second contribution is the optimization of detection time using the proposed method, which matches against the features of only 3 reference images, chosen based on the previous detection result. For a trajectory with 28 reference images, this shortens the detection time by a factor of 8.72. The proposed method was tested on an omnidirectional mobile robot travelling through an area, and evaluated using recall, precision, accuracy, F-measure, G-measure, and detection time. Location detection was also tested with the SIFT method for comparison. Based on the tests, the proposed method performs better than SIFT, with a recall of 89.67%, accuracy of 99.59%, F-measure of 93.58%, G-measure of 93.87%, and a detection time of 0.365 seconds; SIFT is better only in precision, at 98.74%.
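    The abstract describes a feature-matching pipeline: extract SURF features from the current omni image, count close matches against candidate reference images, and restrict the candidates to 3 references around the previous detection. Below is a minimal sketch of that idea, assuming OpenCV with the contrib modules (SURF lives in cv2.xfeatures2d and may require a build with nonfree support enabled). The threshold values, function names, and the neighbour-selection rule are illustrative assumptions, not taken from the paper.

```python
import cv2

HESSIAN_THRESHOLD = 400      # assumed value; the paper tunes this for accuracy
MAX_FEATURE_DISTANCE = 0.25  # assumed cutoff on descriptor distance

surf = cv2.xfeatures2d.SURF_create(hessianThreshold=HESSIAN_THRESHOLD)
matcher = cv2.BFMatcher(cv2.NORM_L2)  # L2 norm suits SURF's float descriptors

def count_close_matches(query_desc, ref_desc):
    """Count matches whose descriptor distance falls below the cutoff."""
    matches = matcher.match(query_desc, ref_desc)
    return sum(1 for m in matches if m.distance < MAX_FEATURE_DISTANCE)

def predict_location(omni_image, ref_descriptors, prev_idx):
    """Predict a location index by matching only against the previously
    detected location and its two neighbours (the 3-reference-image idea)."""
    _, query_desc = surf.detectAndCompute(omni_image, None)
    n = len(ref_descriptors)
    candidates = [(prev_idx - 1) % n, prev_idx, (prev_idx + 1) % n]
    # The candidate reference with the most close feature matches wins.
    return max(candidates,
               key=lambda i: count_close_matches(query_desc, ref_descriptors[i]))
```

    Restricting the search to 3 candidates rather than all 28 references is what yields the reported ~8.72x speed-up: the per-frame matching cost scales with the number of reference descriptors compared.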

    Reliable and Fast Localization in Ambiguous Environments Using Ambiguity Grid Map

    In real-world robotic navigation, some ambiguous environments contain symmetrical or featureless areas that can cause perceptual aliasing in external sensors. As a result, uncorrected localization errors accumulate during the localization process, making it difficult to locate a robot in such situations. Using an ambiguity grid map (AGM), we address this problem by proposing a novel probabilistic localization method, referred to as AGM-based adaptive Monte Carlo localization. The AGM can evaluate the environmental ambiguity via an average ambiguity error and estimate the possible localization error at a given pose. Benefiting from the constructed AGM, our localization method is derived from an improved Dynamic Bayes network to reason about the robot's pose as well as the accumulated localization error. Moreover, a portal motion model is presented to achieve more reliable pose prediction without a time-consuming implementation, so the accumulated localization error can be corrected immediately when the robot moves through an ambiguous area. Simulation and real-world experiments demonstrate that the proposed method improves localization reliability while maintaining efficiency in ambiguous environments.
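    To make the AGM idea concrete, here is a minimal sketch assuming the map stores a per-cell average ambiguity error that a Monte Carlo localization filter consults during its motion update. All names and the noise-inflation rule are assumptions for illustration; the paper's actual method uses an improved Dynamic Bayes network and a portal motion model, neither of which is reproduced here.

```python
import numpy as np

class AmbiguityGridMap:
    """Grid map whose cells hold an estimated localization error (metres)."""
    def __init__(self, error_grid, resolution):
        self.error_grid = error_grid  # 2-D array of average ambiguity errors
        self.resolution = resolution  # cell size in metres

    def ambiguity_at(self, x, y):
        i, j = int(y / self.resolution), int(x / self.resolution)
        return self.error_grid[i, j]

def motion_update(particles, control, agm, base_noise=0.02, rng=np.random):
    """Propagate particles (N x 3 array of x, y, theta). Particles sitting in
    ambiguous cells receive proportionally larger noise, so the filter tracks
    the larger possible localization error in those areas."""
    dx, dy, dtheta = control
    for p in particles:
        # Inflate motion noise by the local ambiguity error of the cell.
        sigma = base_noise * (1.0 + agm.ambiguity_at(p[0], p[1]))
        p[0] += dx + rng.normal(0.0, sigma)
        p[1] += dy + rng.normal(0.0, sigma)
        p[2] += dtheta + rng.normal(0.0, sigma)
    return particles
```

    The design intuition, under these assumptions, is that widening the particle spread in ambiguous regions keeps the true pose inside the particle set until the robot reaches a distinctive area, at which point the measurement update can collapse the accumulated error at once.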