2 research outputs found

    Mobile Robot Localization Based on OMNI Camera Images Using SURF Features

    Self-location detection, or self-localization, is one of the capabilities a mobile robot must possess. It is used to determine the robot's position in an area and serves as a reference for choosing the next travel direction. In this research, robot localization was vision based, using images captured by a catadioptric-type omnidirectional camera. The number of closest matched features between the 360° image captured by the Omni camera and each reference image was the basis for predicting the location. Image features were extracted with the Speeded-Up Robust Features (SURF) method. The first contribution of this research is the optimization of detection accuracy by selecting appropriate values for the Hessian threshold and the maximum feature distance. The second contribution is the optimization of detection time using the proposed method, which matches against only the features of the 3 reference images indicated by the previous detection result. For a trajectory with 28 reference images, this optimization shortens detection time by a factor of 8.72. The proposed method was tested on an omnidirectional mobile robot moving in an area, with performance measured by recall, precision, accuracy, F-measure, G-measure, and detection time. Location detection was also tested with the SIFT method for comparison. Based on the tests, the proposed method outperforms SIFT with a recall of 89.67%, accuracy of 99.59%, F-measure of 93.58%, G-measure of 93.87%, and detection time of 0.365 seconds; SIFT is better only in precision, at 98.74%.
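The detection-time optimization described in the abstract, matching only against the 3 reference images around the previous detection instead of all 28, can be sketched as follows. This is a minimal illustration, not the paper's implementation: the SURF matching step is abstracted into a match-count function, and all names are hypothetical.

```python
def localize(match_count, num_refs, prev_idx=None):
    """Predict the robot's location as the index of the reference image
    with the most feature matches.

    match_count -- function mapping a reference index to its number of
                   matched features against the current 360-degree image
                   (stands in for SURF matching; illustrative only)
    num_refs    -- total number of reference images on the trajectory
    prev_idx    -- previously detected location, if any
    """
    if prev_idx is None:
        # no prior detection: search every reference image
        candidates = range(num_refs)
    else:
        # proposed speed-up: only the 3 references around the previous
        # detection, wrapping around a closed trajectory
        candidates = [(prev_idx + d) % num_refs for d in (-1, 0, 1)]
    return max(candidates, key=match_count)

# toy match counts for a 6-image trajectory
counts = [2, 3, 40, 40, 1, 0]
print(localize(counts.__getitem__, 6))              # full search
print(localize(counts.__getitem__, 6, prev_idx=1))  # windowed search
```

With a full search over N references the matching cost grows with N, while the windowed search stays constant at 3 comparisons, which is consistent with the reported 8.72x speed-up on a 28-image trajectory.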

    A sparse hybrid map for vision-guided mobile robots

    This paper introduces a minimalistic approach to producing a visual hybrid map of a mobile robot's working environment. The proposed system uses omnidirectional images along with odometry information to build an initial dense pose-graph map. A two-level hybrid map is then extracted from the dense graph. The hybrid map consists of global and local levels. The global level contains a sparse topological map extracted from the initial graph using a dual clustering approach. The local level contains a spherical view stored at each node of the global level. The spherical views provide both an appearance signature for the nodes, which the robot uses to localize itself in the environment, and heading information when the robot uses the map for visual navigation. To show the usefulness of the map, an experiment was conducted in which the map was used for multiple visual navigation tasks inside an office workplace.
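The two-level structure described above, a sparse topological graph at the global level with a spherical view attached to each node at the local level, can be sketched as a simple data structure. This is an illustrative outline only; all names are hypothetical and the paper's actual representation may differ.

```python
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class Node:
    """Global-level node: one place in the sparse topological map."""
    node_id: int
    spherical_view: str               # local level: appearance signature,
                                      # also used for heading during navigation
    neighbours: List[int] = field(default_factory=list)

@dataclass
class HybridMap:
    """Two-level hybrid map: topological graph over nodes with views."""
    nodes: Dict[int, Node]

    def add_edge(self, a: int, b: int) -> None:
        # undirected traversability edge between two places
        self.nodes[a].neighbours.append(b)
        self.nodes[b].neighbours.append(a)

# build a tiny 3-node map along a corridor
m = HybridMap({i: Node(i, spherical_view=f"view_{i}") for i in range(3)})
m.add_edge(0, 1)
m.add_edge(1, 2)
```

Localization then amounts to comparing the robot's current spherical view against the stored views to pick the nearest node, and navigation follows the neighbour links between nodes.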