2 research outputs found

    Automatic Detection of Wrecked Airplanes from UAV Images

    Searching for the crash site of a missing airplane is the first step a search and rescue team takes before rescuing the victims. However, the vast exploration area, limited technology, lack of access roads, and rough terrain make the search process nontrivial and thus delay the handling of victims. This paper therefore develops an automatic wrecked-airplane detection system using visual information from aerial images, such as those captured by a UAV camera. A new deep network is proposed to robustly distinguish wrecked airplanes, which exhibit high variation in pose, scale, and color and are highly deformable. The network leverages its last layers to capture more abstract, semantic information for robust wrecked-airplane detection, and is extended by adding extra layers connected at the end. To reduce missed detections, which are critical for this application, each image is decomposed into five patches that are fed forward through the network in a convolutional manner. Experiments show that the proposed method reaches AP = 91.87%, and we believe it could help search and rescue teams accelerate the search for wrecked airplanes and thus reduce the number of victims.
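The five-patch decomposition mentioned in the abstract can be sketched as follows. This is a minimal illustration, not the paper's implementation: the patch size fraction and the `detector` callable are assumptions, standing in for the proposed network.

```python
import numpy as np

def five_patch_crops(img, patch_frac=0.6):
    """Decompose an image into five overlapping patches (four corners
    plus the centre). patch_frac is an assumed parameter: each patch
    covers this fraction of the image's height and width."""
    h, w = img.shape[:2]
    ph, pw = int(h * patch_frac), int(w * patch_frac)
    offsets = [
        (0, 0),                          # top-left
        (0, w - pw),                     # top-right
        (h - ph, 0),                     # bottom-left
        (h - ph, w - pw),                # bottom-right
        ((h - ph) // 2, (w - pw) // 2),  # centre
    ]
    return [(img[y:y + ph, x:x + pw], (y, x)) for y, x in offsets]

def detect_over_patches(img, detector):
    """Run a detector on every patch and shift each box back into
    full-image coordinates. `detector` is a hypothetical stand-in for
    the paper's network: it maps a patch to a list of
    (y0, x0, y1, x1, score) boxes in patch coordinates."""
    merged = []
    for patch, (oy, ox) in five_patch_crops(img):
        for (y0, x0, y1, x1, score) in detector(patch):
            merged.append((y0 + oy, x0 + ox, y1 + oy, x1 + ox, score))
    return merged
```

Running the detector on each patch at full resolution is what reduces missed detections on small objects, at the cost of five forward passes per image.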

    Path Planning Based On Deep Reinforcement Learning In Human-Robot Collaboration

    Visual navigation is required for many robotics applications, from mobile robotics for motion manipulation to automated driving. One of the most frequently used visual-navigation techniques is path planning, which finds a valid sequence of configurations to move from a start point to a destination. Deep reinforcement learning (DRL) provides a trainable, mapless approach by integrating path planning, localization, and image processing in a single module, so the approach can be optimized for a particular environment. However, DRL-based navigation has mostly been validated in simple, modestly sized simulation environments. This research therefore develops a new visual-navigation architecture based on deep reinforcement learning. A realistic simulation framework was designed that resembles a room containing several object models. Agents in the simulator learn path planning via deep reinforcement learning, supported by an A2C network, an LSTM, and auxiliary tasks. The method was evaluated in the simulation framework over 10 runs, each carried out in 1000 randomly generated environments. Training takes around 18 hours on a single GPU. In the larger simulation environment, the method achieves a 99.81% success rate in finding the target specified by a given image. These results make the proposed method applicable to a wider range of environments, and the approach can be used for human-robot collaboration.
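At the core of the A2C training loop mentioned above is the computation of discounted returns and advantages from a rollout. The sketch below shows that standard computation only; the discount factor is an assumption (the abstract does not state hyperparameters), and the LSTM policy and auxiliary tasks are omitted.

```python
import numpy as np

def a2c_targets(rewards, values, bootstrap_value, gamma=0.99):
    """Compute n-step discounted returns and advantages as used by
    A2C-style agents. gamma is an assumed discount factor.

    rewards: per-step rewards from one rollout, shape (T,)
    values:  critic estimates V(s_t) for the same steps, shape (T,)
    bootstrap_value: V(s_T) for the state after the rollout ends
    """
    T = len(rewards)
    returns = np.empty(T)
    running = bootstrap_value
    for t in reversed(range(T)):
        running = rewards[t] + gamma * running   # R_t = r_t + gamma * R_{t+1}
        returns[t] = running
    advantages = returns - np.asarray(values, dtype=float)  # A_t = R_t - V(s_t)
    return returns, advantages
```

The actor is then updated to increase the log-probability of actions weighted by these advantages, while the critic regresses its value estimates toward the returns.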