4 research outputs found

    Wheelchair Robot System Based on Road Detection Using Genetic Algorithms and Deep Learning (遺伝的アルゴリズムと深層学習を用いた道路検出に基づく車いすロボットシステム)

    University of Toyama (富山大学) · 富理工博甲第155号 · K.M. IBRAHIM KHALILULLAH · 2019/03/26

    Deep Learning Based Methods for Outdoor Robot Localization and Navigation

    The number of elderly people is increasing around the globe. To support the growing ageing society, mobile robots are one viable option for assisting the elderly in their daily activities, which take place both indoors and outdoors. Although outdoor activities benefit the elderly in many ways, outdoor environments pose difficulties because of their unpredictable nature. Mobile robots that support humans outdoors must automatically traverse these difficulties using suitable navigation systems.

    Navigation is a core component of any mobile robot: the navigation system guides the robot to the destination where it performs its designated tasks. Various tools can be chosen for navigation, and outdoor environments are mostly open to conventional tools such as Global Positioning System (GPS) devices. This thesis proposes three systems for the localization and navigation of mobile robots based on visual data and deep learning algorithms.

    The first system localizes the robot through landmark detection. A Faster Regional-Convolutional Neural Network (Faster R-CNN) detects landmarks and signs in the captured image, and a Feed-Forward Neural Network (FFNN) is trained to estimate the robot's location coordinates and compass orientation from the detected landmarks. The dataset consists of images, geolocation data, and labeled bounding boxes used to train and test the two proposed localization methods. Results are reported as absolute errors between the localization outputs and the reference geolocation data in the dataset.

    The second system is a navigation system based on visual data and a deep reinforcement learning algorithm, the Deep Q Network (DQN). The DQN automatically guides the mobile robot using images received from the single Universal Serial Bus (USB) camera attached to the robot. A DQN combines a convolutional neural network (CNN) with the Q-Learning reinforcement learning algorithm, so it can make decisions from visual input using experience gained from the consequences of trial-and-error attempts. Our DQN agents are trained in simulation environments provided by ViZDoom, a platform based on a First-Person Shooter (FPS) game; training in simulation avoids possible damage to the real robot during the trial-and-error process. The simulated perspective is the same as that of a camera attached to the front of the mobile robot. Because the simulation and the real world still differ in many ways, we apply a marker-based Augmented Reality (AR) algorithm that alters the camera images with resources from the simulation to reduce the gap.

    The second system is assigned a simple navigation task in which the starting location is fixed but the goal location is random within a designated zone. The robot must detect and track the goal object using the USB camera as its only sensor and, once started, move from its starting location to the goal object. Our DQN navigation method is tested in the simulation and on the real robot, and its performance is measured quantitatively through average total scores and the number of successful navigation attempts. The results show that our DQN can effectively guide a mobile robot to the goal object in the simple navigation task, both in the simulation and in the real world.
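    As a rough illustration of the core pieces of the second system, the sketch below pairs a small CNN Q-network with epsilon-greedy action selection and a single Q-Learning update step in PyTorch. The layer sizes, the three-action space, the 84x84 RGB input, and the omission of a separate target network are assumptions made for brevity, not settings taken from the thesis.

        import random

        import torch
        import torch.nn as nn
        import torch.nn.functional as F

        class QNetwork(nn.Module):
            """CNN that maps an 84x84 RGB camera image to one Q-value per action."""
            def __init__(self, n_actions: int = 3):
                super().__init__()
                self.conv = nn.Sequential(
                    nn.Conv2d(3, 16, kernel_size=8, stride=4), nn.ReLU(),   # 84 -> 20
                    nn.Conv2d(16, 32, kernel_size=4, stride=2), nn.ReLU(),  # 20 -> 9
                )
                self.head = nn.Sequential(
                    nn.Flatten(),
                    nn.Linear(32 * 9 * 9, 256), nn.ReLU(),
                    nn.Linear(256, n_actions),
                )

            def forward(self, x):
                return self.head(self.conv(x))

        def select_action(net, state, epsilon, n_actions=3):
            """Epsilon-greedy policy: take a random action with probability epsilon."""
            if random.random() < epsilon:
                return random.randrange(n_actions)
            with torch.no_grad():
                return int(net(state.unsqueeze(0)).argmax(dim=1).item())

        def q_learning_step(net, optimizer, batch, gamma=0.99):
            """One Q-Learning update on a batch of (s, a, r, s', done) transitions."""
            states, actions, rewards, next_states, dones = batch
            q = net(states).gather(1, actions.unsqueeze(1)).squeeze(1)
            with torch.no_grad():
                target = rewards + gamma * net(next_states).max(dim=1).values * (1 - dones)
            loss = F.smooth_l1_loss(q, target)
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
            return loss.item()

    In the setup described above, the state fed to the network would be the USB camera frame after the marker-based AR alteration, so that real-world input more closely resembles what the agent saw in the ViZDoom simulation.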
    The third system employs a Transfer Learning (TL) strategy to reduce the training time and resources required when a new task is added for a DQN agent. The new task is to reach the goal while also avoiding obstacles, and both the starting and goal locations are random within specified areas. The TL strategy uses the whole network of the DQN agent trained for the first, simple navigation task as the base for training the DQN agent for the second task. Training in our TL strategy decreases the exploration factor, which makes the agent rely on the existing knowledge in the base network more than on randomly selected actions (a minimal sketch of this setup follows this entry). As a result, training time decreases and optimal solutions are found faster than when training from scratch.

    We evaluate our TL strategy by comparing DQN agents trained with TL at different exploration factor values against a DQN agent trained from scratch. In addition, the TL agents are trained with a reduced number of episodes to demonstrate their performance more thoroughly. All DQN agents for the second navigation task are tested in the simulation to avoid possible, uncontrollable damage from the obstacles. Performance is measured through successful attempts and average total scores, as in the first navigation task. Results show that the DQN agents trained via the TL strategy greatly outperform the agent trained from scratch, despite the lower number of training episodes.

    Doctor of Engineering (博士(工学)), Hosei University (法政大学)
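    The sketch below shows one way the reduced exploration factor described above could be set up: the whole first-task network is cloned as the starting point for the second task, and epsilon starts well below the value a scratch-trained agent would normally use. The specific epsilon values and the decay schedule are illustrative assumptions, not figures from the thesis.

        import copy

        def start_transfer_training(base_net, epsilon_start=0.3, epsilon_min=0.05,
                                    epsilon_decay=0.995):
            """Clone the first-task DQN and pair it with a reduced exploration schedule.

            An agent trained from scratch would typically start near epsilon = 1.0;
            starting lower means most early actions come from the transferred network
            rather than from random exploration, which is what shortens training.
            """
            net = copy.deepcopy(base_net)  # whole base network reused as the starting point
            epsilon = epsilon_start

            def next_epsilon(eps):
                # Decay toward epsilon_min after each training episode.
                return max(epsilon_min, eps * epsilon_decay)

            return net, epsilon, next_epsilon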

    Bowdoin Orient v.138, no.1-25 (2008-2009)
