An enhanced classifier system for autonomous robot navigation in dynamic environments
In many cases, a real robot application requires navigation in dynamic environments. The navigation problem involves two main tasks: avoiding obstacles and reaching a goal. Generally, this problem can be addressed with reactions and sequences of actions, so solving it requires a complete controller that includes both. Machine learning techniques have been applied to learn these controllers, and Classifier Systems (CS) have proven their ability to learn continuously in such domains. However, CS have some problems in reactive systems. In this paper, a modified CS is proposed to overcome these problems. Two special mechanisms are included in the developed CS to allow the learning of both reactions and sequences of actions. The learning process has been divided into two main tasks: first, discriminating among a predefined set of rules, and second, discovering new rules to obtain successful operation in dynamic environments. Different experiments have been carried out using a mini-robot Khepera to find a generalised solution. The results show the system's ability to learn continuously and adapt to new situations.
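The two-part learning process, discriminating among predefined rules and discovering new ones, can be pictured with a toy classifier system. The sketch below is not the paper's actual CS: the bit-string rule encoding, initial strengths, and Widrow-Hoff-style reinforcement are illustrative assumptions, and it shows only the discrimination step (rule discovery, e.g. via a genetic algorithm, is omitted).

```python
class Rule:
    """A condition-action rule; '#' in the condition matches any bit."""
    def __init__(self, condition, action, strength=10.0):
        self.condition = condition
        self.action = action
        self.strength = strength

    def matches(self, state):
        return all(c in ("#", s) for c, s in zip(self.condition, state))

class MiniClassifierSystem:
    """Toy classifier system: selects among matching rules by strength
    and reinforces the winner. Rule discovery is omitted; only the
    discrimination step among predefined rules is shown."""

    def __init__(self, rules, learning_rate=0.2):
        self.rules = rules
        self.lr = learning_rate

    def act(self, state):
        matching = [r for r in self.rules if r.matches(state)]
        if not matching:
            return None
        # Discrimination: the strongest matching rule wins.
        return max(matching, key=lambda r: r.strength)

    def reinforce(self, rule, reward):
        # Move the rule's strength toward the received reward
        # (Widrow-Hoff-style update; an illustrative assumption).
        rule.strength += self.lr * (reward - rule.strength)
```

Repeatedly reinforcing a winning rule with zero reward drives its strength down until a competing rule takes over, which is the discrimination behavior in miniature.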
Obstacle Avoidance Problem for Second Degree Nonholonomic Systems
In this paper, we propose a new control design scheme for solving the
obstacle avoidance problem for nonlinear driftless control-affine systems. The
class of systems under consideration satisfies controllability conditions with
iterated Lie brackets up to the second order. The time-varying control strategy
is defined explicitly in terms of the gradient of a potential function. It is
shown that the limit behavior of the closed-loop system is characterized by the
set of critical points of the potential function. The proposed control design
method can be used under rather general assumptions on potential functions, and
particular applications with navigation functions are illustrated by numerical
examples.
Comment: This is the author's accepted version of the paper to appear in: 2018 IEEE Conference on Decision and Control (CDC), (c) IEEE
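As a rough illustration of gradient-based navigation with a potential function (not the paper's time-varying controller, which steers a nonholonomic driftless system through iterated Lie brackets), the sketch below descends a potential with a fully actuated single integrator; the attractive and repulsive terms are illustrative assumptions.

```python
import numpy as np

def potential(q, goal, obstacles, k_att=1.0, k_rep=1.0):
    """Attractive quadratic well at the goal plus a repulsive term
    at each (hypothetical) point obstacle."""
    U = 0.5 * k_att * np.sum((q - goal) ** 2)
    for obs in obstacles:
        d = np.linalg.norm(q - obs)
        U += k_rep / max(d, 1e-9)
    return U

def gradient(q, goal, obstacles, eps=1e-6):
    """Numerical gradient of the potential (central differences)."""
    g = np.zeros_like(q)
    for i in range(len(q)):
        dq = np.zeros_like(q)
        dq[i] = eps
        g[i] = (potential(q + dq, goal, obstacles)
                - potential(q - dq, goal, obstacles)) / (2 * eps)
    return g

def navigate(q0, goal, obstacles, step=0.05, iters=500):
    """Steepest descent on the potential. In the paper the descent
    direction is realized through time-varying controls respecting the
    nonholonomic constraints; here it is applied directly to a fully
    actuated single integrator."""
    q = np.array(q0, dtype=float)
    goal = np.asarray(goal, dtype=float)
    for _ in range(iters):
        q -= step * gradient(q, goal, obstacles)
    return q
```

Consistent with the abstract, the limit behavior of such a descent is characterized by the critical points of the potential: with no obstacles the only critical point is the goal itself.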
Indoor Navigation System for the Visually Impaired with User-centric Graph Representation and Vision Detection Assistance
Independent navigation through unfamiliar indoor spaces is beset with barriers for the visually impaired, impairing their independence, self-respect and self-reliance. In this thesis I introduce a new indoor navigation system for the blind and visually impaired that is affordable for both the user and the building owners.
Outdoor vehicle navigation challenges have been solved using location information provided by Global Positioning Systems (GPS) and maps using Geographical Information Systems (GIS). However, GPS and GIS information is not available for indoor environments, making indoor navigation a challenging technical problem. Moreover, an indoor navigation system needs to be developed with the blind user in mind, i.e., special care must be given to a vision-free user interface.
In this project, I design and implement an indoor navigation application for the blind and visually impaired that uses RFID technology and Computer Vision for localization, together with a navigation map generated automatically from environmental landmarks by simulating a user's behavior. The focus of the indoor navigation system is no longer only on the indoor environment itself, but on the way blind users can experience it. This project tries this new idea in solving indoor navigation problems for blind and visually impaired users.
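A user-centric landmark graph of the kind described can be queried with a standard shortest-path search to produce a route to announce to the user. The sketch below is an illustrative assumption, not the thesis's implementation: the landmark names, distances, and the choice of Dijkstra's algorithm are made up for demonstration.

```python
import heapq

# Hypothetical landmark graph: node -> list of (neighbor, walking distance in m).
# The thesis generates such a graph automatically from environmental landmarks;
# these names and weights are invented for illustration.
GRAPH = {
    "entrance":  [("elevator", 12.0), ("hallway_a", 8.0)],
    "hallway_a": [("entrance", 8.0), ("room_101", 5.0)],
    "elevator":  [("entrance", 12.0), ("room_101", 15.0)],
    "room_101":  [("hallway_a", 5.0), ("elevator", 15.0)],
}

def shortest_route(graph, start, goal):
    """Dijkstra's algorithm over the landmark graph, returning the
    sequence of landmarks from start to goal."""
    dist = {start: 0.0}
    prev = {}
    heap = [(0.0, start)]
    visited = set()
    while heap:
        d, node = heapq.heappop(heap)
        if node in visited:
            continue
        visited.add(node)
        if node == goal:
            break
        for nbr, w in graph.get(node, []):
            nd = d + w
            if nd < dist.get(nbr, float("inf")):
                dist[nbr] = nd
                prev[nbr] = node
                heapq.heappush(heap, (nd, nbr))
    # Walk the predecessor chain back from the goal to the start.
    path = [goal]
    while path[-1] != start:
        path.append(prev[path[-1]])
    return list(reversed(path))
```

In a vision-free interface, each edge traversal would be turned into a spoken instruction between consecutive landmarks rather than a map display.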
Autonomous Navigation for an Unmanned Aerial Vehicle by the Decomposition Coordination Method
This paper introduces a new approach for solving the navigation problem of Unmanned Aerial Vehicles (UAVs) by studying their rotational and translational dynamics and then solving the nonlinear model by the Decomposition Coordination method. The objective is to reach a destination by means of an autonomously computed optimal path, calculated through an optimal control sequence. Solving such complex systems often requires a great amount of computation. However, the approach considered herein is based on the Decomposition Coordination principle, which allows the nonlinearity to be treated at a local level, thus offering a low computing time. The stability of the method is discussed with sufficient conditions for convergence. A numerical application is given to consolidate the theoretical results.
A tesselated probabilistic representation for spatial robot perception and navigation
The ability to recover robust spatial descriptions from sensory information and to efficiently utilize these descriptions in appropriate planning and problem-solving activities are crucial requirements for the development of more powerful robotic systems. Traditional approaches to sensor interpretation, with their emphasis on geometric models, are of limited use for autonomous mobile robots operating in and exploring unknown and unstructured environments. Here, researchers present a new approach to robot perception that addresses such scenarios using a probabilistic tesselated representation of spatial information called the Occupancy Grid. The Occupancy Grid is a multi-dimensional random field that maintains stochastic estimates of the occupancy state of each cell in the grid. The cell estimates are obtained by interpreting incoming range readings using probabilistic models that capture the uncertainty in the spatial information provided by the sensor. A Bayesian estimation procedure allows the incremental updating of the map using readings taken from several sensors over multiple points of view. An overview of the Occupancy Grid framework is given, and its application to a number of problems in mobile robot mapping and navigation is illustrated. It is argued that a number of robotic problem-solving activities can be performed directly on the Occupancy Grid representation. Some parallels are drawn between operations on Occupancy Grids and related image processing operations.
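The incremental Bayesian updating described above is commonly carried out in log-odds form, where fusing an independent reading reduces to an addition per cell. The following is a minimal sketch under a cell-independence assumption, not the paper's exact formulation; the inverse sensor model is abbreviated to a single occupancy probability per reading.

```python
import numpy as np

def log_odds(p):
    """Convert a probability to log-odds form."""
    return np.log(p / (1.0 - p))

class OccupancyGrid:
    """Minimal 2-D occupancy grid with incremental Bayesian updating.

    Each cell stores the log-odds of being occupied; independent
    sensor readings are fused by addition, which is the log-odds
    form of Bayes' rule under a cell-independence assumption.
    """

    def __init__(self, rows, cols, p_prior=0.5):
        self.grid = np.full((rows, cols), log_odds(p_prior))

    def update(self, cell, p_occupied_given_reading):
        """Fold one interpreted range reading into a cell estimate."""
        r, c = cell
        self.grid[r, c] += log_odds(p_occupied_given_reading)

    def probability(self, cell):
        """Recover the posterior occupancy probability of a cell."""
        r, c = cell
        return 1.0 / (1.0 + np.exp(-self.grid[r, c]))
```

Readings from several sensors and viewpoints all funnel through the same additive update, which is what makes the multi-view fusion in the abstract incremental.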
Man-Computer Problem Solving in Real-Time Naval Duels
The development of a new Man-Computer Problem Solving Methodology to be widely and effectively applied by the Navy has been the objective of this Research Project. The basic hypothesis that has been examined is as follows: if an interactive system were available with which a human problem solver could quickly and easily put together a simulation of the problem, test various solutions, evaluate them, and then further improve the solution, then large-scale economies and improved effectiveness would result. The research reported here may be considered to have taken the empirical approach. An experimental environment was selected, namely a Naval War. An interactive problem solving computer system was designed for this environment. Obtaining an indication of the effectiveness of the system required the solution of problems in human engineering, computational methods and strategy in the areas of tracking and navigation, sonar applications and processing, and weapon application. New real-time interactive systems were incorporated to simplify the evolution of new problem solving methodologies.
Parallel processing and expert systems
Whether it be monitoring the thermal subsystem of Space Station Freedom, or controlling the navigation of the autonomous rover on Mars, NASA missions in the 1990s cannot enjoy an increased level of autonomy without the efficient implementation of expert systems. Merely increasing the computational speed of uniprocessors may not be able to guarantee that real-time demands are met for larger systems. Speedup via parallel processing must be pursued alongside the optimization of sequential implementations. Prototypes of parallel expert systems have been built at universities and industrial laboratories in the U.S. and Japan. The state-of-the-art research in progress related to parallel execution of expert systems is surveyed. The survey discusses multiprocessors for expert systems, parallel languages for symbolic computations, and mapping expert systems to multiprocessors. Results to date indicate that the parallelism achieved for these systems is small. The main reasons are (1) the body of knowledge applicable in any given situation and the amount of computation executed by each rule firing are small, (2) dividing the problem solving process into relatively independent partitions is difficult, and (3) implementation decisions that enable expert systems to be incrementally refined hamper compile-time optimization. In order to obtain greater speedups, data parallelism and application parallelism must be exploited.
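The closing recommendation, exploiting data parallelism rather than rule parallelism, can be sketched by partitioning working memory across workers while replicating the small rule set. The rules, predicates, and thread-pool scheme below are illustrative assumptions, not any surveyed system.

```python
from concurrent.futures import ThreadPoolExecutor

# Hypothetical rules: each is a predicate over one working-memory element.
RULES = {
    "high_temp": lambda wme: wme.get("sensor") == "thermal" and wme.get("value", 0) > 80,
    "low_power": lambda wme: wme.get("sensor") == "battery" and wme.get("value", 100) < 20,
}

def match_partition(partition):
    """Match every rule against one partition of working memory.
    This is the data-parallel strategy: partition the data,
    not the (small) rule set."""
    return [(name, wme) for wme in partition
            for name, pred in RULES.items() if pred(wme)]

def parallel_match(working_memory, n_workers=4):
    """Split working memory into roughly equal partitions and match
    each partition on its own worker, then merge the conflict set."""
    parts = [working_memory[i::n_workers] for i in range(n_workers)]
    with ThreadPoolExecutor(max_workers=n_workers) as pool:
        results = pool.map(match_partition, parts)
    return [m for part in results for m in part]
```

Because each rule firing does little work, the payoff comes from the size of working memory, which echoes reason (1) in the survey: with small data and small rule bodies, the achievable parallelism stays small.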
Fusion-SLAM by combining RGB-D SLAM and Rat SLAM : a thesis presented in partial fulfilment of the requirements for the degree of Master of Engineering in Mechatronics at Massey University, Albany, New Zealand
Robotic Simultaneous Localization and Mapping (SLAM) is the problem of creating a map of the environment while localizing the robot within the map being created. This presents a causality dilemma: the map needs to exist in order to localize the robot, but the robot also needs to be localized in order to create the map. Past research offers many solutions to this problem, ranging from Extended Kalman Filter (EKF) approaches to Graph SLAM systems. There has also been extensive research into bio-inspired methods, such as RatSLAM, implemented in aerial and land-based robots. The different research setups use sensors such as Time-of-Flight (ToF) devices, e.g. laser scanners, and passive devices, e.g. cameras. Over the past few years a new type of combined apparatus, the Kinect, has been developed by Microsoft. It combines active and passive sensing elements and aligns the data in a way that allows for efficient implementation in robotic systems. This has led to the Kinect being used in much new research, mostly around RGB-D SLAM. However, these methods generally require a continuous stream of images and become inaccurate when exposed to ambiguous environments.
This thesis presents the design and implementation of a fusion algorithm to solve the robotic SLAM problem. The study starts by analysing existing methods to determine what research has been done. It then introduces the components used in this study and the Fusion Algorithm. The algorithm incorporates the colour and depth data extraction and manipulation methods used in the RGB-D SLAM system while also implementing a mapping step similar to the grid-cell and firing-field functions found in RatSLAM. This method addresses RGB-D SLAM's weaknesses of requiring a continuous image stream and failing on ambiguous images. An experiment is then conducted on the developed system to determine the extent to which it has solved the SLAM problem. Moreover, the success rate for finding a node in a cell and matching its pose is also investigated.
In conclusion, this research presents a novel algorithm for successfully solving the robotic SLAM problem. The proposed algorithm also helps improve the system's efficiency in navigation, odometry error correction, and scan-matching vulnerabilities in feature-sparse views.