
    Hybrid genetic algorithm and particle filter optimization model for simultaneous localization and mapping problems

    Determining the position of a robot and the positions of objects of interest on a map in unknown environments such as underwater, other planets, and areas affected by natural disasters has driven the development of efficient algorithms for Simultaneous Localization and Mapping (SLAM). Current SLAM solutions have drawbacks. Solutions based on the Extended Kalman Filter (EKF) are limited by non-linear models and non-Gaussian errors, which reduce accuracy, while solutions based on the particle filter suffer from high memory and time complexity. Another major approach to SLAM is based on Evolutionary Algorithms (EA). The main advantage of an EA is that it can search very large spaces with good convergence; its disadvantage is high time and computational complexity. This thesis proposes two optimization models for the SLAM problem, namely the Hybrid Optimization Model (HOM) and the Line-Based Genetic Algorithm Optimization Model (LBGAOM). These models avoid the limitations of the EKF, the memory complexity of the particle filter, and the search-space disadvantages of the EA. Compared with the original EA, the HOM showed an increase in accuracy under the presented fitness function: the best fitness of the original EA was 16.36, whereas the HOM reached 16.68. Both models apply a newly proposed representation model that describes the robot and its environment and is based on an occupancy grid and a genetic algorithm. Two representation models are proposed, namely Layer 1 and Layer 2; for each layer a corresponding fitness function is created to evaluate the accuracy of the map, and each was tested with several parameter settings. The proposed HOM combines a genetic algorithm with a particle filter by introducing a new mutation model inspired by the particle filter, so the search space is reduced and only suitable regions are explored based on the proposed functions. The proposed LBGAOM is a new optimization model based on extracting lines from laser sensor data to increase speed: the search space of the map becomes a set of lines instead of individual pixels, which makes the search faster. The evaluation of the proposed representation model shows that Layer 2 achieves a better fitness value than Layer 1, the HOM outperforms the original GA on Layer 1, and the LBGAOM reduces the search space compared with the pixel-based model. In conclusion, the proposed optimization models solve the SLAM problem with good performance in terms of both speed and accuracy.
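
    As a rough illustration of the hybrid idea described above, the sketch below evolves candidate robot poses with a genetic algorithm whose mutation step resamples and jitters candidates the way a particle filter would. The toy fitness function, noise levels, and pose representation are assumptions for the example, not the thesis implementation.

```python
# Minimal sketch: a genetic algorithm over robot poses (x, y, theta) whose mutation
# step is particle-filter-inspired: candidates are resampled in proportion to their
# fitness and then jittered with motion-like noise. The fitness below is a placeholder.
import numpy as np

rng = np.random.default_rng(0)

TRUE_POSE = np.array([25.0, 25.0, 0.0])  # hidden pose the search should recover


def fitness(pose):
    """Score a candidate pose (toy: closeness to the true pose).

    A real SLAM fitness would instead compare the laser scan predicted from the
    pose and an occupancy-grid map against the actual scan."""
    return -np.linalg.norm(pose[:2] - TRUE_POSE[:2]) - abs(pose[2] - TRUE_POSE[2])


def pf_inspired_mutation(population, scores, noise=(0.5, 0.5, 0.05)):
    """Particle-filter-style step: resample candidates proportionally to fitness,
    then jitter the copies with motion-like noise."""
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()
    idx = rng.choice(len(population), size=len(population), p=weights)
    return population[idx] + rng.normal(0.0, noise, size=population.shape)


def crossover(population):
    """Arithmetic crossover: blend each individual with a random partner."""
    partners = population[rng.permutation(len(population))]
    alpha = rng.random((len(population), 1))
    return alpha * population + (1.0 - alpha) * partners


# Search for the pose over a 50 x 50 map.
population = rng.uniform([0.0, 0.0, -np.pi], [50.0, 50.0, np.pi], size=(100, 3))
for _ in range(50):
    scores = np.array([fitness(p) for p in population])
    selected = pf_inspired_mutation(population, scores)  # selection + mutation
    population = crossover(selected)

best = max(population, key=fitness)
print("estimated pose:", best)
```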

    A Bioinspired Neural Model Based Extended Kalman Filter for Robot SLAM

    The robot simultaneous localization and mapping (SLAM) problem is an important and challenging issue in robotics. The main tasks of SLAM are to reduce the localization error and the estimation error of the landmarks while improving the robustness and accuracy of the algorithms. Methods based on the extended Kalman filter (EKF) are among the most popular for SLAM; however, the accuracy of an EKF-based SLAM algorithm degrades when the noise model is inaccurate. To solve this problem, a novel bioinspired neural model based SLAM approach is proposed in this paper. An adaptive EKF-based SLAM structure is introduced in which a bioinspired neural model adaptively adjusts the weights of the system noise and observation noise, guaranteeing the stability of the filter and the accuracy of the SLAM algorithm. The proposed approach can deal with the SLAM problem in various situations, for example when the noise is in abnormal conditions. Finally, simulation experiments are carried out to validate and demonstrate the efficiency of the proposed approach.
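
    The adaptive idea can be sketched in one dimension: a shunting-type neural unit driven by the innovation inflates the observation-noise covariance when residuals are abnormally large. The 1-D model, the shunting constants, and the coupling below are illustrative assumptions rather than the paper's exact formulation.

```python
# Rough sketch of an adaptive EKF in 1-D: a shunting ("bioinspired") neural unit driven
# by the innovation magnitude scales the observation-noise covariance, so unexpectedly
# large residuals temporarily down-weight the measurements.
import numpy as np

rng = np.random.default_rng(1)

A, B = 2.0, 5.0          # decay and upper bound of the shunting neural unit (assumed)
dt = 0.1
Q, R0 = 0.01, 0.25       # nominal process / observation noise variances

x_est, P = 0.0, 1.0      # EKF state estimate and covariance
neural = 0.0             # activity of the adaptive neural unit
x_true = 0.0

for k in range(200):
    # Simulate a slowly drifting truth and a noisy measurement,
    # with occasional outliers to stress the adaptation.
    x_true += 0.5 * dt
    noise = rng.normal(0.0, np.sqrt(R0)) + (5.0 if k % 60 == 0 else 0.0)
    z = x_true + noise

    # EKF predict (simple drift model as a stand-in for the motion model).
    x_pred = x_est + 0.5 * dt
    P_pred = P + Q

    # Innovation drives the shunting equation: dn/dt = -A*n + (B - n)*|residual|.
    residual = z - x_pred
    neural += dt * (-A * neural + (B - neural) * abs(residual))

    # Inflate the observation noise when the neural activity is high.
    R_adaptive = R0 * (1.0 + neural)

    # EKF update with the adapted covariance.
    K = P_pred / (P_pred + R_adaptive)
    x_est = x_pred + K * residual
    P = (1.0 - K) * P_pred

print("final estimate:", x_est, "truth:", x_true)
```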

    Neuroeconomics: How Neuroscience Can Inform Economics

    Neuroeconomics uses knowledge about brain mechanisms to inform economic analysis and roots economics in biology. It opens up the "black box" of the brain, much as organizational economics adds detail to the theory of the firm. Neuroscientists use many tools, including brain imaging, the behavior of patients with localized brain lesions, animal behavior, and recordings of single-neuron activity. The key insight for economics is that the brain is composed of multiple interacting systems. Controlled systems ("executive function") interrupt automatic ones, and emotions and cognition both guide decisions. Just as prices and allocations emerge from the interaction of two processes, supply and demand, individual decisions can be modeled as the result of two (or more) interacting processes. Indeed, "dual-process" models of this sort are better rooted in neuroscientific fact, and more empirically accurate, than single-process models (such as utility maximization). We discuss how brain evidence complicates standard assumptions about basic preferences, to include homeostasis and other kinds of state-dependence. We also discuss applications to intertemporal choice, risk and decision making, and game theory. Intertemporal choice appears to be domain-specific and heavily influenced by emotion. The simplified β-δ model of quasi-hyperbolic discounting is supported by activation in distinct regions of limbic and cortical systems. In risky decisions, imaging data tentatively support the idea that gains and losses are coded separately, and that ambiguity is distinct from risk because it activates fear and discomfort regions. (Ironically, lesion patients who do not receive fear signals in prefrontal cortex are "rationally" neutral toward ambiguity.) Game theory studies show the effect of brain regions implicated in "theory of mind", correlates of strategic skill, and effects of hormones and other biological variables. Finally, economics can contribute to neuroscience because simple rational-choice models are useful for understanding highly evolved behavior such as motor actions that earn rewards and Bayesian integration of sensorimotor information.
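
    For reference, the β-δ model of quasi-hyperbolic discounting referred to above weights every future period by an additional present-bias factor β on top of standard exponential discounting:

```latex
% Quasi-hyperbolic (beta-delta) discounted utility of a consumption stream c_0, c_1, ...
U(c_0, c_1, \dots) = u(c_0) + \beta \sum_{t=1}^{\infty} \delta^{t}\, u(c_t),
\qquad 0 < \beta \leq 1, \quad 0 < \delta < 1 .
```

    Here β < 1 captures the extra weight placed on immediate rewards (present bias), while δ is the conventional exponential discount factor.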

    Global Localization based on Evolutionary Optimization Algorithms for Indoor and Underground Environments

    A fully autonomous robot is defined by its capability to sense, understand, and move within the environment to perform a specific task. These qualities fall under the concept of navigation, but the most fundamental among them is localization, the capacity of the system to know its position with respect to its surroundings. The localization problem can therefore be defined as the search for the robot's position coordinates and rotation angles within a known environment. This thesis addresses the particular case of Global Localization, in which no information about the initial position is available and the robot relies only on its sensors. The goal of this work is to develop tools that allow the system to localize itself in the two most common geometric map representations: occupancy maps and point clouds. The former divide the space into equally sized cells coded with a binary value distinguishing free from occupied space; point clouds define obstacles and environment features as a sparse set of points in space, commonly measured with a laser sensor. Several algorithms are presented that search for the robot's position using laser measurements only, in contrast with the more usual methods that combine external information with the robot's motion information (odometry). The system is thus capable of finding its own position in indoor environments without external positioning and without the uncertainty that motion sensors typically induce. The solution is addressed by implementing several stochastic optimization algorithms, or metaheuristics, specifically the bio-inspired family commonly known as Evolutionary Algorithms. Inspired by natural phenomena, these algorithms evolve a set of particles or population members towards a solution by optimizing a cost or fitness function that defines the problem. The implemented algorithms are Differential Evolution, Particle Swarm Optimization, and Invasive Weed Optimization, which mimic evolution through mutation, the movement of swarms or flocks of animals, and the colonizing behavior of invasive plant species, respectively. The implementations address the need to parameterize these algorithms for a search space as wide as a complete three-dimensional map, with exploratory behavior and with convergence conditions that terminate the search, since the process is a recursive estimation of an unknown optimum. Each candidate pose is evaluated by comparing the laser measurements taken at the real position with those predicted for the candidate in the known map; the cost function evaluates this similarity between real and estimated measurements and therefore defines the problem to be optimized. The common approach in laser-based localization or mapping is to use the mean squared error or the absolute error between laser measurements as the optimization function. This work introduces a different perspective based on statistical distances or divergences, which describe the similarity between probability distributions. By modeling the laser sensor as a probability distribution over the measured distance, the algorithm can exploit the asymmetries of these divergences to favor or penalize different situations, so that it evaluates how the laser scans differ and not only by how much. The results obtained in different maps, both simulated and real, show that the Global Localization problem is successfully solved by these methods, in both position and orientation. The divergence-based weighted cost functions provide the localization filters with great robustness and accuracy and an excellent response to different sources and levels of noise, whether from the sensor measurements, the environment, or obstacles that are not registered in the map.
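
    As a minimal sketch of this search (under simplifying assumptions: a rectangular toy room instead of a real map, a naive ray model, and a plain KL divergence as the statistical distance), Differential Evolution can recover a pose by comparing the real scan with the scan predicted from each candidate:

```python
# Toy divergence-based global localization in a rectangular room: Differential Evolution
# searches over (x, y, theta); each candidate is scored by a KL-style divergence between
# the real scan and the scan predicted from that candidate pose. Not the thesis code.
import numpy as np
from scipy.optimize import differential_evolution

W, H = 10.0, 6.0                          # rectangular room used as a toy "map"
ANGLES = np.linspace(0, 2 * np.pi, 36, endpoint=False)


def predicted_scan(x, y, theta):
    """Distance from (x, y) to the room walls along each beam direction."""
    ranges = []
    for a in ANGLES + theta:
        dx, dy = np.cos(a), np.sin(a)
        candidates = (
            (W - x) / dx if dx > 0 else np.inf,
            (0.0 - x) / dx if dx < 0 else np.inf,
            (H - y) / dy if dy > 0 else np.inf,
            (0.0 - y) / dy if dy < 0 else np.inf,
        )
        ranges.append(min(candidates))
    return np.array(ranges)


def kl_cost(pose, real_scan):
    """Treat normalized scans as distributions and compare them with KL divergence."""
    est = predicted_scan(*pose)
    eps = 1e-9
    p = (real_scan + eps) / (real_scan + eps).sum()
    q = (est + eps) / (est + eps).sum()
    return float(np.sum(p * np.log(p / q)))


true_pose = (7.0, 2.0, 0.4)
real_scan = predicted_scan(*true_pose) + np.random.default_rng(2).normal(0, 0.05, ANGLES.size)

# Note: the toy room is 180-degree symmetric, so the mirrored pose is an equally good fit.
result = differential_evolution(kl_cost, bounds=[(0, W), (0, H), (-np.pi, np.pi)],
                                args=(real_scan,), seed=3, tol=1e-7)
print("estimated pose:", result.x, "true pose:", true_pose)
```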

    Navigational Path Analysis of Mobile Robot in Various Environments

    This dissertation describes work in the area of autonomous mobile robots. The objective is the navigation of a mobile robot in a real-world dynamic environment while avoiding structured and unstructured obstacles, whether static or dynamic. The shapes and positions of the obstacles are not known to the robot prior to navigation. The mobile robot has sensory recognition of specific objects in the environment, and this sensory information provides its controllers with local information about the robot's immediate surroundings. The robot handles this information intelligently to reach the global objective (the target). The navigational path, as well as the time taken during navigation, can be expressed as an optimisation problem and thus analyzed and solved using AI techniques. The optimisation of the path and of the time taken is based on the kinematic stability and the intelligence of the robot controller. A successful way of structuring the navigation task is to deal with the design of individual behaviours and the coordination of their actions. The navigation objective is addressed using fuzzy logic, neural networks, an adaptive neuro-fuzzy inference system, and other AI techniques. The research also addresses distributed autonomous systems using multiple robots.
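
    A toy sketch of the kind of fuzzy behaviour blending such a controller relies on is given below; the membership functions, rules, and numeric constants are illustrative assumptions, not the dissertation's controller.

```python
# Toy fuzzy steering behaviour: obstacle distance and target bearing are fuzzified,
# two rules fire ("avoid obstacle" vs. "seek goal"), and a Sugeno-style weighted
# average of the rule outputs gives the steering command.
import numpy as np


def tri(x, a, b, c):
    """Triangular membership function with feet at a and c and peak at b."""
    return max(min((x - a) / (b - a + 1e-9), (c - x) / (c - b + 1e-9)), 0.0)


def fuzzy_steering(front_dist, target_bearing):
    """Steering command (rad) blending obstacle avoidance and goal seeking.

    front_dist     -- distance (m) to the nearest obstacle straight ahead
    target_bearing -- bearing (rad) of the target relative to the heading
    """
    near = tri(front_dist, 0.0, 0.0, 1.5)   # membership of "obstacle is near"
    clear = tri(front_dist, 1.0, 3.0, 3.0)  # membership of "path ahead is clear"

    # Rule 1: IF obstacle is near THEN turn hard away from the target side.
    avoid_turn = -0.8 if target_bearing >= 0 else 0.8
    # Rule 2: IF path is clear THEN steer towards the target bearing.
    seek_turn = float(np.clip(target_bearing, -0.5, 0.5))

    # Sugeno-style defuzzification: firing-strength-weighted average of outputs.
    total = near + clear
    return (near * avoid_turn + clear * seek_turn) / total if total > 0 else 0.0


# Example: obstacle 1.2 m ahead, target 20 degrees to the left.
print(fuzzy_steering(1.2, np.deg2rad(20)))
```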