10 research outputs found

    Enhancing FastSLAM 2.0 performance using a DE Algorithm with Multi-mutation Strategies

    FastSLAM 2.0 is a popular approach that uses a Rao-Blackwellized particle filter to solve the simultaneous localization and mapping (SLAM) problem. It is computationally efficient, robust, and able to handle large and complex environments. However, the conventional FastSLAM 2.0 algorithm is known to degrade in accuracy over time because of the particle depletion problem that arises in the resampling phase. In this work, we introduce an enhanced variant of the FastSLAM 2.0 algorithm based on an enhanced differential evolution (DE) algorithm with multi-mutation strategies, intended to improve performance and reduce the effect of particle depletion. The enhanced DE algorithm is used to optimize the particle weights and conserve diversity among the particles. The proposed algorithm is compared with two other common algorithms on the task of estimating robot and landmark positions in a SLAM problem. Results are reported in terms of accuracy, represented by the positioning errors of the robot and landmark positions as well as their root mean square errors. In all considered cases, the proposed algorithm estimates the robot and landmark positions more accurately than the compared algorithms: it reduces the effect of the particle depletion problem and improves the performance of FastSLAM 2.0 in solving the SLAM problem.
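    As a rough illustration of the idea (not the authors' implementation), a DE step can refine particles toward high-likelihood regions before resampling, preserving diversity. The sketch below uses a single rand/1/bin mutation strategy rather than the paper's multi-mutation scheme, and the parameters F and CR are illustrative assumptions.

```python
import numpy as np

def de_refine_particles(particles, log_lik, iters=5, F=0.5, CR=0.9, rng=None):
    """Move particles toward high-likelihood regions with a DE step
    (rand/1/bin mutation + binomial crossover), keeping a move only if
    it improves that particle's likelihood. A stand-in for the paper's
    multi-mutation-strategy DE, which is not reproduced here."""
    rng = np.random.default_rng() if rng is None else rng
    P = particles.copy()
    n, d = P.shape
    fitness = np.array([log_lik(p) for p in P])
    for _ in range(iters):
        for i in range(n):
            a, b, c = rng.choice([j for j in range(n) if j != i], 3, replace=False)
            mutant = P[a] + F * (P[b] - P[c])        # DE/rand/1 mutation
            cross = rng.random(d) < CR               # binomial crossover mask
            cross[rng.integers(d)] = True            # ensure at least one gene crosses
            trial = np.where(cross, mutant, P[i])
            f_trial = log_lik(trial)
            if f_trial > fitness[i]:                 # greedy selection
                P[i], fitness[i] = trial, f_trial
    return P, fitness

# Toy usage: pull 100 two-dimensional pose particles toward a Gaussian peak.
obs = np.array([1.0, -2.0])
log_lik = lambda x: -0.5 * np.sum((x - obs) ** 2)
parts = np.random.default_rng(0).normal(size=(100, 2)) * 3
refined, _ = de_refine_particles(parts, log_lik)
```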

    Design and optimization of wireless sensor networks for localization and tracking

    Knowledge of the position of its nodes is crucial in most wireless sensor network (WSN) applications: the gathered information needs to be associated with a particular location at a specific time instant in order to appropriately monitor the surveillance area. Moreover, WSNs may be used to track certain objects in monitoring applications, which also requires incorporating the location of the sensor nodes into the tracking algorithms. These requirements make localization and tracking two of the most important tasks of a WSN. Despite the large research effort made in this field, considerable technical challenges remain in areas such as data processing and communications. This thesis is concerned with some of these technical problems; specifically, we study three different challenges: sensor deployment, model-independent localization and sensor selection.

The first part of the work focuses on sensor deployment, which is considered critical since it affects the cost, detection capability, and localization accuracy of a WSN. There have been significant research efforts on deploying sensors from different points of view, e.g. connectivity or target detection. However, in the context of target localization, we believe it is more appropriate to deploy the sensors with a view to obtaining the best possible estimate of the target position. Therefore, in this work we analyze the deployment from the standpoint of the error in the position estimate. To this end, we apply the modified Cramér-Rao bound (MCRB) to a sensor network to perform a prior analysis of the system's operation in the localization task. This analysis provides knowledge about the system's behavior without a complete deployment, as well as essential information for properly selecting fundamental parameters such as the number of sensors. To do so, a complete formulation of the modified information matrix (MFIM) and the MCRB is developed for the most common measurement models, such as received signal strength (RSS), time-of-arrival (ToA) and angle-of-arrival (AoA), and is extended to heterogeneous models that combine different measurement types. Simulation results demonstrate the utility of the proposed analysis and point out the similarity between the MCRB and the CRB.

Secondly, we address the problem of target localization, which encompasses many of the challenging issues that commonly arise in WSNs. Many localization algorithms have been proposed in the literature, each oriented toward solving these issues. Nevertheless, the localization performance of these methods usually relies heavily on accurate knowledge of the observation model; when errors in the measurement model are present, their target localization accuracy degrades significantly. To overcome this problem, we propose a novel localization algorithm for applications where the measurement model is inaccurate or incomplete. The independence of the algorithm from the model provides robustness and versatility. To achieve this, we apply radial basis function (RBF) interpolation to evaluate the measurement function over the entire surveillance area and estimate the target position. In addition, we propose the application of LASSO regression to compute the weights of the RBFs and improve the generalization of the interpolated function. Simulation results demonstrate the good performance of the proposed algorithm in the localization of single or multiple targets.

Finally, we study the sensor selection problem. In order to prolong the network lifetime, sensors alternate between active and idle states; the decision of which sensor to activate depends on a variety of factors determined by the algorithm or the sensor application. Here we investigate the centralized selection of sensors in target-tracking applications over huge networks in which a large number of randomly placed sensors are available to take measurements. Specifically, we focus on the application of optimization algorithms to sensor selection, using a variant of the CRB, the posterior CRB (PCRB), as the performance-based optimization criterion. This bound provides the limit on the mean square error (MSE) attainable by any unbiased estimator of a random parameter, and it is computed iteratively by a particle filter (in our case, a Rao-Blackwellized particle filter). We analyze and compare three optimization algorithms: a genetic algorithm (GA), particle swarm optimization (PSO), and a new discrete variant of the cuckoo search (CS) algorithm. In addition, we propose local-search versions of these optimization algorithms that significantly reduce the computation time. Simulation results demonstrate the utility of these optimization algorithms for solving the sensor selection problem and confirm the reduction in computation time when local search is applied.

--------------------------------------------------- Sensor networks are a very interesting technology that has attracted considerable interest from researchers [1, 109]. Recent advances in electronics and wireless communications have enabled the development of low-cost, low-power, multi-function sensors of reduced size with short-range communication capabilities. Deployed in large numbers and linked through wireless communications, these sensors provide great opportunities in applications such as the monitoring and control of homes, cities or the environment. A sensor node is a low-power device capable of interacting with the environment through its sensors, processing information locally and communicating that information to its nearest neighbors. A wide variety of sensors is available on the market (magnetic, acoustic, thermal, etc.), making it possible to monitor very diverse ambient conditions (temperature, humidity, etc.) [25]. Consequently, sensor networks have a wide range of applications: home security, environmental monitoring, analysis and prediction of weather conditions, biomedicine [79], etc. Unlike conventional networks, sensor networks have their own limitations, such as the amount of available energy, the short range of their communications, their low bandwidth and their limited information processing and storage. Furthermore, there are design constraints that depend directly on the intended application of the network, such as the network size, the deployment scheme or the network topology. -- President: Jesús Cid Sueiro; Member: Mónica F. Bugallo; Secretary: Sancho Salcedo San
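    A minimal sketch of the model-independent localization idea, under stated assumptions: RBF weights are fitted with LASSO from calibration readings taken at known positions, and a new target is located as the grid point whose interpolated readings best match the observation. The anchor layout, the RBF width gamma, and the LASSO alpha below are illustrative, not the thesis's values.

```python
import numpy as np
from sklearn.linear_model import Lasso

def rbf_features(points, centers, gamma=0.2):
    """Gaussian RBF design matrix: one column per center."""
    d2 = ((points[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

rng = np.random.default_rng(1)
anchors = np.array([[0., 0.], [10., 0.], [5., 10.]])   # three RSS anchors (assumed)
X_cal = rng.uniform(0, 10, size=(300, 2))              # known calibration positions
# RSS-like readings; the true model is treated as unknown by the algorithm.
Z_cal = np.stack([-20 * np.log10(1 + np.linalg.norm(X_cal - a, axis=1))
                  for a in anchors], axis=1)

centers = X_cal[::15]                                  # RBF centers on a subset
Phi = rbf_features(X_cal, centers)
# LASSO zeroes out unneeded RBFs, improving generalization of the interpolant.
models = [Lasso(alpha=0.01, max_iter=5000).fit(Phi, Z_cal[:, k]) for k in range(3)]

# Localize a new reading: evaluate the interpolated field on a grid and
# pick the grid point whose predicted readings best match the measurement.
g = np.linspace(0, 10, 60)
grid = np.array([[a, b] for a in g for b in g])
Phi_g = rbf_features(grid, centers)
target = np.array([3., 7.])
z_obs = np.array([-20 * np.log10(1 + np.linalg.norm(target - a)) for a in anchors])
pred = np.stack([m.predict(Phi_g) for m in models], axis=1)
estimate = grid[np.argmin(((pred - z_obs) ** 2).sum(axis=1))]
```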

    Prediction of tunnel boring machine penetration rate using ant colony optimization, bee colony optimization and the particle swarm optimization, case study: Sabzkooh water conveyance tunnel

    Purpose. The purpose of this study is to use a novel approach to estimate the tunnel boring machine (TBM) penetration rate in diverse ground conditions. Methods. The methods used in this study are ant colony optimization (ACO), bee colony optimization (BCO) and particle swarm optimization (PSO). A comprehensive database was created from machine performance records, with penetration rate (m/h) as the output parameter and the following intact-rock and rock-mass parameters as inputs: uniaxial compressive strength (UCS) (MPa), Brazilian tensile strength (BTS) (MPa), rock quality designation (RQD) (%), cohesion (MPa), elasticity modulus (GPa), Poisson's ratio, density (g/cm³), joint angle (deg.) and joint spacing (m). Findings. The analyses yielded several realistic and reliable models for predicting the penetration rate of TBMs: the ACO model has R² = 0.8830 and RMSE = 0.6955, the BCO model has R² = 0.9367 and RMSE = 0.5113, and the PSO model has R² = 0.9717 and RMSE = 0.3418. Originality. Prediction of the TBM penetration rate using these methods has been carried out for the Sabzkooh water conveyance tunnel for the first time. Practical implications. The results are of value for tunnelling design: all three approaches are very effective, but PSO yields more precise and realistic findings than the other methods. The authors would like to thank the anonymous reviewers and editor for their valuable comments.
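    For reference, the two criteria used to rank the ACO, BCO and PSO models can be computed as below. This is a minimal sketch; the sample values are made-up placeholders, not the study's data.

```python
import numpy as np

def r2_rmse(y_true, y_pred):
    """Coefficient of determination and root-mean-square error, the two
    criteria used to compare the penetration-rate models."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    resid = y_true - y_pred
    ss_res = np.sum(resid ** 2)
    ss_tot = np.sum((y_true - y_true.mean()) ** 2)
    return 1 - ss_res / ss_tot, np.sqrt(np.mean(resid ** 2))

# Toy check with made-up penetration rates (m/h), not the paper's database.
r2, rmse = r2_rmse([2.1, 3.4, 1.8, 2.9], [2.0, 3.6, 1.7, 3.1])
```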

    Fight sample degeneracy and impoverishment in particle filters: A review of intelligent approaches

    During the last two decades there has been a growing interest in particle filtering (PF). However, PF suffers from two long-standing problems referred to as sample degeneracy and impoverishment. This review investigates methods that fight sample degeneracy and impoverishment through particle distribution optimization (PDO), with an emphasis on intelligent approaches. These approaches draw on Markov chain Monte Carlo methods, mean-shift algorithms, artificial intelligence algorithms (e.g., particle swarm optimization, genetic algorithms and ant colony optimization), machine learning approaches (e.g., clustering, splitting and merging) and their hybrids, forming a coherent standpoint from which to enhance the particle filter. The working mechanisms, interrelationships, and pros and cons of these approaches are discussed. In addition, approaches that are effective in dealing with high dimensionality are reviewed. While these advanced techniques improve filter performance in terms of accuracy, robustness and convergence, they often incur additional computational requirements that can in turn offset the improvement obtained in real-life filtering. This fact, hidden in pure simulations, deserves the attention of the users and designers of new filters.
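    One of the families the review covers, MCMC-based redistribution, can be illustrated with a minimal resample-move sketch: systematic resampling fights degeneracy, and a Metropolis-Hastings move per particle re-diversifies the copies. The step size and the choice of a single MH move are illustrative assumptions, not any specific method from the review.

```python
import numpy as np

def resample_move(particles, weights, log_post, step=0.1, rng=None):
    """Systematic resampling followed by one Metropolis-Hastings move per
    particle. Weights are assumed normalized; log_post is the log of the
    target posterior at the current time step."""
    rng = np.random.default_rng() if rng is None else rng
    n, d = particles.shape
    # Systematic (low-variance) resampling.
    positions = (rng.random() + np.arange(n)) / n
    idx = np.minimum(np.searchsorted(np.cumsum(weights), positions), n - 1)
    new = particles[idx].copy()
    # One MH move per particle, accepted with the posterior ratio.
    for i in range(n):
        prop = new[i] + step * rng.standard_normal(d)
        if np.log(rng.random()) < log_post(prop) - log_post(new[i]):
            new[i] = prop
    return new, np.full(n, 1.0 / n)
```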

    Robot Localization with Weak Maps

    In this work, we present an approach to indoor localization for a mobile robot based on a weakly-defined prior map. The aim is to estimate the robot's pose even when only incomplete knowledge of the environment is available and, furthermore, to improve the information in the prior map according to measurements. We discuss two different approaches to describing the prior map. In the first approach, a complete map of the environment is given to the robot, but the scale of the map is unknown. The map is represented as an occupancy grid. We present a method based on Monte Carlo localization that successfully estimates both the robot's pose and the scale of the map. In the second approach, the prior map is a 2D sketch map provided by a user; it does not contain exact metric information about the building, and some obstacles and features are not fully represented. The aim is to estimate the scale of the map and to modify and correct the prior map given the robot's exact pose. The map is represented in polygonal format in homogeneous coordinates, which makes it possible to analyze the uncertainty of its features. We propose two methods to update the prior information in the map: one uses a Kalman filter, and the other is based on geometrical constraints. Both methods can partially improve the estimates of room and location dimensions and of wall orientations, but they suffer slightly from data-association errors.
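    The key idea of the first approach can be sketched by augmenting each particle's state with the unknown map scale, assuming a range-sensor measurement model. The function expected_range below is a hypothetical stand-in for ray-casting in the occupancy grid, and sigma is an illustrative noise level.

```python
import numpy as np

def mcl_scale_update(particles, z_obs, expected_range, sigma=0.2, rng=None):
    """One measurement update of Monte Carlo localization with unknown map
    scale. Each particle is (x, y, theta, s); the scale s stretches the
    range predicted from the prior map before comparison with the observed
    range z_obs. expected_range(x, y, theta) stands in for ray-casting."""
    rng = np.random.default_rng() if rng is None else rng
    pred = np.array([s * expected_range(x, y, th) for x, y, th, s in particles])
    w = np.exp(-0.5 * ((z_obs - pred) / sigma) ** 2) + 1e-300  # Gaussian range model
    w /= w.sum()
    # Resampling concentrates particles on poses AND scales that fit the data.
    return particles[rng.choice(len(particles), size=len(particles), p=w)]
```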

    Inferring Human Pose and Motion from Images

    As optical gesture recognition technology advances, touchless human-computer interfaces of the future will soon become a reality. One particular technology, markerless motion capture, has gained a large amount of attention, with widespread application in diverse disciplines, including medical science, sports analysis, advanced user interfaces, and virtual arts. However, the complexity of human anatomy makes markerless motion capture a non-trivial problem: I) the parameterised pose configuration exhibits high dimensionality, and II) there is considerable ambiguity in the surjective inverse mapping from the observation space to the pose configuration space when only a limited number of camera views is available. Together, these factors lead to multimodality in a high-dimensional space, making markerless motion capture an ill-posed problem. This study addresses these difficulties by introducing a new framework. It begins by automatically building subject-specific template models and calibrating posture at the initial stage. Subsequent tracking is accomplished by embedding naturally-inspired global optimisation into the sequential Bayesian filtering framework, and is enhanced by several robust improvements to the evaluation step. Image sparsity is handled by compressive evaluation, further improving computational efficiency in the high-dimensional space.

    Who wrote this scientific text?

    The IEEE bibliographic database contains a number of proven duplications, with indication of the original paper(s) copied. This corpus is used to test a method for the detection of hidden intertextuality (commonly called "plagiarism"). The intertextual distance, combined with a sliding window and with various classification techniques, identifies these duplications with a very low risk of error. The experiments also show that several factors blur the identity of the scientific author, including variable group authorship and the high levels of intertextuality accepted, and sometimes desired, in scientific papers on the same topic.
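    The core measure can be sketched as follows. This assumes a Labbé-style formulation of intertextual distance (the paper's exact definition may differ) combined with a sliding window over the suspect text; the window and step sizes are illustrative.

```python
from collections import Counter

def intertextual_distance(tokens_a, tokens_b):
    """Labbé-style intertextual distance in [0, 1]: 0 for identical
    word-frequency profiles, 1 for fully disjoint vocabularies. The
    longer text's frequencies are rescaled to the shorter text's length.
    A sketch of this family of measures, not the paper's exact code."""
    if len(tokens_a) > len(tokens_b):
        tokens_a, tokens_b = tokens_b, tokens_a
    fa, fb = Counter(tokens_a), Counter(tokens_b)
    na, nb = len(tokens_a), len(tokens_b)
    scale = na / nb
    vocab = set(fa) | set(fb)
    return sum(abs(fa[w] - fb[w] * scale) for w in vocab) / (2 * na)

def windows(tokens, size=500, step=100):
    """Sliding windows over a suspect text, each compared to a source."""
    for i in range(0, max(1, len(tokens) - size + 1), step):
        yield tokens[i:i + size]
```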

    Real-coded genetic algorithm particle filters for high-dimensional state spaces

    This thesis addresses the issues faced by particle filters in high-dimensional state spaces by comparing them with genetic algorithms and then using genetic algorithm theory to resolve those issues. Sequential Monte Carlo methods are a class of online posterior density estimation algorithms suitable for non-Gaussian and nonlinear environments; however, they are known to suffer from particle degeneracy, where the sample of particles becomes too sparse to approximate the posterior accurately. Various techniques have been proposed to address this issue, but they fail in high dimensions. In this thesis, after a careful comparison between genetic algorithms and particle filters, we posit that genetic-algorithm-theoretic arguments can be used to explain the working of particle filters. Analysing the working of a particle filter, we note that it is structured much like a genetic algorithm but does not include recombination. We argue, based on the building-block hypothesis, that the addition of a recombination operator can address the sample impoverishment phenomenon in higher dimensions. We propose a novel real-coded genetic algorithm particle filter (RGAPF) based on these observations and test our hypothesis on the stochastic volatility estimation of financial stocks; the RGAPF successfully scales to higher dimensions. To further test whether building-block-hypothesis-like effects are due to the recombination operator, we compare the RGAPF with a mutation-only particle filter whose adjustable mutation rate is set equal to the population-to-population variance of the RGAPF. The RGAPF significantly and consistently performs better, indicating that recombination has a subtle but significant effect that may be explained theoretically by genetic algorithm theory. After these two successful validations of our hypothesis, we compare the performance of the RGAPF under different real-recombination operators. Observing the behaviour of the RGAPF under these operators, we propose a mean-centric recombination operator specifically for high-dimensional particle filtering. This recombination operator is tested and compared with benchmark particle filters and a hybrid CMA-ES particle filter, using simulated data and finally real end-of-day data of the securities making up the FTSE-100 index. Each experiment is discussed in detail, and we conclude with a brief description of future directions of research.
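    A sketch of the kind of mean-centric real-coded recombination the thesis motivates, applied to a resampled particle population: offspring are drawn around the mean of two parents, mixing coordinates across particles to rebuild diversity in high dimensions. The operator shape and the spread parameter are illustrative assumptions, not the thesis's tuned operator.

```python
import numpy as np

def mean_centric_recombination(particles, spread=0.5, rng=None):
    """Illustrative mean-centric real-coded crossover for a GA particle
    filter: each offspring lies near the midpoint of two randomly chosen
    parents, perturbed along the parent-difference direction."""
    rng = np.random.default_rng() if rng is None else rng
    n, d = particles.shape
    pa = particles[rng.integers(n, size=n)]   # parent 1 for each offspring
    pb = particles[rng.integers(n, size=n)]   # parent 2 for each offspring
    mid = 0.5 * (pa + pb)                     # mean-centric: offspring near midpoint
    return mid + spread * (pa - pb) * rng.uniform(-0.5, 0.5, size=(n, d))
```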

    Collaborative robotics with autonomous navigation for performing conversion processes in continuous operations

    Chapter 6.2 confidential. -- Complete thesis: 190 p. -- Redacted thesis: 165 p. This project comprises two different prototypes. On the one hand, the development of an AMR prototype that uses autonomous navigation is presented; on the other, a study of a tool set developed within Mercedes-Benz. AMRs are powerful mobile platforms that use indoor autonomous navigation to move through any known area. For this reason, the School of Engineering of Vitoria-Gasteiz has begun the design of such a platform in order to work on localization algorithms. This mobile robot will be built with industrial components, which place several constraints on the development of its intelligence. A robot can be attached to an AMR; accordingly, within Mercedes-Benz, a collaborative robot built from commercial components has been mounted on an AMR. This development will be useful for improving workstation efficiency; to that end, the robot must move, position itself and carry out quality work in continuous operations.