
    Multi-Sensor Fusion for Underwater Vehicle Localization by Augmentation of RBF Neural Network and Error-State Kalman Filter

    The Kalman filter variants, the extended Kalman filter (EKF) and the error-state Kalman filter (ESKF), are widely used in underwater multi-sensor fusion applications for localization and navigation. Since these filters rely on a first-order Taylor series approximation of the error covariance matrix, their estimation accuracy degrades under high nonlinearity. To address this problem, we propose a novel multi-sensor fusion algorithm for underwater vehicle localization that improves state estimation by augmenting the ESKF with a radial basis function (RBF) neural network. In the proposed algorithm, the RBF neural network compensates for the performance limitations of the ESKF by improving the innovation error term. The weights and centers of the RBF neural network are designed by minimizing the estimation mean square error (MSE) using the steepest descent optimization approach. To test its performance, the proposed RBF-augmented ESKF multi-sensor fusion was compared with the conventional ESKF under three different realistic scenarios using Monte Carlo simulations. We found that the proposed method provides better navigation and localization results despite high nonlinearity, modeling uncertainty, and external disturbances. This research was partially funded by the Campus de Excelencia Internacional Andalucia Tech, University of Malaga, Malaga, Spain. Partial funding for open access charge: Universidad de Málaga.
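    As a rough illustration of the innovation-compensation idea (not the authors' implementation), the sketch below trains a 1-D Gaussian RBF network by steepest descent to absorb an assumed unmodeled measurement nonlinearity and then subtracts its output from the Kalman innovation. The toy models, gains, and the 0.3·sin(2x) bias term are all invented:

```python
import numpy as np

def rbf(x, centers, widths):
    # Gaussian radial basis activations for a scalar input x
    return np.exp(-((x - centers) ** 2) / (2 * widths ** 2))

rng = np.random.default_rng(0)
centers = np.linspace(-2.0, 2.0, 9)    # fixed grid of RBF centers (toy choice)
widths = np.full(9, 0.5)
w = np.zeros(9)                        # RBF output weights

# Steepest-descent (LMS) training of the weights on (state, residual) pairs;
# the "unmodeled nonlinearity" 0.3*sin(2x) is a made-up stand-in.
lr = 0.1
for _ in range(3000):
    x = rng.uniform(-2.0, 2.0)
    target = 0.3 * np.sin(2.0 * x)
    phi = rbf(x, centers, widths)
    w += lr * (target - w @ phi) * phi  # gradient step on the squared error

# Scalar Kalman-style update with the RBF-compensated innovation term.
x_pred, P, R, H = 1.0, 0.5, 0.1, 1.0
z = H * x_pred + 0.3 * np.sin(2.0 * x_pred) + 0.01   # biased measurement
innov = z - H * x_pred - w @ rbf(x_pred, centers, widths)
K = P * H / (H * P * H + R)
x_upd = x_pred + K * innov
```

    The compensated innovation is much smaller than the raw residual, which is the mechanism the abstract describes for restoring accuracy under model mismatch.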

    Active Learning of Gaussian Processes for Spatial Functions in Mobile Sensor Networks

    This paper proposes a spatial function modeling approach using mobile sensor networks, which can potentially be used for environmental surveillance applications. The mobile sensor nodes are able to sample point observations of a 2D spatial function. On the one hand, they use the observations to generate a predictive model of the spatial function. On the other hand, they make collective motion decisions to move into the regions where the predictive model is highly uncertain. In the end, an accurate predictive model is obtained in the sensor network and all the mobile sensor nodes are distributed in the environment in an optimized pattern. Gaussian process regression is selected as the modeling technique in the proposed approach. The hyperparameters of the Gaussian process model are learned online to improve the accuracy of the predictive model. The collective motion control of the mobile sensor nodes is based on a locational optimization algorithm, which utilizes the information entropy of the predicted Gaussian process to explore the environment and reduce the uncertainty of the predictive model. Simulation results are provided to show the performance of the proposed approach. © 2011 IFAC
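    A minimal sketch of this sampling loop (1-D rather than 2-D for brevity, with a fixed kernel lengthscale instead of online hyperparameter learning; the test function sin(3x) and all constants are invented):

```python
import numpy as np

def rbf_kernel(A, B, ell=0.2):
    # squared-exponential covariance between two 1-D point sets
    return np.exp(-(A[:, None] - B[None, :]) ** 2 / (2 * ell ** 2))

def gp_posterior(X, y, Xs, noise=1e-6):
    # textbook GP regression: posterior mean and variance at test points Xs
    K = rbf_kernel(X, X) + noise * np.eye(len(X))
    Ks = rbf_kernel(X, Xs)
    mean = Ks.T @ np.linalg.solve(K, y)
    var = 1.0 - np.sum(Ks * np.linalg.solve(K, Ks), axis=0)
    return mean, var

f = lambda x: np.sin(3.0 * x)          # unknown spatial function (toy)
X = np.array([0.1, 0.9])               # initial sample locations
y = f(X)
grid = np.linspace(0.0, 1.0, 101)

for _ in range(5):                     # active-sampling loop
    _, var = gp_posterior(X, y, grid)
    x_next = grid[np.argmax(var)]      # Gaussian entropy grows with variance
    X, y = np.append(X, x_next), np.append(y, f(x_next))

mean, var = gp_posterior(X, y, grid)
rmse = np.sqrt(np.mean((mean - f(grid)) ** 2))
```

    Because the entropy of a Gaussian is monotone in its variance, sampling at the maximum-variance point is the entropy-driven exploration rule the abstract refers to.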

    Online Visual Robot Tracking and Identification using Deep LSTM Networks

    Collaborative robots working on a common task are necessary for many applications. One of the challenges for achieving collaboration in a team of robots is mutual tracking and identification. We present a novel pipeline for online vision-based detection, tracking and identification of robots with a known and identical appearance. Our method runs in real-time on the limited hardware of the observer robot. Unlike previous works addressing robot tracking and identification, we use a data-driven approach based on recurrent neural networks to learn relations between sequential inputs and outputs. We formulate the data association problem as multiple classification problems. A deep LSTM network was trained on a simulated dataset and fine-tuned on a small set of real data. Experiments on two challenging datasets, one synthetic and one real, which include long-term occlusions, show promising results. Comment: IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Vancouver, Canada, 2017. IROS RoboCup Best Paper Award.
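    The recurrent building block can be sketched as follows. This is only the generic LSTM recurrence with a softmax identity head over the final hidden state, using random untrained weights and placeholder features; it is not the paper's trained network:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x, h, c, W, U, b):
    # one LSTM cell step; gate order: input, forget, cell candidate, output
    z = W @ x + U @ h + b
    n = h.size
    i, f = sigmoid(z[:n]), sigmoid(z[n:2 * n])
    g, o = np.tanh(z[2 * n:3 * n]), sigmoid(z[3 * n:])
    c = f * c + i * g
    h = o * np.tanh(c)
    return h, c

rng = np.random.default_rng(0)
d_in, d_h, n_robots, T = 6, 16, 4, 10        # invented sizes
W = rng.normal(0, 0.1, (4 * d_h, d_in))
U = rng.normal(0, 0.1, (4 * d_h, d_h))
b = np.zeros(4 * d_h)
W_out = rng.normal(0, 0.1, (n_robots, d_h))

# a detection-feature sequence for one tracked robot (random placeholder)
seq = rng.normal(0, 1, (T, d_in))
h, c = np.zeros(d_h), np.zeros(d_h)
for x in seq:
    h, c = lstm_step(x, h, c, W, U, b)

logits = W_out @ h                           # identity scores
probs = np.exp(logits - logits.max())
probs /= probs.sum()                         # softmax over robot identities
```

    Casting data association as classification means the network emits one such distribution over known robot identities per tracked sequence.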

    UWB-INS Fusion Positioning Based on a Two-Stage Optimization Algorithm

    Ultra-wideband (UWB) is a carrier-less communication technology that transmits data using narrow, nanosecond-scale pulses of non-sinusoidal waves. A UWB positioning system uses a multilateration algorithm to locate the target, and its positioning accuracy is seriously affected by non-line-of-sight (NLOS) errors. Existing NLOS error compensation methods lack multidimensional consideration. To combine the advantages of various methods, a two-stage UWB-INS fusion localization algorithm is proposed. In the first stage, an NLOS signal filter is designed based on support vector machines (SVM). In the second stage, the UWB and Inertial Navigation System (INS) results are fused with a Kalman filter. The two-stage fusion algorithm greatly improves the positioning system: it improves the localization accuracy by 79.8% in the NLOS environment and by 36% in the line-of-sight (LOS) environment.
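    The two-stage idea can be sketched in one dimension as below. The threshold gate is only a stand-in for the paper's SVM NLOS classifier, and the noise levels, the 0.5 m gate, and the 20% NLOS rate are invented:

```python
import numpy as np

rng = np.random.default_rng(1)
x_true = np.cumsum(np.full(50, 0.1))               # 1-D trajectory (toy)

ins = x_true + np.cumsum(rng.normal(0, 0.01, 50))  # drifting INS track
uwb = x_true + rng.normal(0, 0.05, 50)             # absolute UWB fixes
nlos = rng.random(50) < 0.2
uwb[nlos] += 1.0                                   # NLOS positive range bias

x, P = 0.0, 1.0
Q, R = 1e-4, 0.05 ** 2
est = []
for k in range(50):
    # predict with the INS increment as the process input
    u = ins[k] if k == 0 else ins[k] - ins[k - 1]
    x, P = x + u, P + Q
    # stage 1: gate out likely-NLOS measurements (SVM stand-in)
    if abs(uwb[k] - x) < 0.5:
        K = P / (P + R)                            # stage 2: Kalman update
        x, P = x + K * (uwb[k] - x), (1.0 - K) * P
    est.append(x)

est = np.array(est)
rmse_fused = np.sqrt(np.mean((est - x_true) ** 2))
rmse_uwb = np.sqrt(np.mean((uwb - x_true) ** 2))
```

    Rejected UWB epochs simply fall back on the INS prediction, which is why the fusion degrades gracefully in NLOS stretches.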

    Design and optimization of wireless sensor networks for localization and tracking

    Knowledge of the position of nodes is crucial in most wireless sensor network (WSN) applications. The gathered information needs to be associated with a particular location at a specific time instant in order to appropriately monitor the surveillance area. Moreover, WSNs may be used for tracking certain objects in monitoring applications, which also requires the incorporation of the location information of the sensor nodes into the tracking algorithms. These requisites make localization and tracking two of the most important tasks of a WSN. Despite the large research efforts made in this field, considerable technical challenges remain in areas like data processing and communications. This thesis is mainly concerned with some of these technical problems. Specifically, we study three different challenges: sensor deployment, model-independent localization and sensor selection. The first part of the work is focused on the task of sensor deployment. This task is considered critical since it affects the cost, detection capability, and localization accuracy of a WSN. There have been significant research efforts on deploying sensors from different points of view, e.g. connectivity or target detection. However, in the context of target localization, we believe it is more convenient to deploy the sensors with a view to obtaining the best possible estimate of the target position. Therefore, in this work we analyze the deployment from the standpoint of the error in the position estimate. To this end, we apply the modified Cramér-Rao bound (MCRB) in a sensor network to perform a prior analysis of the system operation in the localization task. This analysis provides knowledge about the system behavior without a complete deployment. It also provides essential information for properly selecting fundamental parameters, like the number of sensors.
    To do so, a complete formulation of the modified Fisher information matrix (MFIM) and MCRB is developed for the most common measurement models, such as received signal strength (RSS), time-of-arrival (ToA) and angle-of-arrival (AoA). In addition, this formulation is extended to heterogeneous models that combine different measurement models. Simulation results demonstrate the utility of the proposed analysis and point out the similarity between the MCRB and the CRB. Secondly, we address the problem of target localization, which encompasses many of the challenging issues that commonly arise in WSNs. Consequently, many localization algorithms have been proposed in the literature, each one oriented towards solving these issues. Nevertheless, it has been seen that the localization performance of the above methods usually relies heavily on the availability of accurate knowledge of the observation model. When errors in the measurement model are present, their target localization accuracy degrades significantly. To overcome this problem, we propose a novel localization algorithm to be used in applications where the measurement model is inaccurate or incomplete. The independence of the algorithm from the model provides robustness and versatility. To this end, we apply radial basis function (RBF) interpolation to evaluate the measurement function over the entire surveillance area and estimate the target position. In addition, we also propose the application of LASSO regression to compute the weights of the RBFs and improve the generalization of the interpolated function. Simulation results demonstrate the good performance of the proposed algorithm in the localization of single or multiple targets. Finally, we study the sensor selection problem. In order to prolong the network lifetime, sensors alternate their state between active and idle.
    The decision of which sensor should be activated is based on a variety of factors depending on the algorithm or the sensor application. Here we investigate the centralized selection of sensors in target-tracking applications over huge networks where a large number of randomly placed sensors are available for taking measurements. Specifically, we focus on the application of optimization algorithms for the selection of sensors using a variant of the CRB, the posterior CRB (PCRB), as the performance-based optimization criterion. This bound provides the performance limit on the mean square error (MSE) for any unbiased estimator of a random parameter, and is iteratively computed by a particle filter (in our case, by a Rao-Blackwellized particle filter). In this work we analyze and compare three optimization algorithms: a genetic algorithm (GA), particle swarm optimization (PSO), and a new discrete variant of the cuckoo search (CS) algorithm. In addition, we propose local-search versions of these optimization algorithms that provide a significant reduction of the computation time. Lastly, simulation results demonstrate the utility of these optimization algorithms in solving the sensor selection problem and point out the reduction of the computation time when local search is applied. --------------------------------------------------- Sensor networks are a very interesting technology that has attracted considerable attention from researchers [1, 109]. Recent advances in electronics and wireless communications have enabled the development of low-cost, low-power, multi-function sensors of reduced size with short-range communication capabilities. These sensors, deployed in large numbers and connected through wireless links, provide great opportunities in applications such as the monitoring and control of homes, cities or the environment.
    A sensor node is a low-power device capable of interacting with the environment through its sensors, processing information locally and communicating that information to its nearest neighbors. A wide variety of sensors is available on the market (magnetic, acoustic, thermal, etc.), which makes it possible to monitor very diverse environmental conditions (temperature, humidity, etc.) [25]. Consequently, sensor networks have a wide range of applications: home security, environmental monitoring, analysis and prediction of climatic conditions, biomedicine [79], etc. Unlike conventional networks, sensor networks have their own limitations, such as the amount of available energy, the short range of their communications, their low bandwidth, and their limited information processing and storage capabilities. Furthermore, there are design constraints that depend directly on the intended application of the network, such as the network size, the deployment scheme or the network topology. President: Jesús Cid Sueiro; Committee member: Mónica F. Bugallo; Secretary: Sancho Salcedo San
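    The model-independent localization idea from the second part of the thesis can be sketched as follows: Gaussian RBFs interpolate a sampled measurement field, LASSO weights are obtained with a simple ISTA proximal-gradient loop, and the target estimate is the maximizer of the interpolated field. The Gaussian-bump "measurement" field, the sizes, and the regularization constant are all invented for the toy:

```python
import numpy as np

rng = np.random.default_rng(2)

def gaussian_rbf(X, C, s=0.2):
    # design matrix of Gaussian RBFs centered at the rows of C
    d2 = ((X[:, None, :] - C[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * s ** 2))

t = np.array([0.6, 0.4])                      # unknown target position
field = lambda X: np.exp(-((X - t) ** 2).sum(-1) / (2 * 0.2 ** 2))

X = rng.random((200, 2))                      # sensor sampling locations
y = field(X)                                  # measurements (noise-free toy)
C = rng.random((60, 2))                       # RBF centers

# LASSO weights via ISTA: gradient step followed by soft-thresholding
Phi = gaussian_rbf(X, C)
L = np.linalg.norm(Phi, 2) ** 2               # Lipschitz constant of the gradient
lam = 0.01
w = np.zeros(60)
for _ in range(2000):
    w = w - Phi.T @ (Phi @ w - y) / L
    w = np.sign(w) * np.maximum(np.abs(w) - lam / L, 0.0)

# evaluate the interpolated field on a grid; target = strongest response
g = np.linspace(0.0, 1.0, 51)
G = np.stack(np.meshgrid(g, g), axis=-1).reshape(-1, 2)
t_hat = G[np.argmax(gaussian_rbf(G, C) @ w)]
err = np.linalg.norm(t_hat - t)
```

    The L1 penalty drives many RBF weights exactly to zero, which is how LASSO improves the generalization of the interpolated surface compared with plain least squares.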

    Automatic classification of power quality disturbances using optimal feature selection based algorithm

    The development of renewable energy sources and power electronic converters in conventional power systems leads to Power Quality (PQ) disturbances. This research aims at the automatic detection and classification of single and multiple PQ disturbances using a novel optimal feature selection scheme based on the Discrete Wavelet Transform (DWT) and an Artificial Neural Network (ANN). The DWT is used for the extraction of useful features, which are used to distinguish among different PQ disturbances by an ANN classifier. The performance of the classifier depends solely on the feature vector used for training; therefore, this research focuses on constructing a feature-selection-based classification system. In this study, an Artificial Bee Colony based Probabilistic Neural Network (ABC-PNN) algorithm is proposed for optimal feature selection. The most common types of single PQ disturbances include sag, swell, interruption, harmonics, oscillatory and impulsive transients, flicker, notches and spikes. Moreover, multiple disturbances consisting of combinations of two disturbances are also considered. The DWT with multi-resolution analysis has been applied to decompose the PQ disturbance waveforms into detail and approximation coefficients at level eight using the Daubechies wavelet family. Various statistical parameters of all the detail and approximation coefficients have been analysed for feature extraction, out of which the optimal features have been selected using the ABC algorithm. The performance of the proposed algorithm has been analysed with different ANN architectures, such as the multilayer perceptron and the radial basis function neural network; the PNN has been found to be the most suitable classifier. The proposed algorithm is tested on PQ disturbances obtained both from parametric equations and from typical power distribution system models using MATLAB/Simulink and PSCAD/EMTDC. PQ disturbances with uniformly distributed noise ranging from 20 to 50 dB have also been analysed. The experimental results show that the proposed ABC-PNN based approach is capable of efficiently eliminating unnecessary features to improve the accuracy and performance of the classifier.
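    A reduced sketch of the feature-extraction front end (a one-level Haar DWT instead of the level-8 Daubechies decomposition used in the study; the 50 Hz waveform and sag depth are invented):

```python
import numpy as np

def haar_dwt(x):
    # one-level Haar DWT: approximation (low-pass) and detail (high-pass)
    x = np.asarray(x, dtype=float)
    a = (x[0::2] + x[1::2]) / np.sqrt(2.0)
    d = (x[0::2] - x[1::2]) / np.sqrt(2.0)
    return a, d

def stat_features(c):
    # simple statistical parameters of a coefficient vector
    return np.array([c.mean(), c.std(), np.abs(c).max(), (c ** 2).sum()])

t = np.linspace(0.0, 0.2, 1024, endpoint=False)
clean = np.sin(2 * np.pi * 50 * t)       # 10 cycles of a 50 Hz waveform
sag = clean.copy()
sag[300:700] *= 0.5                      # 50% voltage sag over part of it

_, d_clean = haar_dwt(clean)
_, d_sag = haar_dwt(sag)
f_clean, f_sag = stat_features(d_clean), stat_features(d_sag)
```

    The sag reduces the detail-coefficient energy relative to the clean waveform, and it is exactly such differences in coefficient statistics that the ABC-selected feature vector feeds to the classifier.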

    Sequential Kernel Herding: Frank-Wolfe Optimization for Particle Filtering

    Recently, the Frank-Wolfe optimization algorithm was suggested as a procedure to obtain adaptive quadrature rules for integrals of functions in a reproducing kernel Hilbert space (RKHS) with a potentially faster rate of convergence than Monte Carlo integration (and "kernel herding" was shown to be a special case of this procedure). In this paper, we propose to replace the random sampling step in a particle filter by Frank-Wolfe optimization. By optimizing the position of the particles, we can obtain better accuracy than random or quasi-Monte Carlo sampling. In applications where the evaluation of the emission probabilities is expensive (such as in robot localization), the additional computational cost of generating the particles through optimization can be justified. Experiments on standard synthetic examples as well as on a robot localization task indeed indicate an improvement in accuracy over random and quasi-Monte Carlo sampling. Comment: in 18th International Conference on Artificial Intelligence and Statistics (AISTATS), May 2015, San Diego, United States. JMLR Workshop and Conference Proceedings, vol. 38.
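    The herding step that Frank-Wolfe generalizes can be sketched as follows: each new particle greedily maximizes the gap between the target's kernel mean embedding and that of the current particle set, which corresponds to a Frank-Wolfe step with uniform weights. The Gaussian target, kernel bandwidth, and particle counts are invented:

```python
import numpy as np

rng = np.random.default_rng(3)
k = lambda a, b: np.exp(-(a[:, None] - b[None, :]) ** 2 / 0.5)

target = rng.normal(0.0, 1.0, 2000)      # samples representing p(x)
grid = np.linspace(-4.0, 4.0, 401)       # candidate particle positions
mu_p = k(grid, target).mean(axis=1)      # kernel mean embedding of p

particles = []
for _ in range(30):
    if particles:
        mu_q = k(grid, np.array(particles)).mean(axis=1)
    else:
        mu_q = np.zeros_like(grid)
    # Frank-Wolfe linear step: pick the most under-represented point
    particles.append(grid[np.argmax(mu_p - mu_q)])

particles = np.array(particles)
```

    In the particle-filter setting, the same greedy placement would replace the random resampling draw, at the cost of the search over candidate positions.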

    Multilevel assimilation of inverted seismic data

    In ensemble-based data assimilation (DA), the ensemble size is usually limited to around one hundred members. Straightforward application of ensemble-based DA can therefore result in significant Monte Carlo errors, often manifesting themselves as severe underestimation of parameter uncertainties. Assimilation of large amounts of simultaneous data amplifies the negative effects of Monte Carlo errors. Distance-based localization is the conventional remedy for this problem. However, it has its own drawbacks: for example, it removes true long-range correlations, and it is very difficult to apply to data that do not have a specific physical location. Use of lower-fidelity models reduces the computational cost per ensemble member and therefore makes it possible to reduce Monte Carlo errors by increasing the ensemble size, but it also adds to the modeling error. Multilevel data assimilation (MLDA) uses a selection of models that form hierarchies of both computational cost and computational accuracy, and tries to obtain a better balance between Monte Carlo errors and modeling errors. In this PhD project, several MLDA algorithms were developed and their quality for assimilation of inverted seismic data was assessed on simplified reservoir problems. Utilization of multilevel models entails the introduction of additional numerical errors (multilevel modeling error, MLME) on top of the already existing numerical errors. Several computationally inexpensive methods were devised to partially account for MLME in the context of multilevel data assimilation. They were also investigated in simplified reservoir history-matching problems. Finally, one of the novel MLDA algorithms was chosen and its performance was assessed on a realistic reservoir history-matching problem. (Doctoral thesis)
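    The cost-accuracy trade-off that MLDA exploits can be illustrated with the two-level Monte Carlo estimator that underlies it. This is plain multilevel Monte Carlo on invented toy models, not the thesis' assimilation algorithms:

```python
import numpy as np

rng = np.random.default_rng(4)

fine = lambda m: np.sin(m) + 0.05 * m ** 2   # accurate but expensive model
coarse = lambda m: np.sin(m)                 # cheap low-fidelity model

m = rng.normal(1.0, 0.3, 4000)               # prior parameter ensemble

# single-level: only a small fine-model ensemble fits the budget
single = fine(m[:100]).mean()

# two-level: large coarse ensemble + small paired correction ensemble
level0 = coarse(m).mean()                    # low Monte Carlo error, biased
corr = (fine(m[:100]) - coarse(m[:100])).mean()  # telescopic bias correction
multi = level0 + corr
```

    Because the fine-coarse difference has much smaller variance than the fine output itself, the correction term needs far fewer expensive evaluations, which is the balance between Monte Carlo error and modeling error the abstract describes.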