1,038 research outputs found

    Distributed Maximum Likelihood Sensor Network Localization

    We propose a class of convex relaxations to solve the sensor network localization problem, based on a maximum likelihood (ML) formulation. This class, as well as the tightness of the relaxations, depends on the noise probability density function (PDF) of the collected measurements. We derive a computationally efficient edge-based version of this ML convex relaxation class and design a distributed algorithm that enables the sensor nodes to solve these edge-based convex programs locally by communicating only with their close neighbors. This algorithm relies on the alternating direction method of multipliers (ADMM): it converges to the centralized solution, can run asynchronously, and is resilient to computation errors. Finally, we compare our proposed distributed scheme with other available methods, both analytically and numerically, and argue the added value of ADMM, especially for large-scale networks.
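The edge-based relaxation and its distributed solver are specific to the paper, but the ADMM pattern it builds on can be illustrated on a toy consensus problem. The sketch below is illustrative only (not the paper's localization program): scaled-form consensus ADMM minimizing sum_i (x - a_i)^2, where each node holds one term.

```python
import numpy as np

def consensus_admm(a, rho=1.0, iters=100):
    """Minimize sum_i (x - a_i)^2 by consensus ADMM: each node i keeps a
    local copy x_i constrained to agree with a global variable z."""
    n = len(a)
    x = np.zeros(n)   # local variables, one per node
    u = np.zeros(n)   # scaled dual variables
    z = 0.0           # global consensus variable
    for _ in range(iters):
        # local updates: each node solves its own small quadratic problem
        x = (2 * a + rho * (z - u)) / (2 + rho)
        # consensus update: average of local variables plus duals
        z = np.mean(x + u)
        # dual updates
        u = u + x - z
    return z

# the consensus minimizer of sum_i (x - a_i)^2 is the mean of a
```

Each node only needs its own datum and the shared z, which is what makes the scheme distributable over a network.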

    Matrix inversion speed up with CUDA

    In this project, several mathematical algorithms are developed to obtain a matrix inversion method that combines CUDA's parallel architecture with MATLAB and is faster than MATLAB's built-in matrix inverse function. This matrix inversion method is intended to be used for image reconstruction, as a faster alternative to iterative methods with comparable quality. The algorithms developed in this project are Gauss-Jordan elimination, Cholesky decomposition, Gaussian elimination and matrix multiplication. Gauss-Seidel is also featured in the report, but only as an alternative method of finding the inverse, since it was not developed in the project.
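The report's CUDA/MATLAB implementations are not reproduced here, but the Cholesky route to the inverse is easy to sketch in NumPy. This is a minimal illustration of the idea (factor A = L L^T, then invert by two triangular solves), not the project's GPU code:

```python
import numpy as np

def cholesky_inverse(A):
    """Invert a symmetric positive-definite matrix via its Cholesky factor:
    A = L L^T, so A^{-1} = (L^T)^{-1} L^{-1}, obtained by two triangular
    solves against the identity."""
    n = A.shape[0]
    L = np.linalg.cholesky(A)
    Y = np.linalg.solve(L, np.eye(n))   # forward substitution: L Y = I
    return np.linalg.solve(L.T, Y)      # back substitution: L^T X = Y

A = np.array([[4.0, 2.0],
              [2.0, 3.0]])
# cholesky_inverse(A) matches np.linalg.inv(A)
```

Triangular solves parallelize well column-by-column, which is one reason the factor-then-solve structure maps naturally onto a GPU.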

    Best Linear Unbiased Estimation Fusion with Constraints

    Estimation fusion, or data fusion for estimation, is the problem of how to best utilize useful information contained in multiple data sets for the purpose of estimating an unknown quantity: a parameter or a process. Estimation fusion with constraints gives rise to challenging theoretical problems given observations from multiple geometrically dispersed sensors. Under dimensionality constraints, how should data be preprocessed at each local sensor to achieve the best estimation accuracy at the fusion center? Under communication bandwidth constraints, how should local sensor data be quantized to minimize the estimation error at the fusion center? Under storage constraints, how should state estimates at the fusion center be optimally updated with out-of-sequence measurements (OOSM)? And, again under storage constraints, how can the OOSM update algorithm be applied to multisensor multitarget tracking in clutter? The present work addresses these topics by applying best linear unbiased estimation (BLUE) fusion. We propose optimal data compression by reducing sensor data from a higher dimension to a lower dimension with minimal or no performance loss at the fusion center. For single-sensor and some particular multiple-sensor systems, we obtain the explicit optimal compression rule. For a multisensor system with a general dimensionality requirement, we propose a Gauss-Seidel iterative algorithm to search for the optimal compression rule. Another way to accomplish sensor data compression is to find an optimal sensor quantizer. Using BLUE fusion rules, we develop optimal sensor data quantization schemes according to the bit rate constraints on communication between each sensor and the fusion center. For a dynamic system, we also establish how to perform state estimation and sensor quantization updates simultaneously, along with a closed-form recursion for a linear system with additive white Gaussian noise.
    A globally optimal OOSM update algorithm and a constrained optimal update algorithm are derived to solve one-lag as well as multi-lag OOSM update problems. To extend the OOSM update algorithms to multisensor multitarget tracking in clutter, we also study the performance of the OOSM update associated with the Probabilistic Data Association (PDA) algorithm.
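For a single scalar quantity observed by independent unbiased sensors, BLUE fusion reduces to familiar inverse-variance weighting. A minimal sketch (illustrative function names; the dissertation treats the far more general constrained vector case):

```python
import numpy as np

def blue_fuse(estimates, variances):
    """Fuse independent unbiased estimates of the same scalar with the
    best linear unbiased estimator: weight each estimate by the inverse
    of its variance, then normalize."""
    w = 1.0 / np.asarray(variances, dtype=float)
    fused = np.sum(w * np.asarray(estimates, dtype=float)) / np.sum(w)
    fused_var = 1.0 / np.sum(w)   # never worse than the best single sensor
    return fused, fused_var

# two sensors with equal variance: the fused estimate is the plain average,
# and the fused variance is half of each sensor's variance
est, var = blue_fuse([1.0, 3.0], [2.0, 2.0])
```

The fused variance 1/sum(1/r_i) shows why adding even a noisy sensor can only help under this rule.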

    Performance Analysis of Iterative Channel Estimation and Multiuser Detection in Multipath DS-CDMA Channels

    This paper examines the performance of decision-feedback-based iterative channel estimation and multiuser detection in channel-coded aperiodic DS-CDMA systems operating over multipath fading channels. First, explicit expressions describing the performance of channel estimation and of parallel interference cancellation based multiuser detection are developed. These results are then combined to characterize the evolution of the performance of a system that iterates among channel estimation, multiuser detection and channel decoding. Sufficient conditions for convergence of this system to a unique fixed point are developed. Comment: To appear in the IEEE Transactions on Signal Processing.
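The paper's analysis covers coded multipath channels, but the core parallel interference cancellation step can be shown on a toy synchronous CDMA model. In this sketch (illustrative and noiseless, with assumed unit-energy spreading codes), each iteration subtracts every other user's reconstructed contribution in parallel:

```python
import numpy as np

def pic_detect(y, S, iters=5):
    """Parallel interference cancellation for a toy synchronous CDMA model
    y = S b + n, where the columns of S are unit-energy spreading codes.
    Each stage cancels the estimated interference of all other users."""
    R = S.T @ S                       # code cross-correlation matrix
    mf = S.T @ y                      # matched-filter outputs
    b = np.sign(mf)                   # initial hard decisions
    off = R - np.diag(np.diag(R))     # off-diagonal (interference) part
    for _ in range(iters):
        b = np.sign(mf - off @ b)     # cancel interference for all users at once
    return b
```

Iterating hard decisions this way is what couples the stages and motivates the paper's fixed-point convergence analysis.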

    Fast exact variable order affine projection algorithm

    Variable order affine projection algorithms have recently been presented for use when not only the convergence speed of the algorithm has to be adjusted but also its computational cost and its final residual error. These kinds of affine projection (AP) algorithms improve the steady-state performance of the standard AP algorithm by reducing the residual mean square error. Furthermore, these algorithms optimize computational cost by dynamically adjusting their projection order to convergence speed requirements. The main cost of the standard AP algorithm is due to the matrix inversion that appears in the coefficient update equation, and most efforts to decrease the computational cost of these algorithms have focused on optimizing this matrix inversion. This paper deals with optimizing the computational cost of variable order AP algorithms by recursive calculation of the inverse signal matrix; thus, a fast exact variable order AP algorithm is proposed. Exact iterative expressions to calculate the inverse matrix when the algorithm projection order either increases or decreases are incorporated into a variable order AP algorithm, leading to a reduced-complexity implementation. The simulation results show that the proposed algorithm performs similarly to the variable order AP algorithms while having a lower computational complexity. © 2012 Elsevier B.V. All rights reserved. Partially supported by TEC2009-13741, PROMETEO 2009/0013, GV/2010/027, ACOMP/2010/006 and UPV PAID-06-09. Ferrer Contreras, M.; Gonzalez, A.; Diego Antón, MD.; Piñero Sipán, MG. (2012). Fast exact variable order affine projection algorithm. Signal Processing 92(9):2308-2314. https://doi.org/10.1016/j.sigpro.2012.03.007
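The standard AP update that the paper accelerates can be sketched directly; the P x P inversion in this plain version is exactly the operation the proposed fast algorithm replaces with exact recursions when the order changes. Function and parameter names here are illustrative:

```python
import numpy as np

def ap_step(w, X, d, mu=0.5, delta=1e-4):
    """One update of the affine projection algorithm of order P.
    w: (L,) current filter coefficients,
    X: (L, P) matrix whose columns are the P most recent input regressors,
    d: (P,) the P most recent desired samples.
    The P x P solve below is the per-iteration cost that fast variable
    order variants compute recursively instead of from scratch."""
    e = d - X.T @ w                                           # a-priori errors
    G = np.linalg.solve(X.T @ X + delta * np.eye(X.shape[1]), e)
    return w + mu * X @ G                                     # projected update
```

Raising P speeds convergence for correlated inputs but grows the solve, which is why adapting the order at runtime pays off.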

    Adaptive DS-CDMA multiuser detection for time variant frequency selective Rayleigh fading channel

    Current digital wireless mobile systems such as IS-95, which are based on direct sequence Code Division Multiple Access (DS-CDMA) technology, will not be able to meet the growing demand for multimedia services due to their low information exchange rate. Their capacity is also limited by multiple access interference (MAI) signals. This work focuses on the development of adaptive algorithms for multiuser detection (MUD) and interference suppression for wideband DS-CDMA systems over time-variant frequency selective fading channels. In addition, channel acquisition and delay estimation techniques are developed to combat the uncertainty introduced by the wireless propagation channel. This work emphasizes fast and simple techniques that can meet practical needs for high data rate signal detection. Most existing approaches in the literature are not suitable for the large delay spread in wideband systems due to high computational/hardware complexity. A de-biasing decorrelator is developed whose computational complexity is greatly reduced without sacrificing performance. An adaptive bootstrap symbol-based signal separator is also proposed for a time-variant channel. These detectors achieve MUD for asynchronous, large delay spread, fading channels without training sequences. To achieve high data rate communication, a finite impulse response (FIR) filter based detector is presented for M-ary QAM modulated signals in a multipath Rayleigh fading channel. It is shown that the proposed detector provides stable performance for QAM signal detection with unknown fading and phase shift, and that it can easily be extended to the reception of any M-ary quadrature modulated signal. A minimum variance decorrelating (MVD) receiver with an adaptive channel estimator is presented in this dissertation. It provides performance comparable to a linear MMSE receiver even in a deep fading environment and can be implemented blindly.
    Using the MVD receiver as a building block, an adaptive multistage parallel interference cancellation (PIC) scheme and a successive interference cancellation (SIC) scheme were developed. The total number of stages is kept at a minimum as a result of accurate estimation of the interfering users at the earliest stages, which reduces the implementation complexity as well as the processing delay. Jointly with the MVD receiver, a new transmit diversity (TD) scheme, called TD-MVD, is proposed. This scheme improves performance without increasing the bandwidth. Unlike other TD techniques, the TD-MVD scheme has the inherent advantage of overcoming asynchronous multipath transmission. It brings flexibility to the design of TD antenna systems without strict signal coordination among the multiple transmissions, and is applicable to both existing and next-generation CDMA systems. A maximum likelihood based delay and channel estimation algorithm with reduced computational complexity is proposed. This algorithm uses a diagonal simplicity technique as well as the asymptotically uncorrelated property of the received signal in the frequency domain. In combination with oversampling, this scheme does not suffer from a singularity problem, and its performance quickly approaches the Cramer-Rao lower bound (CRLB) while maintaining a computational complexity as low as the order of the signal dimension.
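The baseline decorrelating detector underlying the de-biasing and MVD variants is compact enough to sketch for the synchronous case (the dissertation's asynchronous, fading setting is considerably harder). Names are illustrative:

```python
import numpy as np

def decorrelator(y, S):
    """Decorrelating multiuser detector for a toy synchronous CDMA model
    y = S b + n, with unit-energy spreading codes in the columns of S.
    Applying R^{-1} (R = S^T S) to the matched-filter outputs removes MAI
    entirely, at the cost of noise enhancement -- the effect that
    de-biasing and MMSE-style variants aim to mitigate."""
    R = S.T @ S                            # code cross-correlation matrix
    mf = S.T @ y                           # matched-filter outputs
    return np.sign(np.linalg.solve(R, mf)) # decorrelate, then hard-decide
```

In the noiseless case this recovers the transmitted bits exactly regardless of code cross-correlation, provided R is invertible.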

    Computation and Time constraints in Localization and Mapping Problems

    Research on simultaneous localization and mapping problems has been extensively carried out by the robotics community in the last decade, and several subproblems –like data association, map representation, dynamic environments or semantic mapping– have been more or less deeply investigated. One of the most important questions is the online execution of localization and mapping methods. Since observations are periodically captured by robot sensors, localization and mapping algorithms are constrained to complete the execution of an update before a new observation is available. In the literature, several partial contributions have been presented, most of them focused on the reduction of computational complexity, but no comprehensive discussion of real-time feasibility had previously been proposed. The reasons that make real-time feasibility difficult differ between localization and mapping problems, but a general criterion can be found. In this thesis we claim that a locality principle is a general design criterion for real-time or incremental execution of localization and mapping algorithms. The probabilistic robotics paradigm provides a unified formulation for the different problems and a conceptual framework for the application of the proposed criterion. Locality may be applied to perform temporal or spatial decomposition of the global estimation. This thesis provides a general perspective on real-time feasibility and identifies the locality principle as a general design criterion for algorithms to meet time constraints. The particular contributions of this thesis correspond to applications of the locality principle to specific problems. The Real-Time Particle Filter is an advanced version of the particle filter algorithm, conceived to achieve a tradeoff between time constraints and filter accuracy depending on the number of samples.
    This goal is achieved by partitioning the samples required to obtain the desired accuracy into sets, each of them corresponding to an observation, and by reconstructing the new sample set at the end of an estimation window. We propose two main contributions: first, an analysis of the efficiency of the resampling solution of the Real-Time Particle Filter through the concept of effective sample size; second, a method to compute the mixture weights that balances the effective sample size of the partition sets and is less prone to numerical instability. The second specific contribution is an incremental version of a maximum likelihood map estimator. The adopted technique combines stochastic gradient descent with an incremental tree parameterization, exploiting an efficient optimization technique and organizing the graph into a spanning tree structure suitable for decomposition. In this thesis, the incremental version of the original algorithm has been adapted, again using the locality principle. Local decomposition is achieved by selecting the portion of the network perturbed by the addition of a new constraint. Furthermore, the perturbation introduced by a gradient descent iteration is limited in regions that have already converged by adapting the learning rate. Finally, optimization is scheduled with a heuristic rule that controls the error increase in the constraint network. The constraint solver has been integrated with a map builder that extracts the constraint network from laser scans and represents the environment with a hybrid metric-topological map. While real-time feasibility is not guaranteed, the proposed incremental tree network optimizer is suitable for online execution; the algorithm converges faster than the previous version of the same algorithm and in several conditions performs better than other state-of-the-art methods. The final contribution is a parallel maximum likelihood algorithm for robot mapping.
    The proposed algorithm estimates the map by iterating a linearization step and the solution of the resulting linear system with Gauss-Seidel relaxation. The network is divided into connected clusters of local nodes, and the reordering induced by this decomposition transforms the linearized information matrix into block-border diagonal form; each diagonal block of the matrix can then be solved independently. The proposed parallel maximum likelihood algorithm can exploit the computation resources provided by commodity multi-core processors. Moreover, this solution can be applied to multi-robot mapping. The contributions presented in this dissertation outline a novel perspective on the real-time feasibility of robot localization and mapping methods, thus bringing these algorithmic techniques closer to applications.
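The Gauss-Seidel relaxation used in the linear-system step can be sketched in its basic sequential form; the thesis parallelizes it by reordering the information matrix into block-border diagonal form so clusters can be swept independently. This is the textbook iteration, not the thesis implementation:

```python
import numpy as np

def gauss_seidel(A, b, iters=100):
    """Solve A x = b by Gauss-Seidel relaxation: sweep the unknowns in
    order, updating each one with the most recent values of the others.
    Converges e.g. for diagonally dominant or symmetric positive-definite A."""
    n = len(b)
    x = np.zeros(n)
    for _ in range(iters):
        for i in range(n):
            # use already-updated x[:i] and not-yet-updated x[i+1:]
            x[i] = (b[i] - A[i, :i] @ x[:i] - A[i, i+1:] @ x[i+1:]) / A[i, i]
    return x

A = np.array([[4.0, 1.0],
              [1.0, 3.0]])
b = np.array([1.0, 2.0])
# iterates toward np.linalg.solve(A, b)
```

Because each update only reads neighboring entries, unknowns in independent diagonal blocks can be relaxed concurrently, which is the property the block-border reordering exposes.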