3 research outputs found

    Benchmarking Particle Filter Algorithms for Efficient Velodyne-Based Vehicle Localization

    Keeping a vehicle well-localized within a prebuilt map is at the core of any autonomous vehicle navigation system. In this work, we show that both standard SIR sampling and rejection-based optimal sampling are suitable for efficient (10 to 20 ms) real-time pose tracking, without feature detection, using raw point clouds from a 3D LiDAR. Motivated by the large amount of information captured by these sensors, we perform a systematic statistical analysis of how many points are actually required to reach an optimal trade-off between efficiency and positioning accuracy. Furthermore, for initialization under adverse conditions, e.g., poor GPS signal in urban canyons, we identify the optimal particle filter settings required to ensure convergence. Our findings include that a decimation factor between 100 and 200 on incoming point clouds provides large savings in computational cost with a negligible loss in localization accuracy for a VLP-16 scanner. Furthermore, an initial density of ∼2 particles/m² is required to achieve 100% convergence success for large-scale (∼100,000 m²) outdoor global localization without any additional hint from GPS or magnetic field sensors. All implementations have been released as open-source software.
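    The two operating points quantified above translate directly into configuration choices. Below is a minimal Python sketch, not the authors' released code, of fixed-factor cloud decimation and dense uniform particle seeding followed by one standard SIR step; all function names, bounds, and defaults are illustrative assumptions.

        # Illustrative sketch (not the authors' released code) of the two
        # settings the abstract quantifies: fixed-factor decimation of the
        # incoming cloud and dense uniform particle seeding, plus one SIR step.
        import numpy as np

        def decimate(cloud: np.ndarray, factor: int = 150) -> np.ndarray:
            """Keep every `factor`-th point of an (N, 3) LiDAR cloud.

            A factor of 100-200 is the range the paper reports as nearly
            lossless for a VLP-16 while sharply cutting per-update cost.
            """
            return cloud[::factor]

        def init_particles(density: float = 2.0, side_m: float = 316.0) -> np.ndarray:
            """Seed ~`density` particles per m^2 over a square search area.

            ~2 particles/m^2 is the density reported as sufficient for 100%
            convergence on a ~100,000 m^2 map without GPS or compass hints.
            """
            n = int(density * side_m ** 2)
            xy = np.random.uniform(0.0, side_m, size=(n, 2))
            yaw = np.random.uniform(-np.pi, np.pi, size=(n, 1))
            return np.hstack([xy, yaw])  # each row: (x, y, yaw)

        def sir_step(particles, weights, likelihood):
            """One SIR update: reweight by observation likelihood, then resample."""
            w = weights * likelihood(particles)
            w /= w.sum()
            idx = np.random.choice(len(particles), size=len(particles), p=w)
            return particles[idx], np.full(len(particles), 1.0 / len(particles))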

    Robust unsupervised learning using kernels

    This thesis studies deep connections between statistical robustness and machine learning techniques, in particular the relationship between a particular kernel (the Gaussian kernel) and the robustness of the kernel-based learning methods that use it. It shows that estimating the mean in the feature space with the RBF kernel is equivalent to robust estimation of the mean in the data space with the Welsch M-estimator. Building on this idea, new robust kernels for machine learning algorithms are designed and implemented: Tukey's, Andrews', and Huber's robust kernels, each corresponding to Tukey's, Andrews', and Huber's M-estimator, respectively. Kernel-based algorithms are an important tool widely applied to machine learning and information retrieval problems, including clustering, latent topic analysis, recommender systems, image annotation, and content-based image retrieval, among others. Robustness is the ability of a statistical estimation or machine learning method to deal with noise and outliers. There is a strong theory of robustness in statistics; however, it has received little attention in machine learning. A systematic evaluation of the robustness of kernel-based clustering algorithms shows that some robust kernels, including Tukey's and Andrews', perform on par with state-of-the-art algorithms.
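    One common way to turn a bounded M-estimator loss rho into a similarity is k(r) = 1 - rho(r)/rho_max, under which Welsch's rho pairs with the Gaussian kernel and Tukey's and Andrews' losses yield compactly supported kernels (Huber's loss is unbounded, so it is omitted from this normalized form). The sketch below follows that construction; the normalization and default tuning constants are illustrative assumptions, not necessarily the thesis' exact definitions.

        # Illustrative robust kernels built from classical M-estimator losses.
        import numpy as np

        def rbf_kernel(r, sigma=1.0):
            """Gaussian kernel on distance r; its induced loss is Welsch's rho."""
            return np.exp(-(r / sigma) ** 2)

        def tukey_kernel(r, c=4.685):
            """From Tukey's biweight: compact support, outliers beyond c ignored."""
            return np.where(np.abs(r) <= c, (1 - (r / c) ** 2) ** 3, 0.0)

        def andrews_kernel(r, c=1.339):
            """From Andrews' sine loss: vanishes beyond c*pi."""
            return np.where(np.abs(r) <= c * np.pi, (1 + np.cos(r / c)) / 2, 0.0)

        # The Welsch/RBF link in one line: a kernel-weighted mean (one
        # IRLS-style step) barely moves under a gross outlier, unlike the
        # plain mean.
        x = np.array([0.0, 0.1, -0.1, 10.0])        # 10.0 is an outlier
        w = rbf_kernel(np.abs(x - np.median(x)))
        print(x.mean(), (w * x).sum() / w.sum())    # ~2.48 vs ~0.0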

    Unconstrained Learning Machines

    With the use of information technology in industry, a new need has arisen: analyzing large-scale data sets and automating data analysis that was once performed by human intuition and simple analog processing machines. The new generation of computer programs now has to outperform its predecessors in detecting complex and non-trivial patterns buried in data warehouses. Machine learning (ML) techniques such as Neural Networks (NNs) and Support Vector Machines (SVMs) have shown remarkable performance on supervised learning problems over the past couple of decades (e.g., anomaly detection, classification and identification, interpolation and extrapolation).

    Nevertheless, many such techniques have ill-conditioned structures that lack adaptability for processing exotic data or very large amounts of data. Some techniques cannot even process data in an on-line fashion. Furthermore, as the processing power of computers increases, there is a pressing need for ML algorithms to perform supervised learning tasks in less time over even larger data sets, which means that the time and memory complexities of these algorithms must be improved.

    The aim of this research is to construct an improved type of SVM-like algorithm for tasks such as nonlinear classification and interpolation that is more scalable, error-tolerant, and accurate. Additionally, this family of algorithms must be able to compute solutions within a controlled time, preferably small with respect to modern computational technology. These new algorithms should also be versatile enough to have useful applications in engineering, meteorology, or quality control.

    This dissertation introduces a family of SVM-based algorithms named Unconstrained Learning Machines (ULMs), which attempt to solve the robustness, scalability, and timing issues of traditional supervised learning algorithms. ULMs are not based on geometrical analogies (e.g., SVMs) or on the replication of biological models (e.g., NNs); their construction is based strictly on statistical considerations from the recently developed statistical learning theory. Like SVMs, ULMs use kernel methods extensively to process exotic and/or non-numerical objects stored in databases and to search for hidden patterns in data with tailored measures of similarity.

    ULMs are applied to a variety of problems in manufacturing engineering and in meteorology. The robust nonlinear nonparametric interpolation abilities of ULMs allow for the representation of sub-millimetric deformations on the surface of manufactured parts, the selection of conforming objects, and the diagnosis and modeling of manufacturing processes. ULMs also play a role in assimilating the system states of computational weather models, removing intrinsic noise without any knowledge of the underlying mathematical models and helping to establish more accurate forecasts.
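    The dissertation's exact ULM formulation is not reproduced in this abstract. As an illustration of the general idea, replacing the constrained SVM quadratic program with an unconstrained kernel problem whose solution has a predictable, controlled cost, the sketch below uses kernel ridge regression, a standard unconstrained kernel learner; all names and parameters are assumptions.

        # A minimal sketch of an unconstrained kernel learner: instead of a
        # QP with constraints, solve one linear system (K + lam*I) a = y.
        import numpy as np

        def rbf(X, Y, gamma=1.0):
            """Gaussian kernel matrix between row-sets X and Y."""
            d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
            return np.exp(-gamma * d2)

        class KernelRidge:
            """Unconstrained kernel interpolator with closed-form training.

            Solving a linear system instead of a QP gives controlled timing:
            one O(n^3) factorization, no iterative constraint handling.
            """
            def __init__(self, gamma=1.0, lam=1e-3):
                self.gamma, self.lam = gamma, lam

            def fit(self, X, y):
                self.X = X
                K = rbf(X, X, self.gamma)
                self.alpha = np.linalg.solve(K + self.lam * np.eye(len(X)), y)
                return self

            def predict(self, Xnew):
                return rbf(Xnew, self.X, self.gamma) @ self.alpha

        # Usage: interpolating noisy surface measurements (synthetic data,
        # loosely echoing the manufactured-part deformation application).
        X = np.random.rand(200, 2)
        y = np.sin(6 * X[:, 0]) * np.cos(6 * X[:, 1]) + 0.01 * np.random.randn(200)
        model = KernelRidge(gamma=10.0, lam=1e-2).fit(X, y)
        print(model.predict(X[:5]))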