40 research outputs found

    Correntropy: Answer to non-Gaussian noise in modern SLAM applications?

    Non-Gaussian noise and outliers are intrinsic to modern Simultaneous Localization and Mapping (SLAM) applications. Despite the many algorithms proposed for SLAM, addressing this problem has become crucial in modern robotics, and this work does so by incorporating correntropy into SLAM. Before correntropy, several approaches to non-Gaussian noise were proposed, with significant progress over time, but the underlying assumption of Gaussianity may not hold in real-life robotics applications. Most modern SLAM algorithms propose the `best' estimates given a set of sensor measurements. Beyond the non-Gaussian noise itself, our work tackles two harder questions for SLAM: (a) if one of the sensors produces faulty measurements over time (where `faulty' measurements can be non-Gaussian in nature), how should a SLAM framework adapt to such scenarios? (b) if measurements are altered through manual intervention or by a third-party attacker in order to corrupt the overall estimate, how should a SLAM system respond (the self-security aspect of SLAM)? We explore correntropy as an answer to both problems in popular filtering-based approaches such as the Kalman Filter (KF) and the Extended Kalman Filter (EKF), which address the `localization' part of SLAM. We then propose a framework for fusing odometries computed individually from a stereo sensor and a Lidar sensor (Iterative Closest Point (ICP) based odometry), and we demonstrate the effectiveness of correntropy in this framework, especially when a third-party attacker attempts to corrupt the Lidar-computed odometry.
    We extend the use of correntropy to the `mapping' part of SLAM (registration), which is the highlight of our work. Although registration is a well-established problem, earlier approaches are inefficient under large rotations and translations, and prior state-of-the-art approaches fail when the 3D datasets being aligned are corrupted with non-Gaussian (shot/impulse) noise. Our work yields a new variant of ICP, which we name Correntropy Similarity Matrix ICP (CoSM-ICP), that is robust to large translations and rotations as well as to shot/impulse noise. Our results show how this variant outperforms the other ICP variants under large rotations and translations and under heavy outliers/non-Gaussian noise. In addition, we deploy the CoSM algorithm to compute the extrinsic calibration of Lidar-stereo and Lidar-camera sensor pairs using a planar checkerboard in a single frame. Overall, the results verify how effectively our correntropy-based approach tackles non-Gaussian, shot, and impulse noise in robotics applications.
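The robustness argument above rests on the standard definition of sample correntropy, the mean Gaussian-kernel similarity between two signals. The sketch below (function name and parameter values are our own, not from the thesis) contrasts correntropy with MSE when a small fraction of samples carries impulse/shot noise:

```python
import numpy as np

def correntropy(x, y, sigma=1.0):
    """Sample correntropy: mean Gaussian-kernel similarity between x and y."""
    e = np.asarray(x, dtype=float) - np.asarray(y, dtype=float)
    return np.mean(np.exp(-e**2 / (2.0 * sigma**2)))

rng = np.random.default_rng(0)
x = rng.normal(size=1000)
y_clean = x + 0.01 * rng.normal(size=1000)
y_shot = y_clean.copy()
y_shot[:50] += 50.0  # impulse ("shot") noise on 5% of the samples

mse_clean = np.mean((x - y_clean) ** 2)
mse_shot = np.mean((x - y_shot) ** 2)   # blows up with the outliers
c_clean = correntropy(x, y_clean)
c_shot = correntropy(x, y_shot)         # barely moves: kernel bounds each term
```

Because each kernel term is bounded by 1, a gross outlier can subtract at most 1/N from the similarity, whereas its squared error enters the MSE unbounded; this is the property the filtering and registration variants exploit.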

    Investigation of the performance of multi-input multi-output detectors based on deep learning in non-Gaussian environments

    The next generation of wireless cellular networks must be energy efficient, extremely reliable, and low latency, motivating algorithms based on deep neural networks (DNNs) with better bit error rate (BER) or symbol error rate (SER) performance than traditional, complex multi-antenna or multi-input multi-output (MIMO) detectors. This paper examines deep neural networks and deep iterative detectors such as OAMP-Net, built on information-theoretic criteria such as the maximum correntropy criterion (MCC), for implementing MIMO detectors in non-Gaussian environments; the results show that the proposed method achieves better BER/SER performance.
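The appeal of the MCC in non-Gaussian environments is that, used as a training objective, it is bounded: one wild error sample cannot dominate the loss the way it dominates MSE. A minimal sketch of such a loss (the function name and kernel width are illustrative assumptions, not the paper's implementation):

```python
import numpy as np

def mcc_loss(err, sigma=1.0):
    """MCC as a loss: 1 minus the mean Gaussian kernel of the error.
    Bounded in [0, 1), so no single wild error can dominate the objective."""
    err = np.asarray(err, dtype=float)
    return 1.0 - np.mean(np.exp(-err**2 / (2.0 * sigma**2)))

small_errs = np.array([0.1, -0.1, 0.05])
loss_small = mcc_loss(small_errs)                  # close to 0
loss_wild = mcc_loss(np.array([0.1, -0.1, 1e6]))   # one huge error: loss stays < 1
mse_wild = np.mean(np.array([0.1, -0.1, 1e6]) ** 2)  # MSE explodes instead
```

In a DNN detector this loss would simply replace the MSE term during training; its gradient for large errors decays to zero, which is what suppresses impulsive-noise samples.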

    Short-Term Wind Speed Forecasting via Stacked Extreme Learning Machine With Generalized Correntropy

    Wind speed forecasting plays an important role in industrial informatics, supporting the control and operation of renewable power systems. However, it is increasingly difficult to handle the large-scale datasets generated in these forecasting applications while ensuring stable computing performance. In response, this paper proposes a practical approach that combines the extreme learning machine (ELM) with a deep-learning model. ELM is a computing paradigm that trains neural networks (NNs) with fast training speed and good generalization performance; the stacked ELM (SELM) is an advanced ELM algorithm in a deep-learning framework that substantially reduces memory consumption. This paper develops an enhanced SELM by replacing the Euclidean-norm mean square error (MSE) criterion in ELM with the generalized correntropy criterion to further improve forecasting performance. The advantage rests mainly on one point: generalized correntropy is a stable and robust nonlinear similarity measure, well suited to forecasting wind speed from industrially measured values that may contain outliers. Experimental results on short-term and ultra-short-term forecasting of real wind speed data show that the proposed approach outperforms both traditional and more recent methods.
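Replacing MSE with a generalized correntropy criterion typically leads to a reweighted least-squares solve rather than a plain one. The sketch below (a generic robust linear fit under our own parameter choices, not the paper's SELM training procedure) shows the generalized kernel exp(-|e/beta|^alpha), which recovers the Gaussian kernel at alpha = 2, and a fixed-point solve that shrinks outlier influence:

```python
import numpy as np

def gen_correntropy_kernel(e, alpha=2.0, beta=1.0):
    """Generalized correntropy kernel exp(-|e/beta|**alpha); alpha=2 recovers
    the Gaussian kernel, smaller alpha gives heavier-tailed robustness."""
    return np.exp(-np.abs(np.asarray(e, dtype=float) / beta) ** alpha)

rng = np.random.default_rng(1)
x = rng.uniform(-1.0, 1.0, 500)
y = 2.0 * x + 0.05 * rng.normal(size=500)
y[:25] = -5.0 * x[:25]  # 5% corrupted samples following a wrong model

# Ordinary least squares is pulled away from the true slope (2.0) ...
w_ls = (x @ y) / (x @ x)

# ... while a fixed-point (reweighted least squares) solve of the
# generalized-correntropy objective shrinks the outliers' influence.
w = w_ls
for _ in range(50):
    wts = gen_correntropy_kernel(y - w * x, alpha=2.0, beta=0.7)
    w = ((wts * x) @ y) / ((wts * x) @ x)
```

The same reweighting idea applies to an ELM output layer: the hidden-layer design matrix takes the role of `x` and the weights damp rows whose residuals fall in the kernel's tail.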

    Novel Computational Methods for State Space Filtering

    The state-space formulation for time-dependent models has long been used in various applications in science and engineering. While the classical Kalman filter (KF) provides optimal posterior estimation under linear Gaussian models, filtering in nonlinear and non-Gaussian environments remains challenging. Based on Monte Carlo approximation, the classical particle filter (PF) can provide more precise estimation under nonlinear non-Gaussian models, but it suffers from particle degeneracy. Drawing from optimal transport theory, the stochastic map filter (SMF) offers a solution to this problem, but its performance is limited by the restricted flexibility of nonlinear map parameterisation. To account for these issues, a hybrid particle-stochastic map filter (PSMF) is first proposed in this thesis, where the two parts of the split likelihood are assimilated by the PF and SMF, respectively. Systematic resampling and smoothing are employed to alleviate the particle degeneracy caused by the PF. Furthermore, two PSMF variants based on linear and nonlinear maps (PSMF-L and PSMF-NL) are proposed, and their filtering performance is compared with various benchmark filters under different nonlinear non-Gaussian models.
    Although they achieve accurate filtering results, the particle-based filters require expensive computation because of the large number of samples involved. Instead, robust Kalman filters (RKFs) provide efficient solutions for linear models with heavy-tailed noise by adopting the recursive estimation framework of the KF. To exploit the stochastic characteristics of the noise, heavy-tailed distributions that can fit various practical noises constitute a viable solution. Hence, this thesis also introduces a novel RKF framework, RKF-SGαS, where the signal noise is assumed to be Gaussian and the heavy-tailed measurement noise is modelled by the sub-Gaussian α-stable (SGαS) distribution. The corresponding joint posterior distribution of the state vector and auxiliary random variables is estimated by the variational Bayesian (VB) approach. Four different minimum mean square error (MMSE) estimators of the scale function are presented. The RKF-SGαS is compared with state-of-the-art RKFs under three kinds of heavy-tailed measurement noise, and the simulation results demonstrate its estimation accuracy and efficiency.
    One notable limitation of the proposed RKF-SGαS is its reliance on precise model parameters; substantial model errors can impede its filtering performance. Therefore, this thesis also introduces a data-driven RKF method, referred to as RKFnet, which combines the conventional RKF framework with a deep learning technique. An unsupervised scheduled sampling (USS) technique is proposed to improve the stability of the training process. Furthermore, the advantages of the proposed RKFnet are quantified with respect to various traditional RKFs.
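For reference, the classical bootstrap PF that the thesis builds on can be sketched in a few lines. This is a minimal illustration on a scalar random-walk model of our own choosing (and it uses multinomial resampling for brevity, where the thesis employs systematic resampling):

```python
import numpy as np

rng = np.random.default_rng(42)

# Scalar random-walk state with noisy observations:
#   x_t = x_{t-1} + q_t,   y_t = x_t + r_t
T, N = 50, 2000
q_std, r_std = 0.1, 0.5
x_true = np.cumsum(q_std * rng.normal(size=T))
y = x_true + r_std * rng.normal(size=T)

particles = np.zeros(N)
estimates = []
for t in range(T):
    # 1) propagate particles through the transition model
    particles = particles + q_std * rng.normal(size=N)
    # 2) weight by the observation likelihood (log-domain for stability)
    logw = -0.5 * ((y[t] - particles) / r_std) ** 2
    w = np.exp(logw - logw.max())
    w /= w.sum()
    estimates.append(w @ particles)
    # 3) resample to fight weight degeneracy (multinomial here for brevity)
    particles = rng.choice(particles, size=N, p=w)

estimates = np.array(estimates)
rmse = np.sqrt(np.mean((estimates - x_true) ** 2))
meas_rmse = np.sqrt(np.mean((y - x_true) ** 2))
```

Even this basic filter beats the raw measurements; the degeneracy the abstract mentions appears when the likelihood is far more peaked than the prior, which is what PSMF's split-likelihood assimilation targets.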

    Novel Deep Learning Techniques For Computer Vision and Structure Health Monitoring

    This thesis proposes novel techniques for building a generic framework for both regression and classification tasks in vastly different application domains, such as computer vision and civil engineering. Multiple frameworks are proposed and combined into a complex deep network design to provide a complete solution to a wide variety of problems. The experimental results demonstrate significant improvements in accuracy and efficiency for all the proposed techniques.

    Kernel Truncated Regression Representation for Robust Subspace Clustering

    Subspace clustering aims to group data points into multiple clusters, each of which corresponds to one subspace. Most existing subspace clustering approaches assume that the input data lie on linear subspaces; in practice, however, this assumption usually does not hold. To achieve nonlinear subspace clustering, we propose a novel method called kernel truncated regression representation. Our method consists of four steps: 1) projecting the input data into a hidden space, where each data point can be linearly represented by the other data points; 2) calculating the linear representation coefficients of the data representations in the hidden space; 3) truncating the trivial coefficients to achieve robustness and block-diagonality; and 4) executing the graph-cutting operation on the coefficient matrix by solving a graph Laplacian problem. Our method has the advantages of a closed-form solution and the capacity to cluster data points that lie on nonlinear subspaces. The first advantage makes our method efficient in handling large-scale datasets, and the second enables it to conquer the nonlinear subspace clustering challenge. Extensive experiments on six benchmarks demonstrate the effectiveness and efficiency of the proposed method in comparison with current state-of-the-art approaches.
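The four-step pipeline can be sketched schematically. Note this is only an illustration of steps 1-3 under our own assumptions (an RBF kernel, a ridge-style closed form for the representation coefficients, and illustrative parameter values), not the paper's exact formulation; we stop before the graph cut and simply check that truncation yields a block-structured affinity on data no linear method can separate:

```python
import numpy as np

rng = np.random.default_rng(3)

# Two concentric circles: linear subspace clustering fails here, but in a
# kernel-induced space the clusters become (near) block-separable.
n = 60
theta = rng.uniform(0, 2 * np.pi, 2 * n)
radius = np.r_[np.ones(n), 3.0 * np.ones(n)]
X = np.c_[radius * np.cos(theta), radius * np.sin(theta)]
X += 0.05 * rng.normal(size=X.shape)

# Steps 1-2: RBF kernel, then a ridge-style self-representation
# C ~ argmin ||Phi(X) - Phi(X) C||^2 + lam ||C||^2 in the kernel space.
sq = ((X[:, None] - X[None]) ** 2).sum(-1)
K = np.exp(-sq / (2 * 0.5 ** 2))
lam = 0.1
C = np.linalg.solve(K + lam * np.eye(2 * n), K)
np.fill_diagonal(C, 0.0)  # a point should not represent itself

# Step 3: truncate all but the k largest coefficients per column.
k = 5
A = np.zeros_like(C)
idx = np.argsort(-np.abs(C), axis=0)[:k]
for j in range(C.shape[1]):
    A[idx[:, j], j] = np.abs(C[idx[:, j], j])
A = 0.5 * (A + A.T)  # symmetric affinity, ready for graph cutting (step 4)

labels = np.r_[np.zeros(n, int), np.ones(n, int)]
within = A[labels[:, None] == labels[None]].sum()
across = A[labels[:, None] != labels[None]].sum()
```

A near-block-diagonal affinity (within-cluster mass dominating cross-cluster mass) is exactly what makes the subsequent spectral graph cut recover the two circles.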

    Mathematics and Digital Signal Processing

    Modern computer technology has opened up new opportunities for the development of digital signal processing methods. The applications of digital signal processing have expanded significantly and today include audio and speech processing, sonar, radar, and other sensor array processing, spectral density estimation, statistical signal processing, digital image processing, signal processing for telecommunications, control systems, biomedical engineering, and seismology, among others. This Special Issue aims at wide coverage of digital signal processing problems, from mathematical modeling to the implementation of problem-oriented systems. The basis of digital signal processing is digital filtering. Wavelet analysis implements multiscale signal processing and is used to solve applied problems of de-noising and compression. Processing of visual information, including image and video processing and pattern recognition, is actively used today in robotic systems and industrial process control. Improving digital signal processing circuits and developing new signal processing systems can improve the technical characteristics of many digital devices. The development of new methods of artificial intelligence, including artificial neural networks and brain-computer interfaces, opens up new prospects for the creation of smart technology. This Special Issue contains the latest technological developments in mathematics and digital signal processing; the results are of interest to researchers in applied mathematics and to developers of modern digital signal processing systems.