
    Investigation of the performance of multi-input multi-output detectors based on deep learning in non-Gaussian environments

    Get PDF
    The next generation of wireless cellular communication networks must be energy efficient, highly reliable, and low latency. This motivates detection algorithms based on deep neural networks (DNN), which can achieve better bit error rate (BER) or symbol error rate (SER) performance than traditional, computationally complex multi-antenna, or multi-input multi-output (MIMO), detectors. This paper examines deep neural networks and deep iterative detectors such as OAMP-Net, built on information-theoretic criteria such as the maximum correntropy criterion (MCC), for implementing MIMO detectors in non-Gaussian environments. The results illustrate that the proposed method achieves better BER and SER performance than conventional detectors.
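    As a rough illustration of the maximum correntropy criterion (MCC) named above: correntropy measures the expected similarity between transmitted and detected symbols under a Gaussian kernel, so maximizing it smoothly down-weights the impulsive errors typical of non-Gaussian noise. The Python sketch below is a minimal version of an MCC-style loss under that standard Gaussian-kernel definition; the name `mcc_loss` and the bandwidth `sigma` are illustrative assumptions, not the paper's exact OAMP-Net training objective.

        import numpy as np

        def mcc_loss(errors, sigma=1.0):
            """Correntropy-induced loss: 1 - mean Gaussian kernel of the errors.

            Maximizing correntropy E[exp(-e^2 / (2*sigma^2))] is equivalent to
            minimizing this loss; unlike the squared error, each sample can
            contribute at most 1/N, so impulsive outliers cannot dominate.
            """
            errors = np.asarray(errors, dtype=float)
            return 1.0 - np.mean(np.exp(-errors**2 / (2.0 * sigma**2)))

        # A few heavy-tailed outliers inflate the MSE but barely move the MCC loss.
        rng = np.random.default_rng(0)
        e = rng.normal(0.0, 0.1, size=1000)
        e[:10] += 20.0  # impulsive, non-Gaussian noise
        print("MSE:", np.mean(e**2))
        print("MCC:", mcc_loss(e, sigma=1.0))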

    Restricted Minimum Error Entropy Criterion for Robust Classification

    Full text link
    The minimum error entropy (MEE) criterion has been verified as a powerful approach for non-Gaussian signal processing and robust machine learning. However, the application of MEE to robust classification remains largely unexplored in the literature. The original MEE focuses only on minimizing Rényi's quadratic entropy of the error probability density function (PDF), which can cause failure in noisy classification tasks. To this end, we analyze the optimal error distribution in the presence of outliers for classifiers with continuous errors, and introduce a simple codebook that restricts MEE so that it drives the error PDF towards the desired case. A half-quadratic optimization scheme and a convergence analysis of the new learning criterion, called restricted MEE (RMEE), are provided. Experimental results with logistic regression and extreme learning machine verify the desirable robustness of RMEE.
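    For concreteness, the sketch below gives the standard Parzen-window estimate of Rényi's quadratic entropy of the errors, the quantity the original MEE criterion minimizes. It is a generic textbook estimator, not the RMEE implementation, and the codebook restriction described above is not reproduced; the kernel bandwidth `sigma` is an assumed free parameter.

        import numpy as np

        def quadratic_renyi_entropy(errors, sigma=1.0):
            """Parzen estimate of Renyi's quadratic entropy H2 = -log IP(e),
            where the information potential IP(e) is the mean Gaussian kernel
            (bandwidth sigma*sqrt(2)) over all pairwise error differences."""
            e = np.asarray(errors, dtype=float)
            diffs = e[:, None] - e[None, :]   # pairwise differences e_i - e_j
            h2 = 2.0 * sigma**2               # squared bandwidth of G_{sigma*sqrt(2)}
            kernel = np.exp(-diffs**2 / (2.0 * h2)) / np.sqrt(2.0 * np.pi * h2)
            return -np.log(kernel.mean())     # small when the error PDF is concentrated

    Note that this estimate depends only on the differences e_i - e_j, so it is unchanged when all errors are shifted by a constant; MEE alone can therefore concentrate the error PDF away from zero, one plausible failure mode that the codebook restriction in RMEE appears designed to rule out.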

    Robust Classification via Support Vector Machines

    Get PDF
    Classification models are very sensitive to data uncertainty, and finding robust classifiers that are less sensitive to it has attracted great interest in the machine learning literature. This paper constructs robust support vector machine classifiers under feature-data uncertainty via two probabilistic arguments. The first classifier, Single Perturbation, reduces the local effect of data uncertainty with respect to one given feature and acts as a local test that can confirm or refute the presence of significant data uncertainty for that particular feature. The second classifier, Extreme Empirical Loss, reduces the aggregate effect of data uncertainty across all features via a trade-off between the number of prediction-model violations and the size of those violations. Both methodologies are computationally efficient, and our extensive numerical investigation highlights the advantages and possible limitations of the two robust classifiers on synthetic and real-life insurance claims and mortgage lending data, as well as the fairness of automated decisions based on our classifiers.
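    The abstract does not spell out the Extreme Empirical Loss formulation, so the sketch below only illustrates the general idea it describes, trading the size of prediction-model violations against their number, using a capped (truncated) hinge loss as a stand-in: once a violation reaches the cap, worsening it costs nothing more, so a few grossly perturbed points cannot dominate the fit. All names and the capping rule here are assumptions for illustration, not the paper's method.

        import numpy as np

        def hinge_losses(w, b, X, y):
            """Per-sample hinge loss max(0, 1 - y * (X @ w + b)), y in {-1, +1}."""
            return np.maximum(0.0, 1.0 - y * (X @ w + b))

        def average_hinge(w, b, X, y):
            """Standard soft-margin SVM surrogate: mean violation size."""
            return hinge_losses(w, b, X, y).mean()

        def capped_hinge(w, b, X, y, cap=1.0):
            """Illustrative capped hinge: each violation contributes at most
            `cap`, so the objective effectively counts bounded violations
            instead of letting a single huge one dominate."""
            return np.minimum(hinge_losses(w, b, X, y), cap).mean()

        # One badly corrupted point dominates the average hinge, not the capped one.
        rng = np.random.default_rng(1)
        X = rng.normal(size=(100, 2))
        y = np.where(X[:, 0] >= 0.0, 1.0, -1.0)
        X[0] *= -50.0                        # a grossly perturbed feature vector
        w, b = np.array([1.0, 0.0]), 0.0
        print("average hinge:", average_hinge(w, b, X, y))
        print("capped hinge: ", capped_hinge(w, b, X, y))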