2 research outputs found

    Testing Equality of Multiple Population Means under Contaminated Normal Model Using the Density Power Divergence

    Get PDF
    This paper considers the problem of comparing several means under the one-way Analysis of Variance (ANOVA) setup. In ANOVA, outliers and heavy-tailed error distributions can seriously distort inference about treatment effects, leading to false positive or false negative test results. We propose a robust ANOVA test based on an M-estimator derived from the density power divergence. Compared with existing robust and non-robust approaches, the proposed testing procedure is less affected by data contamination and improves the analysis. The asymptotic properties of the proposed test are derived under some regularity conditions. The finite-sample performance of the proposed test is examined via a series of Monte Carlo experiments and two empirical data examples: a bone marrow transplant dataset and a glucose level dataset. The results produced by the proposed testing procedure compare favorably with the classical ANOVA test and with robust tests based on Huber's M-estimator and Tukey's MM-estimator.
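
    The abstract does not spell out the estimator, so the following is a minimal illustrative sketch rather than the authors' implementation. It assumes the minimum density power divergence estimator of Basu et al. (1998) under a normal model: for a tuning parameter alpha > 0, one minimizes H_n(mu, sigma) = integral of f^(1+alpha) minus (1 + 1/alpha) times the sample mean of f(X_i)^alpha, where the first term has a closed form for N(mu, sigma^2). The function names (dpd_objective, mdpd_normal) are ours, not from the paper.

```python
import numpy as np
from scipy.optimize import minimize

def dpd_objective(params, x, alpha):
    # Empirical DPD objective for N(mu, sigma^2):
    #   H_n = (2*pi*sigma^2)**(-alpha/2) / sqrt(1 + alpha)
    #         - (1 + 1/alpha) * mean(f(x)**alpha)
    # alpha -> 0 approaches maximum likelihood; larger alpha is more robust.
    mu, log_sigma = params
    sigma = np.exp(log_sigma)                      # keeps sigma > 0
    f = np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (np.sqrt(2 * np.pi) * sigma)
    c_alpha = (2 * np.pi * sigma ** 2) ** (-alpha / 2) / np.sqrt(1 + alpha)
    return c_alpha - (1 + 1 / alpha) * np.mean(f ** alpha)

def mdpd_normal(x, alpha=0.5):
    # Hypothetical helper: minimum-DPD fit of (mu, sigma) for one group.
    start = np.array([np.median(x), np.log(np.std(x))])
    res = minimize(dpd_objective, start, args=(x, alpha), method="Nelder-Mead")
    return res.x[0], np.exp(res.x[1])

# One gross outlier barely moves the DPD group-mean estimates.
rng = np.random.default_rng(0)
groups = [rng.normal(0.0, 1.0, 50), rng.normal(0.2, 1.0, 50)]
groups[0][0] = 50.0                                # contaminate group 1
print([round(mdpd_normal(g)[0], 3) for g in groups])
```

    With alpha near 0 the fit approaches maximum likelihood; a moderate alpha (e.g., 0.5) trades a little efficiency for the robustness to contamination that the abstract describes.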

    Robust Penalized Density Power Divergence Regression With SCAD Penalty for High Dimensional Data Analysis

    No full text
    Amidst the exponential surge in big data, managing high-dimensional datasets across diverse fields and industries has emerged as a significant challenge: conventional statistical methods struggle to handle their complexity. In response, we formulate a robust estimator tailored to counter outliers and heavy-tailed errors. Our approach integrates the SCAD penalty into the density power divergence method, shrinking insignificant coefficients to zero, which improves the precision of the analysis and the reliability of the results. We benchmark our robust, penalized model against existing techniques such as Huber, Tukey, LASSO, LAD, and LAD-LASSO. Using both simulated data and datasets from the UCI Machine Learning Repository, we assess performance via RMPE, sensitivity, specificity, and mean dimension reduction. In simulations, BIC(DPD) and EBIC(DPD) consistently yielded the lowest RMPE values across outlier proportions (0%, 5%, 10%) and signal-to-noise ratios (0.5, 1, 5) as the sample size increased from 100 to 500. Cp(DPD) exhibited strong sensitivity and surpassed LASSO and LAD-LASSO in achieving dimension reduction within high-dimensional data. Owing to computational constraints, the number of predictors our model could include was limited; future research should address this limitation and validate established methods against the proposed Robust Penalized Density Power Divergence Regression with SCAD penalty.
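
    As with the previous entry, the abstract omits the objective function, so the sketch below is illustrative only and makes two labeled assumptions: the penalty is the SCAD of Fan and Li (2001), and the penalized DPD loss is minimized with a generic derivative-free optimizer rather than the authors' algorithm (so coefficients come out near zero rather than exactly zero; practical implementations typically use coordinate descent or local linear approximations). The names scad_penalty and dpd_scad_loss are ours.

```python
import numpy as np
from scipy.optimize import minimize

def scad_penalty(beta, lam, a=3.7):
    # SCAD penalty (Fan and Li, 2001), applied elementwise and summed.
    t = np.abs(beta)
    mid = (2 * a * lam * t - t ** 2 - lam ** 2) / (2 * (a - 1))
    return np.where(t <= lam, lam * t,
                    np.where(t <= a * lam, mid, lam ** 2 * (a + 1) / 2)).sum()

def dpd_scad_loss(params, X, y, alpha, lam):
    # DPD loss for linear regression with normal errors, plus SCAD penalty.
    beta, sigma = params[:-1], np.exp(params[-1])
    r = y - X @ beta
    f = np.exp(-0.5 * (r / sigma) ** 2) / (np.sqrt(2 * np.pi) * sigma)
    c_alpha = (2 * np.pi * sigma ** 2) ** (-alpha / 2) / np.sqrt(1 + alpha)
    dpd = c_alpha - (1 + 1 / alpha) * np.mean(f ** alpha)
    return dpd + scad_penalty(beta, lam)

# Toy problem: sparse coefficients, 10% of responses grossly contaminated.
rng = np.random.default_rng(1)
n, p = 200, 8
X = rng.normal(size=(n, p))
beta_true = np.r_[2.0, -1.5, np.zeros(p - 2)]
y = X @ beta_true + rng.normal(scale=0.5, size=n)
y[:n // 10] += 15.0                               # outliers in the response
start = np.r_[np.zeros(p), 0.0]                   # (beta, log sigma)
res = minimize(dpd_scad_loss, start, args=(X, y, 0.5, 0.2), method="Powell")
print(np.round(res.x[:-1], 2))                    # estimated coefficients
```

    The derivative-free Powell method is used here because SCAD is non-differentiable at zero; the penalty keeps the spurious coefficients small while the bounded DPD loss keeps the outlying responses from dominating the fit.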
