
    Enhancing precision in population variance vector estimation: a two-phase sampling approach with multi-auxiliary information

    To enhance precision in estimating unknown population parameters, an auxiliary variable is often used. However, in scenarios where the required information on an auxiliary variable is partially or fully unavailable, two-phase sampling is commonly employed. The estimation of a variance vector using multi-auxiliary variables is a less explored area in the current literature. This paper addresses the estimation of the vector of unknown population variances for multiple study variables by using an estimated vector of variances derived from multi-auxiliary information. This approach is particularly relevant when the population variances of the multi-auxiliary variables are not known prior to the survey. The paper introduces a generalized variance and a vector of biases for the proposed multivariate estimator. Special cases of the proposed multivariate variance estimator are provided, accompanied by expressions for their mean square errors. Theoretical mathematical conditions are discussed to guide the preference for the proposed estimator. Through the analysis of real-world application-based data, the applicability and efficiency of the proposed multivariate variance estimator are demonstrated, outperforming modified versions of multivariate variance estimators. Additionally, a simulation study validates the superior performance of the proposed estimator compared to the modified estimators.
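
    A minimal single-auxiliary sketch of the underlying idea (the paper's estimator is multivariate and works with a vector of auxiliary variances): a classical ratio-type variance estimator under two-phase sampling, where the larger first-phase sample supplies an improved estimate of the auxiliary variance. The simulated data and all names are illustrative, not the paper's.

```python
import numpy as np

rng = np.random.default_rng(42)

# Illustrative population: study variable y correlated with auxiliary x.
N = 10_000
x = rng.gamma(shape=4.0, scale=2.0, size=N)
y = 3.0 * x + rng.normal(0.0, 4.0, size=N)

# Two-phase sampling: a large first-phase sample observes x only;
# a second-phase subsample observes both y and x.
n1, n2 = 2_000, 300
phase1 = rng.choice(N, size=n1, replace=False)
phase2 = rng.choice(phase1, size=n2, replace=False)

s2x_phase1 = np.var(x[phase1], ddof=1)   # auxiliary variance, first phase
s2x_phase2 = np.var(x[phase2], ddof=1)   # auxiliary variance, second phase
s2y_phase2 = np.var(y[phase2], ddof=1)   # study-variable variance, second phase

# Ratio-type variance estimator: calibrate s2y by the ratio of the
# first-phase to second-phase auxiliary variances.
s2y_ratio = s2y_phase2 * (s2x_phase1 / s2x_phase2)

print(f"true S^2_y     : {np.var(y, ddof=1):.2f}")
print(f"naive estimate : {s2y_phase2:.2f}")
print(f"ratio estimate : {s2y_ratio:.2f}")
```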

    Bayesian regression modeling and inference of energy efficiency data: the effect of collinearity and sensitivity analysis

    Most prior research has predicted heating demand using linear regression models, but without giving current building features enough context. Model problems such as multicollinearity need to be checked, and appropriate features must be chosen based on their significance, to produce accurate load predictions and inferences. Numerous building energy efficiency features correlate with each other and with the heating load in the energy efficiency dataset. Standard ordinary least squares (OLS) regression is problematic when the dataset exhibits multicollinearity. Bayesian supervised machine learning is a popular method for parameter estimation and inference when frequentist statistical assumptions fail. Predicting the heating load as the energy efficiency output with Bayesian inference in a multiple regression with a collinearity problem requires careful data analysis. The parameter estimates and hypothesis tests were significantly impacted by the multicollinearity among the features in the building energy efficiency dataset. This study demonstrates several shrinkage and informative priors on the likelihood in the Bayesian framework as alternative remedies to reduce the collinearity problem in multiple regression analysis. It fits standard OLS regression and four distinct Bayesian regression models with several prior distributions using the Hamiltonian Monte Carlo algorithm in Bayesian Regression Modeling using Stan, the package used to fit the linear models. Several model comparison and assessment methods were used to select the best-fitting regression model for the dataset. The Bayesian regression model with a weakly informative prior fits the collinear energy efficiency data best, compared with standard OLS regression and the Bayesian regression models with shrinkage priors. The numerical findings on collinearity were checked using variance inflation factors, estimates of the regression coefficients and standard errors, and the sensitivity of priors and likelihoods. Applied research in science, engineering, agriculture, health, and other disciplines should check for multicollinearity in regression modeling to obtain better estimation and inference.
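
    A minimal sketch of the two diagnostics the abstract describes: checking collinearity with variance inflation factors, and shrinking coefficients via a Gaussian prior, whose MAP estimate has the closed ridge-regression form. The simulated collinear data and all names are illustrative, and the paper itself fits full Bayesian posteriors with HMC rather than this MAP shortcut.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative collinear design: x2 is nearly a copy of x1.
n = 200
x1 = rng.normal(size=n)
x2 = x1 + rng.normal(scale=0.05, size=n)
x3 = rng.normal(size=n)
X = np.column_stack([x1, x2, x3])
y = 2.0 * x1 + 0.5 * x3 + rng.normal(scale=1.0, size=n)

def vif(X):
    """Variance inflation factor of each column: 1 / (1 - R^2) from
    regressing that column on the remaining columns."""
    out = []
    for j in range(X.shape[1]):
        A = np.column_stack([np.ones(len(X)), np.delete(X, j, axis=1)])
        beta, *_ = np.linalg.lstsq(A, X[:, j], rcond=None)
        resid = X[:, j] - A @ beta
        out.append(1.0 / (resid.var() / X[:, j].var()))
    return np.array(out)

print("VIF:", np.round(vif(X), 1))  # values >> 10 flag collinearity

# MAP under an independent N(0, tau^2) prior on each coefficient is
# ridge regression: (X'X + (sigma^2 / tau^2) I)^{-1} X'y.
lam = 1.0  # sigma^2 / tau^2, the shrinkage strength
beta_ols = np.linalg.lstsq(X, y, rcond=None)[0]
beta_map = np.linalg.solve(X.T @ X + lam * np.eye(3), X.T @ y)
print("OLS :", np.round(beta_ols, 2))  # unstable under collinearity
print("MAP :", np.round(beta_map, 2))  # shrunk, more stable
```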

    A survey on fractal fractional nonlinear Kawahara equation theoretical and computational analysis

    Using the Caputo, Caputo-Fabrizio (CF), and Atangana-Baleanu-Caputo (ABC) fractal fractional differential operators, this study offers a theoretical and computational approach to the Kawahara equation by merging the Laplace transform and Adomian decomposition methods. We establish the existence and uniqueness of the solution through a generalized and advanced version of the fixed point theorem. We present a precise and efficient method for solving nonlinear partial differential equations (PDEs), in particular the Kawahara equation. The suggested method is validated through careful error analysis and comparison with exact solutions, demonstrating its applicability to nonlinear PDEs. Moreover, a comparative analysis of the considered equation under the aforementioned operators is presented.
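
    For reference, the classical (integer-order) Kawahara equation and the Caputo fractional derivative that the fractal fractional operators generalize can be written as below; the exact fractal fractional formulation used in the paper refines these definitions.

```latex
% Classical Kawahara equation (one common normalization):
u_t + u\,u_x + u_{xxx} - u_{xxxxx} = 0 .

% Caputo fractional derivative of order 0 < \alpha < 1:
{}^{C}D_t^{\alpha} u(t)
  = \frac{1}{\Gamma(1-\alpha)} \int_0^t (t-s)^{-\alpha}\, u'(s)\, ds .
```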

    Optimizing Clustering Algorithms for Anti-Microbial Evaluation Data: A Majority Score-Based Evaluation of K-Means, Gaussian Mixture Model, and Multivariate T-Distribution Mixtures

    This study presents a detailed analysis of the performance of the majority score clustering approach on three different anti-microbial evaluation datasets, namely the minimum inhibitory concentration (MIC) of bacteria and the antibacterial and antifungal activities of chemical compounds against four bacteria (E. coli, P. aeruginosa, S. aureus, S. pyogenes) and two fungi (C. albicans, A. fumigatus). Clustering is an unsupervised machine learning method used to group chemical compounds based on their similarity. In this paper, we apply k-means clustering, the Gaussian mixture model (GMM), and mixtures of multivariate t-distributions to these datasets. To determine the optimal number of clusters and which clustering algorithm performs best, we use a variety of clustering validation indices (CVIs), including the within-cluster sum of squares (to be minimized), connectivity (to be minimized), silhouette width (to be maximized), and the Dunn index (to be maximized). Based on the majority score, we conclude that k-means and the mixture of multivariate t-distributions perform best in terms of the CVIs to be maximized, while GMM performs best in terms of the CVIs to be minimized. K-means clustering and the mixture of multivariate t-distributions yield 3 optimal clusters for the antibacterial activity dataset and 5 optimal clusters for the MIC bacteria dataset. K-means clustering, the mixture of multivariate t-distributions, and GMM yield 3 optimal clusters for both the antibacterial and antifungal activity datasets. Overall, the k-means algorithm performs best under the majority score criterion. This study may be useful for the pharmaceutical industry, chemists, and medical professionals in the future.
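
    A minimal sketch of the majority-score idea using two of the algorithms and two of the indices the abstract lists (scikit-learn's KMeans and GaussianMixture, silhouette width and a hand-rolled Dunn index). The stand-in dataset, the reduced index set, and the scoring loop are illustrative, not the paper's.

```python
import numpy as np
from scipy.spatial.distance import cdist
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs
from sklearn.metrics import silhouette_score
from sklearn.mixture import GaussianMixture

X, _ = make_blobs(n_samples=300, centers=3, random_state=1)  # stand-in data

def dunn_index(X, labels):
    """Min inter-cluster distance / max intra-cluster diameter (maximize)."""
    ks = np.unique(labels)
    inter = min(cdist(X[labels == a], X[labels == b]).min()
                for i, a in enumerate(ks) for b in ks[i + 1:])
    intra = max(cdist(X[labels == k], X[labels == k]).max() for k in ks)
    return inter / intra

models = {
    "kmeans": lambda k: KMeans(n_clusters=k, n_init=10,
                               random_state=0).fit_predict(X),
    "gmm": lambda k: GaussianMixture(n_components=k,
                                     random_state=0).fit_predict(X),
}

# Print each CVI for every (algorithm, k); the majority score then counts,
# per algorithm and k, how many indices it wins and keeps the top scorer.
for k in (2, 3, 4, 5):
    for name, fit in models.items():
        labels = fit(k)
        print(f"k={k} {name:6s} "
              f"silhouette={silhouette_score(X, labels):.3f} "
              f"dunn={dunn_index(X, labels):.3f}")
```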

    On the development of survey methods for novel mean imputation and its application to abalone data

    Non-response in surveys is a common problem faced by surveyors, and it results in missing data. Missing values are often omitted from statistical analysis, but this reduces the sample size and consequently the precision of estimates. In such situations, imputation, which estimates the missing values from the observed data, is a commonly used remedy. In this paper, we propose two new estimators for the finite population mean, formulated using two suggested sampling methods and their associated imputation strategies. We derive the variances of the proposed estimators and obtain conditions under which they are more efficient than existing estimators. We conduct a simulation study to assess the relative efficiency (RE) of the proposed estimators for varying sample sizes, response rates, and ranking criteria. For a real-world application, we consider data on the characteristics of abalone. The simulation results demonstrate that the proposed mean estimators based on the suggested imputation methods are more efficient than existing methods in estimating the finite population mean.
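
    A minimal sketch of mean estimation under ratio-type imputation with a single auxiliary variable; the paper's two sampling designs and imputation strategies are more elaborate, and the abalone-like simulated data and names here are illustrative.

```python
import numpy as np

rng = np.random.default_rng(7)

# Illustrative sample: y (e.g. abalone weight) subject to non-response,
# auxiliary x (e.g. shell length) observed for every unit.
n = 500
x = rng.uniform(0.2, 0.8, size=n)
y = 5.0 * x + rng.normal(scale=0.3, size=n)
respond = rng.random(n) < 0.7           # ~70% response rate
y_obs = np.where(respond, y, np.nan)

# Naive mean imputation: fill every missing y with the respondent mean.
y_mean_imp = np.where(respond, y, np.nanmean(y_obs))

# Ratio imputation: exploit the auxiliary variable, filling missing y
# with (respondent mean of y / respondent mean of x) * x_i.
ratio = np.nanmean(y_obs) / x[respond].mean()
y_ratio_imp = np.where(respond, y, ratio * x)

print(f"true mean       : {y.mean():.4f}")
print(f"mean imputation : {y_mean_imp.mean():.4f}")
print(f"ratio imputation: {y_ratio_imp.mean():.4f}")
```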

    Exploring phase space properties of nonlinear Kerr-cavity interacting with a qubit: Spontaneous-emission damping effect

    In this work, we explore the phase space nonclassicality and purity loss of a nonlinear coherent Kerr cavity induced by off-resonant intensity-dependent cavity-qubit interactions under atomic spontaneous emission dissipation. The cavity purity loss (mixedness) dynamics, quantified by the von Neumann entropy, is linked to the dynamics of the Wigner distribution nonclassicality, which is examined at specific times chosen from the cavity's von Neumann entropy dynamics. The temporal evolution of the largest negativity of the Wigner distribution is investigated under the effects of qubit spontaneous emission dissipation, detuning, and the coherent Kerr cavity nonlinearity. The growth and stability of the purity loss and of the Wigner distribution negativity loss exhibit the same behavior: both depend on the Kerr-like nonlinearity and the dissipation couplings, and both can be enhanced by increasing the dissipation coupling. The phase space Wigner quasi-probability distribution of the nonclassicality and the mixedness of the generated coherent Kerr cavity states are found to be highly sensitive to these physical effects.
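
    A minimal QuTiP sketch of the two diagnostics the abstract relies on, the cavity's von Neumann entropy and its Wigner-function negativity, for a simplified resonant Jaynes-Cummings model with a Kerr term and qubit spontaneous emission. The paper's intensity-dependent coupling and detuning are omitted, and all parameter values are illustrative.

```python
import numpy as np
from qutip import (basis, coherent, destroy, entropy_vn, mesolve,
                   qeye, tensor, wigner)

# Cavity (N-level truncation) + qubit, simplified Jaynes-Cummings + Kerr.
N = 20
a = tensor(destroy(N), qeye(2))
sm = tensor(qeye(N), destroy(2))
g, chi, gamma = 1.0, 0.1, 0.05   # coupling, Kerr strength, qubit decay

H = chi * (a.dag() * a) ** 2 + g * (a.dag() * sm + a * sm.dag())
c_ops = [np.sqrt(gamma) * sm]    # qubit spontaneous emission

psi0 = tensor(coherent(N, 2.0), basis(2, 1))  # coherent cavity, excited qubit
tlist = np.linspace(0, 10, 101)
result = mesolve(H, psi0, tlist, c_ops)

xvec = np.linspace(-5, 5, 121)
dxdy = (xvec[1] - xvec[0]) ** 2
for t, rho in zip(tlist[::25], result.states[::25]):
    rho_cav = rho.ptrace(0)              # reduced cavity state
    S = entropy_vn(rho_cav)              # purity loss (mixedness)
    W = wigner(rho_cav, xvec, xvec)
    neg = 0.5 * (np.abs(W).sum() - W.sum()) * dxdy  # negativity volume
    print(f"t={t:5.2f}  entropy={S:.3f}  negativity={neg:.4f}")
```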

    Statistical inference of joint competing risks models from comparative bathtub shape distributions with hybrid censoring

    In reliability engineering, the class of lifetime distributions with bathtub-shaped failure rate functions is particularly important, since the lifetimes of electro-mechanical, electronic, and mechanical products are frequently modelled with this characteristic. Comparative competing-risks data arise in a variety of applications, including engineering, biology, medicine, and related fields. In this study, we adopt statistical inference for joint distributions with a bathtub-shaped or increasing failure rate function (Chen distributions). The problem of determining the relative merits of products according to their lifetime duration is discussed under the hybrid type-I censoring scheme. Also, assuming independent causes of failure, we discuss survival analysis and the assessment of one risk in the presence of other risks. The maximum likelihood (ML) method and the Bayes approach under a symmetric loss function are used to estimate the model parameters. Additionally, we construct classical confidence intervals (CIs), both asymptotic and bootstrap, for comparison with the Bayes credible intervals. Moreover, the theoretical results are evaluated and contrasted in a Monte Carlo simulation study. Finally, for illustration, a real data set obtained from a laboratory experiment is analysed using the suggested model, with brief comments.
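
    A minimal sketch of maximum likelihood estimation for a single Chen (bathtub-shaped) lifetime distribution under fixed-time right censoring; the paper's joint competing-risks model, hybrid type-I censoring scheme, and Bayesian analysis go well beyond this, and the simulated data are illustrative.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(3)

# Chen distribution: S(t) = exp(lam * (1 - exp(t**beta))); its hazard
# h(t) = lam * beta * t**(beta - 1) * exp(t**beta) is bathtub-shaped
# when beta < 1.
def chen_logpdf(t, lam, beta):
    return (np.log(lam) + np.log(beta) + (beta - 1) * np.log(t)
            + t**beta + lam * (1 - np.exp(t**beta)))

def chen_logsf(t, lam, beta):
    return lam * (1 - np.exp(t**beta))

def sample_chen(lam, beta, size):
    u = rng.random(size)
    return np.log(1 - np.log(1 - u) / lam) ** (1 / beta)  # inverse CDF

# Simulate lifetimes with right censoring at time tau.
lam_true, beta_true, tau = 0.2, 0.6, 3.0
t = sample_chen(lam_true, beta_true, 300)
delta = t <= tau                 # True = observed failure, False = censored
t_obs = np.minimum(t, tau)

def negloglik(theta):
    lam, beta = np.exp(theta)    # optimize on the log scale for positivity
    return -(chen_logpdf(t_obs[delta], lam, beta).sum()
             + chen_logsf(t_obs[~delta], lam, beta).sum())

fit = minimize(negloglik, x0=np.log([0.5, 1.0]), method="Nelder-Mead")
print("MLE (lam, beta):", np.round(np.exp(fit.x), 3))
```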

    Reliability analysis of the triple modular redundancy system under step-partially accelerated life tests using Lomax distribution

    Triple modular redundancy (TMR) is a robust technique used in safety-critical applications to enhance fault tolerance and reliability. This article focuses on estimating the distribution parameters of a TMR system under step-stress partially accelerated life tests, where each component of the system follows a Lomax distribution. The study analyzes the system's reliability and mean residual lifetime based on the estimated parameters. Various estimation techniques, including maximum likelihood, percentile, least squares, and maximum product of spacings, are explored. Additionally, the optimal stress change time is determined using two criteria. An illustrative example supported by two real data sets demonstrates the methodology's application. Monte Carlo simulations assessing the effectiveness of the estimation methods show that the maximum likelihood method outperforms the other three in both accuracy and performance. This research contributes to the understanding and practical implementation of TMR systems in safety-critical industries, potentially saving lives and preventing catastrophic events.
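
    A minimal sketch of the two building blocks the abstract combines: the 2-out-of-3 reliability of a TMR system and the Lomax component survival function, with the mean residual life obtained by integrating the system survival curve. Parameter values are illustrative, and the paper's accelerated-life and estimation machinery is not reproduced here.

```python
import numpy as np
from scipy.integrate import quad

# Lomax (Pareto type II) survival: S(t) = (1 + t/lam) ** (-alpha).
alpha, lam = 2.5, 4.0   # illustrative shape and scale

def lomax_sf(t):
    return (1.0 + t / lam) ** (-alpha)

def tmr_sf(t):
    """TMR works while at least 2 of 3 identical components work:
    R_sys = 3 R^2 - 2 R^3."""
    r = lomax_sf(t)
    return 3 * r**2 - 2 * r**3

# Mean lifetime = integral of the survival function over [0, inf).
mean_comp, _ = quad(lomax_sf, 0, np.inf)
mean_sys, _ = quad(tmr_sf, 0, np.inf)
print(f"component mean life : {mean_comp:.3f}")  # lam / (alpha - 1)
print(f"TMR system mean life: {mean_sys:.3f}")

# Mean residual life of the system at age t0: E[T - t0 | T > t0].
t0 = 1.0
tail, _ = quad(tmr_sf, t0, np.inf)
print(f"TMR mean residual life at t0={t0}: {tail / tmr_sf(t0):.3f}")
```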

    A New Flexible Four Parameter Bathtub Curve Failure Rate Model, and Its Application to Right-Censored Data

    This article introduces a new flexible four-parameter distribution, called the modified exponential-Weibull (MEW), constructed by combining the exponential and Weibull distributions through the odd function transformation, which offers greater flexibility of fit. The MEW model is designed to describe failure time data arising from a system with one or more failure modes and is characterized by a hazard rate (HR) that can take a bathtub shape. The moment properties, quantile function, and residual life are derived and discussed. We discuss the HR function and several distributional properties of the MEW model, and apply maximum likelihood and Bayesian techniques to estimate its unknown parameters. The Hamiltonian Monte Carlo (HMC) algorithm is employed to simulate the posterior distributions and verify the MEW Bayes estimators. We examine the behavior of the MEW model on two data sets with bathtub-shaped HR and compare it with five other popular bathtub-shaped models. The results indicate that the MEW model provides the best description of the two failure time data sets, suggesting that the proposed model could be a viable candidate for solving various real-life problems.
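
    For reference, the right-censored likelihood on which both the maximum likelihood and Bayesian analyses build has the standard form below, where delta_i indicates an observed failure, f is the density, and S the survival function; the MEW density itself follows the odd-function construction described in the paper.

```latex
L(\theta) = \prod_{i=1}^{n} f(t_i \mid \theta)^{\delta_i}\,
            S(t_i \mid \theta)^{1-\delta_i},
\qquad
\ell(\theta) = \sum_{i=1}^{n}\Bigl[\delta_i \log f(t_i \mid \theta)
             + (1-\delta_i)\log S(t_i \mid \theta)\Bigr].
```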