
    Novel hybridized computational paradigms integrated with five stand-alone algorithms for clinical prediction of HCV status among patients: A data-driven technique

    The emergence of health informatics opens new opportunities for the diagnosis of different diseases. The current work implemented five stand-alone techniques together with four novel hybridized paradigms for the clinical prediction of hepatitis C virus (HCV) status among patients, using both sociodemographic and clinical input variables. Both the visual and quantitative performance assessments of the stand-alone algorithms demonstrate the advantage of the Gaussian process regression (GPR), generalized regression neural network (GRNN), and interactive linear regression (ILR) models over the support vector regression (SVR) and adaptive neuro-fuzzy inference system (ANFIS) models. To address the limited performance of the stand-alone algorithms, four novel hybrid data-intelligent algorithms were proposed to boost their prediction accuracy for hepatitis C: interactive linear regression-Gaussian process regression (ILR-GPR), interactive linear regression-generalized regression neural network (ILR-GRNN), interactive linear regression-support vector regression (ILR-SVR), and interactive linear regression-adaptive neuro-fuzzy inference system (ILR-ANFIS). Based on their quantitative prediction skill, the proposed hybrid techniques improved the performance of the single paradigms by up to 44% and 45% in the calibration and validation phases, respectively.
    Operational Research Centre in Healthcare, Near East University, North Cyprus, Mersin-10, Turkiye
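
    The abstract does not state how the linear and nonlinear components are coupled. The minimal sketch below assumes one common hybridization scheme, residual correction, in which an ordinary linear regression (standing in for ILR) captures the broad trend and a Gaussian process regressor models its residuals; the synthetic data and scikit-learn setup are illustrative assumptions, not the authors' pipeline.

    # Hypothetical ILR-GPR style hybrid: a linear model fits the trend and a
    # Gaussian process regressor corrects its residuals. The coupling scheme
    # and all data here are assumptions for illustration only.
    import numpy as np
    from sklearn.linear_model import LinearRegression
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import RBF, WhiteKernel
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)

    # Placeholder sociodemographic/clinical feature matrix and target.
    X = rng.normal(size=(500, 8))
    y = X @ rng.normal(size=8) + np.sin(X[:, 0]) + rng.normal(scale=0.1, size=500)

    X_cal, X_val, y_cal, y_val = train_test_split(X, y, test_size=0.3, random_state=0)

    # Stage 1: stand-alone linear regression (the "ILR" component).
    ilr = LinearRegression().fit(X_cal, y_cal)
    residuals = y_cal - ilr.predict(X_cal)

    # Stage 2: Gaussian process regression on what the linear model missed.
    gpr = GaussianProcessRegressor(kernel=RBF() + WhiteKernel(), normalize_y=True)
    gpr.fit(X_cal, residuals)

    # Hybrid prediction = linear trend + GP residual correction.
    y_pred = ilr.predict(X_val) + gpr.predict(X_val)
    print("validation RMSE:", np.sqrt(np.mean((y_val - y_pred) ** 2)))

    An alternative coupling with the same two-stage structure would feed the linear prediction to the nonlinear learner as an extra input feature; either way, the nonlinear component only has to learn what the linear baseline cannot.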

    A Hybrid Approach to Privacy-Preserving Federated Learning

    Federated learning facilitates the collaborative training of models without the sharing of raw data. However, recent attacks demonstrate that simply maintaining data locality during training does not provide sufficient privacy guarantees. Rather, we need a federated learning system capable of preventing inference over both the messages exchanged during training and the final trained model, while ensuring the resulting model also has acceptable predictive accuracy. Existing federated learning approaches either use secure multiparty computation (SMC), which is vulnerable to inference, or differential privacy, which can lead to low accuracy given a large number of parties with relatively small amounts of data each. In this paper, we present an alternative approach that utilizes both differential privacy and SMC to balance these trade-offs. Combining differential privacy with secure multiparty computation enables us to reduce the growth of noise injection as the number of parties increases without sacrificing privacy, while maintaining a pre-defined rate of trust. Our system is therefore a scalable approach that protects against inference threats and produces models with high accuracy. Additionally, our system can be used to train a variety of machine learning models, which we validate with experimental results on three different machine learning algorithms. Our experiments demonstrate that our approach outperforms state-of-the-art solutions.
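
    The abstract describes the mechanism only at a high level: because secure aggregation reveals nothing but the sum of updates, the parties can jointly contribute the noise a trusted aggregator would otherwise add once, instead of each adding the full amount locally. The toy simulation below illustrates that noise-splitting idea with a Gaussian mechanism and pairwise additive masking as a stand-in for SMC; the trust parameter t, the noise scale, and the masking scheme are assumptions for illustration, not the paper's protocol.

    # Toy simulation of combining differential privacy with secure aggregation:
    # each party perturbs its update with only a fraction of the required noise,
    # and pairwise additive masks hide individual updates so only the sum is seen.
    import numpy as np

    rng = np.random.default_rng(42)
    n_parties, dim = 10, 5
    t = 3                      # assumed maximum number of colluding parties
    sigma_total = 1.0          # noise std required for the target (epsilon, delta)

    # Each honest party contributes a share of the total noise variance, so the
    # aggregate noise stays near sigma_total even if t parties withhold theirs.
    sigma_local = sigma_total / np.sqrt(n_parties - t)

    local_updates = rng.normal(size=(n_parties, dim))            # placeholder gradients
    noised = local_updates + rng.normal(scale=sigma_local, size=(n_parties, dim))

    # Additive masking as a stand-in for SMC: antisymmetric pairwise masks
    # cancel in the sum, so the server learns only the aggregate.
    masks = rng.normal(size=(n_parties, n_parties, dim))
    masks = masks - masks.transpose(1, 0, 2)                     # mask[i, j] = -mask[j, i]
    shared = noised + masks.sum(axis=1)

    assert np.allclose(shared.sum(axis=0), noised.sum(axis=0))   # masks cancel
    aggregate = shared.sum(axis=0) / n_parties
    print("aggregate update:", aggregate)

    With purely local differential privacy, each party would add noise of scale sigma_total on its own, so the standard deviation of the aggregated noise would grow with the number of parties; splitting the noise budget behind secure aggregation keeps it roughly constant, which is the trade-off the paper's combination of differential privacy and SMC is designed to balance.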