Analysis of Rank Aggregation Techniques for Rank Based on the Feature Selection Technique
Feature selection is the process of choosing the most relevant features from a set of attributes and removing the less important or redundant ones, in order to improve classification accuracy and reduce future computation and data-collection costs. A variety of feature selection procedures have been described in the literature to narrow down the features that need to be analyzed. This study uses six alternative feature selection methods: Chi-Square (CS), Information Gain (IG), Relief, Gain Ratio (GR), Symmetrical Uncertainty (SU), and Mutual Information (MI). The rankings produced by these six methods are then combined using four rank aggregation strategies: rank aggregation, the Borda Count (BC) methodology, score and rank combination, and unified feature scoring (UFS). Since none of these four procedures alone yields a clear selection rank for a feature, an ensemble of the aggregated ranks is formed, to which the bagging method of majority voting is applied.
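As a concrete illustration of one of the four strategies, here is a minimal Python sketch of Borda-count rank aggregation over several feature rankings. The ranker names and feature labels are hypothetical, and each ranking is assumed to be a best-first list; this is a sketch of the general technique, not the paper's exact procedure.

```python
# Minimal sketch of Borda-count rank aggregation over feature rankings.
from collections import defaultdict

def borda_aggregate(rankings):
    """Aggregate several best-first feature rankings via Borda count."""
    scores = defaultdict(int)
    for ranking in rankings:
        n = len(ranking)
        for position, feature in enumerate(ranking):
            scores[feature] += n - position  # top rank earns the most points
    # Consensus ranking: highest Borda score first
    return sorted(scores, key=scores.get, reverse=True)

# Hypothetical rankings from Chi-Square, IG, and Relief runs
chi2 = ["f3", "f1", "f2", "f4"]
ig = ["f1", "f3", "f4", "f2"]
relief = ["f3", "f2", "f1", "f4"]
print(borda_aggregate([chi2, ig, relief]))  # e.g. ['f3', 'f1', 'f2', 'f4']
```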
Optimising air quality prediction in smart cities with hybrid particle swarm optimization–long short-term memory–recurrent neural network model
In smart cities, air pollution is a critical issue that affects individual health and harms the environment. Air pollution prediction can supply important information to all relevant parties so they can take appropriate initiatives. Air quality prediction is a hot area of research, but existing work encounters several challenges, namely poor accuracy and incorrect real-time updates. This research presents a hybrid model based on long short-term memory (LSTM), a recurrent neural network (RNN), and a curiosity-based motivation method. The proposed model extracts a feature set from the training dataset using an RNN layer and achieves sequence learning by applying an LSTM layer. To deal with overfitting in the LSTM, the proposed model utilises a dropout strategy: input and recurrent connections can be dropped from activation and weight updates using dropout regularisation. A curiosity-based motivation model is used to construct a novel motivational model, which helps in the reconstruction of the LSTM recurrent neural network. To minimise the prediction error, particle swarm optimisation is implemented to optimise the LSTM network's weights. The authors utilise an online air pollution monitoring dataset from Salt Lake City, USA with five air quality indicators, including SO2, CO, O3, and NO2, to predict air quality. The proposed model is compared with existing Gradient Boosted Tree Regression, LSTM, and Support Vector Machine based regression models. Experimental analysis shows that the proposed method achieves 0.0184 Root Mean Square Error (RMSE), 0.0082 Mean Absolute Error, 2002*109 Mean Absolute Percentage Error, and 0.122 R2-Score. The experimental findings demonstrate that the proposed LSTM model achieved the best RMSE on the prescribed dataset and statistically significant superior outcomes compared to existing methods.
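To make the PSO weight-tuning step concrete, here is a minimal NumPy sketch of a particle swarm minimising a loss over a weight vector. The loss function, swarm size, and inertia/acceleration coefficients are placeholders; in the paper, PSO operates on the LSTM's full weight set, for which this toy vector only stands in.

```python
# Minimal sketch of the particle-swarm step used to tune network weights.
import numpy as np

rng = np.random.default_rng(0)

def loss(w):
    # Stand-in for the prediction error (e.g. RMSE of the LSTM on validation data)
    return np.sum((w - 0.5) ** 2)

n_particles, dim = 20, 8
pos = rng.uniform(-1, 1, (n_particles, dim))   # candidate weight vectors
vel = np.zeros_like(pos)
pbest = pos.copy()                              # personal bests
pbest_val = np.array([loss(p) for p in pos])
gbest = pbest[pbest_val.argmin()].copy()        # global best

w_inertia, c1, c2 = 0.7, 1.5, 1.5               # illustrative PSO coefficients
for _ in range(100):
    r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
    vel = w_inertia * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
    pos += vel
    vals = np.array([loss(p) for p in pos])
    improved = vals < pbest_val
    pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
    gbest = pbest[pbest_val.argmin()].copy()

print("best weights:", gbest.round(3), "loss:", loss(gbest))
```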
Enhanced cardiovascular disease prediction through self-improved Aquila optimized feature selection in quantum neural network & LSTM model
Introduction: Cardiovascular disease (CVD) stands as a pervasive catalyst for illness and mortality on a global scale, underscoring the imperative for sophisticated prediction methodologies within healthcare data analysis. The vast volume of medical data available necessitates effective data mining techniques to extract valuable insights for decision-making and prediction. While machine learning algorithms are commonly employed for CVD diagnosis and prediction, the high dimensionality of datasets poses a performance challenge.
Methods: This paper presents a novel hybrid model for predicting CVD, focusing on an optimal feature set. The proposed model encompasses four main stages: preprocessing, feature extraction, feature selection (FS), and classification. First, data preprocessing eliminates missing and duplicate values. Next, feature extraction addresses dimensionality issues using measures such as central tendency, qualitative variation, degree of dispersion, and symmetrical uncertainty. FS is optimized using the self-improved Aquila optimization approach. Finally, a hybrid model combining long short-term memory (LSTM) and a quantum neural network (QNN) is trained on the selected features, with an algorithm devised to optimize the LSTM model's weights. The proposed approach is evaluated against existing models using specific performance measures.
Results: For dataset-1: accuracy 96.69%, sensitivity 96.62%, specificity 96.77%, precision 96.03%, recall 97.86%, F1-score 96.84%, MCC 96.37%, NPV 96.25%, FPR 3.2%, FNR 3.37%. For dataset-2: accuracy 95.54%, sensitivity 95.86%, specificity 94.51%, precision 96.03%, F1-score 96.94%, MCC 93.03%, NPV 94.66%, FPR 5.4%, FNR 4.1%. The findings contribute to improved CVD prediction by utilizing an efficient hybrid model with an optimized feature set.
Discussion: Extensive experiments validating the methodology on a large dataset of patient demographics and clinical factors show that the method predicts CVD with high precision. The QNN and LSTM frameworks with Aquila feature tuning increase forecast accuracy and reveal physiological pathways related to cardiovascular risk. This research shows how advanced computational tools may change disease prediction and management, contributing to the emerging field of machine learning in healthcare.
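Symmetrical uncertainty, one of the measures the abstract names for feature extraction, can be computed as SU(X, Y) = 2·I(X; Y) / (H(X) + H(Y)). The sketch below implements this for discrete features; continuous inputs would need binning first, and the toy arrays are illustrative, not from the paper's datasets.

```python
# Hedged sketch of symmetrical-uncertainty scoring for discrete features.
import numpy as np

def entropy(x):
    _, counts = np.unique(x, return_counts=True)
    p = counts / counts.sum()
    return -np.sum(p * np.log2(p))

def symmetrical_uncertainty(x, y):
    # SU(X, Y) = 2 * I(X; Y) / (H(X) + H(Y)), bounded in [0, 1]
    hx, hy = entropy(x), entropy(y)
    joint = entropy([f"{a}|{b}" for a, b in zip(x, y)])  # H(X, Y)
    mutual_info = hx + hy - joint                         # I(X; Y)
    return 2 * mutual_info / (hx + hy) if hx + hy > 0 else 0.0

# Toy example: x1 perfectly tracks the label, x2 is mostly noise
label = np.array([0, 0, 1, 1, 0, 1])
x1 = np.array([0, 0, 1, 1, 0, 1])
x2 = np.array([1, 0, 1, 0, 0, 1])
print(symmetrical_uncertainty(x1, label))  # 1.0
print(symmetrical_uncertainty(x2, label))  # close to 0
```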
Study of Integrity Based Algorithm in Decentralized Cloud Computing Environment
Cloud computing is gaining popularity day by day, especially among business users, and many people are attracted to cloud computing services: they are easy to manage and independent of location and device. Business users in particular seek a high-security model that keeps their information secure and protected from risk. Data availability is also an important aspect, because all operations are performed online on data placed in the cloud. Data is stored in a distributed manner on the server, and the client does not maintain a local copy of the data, so data integrity becomes a more challenging factor. In this paper we identify these issues and solutions to overcome them, and survey the latest techniques and algorithms used to check the integrity of data.
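As a baseline for the integrity-checking techniques the paper surveys, here is a minimal sketch of hash-based verification. Real cloud schemes (e.g. provable data possession) avoid downloading the whole file; this toy version, with hypothetical file contents, only illustrates the core idea of comparing digests.

```python
# Minimal sketch of hash-based integrity verification.
import hashlib

def digest(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

# The client keeps only the digest before uploading to the cloud
original = b"quarterly-report-contents"
stored_digest = digest(original)

# Later, the client re-fetches the blob and checks it was not altered
retrieved = b"quarterly-report-contents"
print("intact" if digest(retrieved) == stored_digest else "tampered")
```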
Efficient Method for Wimax Soft Handover in VOIP and IPTV
WiMAX (Worldwide Interoperability for Microwave Access) is a technology that gives fast data access even over long distances through point-to-point communication, and also provides full coverage over wide areas for cellular-type communication. The main goal of WiMAX in cellular systems is to make handovers faster and more efficient so that no data is lost during a handover. In this paper we consider soft handover using WiMAX technology under real-time applications such as VoIP and IPTV. VoIP and IPTV are protocols used in wireless communication, whether for voice calls or other large-volume data transfers, so they tend to create congestion in the network, and both must be considered during a soft handover. This paper presents an efficient soft handover method, called beta, that operates under various conditions: it checks network conditions such as distance, congestion, and signaling, and then chooses a target base station capable of carrying out the handover. Because WiMAX is used, the handover is fast and more efficient, giving 90% efficient results.
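The abstract describes target selection from distance, congestion, and signaling, so here is an illustrative Python sketch of that kind of multi-criteria scoring. The weights, normalisation, and candidate values are hypothetical, not the beta method's actual parameters.

```python
# Illustrative multi-criteria scoring for picking a handover target base station.
def handover_score(distance_km, congestion, signal_dbm,
                   w_dist=0.4, w_cong=0.3, w_sig=0.3):
    """Shorter distance, lower congestion, stronger signal => higher score."""
    dist_term = 1.0 / (1.0 + distance_km)    # nearer is better
    cong_term = 1.0 - congestion             # congestion assumed in [0, 1]
    sig_term = (signal_dbm + 120) / 70.0     # map roughly [-120, -50] dBm to [0, 1]
    return w_dist * dist_term + w_cong * cong_term + w_sig * sig_term

candidates = {
    "BS-A": handover_score(0.8, 0.7, -85),
    "BS-B": handover_score(1.5, 0.2, -70),
    "BS-C": handover_score(0.5, 0.9, -95),
}
target = max(candidates, key=candidates.get)
print(target, candidates)  # base station with the best combined score
```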
Predicting Fraud in Financial Payment Services through Optimized Hyper-Parameter-Tuned XGBoost Model
Online transactions, medical services, financial transactions, and banking all have their share of fraudulent activity; the annual revenue generated by fraud exceeds $1 trillion. While fraud is dangerous for organizations, it can be uncovered with the help of intelligent solutions such as rules engines and machine learning. In this research, we introduce a unique hybrid technique for identifying financial payment fraud by combining nature-inspired hyperparameter tuning with several supervised classifier models, implemented in a modified version of the XGBoost algorithm. At the outset, we split out a sample of the full financial payment dataset to use as a test set, using 70% of the data for training and 30% for testing. Records that are known to be illegitimate or fraudulent are predicted, while those that raise suspicion are further investigated using a number of machine learning algorithms. The models are trained and validated using 10-fold cross-validation. Several tests on a dataset of actual financial transactions demonstrate the effectiveness of the proposed approach.
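The following sketch reproduces the evaluation pipeline the abstract describes: a 70/30 split, 10-fold cross-validation, and a tuned XGBoost classifier. Randomized search stands in for the nature-inspired tuner, and scikit-learn's make_classification stands in for the real financial payment dataset; both substitutions are assumptions.

```python
# Hedged sketch: 70/30 split + 10-fold CV + hyperparameter-tuned XGBoost.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split, RandomizedSearchCV
from xgboost import XGBClassifier

# Synthetic, imbalanced, fraud-like data standing in for the real dataset
X, y = make_classification(n_samples=2000, n_features=20, weights=[0.95],
                           random_state=42)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.30, stratify=y, random_state=42)

param_space = {
    "max_depth": [3, 4, 5, 6],
    "learning_rate": [0.01, 0.05, 0.1, 0.3],
    "n_estimators": [100, 200, 400],
    "subsample": [0.6, 0.8, 1.0],
}
# Randomized search here stands in for the paper's nature-inspired tuner
search = RandomizedSearchCV(XGBClassifier(eval_metric="logloss"),
                            param_space, n_iter=20, cv=10,
                            scoring="roc_auc", random_state=42)
search.fit(X_train, y_train)
print("best params:", search.best_params_)
print("held-out accuracy:", search.best_estimator_.score(X_test, y_test))
```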
Energy Efficient Routing Protocol in Novel Schemes for Performance Evaluation
Wireless sensor networks (WSNs) are a comparatively new technology with the potential to revolutionize how we live alongside present systems. WSNs are frequently used in scientific studies to enhance data archiving. Many applications have proved the value of wired sensors, but wired sensors are prone to wire cutting or damage; wireless sensor networks provide autonomous monitoring while avoiding wire tangles and damage. WSNs suffer from a number of fundamental restrictions, including insufficient processing power, storage space, available bandwidth, and information exchange. Consequently, energy-efficient strategies are necessary for maximizing the performance and lifespan of WSNs. To deal with WSN energy consumption, special cluster-head relay-node and energy-balancing techniques are applied, which extends the life of the network. In wireless sensor networks, clustering is a smart approach to reduce energy consumption, and energy scarcity and consumption are serious issues that must be addressed with effective and dependable solutions. The proposed MGSA considers the distance between each node and its corresponding cluster head (CH), as well as the residual energy and delay, as the key factors in relay-node selection. The proposed approach outperforms current methods such as low-energy adaptive clustering hierarchy (LEACH) in terms of data delivery rate, energy efficiency, and network longevity. A further scheme with two fitness functions is proposed to boost the efficiency of wireless sensor networks. The CH is in charge of collecting and transmitting data from all other cluster nodes, and experiments and assessments of the recommended fuzzy clustering show that a consistent cluster-head selection process improves data delivery rate and energy efficiency. As a result, energy-efficient operations are necessary to maximize WSN performance and lifespan.
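To ground the selection criteria the abstract names, here is a hedged sketch of a cluster-head fitness function built from residual energy, distance, and delay. The weighting scheme and node values are illustrative assumptions, not the MGSA's actual formulation.

```python
# Illustrative cluster-head fitness from the three factors named in the abstract.
def ch_fitness(residual_energy, avg_distance, delay,
               alpha=0.5, beta=0.3, gamma=0.2):
    """Higher residual energy, shorter distance, lower delay => fitter CH."""
    return (alpha * residual_energy
            - beta * avg_distance
            - gamma * delay)

# Hypothetical candidate nodes with normalised metrics in [0, 1]
nodes = {
    "n1": ch_fitness(residual_energy=0.9, avg_distance=0.4, delay=0.2),
    "n2": ch_fitness(residual_energy=0.6, avg_distance=0.2, delay=0.1),
    "n3": ch_fitness(residual_energy=0.8, avg_distance=0.7, delay=0.5),
}
cluster_head = max(nodes, key=nodes.get)
print(cluster_head, nodes)  # node with the best energy/distance/delay trade-off
```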