Making tradable white certificates trustworthy, anonymous, and efficient using blockchains
Fossil fuel pollution has contributed to dramatic changes in the Earth's climate, and this trend will continue as fossil fuels are burned at an ever-increasing rate. Many countries around the world are currently making efforts to reduce greenhouse gas emissions, and one such method is the Tradable White Certificate (TWC) mechanism. The mechanism allows organizations to reduce their energy consumption to generate energy savings certificates, and those that achieve greater energy savings can sell their certificates to those that fall short. However, implementing this mechanism faces challenges, such as the centralized and costly verification and control of energy savings. Moreover, the verification process is not transparent, which could lead to fraud or manipulation of the system. Therefore, in this paper, we propose a blockchain-based TWC mechanism to automatically create, verify, and audit TWC certificates. In addition, we propose a smart-contract-based TWC trading mechanism that enables traders to trade their TWCs without exposing their private information in an untrusted environment. Evaluations show that the proposed TWC framework scales to 1000 simultaneous TWC traders and that the optimization problem can be solved in less than 120 ms. Moreover, Polygon Matic incurs the least gas cost compared with other blockchain-based solutions.
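As a loose illustration of the create/verify/audit idea only (the paper itself implements this with smart contracts on platforms such as Polygon Matic), here is a minimal hash-chained certificate ledger in Python; all names (TWCLedger, issue, verify) are hypothetical and not from the paper.

```python
# Minimal sketch of a tamper-evident TWC ledger: each certificate record is
# hash-chained to its predecessor, so an auditor can recompute the chain.
import hashlib, json, time

class TWCLedger:
    def __init__(self):
        self.chain = []                      # append-only certificate records

    def _hash(self, record):
        return hashlib.sha256(
            json.dumps(record, sort_keys=True).encode()).hexdigest()

    def issue(self, owner, kwh_saved):
        """Create a certificate for verified energy savings."""
        prev = self.chain[-1]["hash"] if self.chain else "0" * 64
        record = {"owner": owner, "kwh_saved": kwh_saved,
                  "ts": time.time(), "prev": prev}
        record["hash"] = self._hash(record)
        self.chain.append(record)
        return record["hash"]

    def verify(self):
        """Audit: recompute every hash and check the chain links."""
        prev = "0" * 64
        for rec in self.chain:
            body = {k: v for k, v in rec.items() if k != "hash"}
            if rec["prev"] != prev or self._hash(body) != rec["hash"]:
                return False
            prev = rec["hash"]
        return True

ledger = TWCLedger()
ledger.issue("org-A", kwh_saved=1200)
assert ledger.verify()
```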
Multi-slope path loss model-based performance assessment of heterogeneous cellular network in 5G
The coverage and capacity required for fifth generation (5G) networks and beyond can be achieved using heterogeneous wireless networks. This study deploys a finite number of user equipments (UEs) while taking into account the three-dimensional (3D) distance between UEs and base stations (BSs), multi-slope line-of-sight (LOS) and non-line-of-sight (NLOS) propagation, idle mode capability (IMC), and third generation partnership project (3GPP) path loss (PL) models. In the current work, we examine the relationship between the height and gain of the macro (M) and pico (P) BS antennas and the ratio of the density of MBSs to that of PBSs. Recent research demonstrates that the antenna height of PBSs should be kept to a minimum to obtain the best coverage and capacity in a 5G wireless network, whereas area spectral efficiency (ASE) crashes once this density ratio crosses a specific value. We aim to address these issues and increase the performance of the 5G network by installing directional antennas at MBSs and omnidirectional antennas at PBSs while retaining traditional antenna heights. We use the multi-tier 3GPP PL model to capture real-world scenarios and calculate the signal-to-interference-plus-noise ratio (SINR) using average power. This study demonstrates that, when the multi-slope 3GPP PL model is used and directional antennas are installed at MBSs, coverage can be improved by 10% and ASE by 2.5 times over the previous analysis. Similarly, the issue of an ASE crash beyond a base station density of 1000 has been resolved in this study.
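For readers unfamiliar with multi-slope PL models, the sketch below shows the general dual-slope form: one path loss exponent up to a breakpoint distance and a steeper one beyond it, continuous at the breakpoint. The constants are illustrative placeholders, not the 3GPP-calibrated values used in the paper.

```python
# Generic dual-slope path loss in dB as a function of 3D distance d (m):
# slope n1 out to the breakpoint d_bp, slope n2 beyond it.
import math

def multi_slope_pl(d, d_bp=100.0, pl0=38.0, n1=2.0, n2=4.0):
    """pl0: reference loss at 1 m (dB); n1, n2: path loss exponents."""
    if d <= d_bp:
        return pl0 + 10 * n1 * math.log10(d)
    # continuous at d_bp, then the steeper far-field slope takes over
    return pl0 + 10 * n1 * math.log10(d_bp) + 10 * n2 * math.log10(d / d_bp)

for d in (50, 100, 200, 400):
    print(f"d = {d:4d} m -> PL = {multi_slope_pl(d):6.2f} dB")
```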
An effective solution to the optimal power flow problem using meta-heuristic algorithms
Financial loss in power systems is an emerging problem that needs to be resolved. To tackle this problem, energy generated from the various generation sources in the power network needs proper scheduling. In order to determine the best settings for the control variables, this study formulates and solves an optimal power flow (OPF) problem. In the proposed work, the bird swarm algorithm (BSA), JAYA, and a hybrid of both algorithms, termed HJBSA, are used for obtaining the optimum variable settings. We perform simulations by considering the constraints of voltage stability, line capacity, and generated reactive and active power. In addition, the algorithms solve the OPF problem while minimizing the carbon emissions generated from thermal systems, fuel cost, voltage deviations, and active power losses. The suggested approach is evaluated on two separate IEEE test systems, one with 30 buses and the other with 57 buses. The simulation results show that for the 30-bus system, the minimized cost achieved by HJBSA, JAYA, and BSA is 860.54 /h, 900.01 /h, and 6237.4 /h, respectively. Similarly, for the 30-bus system, the power loss with HJBSA, JAYA, and BSA is 9.542 MW, 10.102 MW, and 11.427 MW, respectively, while for the 57-bus system, it is 13.473 MW, 20.552 MW, and 18.638 MW, respectively. Moreover, on the 30-bus system, HJBSA, JAYA, and BSA reduce carbon emissions by 4.394 ton/h, 4.524 ton/h, and 4.401 ton/h, respectively; on the 57-bus system, they reduce carbon emissions by 26.429 ton/h, 27.014 ton/h, and 28.568 ton/h, respectively. The results show that HJBSA outperforms the other algorithms.
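The JAYA half of the hybrid is simple enough to sketch: each candidate solution moves toward the population's best member and away from its worst, with greedy acceptance. The toy quadratic below stands in for the OPF objective; the BSA component and the power-flow constraints are omitted.

```python
# Minimal JAYA loop (Rao's update rule) on a toy cost function.
import numpy as np

rng = np.random.default_rng(0)

def cost(x):                                   # stand-in for fuel cost + losses
    return np.sum((x - 3.0) ** 2, axis=-1)

pop = rng.uniform(-10, 10, size=(30, 5))       # 30 candidates, 5 control vars
for _ in range(200):
    f = cost(pop)
    best, worst = pop[f.argmin()], pop[f.argmax()]
    r1, r2 = rng.random(pop.shape), rng.random(pop.shape)
    # JAYA move: drift toward the best solution, away from the worst
    cand = pop + r1 * (best - np.abs(pop)) - r2 * (worst - np.abs(pop))
    cand = np.clip(cand, -10, 10)              # respect variable bounds
    improved = cost(cand) < f
    pop[improved] = cand[improved]             # greedy acceptance

print("best cost:", cost(pop).min())
```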
Emotion detection from handwriting and drawing samples using an attention-based transformer model
Emotion detection (ED) involves the identification and understanding of an individual's emotional state through various cues such as facial expressions, voice tone, physiological changes, and behavioral patterns. In this context, behavioral analysis is employed to observe actions and behaviors for emotional interpretation. This work specifically employs behavioral metrics such as drawing and handwriting to determine a person's emotional state, recognizing these actions as physical functions that integrate motor and cognitive processes. The study proposes an attention-based transformer model as an innovative approach to identifying emotions from handwriting and drawing samples, thereby extending ED into the domains of fine motor skills and artistic expression. The initial data obtained provide a set of points corresponding to the handwriting or drawing strokes. Each stroke point is subsequently delivered to the attention-based transformer model, which embeds it into a high-dimensional vector space. The model builds a prediction about the emotional state of the person who generated the sample by integrating the most important components and patterns in the input sequence using self-attention. The proposed approach has a distinct advantage in its enhanced capacity to capture long-range correlations compared to conventional recurrent neural networks (RNNs). This characteristic makes it particularly well suited for the precise identification of emotions from handwriting and drawing samples, signifying a notable advancement in the field of emotion detection. The proposed method achieved state-of-the-art results of 92.64% on the benchmark EMOTHAW (Emotion recognition via Handwriting and Drawing) dataset.
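A schematic PyTorch version of the described architecture, assuming four features per stroke point (e.g. x, y, pressure, pen state); the dimensions and class count below are illustrative assumptions, not the paper's configuration.

```python
# Each stroke point is embedded into a d_model-dimensional vector space and
# processed by a self-attention encoder; pooling over the sequence yields
# emotion logits.
import torch
import torch.nn as nn

class StrokeEmotionTransformer(nn.Module):
    def __init__(self, in_feats=4, d_model=64, n_heads=4, n_layers=2, n_emotions=7):
        super().__init__()
        self.embed = nn.Linear(in_feats, d_model)    # point -> vector space
        layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, n_layers)
        self.head = nn.Linear(d_model, n_emotions)   # emotion logits

    def forward(self, pts):                          # pts: (batch, seq, feats)
        h = self.encoder(self.embed(pts))
        return self.head(h.mean(dim=1))              # pool over the stroke

model = StrokeEmotionTransformer()
logits = model(torch.randn(8, 120, 4))               # 8 samples, 120 points each
print(logits.shape)                                  # torch.Size([8, 7])
```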
Deep learning-based meta-learner strategy for electricity theft detection
Electricity theft damages power grid infrastructure and is also responsible for huge revenue losses for electric utilities. Integrating smart meters into traditional power grids enables real-time monitoring and collection of consumers' electricity consumption (EC) data. Based on the collected data, it is possible to identify normal and malicious consumer behavior by analyzing the data with machine learning (ML) and deep learning methods. This paper proposes a deep learning-based meta-learner model to distinguish between normal and malicious patterns in EC data. The proposed model consists of two stages. In Fold-0, ML classifiers extract diverse knowledge and learn from the EC data. In Fold-1, a multilayer perceptron is used as a meta-learner, which takes the prediction results of the Fold-0 classifiers as input, automatically learns the non-linear relationships among them, and extracts hidden complex features to classify normal and malicious behaviors. In this way, the proposed model controls the overfitting problem and achieves high accuracy. Moreover, extensive experiments are conducted to compare its performance with boosting, bagging, standalone conventional ML classifiers, and baseline models published in top-tier outlets. The proposed model is evaluated on a real EC dataset provided by the Energy Informatics Group in Pakistan. It achieves 0.910 ROC-AUC and 0.988 PR-AUC on the test dataset, higher than the compared models.
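The two-stage design maps naturally onto stacking. A compact scikit-learn sketch follows, with illustrative base models and synthetic imbalanced data standing in for the EC dataset.

```python
# Fold-0 base classifiers feed class probabilities to a Fold-1 MLP
# meta-learner, which learns non-linear combinations of their predictions.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, StackingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

X, y = make_classification(n_samples=2000, n_features=30,
                           weights=[0.9], random_state=0)   # imbalanced classes
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

stack = StackingClassifier(
    estimators=[("rf", RandomForestClassifier(random_state=0)),
                ("lr", LogisticRegression(max_iter=1000))],  # Fold-0
    final_estimator=MLPClassifier(hidden_layer_sizes=(16,), max_iter=500,
                                  random_state=0),           # Fold-1 meta-learner
    stack_method="predict_proba", cv=5)

stack.fit(X_tr, y_tr)
print("ROC-AUC:", roc_auc_score(y_te, stack.predict_proba(X_te)[:, 1]))
```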
Identification of Software Bugs by Analyzing Natural Language-Based Requirements Using Optimized Deep Learning Features
Software project outcomes heavily depend on natural language requirements, which often cause diverse interpretations and issues such as ambiguous, incomplete, or faulty requirements. Researchers are exploring machine learning to predict software bugs, but a more precise and general approach is needed. Accurate bug prediction is crucial for software evolution and user training, prompting an investigation into deep and ensemble learning methods. However, previous studies do not generalize efficiently when extended to other datasets. Therefore, this paper proposes a hybrid approach combining multiple techniques and explores their effectiveness on the bug identification problem. The methods involve feature selection, used to reduce the dimensionality and redundancy of features and retain only the relevant ones; transfer learning, used to train and test the model on different datasets to analyze how much of the learning carries over to other datasets; and an ensemble method, used to explore the performance gain from combining multiple classifiers in one model. Four National Aeronautics and Space Administration (NASA) and four Promise datasets are used in the study, which shows an increase in model performance, with better Area Under the Receiver Operating Characteristic Curve (AUC-ROC) values when different classifiers are combined. It reveals that an amalgam of techniques such as those used in this study, feature selection, transfer learning, and ensemble methods, helps optimize software bug prediction models and yields a high-performing, useful end model.
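Two of the three ingredients, filter-based feature selection and a soft-voting ensemble, can be sketched directly in scikit-learn; synthetic data stands in for the NASA/Promise defect datasets, and the transfer-learning step is omitted.

```python
# Feature selection drops redundant features before an ensemble of
# classifiers votes; performance is reported as AUC-ROC, as in the paper.
from sklearn.datasets import make_classification
from sklearn.feature_selection import SelectKBest, mutual_info_classif
from sklearn.ensemble import VotingClassifier, RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import cross_val_score

X, y = make_classification(n_samples=1000, n_features=40,
                           n_informative=8, random_state=1)

pipe = make_pipeline(
    SelectKBest(mutual_info_classif, k=10),          # keep relevant features
    VotingClassifier([("rf", RandomForestClassifier(random_state=1)),
                      ("lr", LogisticRegression(max_iter=1000))],
                     voting="soft"))                  # combine classifiers

print("AUC-ROC:", cross_val_score(pipe, X, y, scoring="roc_auc", cv=5).mean())
```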
Enhanced cardiovascular disease prediction through self-improved Aquila optimized feature selection in quantum neural network & LSTM model
Introduction: Cardiovascular disease (CVD) is a pervasive driver of illness and mortality worldwide, underscoring the need for sophisticated prediction methodologies in healthcare data analysis. The vast volume of medical data available necessitates effective data mining techniques to extract valuable insights for decision-making and prediction. While machine learning algorithms are commonly employed for CVD diagnosis and prediction, the high dimensionality of datasets poses a performance challenge. Methods: This paper presents a novel hybrid model for predicting CVD, focusing on an optimal feature set. The proposed model encompasses four main stages: preprocessing, feature extraction, feature selection (FS), and classification. Initially, data preprocessing eliminates missing and duplicate values. Subsequently, feature extraction is performed to address dimensionality issues, utilizing measures such as central tendency, qualitative variation, degree of dispersion, and symmetrical uncertainty. FS is optimized using the self-improved Aquila optimization approach. Finally, a hybridized model combining long short-term memory (LSTM) and a quantum neural network (QNN) is trained using the selected features, and an algorithm is devised to optimize the LSTM model's weights. The proposed approach is evaluated against existing models using specific performance measures. Results: For dataset-1: accuracy 96.69%, sensitivity 96.62%, specificity 96.77%, precision 96.03%, recall 97.86%, F1-score 96.84%, MCC 96.37%, NPV 96.25%, FPR 3.2%, FNR 3.37%. For dataset-2: accuracy 95.54%, sensitivity 95.86%, specificity 94.51%, precision 96.03%, F1-score 96.94%, MCC 93.03%, NPV 94.66%, FPR 5.4%, FNR 4.1%. The findings contribute to improved CVD prediction through an efficient hybrid model with an optimized feature set. Discussion: Extensive experiments on a large dataset of patient demographics and clinical factors validate that the method predicts CVD with high precision. The QNN and LSTM frameworks with Aquila-based feature tuning increase forecast accuracy and reveal physiological pathways related to cardiovascular risk. This research shows how advanced computational tools may transform disease prediction and management, contributing to the emerging field of machine learning in healthcare.
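Of the listed feature-extraction measures, symmetrical uncertainty is the least standard; for discrete variables it is SU(X, Y) = 2 I(X; Y) / (H(X) + H(Y)). A short sketch follows; the Aquila optimizer and the QNN-LSTM classifier are too involved for a brief example.

```python
# Symmetrical uncertainty between two discrete variables, built from
# Shannon entropies: I(X;Y) = H(X) + H(Y) - H(X,Y).
import numpy as np
from collections import Counter

def entropy(v):
    p = np.array(list(Counter(v).values()), dtype=float)
    p /= p.sum()
    return -np.sum(p * np.log2(p))

def symmetrical_uncertainty(x, y):
    joint = entropy(list(zip(x, y)))
    mi = entropy(x) + entropy(y) - joint
    return 2 * mi / (entropy(x) + entropy(y))

x = [0, 0, 1, 1, 0, 1, 1, 0]
y = [0, 0, 1, 1, 0, 1, 0, 0]                 # mostly follows x
# SU ranges from 0 (independent) to 1 (fully informative)
print(round(symmetrical_uncertainty(x, y), 3))
```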
A sustainable approach for demand side management considering demand response and renewable energy in smart grids
The development of smart grids has revolutionized modern energy markets, enabling users to participate in demand response (DR) programs and maintain a balance between power generation and demand. However, users' limited awareness poses a challenge in responding to DR program signals. To address this issue, energy management controllers (EMCs) have emerged as automated solutions to energy management problems that act on DR signals. This study introduces a novel hybrid algorithm, the hybrid genetic bacteria foraging optimization algorithm (HGBFOA), which combines the desirable features of the genetic algorithm (GA) and the bacteria foraging optimization algorithm (BFOA) in its design and implementation. The proposed HGBFOA-based EMC effectively solves energy management problems for four categories of residential loads: time-elastic, power-elastic, critical, and hybrid. By leveraging the characteristics of GA and BFOA, HGBFOA achieves an efficient appliance scheduling mechanism, reduced energy consumption, a minimized peak-to-average ratio (PAR), cost optimization, and an improved user comfort level. To evaluate its performance, HGBFOA was compared with other well-known algorithms, including particle swarm optimization (PSO), GA, BFOA, and the hybrid genetic particle optimization algorithm (HGPO). The results demonstrate that HGBFOA outperforms the existing algorithms in terms of scheduling, energy consumption, power costs, PAR, and user comfort.
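Whatever metaheuristic drives the EMC, its fitness function must evaluate candidate appliance schedules on cost and PAR. A minimal sketch of those two quantities, with a made-up three-tier time-of-use tariff:

```python
# Evaluate a 24-hour shiftable-load schedule: electricity cost under a
# time-of-use tariff, and the peak-to-average ratio (PAR) to be minimized.
import numpy as np

tariff = np.array([8.0] * 8 + [14.0] * 8 + [22.0] * 8)   # price per kWh, 24 h

def fitness(schedule_kwh):
    """schedule_kwh: 24-slot vector of total shiftable load per hour."""
    cost = float(schedule_kwh @ tariff)
    par = schedule_kwh.max() / schedule_kwh.mean()
    return cost, par

flat = np.full(24, 1.0)                                   # evenly spread load
peaky = np.zeros(24); peaky[18:22] = 6.0                  # evening peak
print("flat :", fitness(flat))                            # low PAR, lower cost
print("peaky:", fitness(peaky))                           # PAR of 6, higher cost
```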
An implementation of multiscale line detection and mathematical morphology for efficient and precise blood vessel segmentation in fundus images
Diagnosing diseases such as glaucoma, age-related macular degeneration, cardiovascular conditions, and diabetic retinopathy involves segmenting retinal blood vessels. The task is particularly challenging with color fundus images due to non-uniform illumination, low contrast, and variations in vessel appearance, especially in the presence of different pathologies. Furthermore, the speed of the retinal vessel segmentation system is of utmost importance: with the surge of available big data, the speed of the algorithm carries almost as much weight as its accuracy. To address these challenges, we present a novel approach for retinal vessel segmentation, leveraging efficient and robust techniques based on multiscale line detection and mathematical morphology. Our algorithm's performance is evaluated on two publicly available datasets, the Digital Retinal Images for Vessel Extraction (DRIVE) dataset and the Structured Analysis of the Retina (STARE) dataset. The experimental results demonstrate the effectiveness of our method, with mean accuracy values of 0.9467 on DRIVE and 0.9535 on STARE, and sensitivity values of 0.6952 on DRIVE and 0.6809 on STARE. Notably, our algorithm is competitive with state-of-the-art methods while operating at an average speed of 3.73 s per image on DRIVE and 3.75 s on STARE. It is worth noting that these results were achieved using Matlab scripts containing multiple loops, so the processing time could be further reduced by replacing loops with vectorization. Thus, the proposed algorithm can be deployed in real-time applications. In summary, our proposed system strikes a fine balance between swift computation and accuracy on par with the best available methods in the field.
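The multiscale line detector at the heart of such methods can be sketched compactly: oriented line-kernel responses at several lengths, each compared with a local window mean and combined by maximum. Scales, thresholds, and the full morphology stage are simplified here, and random data stands in for a fundus image's green channel.

```python
# Multiscale line detection plus a morphological top-hat enhancement step.
import numpy as np
from scipy.ndimage import rotate, convolve, white_tophat

def line_kernel(length, angle):
    k = np.zeros((length, length))
    k[length // 2, :] = 1.0 / length               # mean along a horizontal line
    return rotate(k, angle, reshape=False, order=1)

def multiscale_line_response(img, lengths=(5, 9, 13), angles=range(0, 180, 15)):
    win_mean = convolve(img, np.full((15, 15), 1 / 225))   # local window mean
    responses = [convolve(img, line_kernel(L, a)) - win_mean
                 for L in lengths for a in angles]
    return np.max(responses, axis=0)               # strongest oriented response

img = np.random.rand(64, 64)                       # stand-in for a green channel
enhanced = white_tophat(img, size=7)               # morphological enhancement
vessels = multiscale_line_response(enhanced) > 0.02
print(vessels.mean())                              # fraction flagged as vessel
```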