International Journal of Electrical and Computer Engineering (IJECE)
Cloud computing environment based hierarchical anomaly intrusion detection system using artificial neural network
Nowadays, computer technology is essential to everyday life, including banking, education, entertainment, and communication. Network security is therefore critical in the digital era, and detecting intrusion threats is among its most difficult problems. The proposed hierarchical anomaly intrusion detection system monitors the network for unusual activity and generates an alert when such activity is detected. The system, which uses an artificial neural network (ANN) and is deployed in a cloud computing environment, analyzes data even under high traffic levels and protects computer networks and data from malicious activity. As a result, it achieves better detection, accuracy, and precision rates.
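The core of such a detector can be pictured as a small feedforward network that scores a traffic-feature vector and raises an alert above a threshold. The following is a minimal sketch; the weights, feature dimensions, and threshold are hand-picked for illustration, not the trained values from the paper.

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# Illustrative weights: 3 input features -> 2 hidden units -> 1 anomaly score.
W1 = [[0.8, -0.5, 1.2], [-0.3, 0.9, 0.4]]
b1 = [0.1, -0.2]
W2 = [1.5, 1.1]
b2 = -2.0

def anomaly_score(features):
    """Forward pass of the tiny network; returns a score in (0, 1)."""
    hidden = [sigmoid(sum(w * x for w, x in zip(row, features)) + b)
              for row, b in zip(W1, b1)]
    return sigmoid(sum(w * h for w, h in zip(W2, hidden)) + b2)

def detect(features, threshold=0.5):
    """Raise an alert (True) when the anomaly score exceeds the threshold."""
    return anomaly_score(features) > threshold
```

In a deployed system the weights would come from training on labelled traffic, and the thresholded score would feed the alerting layer.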
Integration of web scraping, fine-tuning, and data enrichment in a continuous monitoring context via large language model operations
This paper presents and discusses a framework that leverages large language models (LLMs) for data enrichment and continuous monitoring, emphasizing its essential role in optimizing the performance of deployed models. It introduces a comprehensive large language model operations (LLMOps) methodology based on continuous monitoring and continuous improvement of the data, the primary determinant of the model, in order to optimize the prediction of a given phenomenon. To this end, we first examine real-time web scraping using tools such as Kafka and Spark Streaming for data acquisition and processing. In addition, we explore the integration of LLMOps for complete lifecycle management of machine learning (ML) models. Focusing on continuous monitoring and improvement, we highlight the importance of this approach for ensuring optimal performance of deployed models based on data and ML model monitoring. We also illustrate the methodology through a case study based on real data from several real estate listing sites, demonstrating how MLflow can be integrated into an LLMOps pipeline to guarantee complete development traceability, proactive detection of performance degradation, and effective model lifecycle management.
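The continuous-monitoring step described here boils down to comparing a deployed model's recent performance against its deployment-time baseline and triggering retraining on degradation. A hypothetical sketch of that gate, with illustrative names and thresholds rather than the paper's actual pipeline:

```python
def needs_retraining(baseline_acc, recent_accs, tolerance=0.05):
    """Flag a retrain when the rolling mean of recent accuracies drops
    more than `tolerance` below the baseline recorded at deployment."""
    rolling_mean = sum(recent_accs) / len(recent_accs)
    return (baseline_acc - rolling_mean) > tolerance
```

In the pipeline, a positive flag would gate an MLflow run that logs the new training-data snapshot and model version, preserving the traceability the abstract emphasizes.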
System level optimization of series hybrid electric vehicle through plug-in charging feature using ADVISOR
This research addresses the optimization of series hybrid electric vehicles (SHEVs) to enhance sustainable transportation by integrating a plug-in charging feature. The primary objective is to extend the range and improve battery management. Using MATLAB Simulink and the advanced vehicle simulator (ADVISOR), three SHEV scenarios were simulated under the urban dynamometer driving schedule (UDDS) cycle. The study holds the fuel converter and generator parameters constant while optimizing the battery and motor controller. Compared to conventional hybrid electric vehicles (HEVs), the optimized SHEV demonstrates a 17% improvement in battery thermal management and a 13.5% reduction in power losses. Additionally, the plug-in series hybrid electric vehicle (P-SHEV) configuration shows a 5.26% increase in power output and a 35.71% improvement in state of charge (SOC) over the standard SHEV configurations. The P-SHEV design also achieves a 12.20% increase in the UDDS single-cycle range and an 11.5% reduction in fuel consumption. Integrating the electric vehicle (EV) charging feature further enhances the SHEV, resulting in an 8.33% boost in motor power input and a 6.35% improvement in the motor temperature profile, reaching a peak enhancement of 50% (18 kW). The work contributes to the field by demonstrating the effectiveness of optimized configurations and of a plug-in charging feature in SHEVs, thereby advancing the capacity of these vehicles to promote greener transportation solutions.
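The SOC improvements reported above rest on tracking the battery's state of charge as the plug-in feature adds energy. A minimal coulomb-counting sketch of that bookkeeping, with an assumed charging efficiency rather than ADVISOR's detailed battery model:

```python
def update_soc(soc, current_a, hours, capacity_ah, efficiency=0.95):
    """Coulomb-counting SOC update: positive current charges the pack.
    The result is clamped to the physical range [0, 1]."""
    delta = (current_a * hours * efficiency) / capacity_ah
    return min(1.0, max(0.0, soc + delta))
```

ADVISOR's battery blocks additionally model temperature and internal resistance; this sketch only captures the charge-balance idea behind the SOC comparison.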
Developing an algorithm for the adaptive neural network for direct online speed control of the three-phase induction motor
In this paper, an online adaptive general regression neural network (OAGRNN) is presented as a direct online speed controller for a three-phase induction motor. To keep the induction motor running at its rated speed in real time and under a variety of load conditions, the speed error and its derivative are continuously measured and fed back to the OAGRNN controller. The controller instantly provides the inverter with the control signal needed to produce the proper frequency and voltage for the induction motor. Notably, the OAGRNN controller demonstrated remarkable performance without the need for a learning mode; it was able to track the desired motor speed, starting its operation from scratch. A setup utilizing a three-phase induction motor was developed to show the high capacity of the OAGRNN to track the desired motor speed under varied load torque. The performance of the OAGRNN is examined in two phases: MATLAB simulation and the experimental setup. Furthermore, when compared with a proportional integral (PI) controller, the OAGRNN demonstrates outstanding ability and superiority for online adjustments of the three-phase induction motor's speed.
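A general regression neural network predicts by kernel-weighted averaging over stored pattern nodes, which is what lets it start "from scratch" and learn online. The sketch below shows that mechanism for a (speed error, error derivative) input; the smoothing factor, sample format, and class name are illustrative assumptions, not the paper's implementation.

```python
import math

class OnlineGRNN:
    def __init__(self, sigma=0.5):
        self.sigma = sigma
        self.inputs = []   # stored (error, d_error) pattern nodes
        self.targets = []  # corresponding control outputs

    def add_sample(self, x, y):
        """Online learning: store a new pattern node without retraining."""
        self.inputs.append(x)
        self.targets.append(y)

    def predict(self, x):
        """Nadaraya-Watson estimate: Gaussian-kernel weighted average of
        stored control outputs."""
        if not self.inputs:
            return 0.0  # no knowledge yet: neutral control signal
        weights = [math.exp(-sum((a - b) ** 2 for a, b in zip(x, p))
                            / (2 * self.sigma ** 2)) for p in self.inputs]
        return sum(w * t for w, t in zip(weights, self.targets)) / sum(weights)
```

In a control loop, each measured (error, derivative, control) triple would be added as a sample, so the estimator sharpens as the motor runs.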
Human motion classification by micro-Doppler radar using intelligent algorithms
This article introduces a technique for detecting four human movements using micro-Doppler radar and intelligent algorithms. Micro-Doppler radar can detect and measure object movements in intricate detail, even capturing complex or non-rigid motions, while accurately identifying direction, velocity, and motion patterns. Intelligent algorithms enhance detection efficiency and reduce false alarms by discerning subtle movement patterns, thereby facilitating more accurate detection and a deeper understanding of observed object dynamics. A continuous-wave radar setup was implemented utilizing a spectrum analyzer and a radio frequency (RF) generator, capturing signals in a spectrogram centered at 2,395 MHz. Six models were assessed for image classification: VGG-16, VGG-19, MobileNet, MobileNet V2, Xception, and Inception V3. A dataset comprising 500 images depicting four movements (running, walking, arm raising, and jumping) was curated. Our findings reveal that the optimal architecture in terms of training time, accuracy, and loss is VGG-16, achieving an accuracy of 96%. Furthermore, precision values of 96%, 100%, and 98% were obtained for walking, running, and arm raising, respectively. Notably, VGG-16 exhibited a training loss of 4.191E-04, attributed to the use of the Adam optimizer with a learning rate of 0.001 over 15 epochs and a batch size of 32.
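The per-movement precision figures quoted above follow the standard definition precision = TP / (TP + FP), computed per class from the model's predictions. A small sketch of that computation on toy labels (the labels are illustrative, not the study's test set):

```python
from collections import defaultdict

def per_class_precision(y_true, y_pred):
    """Precision per predicted class: TP / (TP + FP)."""
    tp = defaultdict(int)  # correct predictions per class
    fp = defaultdict(int)  # incorrect predictions per class
    for t, p in zip(y_true, y_pred):
        if t == p:
            tp[p] += 1
        else:
            fp[p] += 1
    return {c: tp[c] / (tp[c] + fp[c]) for c in tp}
```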
Energy analysis of active photovoltaic cooling system using water flow
An active water-cooling system is one of several technologies proven to reduce heat losses and increase the electrical energy of photovoltaic (PV) modules. This research presents a comparative experimental study of three pump activation controls for cooling a PV module, evaluating specifically the PV output power, net energy gain, water flow rate, and module temperature reduction. The three pump activation controls compared are: continuously active during the test, active based on a setpoint temperature, and active by controlling the pump voltage using pulse width modulation (PWM) to adjust the water flow rate smoothly. The results show that PWM control of the pump voltage in the PV cooling process produces an energy of 437.95 Wh, slightly lower than the others, and an average module cooling temperature of 35.24 °C, 1-3 °C higher than the others. Nevertheless, PWM control of the cooling pump yielded a net energy gain of 9.94%, greater than the other controls, with an average flow rate of 2.17 L/min, more efficient than the others. Thus, this control is quite effective, as it produces a higher net PV energy yield with lower water consumption.
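Two pieces of the evaluation above can be made concrete: a PWM duty cycle that rises with module temperature, and a net-energy-gain metric that subtracts the pump's consumption before comparing against an uncooled reference. The temperature setpoints and energy figures in this sketch are assumptions for illustration, not the study's measured values.

```python
def pump_duty(temp_c, t_low=30.0, t_high=60.0):
    """Map module temperature to a PWM duty cycle in [0, 1]: off below
    t_low, full speed above t_high, linear in between."""
    if temp_c <= t_low:
        return 0.0
    if temp_c >= t_high:
        return 1.0
    return (temp_c - t_low) / (t_high - t_low)

def net_energy_gain(e_cooled_wh, e_pump_wh, e_reference_wh):
    """Percentage gain of cooled PV output (net of pump consumption)
    over the uncooled reference module."""
    return 100.0 * (e_cooled_wh - e_pump_wh - e_reference_wh) / e_reference_wh
```

Throttling the duty cycle (rather than running the pump continuously) is what trades a slightly warmer module for lower pump energy and water use, which is how PWM achieves the highest net gain despite the lowest raw output.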
Estimation of the required number of nodes of a university cloud virtualization cluster
When designing a virtual desktop infrastructure (VDI) for a university or inter-university cloud, developers must overcome many complex technical challenges. One of these tasks is estimating the required number of virtualization cluster nodes. Such nodes host virtual machines for users; students and teachers use these virtual machines to complete academic assignments or research work. Another task that arises in the VDI design process is algorithmizing the placement of virtual machines in a computer network. Optimal placement of virtual machines reduces the number of computer nodes without affecting functionality, which ultimately lowers the cost of the solution, an important consideration for educational institutions. The article proposes a model for estimating the required number of virtualization cluster nodes. The proposed model is based on a combined approach that jointly solves the optimal packing problem and finds the configuration of server platforms of a private university cloud using a genetic algorithm. The model introduced in this research is universal: it can be used in the design of university cloud systems for different purposes, for example, educational systems or inter-university scientific laboratory management systems.
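The underlying task is a bin-packing problem: place virtual machines, by resource demand, onto as few identical nodes as possible. The article solves it with a genetic algorithm; as a simpler illustration of the same node-count estimate, here is a first-fit-decreasing heuristic with a single hypothetical capacity dimension:

```python
def estimate_nodes(vm_demands, node_capacity):
    """Number of nodes needed under the first-fit-decreasing heuristic:
    sort VMs by demand and place each in the first node with room."""
    nodes = []  # remaining capacity of each opened node
    for demand in sorted(vm_demands, reverse=True):
        for i, free in enumerate(nodes):
            if demand <= free:
                nodes[i] = free - demand
                break
        else:
            nodes.append(node_capacity - demand)  # open a new node
    return len(nodes)
```

A genetic algorithm improves on such heuristics by searching placements (and, per the article, server-platform configurations) jointly, at the cost of more computation.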
An improved key scheduling for advanced encryption standard with expanded round constants and non-linear property of cubic polynomials
The advanced encryption standard (AES) offers strong symmetric key encryption, ensuring data security in cloud computing environments during transmission and storage. However, its key scheduling algorithm is known to have flaws, including vulnerability to related-key attacks, inadequate nonlinearity, insufficiently complex key expansion, and possible susceptibility to side-channel attacks. This study aims to strengthen the independence among round keys generated by the key expansion process of AES (that is, the value of one round key should reveal nothing about the value of another) by improving the key scheduling process. Data sets of random, low-density, and high-density initial secret keys were used to evaluate the strength of the improved key scheduling algorithm through the National Institute of Standards and Technology (NIST) frequency test, the avalanche effect, and the Hamming distance between two consecutive round keys. A related-key analysis was performed to assess the robustness of the proposed key scheduling algorithm, revealing improved resistance to related-key cryptanalysis.
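Two of the evaluation metrics named above are straightforward to state in code: the Hamming distance between consecutive round keys, and the avalanche ratio against the ideal of roughly half the bits changing. The byte strings in the test are illustrative, not output of the proposed schedule.

```python
def hamming_distance(key_a: bytes, key_b: bytes) -> int:
    """Number of differing bits between two equal-length keys."""
    return sum(bin(a ^ b).count("1") for a, b in zip(key_a, key_b))

def avalanche_ratio(key_a: bytes, key_b: bytes) -> float:
    """Fraction of bits that differ; ~0.5 is the ideal, indicating that
    one round key reveals nothing about the next."""
    return hamming_distance(key_a, key_b) / (8 * len(key_a))
```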
Advancing network security: a comparative research of machine learning techniques for intrusion detection
In the current digital era, the advancement of network-based technologies has brought a surge in security vulnerabilities, necessitating complex and dynamic defense mechanisms. This paper explores the integration of machine learning techniques within intrusion detection systems (IDS) to tackle the intricacies of modern network threats. A detailed comparative analysis of several algorithms, including k-nearest neighbors (KNN), logistic regression, and perceptron neural networks, is conducted to evaluate their efficiency in detecting and classifying different types of network intrusions such as denial of service (DoS), probe, user to root (U2R), and remote to local (R2L). Utilizing the NSL-KDD dataset, a standard in the field, the study examines the algorithms' ability to identify complex patterns and anomalies indicative of security breaches. Principal component analysis is used to reduce the dataset to 20 principal components for data processing efficiency. Results indicate that the neural network model is particularly effective, demonstrating exceptional performance metrics across accuracy, precision, and recall in both training and testing phases, affirming its reliability and utility in IDS. The potential for hybrid models combining different machine learning (ML) strategies is also discussed, highlighting a path towards more robust and adaptable IDS solutions.
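Of the compared algorithms, KNN is the simplest to sketch from scratch: classify a traffic record by majority vote among its k nearest labelled neighbours. The toy feature vectors below stand in for the 20 principal components; they are illustrative, not NSL-KDD records.

```python
from collections import Counter
import math

def knn_predict(train_x, train_y, query, k=3):
    """Majority vote among the k nearest neighbours by Euclidean distance."""
    dists = sorted(
        (math.dist(query, x), y) for x, y in zip(train_x, train_y))
    votes = Counter(label for _, label in dists[:k])
    return votes.most_common(1)[0][0]
```

In the study's setting, `train_x` would hold PCA-reduced training records and the labels would be the attack categories (DoS, probe, U2R, R2L) plus normal traffic.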
A novel multi-objective economic load dispatch solution using bee colony optimization method
This article presents a novel multi-objective economic load dispatch solution using the bee colony optimization method. The purposes of this research are to find the lowest total power generation cost and the lowest total power loss on the transmission lines. A swarm optimization method was used to account for the non-smooth fuel cost characteristics of the generators. The constraints of economic load dispatch include the cost function, generator operating limits, power losses, and load demand. The suggested approach is evaluated on IEEE 5-, 26-, and 118-bus systems with 3, 6, and 15 generating units at 300, 1,263, and 2,630 megawatts (MW), using a simulation in MATLAB to confirm its effectiveness. The simulation outcomes are compared with those of the exchange market algorithm, the cuckoo search algorithm, the bat algorithm, hybrid bee colony optimization, multi-bee colony optimization, the decentralized approach, differential evolution, social spider optimization, and grey wolf optimization. The comparison demonstrates that the suggested approach can provide a better-quality result faster than the traditional approaches.
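The objective such an optimizer minimizes is typically the quadratic fuel cost per unit plus a penalty for violating the demand balance. A sketch of that cost evaluation, with illustrative coefficients and demand rather than the IEEE test-system data (losses are ignored here for brevity):

```python
def dispatch_cost(outputs_mw, coeffs, demand_mw, penalty=1000.0):
    """Total fuel cost a_i + b_i*P + c_i*P^2 summed over units, plus a
    penalty proportional to the power-balance mismatch."""
    fuel = sum(a + b * p + c * p * p
               for p, (a, b, c) in zip(outputs_mw, coeffs))
    mismatch = abs(sum(outputs_mw) - demand_mw)
    return fuel + penalty * mismatch
```

A bee colony optimizer would evaluate this cost for each candidate dispatch (a vector of unit outputs) and steer the swarm toward cheaper, balance-feasible solutions.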