International Journal of Electrical and Computer Engineering (IJECE)
Tomographic image reconstruction enhancement through median filtering and K-means clustering
Ultrasound tomography is a powerful and widely utilized imaging technique in the field of medical diagnostics. Its non-invasive nature and high sensitivity in detecting small objects make it an invaluable tool for healthcare professionals. However, a significant challenge associated with ultrasound tomography is that the reconstructed images often contain noise. This noise can severely compromise the accuracy and interpretability of the diagnostic information derived from these images. In this paper, we propose and rigorously evaluate the application of a median filter to address and mitigate noise artifacts in the reconstructed images obtained through the distorted Born iterative method (DBIM). The primary aim is to enhance the quality of these images and thereby improve diagnostic reliability. The effectiveness of our proposed noise reduction approach is quantitatively assessed using the normalized error evaluation metric, which provides a precise measure of improvement in image quality. Furthermore, to enhance the interpretability and utility of the reconstructed images, we incorporate a basic machine learning technique known as K-means clustering. This method is employed to automatically segment the reconstructed images into distinct regions that represent objects, background, and noise. Hence, it facilitates a clearer delineation of different components within the images. Our results demonstrate that K-means clustering, when applied to images processed with the proposed median filter method, effectively delineates these regions with a significant reduction of noise. This combination not only enhances image clarity but also ensures that critical diagnostic details are preserved and more easily interpreted by medical professionals. The substantial reduction in noise achieved through our approach underscores its potential for improving the accuracy and reliability of ultrasound tomography in medical diagnostics.
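As a rough illustration of the post-processing pipeline described in this abstract (not the authors' DBIM implementation), the sketch below median-filters a noisy 2-D reconstruction and then segments it into object, background, and noise regions with K-means; the synthetic phantom, 3x3 filter window, cluster count, and error metric details are assumptions.

```python
# Hedged sketch: median filtering + K-means segmentation of a reconstructed image.
# The synthetic phantom, 3x3 filter window, and k=3 clusters are illustrative assumptions.
import numpy as np
from scipy.ndimage import median_filter
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)

# Synthetic "reconstruction": a circular object on a background, plus additive noise.
x, y = np.meshgrid(np.linspace(-1, 1, 64), np.linspace(-1, 1, 64))
clean = np.where(x**2 + y**2 < 0.3, 1.0, 0.0)
noisy = clean + 0.3 * rng.standard_normal(clean.shape)

# Step 1: median filter to suppress impulsive noise while preserving object edges.
filtered = median_filter(noisy, size=3)

# Step 2: K-means clustering of pixel intensities into object / background / noise regions.
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(
    filtered.reshape(-1, 1)
).reshape(filtered.shape)

# A simple normalized error metric comparing the filtered result with the clean image.
norm_err = np.linalg.norm(filtered - clean) / np.linalg.norm(clean)
print(f"normalized error after filtering: {norm_err:.3f}")
```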
To ensure public safety, internet of things and convolutional neural network algorithm for a surveillance system enabled with 5G
Public safety and security are top priorities in a constantly urbanizing society, and this research develops and implements a smart surveillance system that uses fifth-generation (5G) wireless communication and internet of things (IoT) technologies to improve public safety. It builds a comprehensive and responsive monitoring solution using machine learning methods, especially convolutional neural networks (CNNs). IoT devices, including high-definition cameras, environmental sensors, and drones, are carefully deployed in urban centers, transit hubs, and essential infrastructure. These devices stream data to a central processing unit over the 5G network, and CNNs analyze the incoming data in real time. The CNNs are trained to recognize objects, anomalies, faces, and license plates. These tasks help the system identify risks, unusual activities, and persons of interest, and warn authorities of irregularities and security issues in real time, simplifying emergency responses. Predictive analytics examines historical data to forecast security issues, enabling preventative measures, while data are protected by strict privacy safeguards. According to this analysis, 5G-enabled IoT surveillance systems and machine learning can improve public safety, situational awareness, and emergency response times, and the approach ensures that security advancements respect privacy and integrity.
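For orientation only, the following is a minimal, generic CNN classifier of the kind the abstract describes for analyzing camera frames; the input size, layer widths, and the four example classes are assumptions, not the paper's architecture.

```python
# Hedged sketch: a small CNN for classifying frames from IoT camera feeds.
# Input shape, layer sizes, and the 4 example classes are assumptions for illustration only.
import tensorflow as tf

num_classes = 4  # e.g. normal scene, intrusion, crowd anomaly, vehicle of interest (assumed)

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(128, 128, 3)),          # frames resized from camera feeds
    tf.keras.layers.Conv2D(32, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(64, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dense(num_classes, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy", metrics=["accuracy"])
model.summary()
```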
An efficient direction-oriented block-based video inpainting using morphological operations and adaptively dimensioned search region with direction-oriented block-based inpainting
Video inpainting is a technique in computer vision used to remove unwanted objects from video sequences while preserving visual consistency, so that modifications remain unnoticeable to the human eye. This paper presents an accurate video inpainting model based on the adaptively dimensioned search region with direction-oriented block-based inpainting (ADSR-DOBI) algorithm. The model operates in five main phases: preprocessing, background separation, morphological operations, object removal, and video inpainting. Initially, the input video is converted into frames, followed by preprocessing steps such as denoising and resizing. These frames are then processed using a background subtraction module, where object localization and foreground detection are performed using the binomially distributed foreground segmentation network (BDFgSegNet) and morphological techniques. This results in segmented foreground objects tracked across frames. The object removal phase eliminates the identified foreground objects and defines the missing regions (holes) to be filled. The ADSR-DOBI algorithm is then applied to inpaint these regions seamlessly. Experimental results demonstrate that this approach outperforms existing state-of-the-art methods in both accuracy and efficiency.
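A generic frame-wise pipeline of the kind outlined above (background subtraction, morphological clean-up, object removal, hole filling) can be sketched with OpenCV as follows; this is a stand-in using standard OpenCV operators, not the ADSR-DOBI or BDFgSegNet methods, and the video path and structuring-element size are assumptions.

```python
# Hedged sketch of a generic pipeline (not the ADSR-DOBI algorithm itself):
# background subtraction -> morphological clean-up -> object mask -> inpaint the hole.
import cv2
import numpy as np

cap = cv2.VideoCapture("input_video.mp4")          # assumed input file
bg_sub = cv2.createBackgroundSubtractorMOG2(history=200, varThreshold=25)
kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))

while True:
    ok, frame = cap.read()
    if not ok:
        break
    fg_mask = bg_sub.apply(frame)                  # foreground (moving object) detection
    # Morphological opening/closing to remove speckle and fill small gaps in the mask.
    fg_mask = cv2.morphologyEx(fg_mask, cv2.MORPH_OPEN, kernel)
    fg_mask = cv2.morphologyEx(fg_mask, cv2.MORPH_CLOSE, kernel)
    hole = (fg_mask > 0).astype(np.uint8) * 255    # region to be removed and filled
    # Inpainting of the hole (OpenCV's Telea method as a simple stand-in).
    inpainted = cv2.inpaint(frame, hole, inpaintRadius=3, flags=cv2.INPAINT_TELEA)
cap.release()
```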
On the design of small-sized arrays for direction-of-arrival estimation taking into account antenna gains
In this paper, a technique for designing antenna arrays composed of directional elements for direction-of-arrival (DOA) estimation is proposed. In particular, the approach is applied to developing hybrid antenna arrays with increased accuracy, which perform digital spatial spectral estimation after preliminary analog beamforming. A previously obtained explicit formula for the Cramér–Rao lower bound (CRLB), which relates the variance of the DOA estimate to the radiation patterns of the antenna elements and the array geometry, is used. The main idea of the proposed technique is that it takes into account the spatial pattern and gain of the antenna elements. Unlike the number of antenna elements or the inter-element distance, high element gain is the most important factor in reducing DOA-estimation errors. A couple of examples of calculating radiation patterns of antenna elements that improve the accuracy of DOA estimation with super-resolution are provided in the paper. The proposed antenna arrays are modeled using the method of moments (MoM). The root-mean-square error values of the DOA estimates are obtained. It is shown that the resulting hybrid systems can reduce the error of DOA estimation with super-resolution.
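As background for the digital spatial-spectrum estimation step mentioned in the abstract, the sketch below implements the standard MUSIC super-resolution DOA estimator on a uniform linear array; this is a textbook method shown for context, not the paper's hybrid design or its CRLB formula, and the array size, spacing, source angle, and noise level are assumptions.

```python
# Hedged sketch: MUSIC super-resolution DOA estimation on a uniform linear array.
# Element count, spacing, true angle, snapshot count, and SNR are illustrative assumptions.
import numpy as np

M, d, snapshots = 8, 0.5, 200             # elements, spacing in wavelengths, snapshots
true_deg = 20.0
rng = np.random.default_rng(1)

def steering(theta_deg):
    theta = np.deg2rad(theta_deg)
    return np.exp(2j * np.pi * d * np.arange(M) * np.sin(theta))

# Simulated snapshots: one narrowband source plus complex white noise.
s = rng.standard_normal(snapshots) + 1j * rng.standard_normal(snapshots)
X = np.outer(steering(true_deg), s) + 0.1 * (
    rng.standard_normal((M, snapshots)) + 1j * rng.standard_normal((M, snapshots))
)

R = X @ X.conj().T / snapshots            # sample covariance matrix
eigvals, eigvecs = np.linalg.eigh(R)      # eigenvalues in ascending order
En = eigvecs[:, :-1]                      # noise subspace (one source assumed)

grid = np.arange(-90, 90, 0.1)
spectrum = np.array([1.0 / np.linalg.norm(En.conj().T @ steering(t)) ** 2 for t in grid])
print("estimated DOA:", grid[np.argmax(spectrum)], "deg")
```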
Human motion classification by micro-Doppler radar using intelligent algorithms
This article introduces a technique for detecting four human movements using micro-Doppler radar and intelligent algorithms. Micro-Doppler radar can detect and measure object movements in intricate detail, even capturing complex or non-rigid motions, while accurately identifying direction, velocity, and motion patterns. The application of intelligent algorithms enhances detection efficiency and reduces false alarms by discerning subtle movement patterns, thereby facilitating more accurate detection and a deeper understanding of observed object dynamics. A continuous-wave radar setup was implemented using a spectrum analyzer and a radio frequency (RF) generator, capturing signals in a spectrogram centered at 2,395 MHz. Six models were assessed for image classification: VGG-16, VGG-19, MobileNet, MobileNet V2, Xception, and Inception V3. A dataset comprising 500 images depicting four movements (running, walking, arm raising, and jumping) was curated. Our findings reveal that the best architecture in terms of training time, accuracy, and loss is VGG-16, achieving an accuracy of 96%. Furthermore, precision values of 96%, 100%, and 98% were obtained for the movements of walking, running, and arm raising, respectively. Notably, VGG-16 exhibited a training loss of 4.191E-04, attributed to the use of the Adam optimizer with a learning rate of 0.001 over 15 epochs and a batch size of 32.
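The training setup quoted in the abstract (VGG-16, Adam, learning rate 0.001, 15 epochs, batch size 32) can be sketched as a standard transfer-learning run on spectrogram images; the dataset directory, image size, and classifier head below are assumptions, not the authors' exact configuration.

```python
# Hedged sketch: VGG-16 transfer learning for 4-class spectrogram classification,
# mirroring the hyperparameters quoted in the abstract; data path and head are assumed.
import tensorflow as tf

base = tf.keras.applications.VGG16(include_top=False, weights="imagenet",
                                   input_shape=(224, 224, 3))
base.trainable = False                      # use VGG-16 as a frozen feature extractor

model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dense(4, activation="softmax"),   # running, walking, arm raising, jumping
])
model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=0.001),
              loss="categorical_crossentropy", metrics=["accuracy"])

# Assumed directory of labelled spectrogram images, one sub-folder per movement class.
train_ds = tf.keras.utils.image_dataset_from_directory(
    "spectrograms/", image_size=(224, 224), batch_size=32, label_mode="categorical")
model.fit(train_ds, epochs=15)
```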
A novel multi-objective economic load dispatch solution using bee colony optimization method
This article presents a novel multi-objective economic load dispatch solution using the bee colony optimization method. The purposes of this research are to find the lowest total power generation cost and the lowest total power loss in the transmission lines. A swarm optimization method was used to account for the non-smooth fuel cost characteristics of the generators. The constraints of economic load dispatch include the cost function, the generator operating limits, power losses, and load demand. The suggested approach is evaluated on IEEE 5-, 26-, and 118-bus systems with 3, 6, and 15 generating units and load demands of 300, 1,263, and 2,630 megawatts (MW), respectively, using simulations in MATLAB to confirm its effectiveness. The simulation outcomes are compared with those of the exchange market algorithm, the cuckoo search algorithm, the bat algorithm, the hybrid bee colony optimization, the multi-bee colony optimization, the decentralized approach, the differential evolution, the social spider optimization, and the grey wolf optimization. The comparison demonstrates that the suggested approach can provide better-quality results faster than the traditional approaches.
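To make the economic load dispatch formulation concrete, the sketch below runs a much-simplified bee-colony-style search for a 3-unit system with quadratic fuel costs and a power-balance penalty; the cost coefficients, unit limits, demand, and colony parameters are invented for illustration and are not the paper's test data or its exact algorithm.

```python
# Hedged sketch: a simplified bee-colony-style search for economic load dispatch.
# Coefficients, limits, demand, and colony settings are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(2)
a = np.array([0.008, 0.009, 0.007])    # $/MW^2 (assumed)
b = np.array([7.0, 6.3, 6.8])          # $/MW
c = np.array([200.0, 180.0, 140.0])    # $
p_min = np.array([10.0, 10.0, 10.0])
p_max = np.array([125.0, 150.0, 100.0])
demand = 300.0                          # MW

def cost(p):
    fuel = np.sum(a * p**2 + b * p + c)
    penalty = 1e4 * abs(np.sum(p) - demand)      # enforce the power-balance constraint
    return fuel + penalty

# Initialise food sources (candidate dispatches) and apply greedy neighbourhood moves.
n_bees, n_iter = 30, 500
foods = rng.uniform(p_min, p_max, size=(n_bees, 3))
for _ in range(n_iter):
    for i in range(n_bees):
        k = rng.integers(n_bees)                 # random partner solution
        phi = rng.uniform(-1, 1, 3)
        trial = np.clip(foods[i] + phi * (foods[i] - foods[k]), p_min, p_max)
        if cost(trial) < cost(foods[i]):         # greedy selection
            foods[i] = trial

best = foods[np.argmin([cost(p) for p in foods])]
print("dispatch (MW):", np.round(best, 2), "total cost ($):", round(cost(best), 2))
```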
Integration of web scraping, fine-tuning, and data enrichment in a continuous monitoring context via large language model operations
This paper presents and discusses a framework that leverages large language models (LLMs) for data enrichment and continuous monitoring, emphasizing its essential role in optimizing the performance of deployed models. It introduces a comprehensive large language model operations (LLMOps) methodology based on continuous monitoring and continuous improvement of the data, the primary determinant of the model, in order to optimize the prediction of a given phenomenon. To this end, we first examine the use of real-time web scraping with tools such as Kafka and Spark Streaming for data acquisition and processing. In addition, we explore the integration of LLMOps for complete lifecycle management of machine learning models. Focusing on continuous monitoring and improvement, we highlight the importance of this approach for ensuring optimal performance of deployed models based on data and machine learning (ML) model monitoring. We also illustrate this methodology through a case study based on real data from several real estate listing sites, demonstrating how MLflow can be integrated into an LLMOps pipeline to guarantee complete development traceability, proactive detection of performance degradation, and effective model lifecycle management.
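As a small illustration of the MLflow-based traceability mentioned in the abstract, the sketch below logs a (hypothetical) retraining run; the experiment name, parameter names, and metric values are placeholders, not details of the paper's pipeline.

```python
# Hedged sketch: logging a retraining run with MLflow for traceability in a monitoring loop.
# Experiment name, parameters, and metric values are placeholders (assumptions).
import mlflow

mlflow.set_experiment("real-estate-price-monitoring")   # assumed experiment name

with mlflow.start_run(run_name="weekly-retrain"):
    # Parameters of the (hypothetical) data-enrichment / fine-tuning step.
    mlflow.log_param("data_window_days", 7)
    mlflow.log_param("base_model", "some-llm-checkpoint")  # placeholder identifier

    # Metrics produced by evaluating the refreshed model on held-out listings.
    mlflow.log_metric("mae_price", 10450.0)
    mlflow.log_metric("drift_score", 0.12)

    # Artifacts (e.g. an enriched dataset snapshot) can be attached for full traceability.
    # mlflow.log_artifact("data/enriched_listings.parquet")
```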
A comparative study of deep learning-based network intrusion detection system with explainable artificial intelligence
In the rapidly evolving landscape of cybersecurity, robust network intrusion detection systems (NIDS) are crucial to countering increasingly sophisticated cyber threats, including zero-day attacks. Deep learning approaches in NIDS offer promising improvements in intrusion detection rates and reductions in false positives. However, the inherent opacity of deep learning models presents significant challenges, hindering understanding of and trust in their decision-making processes. This study explores the efficacy of explainable artificial intelligence (XAI) techniques, specifically Shapley additive explanations (SHAP) and local interpretable model-agnostic explanations (LIME), in enhancing the transparency and trustworthiness of NIDS. With the implementation of the TabNet architecture on the AWID3 dataset, a remarkable accuracy of 99.99% is achieved. Despite this high performance, concerns regarding the interpretability of the TabNet model's decisions persist. By employing SHAP and LIME, this study aims to elucidate model interpretability, focusing on both global and local aspects of the TabNet model's decision-making processes. Ultimately, this study underscores the pivotal role of XAI in improving understanding and fostering trust in deep learning-based NIDS. The robustness of the model is also tested by adding noise to the datasets at varying signal-to-noise ratios (SNR).
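For readers unfamiliar with SHAP, the sketch below computes global and local explanations for a tabular intrusion-detection classifier; a scikit-learn gradient-boosting model is used as a stand-in for TabNet, and the synthetic features and labels are placeholders for the AWID3 data.

```python
# Hedged sketch: SHAP explanations for a tabular intrusion-detection classifier.
# GradientBoostingClassifier stands in for TabNet; data are synthetic placeholders.
import numpy as np
import shap
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(3)
X = rng.standard_normal((500, 10))                  # placeholder "flow features"
y = (X[:, 0] + 0.5 * X[:, 3] > 0).astype(int)       # placeholder attack / benign label

model = GradientBoostingClassifier().fit(X, y)

# Global explanation: mean |SHAP| per feature ranks which features drive detections overall.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)
print("global importance:", np.abs(shap_values).mean(axis=0).round(3))

# Local explanation: per-feature contributions for a single flagged flow.
print("local attribution for sample 0:", np.round(shap_values[0], 3))
```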
Q-learning based active monitoring with weighted least connection round robin load balancing principle for serverless computing
Serverless computing is considered one of the most promising technologies for real-time applications, with function as a service (FaaS) managing service requests in serverless computing. Load balancing plays a vital role in assigning tasks in serverless computing: user requests are controlled by load balancing algorithms and managed using machine learning techniques to deliver results and performance metrics within specified time limits. All serverless computing applications aim to achieve optimal performance through effective load balancing techniques, which direct requests to the appropriate servers in a timely manner. This research focuses on developing a novel Q-learning based active monitoring with weighted least connection round robin load balancing principle (Q-LAMWLR LB) for serverless computing to address this challenge. It also aims to intelligently assign requests in serverless computing based on the number of requests arriving at the load balancer and how intelligently they can be directed to the appropriate server. This work uses standard techniques to calculate the average response time for each scheduling algorithm and to develop a novel intelligent load-balancing technique in serverless computing. The required experiments were conducted, and the results show an improvement over other load balancing principles. Directions for further research in this area are also identified and presented.
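To illustrate the Q-learning component in the spirit of the abstract (not the authors' Q-LAMWLR formulation), the sketch below learns a tabular policy for choosing among a few servers; the state definition (binned total load), the reward (negative simulated response time), and the latency model are assumptions.

```python
# Hedged sketch: tabular Q-learning for server selection under load.
# State binning, reward definition, and latency simulation are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(4)
n_servers, n_load_bins = 3, 5
Q = np.zeros((n_load_bins, n_servers))
alpha, gamma, eps = 0.1, 0.9, 0.1
connections = np.zeros(n_servers)

def response_time(server):
    # Simulated latency grows with the server's active connections (assumed model).
    return 0.05 * (1 + connections[server]) + rng.exponential(0.02)

state = 0
for _ in range(5000):
    # Epsilon-greedy choice between exploring and the currently best-valued server.
    action = rng.integers(n_servers) if rng.random() < eps else int(np.argmax(Q[state]))
    connections[action] += 1
    reward = -response_time(action)                 # faster responses => higher reward
    next_state = min(int(connections.sum()) // 3, n_load_bins - 1)
    Q[state, action] += alpha * (reward + gamma * Q[next_state].max() - Q[state, action])
    state = next_state
    connections = np.maximum(connections - rng.integers(0, 2, n_servers), 0)  # requests finish

print("learned preferences per load level:\n", np.round(Q, 3))
```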
Optimizing switching states using a current predictive control algorithm for multilevel cascaded H-bridge converters in solar photovoltaic integration into power grids
Solar power is one of the leading solutions among renewable energy sources, and solar power plants are now being strongly invested in and developed in many regions. Converting direct current (DC) energy from photovoltaic (PV) systems to the alternating current (AC) grid is critical to the wide use of this power source at high voltage levels. This paper presents an algorithm to optimize the valve-switching process of a cascaded H-bridge (CHB) multilevel converter that transfers energy from a grid-connected PV system. A model predictive control (MPC) algorithm evaluates, before each switching cycle, the candidate switching states over future forecast cycles and applies the best one in the present cycle, thereby selecting the optimal switching state for each working cycle. This ensures the best quality of current and voltage, with a low total harmonic distortion (THD) index, for connection to the power grid. The advantages of this method are a reduced computational burden on the controller, selection of the most suitable switching state to achieve a low switching frequency, reduced losses, and improved conversion efficiency. The results are verified through simulation and evaluation in MATLAB-Simulink.
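The core idea of finite-control-set MPC described above (enumerate candidate switching states, predict their effect one step ahead, apply the one minimizing a cost) can be sketched as follows; the per-cell voltage, R-L grid model, current reference, and cost weighting are assumptions and do not reproduce the paper's converter model.

```python
# Hedged sketch: one step of finite-control-set MPC for a grid-tied 5-level CHB converter.
# Circuit parameters, reference current, and cost weighting are illustrative assumptions.
import numpy as np

Vdc, R, L, Ts = 200.0, 0.5, 5e-3, 1e-4            # assumed per-cell voltage and R-L filter
levels = np.array([-2, -1, 0, 1, 2]) * Vdc        # 5-level CHB output voltages
lam_sw = 0.01                                      # weight on switching effort

def predict_current(i_k, v_conv, v_grid):
    # Forward-Euler prediction of the grid current one sample ahead.
    return i_k + (Ts / L) * (v_conv - v_grid - R * i_k)

def mpc_step(i_k, i_ref, v_grid, prev_level_idx):
    costs = []
    for idx, v_out in enumerate(levels):
        i_pred = predict_current(i_k, v_out, v_grid)
        tracking = abs(i_ref - i_pred)             # current-tracking error
        switching = abs(idx - prev_level_idx)      # penalize jumping across many levels
        costs.append(tracking + lam_sw * switching)
    return int(np.argmin(costs))                   # index of the best switching state

# Example: one control cycle.
best = mpc_step(i_k=4.0, i_ref=5.0, v_grid=150.0, prev_level_idx=2)
print("selected output level (V):", levels[best])
```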