International Journal of Informatics and Communication Technology (IJ-ICT)
Quality of service optimization for 4G LTE upload and download throughput
Demand for mobile data services and people’s dependence on 4G LTE networks continue to increase. However, the quality of service (QoS) of these networks still requires improvement, especially regarding the effect of QoS on throughput at specific frequencies. The research gap lies in the lack of in-depth analysis of the impact of QoS parameters on network performance at frequencies of 2,100 MHz and 2,300 MHz. This study evaluates the effect of QoS parameters, such as delay, jitter, and packet loss, on throughput in 4G LTE networks at both frequencies. The research methodology uses an experimental approach with throughput, delay, jitter, and packet loss measurements under various network conditions. The results showed that delay (17.2174 ms to 37.0322 ms), jitter, and packet loss significantly influence throughput, which ranged from 624.5 Kbps to 1,322.4 Kbps. The 2,100 MHz frequency tends to show better performance than 2,300 MHz. This study concludes that optimizing QoS parameters, especially delay and jitter, can significantly improve 4G LTE network performance. These findings offer practical contributions for mobile operators seeking to improve network quality and customer satisfaction, and open opportunities for further research on other frequencies or newer network technologies.
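The delay, jitter, packet loss, and throughput figures above can all be derived from per-packet timestamps. A minimal sketch of such a computation (the function and sample values are illustrative, not the paper’s measurement setup; the jitter definition is a simplified mean of consecutive delay differences in the spirit of RFC 3550):

```python
def qos_metrics(send_times, recv_times, bytes_received):
    """Compute average delay (ms), jitter (ms), packet loss (%), and throughput
    (Kbps) from per-packet send/receive timestamps in seconds.
    A lost packet is marked with recv_time None."""
    delays = [r - s for s, r in zip(send_times, recv_times) if r is not None]
    avg_delay_ms = 1000 * sum(delays) / len(delays)
    # Jitter: mean absolute difference between consecutive packet delays
    jitter_ms = 1000 * sum(abs(b - a) for a, b in zip(delays, delays[1:])) / max(len(delays) - 1, 1)
    loss_pct = 100 * recv_times.count(None) / len(send_times)
    duration = max(t for t in recv_times if t is not None) - min(send_times)
    throughput_kbps = bytes_received * 8 / 1000 / duration
    return avg_delay_ms, jitter_ms, loss_pct, throughput_kbps
```

In a real drive test the timestamps would come from a capture tool rather than hand-entered lists, but the arithmetic is the same.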
A system architecture for mixed reality systems in vocational schools in Indonesia
In Indonesia, vocational schools are less favored than K-12 schools. Unfortunately, graduates from vocational schools do not fulfill the minimum requirements set by industries, particularly in the current era of industry revolution 5.0. This revolution aims to establish society 5.0, where humans and robots collaborate closely to achieve improved work outcomes. One technique to enhance the proficiency of graduates and prepare them for the workforce is to implement a mixed reality (MR) system, which can effectively address a multitude of issues and significantly enhance the caliber of graduates. Before implementing an MR system, however, it is necessary to create system architecture diagrams to ensure that the system can be utilized not only in specific schools but in any vocational school in Indonesia. This study comprises 5 participants, including experts from both the professional and academic fields, who possess extensive knowledge in the domains of the metaverse, MR systems, and information systems. The methodology employed in this study draws inspiration from James Martin’s rapid application development (RAD). The result of this study is a validated system architecture diagram, endorsed by experts, which depicts a metaverse-based MR system designed specifically for vocational schools in Indonesia.
Techniques of deep learning neural network-based building feature extraction from remote sensing images: a survey
Recently, earthquake disasters have cost many people their lives and homes, leaving them unable to settle in new locations immediately. Therefore, a framework or plan should be ready to relocate people to different locations or carry out resettlement. Much research has been done in this field, but problems remain in identifying clear building boundaries and rectangular houses, owing to the varied shapes of buildings. The surveyed techniques were explored for identifying clear building boundaries, rectangular houses, more highlighted buildings, and smaller buildings in pre-disaster and post-disaster building extraction scenarios. In this survey of building extraction techniques, the most common approach is training the network; a second is refining the trained output features; a third is running the trained samples on predefined neural network models. Several issues and their assessment are studied across these techniques. These findings are beneficial to researchers working on different building extraction tasks.
Srvycite: a hybrid scientific article recommendation system
A recommendation system has become part of everyday work, reducing users’ effort in searching for needed items by recommending new items that may be useful. This theme has also been applied to research article recommendation systems, which recommend articles of interest to researchers from the bulk of digital research documents spread across different databases on the internet. To ease this article recommendation process, we propose a novel approach, Srvycite, which utilizes the survey article citation network along with the original research article network. The purpose of utilizing the survey article citation network is to detect the most influential articles, those considered important by other researchers in the same field. The Srvycite approach utilizes the text and meta features of articles to recommend papers. To preprocess the text features, we employ Word2Vec and bidirectional encoder representations from transformers (BERT) for vectorization. Citation and survey citation graphs are then generated to find the most influential nodes. The weighted text similarity score is finally computed by combining the cited-by values and the text similarity scores from the citation and survey citation graphs to list articles as recommendations for the user. This system is shown to improve article recommendation accuracy by 3.8 and 2.1 on the precision and recall measures, respectively.
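The final scoring step, blending text similarity with citation influence from the two graphs, could be sketched roughly as follows (the `alpha` blend weight and the log scaling of cited-by counts are illustrative assumptions, not the paper’s exact formula):

```python
import math

def cosine(u, v):
    """Cosine similarity between two dense text vectors (e.g. Word2Vec or BERT)."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm

def weighted_score(query_vec, cand_vec, cited_by, survey_cited_by, alpha=0.7):
    """Blend text similarity with log-scaled cited-by counts from the citation
    graph and the survey citation graph; higher scores rank higher."""
    influence = math.log1p(cited_by) + math.log1p(survey_cited_by)
    return alpha * cosine(query_vec, cand_vec) + (1 - alpha) * influence
```

Candidates would then simply be sorted by `weighted_score` in descending order to produce the recommendation list.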
A hybrid framework for enhanced intrusion detection in cloud environments leveraging autoencoder
In today’s world, the significance of network security and cloud environments has grown. The rising demand for data transmission, along with the versatility of cloud-based solutions and widespread availability of global resources, are key drivers of this growth. In response to rapidly evolving threats and malicious attacks, developing a robust intrusion detection system (IDS) is essential. This study addresses imbalanced data and utilizes an unsupervised learning approach to protect network data. The suggested hybrid framework employs the CIC-IDS2017 dataset, integrating methods for handling imbalanced data with unsupervised learning to enhance security. Following preprocessing, principal component analysis (PCA) reduces the dimensionality from eighty features to twenty-three. The extracted features are input into density-based spatial clustering of applications with noise (DBSCAN), a clustering algorithm, which particle swarm optimization (PSO) optimizes, grouping similar traffic and enhancing classification. To address imbalance in the learning process, the autoencoder (AE) algorithm provides the unsupervised learning component. The data from the clusters is input into the AE, a deep learning algorithm, which classifies traffic as normal or an attack. The proposed approach (PCA+DBSCAN+AE) attains remarkable intrusion detection accuracy exceeding 98% and outperforms five contemporary methodologies.
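The PCA stage of the pipeline, projecting the eighty raw features onto a smaller set of principal components, can be sketched with plain NumPy (this is a generic SVD-based PCA, not the paper’s exact preprocessing code; the downstream DBSCAN and AE stages are omitted):

```python
import numpy as np

def pca_reduce(X, n_components):
    """Project feature matrix X (samples x features) onto its top principal
    components via SVD, as the dimensionality-reduction step of the pipeline."""
    Xc = X - X.mean(axis=0)                         # center each feature
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:n_components].T                 # scores in the reduced space
```

In the study this step would map the CIC-IDS2017 feature matrix from eighty columns down to twenty-three before clustering.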
Enhanced n-party Diffie Hellman key exchange algorithm using the divide and conquer algorithm
Cryptographic algorithms guarantee data and information security over a communication system against unauthorized users or intruders. Numerous encryption techniques have been employed to safeguard this data and information from hackers. By supplying a distinct shared secret key, the n-party Diffie-Hellman key exchange approach has been used to protect data from hackers. With its quadratic time complexity, the n-party Diffie-Hellman method is slow when multiple users use the cryptographic key interchange system. To solve this issue, the researchers generated an effective shared secret key for the n-party Diffie-Hellman key exchange of a cryptographic system using the divide-and-conquer strategy. The current research recommends the divide-and-conquer algorithm, which breaks the main problem down into smaller subproblems until it reaches the base solution, which is then merged to generate the solution of the main problem. The comparative analysis indicates that the developed system generates a shared secret key faster than the current n-party Diffie-Hellman system.
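The divide-and-conquer derivation of the n-party shared key g^(x1·x2·…·xn) mod p can be sketched as follows (toy parameters for clarity; real deployments would use large safe primes, and each recursive half would be computed by the parties holding those secrets rather than in one place):

```python
def group_key(base, secrets, p):
    """Derive the n-party Diffie-Hellman shared key g^(x1*x2*...*xn) mod p
    by recursively splitting the participant list (divide) and chaining the
    partial keys (conquer)."""
    if len(secrets) == 1:
        return pow(base, secrets[0], p)             # base case: one exponentiation
    mid = len(secrets) // 2
    left = group_key(base, secrets[:mid], p)        # partial key of first half
    return group_key(left, secrets[mid:], p)        # second half builds on it
```

Because each half is processed independently before merging, the recursion tree has logarithmic depth, which is the source of the claimed speedup over the sequential n-party exchange.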
Advanced optimization load frequency control for multi-islanded microgrid system with tie-line loading by using PSO
This manuscript presents the design of a microgrid featuring solar and wind as uncontrollable energy sources, alongside controllable sources like batteries and a diesel generator, aiming to address power supply variations resulting from load fluctuations. Controllers are imperative to mitigate these challenges, and the manuscript emphasizes the need for precise tuning of gain values for optimal electrical energy utilization. In lieu of the trial-and-error approach, particle swarm optimization (PSO) is employed for an enhanced steady-state response in the microgrid. The study also introduces the application of proportional-integral (PI), proportional-integral-derivative (PID), and PID with feed-forward (PIDF) controllers to effectively address and resolve the identified issues, ensuring improved system performance and consistent power supply stability in the microgrid system.
Unit commitment problem solved with adaptive particle swarm optimization
This article presents an innovative approach that solves the problem of generation scheduling by supplying all possible operating states for generating units over the given daily time schedule. The scheduling variables are set up to code the load demand as an integer each day. The proposed adaptive particle swarm optimization (APSO) technique is used to solve the generation scheduling issue by an optimization method that considers production as well as transitory costs. The system and generator constraints are considered when solving the problem, including minimum and maximum uptime and downtime as well as the amount of energy produced by each generating unit (such as capacity reserves). This paper describes how the suggested algorithm can be applied to unit commitment problems with wind and heat units. Test systems with 26 and 10 units are used to validate the suggested algorithm.
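Inside any such optimizer, each candidate commitment schedule must be checked against the constraints and priced. A simplified single-hour evaluation (linear per-MW costs stand in for the quadratic production-cost curves and transitory costs the paper considers, and uptime/downtime constraints are omitted):

```python
def schedule_cost(on, p_min, p_max, cost_per_mw, demand):
    """Evaluate one hour of a unit-commitment schedule: dispatch committed
    units in merit order (cheapest first) to meet demand, returning the total
    cost, or None if the committed capacity cannot serve the demand."""
    units = sorted((i for i in range(len(on)) if on[i]),
                   key=lambda i: cost_per_mw[i])
    if sum(p_max[i] for i in units) < demand or sum(p_min[i] for i in units) > demand:
        return None                                   # infeasible commitment
    remaining = demand - sum(p_min[i] for i in units)  # all units start at p_min
    total = sum(p_min[i] * cost_per_mw[i] for i in units)
    for i in units:                                   # raise cheapest units first
        extra = min(remaining, p_max[i] - p_min[i])
        total += extra * cost_per_mw[i]
        remaining -= extra
    return total
```

An APSO-style search would evaluate many such binary `on` vectors per hour and keep the cheapest feasible ones.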
AI-based federated learning for heart disease prediction: a collaborative and privacy-preserving approach
People with conditions like diabetes, high blood pressure, and high cholesterol are at increased risk for heart disease and stroke as they get older. To mitigate this threat, predictive models leveraging machine learning (ML) and artificial intelligence (AI) have emerged as valuable tools; however, heart disease prediction is a complex task, and diagnosis outcomes are rarely accurate. Currently, existing ML techniques require data to be held in centralized locations to detect heart disease, where it can be easily accessed. This review introduces federated learning (FL) to answer data privacy challenges in heart disease prediction. FL, a collaborative technique pioneered by Google, trains algorithms across independent sessions using local datasets. This paper investigates recent ML methods and databases for predicting cardiovascular disease (heart attack). Previous research explores algorithms like region-based convolutional neural network (RCNN), convolutional neural network (CNN), and federated logistic regression (FLR) for heart and other disease prediction. FL allows the training of a collaborative model while keeping patient information spread out among various sites, ensuring privacy and security. This paper explores the efficacy of FL in enhancing the accuracy of cardiovascular disease (CVD) prediction models while preserving data privacy across distributed datasets.
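The core aggregation step of FL, combining locally trained models without moving patient records, is commonly done with federated averaging (FedAvg). A minimal sketch (the flat weight lists and dataset sizes are illustrative; real systems aggregate full tensors over a network, often with secure aggregation on top):

```python
def fed_avg(client_weights, client_sizes):
    """Federated averaging: combine per-client model weights into a global
    model, weighting each client by its local dataset size. Only weights are
    shared; raw patient data stays on each client."""
    total = sum(client_sizes)
    n_params = len(client_weights[0])
    return [sum(w[j] * s for w, s in zip(client_weights, client_sizes)) / total
            for j in range(n_params)]
```

Each communication round, clients train locally for a few epochs, send their updated weights, and receive this weighted average back as the new global model.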
Multilingual hate speech detection using deep learning
The rise of social media has enabled public expression but also fueled the spread of hate speech, contributing to social tensions and potential violence. Natural language processing (NLP), particularly text classification, has become essential for detecting hate speech. This study develops a hate speech detection model on Twitter using FastText with bidirectional long short-term memory (Bi-LSTM) and explores multilingual bidirectional encoder representations from transformers (M-BERT) for handling diverse languages. Data augmentation techniques, including easy data augmentation (EDA) methods, back translation, and generative adversarial networks (GANs), are employed to enhance classification, especially for imbalanced datasets. Results show that data augmentation significantly boosts performance. The highest F1-scores are achieved by random insertion for Indonesian (F1-score: 0.889, accuracy: 0.879), synonym replacement for English (F1-score: 0.872, accuracy: 0.831), and random deletion for German (F1-score: 0.853, accuracy: 0.830) with the FastText + Bi-LSTM model. The M-BERT model performs best with random deletion for Indonesian (F1-score: 0.898, accuracy: 0.880), random swap for English (F1-score: 0.870, accuracy: 0.866), and random deletion for German (F1-score: 0.662, accuracy: 0.858). These findings underscore that data augmentation effectiveness varies by language and model. This research supports efforts to mitigate hate speech’s impact on social media by advancing multilingual detection capabilities.
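Of the EDA operations mentioned, random deletion is the simplest to illustrate: each token is dropped with some probability to create a new training example. A minimal sketch (the deletion probability is an illustrative hyperparameter, not the study’s value):

```python
import random

def random_deletion(tokens, p=0.2, seed=0):
    """EDA-style random deletion: drop each token with probability p,
    keeping at least one token so the augmented example is never empty."""
    rng = random.Random(seed)
    kept = [t for t in tokens if rng.random() > p]
    return kept if kept else [rng.choice(tokens)]
```

Random insertion, swap, and synonym replacement follow the same pattern of small token-level perturbations; which one helps most evidently depends on the language and model, as the results above show.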