Jurnal Ilmu Komputer dan Informasi
Multilabel Hate Speech Classification in Indonesian Political Discourse on X using Combined Deep Learning Models with Considering Sentence Length
Hate speech, the public expression of hatred or offensive discourse targeting race, religion, gender, or sexual orientation, is widespread on social media. This study assesses BERT-based models for multi-label hate speech detection, emphasizing how text length affects model performance. The models tested are BERT, BERT-CNN, BERT-LSTM, BERT-BiLSTM, and BERT with two LSTM layers. Overall, BERT-BiLSTM achieved the highest accuracy (82.00%) and the best performance on longer texts (83.20%), with high precision and recall, highlighting its ability to capture nuanced context. BERT-CNN excelled on shorter texts, achieving the highest accuracy there (79.80%) and an F1-score of 79.10%, indicating its effectiveness at extracting features from brief content. BERT-LSTM showed balanced precision and recall across text lengths, while BERT-BiLSTM, although high in recall, had slightly lower precision on short texts due to its reliance on broader context. These results underline the importance of selecting a model based on text characteristics: BERT-BiLSTM is well suited to nuanced analysis of longer texts, while BERT-CNN better captures key features in shorter content.
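As an illustration of the combined architecture this abstract evaluates, below is a minimal PyTorch sketch of a BERT-BiLSTM multi-label classifier using Hugging Face Transformers. The checkpoint name, hidden size, and label count are assumptions for illustration, not the paper's reported configuration.

```python
# Illustrative sketch, not the authors' code: a BERT encoder feeding a
# bidirectional LSTM head for multi-label classification.
import torch.nn as nn
from transformers import AutoModel

class BertBiLSTMClassifier(nn.Module):
    # Checkpoint, hidden size, and label count are assumptions
    def __init__(self, pretrained="indobenchmark/indobert-base-p1",
                 lstm_hidden=256, num_labels=12):
        super().__init__()
        self.bert = AutoModel.from_pretrained(pretrained)
        self.bilstm = nn.LSTM(self.bert.config.hidden_size, lstm_hidden,
                              batch_first=True, bidirectional=True)
        self.classifier = nn.Linear(2 * lstm_hidden, num_labels)

    def forward(self, input_ids, attention_mask):
        # Token-level contextual embeddings from BERT
        hidden = self.bert(input_ids, attention_mask=attention_mask).last_hidden_state
        # The BiLSTM re-reads the token sequence in both directions
        out, _ = self.bilstm(hidden)
        # Mean-pool over tokens (padding ignored here for brevity),
        # then project to one independent logit per label
        return self.classifier(out.mean(dim=1))

# Multi-label training pairs these logits with a per-label sigmoid:
# loss = nn.BCEWithLogitsLoss()(logits, multi_hot_targets)
```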
Efficient Design and Compression of CNN Models for Rapid Character Recognition
Convolutional Neural Networks (CNNs) are extensively utilized for image processing and recognition tasks; however, they often encounter challenges related to large model sizes and prolonged training times. These limitations present difficulties in resource-constrained environments that require rapid model deployment and efficient computation. This study introduces a systematic approach to designing lightweight CNN models specifically for character recognition, emphasizing the reduction of model complexity, training duration, and computational cost without sacrificing performance. Techniques such as hyperparameter tuning, model pruning, and post-training quantization (PTQ) are employed to decrease model size and enhance training speed. The proposed methods are particularly well suited for deployment on edge computing platforms, such as the Raspberry Pi, or embedded systems with limited resources. Our results demonstrate a reduction of over 80% in model size, from 43.73 KB to 6.25 KB, and a reduction of more than 45% in training time, from over 150 seconds to less than 80 seconds. This research highlights the potential for balancing efficiency and accuracy in CNN design for real-world deployment, addressing the increasing demand for streamlined deep learning models in resource-constrained environments.
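To make the compression step concrete, here is a minimal sketch of post-training quantization with TensorFlow Lite, assuming a small Keras CNN for character recognition; the architecture, class count, and file name are illustrative assumptions, not the authors' exact setup.

```python
# Illustrative sketch of post-training quantization (PTQ) with TensorFlow
# Lite; architecture, class count, and file name are assumptions.
import tensorflow as tf

# A deliberately small CNN for character recognition (e.g., 28x28 glyphs)
model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(8, 3, activation="relu", input_shape=(28, 28, 1)),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(16, 3, activation="relu"),
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(36, activation="softmax"),  # e.g., 26 letters + 10 digits
])
# ... train with model.fit(...) on the character dataset ...

# PTQ happens after training: shrink float32 weights to 8-bit
converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]  # dynamic-range quantization
# (Full integer PTQ would additionally need converter.representative_dataset.)
tflite_model = converter.convert()
with open("char_cnn_quantized.tflite", "wb") as f:
    f.write(tflite_model)
```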
Preprocessing Impact on SAR Oil Spill Image Segmentation Using YOLOv8
Synthetic Aperture Radar (SAR) is a sensing instrument used in marine remote sensing that emits radio waves to capture a representation of the target scene. SAR images often suffer from poor quality, partly due to speckle noise. This research uses SAR images containing oil spills as objects to be detected by machine learning with the YOLOv8 model. The dataset was obtained from MKLab and preprocessed to improve the quality of the SAR images before training. Preprocessing involves annotating the dataset, augmenting it with flip augmentation, and filtering it with threshold and median filters as well as a sharpen kernel whose optimal midpoint value is searched for. The default YOLOv8 hyperparameter values are used, varied by adding and subtracting the same delta.
The implementation of preprocessing and combinations of hyperparameters is examined to optimize the YOLOv8 model for detecting oil spills in SAR images. Across 10 experimental scenarios, the baseline with the original MKLab images yields an mAP50 of 49.7%. Applying flip augmentation alone to the dataset increases the mAP50 by 18.8%, and adding the sharpen 1.2 kernel filter raises it to 68.89%, while the median and thresholding filters tend to reduce it. The best-performing combination was flip augmentation plus the sharpen 1.2 kernel filter with hyperparameters epoch 200, warmup 4.0, momentum 0.9, warmup bias lr 0.01, weight decay 0.005, and learning rate 0.000714, resulting in an mAP50 of 68.89%. Overall, the sharpening kernel with a real-valued midpoint of 1.2, combined with flip augmentation, had the greatest impact on improving mAP50 in SAR oil spill image segmentation with YOLOv8.
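The sharpening step can be pictured with a short OpenCV sketch. Since the exact kernel is not specified in the abstract, the parameterization below (centre weight 4 + m, so m = 1 recovers the classic sharpen kernel and m = 1.2 matches the reported midpoint) is an assumption, as are the file name and flip direction.

```python
# Sketch of the flip + sharpen preprocessing; kernel form, midpoint role,
# and file name are assumptions for illustration.
import cv2
import numpy as np

def sharpen(img, midpoint=1.2):
    # 3x3 Laplacian-style sharpen: the midpoint m scales the identity
    # term, so m = 1 sums to 1 (brightness-preserving sharpening)
    k = np.array([[ 0, -1,  0],
                  [-1, 4 + midpoint, -1],
                  [ 0, -1,  0]], dtype=np.float32)
    return cv2.filter2D(img, -1, k)

sar = cv2.imread("oil_spill_sar.png", cv2.IMREAD_GRAYSCALE)  # hypothetical file
augmented = [sharpen(sar), sharpen(cv2.flip(sar, 1))]  # original + horizontal flip
```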
Hand Sign Interpretation through Virtual Reality Data Processing
The research lays the groundwork for further advancements in VR technology, aiming to develop devices capable of interpreting sign language into speech via intelligent systems. The uniqueness of this study lies in using the Meta Quest 2 VR device to gather primary hand sign data, which is subsequently classified with Machine Learning techniques to evaluate the device's proficiency at interpreting hand signs. The initial stages emphasized collecting hand sign data from the VR device and processing the data to capture sign patterns and characteristics effectively. A total of 1021 data points, comprising ten distinct hand sign gestures, were collected using a simple application developed with the Unity Editor. Each sample contained 14 parameters from both hands, recorded relative to the headset so that body rotation did not distort the hand movements and the user's facing direction was accurately reflected. Data processing used padding techniques to standardize the varied data lengths resulting from differing recording durations. The interpretation algorithm was developed with Recurrent Neural Networks tailored to the characteristics of the data. Evaluation metrics encompassed accuracy, validation accuracy, loss, validation loss, and the confusion matrix. Over 15 epochs, validation accuracy stabilized at 0.9951, showing consistent performance on unseen data. This research serves as a foundation for further studies on VR devices and other wearable gadgets that can function as sign language interpreters.
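A minimal Keras sketch of the described pipeline, padding variable-length recordings and classifying them with a recurrent network, is shown below. The maximum length, layer sizes, and synthetic stand-in data are assumptions; the 14 parameters per frame and ten gesture classes follow the abstract.

```python
# Sketch of the pipeline: variable-length recordings padded to a fixed
# length, then classified with an RNN. Max length, units, and the synthetic
# data are assumptions; 14 parameters and 10 classes follow the abstract.
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers

MAX_LEN, N_PARAMS, N_CLASSES = 120, 14, 10

# Stand-in for real recordings: each has its own number of timesteps
rng = np.random.default_rng(0)
sequences = [rng.normal(size=(rng.integers(40, 120), N_PARAMS)) for _ in range(32)]
labels = rng.integers(0, N_CLASSES, size=32)

# Zero-pad every recording to MAX_LEN timesteps
padded = tf.keras.preprocessing.sequence.pad_sequences(
    sequences, maxlen=MAX_LEN, dtype="float32", padding="post")

model = tf.keras.Sequential([
    layers.Masking(mask_value=0.0, input_shape=(MAX_LEN, N_PARAMS)),  # skip padding
    layers.LSTM(64),
    layers.Dense(N_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
history = model.fit(padded, labels, epochs=15, validation_split=0.2)
```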
E-Government Between Developed and Developing Countries: Key Perspectives from Denmark and Iraq
E-government involves using technology to provide public information and services digitally. This study examines key factors addressing infrastructure, cultural, political, technical, and social challenges in e-government implementation. By exploring diverse contexts, from citizen engagement to data frameworks, it elucidates best practices and lessons for overcoming hurdles on both the bureaucratic and user sides. The research aims to uncover how states can successfully transition services online. The insights can inform policymakers seeking to digitize governance and leverage information and communication technologies to improve state-citizen relations. Additionally, the study compares and analyzes the e-government systems of a developing country (Iraq) and a developed country (Denmark) to highlight key differences that could inform e-government development efforts in developing nations. Iraq and Denmark were chosen because of the disparity between their e-government systems, enabling the identification of weaknesses in Iraq's e-government initiatives and providing insights from Denmark's more advanced experience. Examining this e-government gap between a developing and a developed country will allow developing nations like Iraq to pinpoint areas for improvement and potentially benefit from Denmark's success in this area.
Code Generator Development to Transform IFML (Interaction Flow Modelling Language) into a React-based User Interface
Model-Driven Software Engineering (MDSE) is a software development approach in which models are the main artifacts of development. MDSE can be applied to User Interface (UI) development so that a model of the UI is built first and then transformed into a running application. In this research, we develop a UI Generator to support UI development with the MDSE approach; it also supports UI development in the Software Product Line Engineering (SPLE) paradigm. The UI is modeled with an Interaction Flow Modeling Language (IFML) diagram, which the UI Generator then transforms into a React-based UI. The UI Generator is built with Acceleo on the Eclipse IDE to transform IFML into React code using the transformation rules defined in this research. The generator is also enriched with display settings and static page management to address user customization needs. The experimental results show that the UI Generator can generate a functional website. Beyond evaluating the working product, the UI Generator was also rated well in a qualitative evaluation against six quality criteria for SPLE supporting tools.
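The paper's transformation rules are written in Acceleo/MTL; purely to convey the model-to-text idea, here is a toy Python sketch that renders a hypothetical, simplified IFML-like view model as a React component string. The model shape and component types are invented for illustration.

```python
# Toy model-to-text transformation (the paper uses Acceleo/MTL; this only
# conveys the idea). The IFML-like dict is a hypothetical, simplified
# stand-in for a real IFML ViewContainer.
ifml_view = {
    "name": "BookList",
    "components": [
        {"type": "List", "binding": "books"},
        {"type": "Button", "label": "Add Book", "event": "onAdd"},
    ],
}

def to_react(view):
    # Each transformation rule maps one model element to a JSX fragment
    lines = ["export function %s(props) {" % view["name"], "  return (", "    <div>"]
    for c in view["components"]:
        if c["type"] == "List":
            lines.append("      <ul>{props.%s.map(x => <li key={x.id}>{x.title}</li>)}</ul>"
                         % c["binding"])
        elif c["type"] == "Button":
            lines.append("      <button onClick={props.%s}>%s</button>"
                         % (c["event"], c["label"]))
    lines += ["    </div>", "  );", "}"]
    return "\n".join(lines)

print(to_react(ifml_view))  # emits a ready-to-save React component
```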
Improving Remote Sensing Change Detection Via Locality Induction on Feed-forward Vision Transformer
The main objective of Change Detection (CD) is to gather change information from bi-temporal remote sensing images. Recent CD methods make use of the recently proposed Vision Transformer (ViT) backbone. Although ViT is superior to Convolutional Neural Networks (CNNs) at modeling long-range dependencies, it lacks a locality mechanism, a critical property of the pixels that make up natural images, including remote sensing images. This issue leads to segmentation artifacts such as imperfect changed-region boundaries on the predicted change map. To address this problem, we propose LocalCD, a novel CD method that imposes a locality mechanism on the Transformer encoder. In particular, it replaces the Transformer's feed-forward network with an efficient depth-wise convolution inserted between two convolutions. LocalCD outperforms ChangeFormer by a significant margin, achieving F1-scores of 0.9548 and 0.9243 on the CDD and LEVIR-CD datasets, respectively.
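The locality injection described here can be sketched as a drop-in replacement for the Transformer's feed-forward block: a depth-wise 3x3 convolution between two 1x1 convolutions, applied to tokens reshaped onto the spatial grid. The dimensions and activations below are assumptions, not the exact LocalCD configuration.

```python
# Sketch of a locality-injected feed-forward block: a depth-wise 3x3
# convolution between two 1x1 convolutions. Sizes and activations are
# assumptions, not the exact LocalCD configuration.
import torch.nn as nn

class LocalFeedForward(nn.Module):
    def __init__(self, dim=256, expansion=4):
        super().__init__()
        hidden = dim * expansion
        self.net = nn.Sequential(
            nn.Conv2d(dim, hidden, kernel_size=1),    # expand, like the FFN's first linear
            nn.GELU(),
            nn.Conv2d(hidden, hidden, kernel_size=3, padding=1,
                      groups=hidden),                 # depth-wise conv adds locality
            nn.GELU(),
            nn.Conv2d(hidden, dim, kernel_size=1),    # project back to token dim
        )

    def forward(self, tokens, h, w):
        # tokens: (B, N, C) with N = h * w; reshape onto the spatial grid
        b, n, c = tokens.shape
        x = tokens.transpose(1, 2).reshape(b, c, h, w)
        x = self.net(x)
        return x.reshape(b, c, n).transpose(1, 2)     # back to (B, N, C)
```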
Optimizing Data Quality in Interagency Data Sharing: A Framework
In the modern landscape of government operations, characterized by a shift towards openness, inclusivity, and interagency collaboration driven by the pursuit of public value and evidence-based policy making, the importance of interagency data sharing (IDS) is unmistakable. Despite the evident benefits of information exchange among government agencies, challenges persist, especially concerning nuanced considerations of data quality. This study aims to bridge this critical gap by proposing a specialized framework for IDS within government agencies. The framework, crafted to address data quality considerations proactively throughout the entire data lifecycle, transcends traditional approaches and seeks to offer insights for fostering effective practices in interagency data sharing. Positioned at the nexus of evolving government operations, the research underscores the necessity of strategic frameworks that prioritize data quality to support collaborative, evidence-driven decision-making.
Note on Algorithmic Investigations of Juosan Puzzles
We investigate several algorithmic and mathematical aspects of the Juosan puzzle, a one-player pencil-and-paper puzzle introduced in 2014 and proven NP-complete in 2018. We introduce an optimized backtracking technique for solving this puzzle by ruling out certain invalid subgrid configurations, and we show that this algorithm can solve an arbitrary Juosan instance of size m × n in O(2^(mn)) time. A C++ implementation of this algorithm found solutions to all Juosan instances with no more than 300 cells in under 15 seconds. We also discuss the special cases of Juosan puzzles of size m × n where either m or n is less than 3. We show that these puzzles are solvable in time linear in the puzzle size and establish an upper bound on the number of solutions to the Juosan puzzle of size 1 × n. Finally, we prove the tractability of arbitrary m × n Juosan puzzles in which no territory has a constraint number.
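The flavour of such a backtracking solver can be conveyed with a short Python sketch: assign a stroke to each cell in row-major order and prune as soon as three equal strokes line up along the stroke's own axis or a territory count becomes unsatisfiable. The representation and the convention that a territory number counts '-' strokes are this sketch's assumptions, not necessarily the paper's.

```python
# Backtracking sketch for Juosan: place '-' or '|' cell by cell, pruning
# runs of three equal strokes along the stroke's axis and territories whose
# counts can no longer be met. Territory numbers are assumed to count '-'
# strokes; the paper's convention may differ.
def solve(m, n, territories):
    # territories: list of (cells, count); cells are (row, col) pairs and
    # count is the required number of '-' strokes or None if unconstrained
    grid = [[None] * n for _ in range(m)]
    terr_of = {rc: t for t, (cells, _) in enumerate(territories) for rc in cells}

    def run_ok(r, c, s):
        # '-' may not extend a horizontal run of '-' to length 3;
        # '|' may not extend a vertical run of '|' to length 3
        if s == '-':
            return not (c >= 2 and grid[r][c-1] == grid[r][c-2] == '-')
        return not (r >= 2 and grid[r-1][c] == grid[r-2][c] == '|')

    def terr_ok(r, c):
        cells, need = territories[terr_of[(r, c)]]
        if need is None:
            return True
        placed = [grid[rr][cc] for rr, cc in cells]
        dashes, todo = placed.count('-'), placed.count(None)
        return dashes <= need <= dashes + todo  # count still reachable

    def backtrack(idx):
        if idx == m * n:
            return True
        r, c = divmod(idx, n)  # fill cells in row-major order
        for s in ('-', '|'):
            grid[r][c] = s
            if run_ok(r, c, s) and terr_ok(r, c) and backtrack(idx + 1):
                return True
        grid[r][c] = None
        return False

    return grid if backtrack(0) else None

# Example: a 2x3 grid, one territory covering all cells, requiring two '-'
print(solve(2, 3, [([(r, c) for r in range(2) for c in range(3)], 2)]))
```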
Detecting Type and Index Mutation in Cancer DNA Sequence Based on Needleman–Wunsch Algorithm
Detecting DNA sequence mutations in cancer patients contributes to early identification and treatment of the disease, which ultimately enhances the effectiveness of treatment. Bioinformatics utilizes sequence alignment as a powerful tool for identifying mutations in DNA sequences. We used the Needleman-Wunsch algorithm to identify mutations in DNA sequence data from cancer patients. The cancer sequence dataset covers breast, cervix uteri, lung, colon, liver, and prostate cancers. Various types of mutations were identified, namely Single Nucleotide Variants (SNVs)/substitutions, insertions, and deletions, each located by its nucleotide index. The Needleman-Wunsch algorithm detects mutation type and index with average F1-scores of 0.9507 over all mutation types, 0.9919 for SNVs, 0.7554 for insertions, and 0.8658 for deletions, with a tolerance of 5 bp. The F1-scores obtained are not correlated with gene length. The running time ranges from 1.03 seconds for a 290-base-pair gene to 3211.45 seconds for a gene with 16,613 base pairs.
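For concreteness, here is a compact sketch of the overall approach: a textbook Needleman-Wunsch global alignment followed by reading mutation type and reference index off the aligned strings. The scoring values are assumptions; the paper's exact scheme and tolerance handling are not reproduced.

```python
# Sketch: Needleman-Wunsch global alignment, then mutation calls read off
# the aligned strings. Scoring values (match/mismatch/gap) are assumptions.
def needleman_wunsch(a, b, match=1, mismatch=-1, gap=-2):
    m, n = len(a), len(b)
    # Dynamic-programming score table with gap-initialized borders
    score = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        score[i][0] = i * gap
    for j in range(1, n + 1):
        score[0][j] = j * gap
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            sub = match if a[i-1] == b[j-1] else mismatch
            score[i][j] = max(score[i-1][j-1] + sub,
                              score[i-1][j] + gap, score[i][j-1] + gap)
    # Traceback from the bottom-right corner
    out_a, out_b, i, j = [], [], m, n
    while i > 0 or j > 0:
        sub = match if i > 0 and j > 0 and a[i-1] == b[j-1] else mismatch
        if i > 0 and j > 0 and score[i][j] == score[i-1][j-1] + sub:
            out_a.append(a[i-1]); out_b.append(b[j-1]); i -= 1; j -= 1
        elif i > 0 and score[i][j] == score[i-1][j] + gap:
            out_a.append(a[i-1]); out_b.append('-'); i -= 1
        else:
            out_a.append('-'); out_b.append(b[j-1]); j -= 1
    return ''.join(reversed(out_a)), ''.join(reversed(out_b))

def call_mutations(ref_aln, alt_aln):
    # Classify each aligned column; positions are 1-based on the reference
    muts, pos = [], 0
    for r, s in zip(ref_aln, alt_aln):
        if r != '-':
            pos += 1
        if r == '-':
            muts.append(('insertion', pos, s))   # inserted after ref position
        elif s == '-':
            muts.append(('deletion', pos, r))
        elif r != s:
            muts.append(('SNV', pos, r + '>' + s))
    return muts

print(call_mutations(*needleman_wunsch("ACGTACGT", "ACGTTCG")))
```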