JOIV : International Journal on Informatics Visualization
    786 research outputs found

    Detection of Keratitis in the Cornea by Developing an Active Contour Method Based on Contrast Features

    Get PDF
    Digital Image Processing (DIP) is the discipline of processing images computationally. The object of this research is keratitis on the cornea. Keratitis images were acquired with a slit lamp at Padang Aye Center (PAC) Hospital. Clinically, diagnosis relies on observing the development of the infiltrate (hypopyon) and measuring the ulcer borders horizontally and vertically to evaluate the response to treatment; it cannot determine the area and perimeter of the keratitis region that responds to treatment on the cornea. The dataset comprised 206 slit lamp images of keratitis. This research introduces contrast values into the Active Contour method, yielding an updated method called Active Contour Contrast Adjustment (ACCA) that correctly segments keratitis objects and measures the area and perimeter of the keratitis region. Overall, of the 206 slit lamp images, keratitis was detected correctly in 195 and missed in eleven, giving an accuracy of 94.66%; the eleven undetected images correspond to an error rate of 5.33%. The standard Active Contour, by contrast, failed to detect keratitis in any of the images (100% undetected). The results are therefore very good and can serve as a reference for medical personnel.
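
    The abstract does not specify the ACCA update itself, but the overall pipeline it describes (contrast adjustment, active-contour segmentation, then area and perimeter measurement) can be sketched roughly as below. This is a minimal illustration assuming scikit-image, with CLAHE standing in for the contrast step and morphological Chan-Vese for the active contour; the file name and parameters are placeholders, not the authors' settings.

```python
# Minimal sketch of the described pipeline, assuming scikit-image and an RGB
# slit-lamp image. CLAHE stands in for the paper's contrast adjustment and
# morphological Chan-Vese for the active contour; parameters are placeholders.
from skimage import color, exposure, io, measure, segmentation

def segment_keratitis(path, iterations=200):
    gray = color.rgb2gray(io.imread(path))         # slit-lamp image to grayscale
    enhanced = exposure.equalize_adapthist(gray)   # contrast adjustment (assumed CLAHE)
    mask = segmentation.morphological_chan_vese(   # active-contour segmentation
        enhanced, iterations, init_level_set="checkerboard", smoothing=2)
    regions = measure.regionprops(measure.label(mask))
    if not regions:                                # nothing segmented
        return mask, 0, 0.0
    largest = max(regions, key=lambda r: r.area)   # assume largest blob is the lesion
    return mask, largest.area, largest.perimeter   # area and perimeter in pixels

mask, area, perimeter = segment_keratitis("slit_lamp_001.png")  # hypothetical file
print(f"area = {area} px, perimeter = {perimeter:.1f} px")
```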

    Secondary Structure Protein Prediction-based First Level Features Extraction Using U-Net and Sparse Auto-encoder

    Get PDF
    Protein secondary structure prediction (PSSP) is an important challenge in bioinformatics. Existing methods for PSSP generally fall into three categories: neighbor-based, model-based, and meta-estimator-based, each relying on supervised or unsupervised learning. Model-based methods are often neural networks or hidden Markov models; support vector machines and other machine learning techniques built on multiple sequence alignments and evolutionary data from increasingly large protein databases are also used. This paper presents a machine learning approach for PSSP centered on a new feature extraction method that uses sparse autoencoders to identify new protein features. The sparse autoencoder efficiently discovers new features in the training data and supports accurate prediction. Two machine learning methods are combined: unsupervised learning based on sparse autoencoders and semi-supervised learning using deep learning. Experimental results show that the deep learning method reaches 86.719% accuracy on the test set, while the unsupervised pretraining method reaches 85.853% accuracy on the training set after being improved by surface propagation. Fine-tuning and layer-wise pretraining significantly improve the performance of the proposed model: the deep learning method achieves an accuracy of 86.7% on the training set and 71.4% on the test set, whereas sparse autoencoders alone achieve 67%, demonstrating the effectiveness of combining these methods. This study highlights the role of advanced deep learning techniques in PSSP accuracy. Future research should consider using big data, exploring further deep learning algorithms, and refining optimization methods to further improve predictive performance in bioinformatics.
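
    As a rough illustration of the unsupervised feature-extraction step described above, the sketch below trains a sparse autoencoder (an L1 activity penalty on the hidden layer) on encoded residue windows and exposes the hidden code as features for a downstream secondary-structure classifier. The window width, layer sizes, and random stand-in data are placeholders, not the paper's configuration; Keras is assumed.

```python
# Sketch of sparse-autoencoder feature extraction for PSSP (sizes and data are
# illustrative placeholders; the real model trains on encoded residue windows).
import numpy as np
from tensorflow.keras import layers, models, regularizers

WINDOW, ALPHABET, HIDDEN = 17, 20, 64          # residue window, amino-acid alphabet
input_dim = WINDOW * ALPHABET

inputs = layers.Input(shape=(input_dim,))
code = layers.Dense(HIDDEN, activation="relu",
                    activity_regularizer=regularizers.l1(1e-4))(inputs)  # sparsity penalty
decoded = layers.Dense(input_dim, activation="sigmoid")(code)

autoencoder = models.Model(inputs, decoded)
encoder = models.Model(inputs, code)           # reused later as the feature extractor
autoencoder.compile(optimizer="adam", loss="mse")

X = np.random.rand(1000, input_dim).astype("float32")   # stand-in for one-hot windows
autoencoder.fit(X, X, epochs=5, batch_size=32, verbose=0)

features = encoder.predict(X, verbose=0)       # hidden codes feed an H/E/C classifier
print(features.shape)                          # (1000, 64)
```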

    A Review of Livestock Smart Farming for Sustainable Food Security

    Get PDF
    Maintaining food security through sustainable farming methods is a significant challenge as the global population grows. This study examines the impact of smart farming methods on enhancing farm animal output to satisfy rising demand while fostering sustainability. Smart livestock farming incorporates automation, Internet of Things (IoT) sensors, and machine learning algorithms to improve production, efficiency, and resource utilization. With an emphasis on essential factors including automated feeding, environmental monitoring, and health tracking, this study takes a methodical approach to reviewing IoT-based livestock farming. The efficiency of several sensor technologies, including motion, temperature, humidity, and biometric sensors, in gathering data and supporting real-time decisions is examined. The potential of machine learning methods such as pattern identification, anomaly detection, and predictive analytics to maximize the production and health of farm animals is assessed. According to the results, IoT-driven livestock farming improves illness diagnosis, minimizes resource waste, and optimizes feeding practices, increasing production efficiency. These developments minimize environmental impact while promoting steady food production. In addition, automation reduces the need for human intervention in livestock production, which lowers costs and improves decision-making. This study demonstrates how smart agricultural technology may be used to address food security issues. Further research is needed to improve real-time data processing, refine machine learning models, and investigate affordable options for adopting these ideas broadly in practice. Livestock management may thus be transformed, guaranteeing a robust and sustainable agricultural environment.
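
    The anomaly-detection use case mentioned in the review can be illustrated with a short sketch: an isolation forest flags unusual barn temperature and humidity readings that might signal illness or equipment failure. The sensor values, contamination rate, and alert format are made up for illustration; scikit-learn is assumed, and this is not a method from the reviewed paper.

```python
# Illustrative anomaly detection on livestock barn sensor readings
# (temperature in C, humidity in %). Values and contamination rate are made up.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
normal = rng.normal(loc=[24.0, 60.0], scale=[1.5, 5.0], size=(500, 2))  # typical readings
spikes = np.array([[35.0, 30.0], [10.0, 95.0]])                          # suspect readings
readings = np.vstack([normal, spikes])

detector = IsolationForest(contamination=0.01, random_state=0).fit(normal)
flags = detector.predict(readings)            # -1 marks an anomaly

for temp, hum in readings[flags == -1]:
    print(f"alert: temperature={temp:.1f} C, humidity={hum:.1f} %")
```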

    Overview of Software Re-Engineering Concepts, Models and Approaches

    Get PDF
    Legacy systems face issues such as integrating new technology, fulfilling new requirements in an ever-changing environment, and meeting new user expectations. Because of their old, complex structure and technology, modification is rarely straightforward, so re-engineering is needed to change the system to meet new requirements and adapt to new technology. Software re-engineering generally refers to creating a new system from an existing one and is divided into three main phases: reverse engineering, alteration, and forward engineering. Reverse engineering examines, analyzes, and understands the legacy system to derive its abstract representation; then, through necessary alterations such as restructuring and recoding, followed by a series of forward engineering processes, a new system is built. This paper introduces the concepts of software re-engineering, including the challenges, benefits, and motivation for re-engineering. In addition, beginning with the traditional model of software re-engineering, it provides an overview of other models that describe different re-engineering processes; each model has its own set of processes for performing software re-engineering. Furthermore, re-engineering approaches show various ways of carrying out software re-engineering. Software re-engineering is a complex process that requires knowledge, tools, and techniques from different areas such as software design, programming, and testing. Therefore, monitoring the re-engineering process is necessary to ensure it meets expectations.

    A Comprehensive Review of Cyber Hygiene Practices in the Workplace for Enhanced Digital Security

    Get PDF
    In today's digital age, cybercrime is increasing at an alarming rate, and it has become more critical than ever for organizations to prioritize best practices in cyber hygiene to safeguard their personnel and resources from cyberattacks. Just as personal hygiene keeps one clean and healthy, cyber hygiene combines behaviors that enhance data privacy and security. This paper explores the common cyber-attacks currently faced by organizations and how the practices associated with good cyber hygiene can be used to mitigate those attacks. It also emphasizes the need for organizations to adopt good cyber hygiene techniques and provides the top 10 effective cyber hygiene measures for organizations seeking to enhance their cybersecurity posture. To evaluate cyber hygiene techniques, a systematic literature approach was used, assessing different models of cyber hygiene, distinguishing between good and bad cyber hygiene techniques, and identifying the cyber-attacks associated with bad cyber hygiene that can eventually affect any organization. Based on the case studies and surveys conducted by researchers, good cyber hygiene techniques foster positive behavior among employees and thus contribute to a more secure organization. More importantly, it is the responsibility of both the organization and the employees to practice good cyber hygiene. If organizations fail to enforce good cyber hygiene, for example by lacking security awareness programs, employees may have the misconception that it is not their responsibility to contribute to their own security or that of the organization, which opens doors to various cyber-attacks. Few research papers have addressed cyber hygiene, particularly its application in the workplace, which is a fundamental aspect of everyday life. This paper focuses on the cyber hygiene techniques that organizations of any size, from small to large, should consider. It also highlights the existing challenges associated with implementing good cyber hygiene techniques and offers potential solutions to address them.

    Lecopelese - a Novel Evaluation Model for Measuring Educational Aspects of Game-based Learning

    No full text
    This study aimed to establish a model for assessing the pedagogical quality of mobile game-based learning (GBL), which seeks to convey educational content to users. Evaluating the educational effectiveness of GBL necessitates a robust model tailored for this purpose, and current models can be improved to better address the educational challenges associated with mobile GBL. The LECOPELESE (LEarning COntent, PEdagogy and LEarning StyLE) model was developed by integrating relevant constructs identified in the existing literature. To validate the model, a quantitative research approach was employed, drawing on a sample of 270 undergraduate students. The analysis utilized Structural Equation Modeling (SEM) and resulted in a final model based on rigorous factor analysis. The findings indicated that the proposed model effectively measures educational quality in game-based learning. The new model includes more comprehensive constructs and items that address the educational aspects of game-based learning. Specifically, it introduces a pedagogy construct to evaluate game-based learning quality, reflecting criteria for outstanding educational content and delivery through mobile applications, and assesses how effectively GBL provides real-world learning experiences. Additionally, the research highlights that the quality of pedagogy is influenced by two key factors: the GBL's ability to accommodate learners' unique characteristics (learning styles) and the quality of learning content that adapts to learners' needs. Ultimately, the study demonstrates that both learning content and learning style significantly affect the pedagogy construct, suggesting that enhancing these areas can improve the overall pedagogical quality of game-based learning.

    Multi-Head Voting based on Kernel Filtering for Fine-grained Visual Classification

    Get PDF
    Research on Fine-Grained Visual Classification (FGVC) faces a significant challenge in distinguishing objects with subtle differences, where large intra-class variation and high inter-class similarity are critical obstacles to accurate classification. To address this complexity, many advanced methods have been proposed, using feature coding, part-based components, and attention mechanisms to support different stages of classification. Vision Transformers (ViT) have recently emerged as promising competitors to other complex FGVC methods for image recognition, as they can capture finer details and subtle inter-class differences with higher accuracy. While these advances have improved results on various tasks, existing methods still suffer from inconsistent learning performance across heads and layers in the multi-head self-attention (MHSA) mechanism, which results in suboptimal classification performance. To enhance the performance of ViT, we propose an approach that modifies the convolutional kernel. Using an array of kernels, our method considerably improves the model's capacity to identify and highlight the specific characteristics crucial for classification. Experimental results show that kernel sharpening outperforms other state-of-the-art approaches in accuracy across several datasets, including Oxford-IIIT Pet, CUB-200-2011, and Stanford Dogs. Our findings show that the proposed approach improves overall classification performance by concentrating more precisely on discriminative regions within images. By using kernel adjustments to improve Vision Transformers' ability to differentiate complicated visual features, our strategy offers a strong response to the problem of fine-grained categorization.
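
    The abstract names "kernel sharpening" but not its exact form; one rough reading is that a sharpening convolution is applied to emphasize fine detail before the ViT backbone. The sketch below applies a fixed 3x3 sharpening kernel per channel in PyTorch as a stand-in for that idea; the kernel values and placement are assumptions, not the authors' design.

```python
# Illustrative per-channel sharpening convolution as a stand-in for the
# "kernel sharpening" idea; kernel values and placement are assumptions.
import torch
import torch.nn.functional as F

def sharpen(images: torch.Tensor) -> torch.Tensor:
    """Apply a fixed 3x3 sharpening kernel to each channel of a batch (B, C, H, W)."""
    c = images.shape[1]
    kernel = torch.tensor([[0., -1., 0.],
                           [-1., 5., -1.],
                           [0., -1., 0.]], device=images.device)
    weight = kernel.view(1, 1, 3, 3).repeat(c, 1, 1, 1)   # one kernel per channel
    return F.conv2d(images, weight, padding=1, groups=c)  # depthwise sharpening

batch = torch.rand(4, 3, 224, 224)                        # dummy batch of images
sharpened = sharpen(batch)                                # would then feed the ViT backbone
print(sharpened.shape)                                    # torch.Size([4, 3, 224, 224])
```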

    Levenshtein Distance Algorithm in Javanese Character Translation Machine Based on Optical Character Recognition

    Get PDF
    Indonesia has diverse arts, cultures, and languages. Linguistically, Indonesia has many local languages, with Javanese being the regional language contributing the most entries to the Kamus Besar Bahasa Indonesia. The Javanese script, one of the cultural symbols of Java, differs significantly from the Latin script commonly used in daily communication. In the context of cultural preservation, which is also one of the ministry's strategic steps, a transliteration process is needed from the Javanese script to the Latin script and on to Indonesian, as a form of active cultural participation in which technology helps promote and introduce Indonesian culture. This study develops an algorithm-based approach to capture image data and improve translation accuracy. Transliteration is enhanced by incorporating optical character recognition to convert character images. The study applies a convolutional neural network (CNN) for character image recognition and the Levenshtein distance algorithm to translate the recognized Latin text into Indonesian. The CNN achieved an optimal image recognition accuracy of 95% at the 21st epoch. The translation process yielded 90% word-level accuracy and 70% sentence-level accuracy. These results indicate that sentence translation remains suboptimal due to insufficient training data and similarities between scripts, highlighting the need for further improvements through transformer models or data augmentation.
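
    The Levenshtein step can be sketched as below: compute the edit distance between an OCR-transliterated word and each dictionary entry, then pick the closest match. The tiny dictionary and sample word are placeholders for the study's actual lexicon.

```python
# Levenshtein (edit) distance and closest-word lookup, as used to map OCR output
# to Indonesian vocabulary; the dictionary below is a tiny placeholder.
def levenshtein(a: str, b: str) -> int:
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, start=1):
        curr = [i]
        for j, cb in enumerate(b, start=1):
            cost = 0 if ca == cb else 1
            curr.append(min(prev[j] + 1,          # deletion
                            curr[j - 1] + 1,      # insertion
                            prev[j - 1] + cost))  # substitution
        prev = curr
    return prev[-1]

def closest_word(word: str, dictionary: list[str]) -> str:
    return min(dictionary, key=lambda w: levenshtein(word, w))

dictionary = ["aksara", "jawa", "budaya", "bahasa"]   # placeholder lexicon
print(closest_word("bahasva", dictionary))            # -> "bahasa"
```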

    A Review of Technology Acceptance Model Application in User Acceptance of Autonomous Vehicles

    Get PDF
    Touted for its potential to reduce traffic fatalities by eliminating human error and to deliver broader social benefits, autonomous vehicle technology has developed rapidly over the last few decades. Using a systematic review approach, this study provides an overview of current studies that apply the Technology Acceptance Model (TAM) to user acceptance of autonomous vehicles. Based on a set of inclusion and exclusion criteria, 16 articles out of 792,364 were retained for further analysis. The factors that have garnered the most attention from researchers are technical and psychological, with the constructs most frequently integrated in these studies being perceived ease of use, followed by perceived usefulness, trust, attitude, perceived enjoyment, and perceived innovativeness. This study presents three key findings. First, 36 potential antecedents influencing AV adoption have been incorporated into TAM; excluding the baseline model antecedents, the most studied factors were trust, personal innovativeness, and perceived enjoyment. Trust is widely recognized as a crucial factor in AV adoption and requires longitudinal research, as it is a dynamic element that evolves over time. Second, the causal relationships between the constructs showed mixed outcomes, which may be due to differences in the levels of automation examined, geographical contexts, and socio-demographic factors. Third, the gaps identified in Section 4 can guide policymakers, researchers, and automakers in developing effective future strategies and research directions. Ultimately, a deeper understanding of public acceptance can facilitate the safe deployment of AVs on roads, leading to benefits such as improved traffic safety and increased sustainability for society and the economy.

    Identification of Indonesian Rupiah Paper Currency Denominations Using First Order Statistical Feature Extraction and k-Nearest Neighbor

    Get PDF
    The rupiah is legal tender in Indonesia and is issued by Bank Indonesia. Paper rupiah currency has undergone many changes, and although each banknote has its own characteristics, errors can occur when distinguishing the values of different denominations. With the rapid development of image processing, such errors can be addressed by identifying banknotes from their images. This study focuses on the feature extraction and classification process. The dataset consists of images of 2022-emission paper rupiah. The extraction process uses first-order statistical feature extraction to obtain the characteristics of each image object, and k-Nearest Neighbor (KNN) is used to classify the denominations. Accuracy, sensitivity, and specificity are calculated to indicate how successfully the test data are classified against the training data. Testing was carried out with k values of 1, 3, 5, and 7 on the front and back sides of the banknotes for all datasets. The highest accuracy was obtained at k = 3, with an average accuracy of 92.08%, an average sensitivity of 64.22%, and an average specificity of 96.61%. This research can be developed further for more affordable counterfeit money detection, such as smartphone applications or portable devices that the general public can use.
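
    A minimal sketch of the pipeline described above: first-order statistical features (mean, standard deviation, skewness, kurtosis, and histogram entropy) computed from each grayscale banknote image, then a k-nearest-neighbor classifier with k = 3. SciPy and scikit-learn are assumed, and the training arrays below are random placeholders for the actual banknote images; the exact feature set and preprocessing in the paper may differ.

```python
# First-order statistical features from a grayscale banknote image, classified
# with KNN (k = 3). Feature choice mirrors common first-order descriptors; the
# training arrays are placeholders for the actual dataset.
import numpy as np
from scipy import stats
from sklearn.neighbors import KNeighborsClassifier

def first_order_features(gray: np.ndarray) -> np.ndarray:
    """Mean, std, skewness, kurtosis, and entropy of the intensity histogram."""
    values = gray.ravel().astype(np.float64)
    hist, _ = np.histogram(values, bins=256)
    p = hist / hist.sum()
    p = p[p > 0]
    entropy = -np.sum(p * np.log2(p))
    return np.array([values.mean(), values.std(),
                     stats.skew(values), stats.kurtosis(values), entropy])

# Placeholder data: rows of features for labelled training images.
X_train = np.vstack([first_order_features(np.random.rand(64, 64)) for _ in range(20)])
y_train = np.repeat(["1000", "2000", "5000", "10000"], 5)   # denominations

knn = KNeighborsClassifier(n_neighbors=3).fit(X_train, y_train)
query = first_order_features(np.random.rand(64, 64)).reshape(1, -1)
print(knn.predict(query))                                    # predicted denomination
```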

748 full texts, 786 metadata records. Updated in the last 30 days.