Computer Science and Information Technologies (E-Journal)
    136 research outputs found

    Bibliometric analysis and short survey in CT scan image segmentation: identifying ischemic stroke lesion areas

    Ischemic stroke remains one of the leading causes of mortality and long-term disability worldwide. Accurate segmentation of brain lesions plays a crucial role in ensuring reliable diagnosis and effective treatment planning, both of which are essential for improving clinical outcomes. This paper presents a bibliometric analysis and a concise review of medical image segmentation techniques applied to ischemic stroke lesions, with a focus on tomographic imaging data. A total of 2,014 publications from the Scopus database (2013–2023) were analyzed. Sixty key studies were selected for in-depth examination: 59.9% were journal articles, 29.9% were conference proceedings, and 4.7% were conference reviews. The year 2023 marked the highest volume of publications, representing 17% of the total. The most active countries in this area of research are China, the United States, and India. "Image segmentation" emerged as the most frequently used keyword. The top-performing studies predominantly used pre-trained deep learning models such as U-Net, ResNet, and various convolutional neural networks (CNNs), achieving high accuracy. Overall, the findings show that image segmentation has been widely adopted in stroke research for early detection of clinical signs and post-stroke evaluation, delivering promising outcomes. This study provides an up-to-date synthesis of impactful research, highlighting global trends and recent advancements in ischemic stroke medical image segmentation.
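    The keyword and publication-volume figures above come from a standard bibliometric workflow over a Scopus export. As a purely illustrative sketch (the file name and the "Author Keywords" column follow the usual Scopus CSV export format and are not taken from the paper), keyword frequencies could be counted like this:

```python
# Minimal sketch: counting keyword frequencies in a Scopus export.
# Assumes a CSV export with an "Author Keywords" column, where keywords
# are separated by semicolons (the usual Scopus export format).
from collections import Counter

import pandas as pd

records = pd.read_csv("scopus_export.csv")          # hypothetical file name
keywords = Counter()
for cell in records["Author Keywords"].dropna():
    for kw in cell.split(";"):
        keywords[kw.strip().lower()] += 1

# The abstract reports "image segmentation" as the most frequent keyword.
for kw, count in keywords.most_common(10):
    print(f"{kw}: {count}")
```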

    Secure e-voting system using Schorr's zero-knowledge identification protocol

    In today's era of technological progress, the electoral system has changed significantly with the introduction of electronic voting (e-voting). Traditional voting systems are vulnerable to manipulation and human error and raise concerns about voter privacy. These limitations can lead to reduced trust and participation in elections. E-voting has emerged to address these issues, aiming to improve the convenience, security, and privacy of voters. E-voting systems are evaluated on accuracy, security, privacy, and transparency; however, ensuring voter privacy while maintaining these principles remains a significant challenge. A potential solution for improving privacy in e-voting is Schorr's zero-knowledge identification protocol. This protocol allows voters to confirm their identity without revealing personal information, maintaining voter privacy throughout the process. By implementing this protocol, an e-voting system can strengthen security and privacy, making elections more transparent and trustworthy. As technology evolves, adopting solutions like Schorr's zero-knowledge identification protocol can help e-voting systems meet the growing demand for safe, fair, and private elections.
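    The abstract does not detail the protocol itself. As a minimal illustrative sketch, one round of the Schnorr-style zero-knowledge identification scheme it refers to looks like the following; the toy group parameters and variable names are assumptions, and a real e-voting deployment would use a large prime-order or elliptic-curve group:

```python
# Minimal sketch of a Schnorr-style zero-knowledge identification round.
# Toy parameters only; a real system needs a large group and secure setup.
import secrets

# Toy group parameters: g = 2 has order q = 11 in Z_23^* (2^11 ≡ 1 mod 23).
p, q, g = 23, 11, 2

# --- Prover (voter) key pair ---
x = secrets.randbelow(q - 1) + 1      # secret key, never revealed
y = pow(g, x, p)                      # public key registered with the system

# --- One identification round ---
k = secrets.randbelow(q - 1) + 1      # prover's ephemeral nonce
t = pow(g, k, p)                      # commitment sent to the verifier

c = secrets.randbelow(q)              # verifier's random challenge
s = (k + c * x) % q                   # prover's response

# --- Verifier check: g^s == t * y^c (mod p) ---
assert pow(g, s, p) == (t * pow(y, c, p)) % p
print("identity verified without revealing the secret x")
```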

    Artificial intelligence-powered robotics across domains: challenges and future trajectories

    The rise of artificial intelligence (AI) in robotic systems raises both challenges and opportunities. This technological change necessitates rethinking workforce skills, creating demand for new qualifications while rendering some existing jobs obsolete. Advancements in AI-based robots have made operations more efficient and precise, but they also raise ethical issues such as job loss and responsibility for robot decisions. This study explores AI-powered robotics, examining both its challenges and its future trajectories. As AI in robotics continues to grow, it will be crucial to tackle these issues through strong regulations and ethical standards to ensure safe and fair progress. Collaborative robots in manufacturing improve safety and increase productivity by working alongside human employees. Autonomous robots reduce human mistakes during inspections, leading to better product quality and lower operational expenses. In healthcare, robotic assistants improve patient care and medical staff performance by managing routine tasks. Future research should focus on improving efficiency and accuracy, boosting productivity, and creating environments in which humans and robots can work together safely. Strong rules and ethical guidelines will be vital for integrating AI-powered robotics into different areas, ensuring technology development aligns with societal values and needs.

    Optimizing EfficientNet for imbalanced medical image classification using grey wolf optimization

    The advancement of deep learning in computer vision has resulted in substantial progress, particularly in image classification tasks. However, challenges arise when models are applied to small and imbalanced datasets, such as X-ray data in medical applications. This study aims to improve the classification performance of fracture X-ray images using the EfficientNet architecture optimized with grey wolf optimization (GWO). EfficientNet was chosen for its efficiency in handling small datasets, while GWO was applied to optimize hyperparameters, including the learning rate, weight decay, and dropout, to improve model accuracy. Data augmentation techniques, namely random cropping, rotation, flipping, color jittering, and random erasing, were used to expand the diversity of the dataset, and class weighting was applied to address class imbalance. The evaluation uses accuracy, precision, recall, and F1-score metrics. The combination of EfficientNetB0 and GWO resulted in an average 4.5% improvement in model performance over baseline methods. This approach contributes to the development of deep learning methods for medical image classification, especially when dealing with small and imbalanced datasets.
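    The abstract does not give implementation details for the GWO search. The following is a minimal sketch of grey wolf optimization over the three hyperparameters it names (learning rate, weight decay, dropout); the bounds, population size, and placeholder fitness function are assumptions, and in the actual study the fitness would be the validation accuracy of EfficientNetB0 trained on the fracture X-ray data:

```python
# Minimal sketch of grey wolf optimization (GWO) over three hyperparameters.
# `evaluate` is a placeholder: in the study it would train EfficientNetB0
# on the X-ray data and return a validation score (higher is better).
import numpy as np

rng = np.random.default_rng(0)
LOW  = np.array([1e-5, 1e-6, 0.0])    # assumed lower bounds: lr, weight decay, dropout
HIGH = np.array([1e-2, 1e-3, 0.5])    # assumed upper bounds

def evaluate(params):
    lr, wd, dropout = params          # placeholder fitness surface
    return -((np.log10(lr) + 3.5) ** 2) - 10 * wd - (dropout - 0.2) ** 2

n_wolves, n_iters = 8, 20
wolves = rng.uniform(LOW, HIGH, size=(n_wolves, 3))

for it in range(n_iters):
    fitness = np.array([evaluate(w) for w in wolves])
    alpha, beta, delta = wolves[np.argsort(fitness)[::-1][:3]]  # three leaders
    a = 2 - 2 * it / n_iters                    # control parameter, 2 -> 0
    for i in range(n_wolves):
        new_pos = np.zeros(3)
        for leader in (alpha, beta, delta):
            r1, r2 = rng.random(3), rng.random(3)
            A, C = 2 * a * r1 - a, 2 * r2
            D = np.abs(C * leader - wolves[i])  # distance to the leader
            new_pos += leader - A * D
        wolves[i] = np.clip(new_pos / 3, LOW, HIGH)

best = wolves[np.argmax([evaluate(w) for w in wolves])]
print("best (lr, weight_decay, dropout):", best)
```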

    Attack detection in internet of things networks with deep learning using deep transfer learning method

    Cybersecurity has become a crucial part of the information management framework of internet of things (IoT) device networks. The large-scale distribution of IoT networks and the complexity of the communication protocols used are contributing factors to the widespread vulnerabilities of IoT devices. Transfer learning models in deep learning can achieve optimal performance faster than traditional machine learning models, as they leverage knowledge from previous models that have already learned the relevant features. The base model was built using the one-dimensional convolutional neural network (1D-CNN) method, with training and test data drawn from the source domain dataset. Model 1 was constructed using the same method as the base model, with its training and test data drawn from the target domain dataset. This model detected known attacks at a rate of 99.352% but performed poorly on unknown attacks, with an accuracy of 84.645%. Model 2 is an enhancement of model 1 that incorporates transfer learning from the base model, and its results improved significantly over model 1. Model 2 achieved accuracy and precision of 98.86% and 99.17%, respectively, allowing it to detect previously unknown attacks. Even with a slight decrease in the detection of normal traffic, most attacks can still be detected.
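    A minimal sketch of the transfer-learning setup described above, assuming a Keras 1D-CNN; the feature count, layer sizes, and class count are illustrative assumptions rather than the paper's actual configuration:

```python
# Minimal sketch: train a 1D-CNN on the source domain, then freeze its
# convolutional layers and retrain the dense head on the target domain.
import tensorflow as tf

N_FEATURES, N_CLASSES = 78, 2          # assumed flow-feature count / classes

def build_1d_cnn():
    return tf.keras.Sequential([
        tf.keras.Input(shape=(N_FEATURES, 1)),
        tf.keras.layers.Conv1D(64, 3, activation="relu"),
        tf.keras.layers.MaxPooling1D(2),
        tf.keras.layers.Conv1D(128, 3, activation="relu"),
        tf.keras.layers.GlobalMaxPooling1D(),
        tf.keras.layers.Dense(64, activation="relu"),
        tf.keras.layers.Dense(N_CLASSES, activation="softmax"),
    ])

# Base model: trained on the source-domain dataset (training call omitted).
base_model = build_1d_cnn()
base_model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
                   metrics=["accuracy"])
# base_model.fit(x_source, y_source, epochs=10)

# Model 2: reuse the base model, freeze everything except the dense head,
# and fine-tune on the target-domain dataset.
for layer in base_model.layers[:-2]:
    layer.trainable = False
model_2 = base_model
model_2.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
                metrics=["accuracy"])
# model_2.fit(x_target, y_target, epochs=10)
```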

    An ensemble learning approach for diabetes prediction using the stacking method

    Diabetes is a severe illness characterized by high blood glucose levels. Machine learning algorithms, with their ability to detect and predict diabetes in its early stages, offer a promising avenue for research. This study sought to enhance the accuracy of predicting diabetes mellitus by employing the stacking method. Stacking was chosen because it integrates predictions from various base models, resulting in a more precise final prediction, and it enhances accuracy and generalization by exploiting the varied strengths of those models. The Pima Indians diabetes dataset, a widely used benchmark, was utilized in the study. The machine learning models examined were logistic regression (LR), naïve Bayes (NB), extreme gradient boosting (XGBoost), K-nearest neighbors (KNN), decision tree (DT), and support vector machine (SVM). LR, KNN, and SVM were the best-performing models in terms of accuracy, F1-score, precision, and area under the curve (AUC), and were consequently used as the base models for the stacking method, with LR serving as the meta-model. The proposed stacking ensemble achieved an accuracy of 82.4%, better than the individual models and other ensemble techniques such as bagging and boosting. This study advances diabetes prediction by developing a more accurate early-stage detection model, thereby improving clinical management of the disease.
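    A minimal sketch of the described stacking ensemble using scikit-learn, with LR, KNN, and SVM as base models and LR as the meta-model; the file name and label column follow the common Kaggle distribution of the Pima Indians diabetes dataset and are assumptions here:

```python
# Minimal sketch of a stacking ensemble: LR, KNN, SVM bases, LR meta-model.
import pandas as pd
from sklearn.ensemble import StackingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

data = pd.read_csv("diabetes.csv")                  # assumed local copy
X, y = data.drop(columns="Outcome"), data["Outcome"]
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=42)

base_models = [
    ("lr", make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))),
    ("knn", make_pipeline(StandardScaler(), KNeighborsClassifier())),
    ("svm", make_pipeline(StandardScaler(), SVC(probability=True))),
]
stack = StackingClassifier(estimators=base_models,
                           final_estimator=LogisticRegression(max_iter=1000),
                           cv=5)
stack.fit(X_train, y_train)
print("test accuracy:", stack.score(X_test, y_test))
```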

    Analysis of telehealth acceptance for basic life support training in sudden cardiac arrest in Pontianak

    Sudden cardiac arrest (SCA), one of the most prevalent causes of mortality, can be survived if basic life support (BLS) is provided quickly. In Pontianak City, the challenges associated with obtaining emergency health training, such as BLS, remain high. This study aims to evaluate user acceptance of Telehealth and its effectiveness in BLS training, as well as its impact on community knowledge and skills in managing cardiac arrest. We used the human, organization, and technology fit (HOT-Fit) method to analyze the level of acceptance of Telehealth in BLS training, collecting data from 60 respondents who underwent Telehealth-based BLS training. The results showed that participants' understanding of and readiness for cardiac arrest emergencies increased significantly, by 90% and 92%, respectively. Analysis of the level of acceptance with HOT-Fit showed that system quality had the greatest influence on system use (0.611), while service quality exerted the most significant impact on user satisfaction (0.568). The net benefit was influenced by system use, user satisfaction, and organizational support, with user satisfaction having the greatest influence (0.600). Further research will examine the use of augmented reality (AR) or virtual reality (VR) technology to deliver Telehealth-based BLS training.
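    The path coefficients above are typically estimated with PLS-SEM in HOT-Fit studies. As a simplified, purely illustrative sketch, a single structural path (system use explained by system, information, and service quality) can be approximated with standardized ordinary least squares; the file and column names are assumptions:

```python
# Simplified sketch of estimating one HOT-Fit structural path from survey data.
# The study itself most likely used PLS-SEM; this approximates standardized
# path coefficients with ordinary least squares on z-scored variables.
import numpy as np
import pandas as pd

survey = pd.read_csv("hotfit_survey.csv")           # hypothetical file name
cols = ["system_quality", "information_quality", "service_quality", "system_use"]
z = (survey[cols] - survey[cols].mean()) / survey[cols].std(ddof=0)

X = np.column_stack([np.ones(len(z)), z[cols[:3]].to_numpy()])
beta, *_ = np.linalg.lstsq(X, z["system_use"].to_numpy(), rcond=None)

# Standardized coefficients play the role of path coefficients; the abstract
# reports system quality as the strongest predictor of system use (0.611).
for name, b in zip(cols[:3], beta[1:]):
    print(f"{name} -> system_use: {b:.3f}")
```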

    Blockchain technology for optimizing security and privacy in distributed systems

    Blockchain technology is increasingly recognized as an effective solution for addressing security and privacy challenges in distributed systems. Blockchain ensures information security by validating data and defending against cyber threats, while guaranteeing data integrity through transaction validation and reliable storage. The research involves a literature study, problem identification, analysis of blockchain security and privacy, model development, testing, and analysis of trial results. Blockchain also enables user anonymity and fosters transparency by utilizing a distributed network, reducing the risk of fraudulent activities. Its decentralized nature ensures high reliability and accessibility, even in the event of node failures. Blockchain enhances security and privacy by offering features such as data immutability, provenance, and reduced reliance on trust. It decentralizes data storage, making tampering or deletion extremely difficult, and ensures that any change to a block invalidates all subsequent blocks. Blockchain finds applications in various domains, including supply chains, finance, healthcare, and government, enabling enhanced security by tracking data origin and ownership. Despite scalability and security challenges, the potential benefits of reduced costs, increased efficiency, and improved transparency position blockchain as a promising technology for the future. In summary, blockchain technology provides secure transaction recording and data storage, thus enhancing security, privacy, and the integrity of sensitive information in distributed systems.
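    A minimal sketch of the chaining property the abstract relies on, namely that each block stores the hash of its predecessor, so any change to a block invalidates every subsequent block; the block structure and field names are illustrative assumptions:

```python
# Minimal sketch of hash chaining: tampering with one block breaks the chain.
import hashlib
import json

def block_hash(block):
    payload = json.dumps(
        {k: block[k] for k in ("index", "data", "prev_hash")}, sort_keys=True)
    return hashlib.sha256(payload.encode()).hexdigest()

def append_block(chain, data):
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    block = {"index": len(chain), "data": data, "prev_hash": prev_hash}
    block["hash"] = block_hash(block)
    chain.append(block)

def is_valid(chain):
    for i, block in enumerate(chain):
        if block["hash"] != block_hash(block):
            return False                      # block contents were altered
        if i > 0 and block["prev_hash"] != chain[i - 1]["hash"]:
            return False                      # the chain link is broken
    return True

chain = []
for record in ("tx-1", "tx-2", "tx-3"):
    append_block(chain, record)
print(is_valid(chain))          # True

chain[1]["data"] = "tampered"   # any change invalidates the chain from here on
print(is_valid(chain))          # False
```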

    Effects of hyperparameter tuning on random forest regressor in the beef quality prediction model

    Prediction models for beef quality are needed because beef production and consumption are significant and increase yearly. This study aims to create a prediction model for beef freshness quality using the random forest regressor (RFR) algorithm and to improve prediction accuracy through hyperparameter tuning. Near-infrared spectroscopy (NIRS) offers an easy, cheap, and fast technique for predicting beef quality. The study used six meat quality parameters as prediction targets. The R² metric was used to evaluate the predictions and to compare the performance of the RFR with default parameters against the RFR tuned with randomized search (RandomizedSearchCV). With default parameters, the R² values for color (L*), drip loss (%), pH, storage time (hours), total plate count (TPC, in cfu/g), and water moisture (%) were 0.789, 0.839, 0.734, 0.909, 0.845, and 0.544, respectively. After hyperparameter tuning, these R² scores increased to 0.885, 0.931, 0.843, 0.957, 0.903, and 0.739, indicating an overall improvement in the model's performance. The average improvement across all beef quality parameters is 0.0997 in R², or 14%, over the default parameters.
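    A minimal sketch of the tuning setup described above, using scikit-learn's RandomizedSearchCV with R² scoring; the file name, spectral feature columns, search space, and the choice of pH as the target are illustrative assumptions:

```python
# Minimal sketch: random forest regressor tuned with randomized search,
# evaluated with the R² metric, for one of the six quality targets.
import pandas as pd
from scipy.stats import randint
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import RandomizedSearchCV, train_test_split

data = pd.read_csv("nirs_beef.csv")                 # hypothetical NIRS dataset
X = data.filter(like="wavelength_")                 # assumed spectral columns
y = data["pH"]                                      # one of the six targets
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

search = RandomizedSearchCV(
    RandomForestRegressor(random_state=42),
    param_distributions={
        "n_estimators": randint(100, 500),
        "max_depth": randint(3, 30),
        "min_samples_split": randint(2, 10),
        "max_features": ["sqrt", "log2", None],
    },
    n_iter=50, scoring="r2", cv=5, random_state=42)
search.fit(X_train, y_train)

print("best params:", search.best_params_)
print("test R²:", search.best_estimator_.score(X_test, y_test))
```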

    HepatoScan: Ensemble classification learning models for liver cancer disease detection

    Liver cancer is a dangerous disease that poses significant risks to human health. Early detection is complicated by the unpredictable growth of cancer cells. This paper introduces HepatoScan, an ensemble classification approach for detecting and diagnosing liver cancer tumors from liver cancer datasets. HepatoScan is an integrated approach that classifies three types of liver cancer: hepatocellular carcinoma, cholangiocarcinoma, and angiosarcoma. In the initial stage, liver cancer is confined to the liver, while in the second stage it spreads from the liver to other parts of the body. Deep learning is an emerging domain that provides advanced learning models for detecting and diagnosing liver cancers in the early stages. We train the pre-trained InceptionV3 model on liver cancer datasets to identify patterns associated with cancer tumors or cells. For accurate segmentation and classification of liver lesions in computed tomography (CT) scans, the ensemble multi-class classification (EMCC) combines U-Net and the mask region-based convolutional neural network (Mask R-CNN). CT scan images from Kaggle were used for the experimental analysis of liver cancer tumors. Quantitative results show that the proposed approach obtained an improved disease detection rate, with a mean squared error (MSE) of 11.34 and a peak signal-to-noise ratio (PSNR) of 10.34, compared with existing models such as fuzzy C-means (FCM) and kernel fuzzy C-means (KFCM). The classification results, with an accuracy of 0.97, specificity of 0.99, recall of 0.99, and F1-score of 0.97, are very high compared with other existing models.
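    A minimal sketch of fine-tuning a pre-trained InceptionV3 for the three liver-cancer classes named above; the image size, dataset layout, and training settings are assumptions, and the U-Net / Mask R-CNN segmentation ensemble is not shown:

```python
# Minimal sketch: pre-trained InceptionV3 feature extractor with a new
# three-class head (hepatocellular carcinoma, cholangiocarcinoma, angiosarcoma).
import tensorflow as tf

NUM_CLASSES = 3

base = tf.keras.applications.InceptionV3(
    include_top=False, weights="imagenet", input_shape=(299, 299, 3))
base.trainable = False                      # freeze ImageNet features initially

model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dropout(0.3),
    tf.keras.layers.Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(optimizer=tf.keras.optimizers.Adam(1e-4),
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# Hypothetical directory layout: one sub-folder of CT images per class.
train_ds = tf.keras.utils.image_dataset_from_directory(
    "liver_ct/train", image_size=(299, 299), batch_size=32)
model.fit(train_ds, epochs=5)
```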
