23 research outputs found

    TIME AND SPACE COMPLEXITY ANALYSIS OF RSA AND ELGAMAL CRYPTOGRAPHIC ALGORITHMS ON MIXED DATA

    The complexity analysis of algorithms, especially computationally intensive ones, is of great significance. Cryptographic algorithms are considered computationally intensive because they consume substantial computational resources, such as CPU memory and processing time. Cryptographic algorithms secure data transmission by ensuring the integrity, confidentiality and authentication of any form of data. However, determining which cryptographic algorithm is suitable in terms of computation speed and memory usage remains a challenge. While considerable research effort has gone into measuring the complexities of cryptographic algorithms on text, image and audio data, little has been done on video data. In this study, the time and space complexity of the RSA and ElGamal cryptographic algorithms was analysed on mixed data. Both algorithms were implemented in the C-sharp (C#) programming language to encrypt and decrypt text, image, audio and video datasets. In achieving the objectives of the study, the implemented algorithms (RSA and ElGamal) are depicted using pseudocode and flowcharts, while some of the datasets used were sourced from various online repositories. The time complexity for each dataset was obtained using the CPU internal clock, while the space usage for each operation on each dataset was obtained from the computer's internal memory. Tables and graphs were used to carry out the comparative analysis of both algorithms. The experimental results revealed that RSA outperformed ElGamal in computational time during encryption of all categories of data, while ElGamal outperformed RSA in computational time during decryption of all categories of data.
ElGamal also outperformed RSA in memory usage during encryption of all categories of data, while both algorithms used roughly the same amount of space during decryption. Based on the comparative analysis of time and space complexity, RSA is the better algorithm in terms of time complexity and can be said to be time-efficient, whereas ElGamal performed better in memory usage and can be said to be memory-efficient. The study therefore recommends that other measurement metrics be used to compare both algorithms in future work.
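
The measurement methodology the study describes (sampling the CPU clock around each encryption and decryption pass) can be sketched in a few lines. The snippet below is a minimal, insecure textbook RSA in Python, not the study's C# implementation; the tiny primes, the message, and the byte-by-byte encryption are illustrative assumptions only.

```python
import time

# Illustrative textbook RSA with tiny fixed primes -- NOT secure, and not
# the study's implementation. It only sketches how encryption and
# decryption times can be sampled with a high-resolution CPU clock.
p, q = 61, 53
n = p * q                       # public modulus
phi = (p - 1) * (q - 1)
e = 17                          # public exponent, coprime with phi
d = pow(e, -1, phi)             # private exponent (modular inverse, Py 3.8+)

def rsa_encrypt(m: int) -> int:
    return pow(m, e, n)         # c = m^e mod n

def rsa_decrypt(c: int) -> int:
    return pow(c, d, n)         # m = c^d mod n

message = b"hello world"

start = time.perf_counter()
cipher = [rsa_encrypt(b) for b in message]    # byte-by-byte, demo only
enc_time = time.perf_counter() - start

start = time.perf_counter()
plain = bytes(rsa_decrypt(c) for c in cipher)
dec_time = time.perf_counter() - start

assert plain == message
```

ElGamal timings would be collected the same way, swapping its encrypt/decrypt routines into the timed sections.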

    Computational Complexity of Modified Blowfish Cryptographic Algorithm on Video Data

    Background: The technological revolution has allowed users to exchange data and information in various fields, and this is one of the most prevalent uses of computer technologies. However, in a world where third parties are capable of collecting, stealing, and destroying information without authorisation, cryptography remains the primary tool that helps users keep their information secure using various techniques. Blowfish is a modest, secure, and efficient encryption algorithm whose performance is affected by the message size and the key size. Aim: The goal of this study is to design a modified Blowfish algorithm by changing the structure of the F function to encrypt and decrypt video data, and then to compare the normal and modified Blowfish algorithms in terms of time complexity and the avalanche effect. Methods: The modified Blowfish algorithm uses only two S-boxes in the F function instead of the four used in Blowfish. Encryption and decryption times were measured to compare Blowfish with the modified algorithm. Results: The avalanche-effect results reveal that normal Blowfish has a higher security level than the modified Blowfish algorithm for all categories of video file size, with 50.7176% for normal Blowfish versus 43.3398% for the modified algorithm on a 187 KB file; hence it is preferable to secure data and programs that demand a high level of security with normal Blowfish. Conclusions: The experimental results show that the modified Blowfish algorithm performs faster than normal Blowfish in terms of time complexity, with an average execution time of 248.4 ms against 250.0 ms for normal Blowfish. 
Therefore, it can be concluded that the modified Blowfish algorithm with the restructured F function is time-efficient, while normal Blowfish is better in terms of security.
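
The avalanche-effect metric used in the study (the percentage of ciphertext bits that flip when a single plaintext bit is flipped) can be sketched as follows. SHA-256 stands in for the Blowfish variants here, since a full Blowfish implementation is beyond an inline example; the message and function names are illustrative assumptions.

```python
import hashlib

def bit_diff_percent(a: bytes, b: bytes) -> float:
    """Percentage of differing bits between two equal-length byte strings."""
    diff = sum(bin(x ^ y).count("1") for x, y in zip(a, b))
    return 100.0 * diff / (len(a) * 8)

# Stand-in "cipher": SHA-256. A real avalanche test would substitute the
# normal and modified Blowfish encryption routines here.
def toy_encrypt(msg: bytes) -> bytes:
    return hashlib.sha256(msg).digest()

msg = bytearray(b"sample-video-block")
c1 = toy_encrypt(bytes(msg))
msg[0] ^= 0x01                        # flip a single plaintext bit
c2 = toy_encrypt(bytes(msg))
avalanche = bit_diff_percent(c1, c2)  # values near 50% indicate strong diffusion
```

Comparing this figure for the normal and modified ciphers over many inputs yields the averages (50.7176% vs 43.3398%) reported above.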

    Smart transit payment for university campus transportation using RFID card system

    In the transportation business, we aim to be cost-efficient and effective in customer service, but the traditional transit payment system falls short of this. Lately, transit companies all over the world have been moving towards superior client service, nimbleness, and responsiveness to needs that change on a time scale that would have seemed absurd even two decades ago. The aim of this study was to create an electronic transit payment system, with full flexibility and solution functionality, that Covenant University and Nigerian transit companies could adopt to become more effective and efficient. We achieved this with radio frequency identification (RFID) smart cards and card readers driven by a computer program written in C#. The program was kept simple and inexpensive to implement in order to eliminate the mismanagement of ticket funds, paper litter in bus stations, and similar problems. Together, these components form our payment system.
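
The core fare-deduction logic behind such a card system can be sketched with a small prepaid-ledger class. This is a hypothetical Python interface for illustration; the study's system was written in C# and backed by physical RFID readers, and all names and amounts below are assumptions.

```python
class TransitCard:
    """Minimal sketch of a prepaid RFID transit card ledger (hypothetical)."""

    def __init__(self, card_id: str, balance: float = 0.0):
        self.card_id = card_id
        self.balance = balance
        self.history = []           # audit trail replacing paper tickets

    def top_up(self, amount: float) -> None:
        """Add funds to the card; rejects non-positive amounts."""
        if amount <= 0:
            raise ValueError("top-up must be positive")
        self.balance += amount
        self.history.append(("top_up", amount))

    def pay_fare(self, fare: float) -> bool:
        """Deduct the fare if funds suffice; return whether payment succeeded."""
        if fare > self.balance:
            return False
        self.balance -= fare
        self.history.append(("fare", fare))
        return True

card = TransitCard("CU-0042", balance=500.0)   # card ID and fares are made up
ok = card.pay_fare(150.0)
```

A card reader would look up the `TransitCard` record by the RFID tag's ID and call `pay_fare` at boarding, so every transaction is logged instead of handled in cash.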

    Semantics-based clustering approach for similar research area detection

    The manual process of finding individuals working in an existing research field is cumbersome and time-consuming. Prominent and rookie researchers alike tend to seek out existing publications in a research field of interest before formulating a thesis. In the extant literature, automated similar-research-area detection systems have been developed to solve this problem. However, most of them use keyword-matching techniques, which do not sufficiently capture the implicit semantics of keywords and thereby leave out some research articles. In this study, we propose the use of ontology-based pre-processing, Latent Semantic Indexing and K-Means clustering to develop a prototype similar-research-area detection system that can identify publications in similar research domains. Our proposed system addresses the high dimensionality and data sparsity challenges faced by traditional document clustering techniques. The system is evaluated with randomly selected publications from faculties in Nigerian universities, and the results show that integrating ontologies in pre-processing provides more accurate clustering results.
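
The clustering stage of the pipeline above can be sketched with a minimal K-Means run over documents already reduced to a low-dimensional space by LSI. The 2-D vectors, deterministic seeding, and cluster count below are illustrative assumptions, not the study's implementation (which additionally involves ontology-based pre-processing and the LSI step itself).

```python
def kmeans(points, k, iters=20):
    """Minimal K-Means over low-dimensional document vectors, e.g. the
    output of Latent Semantic Indexing. Seeding with the first k points
    keeps the sketch reproducible; k-means++ is the usual choice."""
    centroids = list(points[:k])
    clusters = [[] for _ in range(k)]
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            # Assign each document to its nearest centroid (squared distance).
            nearest = min(range(k),
                          key=lambda c: sum((a - b) ** 2
                                            for a, b in zip(p, centroids[c])))
            clusters[nearest].append(p)
        # Recompute each centroid as the mean of its cluster.
        centroids = [tuple(sum(dim) / len(cl) for dim in zip(*cl)) if cl
                     else centroids[i]
                     for i, cl in enumerate(clusters)]
    return centroids, clusters

# Six "documents" already reduced to 2-D by LSI: two clear topic groups.
docs = [(0.1, 0.2), (0.0, 0.1), (0.2, 0.0),
        (5.0, 5.1), (5.2, 4.9), (4.8, 5.0)]
centroids, clusters = kmeans(docs, k=2)
```

Each resulting cluster groups publications whose reduced LSI vectors are close, i.e. candidate members of the same research area.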

    Crypto-Stegno based model for securing medical information on IOMT platform

    The integration of the Internet of Things into medical systems, referred to as the Internet of Medical Things (IoMT), supports medical activities such as real-time diagnosis, remote monitoring of patients, and real-time drug prescription. This improves the quality of services provided by health workers and thereby patients' satisfaction. However, the integrity and confidentiality of medical information on the IoMT platform remain contentious issues in medical services; a particularly serious concern is the confidentiality of patients' records in the IoMT environment. Therefore, this paper proposes a crypto-stego model to secure medical information in the IoMT environment. The paper validates the system on healthcare information datasets, and the results reveal high perceptual quality, strong resistance to data loss, high embedding capacity and strong security, making the proposed system an authentic strategy for resourceful and efficient protection of medical information on the IoMT platform.
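
The steganographic half of such a crypto-stego pipeline is commonly built on least-significant-bit (LSB) embedding, sketched below in Python. This is a generic illustration, not the paper's scheme: in practice the secret would be encrypted first and the cover would be a medical image's pixel data, and the sample "reading" is made up.

```python
def embed(cover: bytes, secret: bytes) -> bytearray:
    """Hide `secret` in the least-significant bits of `cover`,
    one secret bit per cover byte."""
    bits = [(byte >> i) & 1 for byte in secret for i in range(8)]
    if len(bits) > len(cover):
        raise ValueError("cover too small for secret")
    stego = bytearray(cover)
    for i, bit in enumerate(bits):
        stego[i] = (stego[i] & 0xFE) | bit   # overwrite the LSB only
    return stego

def extract(stego: bytes, n_bytes: int) -> bytes:
    """Recover `n_bytes` of hidden data from the cover's LSBs."""
    out = bytearray()
    for b in range(n_bytes):
        byte = 0
        for i in range(8):
            byte |= (stego[b * 8 + i] & 1) << i
        out.append(byte)
    return bytes(out)

cover = bytes(range(200))        # stand-in for image pixel bytes
secret = b"BP:120/80"            # hypothetical medical reading
stego = embed(cover, secret)
recovered = extract(stego, len(secret))
```

Because only the lowest bit of each cover byte changes, the stego image stays visually close to the original, which is what the perceptibility results in such studies measure.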

    Development of an Improved Convolutional Neural Network for an Automated Face Based University Attendance System

    The present university attendance system is flawed: it has always been time-intensive, inaccurate, and a hard process to follow. It therefore becomes imperative to eradicate or minimise the deficiencies of this archaic method. Human-face identification has evolved into a significant element of autonomous attendance-taking systems because of its ease of adoption and dependable, unobtrusive interaction. Convolutional Neural Networks (CNNs) have drastically altered the field of face recognition; however, they carry high computing costs for analysing information and for determining the best specifications (architecture) for each problem. Thus, this study aims to enhance CNN performance using a Genetic Algorithm (GA) for an automated face-based university attendance system. The improved face recognition accuracy with CNN-GA was 96.49%, while the accuracy with CNN alone was 92.54%.
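
The GA-driven search for CNN specifications can be sketched with a toy genetic algorithm over a small hyperparameter space. The fitness function below is a synthetic stand-in: the study's fitness would be CNN face-recognition accuracy, far too expensive to reproduce inline, and the gene pools, rates, and population sizes are illustrative assumptions.

```python
import random

# Candidate hyperparameter gene pools (illustrative values only).
LEARNING_RATES = [0.1, 0.01, 0.001, 0.0001]
FILTER_COUNTS = [8, 16, 32, 64]

def fitness(genome):
    """Synthetic stand-in for CNN validation accuracy: pretend the sweet
    spot is lr=0.001 with 32 filters (index 2 in each pool)."""
    lr, filters = genome
    return (-abs(LEARNING_RATES.index(lr) - 2)
            - abs(FILTER_COUNTS.index(filters) - 2))

def evolve(pop_size=8, generations=15, seed=1):
    rng = random.Random(seed)
    pop = [(rng.choice(LEARNING_RATES), rng.choice(FILTER_COUNTS))
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]        # truncation selection
        children = []
        while len(children) < pop_size - len(parents):
            a, b = rng.sample(parents, 2)
            child = (a[0], b[1])              # crossover: lr from a, filters from b
            if rng.random() < 0.3:            # mutate the learning rate
                child = (rng.choice(LEARNING_RATES), child[1])
            if rng.random() < 0.3:            # mutate the filter count
                child = (child[0], rng.choice(FILTER_COUNTS))
            children.append(child)
        pop = parents + children
    return max(pop, key=fitness)

best = evolve()
```

In the real system, `fitness` would train and validate a CNN for each genome, so the GA trades many cheap generations here for a few expensive ones in practice.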

    The global burden of cancer attributable to risk factors, 2010-19 : a systematic analysis for the Global Burden of Disease Study 2019

    Background Understanding the magnitude of cancer burden attributable to potentially modifiable risk factors is crucial for development of effective prevention and mitigation strategies. We analysed results from the Global Burden of Diseases, Injuries, and Risk Factors Study (GBD) 2019 to inform cancer control planning efforts globally. Methods The GBD 2019 comparative risk assessment framework was used to estimate cancer burden attributable to behavioural, environmental and occupational, and metabolic risk factors. A total of 82 risk-outcome pairs were included on the basis of the World Cancer Research Fund criteria. Estimated cancer deaths and disability-adjusted life-years (DALYs) in 2019 and change in these measures between 2010 and 2019 are presented. Findings Globally, in 2019, the risk factors included in this analysis accounted for 4.45 million (95% uncertainty interval 4.01-4.94) deaths and 105 million (95.0-116) DALYs for both sexes combined, representing 44.4% (41.3-48.4) of all cancer deaths and 42.0% (39.1-45.6) of all DALYs. There were 2.88 million (2.60-3.18) risk-attributable cancer deaths in males (50.6% [47.8-54.1] of all male cancer deaths) and 1.58 million (1.36-1.84) risk-attributable cancer deaths in females (36.3% [32.5-41.3] of all female cancer deaths). The leading risk factors at the most detailed level globally for risk-attributable cancer deaths and DALYs in 2019 for both sexes combined were smoking, followed by alcohol use and high BMI. Risk-attributable cancer burden varied by world region and Socio-demographic Index (SDI), with smoking, unsafe sex, and alcohol use being the three leading risk factors for risk-attributable cancer DALYs in low SDI locations in 2019, whereas DALYs in high SDI locations mirrored the top three global risk factor rankings. 
From 2010 to 2019, global risk-attributable cancer deaths increased by 20.4% (12.6-28.4) and DALYs by 16.8% (8.8-25.0), with the greatest percentage increase in metabolic risks (34.7% [27.9-42.8] and 33.3% [25.8-42.0]). Interpretation The leading risk factors contributing to global cancer burden in 2019 were behavioural, whereas metabolic risk factors saw the largest increases between 2010 and 2019. Reducing exposure to these modifiable risk factors would decrease cancer mortality and DALY rates worldwide, and policies should be tailored appropriately to local cancer risk factor burden. Copyright (C) 2022 The Author(s). Published by Elsevier Ltd. This is an Open Access article under the CC BY 4.0 license.