Towards ensuring scalability, interoperability and efficient access control in a triple-domain grid-based environment
Philosophiae Doctor - PhD
The high rate of grid computing adoption, in both academia and industry, has posed challenges regarding efficient access control, interoperability and scalability. Although several methods have been proposed to address these challenges, none has proven to be completely efficient and dependable. To tackle them, a novel access control architecture framework, a triple-domain grid-based environment modelled on role-based access control, was developed. The framework assumes three domains, each with an independent Local Security Monitoring Unit, together with a Central Security Monitoring Unit that monitors security for the entire grid. The architecture was evaluated and implemented using the G3S grid security services simulator, a meta-query language for "cross-domain" queries, and Java Runtime Environment 1.7.0.5 for implementing the workflows that define the model's tasks. The simulation results show that the developed architecture is reliable and efficient when measured against the observed parameters and entities. The proposed access control framework also proved interoperable and scalable within the parameters tested.
Using Four Learning Algorithms for Evaluating Questionable Uniform Resource Locators (URLs)
Malicious Uniform Resource Locators (URLs) are a common and serious threat to cyber security. Malicious URLs host unsolicited content (spam, phishing, drive-by exploits, etc.) and lure unsuspecting internet users into becoming victims of scams such as monetary loss, theft, loss of information privacy and unexpected malware installation. This phenomenon has led to an increase in cybercrime on social media via the transfer of malicious URLs. The situation calls for efficient and reliable classification of a web page based on the information contained in its URL, so that the nature and status of the site to be accessed is clearly understood, and it is imperative to detect and act on URLs shared on social media platforms in a timely manner. Although researchers have carried out similar studies in the past, there are conflicting results regarding the conclusions drawn from their experiments. Against this backdrop, four machine learning algorithms (Naïve Bayes, K-means, Decision Tree and Logistic Regression) were selected for the classification of fake and vulnerable URLs. The algorithms were implemented in the Java programming language. Statistical analysis and comparison of the four algorithms show that Naïve Bayes is the most efficient and effective based on the metrics used.
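As an illustration of the token-based URL classification described above (not the paper's implementation, which was written in Java), a minimal multinomial Naïve Bayes classifier can be sketched in Python; the training URLs and their labels here are hypothetical examples:

```python
import math
import re
from collections import Counter, defaultdict

# Hypothetical labelled sample; a real study would use a large URL corpus.
TRAIN = [
    ("http://secure-login.paypa1-update.com/verify", "malicious"),
    ("http://free-prizes.win/claim?id=123", "malicious"),
    ("http://bank-0f-america.account-check.net", "malicious"),
    ("https://www.wikipedia.org/wiki/Naive_Bayes", "benign"),
    ("https://github.com/openssl/openssl", "benign"),
    ("https://docs.python.org/3/library/urllib.html", "benign"),
]

def tokens(url):
    """Split a URL into lowercase lexical tokens."""
    return [t for t in re.split(r"[\W_]+", url.lower()) if t]

def train(data):
    """Fit per-class token counts and class priors."""
    token_counts = defaultdict(Counter)
    priors = Counter()
    for url, label in data:
        priors[label] += 1
        token_counts[label].update(tokens(url))
    vocab = {t for counts in token_counts.values() for t in counts}
    return token_counts, priors, vocab, len(data)

def classify(url, model):
    """Return the class with the highest log posterior (Laplace smoothing)."""
    token_counts, priors, vocab, n = model
    def score(label):
        total = sum(token_counts[label].values())
        s = math.log(priors[label] / n)
        for t in tokens(url):
            s += math.log((token_counts[label][t] + 1) / (total + len(vocab)))
        return s
    return max(priors, key=score)

model = train(TRAIN)
print(classify("http://paypa1-update.net/verify-account", model))  # -> malicious
```

Tokens such as "paypa1" and "verify" only occur in the malicious training URLs, so the smoothed likelihoods push the posterior toward that class.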
Detecting Malicious and Compromised URLs in E-Mails Using Association Rule
The rate of cybercrime is rising as more people embrace technology in the different spheres of their lives. Hackers daily exploit the anonymity and speed the internet offers to lure unsuspecting victims into disclosing personal and confidential information through social engineering, phishing mails and sites, and promises of great rewards that are never received, resulting in great loss of property, finances and even life, and harm to the victims. This research work evaluates ways of protecting users from malicious Uniform Resource Locators (URLs) embedded in the emails they receive, by identifying malicious URLs and classifying them based on their lexical and hostname features. The study extracts features from URLs sourced from PhishTank and DMOZ and adopts association rule classification to build a URL classifier that analyses the extracted features of a URL and uses them to predict whether it is malicious. An accuracy of 0.546 and an error rate of 0.484 were achieved when multiple URL features were employed in the classification process.
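The kind of lexical and hostname features mentioned above can be extracted with the Python standard library; the feature set shown is illustrative only, since the abstract does not enumerate the paper's exact features:

```python
import re
from urllib.parse import urlparse

def url_features(url):
    """Extract simple lexical and hostname features from a URL.

    The feature names here are illustrative assumptions, not the
    paper's exact feature set.
    """
    parsed = urlparse(url)
    host = parsed.hostname or ""
    return {
        "url_length": len(url),                      # lexical: raw length
        "host_length": len(host),                    # hostname length
        "num_dots": host.count("."),                 # subdomain depth hint
        "num_digits": sum(c.isdigit() for c in url), # digit-heavy URLs are suspect
        "num_hyphens": url.count("-"),
        "has_ip_host": bool(re.fullmatch(r"(\d{1,3}\.){3}\d{1,3}", host)),
        "has_at_symbol": "@" in url,                 # classic obfuscation trick
        "uses_https": parsed.scheme == "https",
        "path_depth": parsed.path.count("/"),
    }

feats = url_features("http://192.168.10.5/login-update/verify.php?id=9")
```

A vector like this is what a classifier (association rules in the paper's case) would consume for each URL.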
Digital Twin Technology: A Review of Its Applications and Prominent Challenges
A digital twin is a virtual representation of a physical product that is used as a benchmark to evaluate, diagnose, optimize and supervise the operational performance of products before full mass production is undertaken in accordance with global standards. A digital twin merges virtual and physical objects via sensors and the IoT to transmit data and keep track of the objects' interactions within their present environments. In a virtual model environment, a digital twin permits product troubleshooting and testing that minimize failure rates and product defects during manufacturing, enhancing effectiveness and customer satisfaction. Digital twins are used throughout the product life cycle to simulate, optimize and predict product quality before final production is financed. They also benefit the modern digital society because they can boost the attitude and motivation of factory workers. Digital twins are here to stay: future product suppliers may be required to put forward digital twins of their products for virtual lab testing before orders are placed, and suppliers that fail to comply may be left behind. With the emergence of digital twins, virtual testing can be conducted on proposed products before they find their way into physical marketplaces. The business sector remains one of the chief beneficiaries, predicting the present and future state of a physical product via analysis of its digital peer. Today, digital twin applications can support enterprises by improving product performance, decision making and customer satisfaction with logistic and operational workflows. This survey of digital twin research reviews digital twins in detail: their impact and benefits to modern society, their architecture, their security challenges, and the solutions proffered.
It is believed that ICT experts, manufacturers and industries will leverage this research to improve the quality of service (QoS) of new and future products and take full advantage of returns on investment via digital twins.
Comparative Analysis of Encryption Algorithms
The vulnerable nature of sensitive and classified information, such as health and banking data, has undoubtedly caused serious harm to individuals who should enjoy the privacy and confidentiality of their information. Encryption algorithms are used to guarantee the security of information in transit from one source to another and to prevent confidential information from being revealed to unauthorized people; they secure and protect data transmitted from one end to the other against any form of vulnerability. Over the years, researchers have adopted these algorithms to ensure the privacy of information in banking, health and the military. The algorithms vary in efficiency, accuracy, reliability and response time when used for data protection. For a comparative assessment, we considered the Rivest-Shamir-Adleman (RSA), Advanced Encryption Standard (AES) and Data Encryption Standard (DES) algorithms. Since there is skepticism about which algorithm is the most reliable, dependable and functional with respect to the features that characterize their variation, this work carries out a comparative assessment of each encryption algorithm to ascertain the best using the stated metrics. The implementation was carried out in C#. The experimental results reveal that AES takes the least time for encryption while RSA consumes the longest encryption time. AES is also considered the most efficient of the three algorithms based on the evaluation metrics. A selection of the results obtained is presented in this paper.
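The timing gap reported above (AES fastest, RSA slowest) follows from the cost of public-key operations: RSA encrypts via modular exponentiation over large integers, whereas symmetric ciphers apply cheap per-byte transformations. A toy Python sketch, using insecure textbook RSA with tiny fixed primes and a simple XOR stream as a stand-in for a symmetric cipher (this is neither the paper's C# implementation nor real AES), illustrates the asymmetry:

```python
import time

# Textbook RSA with tiny fixed primes -- insecure, purely to show the
# modular-exponentiation cost that makes RSA the slowest of the three.
p, q = 61, 53
n = p * q                           # public modulus
e = 17                              # public exponent
d = pow(e, -1, (p - 1) * (q - 1))   # private exponent (Python 3.8+)

def rsa_encrypt(m):
    return pow(m, e, n)             # c = m^e mod n

def rsa_decrypt(c):
    return pow(c, d, n)             # m = c^d mod n

KEY = b"sixteen-byte-key"

def xor_encrypt(data, key=KEY):
    """Stand-in for a fast symmetric cipher (NOT real AES)."""
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

# Round-trip sanity check on a message smaller than the modulus.
assert rsa_decrypt(rsa_encrypt(42)) == 42

payload = bytes(range(256)) * 64    # 16 KiB of test data

t0 = time.perf_counter()
xor_encrypt(payload)
t_sym = time.perf_counter() - t0

t0 = time.perf_counter()
for b in payload:
    rsa_encrypt(b)                  # RSA works block-by-block; one byte per op here
t_rsa = time.perf_counter() - t0

print(f"symmetric-style: {t_sym:.4f}s   rsa-style: {t_rsa:.4f}s")
```

With realistic 2048-bit moduli the per-block exponentiation is orders of magnitude more expensive, which is why hybrid schemes use RSA only to wrap a symmetric session key.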
Performance Evaluation of Convolutional Neural Networks (CNNs) and Recurrent Neural Networks (RNNs) for Intrusion Detection
In the context of cybersecurity, effective intrusion detection plays a crucial role in safeguarding computer networks and systems from malicious activities. The motivation for this project stems from the increasing complexity and sophistication of cyberattacks, which necessitates the development of advanced and accurate intrusion detection models. The aim of this work is to perform a comprehensive evaluation of Convolutional Neural Networks (CNNs) and Recurrent Neural Networks (RNNs) for intrusion detection. CNNs and RNNs are two popular deep learning architectures known for their ability to extract meaningful patterns and temporal dependencies, respectively, making them suitable candidates for intrusion detection tasks. Two benchmark datasets, NSL-KDD and CICIDS2017, containing labelled network traffic data with various types of intrusions, were employed and compared through multiple evaluation metrics. The experimental results demonstrate the effectiveness of both CNN and RNN models in detecting intrusions. The CNN model achieved an accuracy of 86.40% on the NSL-KDD dataset and 95.20% on the CICIDS2017 dataset, while the RNN model achieved accuracies of 96.20% and 94.10% on the respective datasets. Additionally, precision, recall, F1-score, error rate and other metrics were calculated and compared for both models. The results highlight the superior accuracy of the RNN on the NSL-KDD dataset and of the CNN on the CICIDS2017 dataset. These findings contribute to the body of knowledge in the field of intrusion detection and can guide the selection and deployment of appropriate models for real-world applications, ultimately enhancing the security of computer networks and systems.
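The metrics compared above (accuracy, precision, recall, F1-score, error rate) are all derived from a model's confusion matrix. A minimal helper, with hypothetical counts rather than the paper's results, shows how they relate:

```python
def metrics(tp, fp, fn, tn):
    """Standard detection metrics from confusion-matrix counts:
    tp/fp = true/false positives, fn/tn = false/true negatives."""
    total = tp + fp + fn + tn
    accuracy = (tp + tn) / total
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return {
        "accuracy": accuracy,
        "error_rate": 1 - accuracy,   # error rate is the complement of accuracy
        "precision": precision,
        "recall": recall,
        "f1": f1,
    }

# Hypothetical confusion-matrix counts for illustration only.
m = metrics(tp=90, fp=10, fn=5, tn=95)
```

Note that accuracy alone can mislead on imbalanced traffic data, which is why intrusion-detection studies also report precision, recall and F1.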
Machine learning approach for identifying suspicious uniform resource locators (URLs) on Reddit social network
The applications and advantages of the Internet for real-time information sharing can never be over-emphasized. These great benefits are too numerous to mention, but they are being seriously hampered and made vulnerable by the phishing that is ravaging cyberspace. This development is, undoubtedly, frustrating the efforts of the Global Cyber Alliance, an agency with the singular purpose of reducing cyber risk. Consequently, various researchers have attempted to proffer solutions to phishing, but these solutions are considered inefficient and unreliable, as evidenced by the authors' conflicting claims. Against this backdrop, this work attempts to find the best approach to identifying suspicious uniform resource locators (URLs) on the Reddit social network. It addresses two major problems: first, how can suspicious URLs be identified on the Reddit social network with machine learning techniques? And second, how can internet users be safeguarded from unreliable and fake URLs on the Reddit social network? Six machine learning algorithms (AdaBoost, Gradient Boost, Random Forest, Linear SVM, Decision Tree and Naïve Bayes) were trained on features obtained from the Reddit social network, with additional processing. A total of 532,403 posts were analyzed, of which only 87,083 were considered suitable for training the models. After the experimentation, the best-performing algorithm was AdaBoost, with an accuracy of 95.5% and a precision of 97.57%.
A Predictive Model for Benchmarking the Performance of Algorithms for Fake and Counterfeit News Classification in Global Networks