International Journal of Scientific Research in Network Security and Communication
Exploration of Blockchain Technology for Enhanced Data Security in Small and Medium Enterprises
Small and Medium Enterprises (SMEs) constitute a significant portion of the global business landscape, yet they often face challenges in fortifying their data security infrastructure. Constrained by limited resources, SMEs struggle to secure sensitive information. This research paper investigates the integration of blockchain technology to enhance data security in SMEs. The study assesses the current data security landscape in SMEs, identifies vulnerabilities, and explores the potential of blockchain as a robust solution. Emphasizing decentralization and tamper resistance, the paper highlights how blockchain can strengthen data security, offering benefits such as improved integrity and transparent audit trails. Practical considerations, including integration challenges and user adoption, are addressed, providing actionable insights for SMEs seeking to protect their data through blockchain implementation. The research contributes to the evolving discourse on cybersecurity in SMEs and offers a foundation for practical applications of blockchain technology in data protection.
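The tamper resistance the abstract emphasizes comes from chaining cryptographic hashes: each block's hash covers both its record and the previous block's hash, so altering any stored record invalidates every later link. A minimal sketch in Python (the record fields and chain layout here are illustrative, not the paper's actual design):

```python
import hashlib
import json

GENESIS = "0" * 64  # placeholder hash for the first block

def block_hash(record: dict, prev_hash: str) -> str:
    """Hash a record together with the previous block's hash."""
    payload = json.dumps(record, sort_keys=True) + prev_hash
    return hashlib.sha256(payload.encode()).hexdigest()

def build_chain(records):
    """Link records into a hash chain."""
    chain, prev = [], GENESIS
    for rec in records:
        h = block_hash(rec, prev)
        chain.append({"record": rec, "prev_hash": prev, "hash": h})
        prev = h
    return chain

def verify_chain(chain) -> bool:
    """Recompute every hash; any tampered record breaks verification."""
    prev = GENESIS
    for blk in chain:
        if blk["prev_hash"] != prev or block_hash(blk["record"], prev) != blk["hash"]:
            return False
        prev = blk["hash"]
    return True

# Example: two business records, then a tampering attempt
chain = build_chain([{"id": 1, "doc": "invoice"}, {"id": 2, "doc": "contract"}])
ok_before = verify_chain(chain)
chain[0]["record"]["doc"] = "altered"
ok_after = verify_chain(chain)
```

A production deployment would add distributed consensus across nodes, which is what gives blockchain its decentralization; the hash chain alone only provides tamper evidence.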
 
Design Of Microstrip Patch Antenna For Sixth Generation Frequency Band
The rapid advancement of technology and our growing reliance on it in daily life have driven user demand for higher data transfer speeds. This work presents the design of a rectangular microstrip single-patch antenna as well as 1x2, 2x2, and 1x4 array patch antennas for 6G applications operating in the 100 GHz–300 GHz frequency range. A rectangular patch antenna with a copper conductor on a Duroid 5880 substrate serves as the centrepiece of this arrangement. The dimensions are calculated using standard design formulas and software, and the design was optimized and simulated in CST Studio Suite. The assessed characteristics (return loss, bandwidth, gain, directivity, side-lobe level, 3 dB angular width, input impedance, radiation efficiency, and VSWR) show adequate performance for the specified antennas. The array configurations are used to optimize the designated antenna in order to attain optimal performance.
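The closed-form design formulas the abstract alludes to can be sketched with the standard transmission-line model of a rectangular patch. The 0.127 mm substrate height and the 100 GHz design frequency below are assumptions for illustration, not values taken from the paper; only the Duroid 5880 permittivity (εr ≈ 2.2) is a standard datasheet figure.

```python
import math

C = 3e8  # speed of light, m/s

def patch_dimensions(f0_hz: float, eps_r: float, h_m: float):
    """Return (width_mm, length_mm) of a rectangular microstrip patch
    from the standard transmission-line design equations."""
    # Patch width for efficient radiation
    w = C / (2 * f0_hz) * math.sqrt(2 / (eps_r + 1))
    # Effective dielectric constant (accounts for fringing fields)
    eps_eff = (eps_r + 1) / 2 + (eps_r - 1) / 2 * (1 + 12 * h_m / w) ** -0.5
    # Length extension on each edge due to fringing
    dl = 0.412 * h_m * ((eps_eff + 0.3) * (w / h_m + 0.264)) / (
        (eps_eff - 0.258) * (w / h_m + 0.8))
    # Physical patch length
    length = C / (2 * f0_hz * math.sqrt(eps_eff)) - 2 * dl
    return w * 1e3, length * 1e3

# Example: 100 GHz patch on Duroid 5880 (eps_r = 2.2), assumed 0.127 mm substrate
W_mm, L_mm = patch_dimensions(100e9, 2.2, 0.127e-3)
```

At these frequencies the computed patch is roughly a millimetre across, which is why the paper then relies on CST Studio Suite simulation to refine the hand-calculated starting point.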
 
Interactive Learning with a Digital Library Education in Science, Technology, and Engineering
The need for information systems that facilitate the distribution and reuse of educational materials has spurred numerous extensive initiatives to create digital libraries. Ideas like online learning are now being applied to teaching and learning more often. However, STEM education still lags behind in adopting new technological approaches because of the nature of the subject matter: these fields frequently require laboratory exercises in order to teach skills effectively and offer practical experience. Making these laboratories available online is often challenging; it is necessary either to reproduce the real lab as an entirely software-based virtual lab or to permit remote access to the real lab. New and developing technologies that can overcome some of the challenges in this field are currently emerging, including virtual worlds, computational dynamics, augmented reality, and computer graphics. This article compiles the state of the art in virtual labs and virtual environments for science, technology, and engineering. Many of the same learning processes may be observed in the use of virtual labs for other scientific and non-robotic engineering purposes. This can involve encouraging the introduction of new ideas as part of studying science and technology, introducing more general engineering knowledge, and supporting more positive and cooperative training and education activities in the context of a more complex engineering subject, such as robotics.
Diabetes Predictor: Prediction Using Machine Learning Techniques
As the old saying goes, prevention is better than cure when it comes to health. The likelihood of saving lives can be greatly increased by anticipating diseases such as diabetes. Numerous variables, including age, obesity, lack of exercise, genetic predisposition, lifestyle, nutrition, and high blood pressure, can contribute to diabetes, an illness that is spreading quickly. With the help of machine learning techniques (MLT), healthcare professionals can now forecast patient outcomes using pre-existing data, which makes them indispensable tools. Several classification machine learning methods are used in a diabetes prediction project to identify the most accurate model. This model takes into account extrinsic factors linked to diabetes risk in addition to conventional components like insulin, age, BMI, and glucose. Comprehending the body's natural glucose-regulating process is essential to understanding diabetes. The body uses glucose, which is obtained from foods high in carbohydrates, as its main energy source. The pancreas secretes insulin, which makes glucose easier for cells to use as fuel. Diabetes, on the other hand, is brought on by inadequate insulin synthesis or inadequate insulin use, which raises blood glucose levels. Here, skin thickness, number of pregnancies, and pedigree function are additional characteristics that improve the model's predictive power. These factors enhance the accuracy of diabetes risk assessment by adding to conventional markers and providing insightful information. Proactive illness prediction is made possible by utilizing MLT in the healthcare industry, especially for conditions like diabetes. The predictive accuracy of diabetes models can be greatly increased by incorporating both traditional and non-conventional risk indicators, such as skin thickness, number of pregnancies, and pedigree function, enabling early intervention and better patient outcomes.
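One of the simplest classification methods such a project might compare is k-nearest neighbours: a new patient is labelled by majority vote of the most similar training records. A minimal sketch, with entirely illustrative feature values (not the paper's dataset), and note that a real pipeline would normalize the features so glucose does not dominate the distance:

```python
import math
from collections import Counter

def knn_predict(train, query, k=3):
    """Classify `query` by majority vote of its k nearest training rows.
    train: list of (feature_vector, label) pairs."""
    nearest = sorted(train, key=lambda row: math.dist(row[0], query))[:k]
    votes = Counter(label for _, label in nearest)
    return votes.most_common(1)[0][0]

# Toy rows: [glucose, BMI, age, pregnancies] -> 1 = diabetic, 0 = not
# (values are illustrative only, not clinical data)
train = [
    ([148, 33.6, 50, 6], 1),
    ([183, 23.3, 32, 8], 1),
    ([168, 43.1, 33, 5], 1),
    ([85, 26.6, 31, 1], 0),
    ([89, 28.1, 21, 1], 0),
    ([78, 31.0, 26, 4], 0),
]
high_risk = knn_predict(train, [160, 35.0, 45, 7])
low_risk = knn_predict(train, [90, 27.0, 25, 2])
```

The abstract's point about adding skin thickness and pedigree function corresponds to simply extending each feature vector; the same voting logic applies unchanged.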
 
Empowering Students: Building an Integrated Application for Enhanced Productivity, Efficiency and Creativity
In the realm of education, students often encounter a myriad of challenges when it comes to managing their academic tasks efficiently and expressing their creativity effectively. This research paper delves into developing and implementing an integrated application designed specifically for students to streamline their workflows, enhance productivity, and foster creativity. By examining the features, functionalities, and potential impact of this application, we explore its capacity to address the diverse needs of students and revolutionize their academic experiences. By thoroughly examining and presenting empirical data, this study highlights how technology can revolutionize the future of education.
 
A Cloud-Based Machine Learning Approach for Blood Cell Classification using YOLOv5
Checking blood cell counts is crucial for diagnosing health issues. Traditionally, this involves manually counting cells under a microscope, a slow and tiring process. This research explores a machine learning approach for the automatic identification and counting of three types of blood cells using the 'you only look once' (YOLO) object detection and classification algorithm. The YOLO framework was trained with a modified configuration on the BCCD dataset of blood smear images to automatically identify and count red blood cells (RBCs), white blood cells (WBCs), and platelets. This study also compares the framework with other convolutional neural network architectures in terms of architectural complexity, reported accuracy, and running time for blood cell detection. Overall, the computer-aided detection and counting system counts blood cells from smear images in less than a second, which is useful for practical applications. Among state-of-the-art object detection algorithms such as regions with convolutional neural networks (R-CNN) and YOLO, we chose the YOLO framework, which is about three times faster than Faster R-CNN with the VGG-16 architecture. YOLO uses a single neural network to predict bounding boxes and class probabilities directly from the full image in one evaluation. We retrained the YOLO framework to automatically identify and count RBCs, WBCs, and platelets from blood smear images. The trained model was also tested on images from another dataset, yielding precision and accuracy of around 95% with a recall-confidence of 0.99.
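The counting step downstream of detection can be sketched independently of the model itself: each YOLO detection is a class label, a confidence score, and a bounding box, and counting is a thresholded tally per class. The tuple layout and class names below follow the BCCD convention but are an assumed interface, since running the real detector requires trained YOLOv5 weights.

```python
def count_cells(detections, conf_threshold=0.5):
    """Tally per-class cell counts from YOLO-style detections.
    detections: iterable of (class_name, confidence, bbox) tuples,
    as a trained detector might emit for one smear image."""
    counts = {"RBC": 0, "WBC": 0, "Platelets": 0}
    for cls, conf, _bbox in detections:
        # Keep only confident predictions of known cell classes
        if conf >= conf_threshold and cls in counts:
            counts[cls] += 1
    return counts

# Hypothetical detections for one image (bbox as x, y, w, h)
detections = [
    ("RBC", 0.97, (10, 12, 30, 30)),
    ("RBC", 0.88, (55, 40, 28, 29)),
    ("WBC", 0.93, (100, 80, 60, 58)),
    ("Platelets", 0.76, (140, 20, 12, 12)),
    ("RBC", 0.31, (200, 150, 25, 26)),  # below threshold, ignored
]
counts = count_cells(detections)
```

The confidence threshold is the knob that trades precision against recall, which is where the abstract's 0.99 recall-confidence figure comes into play.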
 
Unlocking Network Security and QoS: The Fusion of SDN, IoT, and Machine Learning: A Comprehensive Analysis
The convergence of Software-Defined Networking (SDN) and the Internet of Things (IoT) has ushered in transformative changes, offering unparalleled levels of network flexibility, programmability, and connectivity. While this integration provides numerous benefits, it also introduces security challenges. Motivated by the imperative to fortify the security posture in this dynamically evolving landscape, this review paper explores the vulnerabilities, threats, and corresponding responses in the security landscape of SDN and IoT. Recognizing the critical need for proactive security measures, the paper underscores the potential of Quality of Service (QoS) empowered by Machine Learning (ML) as a solution. By harnessing ML, QoS emerges as a powerful means to proactively identify and mitigate potential attacks, offering an effective approach to enhance network security. The motivation behind integrating QoS with ML lies in its ability to ensure dependability, availability, and integrity, thereby instilling confidence in the reliability and resilience of the interconnected world. The paper examines these challenges, delving into the proactive management of QoS within SDN, the intricacies of IoT network architectures, and the unique features and limitations of IoT systems. Furthermore, it comprehensively addresses potential countermeasures for various security threats, such as Denial of Service (DoS), Man-in-the-Middle (MITM), and ransomware attacks, particularly on devices with limited resources. This abstract provides a concise yet comprehensive overview of the paper's motivations, emphasizing the urgency and significance of the proposed solutions for securing modern network environments.
Security for AI and IoT Convergence: Novel Perspectives
The conjunction of Artificial Intelligence (AI) and the Internet of Things (IoT) presents a transformative synergy that holds immense promise for various domains, ranging from healthcare and smart cities to industrial automation and autonomous vehicles. However, this convergence also introduces a plethora of security challenges that demand innovative and novel perspectives to safeguard the integrity, confidentiality, and availability of data and systems. This paper explores the intricate landscape of "Security for AI and IoT Convergence" and introduces pioneering approaches and insights to mitigate the evolving threat landscape. Through a comprehensive literature review, we identify the current security challenges inherent in the intersection of AI and IoT, including vulnerabilities in connected devices, data privacy concerns, and the complex interplay between autonomous decision-making and real-time threat detection. We then present novel perspectives and methodologies that leverage cutting-edge technologies like machine learning, blockchain, and interdisciplinary collaborations to address these challenges effectively. To ground our discussions, we offer real-world case studies that illustrate the practical implementation and impact of these novel security perspectives. We also delve into the evaluation metrics and considerations required to assess the efficacy of these security solutions. Additionally, we highlight the significance of ongoing research, regulatory compliance, and ethical dimensions in shaping the future of AI and IoT convergence security. This paper not only serves as an essential reference for researchers and practitioners in the field but also underscores the imperative nature of continuous innovation and vigilance in ensuring the secure coexistence of AI and IoT technologies.
 
A Review on New Multilevel Scheduling Algorithm and SJF and Priority Scheduling Algorithms
After reviewing a variety of CPU scheduling algorithms, this paper focuses on two of them, the shortest job first (SJF) and priority scheduling algorithms, together with an improved priority scheduling algorithm that performs better than current scheduling algorithms. Scheduling is the process of assigning tasks to the CPU to optimize its use. Because the CPU is the most important resource in a computer system, numerous scheduling approaches aim to maximize its utilization. The purpose of this study is to explore how the CPU scheduler constructs high-quality scheduling algorithms that meet the scheduling objectives, and to study the performance of a multilevel scheduling algorithm that combines two scheduling algorithms.
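Non-preemptive SJF, one of the two algorithms the review centres on, can be sketched in a few lines: sort ready jobs by burst time and run them back to back, accumulating each job's waiting and turnaround time. The process list below is an invented example, not data from the paper.

```python
def sjf_schedule(processes):
    """Non-preemptive shortest-job-first for jobs all arriving at t = 0.
    processes: list of (pid, burst_time).
    Returns {pid: (waiting_time, turnaround_time)}."""
    clock, result = 0, {}
    # SJF picks the shortest remaining burst first
    for pid, burst in sorted(processes, key=lambda p: p[1]):
        result[pid] = (clock, clock + burst)  # waits until `clock`, finishes after burst
        clock += burst
    return result

# Priority scheduling is identical in structure: only the sort key
# changes from burst time to a priority value.
times = sjf_schedule([("P1", 6), ("P2", 2), ("P3", 8), ("P4", 3)])
avg_wait = sum(w for w, _ in times.values()) / len(times)
```

Sorting by burst time is exactly what makes SJF optimal for average waiting time among non-preemptive policies, and also what makes long jobs liable to starvation, the weakness multilevel schemes try to mitigate.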
 
A Review of Credit Card Fraud Detection Using Machine Learning
Nowadays fraud has been increasing due to the establishment of online payment mode on different E-commerce platform.A credit card is a form of payment that lets you buy goods or services on credit from an issuer, usually a bank. You can make purchases up to a specified limit and then pay them off over time either in full or with minimum payments.There are several types of security features including fraud protection, verified by visa and master card secure code, address verification systems, and biometric authentication. Additionally, some cards offer the additional security feature of a chip and pin system which requires that the cardholder enter a secret code to make purchases.Still fraud has been executed using this card. In this fraud, banks, merchants, and organisations are losing billions of dollars. According to one survey, the prevalence of credit card fraud is rising by 12.5% a year. It is crucial to identify fraud using secure and effective methods. Nowadays, hybrid algorithms and artificial neural networks are used to detect fraud since they perform better than other methods. We will use dataset variables like "duration," "amount of transaction," and "V1 to V28" as derived parameters for this. We will build a model that will separate out fraudulent transactions from other transactions using machine learning techniques or algorithms.