10 research outputs found

    Cyber Security against Intrusion Detection Using Ensemble-Based Approaches

    Get PDF
    Cyberattacks are increasing rapidly due to the advanced techniques applied by hackers, and the demand for cyber security grows day by day as cybercriminals operate across the digital world. Designing privacy and security measures for IoT-based systems is therefore necessary for a secure network. Although various machine learning techniques have been applied toward the goal of cyber security, much work is still needed on intrusion detection. Recently, the concept of hybrid learning has drawn the attention of information security specialists as a route to further improvement against cyber threats. In the proposed framework, a hybrid swarm-intelligence and evolutionary method for feature selection, namely PSO-GA (PSO-based GA), is applied to the CICIDS-2017 dataset before training the model. The model is evaluated using ELM-BA, which applies bootstrap resampling to increase the reliability of the extreme learning machine (ELM). This work achieved the highest accuracy of 100% on PortScan, SQL injection, and brute-force attacks, which shows that the proposed model can be employed effectively in cybersecurity applications.
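    A minimal sketch of the bagged extreme learning machine idea (ELM-BA) described above, assuming numeric feature matrices and binary labels; the hidden-layer size, tanh activation, and majority vote are illustrative choices, not the paper's settings:

```python
import numpy as np

class ELM:
    """Extreme learning machine: random hidden layer, closed-form output weights."""
    def __init__(self, n_hidden=128, seed=0):
        self.n_hidden = n_hidden
        self.rng = np.random.default_rng(seed)

    def fit(self, X, y):
        self.W = self.rng.normal(size=(X.shape[1], self.n_hidden))
        self.b = self.rng.normal(size=self.n_hidden)
        H = np.tanh(X @ self.W + self.b)      # hidden-layer activations
        self.beta = np.linalg.pinv(H) @ y     # least-squares output weights
        return self

    def predict(self, X):
        return (np.tanh(X @ self.W + self.b) @ self.beta > 0.5).astype(int)

def elm_ba_predict(X_tr, y_tr, X_te, n_estimators=25, seed=42):
    """Bootstrap-resample the training set and majority-vote the ELM outputs."""
    rng = np.random.default_rng(seed)
    votes = np.zeros(len(X_te))
    for i in range(n_estimators):
        idx = rng.integers(0, len(X_tr), size=len(X_tr))  # bootstrap sample
        votes += ELM(seed=i).fit(X_tr[idx], y_tr[idx]).predict(X_te)
    return (votes > n_estimators / 2).astype(int)
```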

    Human Activity Recognition Based on Deep-Temporal Learning Using Convolution Neural Networks Features and Bidirectional Gated Recurrent Unit With Features Selection

    No full text
    Recurrent neural networks (RNNs) and their variants have demonstrated tremendous success in modeling sequential data such as audio, video, time series, and text. Inspired by these results, we propose a human activity recognition technique that processes visual data using a convolutional neural network (CNN) and a bidirectional gated recurrent unit (Bi-GRU). First, we extract deep features from the frame sequences of human activity videos using the CNN and then select the most important of these features to improve performance and reduce the computational complexity of the model. Second, to learn the temporal motion across frame sequences, we design a Bi-GRU and feed it the selected deep features; the Bi-GRU learns the temporal dynamics in both the forward and backward directions at each time step. We conduct extensive experiments on the realistic human activity recognition video datasets YouTube11, HMDB51, and UCF101. Finally, we compare the obtained results with existing methods to show the competence of our proposed technique.
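    A minimal PyTorch sketch of the second stage described above, assuming per-frame CNN features have already been extracted and selected; the feature dimension, hidden size, and last-time-step readout are illustrative assumptions, not the paper's configuration:

```python
import torch
import torch.nn as nn

class BiGRUClassifier(nn.Module):
    """Classify an activity from a sequence of per-frame CNN feature vectors."""
    def __init__(self, feat_dim=512, hidden=256, n_classes=101):
        super().__init__()
        # Bidirectional GRU reads the frame sequence forward and backward.
        self.gru = nn.GRU(feat_dim, hidden, batch_first=True, bidirectional=True)
        self.head = nn.Linear(2 * hidden, n_classes)

    def forward(self, x):                # x: (batch, n_frames, feat_dim)
        out, _ = self.gru(x)             # (batch, n_frames, 2 * hidden)
        return self.head(out[:, -1])     # logits from the final time step

# e.g. a batch of 8 clips, 30 frames each, 512-dim selected CNN features
logits = BiGRUClassifier()(torch.randn(8, 30, 512))
print(logits.shape)                      # torch.Size([8, 101])
```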

    AHP-Based Systematic Approach to Analyzing and Evaluating Critical Success Factors and Practices for Component-Based Outsourcing Software Development

    No full text
    Component-based software development (CBSD) is a method for creating complicated products or systems in which multiple components are used to construct the software or product. With CBSD, a complex system or program can be created quickly and cost-effectively while maintaining excellent quality and security. This research aims to persuade outsourcing vendor companies to embrace CBSD approaches for component software development. We conducted a systematic literature review (SLR) to investigate the success factors that have a favorable impact on software outsourcing vendors' organizations, selecting 91 relevant research publications via a search string built from the study questions. This information was compiled from Google Scholar, IEEE Xplore, MDPI, the Wiley Digital Library, and Elsevier. We completed all of the SLR procedures, including formulation of the SLR protocol, initial and final data collection, retrieval, assessment, and data synthesis. The ten (10) critical success factors we identified are a well-trained and skilled team, proper component selection, use of design standards, a well-defined architecture, well-defined analysis and testing, well-defined integration, quality assurance, good organization of documentation, well-organized security, and proper certification. The SLR also yields 46 best practices for these critical success factors, which could assist vendor organizations in strengthening them for component-based outsourcing software development (CBOSD). According to our findings, the discovered success factors show both similarities and differences across periods, continents, databases, and approaches, and the recommended SLR will assist software vendor organizations in implementing the CBSD idea. Finally, we used the analytic hierarchy process (AHP) to prioritize and analyze the success factors, applying the AHP equations to construct the pairwise comparison matrix. The largest eigenvalue was 3.096 and the consistency ratio (CR) was 0.082, which is below the 0.1 threshold and therefore acceptable.
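    As a worked check of the consistency figures quoted above (the paper's pairwise comparison matrix itself is not reproduced here), the standard AHP formulas with Saaty's random index for a 3x3 matrix recover the reported CR from the reported eigenvalue:

```python
# AHP consistency check, assuming n = 3 (implied by lambda_max ~ 3.096).
n = 3
lam_max = 3.096               # reported principal eigenvalue
RI = 0.58                     # Saaty's random index for n = 3

CI = (lam_max - n) / (n - 1)  # consistency index = 0.048
CR = CI / RI                  # consistency ratio ~ 0.083 (reported as 0.082)
print(f"CI = {CI:.3f}, CR = {CR:.3f}, acceptable = {CR < 0.1}")
```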

    Prediction of Cardiovascular Disease on Self-Augmented Datasets of Heart Patients Using Multiple Machine Learning Models

    No full text
    About 26 million people worldwide experience heart failure each year, and both cardiologists and surgeons have a tough time determining when it will occur. Classification and prediction models applied to medical data allow for enhanced insight, and improved heart failure projection on a heart disease dataset is the major goal of this research. The probability of heart failure is predicted using data mined from a medical database and processed by machine learning methods; through comparative analysis, this study shows that heart disease may be predicted with high precision. Ranking the linear models by accuracy, logistic regression (82.76%) performs best, followed by GNB (79.31%), MNB (72.41%), SVM (67.24%), and KNN (60.34%). Among the ensemble learning models, RF (87.03%) is the most accurate, followed by GBC (86.21%) and ET (70.31%), with the DT model achieving the highest degree of precision. Among the hyper-tuned ensemble learning models, XGB, HGBC, and CatBoost achieve 84.48% accuracy or better, and LGBM posts the group's highest rate at 86.21%. A statistical analysis of all the algorithms found that CatBoost, random forests, and gradient boosting provided the most reliable results for predicting future heart attacks.
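    A condensed sketch of the multi-model comparison described above, assuming a tabular heart-disease CSV with a binary target column; the file name, column name, and hyperparameters are illustrative, not the study's settings:

```python
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.ensemble import RandomForestClassifier, GradientBoostingClassifier
from sklearn.metrics import accuracy_score

df = pd.read_csv("heart.csv")                    # hypothetical file name
X, y = df.drop(columns="target"), df["target"]   # 'target' assumed binary
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

models = {
    "LR":  LogisticRegression(max_iter=1000),
    "GNB": GaussianNB(),
    "KNN": KNeighborsClassifier(),
    "RF":  RandomForestClassifier(n_estimators=200, random_state=0),
    "GBC": GradientBoostingClassifier(random_state=0),
}
for name, model in models.items():
    model.fit(X_tr, y_tr)
    print(name, f"{accuracy_score(y_te, model.predict(X_te)):.4f}")
```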

    Detection of renal cell hydronephrosis in ultrasound kidney images: a study on the efficacy of deep convolutional neural networks

    No full text
    In the realm of medical imaging, the early detection of kidney issues, particularly renal cell hydronephrosis, holds immense importance. Traditionally, the identification of such conditions within ultrasound images has relied on manual analysis, a labor-intensive and error-prone process. In recent years, however, deep learning-based algorithms have paved the way for automation in this domain. This study harnesses deep learning models to autonomously detect renal cell hydronephrosis in ultrasound images taken in close proximity to the kidneys. State-of-the-art architectures, including VGG16, ResNet50, InceptionV3, and a Novel DCNN, were put to the test and subjected to rigorous comparison, with each model evaluated on F1 score, accuracy, precision, and recall. The results paint a compelling picture: the Novel DCNN model outshines its peers with an accuracy of 99.8%, while InceptionV3 achieved 90%, ResNet50 89%, and VGG16 85%. These outcomes underscore the Novel DCNN's effectiveness at detecting renal cell hydronephrosis in ultrasound images. Moreover, this study offers a detailed view of each model's performance through confusion matrices, shedding light on their abilities to categorize true positives, true negatives, false positives, and false negatives; here too the Novel DCNN exhibits remarkable proficiency, minimizing both false positives and false negatives. In conclusion, this research underscores the Novel DCNN model's strength in automating the detection of renal cell hydronephrosis in ultrasound images. With its high accuracy and minimal error rates, the model is a promising tool for healthcare professionals, facilitating early-stage diagnosis and treatment. Its convergence rate and accuracy may be further enhanced through testing on larger, more diverse datasets and investigating different optimization strategies.
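    A small sketch of the per-model evaluation described above (confusion-matrix counts plus accuracy, precision, recall, and F1), assuming true and predicted test labels are available; the label arrays here are purely illustrative:

```python
from sklearn.metrics import (accuracy_score, precision_score, recall_score,
                             f1_score, confusion_matrix)

y_true = [1, 0, 1, 1, 0, 0, 1, 0]   # hypothetical ground-truth test labels
y_pred = [1, 0, 1, 0, 0, 0, 1, 0]   # hypothetical model predictions

# sklearn's binary confusion matrix unpacks as [[TN, FP], [FN, TP]].
tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
print(f"TP={tp} TN={tn} FP={fp} FN={fn}")
print(f"accuracy={accuracy_score(y_true, y_pred):.3f}",
      f"precision={precision_score(y_true, y_pred):.3f}",
      f"recall={recall_score(y_true, y_pred):.3f}",
      f"F1={f1_score(y_true, y_pred):.3f}")
```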

    A High-Capacity Optical Metro Access Network: Efficiently Recovering Fiber Failures with Robust Switching and Centralized Optical Line Terminal

    No full text
    This study proposes and presents a new central office (CO) for the optical metro access network (OMAN) with an affordable and distinctive switching system. The CO's foundation is a novel optical multicarrier (OMC) generation technique that provides numerous frequency carriers characterized by a high tone-to-noise ratio (TNR) of 40 dB and minimal amplitude excursions, in order to accommodate multiple users on the optical network unit side of the OMAN. The OMC generation is achieved through a cascaded configuration of a single phase modulator and two Mach-Zehnder modulators, without incorporating optical or electrical amplifiers or filters. The proposed OMC is installed in the CO of the OMAN to support 1.2 Tbps downlink and 600 Gbps uplink transmission, with practical bit error rates (BER) ranging from 10^-3 to 10^-13 for the downlink and from 10^-6 to 10^-14 for the uplink. Furthermore, optical fiber failure is a major issue in the OMAN context, so we propose a solution for ensuring uninterrupted communication in various scenarios of main optical fiber failure, demonstrating how this novel CO can rapidly recover from transmission failures through robust switching and a centralized optical line terminal (OLT). The proposed system is intended to provide users with a reliable and affordable service while maintaining high-quality transmission rates.
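    For context on the BER figures quoted above, the standard Gaussian-noise relation BER = 0.5 * erfc(Q / sqrt(2)) converts them to Q-factors; this is a generic back-of-envelope check, not the paper's analysis:

```python
import numpy as np
from scipy.special import erfcinv

# Q-factor implied by a given BER under BER = 0.5 * erfc(Q / sqrt(2)).
for ber in (1e-3, 1e-6, 1e-13, 1e-14):
    q = np.sqrt(2) * erfcinv(2 * ber)
    print(f"BER = {ber:.0e}  ->  Q = {q:.2f}  ({20 * np.log10(q):.1f} dB)")
```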

    Effect of Feature Selection on the Accuracy of Music Popularity Classification Using Machine Learning Algorithms

    No full text
    This research analyzes the effect of feature selection on the accuracy of music popularity classification using machine learning algorithms. Data from Spotify, currently the most widely used music listening platform, were used in the research. In the feature selection stage, features with low correlation were removed from the dataset using the filter feature selection method. Machine learning algorithms using all features produced 95.15% accuracy, while the same algorithms using only the selected features produced 95.14% accuracy, so the selected features were sufficient for popularity classification with the established algorithms. In addition, the reduced dataset contains fewer features, so the computation time is shorter: the Big-O time complexity is lower than that of models constructed without feature selection because the number of features, the dominant parameter in that complexity, is reduced. Statistical analysis was performed on the pre-processed data, and meaningful information was extracted from it using machine learning algorithms.
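    A minimal sketch of the filter-style selection described above: drop features whose absolute correlation with the popularity target falls below a threshold. The file name, column names, and 0.1 cutoff are assumptions for illustration, not the paper's values:

```python
import pandas as pd

df = pd.read_csv("spotify_tracks.csv")   # hypothetical file name
corr = df.corr(numeric_only=True)["popularity"].drop("popularity")
selected = corr[corr.abs() >= 0.1].index.tolist()   # keep correlated features
X = df[selected]                                    # reduced feature matrix
print(f"kept {len(selected)} of {len(corr)} features:", selected)
```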

    Investigating the Effectiveness of Novel Support Vector Neural Network for Anomaly Detection in Digital Forensics Data

    Get PDF
    As criminal activity increasingly relies on digital devices, the field of digital forensics plays a vital role in identifying and investigating criminals. In this paper, we address the problem of anomaly detection in digital forensics data, with the objective of proposing an effective approach for identifying suspicious patterns and activities that could indicate criminal behavior. To achieve this, we introduce a novel method called the Novel Support Vector Neural Network (NSVNN). We evaluated the NSVNN through experiments on a real-world digital forensics dataset consisting of features related to network activity, system logs, and file metadata. In these experiments, we compared the NSVNN with several existing anomaly detection algorithms, including support vector machines (SVMs) and neural networks, measuring each algorithm's accuracy, precision, recall, and F1-score. We also provide insights into the specific features that contribute significantly to the detection of anomalies. Our results demonstrate that the NSVNN outperformed the existing algorithms in anomaly detection accuracy, and we highlight the interpretability of the NSVNN model by analyzing feature importance and its decision-making process. Overall, this research contributes to the field of digital forensics by proposing the NSVNN as a novel approach to anomaly detection. We emphasize the importance of both performance evaluation and model interpretability in this context, providing practical insights for identifying criminal behavior in digital forensics investigations.
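    The NSVNN itself is the authors' novel model and is not reproduced here; the sketch below only mirrors the kind of baseline comparison the abstract describes (an SVM versus a neural network, scored on precision, recall, and F1), using a synthetic stand-in for the forensics feature matrix:

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import classification_report

# Synthetic stand-in for the forensics features (90/10 class imbalance,
# anomalies as the minority class); dimensions are illustrative.
X, y = make_classification(n_samples=2000, n_features=20, weights=[0.9, 0.1],
                           random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

for name, clf in [("SVM", SVC()), ("MLP", MLPClassifier(max_iter=500))]:
    clf.fit(X_tr, y_tr)
    print(name)
    print(classification_report(y_te, clf.predict(X_te), digits=3))
```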