
    Ransomware Detection Using Federated Learning with Imbalanced Datasets

    Ransomware is a type of malware that encrypts user data and extorts payment in return for the decryption keys. This cyberthreat is one of the most serious challenges facing organizations today and has already caused immense financial damage. As a result, many researchers have been developing techniques to counter ransomware. Recently, the federated learning (FL) approach has also been applied to ransomware analysis, allowing corporations to achieve scalable, effective detection and attribution without having to share their private data. In reality, however, the quantity and composition of ransomware data collected across multiple FL client sites/regions vary widely. This imbalance inevitably degrades the effectiveness of any defense mechanism. To address this concern, a modified FL scheme is proposed that uses a weighted cross-entropy loss function to mitigate dataset imbalance. A detailed performance evaluation is then presented for the case of static analysis using the latest Windows-based ransomware families. The findings confirm improved ML classifier performance on a highly imbalanced dataset.
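    The weighted cross-entropy idea can be sketched as follows. This is an illustrative example only: the paper does not specify its weighting scheme here, so inverse class frequency is an assumption, and the toy benign/ransomware batch is synthetic.

```python
# Sketch: inverse-frequency class weights plugged into cross-entropy,
# one common way to realize a weighted loss for imbalanced data.
import numpy as np

def class_weights(labels, n_classes):
    """Weight each class inversely to its frequency at this FL client."""
    counts = np.bincount(labels, minlength=n_classes).astype(float)
    counts[counts == 0] = 1.0              # avoid division by zero
    return counts.sum() / (n_classes * counts)

def weighted_cross_entropy(probs, labels, weights):
    """Mean weighted cross-entropy over a batch of softmax outputs."""
    eps = 1e-12
    per_sample = -np.log(probs[np.arange(len(labels)), labels] + eps)
    return float(np.mean(weights[labels] * per_sample))

# Highly imbalanced toy batch: 9 benign (class 0), 1 ransomware (class 1).
labels = np.array([0] * 9 + [1])
w = class_weights(labels, 2)               # rare class gets a larger weight
probs = np.tile([0.9, 0.1], (10, 1))       # classifier leans toward class 0
loss = weighted_cross_entropy(probs, labels, w)
```

    Misclassifying the rare ransomware class now dominates the loss, which is the mechanism that counteracts the imbalance during FL training.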

    Improving Robustness of Deep Learning Models and Privacy-Preserving Image Denoising

    Applications of deep learning models and convolutional neural networks have rapidly increased. Although state-of-the-art CNNs provide high accuracy in many applications, recent investigations show that such networks are highly vulnerable to adversarial attacks. In a black-box adversarial attack, the attacker has no knowledge of the model or the training dataset but does have some input data and their labels. In this chapter, we propose a novel approach to generating a black-box attack in a sparse domain, where the most critical information of an image can be observed. Our investigation shows that large sparse (LaS) components play a crucial role in the performance of image classifiers. Under this presumption, to generate an adversarial example, we transform an image into a sparse domain and add noise to the LaS components. A comprehensive evaluation and analysis supporting this idea is presented in chapter one. In chapter two, we propose a new preprocessing approach that can enhance the robustness of skin lesion classification. Machine learning models based on convolutional neural networks have been widely used for automatic recognition of lesion diseases, with high accuracy compared to conventional machine learning methods. In this research, we propose a new preprocessing technique to extract the region of interest (RoI) from the skin lesion dataset. We compare the performance of state-of-the-art convolutional neural network classifiers on two datasets containing (1) raw and (2) RoI-extracted images. Our experimental results show that training CNN models on the RoI-extracted dataset improves prediction accuracy and significantly decreases the training and evaluation time of the classification task. Finally, we propose a secure and robust image denoising approach. Image denoising aims to recover the original image from its noisy measurements.
    While the quality of image denoising has improved over the years, the complexity and memory required to implement the denoising task have increased accordingly. With such advancements and the unlimited computing resources available in the cloud, the trend of offloading the image denoising task to the cloud has grown in recent years. However, it is still quite challenging to utilize cloud-based resources without compromising users' data privacy while maintaining the quality of image denoising. In this chapter, we propose a novel lossless privacy-preserving image denoising approach that protects users' privacy while preserving the quality of the denoising task. The proposed approach is suitable for computationally constrained devices, such as many IoT devices. In this method, we use two random keys to permute and perturb the noisy image patches. The cloud service provider performs the denoising task on the encrypted signal. After denoising, the output signal is still encrypted, and only the user who holds the keys can decrypt the denoised image. We evaluate the security of this method against known-plaintext, brute-force, and side-channel attacks. In addition, we theoretically prove the lossless property of the method. To verify the applicability of the approach, we ran experiments on multiple real images and used two well-known evaluation metrics to compare our results with the baseline.
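    The permute-and-perturb encryption of patches can be sketched as below. This is a minimal illustration of the two-key idea and its lossless round trip, not the authors' exact construction; the patch data and key derivation are toy assumptions.

```python
# Sketch: key 1 permutes the patch order, key 2 adds a random
# perturbation; the key holder can invert both steps exactly.
import numpy as np

rng_perm = np.random.default_rng(1)        # key 1: permutation seed
rng_pert = np.random.default_rng(2)        # key 2: perturbation seed

patches = np.arange(12, dtype=float).reshape(4, 3)   # 4 noisy patches

# --- client side: encrypt before uploading to the cloud ---
perm = rng_perm.permutation(len(patches))
pert = rng_pert.normal(size=patches.shape)
encrypted = patches[perm] + pert

# --- client side: decrypt (requires both keys) ---
inv = np.argsort(perm)                     # inverse permutation
decrypted = (encrypted - pert)[inv]

assert np.allclose(decrypted, patches)     # lossless round trip
```

    The cloud would denoise `encrypted` without ever seeing `patches`; the lossless property shown here is what lets decryption after denoising recover the denoised image exactly.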

    Improvement of Recovery in Segmentation-Based Parallel Compressive Sensing

    This paper extends the recently introduced 1-D Kronecker-based Compressive Sensing (CS) recovery technique to 2-D signals and images. Traditionally, large sensing matrices are used when compressing images with CS. Applying CS to individual columns of the image, instead of the entire image, during the sensing phase leads to smaller sensing matrices and reduced computational complexity. To achieve a further reduction in computational complexity, the column vectors are segmented into smaller-length segments and CS is applied to each segment. This segmentation process reduces the quality of the recovered signal. To enhance the quality of the recovered signal, the entire column vector is recovered using the Kronecker-based CS recovery technique. Magnetic resonance (MR) images from the NCIGT database were used to demonstrate the superiority of Kronecker-based recovery for 2-D images. Structural similarity and reconstruction error were used to compare the results of the Kronecker-based recovery technique with non-Kronecker-based repeated recovery applied to each segment individually. Kronecker-based recovery showed improvement over non-Kronecker-based individual recovery even at higher compression ratios (CR).
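    The structural identity behind joint recovery can be sketched as follows: sensing each length-m segment with the same matrix Phi is equivalent to sensing the whole column with the block-diagonal Kronecker matrix I_s ⊗ Phi, which is what allows the entire column to be recovered at once. The dimensions below are toy values, and the l1 recovery step itself is omitted.

```python
# Sketch: segment-wise sensing equals joint sensing with a Kronecker
# (block-diagonal) matrix, the basis of Kronecker-based recovery.
import numpy as np

rng = np.random.default_rng(0)
m, k, s = 8, 4, 3                      # segment length, measurements, segments
Phi = rng.standard_normal((k, m))      # per-segment sensing matrix
x = rng.standard_normal(s * m)         # one image column made of s segments

# Segment-wise sensing (what the low-complexity encoder actually does)
y_seg = np.concatenate([Phi @ x[i * m:(i + 1) * m] for i in range(s)])

# Equivalent joint view used by the Kronecker-based recovery
Phi_kron = np.kron(np.eye(s), Phi)     # (s*k) x (s*m) block-diagonal
y_joint = Phi_kron @ x

assert np.allclose(y_seg, y_joint)
```

    Because the two measurement vectors are identical, the decoder is free to solve one larger recovery problem over the whole column, recovering the quality lost to segmentation without changing the cheap segment-wise encoder.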

    Improvement of Signal Quality during Recovery of Compressively Sensed ECG Signals

    This paper investigates the application of the newly proposed Kronecker-based method to the reconstruction of compressively sensed electrocardiogram (ECG) signals. By applying the Kronecker-based method, ECG signal acquisition is done in smaller lengths. Collecting ECG signals in smaller lengths leads to fewer arithmetic operations during the compression phase. Instead of recovering individual small segments, recovery is performed over several concatenated segments. This newly proposed recovery method improves the quality of the reconstructed signal compared to the traditional recovery done without concatenation. Two random measurement matrices, namely the Gaussian and the Bernoulli matrices, are considered as sensing matrices in this study, and the methodology is evaluated using 10 ECG signals acquired from the MIT-BIH Arrhythmia Database. The random Bernoulli matrix is found to provide better quality of the recovered compressed ECG signal even with the traditional compressive recovery techniques. Recovery by the newly proposed Kronecker-based method results in a higher SNR for all the ECG signals when the compression ratio (CR) is 25% or 50%; when the CR is 75%, improvement is observed in the majority of the ECG signals. Lower CRs provide better reconstruction than higher CRs. The Kronecker-based recovery method may be useful for wearable ECG devices.
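    The two sensing matrices compared above can be sketched on a toy segment as follows. The ECG data here is synthetic (not from MIT-BIH), and the CR definition shown is one common CS convention; the paper's exact definition may differ.

```python
# Sketch: Gaussian vs. Bernoulli random sensing of an ECG-like segment.
import numpy as np

rng = np.random.default_rng(0)
n = 64                                     # samples per acquired segment
t = np.arange(n) / n
ecg = np.sin(2 * np.pi * 3 * t) + 0.1 * rng.standard_normal(n)  # toy segment

def sense(x, k, kind):
    """Compress x to k measurements with a random sensing matrix."""
    if kind == "gaussian":
        Phi = rng.standard_normal((k, len(x)))
    else:                                  # Bernoulli: random +/-1 entries
        Phi = rng.choice([-1.0, 1.0], size=(k, len(x)))
    return Phi @ x

k = n // 2                                 # keep half the measurements
y_gauss = sense(ecg, k, "gaussian")
y_bern = sense(ecg, k, "bernoulli")
cr = 1 - k / n                             # compression ratio, here 50%
```

    A Bernoulli matrix is attractive for wearable devices because its +/-1 entries reduce multiplications to additions and subtractions, which is consistent with the paper's finding that it performs well in this setting.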
