11 research outputs found

    Toward better data veracity in mobile cloud computing: A context-aware and incentive-based reputation mechanism

    This is the author accepted manuscript; the final version is available from the publisher via the DOI in this record. As a promising next-generation computing paradigm, Mobile Cloud Computing (MCC) enables the large-scale collection and big-data processing of personal private data. An important but often overlooked V of big data is data veracity, which ensures that the data used are trusted, authentic, accurate and protected from unauthorized access and modification. To realize data veracity in MCC, specific trust models and approaches must be developed. In this paper, a Category-based Context-aware and Recommendation incentive-based reputation Mechanism (CCRM) is proposed to defend against internal attacks and enhance data veracity in MCC. In the CCRM, innovative methods, including a data category and context sensing technology, a security relevance evaluation model, and a Vickrey-Clarke-Groves (VCG)-based recommendation incentive scheme, are integrated into the process of reputation evaluation. Cost analysis indicates that the CCRM has linear communication and computation complexity. Simulation results demonstrate the superior performance of the CCRM compared to existing reputation mechanisms under internal collusion attacks and bad-mouthing attacks. This work is supported by the National Natural Science Foundation of China (61363068, 61472083, 61671360), the Pilot Project of Fujian Province (formal industry key project) (2016Y0031), the Foundation of Science and Technology on Information Assurance Laboratory (KJ-14-109) and the Fujian Provincial Key Laboratory of Network Security and Cryptology Research Fund.
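    The abstract does not spell out the CCRM's VCG payment rule, but the general mechanism it references can be sketched: the requester selects the k most valuable recommendation reports, and each selected recommender is paid the externality it imposes on the others, which makes truthful reporting a dominant strategy. The sketch below is a minimal, generic VCG allocation (the function names and the top-k allocation rule are illustrative assumptions, not the CCRM's actual scheme):

    ```python
    def social_welfare(subset, value):
        # total reported value of a set of recommenders
        return sum(value[i] for i in subset)

    def vcg(agents, value, k):
        """Select the k highest-value reports and compute VCG payments.

        payment[i] = (best welfare of the others if i were absent)
                   - (welfare the others actually obtain with i present)
        """
        chosen = sorted(agents, key=lambda i: value[i], reverse=True)[:k]
        payments = {}
        for i in chosen:
            others = [a for a in agents if a != i]
            best_without = sorted(others, key=lambda a: value[a], reverse=True)[:k]
            w_without = social_welfare(best_without, value)
            w_with = social_welfare([a for a in chosen if a != i], value)
            payments[i] = w_without - w_with
        return chosen, payments
    ```

    With k = 1 this reduces to a second-price (Vickrey) auction: the single selected recommender is paid the second-highest reported value.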

    Weight Based Deduplication for Minimizing Data Replication in Public Cloud Storage

    Optimizing data replication in public cloud storage when targeting multiple instances is a challenging issue in text data processing. The amount of digital data has been increasing exponentially, so there is a need to reduce storage space by storing data efficiently. In a cloud storage environment, data replication provides high availability with fault tolerance. An effective weight-based deduplication approach is proposed at the target level in order to reduce unwanted storage space in the cloud; target-level deduplication consumes less processing power than source-level deduplication. Storage space can be efficiently utilized by removing unpopular files from the secondary servers. Multiple input text documents are stored in the Dropbox cloud. The top text features are detected using Term Frequency (TF) and Named Entity Recognition (NER) and stored in a text database. After the top features are stored, fresh text documents are collected to find the popular and unpopular files and thereby optimize the existing text corpus in cloud storage. Top text features of the freshly collected documents are detected using TF and NER; after duplicate features are removed, these unique features are compared with the features stored in the database, and the relevant text documents are listed. For the listed documents, document frequency, document weight and a threshold factor are computed. Depending on the average threshold value, files are classified as popular or unpopular. Popular files are retained on all storage nodes to achieve full data availability, while unpopular files are removed from all secondary servers, leaving only the primary server's copy. Before deduplication, the storage space occupied in the Dropbox cloud is 8.09 MB; after deduplication, with unpopular files removed from secondary storage nodes, the occupied space is optimized to 4.82 MB. In total, data replication is minimized and 45.60% of cloud storage space is efficiently saved by applying the weight-based deduplication system.
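    The popularity test described above (feature overlap, document weight, and an average threshold) can be sketched in a few lines. Plain whitespace term frequency stands in for the full TF + NER pipeline, and all names are illustrative rather than the paper's implementation:

    ```python
    from collections import Counter

    def top_tf_features(text, k=5):
        # top-k terms by term frequency (a stand-in for TF + NER extraction)
        counts = Counter(text.lower().split())
        return {term for term, _ in counts.most_common(k)}

    def classify_popularity(corpus_features, fresh_docs, k=5):
        """corpus_features: {doc_name: set of stored top features}.
        Document weight = overlap between a stored doc's features and the
        features of freshly collected documents; files at or above the
        average weight (the threshold) are popular, the rest unpopular."""
        fresh_features = set()
        for doc in fresh_docs:
            fresh_features |= top_tf_features(doc, k)
        weights = {name: len(feats & fresh_features)
                   for name, feats in corpus_features.items()}
        threshold = sum(weights.values()) / len(weights)  # average threshold
        popular = {n for n, w in weights.items() if w >= threshold}
        unpopular = set(weights) - popular
        return popular, unpopular
    ```

    Unpopular files would then be deleted from every secondary storage node while the primary server keeps a copy, which is where the storage saving comes from.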

    Heterogeneous Data Storage Management with Deduplication in Cloud Computing

    Unveiling the core of IoT: comprehensive review on data security challenges and mitigation strategies

    The Internet of Things (IoT) is a collection of devices such as sensors for collecting data, actuators that perform mechanical actions based on the sensors' collected data, and gateways used as an interface for effective communication with the external world. The IoT has been successfully applied to various fields, from small households to large industries. The IoT environment consists of heterogeneous networks and billions of devices that increase daily, making the system more complex, so the privacy and security of IoT devices have become a major concern. The critical components of IoT are device identification, a large number of sensors, hardware operating systems, and IoT semantics and services. This paper presents the layers of a core IoT application together with the protocols used in each layer. The review unveils the security challenges at the various IoT layers along with existing mitigation strategies, such as machine learning, deep learning, lightweight encryption techniques, and Intrusion Detection Systems (IDS), to overcome these challenges, and discusses future scope. The intensive review concludes that spoofing and Distributed Denial of Service (DDoS) attacks are two of the most common attacks in IoT applications: spoofing tricks systems by impersonating devices, while DDoS attacks flood IoT systems with traffic. IoT security is also compromised by other attacks, such as botnet and man-in-the-middle attacks, which call for strong defenses including IDS frameworks, deep neural networks, and multi-factor authentication systems.
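    As a concrete illustration of one mitigation the review mentions, a minimal rate-based intrusion detector can flag a DDoS-style flood by counting requests per source inside a sliding time window. This is a toy sketch under assumed names and thresholds, not a framework from the paper:

    ```python
    from collections import defaultdict, deque

    class FloodDetector:
        """Toy rate-based IDS: flag a source whose request count inside a
        sliding window exceeds a fixed threshold (illustrative only)."""

        def __init__(self, window=10.0, max_requests=100):
            self.window = window            # window length in seconds
            self.max_requests = max_requests
            self.events = defaultdict(deque)  # per-source event timestamps

        def observe(self, src, now):
            # record the event, drop timestamps that fell out of the window
            q = self.events[src]
            q.append(now)
            while q and now - q[0] > self.window:
                q.popleft()
            return len(q) > self.max_requests  # True -> suspected flood
    ```

    A real IoT IDS would combine such volumetric signals with the learned models (deep neural networks, anomaly detectors) the review surveys.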

    Optimum parameter machine learning classification and prediction of Internet of Things (IoT) malwares using static malware analysis techniques

    Applying machine learning to malware analysis is not a new concept; much research has been done on malware classification in Android and Windows environments. However, malware analysis in the Internet of Things (IoT) still requires work. IoT was not designed with security and privacy in mind, so this area is full of research challenges. This study evaluates important machine learning classifiers such as Support Vector Machines, Neural Networks, Random Forests, Decision Trees, Naive Bayes, and Bayesian Networks, and proposes a framework that uses static feature extraction and selection processes to highlight issues such as over-fitting and the generalization of classifiers, in order to obtain an optimized algorithm with better performance. For the background study, we used a systematic literature review to find research gaps in IoT, presented malware as a big challenge for IoT along with the reasons for applying malware analysis to IoT devices, and finally performed classification on a malware dataset. The classification process was applied to three different datasets containing file headers, program headers and section headers as features. Preliminary results show accuracy of over 90% on file headers, program headers, and section headers. This document discusses these results as initial results only; several issues still need to be addressed that may affect the performance measures.
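    As a rough illustration of classifying malware from numeric header features, the sketch below trains a nearest-centroid classifier and reports accuracy. It stands in for the heavier classifiers the study evaluates (Random Forest, SVM, etc.); the feature rows, labels, and function names are hypothetical:

    ```python
    from collections import defaultdict

    def fit_centroids(X, y):
        """Compute one centroid per class over numeric header features
        (a lightweight stand-in for the classifiers the study compares)."""
        groups = defaultdict(list)
        for row, label in zip(X, y):
            groups[label].append(row)
        return {label: [sum(col) / len(rows) for col in zip(*rows)]
                for label, rows in groups.items()}

    def predict(centroids, row):
        # assign the class whose centroid is nearest in squared distance
        def dist2(a, b):
            return sum((x - y) ** 2 for x, y in zip(a, b))
        return min(centroids, key=lambda lbl: dist2(centroids[lbl], row))

    def accuracy(centroids, X, y):
        # measure on held-out rows to expose the over-fitting the study flags
        hits = sum(predict(centroids, row) == lbl for row, lbl in zip(X, y))
        return hits / len(y)
    ```

    In practice the feature rows would come from parsed file, program, and section headers, and accuracy would be measured on a held-out split rather than the training data.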