3 research outputs found

    Cryptanalytic Attacks on IDEA Block Cipher

    The International Data Encryption Algorithm (IDEA) is a secret-key, or symmetric-key, block cipher. IDEA was designed to replace the Data Encryption Standard (DES) cipher, which had become practically insecure owing to its small 56-bit key size and the growth in computational power. IDEA mainly provides data confidentiality in a variety of commercial and financial applications, e.g. the Pretty Good Privacy (PGP) protocol. As of 2015, no successful linear or algebraic weaknesses in IDEA have been reported. In this paper, the author explains the IDEA cipher and its application in PGP, and presents a systematic survey of the attacks attempted on IDEA. The best cryptanalytic result that applies to all keys breaks IDEA up to 6 of the 8.5 rounds of the full cipher [1], but this reduced-round attack requires 2^64 known plaintexts and 2^126.8 operations. These mammoth data and time requirements make the attack practically infeasible, so the IDEA cipher remains secure for practical use. PGP v2.0 uses IDEA, in place of BassOmatic, which was found to be insecure, to provide data confidentiality.
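    IDEA's resistance to linear and algebraic attacks comes from mixing three mutually incompatible 16-bit group operations: XOR, addition modulo 2^16, and multiplication modulo 2^16 + 1 (with 0 encoding 2^16). A minimal sketch of those primitives (not the full cipher; function names are illustrative, not from the paper):

    ```python
    MOD_ADD = 1 << 16          # addition is taken modulo 2^16
    MOD_MUL = (1 << 16) + 1    # multiplication modulo the prime 2^16 + 1

    def idea_add(a: int, b: int) -> int:
        """Addition of two 16-bit words modulo 2^16."""
        return (a + b) % MOD_ADD

    def idea_mul(a: int, b: int) -> int:
        """Multiplication modulo 2^16 + 1, where the word 0 encodes 2^16."""
        a = a or MOD_ADD   # decode 0 as 2^16
        b = b or MOD_ADD
        return (a * b) % MOD_MUL % MOD_ADD  # re-encode a result of 2^16 as 0

    # XOR needs no helper: it is the native 16-bit `a ^ b`.
    ```

    Because 2^16 + 1 is prime, every encoded word has a multiplicative inverse, which is what makes the operation invertible for decryption.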

    PF-FELM: A Robust PCA Feature Selection for Fuzzy Extreme Learning Machine

    No full text

    Improved classification of large imbalanced data sets using rationalized technique: Updated Class Purity Maximization Over_Sampling Technique (UCPMOT)

    No full text
    Abstract The huge variety of NoSQL Big Data has created a need for new ways to store, process and analyze it. The quantum of data created is enormous, with mixed and often unknown veracity and a need for creative visualization. New frameworks help find substantial, previously unidentified value in massive data sets, and they add an exceptional dimension to the pre-processing and contextual conversion of data sets for analysis. In addition, the handling of challenging imbalanced data sets has become a cause for alarm: traditional classifiers cannot address the precise grouping needs of such data sets. Over-sampling the minority classes helps improve performance. The Updated Class Purity Maximization Over_Sampling Technique (UCPMOT) is a rationalized technique proposed to handle imbalanced data sets using exclusive safe-level-based synthetic sample creation. It addresses the multi-class problem in alignment with a newly introduced method, namely lowest-versus-highest. The proposed technique is evaluated on several data sets from the UCI repository. The underlying MapReduce environment provides distributed processing on the Apache Hadoop framework. Several classifiers validate the classification results using parameters such as F-measure and AUC values. The experimental conclusions show the dominance of UCPMOT over the benchmarking techniques.
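    The abstract does not spell out UCPMOT's safe-level rule, but safe-level-based over-sampling generally means interpolating a synthetic minority sample between an instance and a same-class neighbor, with the interpolation distance capped by a safety score. A hedged SMOTE-style sketch under that assumption (`safe_level` and the function name are hypothetical stand-ins, not the paper's API):

    ```python
    import random

    def synthetic_sample(x, neighbor, safe_level=1.0):
        """Create one synthetic minority sample by interpolating between a
        minority instance `x` and a same-class `neighbor` (SMOTE-style).
        `safe_level` in (0, 1] caps how far toward the neighbor the new
        point may move; an illustrative stand-in for UCPMOT's
        safe-level-based creation rule, which the abstract leaves unstated."""
        gap = random.uniform(0.0, safe_level)  # one gap shared by all features
        return [xi + gap * (ni - xi) for xi, ni in zip(x, neighbor)]
    ```

    Repeating this for each minority instance grows the minority class until the class distribution is balanced enough for a standard classifier to train on.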