
    An Enhanced Boyer-Moore Algorithm for Worst-Case Running Time

    This article addresses the exact string matching problem, which consists of finding all occurrences of a given pattern in a text. It is an extensively studied problem in computer science, largely because of its wide range of applications, such as cluster computing, image and signal processing, speech analysis and recognition, information retrieval, data compression, computational biology, intrusion detection and virus scanning. Several new algorithms have been proposed in the last decade. In this paper we compare existing improved variants of the Boyer-Moore algorithm with our enhanced Boyer-Moore algorithm, both practically and theoretically. The enhanced algorithm not only generates the largest shift distance but also produces the minimum number of shifting steps and character comparisons, thereby reducing both during the searching process. Moreover, the results show that the efficiency of this enhanced Boyer-Moore algorithm is higher than that of previous improved Boyer-Moore algorithms, and its worst-case time complexity is lower than that of the original BM algorithm. Our enhanced algorithm is about 16% faster than the previous improved Boyer-Moore algorithm when executed on the CPU. This enhanced Boyer-Moore algorithm can play an important role in extremely fast searching of genetic molecular data and complex sequence patterns in DNA database alignment
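As background for the comparisons described above, the classic Boyer-Moore idea can be sketched with the bad-character rule alone. This is a minimal illustration of the base technique, not the authors' enhanced variant:

```python
def boyer_moore_search(text, pattern):
    """Find all occurrences of pattern in text using the bad-character rule."""
    m, n = len(pattern), len(text)
    if m == 0 or m > n:
        return []
    # Rightmost index of each character in the pattern.
    last = {c: i for i, c in enumerate(pattern)}
    matches = []
    s = 0  # current alignment of the pattern against the text
    while s <= n - m:
        j = m - 1
        # Compare right-to-left, as Boyer-Moore does.
        while j >= 0 and pattern[j] == text[s + j]:
            j -= 1
        if j < 0:
            matches.append(s)
            s += 1
        else:
            # Shift so the mismatched text character aligns with its
            # rightmost occurrence in the pattern (or skip past it entirely).
            s += max(1, j - last.get(text[s + j], -1))
    return matches
```

Enhanced variants such as the one in this paper aim to enlarge these shift distances further, which is what reduces the comparison and shift counts.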

    Data Privacy for Big Data Publishing Using Newly Enhanced PASS Data Mining Mechanism

    Anonymization is one of the main techniques used in recent times to prevent privacy breaches on published data; one such technique is k-anonymization. k-anonymization is a parametric anonymization technique whose aim is to generalize the tuples in such a way that they cannot be identified using quasi-identifiers. In the past few years, we saw a tremendous growth in data that ultimately led to the concept of big data. This growth made anonymization using conventional processing methods inefficient. To make anonymization more efficient, we used the proposed PASS mechanism in the Hadoop framework to reduce the processing time of anonymization. In this work, we have divided the whole program into map and reduce parts. Moreover, the data types used in Hadoop provide better serialization and transport of data. We performed our experiments on a large dataset, and the results demonstrate the efficiency of our implementation
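The map/reduce split described above can be illustrated in plain Python. This is a hedged sketch of a generic k-anonymity suppression job, not the actual PASS mechanism; the quasi-identifier field names are hypothetical:

```python
from collections import defaultdict

# Hypothetical quasi-identifier columns (already generalized upstream).
QUASI_IDS = ("age_range", "zip_prefix")

def map_phase(record):
    """Mapper analogue: emit (quasi-identifier key, record) pairs."""
    yield tuple(record[q] for q in QUASI_IDS), record

def reduce_phase(key, records, k):
    """Reducer analogue: suppress equivalence classes smaller than k."""
    return list(records) if len(records) >= k else []

def anonymize(dataset, k=2):
    """Shuffle/sort analogue: group mapper output by key, then reduce."""
    groups = defaultdict(list)
    for record in dataset:
        for key, value in map_phase(record):
            groups[key].append(value)
    out = []
    for key, recs in groups.items():
        out.extend(reduce_phase(key, recs, k))
    return out
```

In a real Hadoop job, the grouping step is performed by the framework's shuffle phase and the records would use Hadoop's Writable types for serialization.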

    Improved k-Anonymize and l-Diverse Approach for Privacy Preserving Big Data Publishing Using MPSEC Dataset

    Data exposure and privacy violations may happen when data is exchanged between organizations. Data anonymization gives promising results for limiting such dangers. In order to maintain privacy, different methods of k-anonymization and l-diversity have been widely used, but for larger datasets the results are not very promising. The main problem with existing anonymization algorithms is high information loss and high running time. To overcome this problem, this paper proposes new models, namely Improved k-Anonymization (IKA) and Improved l-Diversity (ILD). The IKA model takes a large k-value using both symmetric and asymmetric anonymizing algorithms, and is further categorized into Improved Symmetric k-Anonymization (ISKA) and Improved Asymmetric k-Anonymization (IAKA). After anonymizing data using IKA, the ILD model is used to increase privacy: ILD makes the data more diverse, thereby increasing privacy. This paper presents the implementation of the proposed IKA and ILD models using a real-time big candidate election dataset acquired from the Madhya Pradesh State Election Commission, India (MPSEC), along with Apache Storm. It also compares the proposed models with existing algorithms, i.e. Fast clustering-based Anonymization for Data Streams (FADS), Fast Anonymization for Data Stream (FAST), Map Reduce Anonymization (MRA) and Scalable k-Anonymization (SKA). The experimental results show that the proposed IKA and ILD models achieve remarkable improvement in information loss and significantly enhanced running-time performance over the existing approaches, while maintaining the privacy-utility trade-off
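The two guarantees these models target can be stated concretely with a small checker. This is an illustrative sketch of the k-anonymity and l-diversity definitions themselves, not the authors' IKA/ILD algorithms, and the field names are hypothetical:

```python
from collections import defaultdict

def equivalence_classes(records, quasi_ids):
    """Group records by their (generalized) quasi-identifier values."""
    groups = defaultdict(list)
    for r in records:
        groups[tuple(r[q] for q in quasi_ids)].append(r)
    return groups

def is_k_anonymous(records, quasi_ids, k):
    """k-anonymity: every equivalence class contains at least k records."""
    return all(len(g) >= k
               for g in equivalence_classes(records, quasi_ids).values())

def is_l_diverse(records, quasi_ids, sensitive, l):
    """l-diversity: every class has at least l distinct sensitive values."""
    return all(len({r[sensitive] for r in g}) >= l
               for g in equivalence_classes(records, quasi_ids).values())
```

A dataset can be k-anonymous yet fail l-diversity when a class shares one sensitive value, which is why ILD is applied after IKA.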

    Comparison of pipelined IEEE-754 standard floating point multiplier with unpipelined multiplier

    The IEEE-754 standard floating point multiplier, which provides highly precise computations to achieve high throughput and low area on the IC, has been improved by the insertion of a pipelining technique. A floating point multiplier using pipelining has been simulated and analyzed, and its superiority over traditional designs is discussed. To achieve pipelining, one must subdivide the input process into a sequence of subtasks, each of which can be executed by a specialized hardware stage that operates concurrently with the other stages in the pipeline, without the need for extra computing units. Detailed synthesis and simulation reports produced with Xilinx ISE 5.2i and ModelSim software are given. The hardware design is implemented on Virtex FPGA chips
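The stage-by-stage decomposition that pipelining exploits can be mimicked in software. The sketch below splits a simplified IEEE-754 single-precision multiply into unpack, sign/exponent, significand-multiply, normalize, and repack steps; it is an illustration only, ignoring rounding, subnormals, infinities and NaNs, which real hardware must handle:

```python
import struct

def unpack(x):
    """Stage 1: unpack an IEEE-754 single into sign, exponent, significand."""
    bits = struct.unpack(">I", struct.pack(">f", x))[0]
    sign = bits >> 31
    exp = (bits >> 23) & 0xFF
    frac = bits & 0x7FFFFF
    return sign, exp, frac | (1 << 23)  # restore implicit leading 1 (normals)

def fp_mul(a, b):
    """Multiply two normal floats stage by stage (truncation, no rounding)."""
    sa, ea, ma = unpack(a)
    sb, eb, mb = unpack(b)
    sign = sa ^ sb                  # Stage 2: sign XOR and exponent add
    exp = ea + eb - 127             # remove the doubled bias
    prod = ma * mb                  # Stage 3: 24x24-bit significand multiply
    if prod & (1 << 47):            # Stage 4: normalize (product in [2, 4))
        prod >>= 1
        exp += 1
    frac = (prod >> 23) & 0x7FFFFF  # Stage 5: truncate and repack
    bits = (sign << 31) | (exp << 23) | frac
    return struct.unpack(">f", struct.pack(">I", bits))[0]
```

In a pipelined design each stage becomes dedicated hardware, so a new operand pair can enter every cycle while earlier pairs move through later stages.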

    Energy and performance improvement using real-time DVFS for graph traversal on GPU


    Hardware for Calculation of SIN and COSINE Angle using CORDIC Algorithm

    Trigonometric functions have a wide variety of applications in real life. In particular, SIN and COSINE waves have been very useful in medical science, signal processing, geology, electronic communication, thermal analysis and many more areas. Real-life applications require calculation capabilities that are as fast as possible. Hardware, due to its hardwired design, provides high-speed calculation for such applications. This paper presents a hardware design that calculates the SIN and COSINE values of a given angle using the COordinate Rotation DIgital Computer (CORDIC) algorithm
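In rotation mode, CORDIC starts from a pre-scaled vector and applies only shift-and-add micro-rotations, which is what makes it attractive in hardware. A floating-point sketch of the algorithm follows; an actual hardware design would use fixed-point arithmetic, barrel shifts instead of multiplies by 2^-i, and a small stored arctangent table:

```python
import math

def cordic_sin_cos(angle, iterations=32):
    """Return (sin, cos) of angle in radians (|angle| <= pi/2), rotation mode."""
    # Arctangent table: the fixed micro-rotation angles atan(2^-i).
    atan_table = [math.atan(2.0 ** -i) for i in range(iterations)]
    # Aggregate gain compensation factor K = prod 1/sqrt(1 + 2^-2i).
    k = 1.0
    for i in range(iterations):
        k /= math.sqrt(1.0 + 2.0 ** (-2 * i))
    x, y, z = k, 0.0, angle  # start from (K, 0) so the CORDIC gain cancels
    for i in range(iterations):
        d = 1.0 if z >= 0 else -1.0          # rotate toward z = 0
        x, y = x - d * y * 2.0 ** -i, y + d * x * 2.0 ** -i
        z -= d * atan_table[i]
    return y, x  # (sin, cos)
```

Each iteration needs only shifts, adds, and one table lookup, so the datapath maps directly onto the kind of hardwired design the paper describes.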

    Comparison of pipelined IEEE-754 standard floating point adder with unpipelined adder

    Many Digital Signal Processing (DSP) algorithms use floating-point arithmetic, which requires millions of calculations per second to be performed. For such stringent requirements, the design of fast, precise and efficient circuits is the goal of every VLSI designer. This paper presents a comparison of a pipelined floating-point adder compliant with the IEEE 754 format against an unpipelined adder, also compliant with IEEE 754, and describes the IEEE floating-point standard 754. A pipelined floating point adder based on the IEEE 754 format is developed, the design is compared with that of an unpipelined floating point adder, and a rigorous analysis is done for speed, area, and power considerations. The functional partitioning of the adder into four distinct stages operates simultaneously on different serial input data streams. This not only increases the speed but is also energy efficient, at the cost of a slight increase in chip area. The basic methodology and approach used for the VHDL (VHSIC Hardware Description Language) implementation of the floating-point adder are also described. A detailed synthesis report produced with Xilinx ISE 5.2i software and ModelSim is given. The hardware design is implemented on a Spartan IIE FPGA chip
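The four-stage partitioning mentioned above (exponent compare/align, significand add, normalize, repack) can likewise be sketched in software. This illustrative sketch handles only positive normal operands and truncates instead of performing IEEE round-to-nearest-even:

```python
import struct

def _fields(x):
    """Unpack a positive IEEE-754 single into (exponent, significand)."""
    bits = struct.unpack(">I", struct.pack(">f", x))[0]
    return (bits >> 23) & 0xFF, (bits & 0x7FFFFF) | (1 << 23)

def fp_add(a, b):
    """Add two positive normal floats stage by stage (truncation only)."""
    (ea, ma), (eb, mb) = _fields(a), _fields(b)
    # Stage 1: compare exponents and align the smaller significand.
    if ea < eb:
        (ea, ma), (eb, mb) = (eb, mb), (ea, ma)
    mb >>= (ea - eb)
    # Stage 2: add the aligned significands.
    m = ma + mb
    # Stage 3: normalize a carry out of the 24-bit field.
    if m & (1 << 24):
        m >>= 1
        ea += 1
    # Stage 4: drop the implicit bit and repack.
    bits = (ea << 23) | (m & 0x7FFFFF)
    return struct.unpack(">f", struct.pack(">I", bits))[0]
```

Because each stage reads only the previous stage's output, the four stages can hold four different operand pairs at once, which is the source of the pipelined adder's throughput gain.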

    Fast parallel PageRank technique for detecting spam web pages


    Quantum principal component analysis based on the dynamic selection of eigenstates

    Quantum principal component analysis is a dimensionality reduction method for selecting the significant features of a dataset. A classical method finds the solution in polynomial time, but when the dimension of the feature space scales exponentially, it is inefficient to compute the matrix exponentiation of the covariance matrix. The quantum method uses density matrix exponentiation to find principal components with exponential speedup. We enhance the existing algorithm, which applies amplitude amplification using range-based static selection of eigenstates on the output of phase estimation, by proposing an equivalent quantum method of the same complexity that uses dynamic selection of eigenstates. Our algorithm can efficiently find the phases of equally likely eigenvalues based on similarity scores, and it obtains the principal components associated with highly probable larger eigenvalues. We analyze these methods on various factors to justify that the resulting complexity of the proposed method is effective among quantum counterparts
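The classical counterpart of this selection idea, keeping only the eigenvectors whose eigenvalues dominate the spectrum, can be sketched with an ordinary eigendecomposition. The cumulative-variance threshold below is an illustrative stand-in for dynamic selection, not the paper's similarity-score criterion:

```python
import numpy as np

def top_principal_components(data, threshold=0.95):
    """Keep eigenvectors whose eigenvalues explain a cumulative
    variance fraction of at least `threshold` (illustrative criterion)."""
    centered = data - data.mean(axis=0)
    cov = np.cov(centered, rowvar=False)
    eigvals, eigvecs = np.linalg.eigh(cov)   # eigh returns ascending order
    order = np.argsort(eigvals)[::-1]        # re-sort descending
    eigvals, eigvecs = eigvals[order], eigvecs[:, order]
    ratios = np.cumsum(eigvals) / eigvals.sum()
    n_keep = int(np.searchsorted(ratios, threshold)) + 1
    return eigvals[:n_keep], eigvecs[:, :n_keep]
```

The quantum algorithm aims at the same outcome, isolating the eigenstates of the density matrix with the largest eigenvalues, but without ever materializing the covariance matrix explicitly.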