
    Balancing clusters to reduce response time variability in large scale image search

    Many algorithms for approximate nearest neighbor search in high-dimensional spaces partition the data into clusters. At query time, in order to avoid exhaustive search, an index selects the few (or a single) clusters nearest to the query point. Clusters are often produced by the well-known k-means approach since it has several desirable properties. On the downside, it tends to produce clusters having quite different cardinalities. Imbalanced clusters negatively impact both the variance and the expectation of query response times. This paper proposes to modify k-means centroids to produce clusters with more comparable sizes without sacrificing the desirable properties. Experiments with a large scale collection of image descriptors show that our algorithm significantly reduces the variance of response times without seriously impacting the search quality.
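    The abstract does not spell out the centroid modification itself; as a rough illustration of the balancing idea, the Python sketch below caps cluster cardinalities during assignment. The greedy ordering and the `max_size` cap are assumptions for illustration, not the paper's method.

```python
import numpy as np

def balanced_assign(X, centroids, max_size):
    """Greedy size-capped assignment: each point goes to the nearest
    centroid whose cluster is not yet full.  A balancing heuristic for
    illustration only, not the paper's centroid modification."""
    k = len(centroids)
    assert max_size * k >= len(X), "cap must leave room for every point"
    # distance of every point to every centroid, shape (n, k)
    d = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
    order = np.argsort(d.min(axis=1))        # most confident points first
    sizes = np.zeros(k, dtype=int)
    labels = np.empty(len(X), dtype=int)
    for i in order:
        for c in np.argsort(d[i]):           # try clusters nearest-first
            if sizes[c] < max_size:
                labels[i] = c
                sizes[c] += 1
                break
    return labels
```

    Setting `max_size` to, say, 1.1 times the average cluster size keeps cardinalities within 10% of each other, which directly bounds the per-cluster scan time at query time.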

    Data Structures in Classical and Quantum Computing

    This survey summarizes several results about quantum computing related to (mostly static) data structures. First, we describe classical data structures for the set membership and the predecessor search problems: Perfect Hash tables for set membership by Fredman, Komlós and Szemerédi and a data structure by Beame and Fich for predecessor search. We also prove results about their space complexity (how many bits are required) and time complexity (how many bits have to be read to answer a query). After that, we turn our attention to classical data structures with quantum access. In the quantum access model, data is stored in classical bits, but they can be accessed in a quantum way: We may read several bits in superposition for unit cost. We give proofs for lower bounds in this setting that show that the classical data structures from the first section are, in some sense, asymptotically optimal - even in the quantum model. In fact, these proofs are simpler and give stronger results than previous proofs for the classical model of computation. The lower bound for set membership was proved by Radhakrishnan, Sen and Venkatesh and the result for the predecessor problem by Sen and Venkatesh. Finally, we examine fully quantum data structures. Instead of encoding the data in classical bits, we now encode it in qubits. We allow any unitary operation or measurement in order to answer queries. We describe one data structure by de Wolf for the set membership problem and also a general framework using fully quantum data structures in quantum walks by Jeffery, Kothari and Magniez.
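    As a concrete reference point for the classical part of the survey, here is a minimal Python sketch of the two-level FKS scheme for static set membership. Integer keys below the modulus `p` are assumed, and the top-level retry that FKS uses to bound total space is omitted; the survey's complexity measures count bit probes, which this sketch does not model.

```python
import random

class FKSTable:
    """Two-level (FKS-style) perfect hash table for a static key set:
    level one hashes keys into n buckets; a bucket holding b keys gets
    its own table of size b**2, rehashed until it is collision-free."""

    def __init__(self, keys, p=2_147_483_647):   # prime larger than any key
        self.p = p
        self.n = max(len(keys), 1)
        self.a = random.randrange(1, p)
        buckets = [[] for _ in range(self.n)]
        for k in keys:
            buckets[(self.a * k) % p % self.n].append(k)
        self.tables = []
        for bucket in buckets:
            m = max(len(bucket) ** 2, 1)
            while True:                          # expected O(1) retries
                a2 = random.randrange(1, p)
                slots = [None] * m
                for k in bucket:
                    j = (a2 * k) % p % m
                    if slots[j] is not None:     # collision: rehash bucket
                        break
                    slots[j] = k
                else:
                    self.tables.append((a2, slots))
                    break

    def __contains__(self, k):
        a2, slots = self.tables[(self.a * k) % self.p % self.n]
        return slots[(a2 * k) % self.p % len(slots)] == k
```

    For example, `7 in FKSTable([3, 7, 42])` is answered with a constant number of table probes once construction succeeds, which is the regime in which the survey's lower bounds apply.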

    Hardware acceleration for power efficient deep packet inspection

    The rapid growth of the Internet leads to a massive spread of malicious attacks like viruses and malware, making the safety of online activity a major concern. The use of Network Intrusion Detection Systems (NIDS) is an effective method to safeguard the Internet. One key procedure in NIDS is Deep Packet Inspection (DPI). DPI can examine the contents of a packet and take actions on the packets based on predefined rules. In this thesis, DPI is mainly discussed in the context of security applications. However, DPI can also be used for bandwidth management and network surveillance. DPI inspects the whole packet payload, and due to this and the complexity of the inspection rules, DPI algorithms consume significant amounts of resources including time, memory and energy. The aim of this thesis is to design hardware accelerated methods for memory and energy efficient high-speed DPI. The patterns in packet payloads, especially complex patterns, can be efficiently represented by regular expressions, which can be translated into Deterministic Finite Automata (DFA). DFA algorithms are fast but consume very large amounts of memory with certain kinds of regular expressions. In this thesis, memory efficient algorithms are proposed based on transition compression of the DFAs. In this work, Bloom filters are used to implement DPI on an FPGA for hardware acceleration with the design of a parallel architecture. Furthermore, aiming at a balance of power and performance, an energy efficient adaptive Bloom filter is designed with the capability of adjusting the number of active hash functions according to the current workload. In addition, a method is given for implementation on both two-stage and multi-stage platforms. Nevertheless, false positive rates still prevent the Bloom filter from extensive utilization; a cache-based counting Bloom filter is presented in this work to eliminate false positives for fast and precise matching. Finally, in future work, in order to estimate the effect of power savings, models will be built for routers and DPI, which will also analyze the latency impact of dynamic frequency adaptation to current traffic. Besides, a low power DPI system will be designed with a single or multiple DPI engines. Results and evaluation of the low power DPI model and system will be produced in future work.
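    A minimal software sketch of the adaptive idea: insertions always set the bits for the maximum number of hash functions, while queries probe only the currently active subset, so lowering the active count under heavy load saves work at the cost of a higher false positive rate (never false negatives). The SHA-256 hashing and the halving policy are assumptions for illustration, not the thesis design.

```python
import hashlib

class AdaptiveBloomFilter:
    def __init__(self, m_bits=1 << 16, k_max=8):
        self.m, self.k_max = m_bits, k_max
        self.k_active = k_max                 # hash functions probed on query
        self.bits = bytearray(m_bits // 8)

    def _indexes(self, item, k):
        for i in range(k):                    # k independent hash positions
            h = hashlib.sha256(f"{i}:{item}".encode()).digest()
            yield int.from_bytes(h[:8], "big") % self.m

    def add(self, item):
        for j in self._indexes(item, self.k_max):   # always set all k_max bits
            self.bits[j // 8] |= 1 << (j % 8)

    def query(self, item):
        # probing fewer functions is cheaper but admits more false positives
        return all(self.bits[j // 8] >> (j % 8) & 1
                   for j in self._indexes(item, self.k_active))

    def set_high_load(self, high):
        self.k_active = max(2, self.k_max // 2) if high else self.k_max
```

    Because every insertion sets all `k_max` positions, a query over any subset of those positions can never miss a stored item, which is what makes this kind of adaptation safe for DPI pre-filtering.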

    Fast Regular Expression Matching Using FPGA

    The thesis explains several algorithms for pattern matching. The algorithms work in both software and hardware. Part of the thesis is dedicated to extensions of finite automata. The second part explains hashing and introduces the concepts of perfect hashing and CRC. The thesis also includes a proposal for a possible structure of a pattern matching unit based on deterministic finite automata in an FPGA. Experiments to determine the structure and size of the resulting automata were carried out as part of this thesis.
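    As a software illustration of the table-driven matching such a unit performs, the sketch below builds a KMP-style DFA for a single literal pattern over a small alphabet; this is the standard textbook construction, not the structure proposed in the thesis.

```python
def build_dfa(pattern, alphabet):
    """States 0..m count how many pattern characters are matched;
    state m is accepting.  Assumes a nonempty pattern drawn from
    `alphabet`."""
    m = len(pattern)
    dfa = [{c: 0 for c in alphabet} for _ in range(m + 1)]
    dfa[0][pattern[0]] = 1
    x = 0                                   # restart state (longest border)
    for s in range(1, m):
        for c in alphabet:
            dfa[s][c] = dfa[x][c]           # mismatch: fall back like KMP
        dfa[s][pattern[s]] = s + 1          # match: advance
        x = dfa[x][pattern[s]]
    return dfa

def first_match(dfa, text):
    s = 0
    for i, c in enumerate(text):
        s = dfa[s][c]
        if s == len(dfa) - 1:
            return i                        # index of last matched char
    return -1

dfa = build_dfa("abab", "ab")
print(first_match(dfa, "aabbabab"))         # -> 7
```

    In an FPGA, the same transition table would sit in block RAM and consume one input character per clock cycle, which is the basis for the deterministic throughput such a unit provides.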

    MULTI-GIGABIT PATTERN MATCHING FOR DATA IN NETWORK SECURITY

    Network security is an increasingly pressing concern. Matching large sets of patterns against an incoming stream of data is a fundamental task in several fields such as network security or computational biology. High-speed network intrusion detection systems (IDS) rely on efficient pattern matching techniques to analyze the packet payload and make decisions on the significance of the packet body. However, matching the streaming payload bytes against thousands of patterns at multi-gigabit rates is computationally intensive. Various techniques have been proposed in the past, but system performance degrades at multi-gigabit rates. Pattern matching is a significant issue in intrusion detection systems, but by no means the only one. Handling multi-content rules, reordering, and reassembling incoming packets are also significant for system performance. We present two pattern matching techniques to compare incoming packets against intrusion detection search patterns. The first approach, decoded partial CAM (DpCAM), pre-decodes incoming characters, aligns the decoded data, and performs a logical AND on them to produce the match signal for each pattern. The second approach, perfect hashing memory (PHmem), uses perfect hashing to determine a unique memory location that contains the search pattern and a comparison between incoming data and memory output to determine the match. The suggested methods have been implemented in VHDL, and Xilinx tools are used for synthesis.
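    A rough software analogue of the PHmem flow, with a Python dict standing in for the hardware perfect hash into pattern memory: a fixed-width window of the payload selects a candidate pattern, and a single comparison against the stored pattern confirms the match. The prefix width `w` is an assumed parameter, and patterns shorter than `w` bytes are not handled by this sketch.

```python
def build_index(patterns, w=4):
    """Group rules by their first w bytes.  In PHmem this lookup is a
    perfect hash addressing pattern memory; a dict plays that role."""
    index = {}
    for p in patterns:
        index.setdefault(p[:w], []).append(p)
    return index

def scan(index, payload, w=4):
    """Slide a w-byte window over the payload; on a hit, verify the
    full candidate pattern, as PHmem compares memory output with the
    incoming data."""
    hits = []
    for i in range(len(payload) - w + 1):
        for p in index.get(payload[i:i + w], ()):
            if payload.startswith(p, i):
                hits.append((i, p))
    return hits

rules = [b"GET /admin", b"cmd.exe", b"/etc/passwd"]
print(scan(build_index(rules), b"xx/etc/passwd GET /admin"))
# -> [(2, b'/etc/passwd'), (14, b'GET /admin')]
```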

    A Tree Locality-Sensitive Hash for Secure Software Testing

    Bugs in software that make it through testing can cost tens of millions of dollars each year, and in some cases can even result in the loss of human life. In order to eliminate bugs, developers may use symbolic execution to search through possible program states looking for anomalous states. Most of the computational effort to search through these states is spent solving path constraints in order to determine the feasibility of entering each state. State merging can make this search more efficient by combining program states, allowing multiple execution paths to be analyzed at the same time. However, a merge with dissimilar path constraints dramatically increases the time necessary to solve the path constraint. Currently, there are no distance measures for path constraints, and pairwise comparison of program states is not scalable. A hashing method is presented that clusters constraints in such a way that similar constraints are placed in the same cluster without requiring pairwise comparisons between queries. When combined with other state-of-the-art state merging techniques, the hashing method allows the symbolic executor to execute more instructions per second and find more terminal execution states than the other techniques alone, without decreasing the high path coverage achieved by merging many states together.
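    The abstract does not give the hash itself; as one way such a scheme could work, the sketch below hashes only the operator structure of a constraint AST down to a fixed depth, so that constraints differing only in constants or deep subterms land in the same cluster without any pairwise comparison. The tuple encoding and the depth cutoff are assumptions for illustration.

```python
import hashlib

def tree_lsh(node, depth=3):
    """Hash the shape of a constraint AST, encoded as nested
    (op, child, ...) tuples, ignoring leaves and anything deeper
    than `depth`.  Similar shapes collide by construction."""
    def shape(n, d):
        if d == 0 or not isinstance(n, tuple):
            return "_"                      # constants/variables erased
        op, *kids = n
        return f"{op}({','.join(shape(k, d - 1) for k in kids)})"
    return hashlib.md5(shape(node, depth).encode()).hexdigest()[:8]

# Two path constraints differing only in constants share a bucket:
c1 = ("and", ("<", "x", 10), ("==", "y", 3))
c2 = ("and", ("<", "x", 99), ("==", "y", 7))
assert tree_lsh(c1) == tree_lsh(c2)
```

    Bucketing states by such a key before merging restricts merges to structurally similar path constraints, which mirrors the paper's goal of keeping the merged constraints cheap to solve.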

    Finite state automaton construction through regular expression hashing

    In this study, the regular expressions forming abstract states in Brzozowski's algorithm are not remapped to sequential state transition table addresses as would be the case in the classical approach, but are hashed to integers. Two regular expressions that are hashed to the same hash code are assigned the same integer address in the state transition table, reducing the number of states in the automaton. This reduction does not necessarily lead to the construction of a minimal automaton: no restrictions are placed on the hash function hashing two regular expressions to the same code. Depending on the quality of the hash function, a super-automaton, previously referred to as an approximate automaton, or an exact automaton can be constructed. When two regular expressions are hashed to the same state and they do not represent the same regular language, a super-automaton is constructed. A super-automaton accepts the regular language of the input regular expression, in addition to some extra strings. If the hash function is bad, many regular expressions that do not represent the same regular language will be hashed together, resulting in a smaller automaton that accepts extra strings. In the ideal case, two regular expressions will only be hashed together when they represent the same regular language. In this case, an exact minimal automaton will be constructed. It is shown that, using the hashing approach, an exact or super-automaton is always constructed. Another outcome of the hashing approach is that a non-deterministic automaton may be constructed. A new version of the hashing version of Brzozowski's algorithm is put forward which constructs a deterministic automaton. A method is also put forward for measuring the difference between an exact and a super-automaton: this takes the form of the k-equivalence measure, which measures the number of characters up to which the strings of two regular expressions are equal. The better the hash function, the higher the value of k, up to the point where the hash function results in regular expressions being hashed together if and only if they have the same regular language. Using the k-equivalence measure, eight generated hash functions and one hand coded hash function are evaluated for a large number of short regular expressions, which are generated using Gödel numbers. The k-equivalence concept is extended to the average k-equivalence value in order to evaluate the hash functions for longer regular expressions. The hand coded hash function is found to produce good results.
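    A compact sketch of the hashed construction, using Brzozowski derivatives over tuple-encoded regular expressions; the encoding and the finite-range hash are assumptions for illustration, and unlike the dissertation's algorithm the sketch simply follows the first regular expression that reaches each hash code.

```python
# Regular expressions as tuples: ("nul",) empty language, ("eps",)
# empty string, ("chr", c), ("cat", r, s), ("alt", r, s), ("star", r).

def nullable(r):
    t = r[0]
    if t in ("eps", "star"): return True
    if t in ("nul", "chr"):  return False
    if t == "cat": return nullable(r[1]) and nullable(r[2])
    return nullable(r[1]) or nullable(r[2])         # alt

def deriv(r, c):
    """Brzozowski derivative of r with respect to character c."""
    t = r[0]
    if t in ("eps", "nul"): return ("nul",)
    if t == "chr": return ("eps",) if r[1] == c else ("nul",)
    if t == "alt": return ("alt", deriv(r[1], c), deriv(r[2], c))
    if t == "star": return ("cat", deriv(r[1], c), r)
    d = ("cat", deriv(r[1], c), r[2])               # cat
    return ("alt", d, deriv(r[2], c)) if nullable(r[1]) else d

def build(r, alphabet, h):
    """States are hash codes h(regex); derivatives that hash alike
    share a state, so a coarse h merges states (a super-automaton).
    A finite-range h guarantees termination."""
    states = {h(r): nullable(r)}          # code -> accepting?
    trans, work = {}, [r]
    while work:
        q = work.pop()
        for c in alphabet:
            d = deriv(q, c)
            if h(d) not in states:
                states[h(d)] = nullable(d)
                work.append(d)
            trans[(h(q), c)] = h(d)
    return states, trans

# Coarse structural hash into a 64-entry table; a better h gives an
# automaton closer to exact:
r = ("cat", ("chr", "a"), ("star", ("chr", "b")))   # the language a b*
states, trans = build(r, "ab", h=lambda x: hash(x) % 64)
```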