4 research outputs found

    Hardware Acceleration for Unstructured Big Data and Natural Language Processing.

    The confluence of the rapid growth in electronic data in recent years and the renewed interest in domain-specific hardware accelerators presents exciting technical opportunities. Traditional scale-out solutions for processing the vast amounts of text data have been shown to be energy- and cost-inefficient. In contrast, custom hardware accelerators can provide higher throughput, lower latency, and significant energy savings. In this thesis, I present a set of hardware accelerators for unstructured big-data processing and natural language processing. The first accelerator, called HAWK, aims to speed up the processing of ad hoc queries against large in-memory logs. HAWK is motivated by the observation that traditional software-based tools for processing large text corpora use memory bandwidth inefficiently due to software overheads and thus fall far short of the peak scan rates possible on modern memory systems. HAWK is designed to process data at a constant rate of 32 GB/s, faster than most extant memory systems. I demonstrate that HAWK outperforms state-of-the-art software solutions for text processing, by almost an order of magnitude in many cases. HAWK occupies an area of 45 mm² in its Pareto-optimal configuration and consumes 22 W of power, well within the area and power envelopes of modern CPU chips. The second accelerator I propose aims to speed up similarity measurement calculations for semantic search in the natural language processing space. By leveraging the latency-hiding concepts of multi-threading and simple scheduling mechanisms, my design maximizes functional unit utilization. This similarity measurement accelerator provides speedups of 36x-42x over optimized software running on server-class cores, while requiring 56x-58x lower energy and only 1.3% of the area.
    PhD thesis, Computer Science and Engineering, University of Michigan, Horace H. Rackham School of Graduate Studies. http://deepblue.lib.umich.edu/bitstream/2027.42/116712/1/prateekt_1.pd
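For orientation only (the abstract does not specify the exact similarity metric, so cosine similarity over dense document vectors is an assumption here), the software baseline for such semantic-search workloads typically reduces to a dot-product-heavy kernel like the minimal NumPy sketch below; this is the kind of computation the similarity measurement accelerator would keep its functional units busy with.

```python
import numpy as np

def cosine_similarities(query_vec, doc_matrix):
    """Cosine similarity between one query vector and each row of doc_matrix."""
    q = query_vec / np.linalg.norm(query_vec)
    docs = doc_matrix / np.linalg.norm(doc_matrix, axis=1, keepdims=True)
    return docs @ q  # one score per document

# Toy usage: 4 documents embedded in a 300-dimensional semantic space.
rng = np.random.default_rng(0)
docs = rng.standard_normal((4, 300))
query = rng.standard_normal(300)
print(cosine_similarities(query, docs))
```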

    Software and Hardware Implementations of the Aho-Corasick Algorithm for Intrusion Detection (Implémentations logicielle et matérielle de l'algorithme Aho-Corasick pour la détection d'intrusions)

    This work proposes effective methods and architectures for implementing the Aho-Corasick algorithm, which can be used for pattern matching in network-based intrusion detection systems such as Snort. Two versions are proposed: a software version and a hardware version. The first is a software implementation in C/C++ for general-purpose processors; it introduces new implementations of the algorithm that account for memory resources and the processor's sequential execution. The second is a VHDL architecture for a specialized processor on FPGA; it introduces new architectures that account for the available computing resources, the memory resources, and the fine-grained parallelism offered by FPGAs, and it is compared against a modified software version.
    In both cases, the performance and cost trade-offs of selecting different node data structures in memory are analyzed. A set of parameters is chosen to maximize a performance objective function that combines cycle count, memory usage, and system clock frequency. These parameters determine which of two or three node data-structure types (depending on the version) is selected for each node of the state machine. For validation, test cases with diverse data were used to ensure that the algorithm operates correctly, and the contents of the Snort 2.9.7 rules were used. The state machine was built from about 26×10³ patterns extracted from these rules and comprises about 381×10³ nodes. The main contribution of this work is to show that, through architecture exploration, parameters can be chosen to obtain an optimal memory × time product. For the software version, memory consumption drops from 407 MB to 21 MB, a reduction of about 20× compared with the worst case using a single node type. For the hardware version, memory consumption drops from 11 MB to 4 MB, a reduction of about 3× compared with the modified software version. Throughput increases from 300 Mbps with the modified software version to 400 Mbps with the hardware version.
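As background (this is a generic textbook formulation, not the thesis's own code or node layout), the Aho-Corasick automaton can be sketched in a few dozen lines of Python: insert every pattern into a trie, add failure links by breadth-first search, then scan the input in a single pass, following failure links on mismatches.

```python
from collections import deque

def build_automaton(patterns):
    """Build an Aho-Corasick automaton as parallel lists: goto, fail, output."""
    goto, fail, out = [{}], [0], [set()]
    for pat in patterns:                       # 1) insert each pattern into a trie
        node = 0
        for ch in pat:
            if ch not in goto[node]:
                goto[node][ch] = len(goto)
                goto.append({})
                fail.append(0)
                out.append(set())
            node = goto[node][ch]
        out[node].add(pat)
    queue = deque(goto[0].values())            # 2) BFS to set failure links
    while queue:
        node = queue.popleft()
        for ch, nxt in goto[node].items():
            queue.append(nxt)
            f = fail[node]
            while f and ch not in goto[f]:     # follow failures of the parent
                f = fail[f]
            fail[nxt] = goto[f].get(ch, 0)
            out[nxt] |= out[fail[nxt]]         # inherit matches ending here
    return goto, fail, out

def search(text, goto, fail, out):
    """Scan text once, reporting (end_index, pattern) for every match."""
    node, matches = 0, []
    for i, ch in enumerate(text):
        while node and ch not in goto[node]:
            node = fail[node]
        node = goto[node].get(ch, 0)
        for pat in out[node]:
            matches.append((i, pat))
    return matches

goto, fail, out = build_automaton(["he", "she", "his", "hers"])
print(search("ushers", goto, fail, out))   # finds "she", "he", and "hers"
```

The thesis's contribution lies not in this basic algorithm but in how each automaton node is laid out in memory (two or three candidate node structures, chosen per node) to minimize the memory × time product in the software and FPGA versions.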

    Research on Medical Question Answering System Based on Knowledge Graph

    To meet the high-efficiency question answering needs of existing patients and doctors, this system integrates medical professional knowledge, knowledge graphs, and question answering systems that conduct man-machine dialogue through natural language. This system locates the medical field, uses crawler technology to use vertical medical websites as data sources, and uses diseases as the core entity to construct a knowledge graph containing 44,000 knowledge entities of 7 types and 300,000 entities of 11 kinds. It is stored in the Neo4j graph database, using rule-based matching methods and string-matching algorithms to construct a domain lexicon to classify and query questions. This system has specific practical value in the medical field knowledge graph and question answering system
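As a hedged sketch of the question-answering pipeline described above (the lexicon entries, intent keywords, graph schema, and Cypher templates below are illustrative assumptions, not taken from the paper), a domain lexicon plus simple keyword rules can classify a question and map it to a graph query:

```python
# Minimal rule-based question classification over a domain lexicon.
# All entries, intents, and the Cypher schema below are hypothetical examples.
DISEASE_LEXICON = {"diabetes", "hypertension", "influenza"}          # entity dictionary
INTENT_RULES = {
    "symptom_query": ["symptom", "symptoms", "signs"],
    "drug_query": ["drug", "medicine", "treat", "treatment"],
}

def classify(question: str):
    """Return (intent, matched diseases) using simple string matching."""
    q = question.lower()
    diseases = [d for d in DISEASE_LEXICON if d in q]
    for intent, keywords in INTENT_RULES.items():
        if any(k in q for k in keywords):
            return intent, diseases
    return "unknown", diseases

CYPHER_TEMPLATES = {
    # Hypothetical schema: (:Disease)-[:HAS_SYMPTOM]->(:Symptom), etc.
    "symptom_query": "MATCH (d:Disease {name: $name})-[:HAS_SYMPTOM]->(s) RETURN s.name",
    "drug_query": "MATCH (d:Disease {name: $name})-[:RECOMMEND_DRUG]->(m) RETURN m.name",
}

intent, entities = classify("What are the symptoms of diabetes?")
print(intent, entities)                    # symptom_query ['diabetes']
print(CYPHER_TEMPLATES.get(intent))        # query to run against the Neo4j graph
```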

    Let Knowledge Make Recommendations For You

    Knowledge graphs can make recommendation systems more accurate and personalized, and they also make recommendations interpretable and traceable. The purpose of a recommendation system is to recommend a set of unobserved items to users. At present, knowledge-graph-based recommendation systems are mainly implemented in two ways: embedding-based and path-based. Embedding-based methods usually use information from the knowledge graph directly to enrich the representation of an item or user, but they fail to capture multi-hop relations and find it challenging to exploit semantic-network information. Path-based recommendation algorithms use the knowledge graph to obtain multi-hop knowledge and compare the similarity between users or items to improve recommendation quality. This paper: (1) addresses how a recommendation algorithm can effectively exploit semantically related knowledge by designing a self-attention-based knowledge representation learning model that learns the semantic information of entities and relations from whole triples, yielding high-quality knowledge features and thus richer, more useful information for recommendation; (2) constructs a content recommendation model that unifies embedded behavior features and knowledge features, combining historical user preferences with the knowledge graph to dynamically learn knowledge features and deliver more accurate and diverse recommendations; (3) for the problem of knowledge feature representation learning, focuses on the differing importance of triples in determining entity semantics and uses the self-attention mechanism to learn semantics from triples, improving the quality of the knowledge representation and providing high-quality auxiliary information for the recommendation system. The model's performance is demonstrated through link prediction and triple classification experiments, which confirm the feasibility of the proposed method.
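To make the self-attention-over-triples idea concrete (a minimal sketch under assumed shapes and randomly initialized weights; the paper's actual model, loss, and parameterization are not specified here), the head, relation, and tail embeddings of a triple can attend to one another with standard scaled dot-product attention, and the attended rows can serve as knowledge features for the recommender:

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def triple_self_attention(head, rel, tail, Wq, Wk, Wv):
    """Toy self-attention over the three elements of one (head, relation, tail) triple.

    Each element attends to the others; the attended head/tail rows can be used
    as knowledge features downstream. Projection weights are placeholders here.
    """
    X = np.stack([head, rel, tail])            # (3, d) triple as a tiny "sequence"
    Q, K, V = X @ Wq, X @ Wk, X @ Wv           # queries, keys, values
    scores = Q @ K.T / np.sqrt(K.shape[1])     # (3, 3) scaled dot-product scores
    attn = np.apply_along_axis(softmax, 1, scores)
    return attn @ V                            # (3, d) attended representations

d = 16
rng = np.random.default_rng(0)
Wq, Wk, Wv = (rng.standard_normal((d, d)) * 0.1 for _ in range(3))
h, r, t = rng.standard_normal(d), rng.standard_normal(d), rng.standard_normal(d)
features = triple_self_attention(h, r, t, Wq, Wk, Wv)
print(features.shape)                          # (3, 16)
```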