
    Security Applications of GPUs

    Despite recent advances in software security hardening techniques, vulnerabilities can still be exploited if the attackers are sufficiently determined. Regardless of the protections enabled, successful exploitation can always be achieved, although today it is admittedly much harder than it was in the past. Since securing software is still an open research area, the community also investigates detection methods to protect software. Three of the most promising such methods monitor (i) the network, (ii) the filesystem, and (iii) the host memory for possible exploitation. Whenever a malicious operation is detected, the monitor should be able to terminate it and/or alert the administrator. In this chapter, we explore how to utilize the highly parallel capabilities of modern commodity graphics processing units (GPUs) to improve the performance of different security tools operating at the network, storage, and memory level, and how they can offload the CPU whenever possible. Our results show that modern GPUs can be very efficient and highly effective at accelerating the pattern matching operations of network intrusion detection systems and antivirus tools, as well as at monitoring the integrity of the base computing systems.
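    The pattern matching workload described above is data-parallel: the input buffer can be split into fixed-size chunks that are scanned independently, one per GPU thread, with a small overlap at chunk boundaries so no match spanning a boundary is lost. The following is our own minimal sketch of this decomposition (names like `scan_chunks` are ours, and the per-chunk loop is emulated sequentially in Python rather than launched on a GPU):

```python
# Illustrative sketch (not the chapter's code): data-parallel multi-pattern
# scanning. On a GPU, each chunk would be scanned by one thread; chunks
# overlap by (longest pattern - 1) bytes so matches straddling a chunk
# boundary are still found, and each match is credited to the chunk where
# it starts so it is reported exactly once.

def scan_chunks(data: bytes, patterns: list, chunk_size: int = 64):
    overlap = max(len(p) for p in patterns) - 1
    matches = set()
    for start in range(0, len(data), chunk_size):
        # each iteration is independent -> maps to one GPU thread
        chunk = data[start : start + chunk_size + overlap]
        for p in patterns:
            pos = chunk.find(p)
            while pos != -1:
                if pos < chunk_size:  # match starts in this chunk, not the overlap
                    matches.add((start + pos, p))
                pos = chunk.find(p, pos + 1)
    return sorted(matches)
```

On a real GPU, the patterns would typically be compiled into a single automaton (e.g. Aho-Corasick) held in device memory rather than matched one by one, but the boundary-overlap decomposition is the same.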

    Structured Parallel Programming Using Trees

    High-level abstractions for parallel programming are still immature. Computations on complicated data structures such as pointer structures are considered irregular algorithms. General graph structures, which irregular algorithms typically deal with, are difficult to divide and conquer. Because the divide-and-conquer paradigm is essential for load balancing in parallel algorithms and a key to parallel programming, general graphs are genuinely difficult. Trees, however, lead to divide-and-conquer computations by definition and are sufficiently general and powerful as a programming tool. We therefore deal with abstractions of tree-based computations. Our study started from Matsuzaki's work on tree skeletons. We have improved the usability of tree skeletons by enriching their implementation aspects. Specifically, we have dealt with two issues. First, we implemented a loose coupling between skeletons and data structures and developed a flexible tree skeleton library. Second, we implemented a parallelizer that transforms sequential recursive functions in C into parallel programs that use tree skeletons implicitly. This parallelizer hides the complicated API of tree skeletons and enables programmers to use tree skeletons without extra burden. The practicality of tree skeletons, however, had still not been improved. On the basis of observations from the practice of tree skeletons, we deal with two application domains: program analysis and neighborhood computation. In the domain of program analysis, compilers treat input programs as control-flow graphs (CFGs) and perform analysis on CFGs. Program analysis is therefore difficult to divide and conquer. To resolve this problem, we developed divide-and-conquer methods for program analysis in a syntax-directed manner on the basis of Rosen's high-level approach. Specifically, we dealt with data-flow analysis based on Tarjan's formalization and value-graph construction based on a functional formalization. In the domain of neighborhood computations, a primary issue is locality. A naive parallel neighborhood computation without locality enhancement causes many cache misses. The divide-and-conquer paradigm is known to be useful for locality enhancement as well. We therefore applied algebraic formalizations and a tree-segmenting technique derived from tree skeletons to the locality enhancement of neighborhood computations.
    University of Electro-Communications, 201
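    To make the idea of a tree skeleton concrete, here is a hypothetical Python sketch (our own illustration; real tree skeleton libraries are far more elaborate and typically C++/MPI based): the programmer supplies only a leaf operator and a node operator, and the skeleton owns the divide-and-conquer traversal, whose independent subtree calls are the units a parallel implementation would distribute.

```python
# Hypothetical tree-reduction skeleton (our names, not the library's API).
# Assumes each node has either zero or two children.

from dataclasses import dataclass
from typing import Optional

@dataclass
class Node:
    value: int
    left: "Optional[Node]" = None
    right: "Optional[Node]" = None

def tree_reduce(t, leaf, node):
    """Divide into subtrees, conquer with `node`. The two recursive calls
    are independent, so a parallel backend could evaluate them concurrently."""
    if t.left is None and t.right is None:  # leaf case
        return leaf(t.value)
    return node(t.value,
                tree_reduce(t.left, leaf, node),
                tree_reduce(t.right, leaf, node))

# Example: sum all values in the tree with a one-line operator pair.
t = Node(1, Node(2), Node(3, Node(4), Node(5)))
total = tree_reduce(t, lambda v: v, lambda v, l, r: v + l + r)  # 15
```

Swapping the operators (e.g. `max` instead of `+`) reuses the same traversal, which is exactly the separation of concerns a skeleton library provides.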

    A Parallel Computational Approach for String Matching - A Novel Structure with Omega Model

    In recent days, the parallel string matching problem has caught the attention of many researchers because of its importance in different applications such as IRS, genome sequencing, data cleaning, etc. While it is very easily stated, and many of the simple algorithms perform very well in practice, numerous works have been published on the subject and research is still very active. In this paper we propose an omega parallel computing model for parallel string matching. The algorithm is designed to work on an omega-model parallel architecture, where the text is divided for parallel processing and a special search at each division point is required for consistent and complete matching. This algorithm reduces the number of comparisons, and parallelization improves the time efficiency. Experimental results show that, on a multi-processor system, the omega-model implementation of the proposed parallel string matching algorithm can reduce string matching time.
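    The "special search at division points" can be illustrated with a small sketch of our own (not the paper's code): the text is split among workers, and each worker scans m-1 extra characters past its division point, so a match that straddles a boundary is found by exactly one worker.

```python
# Illustrative division-point handling for parallel string matching
# (our own sketch; function and variable names are ours).

from concurrent.futures import ThreadPoolExecutor

def parallel_match(text: str, pattern: str, workers: int = 4):
    m = len(pattern)
    size = -(-len(text) // workers)  # ceil(len/workers): one segment per worker

    def search(start: int):
        end = min(start + size, len(text))
        # extend the segment by m-1 characters so a match straddling the
        # division point is still visible; any match found here starts
        # before `end`, so no other worker reports it too
        window = text[start : end + m - 1]
        hits, i = [], window.find(pattern)
        while i != -1:
            hits.append(start + i)
            i = window.find(pattern, i + 1)
        return hits

    with ThreadPoolExecutor(max_workers=workers) as ex:
        parts = ex.map(search, range(0, len(text), size))
    return sorted(h for part in parts for h in part)
```

Python threads will not actually speed this up (the search is GIL-bound); the sketch only demonstrates the division and the boundary search, which carry over to any parallel architecture.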

    Scalable and fault-tolerant data stream processing on multi-core architectures

    With increasing data volumes and velocity, many applications are shifting from the classical “process-after-store” paradigm to a stream processing model: data is produced and consumed as continuous streams. Stream processing captures latency-sensitive applications as diverse as credit card fraud detection and high-frequency trading. These applications are expressed as queries of algebraic operations (e.g., aggregation) over the most recent data using windows, i.e., finite evolving views over the input streams. To guarantee correct results, streaming applications require precise window semantics (e.g., temporal ordering) for operations that maintain state. While high processing throughput and low latency are performance desiderata for stateful streaming applications, achieving both poses challenges. Computing the state of overlapping windows causes redundant aggregation operations: incremental execution (i.e., reusing previous results) reduces latency but prevents parallelization; at the same time, parallelizing window execution for stateful operations with precise semantics demands ordering guarantees and state access coordination. Finally, streams and state must be recovered to produce consistent and repeatable results in the event of failures. Given the rise of shared-memory multi-core CPU architectures and high-speed networking, we argue that it is possible to address these challenges in a single node without compromising window semantics, performance, or fault-tolerance. In this thesis, we analyze, design, and implement stream processing engines (SPEs) that achieve high performance on multi-core architectures. 
To this end, we introduce new approaches for in-memory processing that address the previous challenges: (i) for overlapping windows, we provide a family of window aggregation techniques that enable computation sharing based on the algebraic properties of aggregation functions; (ii) for parallel window execution, we balance parallelism and incremental execution by developing abstractions for both and combining them into a novel design; and (iii) for reliable single-node execution, we enable strong fault-tolerance guarantees without sacrificing performance by reducing the required disk I/O bandwidth using a novel persistence model. We combine the above to implement an SPE that processes hundreds of millions of tuples per second with sub-second latencies. These results reveal the opportunity to reduce resource and maintenance footprint by replacing cluster-based SPEs with single-node deployments.
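    As a toy illustration of the computation-sharing idea (our own sketch, not the thesis code): for an invertible aggregation such as sum, a sliding window can reuse the previous result and update it in O(1) per tuple, instead of re-aggregating the whole window on every slide.

```python
# Incremental count-based sliding-window sum (illustrative; class name ours).
# Sum is invertible, so the evicted tuple can simply be subtracted.

from collections import deque

class SlidingSum:
    def __init__(self, size: int):
        self.size = size
        self.buf = deque()
        self.total = 0

    def insert(self, x):
        """O(1) per tuple: reuse the previous aggregate, evict the oldest."""
        self.buf.append(x)
        self.total += x
        if len(self.buf) > self.size:
            self.total -= self.buf.popleft()  # invert the evicted tuple
        return self.total
```

For non-invertible functions such as max, schemes like the two-stacks algorithm achieve similar sharing; the thesis generalizes such techniques over the algebraic properties of the aggregation function.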

    Computer Aided Verification

    This open access two-volume set LNCS 13371 and 13372 constitutes the refereed proceedings of the 34th International Conference on Computer Aided Verification, CAV 2022, which was held in Haifa, Israel, in August 2022. The 40 full papers presented together with 9 tool papers and 2 case studies were carefully reviewed and selected from 209 submissions. The papers are organized in the following topical sections: Part I: invited papers; formal methods for probabilistic programs; formal methods for neural networks; software verification and model checking; hyperproperties and security; formal methods for hardware, cyber-physical, and hybrid systems. Part II: probabilistic techniques; automata and logic; deductive verification and decision procedures; machine learning; synthesis and concurrency. This is an open access book.

    Proceedings of the 22nd Conference on Formal Methods in Computer-Aided Design – FMCAD 2022

    The Conference on Formal Methods in Computer-Aided Design (FMCAD) is an annual conference on the theory and applications of formal methods in hardware and system verification. FMCAD provides a leading forum for researchers in academia and industry to present and discuss groundbreaking methods, technologies, theoretical results, and tools for reasoning formally about computing systems. FMCAD covers formal aspects of computer-aided system design, including verification, specification, synthesis, and testing.

    Face Recognition with Attention Mechanisms

    Face recognition has been widely used in people's daily lives due to its contactless process and high accuracy. Existing works can be divided into two categories: global and local approaches. The mainstream global approaches usually extract features from whole faces. However, global faces tend to suffer from dramatic appearance changes under scenarios with large pose variations, heavy occlusions, and so on. On the other hand, since some local patches may remain similar, they can play an important role in such scenarios. Existing local approaches mainly rely on cropping local patches around facial landmarks and then extracting the corresponding local representations. However, facial landmark detection may be inaccurate or even fail, which limits their applicability. To address this issue, attention mechanisms are applied to automatically locate discriminative facial parts while suppressing noisy parts. Following this motivation, several models are proposed, including the Local multi-Scale Convolutional Neural Networks (LS-CNN), Hierarchical Pyramid Diverse Attention (HPDA) networks, Contrastive Quality-aware Attentions (CQA-Face), Diverse and Sparse Attentions (DSA-Face), and Attention Augmented Networks (AAN-Face). Firstly, a novel spatial attention module (local aggregation networks, LANet) is proposed to adaptively locate useful facial parts. Meanwhile, different facial parts may appear at different scales due to pose variations and expression changes. To solve this issue, the LS-CNN is proposed to extract discriminative local information at different scales. Secondly, it is observed that some important facial parts may be neglected without proper guidance. Besides, hierarchical features from different layers, which contain rich low-level and high-level information, are not fully exploited. To overcome these two issues, the HPDA is proposed. Specifically, a diverse learning scheme is proposed to enlarge the Euclidean distances between every two spatial attention maps, locating diverse facial parts. Besides, hierarchical bilinear pooling is adopted to effectively combine features from different layers. Thirdly, despite the decent performance of the HPDA, the Euclidean distance may not be flexible enough to control the distances between attention maps. Further, it is also important to assign different quality scores to various local patches, because different facial parts carry information of varying importance, especially for faces with heavy occlusions, large pose variations, or quality changes. The CQA-Face is proposed, which mainly consists of contrastive attention learning and quality-aware networks: the former proposes a better distance function to enlarge the distances between attention maps, and the latter applies a graph convolutional network to effectively learn the relations among different facial parts, assigning higher quality scores to important patches and smaller values to less useful ones. Fourthly, the attention subset problem may occur, where some attention maps are subsets of other attention maps. Consequently, the learned facial parts are not diverse enough to cover every facial detail, leading to inferior results. In our DSA-Face model, a new pairwise self-contrastive attention is proposed, which considers the complement of subset attention maps in the loss function to address the aforementioned attention subset problem. Moreover, an attention sparsity loss is proposed to suppress the responses around noisy image regions, especially for masked faces. Lastly, in existing popular face datasets, some characteristics of facial images (e.g. frontal faces) are over-represented, while others (e.g. profile faces) are under-represented. In the AAN-Face model, attention erasing is proposed to simulate various occlusion levels. Besides, an attention center loss is proposed to control the responses on each attention map, guiding it to focus on the same facial part. Our works have greatly improved the performance of cross-pose, cross-quality, cross-age, cross-modality, and masked face matching tasks.
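    The diverse-learning idea of pushing attention maps apart can be sketched as follows (a simplified stand-in for the actual loss, using plain Python lists instead of tensors; the function name is ours): the penalty shrinks as the pairwise Euclidean distances between flattened attention maps grow, so minimizing it drives each map toward a different facial part.

```python
# Simplified diversity penalty over attention maps (illustrative only).

import math

def diversity_loss(maps):
    """maps: equal-length flattened attention maps (lists of floats).
    Returns the negative mean pairwise Euclidean distance, so minimizing
    the loss enlarges the distances and diversifies the maps."""
    n = len(maps)
    dists = []
    for i in range(n):
        for j in range(i + 1, n):
            d = math.sqrt(sum((a - b) ** 2 for a, b in zip(maps[i], maps[j])))
            dists.append(d)
    return -sum(dists) / len(dists)
```

Identical maps yield a loss of zero (no diversity), while well-separated maps yield a more negative loss; CQA-Face replaces this plain Euclidean distance with a more flexible contrastive distance.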