10 research outputs found

    Exploiting GPU Architectures for Dynamic Invariant Mining

    Get PDF
    Dynamic mining of invariants is a class of approaches to extract logic formulas from the execution traces of a system under verification (SUV), with the purpose of expressing stable conditions in the behaviour of the SUV. The mined formulas represent likely invariants for the SUV, which certainly hold on the considered traces, but there is no guarantee that they are true in general. A large set of representative execution traces must be analysed to increase the probability that mined invariants are generally true. However, this becomes extremely time-consuming for current sequential approaches when long execution traces and large sets of SUV variables are considered. To overcome this limitation, the paper presents a parallel approach for invariant mining that exploits GPU architectures for processing an execution trace composed of millions of clock cycles in a few seconds.
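    As a rough illustration of what dynamic invariant mining over an execution trace involves, the following C++ sketch checks a few candidate relations between two monitored variables against every clock cycle and keeps the ones that are never falsified. The candidate set, variable names, and trace are illustrative assumptions, not the paper's implementation; the GPU approach described above would evaluate candidates and trace chunks in parallel rather than sequentially.

```cpp
// Minimal sequential sketch of dynamic invariant mining: each candidate
// relation is assumed true and invalidated as soon as one trace sample
// contradicts it. Survivors are the "likely invariants" of the trace.
#include <cstdint>
#include <iostream>
#include <string>
#include <vector>

struct Candidate {
    std::string text;                       // e.g. "v0 <= v1"
    bool (*holds)(int64_t, int64_t);        // predicate evaluated on each sample
    bool alive = true;                      // not yet falsified
};

int main() {
    // One row per clock cycle, one column per monitored variable (toy trace).
    std::vector<std::vector<int64_t>> trace = {{0, 0}, {1, 2}, {3, 3}, {2, 5}};

    std::vector<Candidate> candidates = {
        {"v0 == v1", [](int64_t a, int64_t b) { return a == b; }},
        {"v0 <= v1", [](int64_t a, int64_t b) { return a <= b; }},
        {"v0 <  v1", [](int64_t a, int64_t b) { return a <  b; }},
    };

    for (const auto& row : trace)           // scan every clock cycle
        for (auto& c : candidates)
            if (c.alive && !c.holds(row[0], row[1]))
                c.alive = false;            // falsified by this cycle

    for (const auto& c : candidates)        // surviving candidates are likely invariants
        if (c.alive) std::cout << c.text << '\n';
    return 0;
}
```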

    Pattern Discovery in Colored Strings

    Full text link
    In this paper, we consider the problem of identifying patterns of interest in colored strings. A colored string is a string where each position is assigned one of a finite set of colors. Our task is to find substrings of the colored string that always occur followed by the same color at the same distance. The problem is motivated by applications in embedded systems verification, in particular, assertion mining. The goal there is to automatically find properties of the embedded system from the analysis of its simulation traces. We show that, in our setting, the number of patterns of interest is upper-bounded by O(n^2), where n is the length of the string. We introduce a baseline algorithm, running in O(n^2) time, which identifies all patterns of interest satisfying certain minimality conditions, for all colors in the string. For the case where one is interested in patterns related to one color only, we also provide a second algorithm which runs in O(n^2 log n) time in the worst case but is faster than the baseline algorithm in practice. Both solutions use suffix trees, and the second algorithm also uses an appropriately defined priority queue, which allows us to reduce the number of computations. We performed an experimental evaluation of the proposed approaches over both synthetic and real-world datasets, and found that the second algorithm outperforms the first algorithm on all simulated data, while on the real-world data, the performance varies between a slight slowdown (on half of the datasets) and a speedup by a factor of up to 11. Comment: 22 pages, 5 figures, 2 tables, published in ACM Journal of Experimental Algorithmics. This is the journal version of the paper with the same title at SEA 2020 (18th Symposium on Experimental Algorithms, Catania, Italy, June 16-18, 2020).
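    To make the problem definition concrete, the following C++ brute-force sketch enumerates (substring, distance) pairs whose every occurrence is followed, at that distance, by one and the same color. It ignores the minimality conditions and is nowhere near the efficiency of the suffix-tree algorithms in the paper; the toy string and colors are illustrative assumptions.

```cpp
// Naive reference for "patterns of interest" in a colored string: a pattern
// is a (substring, distance) pair such that every occurrence of the substring
// is followed at that distance by the same color.
#include <cstddef>
#include <iostream>
#include <map>
#include <string>
#include <vector>

int main() {
    std::string text   = "abaabab";    // toy string: characters...
    std::string colors = "RGRBGRB";    // ...and the color of each position
    const std::size_t n = text.size();

    // Group the end positions of every occurrence of every substring
    // (end positions are generated in increasing order).
    std::map<std::string, std::vector<std::size_t>> ends;
    for (std::size_t i = 0; i < n; ++i)
        for (std::size_t len = 1; i + len <= n; ++len)
            ends[text.substr(i, len)].push_back(i + len - 1);

    for (const auto& [sub, occ] : ends) {
        // Only distances that stay inside the string for all occurrences.
        for (std::size_t d = 1; occ.back() + d < n; ++d) {
            char c = colors[occ.front() + d];
            bool uniform = true;
            for (std::size_t e : occ)
                if (colors[e + d] != c) { uniform = false; break; }
            if (uniform)
                std::cout << sub << " -> color " << c << " at distance " << d << '\n';
        }
    }
    return 0;
}
```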

    Mangrove: an Inference-based Dynamic Invariant Mining for GPU Architectures

    Get PDF
    Likely invariants model properties that hold in operating conditions of a computing system. Dynamic mining of invariants aims at extracting logic formulas representing such properties from the system execution traces, and it is widely used for verification of intellectual property (IP) blocks. Although the extracted formulas represent likely invariants that hold in the considered traces, there is no guarantee that they are true in general for the system under verification. As a consequence, to increase the probability that the mined invariants are true in general, dynamic mining has to be performed on large sets of representative execution traces. This makes the execution-based mining process of actual IP blocks very time-consuming due to the trace lengths and to the large sets of monitored signals. This article presents Mangrove, an efficient implementation of a dynamic invariant mining algorithm for GPU architectures. Mangrove exploits inference rules, which are applied at run time to filter invariants from the execution traces and, thus, to considerably reduce the problem complexity. Mangrove allows users to define invariant templates and, from these templates, it automatically generates kernels for parallel and efficient mining on GPU architectures. The article presents the tool, the analysis of its performance, and its comparison with the best sequential and parallel implementations in the state of the art.
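    The following C++ sketch only illustrates the general inference-rule idea: verify a strong template once and, when it holds, derive weaker templates without rescanning the trace. The templates, rule, and trace below are simplified assumptions, not Mangrove's actual rule set, template language, or GPU kernels.

```cpp
// Illustrative sketch of inference-rule filtering in template-based mining:
// if the strong template (a == b) holds over the whole trace, the weaker
// templates (a <= b) and (a >= b) follow logically and need no extra scan.
#include <cstdint>
#include <functional>
#include <iostream>
#include <vector>

using Trace = std::vector<std::pair<int64_t, int64_t>>;

// Returns true iff the relation holds on every sample of the trace.
static bool holds(const Trace& t, const std::function<bool(int64_t, int64_t)>& rel) {
    for (auto [a, b] : t)
        if (!rel(a, b)) return false;
    return true;
}

int main() {
    Trace trace = {{1, 1}, {3, 3}, {7, 7}};

    if (holds(trace, [](int64_t a, int64_t b) { return a == b; })) {
        // Inference rule: (a == b) entails (a <= b) and (a >= b).
        std::cout << "a == b\na <= b\na >= b\n";
    } else {
        if (holds(trace, [](int64_t a, int64_t b) { return a <= b; })) std::cout << "a <= b\n";
        if (holds(trace, [](int64_t a, int64_t b) { return a >= b; })) std::cout << "a >= b\n";
    }
    return 0;
}
```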

    Automatic extraction of assertions from execution traces of behavioural models

    No full text
    Several approaches exist for specification mining of hardware designs. Most of them work at RTL and extract assertions in the form of temporal relations between Boolean variables. Other approaches work at system level (e.g., TLM) to mine assertions that specify the behaviour of the communication protocol. However, these techniques do not generate assertions addressing the design functionality. Thus, there is a lack of studies related to the automatic mining of assertions for capturing the functionality of behavioural models, where logic expressions among variables more abstract (e.g., numeric) than bits and bit vectors are necessary. This paper is intended to fill this gap by proposing a tool for the automatic extraction of temporal assertions from execution traces of behavioural models, adopting a mix of static and dynamic techniques.
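    As a minimal sketch of the kind of temporal assertion over numeric variables that such mining targets, the following C++ snippet checks one candidate next-cycle implication, G(req > 0 -> X(ack == req)), against a toy trace. The variable names, trace values, and template are illustrative assumptions, not the tool's actual assertion language.

```cpp
// Checks a single candidate temporal assertion over a behavioural-model trace:
// whenever req is raised at cycle t, at cycle t+1 ack equals the requested value.
#include <cstdint>
#include <iostream>
#include <vector>

struct Sample { int64_t req; int64_t ack; };

int main() {
    std::vector<Sample> trace = {{0, 0}, {5, 0}, {0, 5}, {7, 0}, {0, 7}};

    bool assertion_holds = true;
    for (std::size_t t = 0; t + 1 < trace.size(); ++t)
        if (trace[t].req > 0 && trace[t + 1].ack != trace[t].req)
            assertion_holds = false;        // counterexample at cycle t

    std::cout << "G(req > 0 -> X(ack == req)) "
              << (assertion_holds ? "holds" : "fails") << " on this trace\n";
    return 0;
}
```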

    Algorithms and Data Structures for Coding, Indexing, and Mining of Sequential Data

    Get PDF
    In recent years, the production of sequential data has been rapidly increasing. This requires solving challenging problems about how to represent information, how to retrieve information, and how to extract knowledge from sequential data. These questions belong to the areas of coding, indexing, and mining, respectively. In this thesis, we investigate problems from those three areas.

    Coding refers to the way in which information is represented. Coding aims at generating optimal codes, that is, codes having a minimum expected length. Codes can be generated for different purposes, from data compression to error detection/correction. The Lempel-Ziv 77 parsing produces an asymptotically optimal code in terms of compression. We study algorithms to efficiently decompress strings from the Lempel-Ziv 77 parsing, using memory proportional to the size of the parsing itself. We provide the first implementation of an algorithm by Bille et al., the only work we are aware of on this problem. We present a practical evaluation of this approach and several optimizations which improve the performance on all datasets we tested. Through the Ulam-Rényi game, it is possible to provide optimal adaptive error-correcting codes. The game consists of discovering an unknown m-bit number by asking membership questions, the answers to which can be erroneous. Questions are formulated knowing the answers to all previous ones. We want to find an optimal strategy, i.e., a strategy that can identify any m-bit number using the theoretical minimum number of questions. We studied the case where questions are a union of up to a fixed number of intervals, and up to three answers can be erroneous. We first show that for any sufficiently large m, there exists a strategy to identify an initially unknown m-bit number which uses at most four intervals per question. We further refine our main tool to turn the above asymptotic result into a complete characterization of those instances of the Ulam-Rényi game that admit optimal strategies.

    Indexing refers to the way in which information is retrieved. An index for texts permits finding all occurrences of any substring, without traversing the whole text. Many applications require looking for approximate substrings. One of these is the problem of jumbled pattern matching, where two strings match if one is a permutation of the other. We study combinatorial aspects of prefix normal words, a class of binary words introduced in this context. These words can be used as indices for the Indexed Binary Jumbled Pattern Matching problem. We present a new recursive generation algorithm for prefix normal words that is competitive with the previous one but allows listing all prefix normal words sharing the same prefix. This sheds light on novel insights that may help solve the problem of counting the number of prefix normal words of a given length. We then introduce infinite prefix normal words, and we show that one of the operations used by the algorithm, when repeatedly applied to extend a word, produces an infinite prefix normal word. This motivates the search for other operations that produce infinite prefix normal words. We found that one of these operations establishes a connection between prefix normal words and Sturmian words. We also explored the relationship between prefix normal words and Abelian complexity, as well as between prefix normal words and lexicographic order.

    Mining refers to the way in which information is converted into knowledge. The process of knowledge discovery covers several processing steps, including knowledge extraction. We analyze the problem of mining assertions for an embedded system from its simulation traces. This problem can be modeled as a pattern discovery problem on colored strings. We present two problems of pattern discovery on colored strings: patterns for one color only, or for all colors at the same time. We present two suffix tree-based algorithms. The first algorithm solves both the one-color problem and the all-colors problem. We then introduce modifications which improve the performance of the algorithm both on synthetic and on real data. We implemented and evaluated the proposed approaches, highlighting the time trade-offs that can be obtained. A different way of knowledge extraction is based on the information-theoretic perspective of Pearl's model of causality. It has been postulated that the true causality direction between two phenomena A and B is related to the problem of finding the minimum entropy joint distribution between A and B. This problem is known to be NP-hard, and greedy algorithms have recently been proposed. We provide a novel analysis of one of the proposed heuristics, showing that this algorithm guarantees an additive approximation of 1 bit. We then provide a general criterion for guaranteeing an additive approximation factor of 1. This criterion may be of independent interest in other contexts where couplings are used.
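    As a small illustration of one of the objects studied above, the following C++ sketch checks the definition of a prefix normal word (with respect to 1): for every length k, no factor of length k may contain more 1s than the prefix of length k. This quadratic definitional check is only meant to make the notion concrete; it is not the thesis's generation algorithm, and the example words are assumptions.

```cpp
// Definitional check for prefix normal binary words: for each window length k,
// compare the number of 1s in the prefix of length k with the maximum number
// of 1s over all factors of length k (computed with a sliding window).
#include <algorithm>
#include <iostream>
#include <string>

static bool is_prefix_normal(const std::string& w) {
    const std::size_t n = w.size();
    for (std::size_t k = 1; k <= n; ++k) {
        int prefix_ones = 0, window_ones = 0, max_ones = 0;
        for (std::size_t i = 0; i < n; ++i) {
            window_ones += (w[i] == '1');
            if (i >= k) window_ones -= (w[i - k] == '1');     // slide the length-k window
            if (i < k)  prefix_ones = window_ones;            // ones in the prefix of length k
            if (i + 1 >= k) max_ones = std::max(max_ones, window_ones);
        }
        if (max_ones > prefix_ones) return false;             // some factor beats the prefix
    }
    return true;
}

int main() {
    std::cout << std::boolalpha
              << is_prefix_normal("1101") << '\n'   // true
              << is_prefix_normal("1011") << '\n';  // false: factor "11" beats prefix "10"
    return 0;
}
```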

    System-level functional and extra-functional characterization of SoCs through assertion mining

    Get PDF
    Virtual prototyping is today an essential technology for the modeling, verification, and re-design of full HW/SW platforms. It allows fast prototyping of platforms of higher and higher complexity, which precludes traditional verification approaches based on the static analysis of the source code. Consequently, several technologies based on the analysis of simulation traces have been proposed to efficiently validate the entire system from both the functional and the extra-functional point of view.

    From the functional point of view, different approaches based on invariant and assertion mining have been proposed in the literature to validate the functionality of a system under verification (SUV). Dynamic mining of invariants is a class of approaches to extract logic formulas with the purpose of expressing stable conditions in the behavior of the SUV. The mined formulas represent likely invariants for the SUV, which certainly hold on the considered traces. A large set of representative execution traces must be analyzed to increase the probability that mined invariants are generally true. However, this is extremely time-consuming for current sequential approaches when long execution traces and large sets of SUV variables are considered. Dynamic mining of assertions is instead a class of approaches to extract temporal logic formulas with the purpose of expressing temporal relations among the variables of a SUV. However, in most cases, existing tools can only mine assertions compliant with a limited set of pre-defined templates. Furthermore, they tend to generate a huge amount of assertions, while they still lack an effective way to measure their coverage in terms of design behaviors. Moreover, the security vulnerability of firmware running on a HW/SW platform is becoming ever more critical in the functional verification of a SUV. Current approaches in the literature focus only on raising an error as soon as an assertion monitoring the SUV fails. No approach has been proposed to investigate the possibility that this set of assertions is incomplete and that different, unusual behaviors remain uninvestigated.

    From the extra-functional point of view of a SUV, several approaches based on power state machines (PSMs) have been proposed for modeling and simulating the power consumption of an IP at system level. However, while they focus on the use of PSMs as the underlying formalism for implementing dynamic power management techniques of a SoC, they generally do not deal with the basic problem of how to generate a PSM.

    In this context, the thesis aims at exploiting dynamic assertion mining to improve the current approaches for the characterization of functional and extra-functional properties of a SoC, with the final goal of providing an efficient and effective system-level virtual prototyping environment. In detail, the presented methodologies focus on: efficient extraction of invariants from execution traces by exploiting GP-GPU architectures; extraction of human-readable temporal assertions by combining user-defined assertion templates, data mining, and coverage analysis; generation of assertions pinpointing the unlikely execution paths of a firmware to guide the analysis of the security vulnerabilities of a SoC; and, last but not least, automatic generation of PSMs for the extra-functional characterization of the SoC.
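    The following C++ toy sketch only illustrates what a power state machine is: states annotated with a power figure and transitions triggered by events observed on an IP's interface, replayed over an event sequence to accumulate an energy estimate. The states, events, and power values are made up for illustration; the thesis addresses how to derive such machines automatically from traces.

```cpp
// Toy power state machine (PSM): power-annotated states plus event-driven
// transitions, replayed over an event sequence (one cycle per event).
#include <iostream>
#include <map>
#include <string>
#include <utility>

int main() {
    // Power per state, in milliwatts (illustrative values).
    std::map<std::string, double> power = {
        {"IDLE", 1.5}, {"ACTIVE", 42.0}, {"SLEEP", 0.2}
    };
    // Transition function: (state, event) -> next state.
    std::map<std::pair<std::string, std::string>, std::string> next = {
        {{"IDLE",   "start"},  "ACTIVE"},
        {{"ACTIVE", "done"},   "IDLE"},
        {{"IDLE",   "sleep"},  "SLEEP"},
        {{"SLEEP",  "wakeup"}, "IDLE"},
    };

    std::string state = "IDLE";
    double energy = 0.0;
    for (std::string ev : {"start", "done", "sleep", "wakeup"}) {
        energy += power[state];            // consume one cycle in the current state
        state = next[{state, ev}];         // follow the event-driven transition
    }
    std::cout << "final state: " << state << ", energy: " << energy << " mW*cycles\n";
    return 0;
}
```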

    High-Performance and Power-Aware Graph Processing on GPUs

    Get PDF
    Graphs are a common representation in many problem domains, including engineering, finance, medicine, and scientific applications. Different problems map to very large graphs, often involving millions of vertices. Even though very efficient sequential implementations of graph algorithms exist, they become impractical when applied to such very large graphs. On the other hand, graphics processing units (GPUs) have become widespread architectures as they provide massive parallelism at low cost. Parallel execution on GPUs may achieve speedups of up to three orders of magnitude with respect to the sequential counterparts. Nevertheless, accelerating efficient and optimized sequential algorithms and porting (i.e., parallelizing) their implementation to such many-core architectures is a very challenging task. The task is made even harder since energy and power consumption are becoming constraints in addition, or in some cases as an alternative, to performance. This work aims at developing a platform that provides (I) a library of parallel, efficient, and tunable implementations of the most important graph algorithms for GPUs, and (II) an advanced profiling model to analyze both the performance and the power consumption of the algorithm implementations. The platform goal is twofold. Through the library, it aims at saving development effort in the parallelization task through a primitive-based approach. Through the profiling framework, it aims at customizing such primitives by considering both the architectural details and the target efficiency metrics (i.e., performance or power).