
    A Survey of Symbolic Execution Techniques

    Many security and software testing applications require checking whether certain properties of a program hold for any possible usage scenario. For instance, a tool for identifying software vulnerabilities may need to rule out the existence of any backdoor to bypass a program's authentication. One approach would be to test the program using different, possibly random inputs. As the backdoor may only be hit for very specific program workloads, automated exploration of the space of possible inputs is of the essence. Symbolic execution provides an elegant solution to the problem, by systematically exploring many possible execution paths at the same time without necessarily requiring concrete inputs. Rather than taking on fully specified input values, the technique abstractly represents them as symbols, resorting to constraint solvers to construct actual instances that would cause property violations. Symbolic execution has been incubated in dozens of tools developed over the last four decades, leading to major practical breakthroughs in a number of prominent software reliability applications. The goal of this survey is to provide an overview of the main ideas, challenges, and solutions developed in the area, distilling them for a broad audience. This survey has been accepted for publication at ACM Computing Surveys; if you are considering citing it, the authors ask that you use the BibTeX entry at http://goo.gl/Hf5Fvc. (This is the authors' pre-print copy.)
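
    To make the core idea concrete, here is a minimal sketch of path exploration with an SMT solver. The toy program, the explicit path enumeration, and the use of the z3-solver Python bindings are illustrative assumptions; the survey itself covers many engines and solvers.

```python
# Minimal illustration of symbolic execution on a toy two-branch
# program using the Z3 SMT solver (pip install z3-solver). The program
# and path enumeration are hypothetical examples, not a tool from the
# survey.
from z3 import Int, Solver, And, Not, sat

x = Int("x")  # symbolic input in place of a concrete value

# Toy program: if (x > 10) { if (x * 2 == 42) reach_backdoor(); }
# Each root-to-leaf path is the conjunction of its branch conditions.
paths = {
    "backdoor":   And(x > 10, x * 2 == 42),
    "inner-else": And(x > 10, Not(x * 2 == 42)),
    "outer-else": Not(x > 10),
}

for name, constraint in paths.items():
    solver = Solver()
    solver.add(constraint)
    if solver.check() == sat:
        # The solver constructs a concrete input that drives this path.
        print(f"path {name}: feasible, e.g. x = {solver.model()[x]}")
    else:
        print(f"path {name}: infeasible")
```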

    Exploiting Graphics Processing Units for Massively Parallel Multi-Dimensional Indexing

    Scientific applications process very large multi-dimensional datasets. To navigate such datasets efficiently, various multi-dimensional indexing structures, such as the R-tree, have been studied extensively over the past couple of decades. Since the GPU has emerged as a new cost-effective performance accelerator, it is now common to leverage the massive parallelism of the GPU in applications such as medical image processing, computational chemistry, and particle physics. However, hierarchical multi-dimensional indexing structures are inherently ill suited to parallel processing because their irregular memory access patterns make it difficult to exploit massive parallelism; moreover, recursive tree traversal often fails due to the small run-time stack and cache memory of the GPU. First, we propose the Massively Parallel Three-phase Scanning (MPTS) R-tree traversal algorithm, which avoids irregular memory access patterns and recursive tree traversal so that the GPU can access tree nodes in a sequential manner. Our experimental study shows that the MPTS R-tree traversal algorithm consistently outperforms the traditional recursive R-tree search algorithm for multi-dimensional range query processing. Next, we focus on reducing query response time by extending n-ary multi-dimensional indexing structures (the R-tree) so that a large number of GPU threads cooperate to process a single query in parallel; this is worthwhile because the number of concurrent queries submitted in scientific data analysis applications is relatively small compared to enterprise database systems or ray tracing in computer graphics. Hence, we propose a novel variant of the R-tree, the Massively Parallel Hilbert R-Tree (MPHR-Tree), designed for a novel parallel tree traversal algorithm, Massively Parallel Restart Scanning (MPRS). The MPRS algorithm traverses the MPHR-Tree with mostly contiguous memory access patterns and without recursion, which offers more opportunities to optimize the parallel SIMD algorithm. Our extensive experimental results show that the MPRS algorithm outperforms other stackless tree traversal algorithms designed for efficient ray tracing in the computer graphics community. Furthermore, we develop a query co-processing scheme that makes use of both the CPU and the GPU. In this approach, we store the internal nodes of the upper tree in CPU host memory and the leaf nodes in GPU device memory. We let the CPU traverse the internal nodes because the conditional branches in hierarchical tree structures often cause a serious warp divergence problem on the GPU; for the leaf nodes, the GPU scans a large number of them in parallel based on the selection ratio of a given range query, as the GPU is well known to be superior to the CPU at parallel scanning. The experimental results show that our proposed multi-dimensional range query co-processing scheme improves query response time by up to 12x and query throughput by up to 4x compared to the state-of-the-art GPU tree traversal algorithm.
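
    The flavor of the three-phase idea can be sketched sequentially: find the leftmost and rightmost leaves that can intersect the query, then scan the contiguous leaf array between them, which is the access pattern the GPU parallelizes. The node layout and field names below are illustrative assumptions, not the authors' implementation.

```python
# Sequential sketch of three-phase scanning (MPTS) over an R-tree whose
# leaves sit in one contiguous array (e.g. in Hilbert order): descend
# for the leftmost and rightmost candidate leaves, then linearly scan
# the leaves between them.

def intersects(a, b):
    """Overlap test for MBRs of the form ((xlo, ylo), (xhi, yhi))."""
    (alx, aly), (ahx, ahy) = a
    (blx, bly), (bhx, bhy) = b
    return alx <= bhx and blx <= ahx and aly <= bhy and bly <= ahy

def descend(node, query, pick):
    """Phases 1 and 2: stackless descent that follows the leftmost or
    rightmost intersecting child at each level (pick is min or max)."""
    while not node["is_leaf"]:
        hits = [c for c in node["children"] if intersects(c["mbr"], query)]
        if not hits:
            return None
        node = pick(hits, key=lambda c: c["leaf_lo"])  # child position
    return node

def mpts_range_query(root, leaves, query):
    left = descend(root, query, min)
    right = descend(root, query, max)
    if left is None or right is None:
        return []
    # Phase 3: contiguous scan of the leaf array between the bounds.
    # On the GPU, each thread tests one leaf entry in parallel.
    lo, hi = left["leaf_lo"], right["leaf_lo"]
    return [e for leaf in leaves[lo:hi + 1]
            for e in leaf["entries"] if intersects(e["mbr"], query)]
```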

    Verification of the Tree-Based Hierarchical Read-Copy Update in the Linux Kernel

    Read-Copy Update (RCU) is a scalable, high-performance Linux-kernel synchronization mechanism that runs low-overhead readers concurrently with updaters. Production-quality RCU implementations for multi-core systems are decidedly non-trivial, and given the ubiquity of Linux, a rare "million-year" bug can occur several times per day across the installed base. Stringent validation of RCU's complex behaviors is thus critically important. Exhaustive testing is infeasible due to the exponential number of possible executions, which suggests the use of formal verification. Previous verification efforts on RCU either focus on simple implementations or use modeling languages, the latter requiring error-prone manual translation that must be repeated frequently due to regular changes in the Linux kernel's RCU implementation. In this paper, we first describe the implementation of Tree RCU in the Linux kernel. We then discuss how to construct a model directly from Tree RCU's source code in C, and use the CBMC model checker to verify its safety and liveness properties. To the best of our knowledge, this is the first verification of a significant part of RCU's source code, and is an important step towards integration of formal verification into the Linux kernel's regression test suite. (This is a long version of a conference paper published in the 2018 Design, Automation and Test in Europe Conference, DATE.)
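
    To make the verified contract concrete, here is a toy, grossly simplified userspace sketch of the guarantee discussed above: readers never block, and a grace period (synchronize_rcu) must not end while any pre-existing read-side critical section is still running. This is an illustrative model under our own assumptions, not the kernel's Tree RCU or the CBMC harness from the paper.

```python
# Toy userspace model of the RCU guarantee: readers never block, and
# synchronize_rcu() may not return while any reader that started
# before it is still inside its critical section.
import threading

_lock = threading.Lock()
_active_readers = set()      # ids of in-flight read-side sections

def rcu_read_lock(tid):
    with _lock:
        _active_readers.add(tid)

def rcu_read_unlock(tid):
    with _lock:
        _active_readers.discard(tid)

def synchronize_rcu():
    # Grace period: wait for every reader active at the start to exit.
    # Readers that begin afterwards are irrelevant to this grace period.
    with _lock:
        waiting_for = set(_active_readers)
    while waiting_for:
        with _lock:
            waiting_for &= _active_readers

# Typical update-side pattern: unlink the old version, wait out a grace
# period, then reclaim it -- readers saw either the old or new version.
```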

    Partial Replica Location And Selection For Spatial Datasets

    As the size of scientific datasets continues to grow, we will not be able to store enormous datasets on a single grid node, but must distribute them across many grid nodes. The implementation of partial or incomplete replicas, which represent only a subset of a larger dataset, has been an active topic of research. Partial Spatial Replicas extend this functionality to spatial data, allowing us to distribute a spatial dataset in pieces over several locations. We investigate solutions to the partial spatial replica selection problem. First, we describe and develop two designs for a Spatial Replica Location Service (SRLS), which must return the set of replicas that intersect with a query region. Integrating a relational database, a spatial data structure and grid computing software, we build a scalable solution that works well even for several million replicas. In our SRLS, we have improved performance by designing an R-tree structure in the backend database, and by aggregating several queries into one larger query, which reduces overhead. We also use the Morton space-filling curve during R-tree construction, which improves spatial locality. In addition, we describe R-tree Prefetching (RTP), which effectively utilizes modern multi-processor architectures. Second, we present and implement a fast replica selection algorithm in which a set of partial replicas is chosen from a set of candidates so that retrieval performance is maximized. Using an R-tree based heuristic algorithm, we achieve O(n log n) complexity for this NP-complete problem. We describe a model for disk access performance that takes filesystem prefetching into account and is sufficiently accurate for spatial replica selection. Making a few simplifying assumptions, we present a fast replica selection algorithm for partial spatial replicas. The algorithm uses a greedy approach that attempts to maximize performance by choosing a collection of replica subsets that allow fast data retrieval by a client machine. Experiments show that the performance of the solution found by our algorithm is on average at least 91% and 93.4% of the performance of the optimal solution in 4-node and 8-node tests, respectively.
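
    To give the flavor of the greedy selection step, here is a small sketch in which the query region is discretized into tiles and replicas are chosen by data-per-unit-time gain; the tile model and cost function are simplifying assumptions for illustration, not the dissertation's R-tree-based algorithm.

```python
# Greedy sketch of partial-replica selection: cover the tiles of a
# query region while favoring replicas on faster nodes.

def select_replicas(query_tiles, replicas):
    """replicas: list of (name, tiles, bandwidth), where `tiles` is the
    set of query tiles the partial replica holds and `bandwidth` is the
    measured transfer rate of its host node."""
    uncovered = set(query_tiles)
    chosen = []
    while uncovered:
        # Pick the replica serving the most uncovered data per unit time.
        best = max(replicas, key=lambda r: len(uncovered & r[1]) * r[2])
        name, tiles, _ = best
        gain = uncovered & tiles
        if not gain:
            raise ValueError("query region not fully replicated")
        chosen.append(name)
        uncovered -= gain
        replicas = [r for r in replicas if r[0] != name]
    return chosen

# Two fast partial replicas beat one slow complete replica here.
print(select_replicas(
    {(0, 0), (0, 1), (1, 0), (1, 1)},
    [("A", {(0, 0), (0, 1)}, 100.0),
     ("B", {(1, 0), (1, 1)}, 50.0),
     ("C", {(0, 0), (0, 1), (1, 0), (1, 1)}, 20.0)]))  # ['A', 'B']
```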

    A compiler level intermediate representation based binary analysis system and its applications

    Analyzing and optimizing programs from their executables has received a lot of attention recently in the research community. There has been a tremendous amount of activity in executable-level research targeting varied applications such as security vulnerability analysis, untrusted code analysis, malware analysis, program testing, and binary optimizations. The vision of this dissertation is to advance the field of static analysis of executables and bridge the gap between source-level analysis and executable analysis. The main thesis of this work is scalable static binary rewriting and analysis using a compiler-level intermediate representation, without relying on the presence of metadata such as debug or symbolic information. In spite of a significant overlap in the overall goals of several source-code methods and executable-level techniques, several sophisticated transformations that are well understood and implemented in source-level infrastructures have yet to become available in executable frameworks. It is well known that a standalone executable without any metadata is less amenable to analysis than the source code. Nonetheless, we believe that one of the prime reasons behind the limitations of existing executable frameworks is that they define their own intermediate representations (IR), which are significantly more constrained than an IR used in a compiler. Intermediate representations used in existing binary frameworks lack high-level features like an abstract stack, variables, and symbols, and are even machine dependent in some cases. This severely limits the application of well-understood compiler transformations to executables and necessitates new research to make them applicable. In the first part of this dissertation, we present techniques to convert binaries to the same high-level intermediate representation that compilers use. We propose methods to segment the flat address space in an executable containing undifferentiated blocks of memory. We demonstrate the inadequacy of existing variable identification methods for their promotion to symbols and present our methods for symbol promotion. We also present methods to convert the physically addressed stack in an executable to an abstract stack. The proposed methods are practical since they do not employ symbolic, relocation, or debug information, which are usually absent in deployed executables. We have integrated our techniques with a prototype x86 binary framework called SecondWrite that uses LLVM as the IR. The robustness of the framework is demonstrated by handling executables totaling more than a million lines of source code, including several real-world programs. In the next part of this work, we demonstrate that several well-known source-level analysis frameworks, such as symbolic analysis, have limited effectiveness in the executable domain since executables typically lack higher-level semantics such as program variables. The IR should have a precise memory abstraction for an analysis to effectively reason about memory operations. Our first work of recovering a compiler-level representation addresses this limitation by recovering higher-level semantic information from executables. In the next part of this work, we propose methods to handle the scenarios in which such semantics cannot be recovered.
First, we propose a hybrid static-dynamic mechanism for recovering a precise and correct memory model in executables in the presence of executable-specific artifacts such as indirect control transfers. Next, the enhanced memory model is employed to define a novel symbolic analysis framework for executables that can perform the same types of program analysis as source-level tools. Prior frameworks fail to simultaneously maintain the properties of a correct representation and a precise memory model, and ignore memory-allocated variables when defining symbolic analysis mechanisms. We demonstrate that our framework is robust and efficient, and that it significantly improves the performance of traditional analyses such as global value numbering, alias analysis and dependence analysis for executables. Finally, the underlying representation and analysis framework is employed for two separate applications. First, the framework is extended to define a novel static analysis framework, DemandFlow, for identifying information flow security violations in program executables. Unlike existing static vulnerability detection methods for executables, DemandFlow analyzes memory locations in addition to symbols, thus improving the precision of the analysis. DemandFlow uses a novel demand-driven mechanism to identify and precisely analyze only those program locations and memory accesses that are relevant to a vulnerability, thus enhancing scalability. DemandFlow uncovers six previously undiscovered format string and directory traversal vulnerabilities in popular FTP and Internet Relay Chat clients. Next, the framework is extended to implement a platform-specific optimization for embedded processors. Several embedded systems provide the facility of locking one or more lines in the cache. We devise the first method in the literature that employs instruction cache locking as a mechanism for improving the average-case run-time of general embedded applications. We demonstrate that the optimal solution for instruction cache locking can be obtained in polynomial time. Since our scheme is implemented inside a binary framework, it successfully addresses the portability concern by enabling the implementation of cache locking at deployment time, when all the details of the memory hierarchy are available.
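
    As a concrete taste of one step, the sketch below promotes raw frame-pointer offsets in disassembled code to named IR variables. The instruction syntax and naming scheme are hypothetical, and real symbol promotion in SecondWrite must also reason about variable sizes and aliasing.

```python
# Illustrative sketch of symbol promotion: each distinct frame-pointer
# offset in lifted x86 code becomes a named abstract-stack variable.
import re

STACK_REF = re.compile(r"\[ebp-(\d+)\]")

def promote_stack_symbols(asm_lines):
    symbols = {}          # frame offset -> abstract variable name
    lifted = []
    for line in asm_lines:
        def to_var(m):
            off = int(m.group(1))
            # A physically addressed stack slot turns into a symbol.
            return symbols.setdefault(off, f"local_{off}")
        lifted.append(STACK_REF.sub(to_var, line))
    return lifted, symbols

code = ["mov [ebp-8], eax", "add ebx, [ebp-8]", "mov [ebp-12], ebx"]
ir, syms = promote_stack_symbols(code)
print(ir)    # ['mov local_8, eax', 'add ebx, local_8', 'mov local_12, ebx']
print(syms)  # {8: 'local_8', 12: 'local_12'}
```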

    CROSS-LAYER CUSTOMIZATION FOR LOW POWER AND HIGH PERFORMANCE EMBEDDED MULTI-CORE PROCESSORS

    Due to physical limitations and design difficulties, computer processor architecture has shifted to multi-core and even many-core based approaches in recent years. Such architectures offer the potential for sustainable performance scaling into future peta-scale/exa-scale computing platforms at an affordable power budget, design complexity, and verification effort. To date, multi-core processor products have been replacing uni-core processors in almost every market segment, including embedded systems, general-purpose desktops and laptops, and supercomputers. However, many issues remain with multi-core processor architectures that need to be addressed before their potential can be fully realized. Researchers in both academia and industry are still seeking proper ways to make efficient and effective use of these processors. The issues involve hardware architecture trade-offs, system software services, run-time management, and user application design, all of which demand more research effort in this field. Given the architectural particularities of multi-core computers, a Cross-Layer Customization framework is proposed in this work, which combines application-specific information and system platform features, along with the necessary operating system support, to achieve exceptional power and performance efficiency for targeted multi-core platforms. Several topics are covered with specific optimization goals, including the snoop cache coherence protocol, inter-core communication for producer-consumer applications, synchronization mechanisms, and off-chip memory bandwidth limitations. Analysis of benchmark program execution with conventional mechanisms is performed to reveal the overheads in terms of power and performance. Specific customizations are then proposed to eliminate such overheads with support from hardware, system software, compiler, and user applications. Experiments show significant improvements in system performance and power efficiency.
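
    As one example of the kind of inter-core communication channel such customization targets, below is a minimal single-producer/single-consumer ring buffer: each index has a single writer, which is exactly the sharing pattern that customized coherence or communication support can exploit. The class is an illustrative sketch under our own assumptions, not the thesis's mechanism.

```python
# Single-producer/single-consumer ring buffer: no shared lock, because
# the producer writes only `tail` and the consumer writes only `head`.
class SPSCQueue:
    def __init__(self, capacity):
        self.buf = [None] * capacity
        self.head = 0   # advanced only by the consumer
        self.tail = 0   # advanced only by the producer

    def push(self, item):
        nxt = (self.tail + 1) % len(self.buf)
        if nxt == self.head:
            return False          # full: consumer has not caught up
        self.buf[self.tail] = item
        self.tail = nxt           # publish after the slot is written
        return True

    def pop(self):
        if self.head == self.tail:
            return None           # empty
        item = self.buf[self.head]
        self.head = (self.head + 1) % len(self.buf)
        return item

q = SPSCQueue(4)
q.push("item-0"); q.push("item-1")
print(q.pop(), q.pop(), q.pop())  # item-0 item-1 None
```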

    Guided Testing of Concurrent Programs Using Value Schedules

    Testing concurrent programs remains a difficult task due to the non-deterministic nature of concurrent execution. Many approaches have been proposed to tackle the complexity of uncovering potential concurrency bugs. Static analysis tackles the problem by analyzing a concurrent program for situations/patterns that might lead to possible errors during execution; in general, static analysis cannot precisely locate all possible concurrency errors. Dynamic testing examines and controls a program during its execution, also looking for situations/patterns that might lead to errors; in general, dynamic testing needs to examine all possible execution paths to detect all errors, which is intractable. Motivated by these observations, a new testing technique is developed that uses a collaboration between static analysis and dynamic testing to find the first potential error while using less time and space. In the new collaboration scheme, static analysis and dynamic testing interact iteratively throughout the testing process. Static analysis provides coarse-grained flow information to guide the dynamic testing through the relevant search space, while dynamic testing collects concrete runtime information during the guided exploration. The concrete runtime information provides feedback to the static analysis to refine its analysis, which is then fed forward to provide more precise guidance of the dynamic testing. The new collaborative technique is able to uncover the first concurrency-related bug in a program faster, and using less storage, than the state-of-the-art dynamic testing tool Java PathFinder. The implementation of the collaborative technique consists of a static-analysis module based on Soot and a dynamic-analysis module based on Java PathFinder.
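
    The collaboration can be pictured as a guided state-space search. The sketch below is a hypothetical, much-simplified stand-in for the JPF-based explorer: a static analysis supplies a set of "relevant" threads, and the dynamic search expands those scheduling choices first.

```python
# Toy guided explorer over thread interleavings. States must be
# hashable; enabled/step/is_error are stand-ins for a model checker's
# scheduler hooks, not the actual Java PathFinder API.

def explore(initial_state, enabled, step, is_error, relevant):
    """enabled(s) -> runnable thread ids; step(s, t) -> successor state;
    relevant: thread ids the static analysis linked to a suspect access."""
    stack, seen = [initial_state], set()
    while stack:
        s = stack.pop()
        if s in seen:
            continue
        seen.add(s)
        if is_error(s):
            return s                    # first potential bug found
        # Static guidance: relevant threads are pushed last, hence
        # popped first; irrelevant interleavings are explored lazily.
        for t in sorted(enabled(s), key=lambda t: t in relevant):
            stack.append(step(s, t))
    return None
```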

    Flow- and context-sensitive points-to analysis using generalized points-to graphs

    Bottom-up interprocedural methods of program analysis construct summary flow functions for procedures to capture the effect of their calls and have been used effectively for many analyses. However, these methods seem computationally expensive for flow- and context-sensitive points-to analysis (FCPA), which requires modelling unknown locations accessed indirectly through pointers. Such accesses are commonly handled by using placeholders to explicate unknown locations or by using multiple call-specific summary flow functions. We generalize the concept of points-to relations by using the counts of indirection levels, leaving the unknown locations implicit. This allows us to create summary flow functions in the form of generalized points-to graphs (GPGs) without the need of placeholders. By design, GPGs represent both memory (in terms of classical points-to facts) and memory transformers (in terms of generalized points-to facts). We perform FCPA by progressively reducing generalized points-to facts to classical points-to facts. GPGs distinguish between may and must pointer updates, thereby facilitating strong updates within calling contexts. The size of GPGs is linearly bounded by the number of variables and is independent of the number of statements. Empirical measurements on SPEC benchmarks show that GPGs are indeed compact in spite of large procedure sizes. This allows us to scale FCPA to 158 kLoC using GPGs (compared to 35 kLoC reported by liveness-based FCPA). Thus GPGs hold a promise of efficiency and scalability for FCPA without compromising precision.
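
    As a concrete illustration, the following sketch implements one plausible reading of the reduction of generalized points-to facts to classical ones by composing edges through a shared pivot variable. The composition rule, and the omission of control flow and must-update handling, are simplifying assumptions, not the paper's full formalism.

```python
# Sketch of reducing generalized points-to facts to classical ones.
# An edge (x, (i, j), y) encodes indirection levels: x = &y is (1, 0),
# x = y is (1, 1), x = *y is (1, 2).

def compose(e1, e2):
    """Compose x -(i,j)-> y with y -(k,l)-> z into x -(i, j-k+l)-> z
    when the pivot y is shared and j >= k (simplified rule)."""
    x, (i, j), y = e1
    y2, (k, l), z = e2
    if y == y2 and j >= k:
        return (x, (i, j - k + l), z)
    return None

def reduce_to_classical(edges):
    """Close the edge set under composition; facts with indirection
    levels (1, 0) are classical points-to edges 'x points to y'."""
    facts = set(edges)
    changed = True
    while changed:
        changed = False
        for e1 in list(facts):
            for e2 in list(facts):
                r = compose(e1, e2)
                if r is not None and r not in facts:
                    facts.add(r)
                    changed = True
    return {(x, y) for (x, (i, j), y) in facts if (i, j) == (1, 0)}

# x = *y; y = &a; a = &b  ==>  classical facts include x -> b
print(reduce_to_classical({("x", (1, 2), "y"),
                           ("y", (1, 0), "a"),
                           ("a", (1, 0), "b")}))
```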

    On the limits of probabilistic timing analysis

    Over the last years, we have witnessed the steady and rapid growth of Critical Real-Time Embedded Systems (CRTES) industries, such as automotive and aerospace. Many of the increasingly complex CRTES functionalities that are currently implemented with mechanical means are moving towards an electromechanical implementation controlled by critical software. This trend has a two-fold consequence. First, the size and complexity of critical software increases in every new embedded product. Second, high-performance hardware features like caches are more frequently used in real-time processors. The increase in complexity of CRTES challenges the validation and verification process, a necessary step to certify that the system is safe for deployment. Timing validation and verification includes the computation of Worst-Case Execution Time (WCET) estimates, which need to be trustworthy and tight. Traditional timing analyses are challenged by the use of complex hardware/software, resulting in low-quality WCET estimates that tend to add significant pessimism to guarantee their trustworthiness. This calls for new solutions that help tighten WCET estimates in a safe manner. In this thesis, we investigate the novel Measurement-Based Probabilistic Timing Analysis (MBPTA), which in its original version already shows potential to deliver trustworthy and tight WCETs for tasks running on complex systems. First, we propose a methodology to assess and ensure that all cache memory layouts, which can significantly impact WCET, have been adequately factored into the WCET estimation process. Second, we provide a solution to achieve cache representativeness and full path coverage simultaneously. This solution provides evidence that the WCET estimates obtained are valid for all program execution paths regardless of how code and data are laid out in the cache. Lastly, we analyse and expose the main misconceptions and pitfalls that can prevent a sound application of WCET analysis based on extreme value theory, which is used as part of MBPTA.
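
    To illustrate the extreme-value-theory step that the last contribution scrutinizes, here is a minimal sketch: fit a Gumbel model to block maxima of measured execution times and read off a pWCET at a target exceedance probability. The data are synthetic, and the representativeness and i.i.d. tests that a sound MBPTA application requires are deliberately omitted.

```python
# EVT sketch for MBPTA: block maxima + Gumbel fit -> pWCET estimate.
# Synthetic measurements only; real MBPTA must first establish that
# the measurements are representative and suitable for EVT.
import numpy as np
from scipy.stats import gumbel_r

rng = np.random.default_rng(0)
times = rng.gamma(shape=9.0, scale=10.0, size=100_000)  # synthetic runs

block = 100
maxima = times[: len(times) // block * block].reshape(-1, block).max(axis=1)

loc, scale = gumbel_r.fit(maxima)
# pWCET with exceedance probability 1e-9 per block of executions.
pwcet = gumbel_r.isf(1e-9, loc=loc, scale=scale)
print(f"Gumbel fit: loc={loc:.1f}, scale={scale:.1f}, pWCET={pwcet:.1f}")
```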

    An Empirical Study of Alias Analysis Techniques

    As software projects become larger and more complex, software optimization at that scale is only feasible through automated means. One such component of software optimization is alias analysis, which attempts to determine which variables in a program refer to the same area of memory, and is used to relocate instructions to improve performance without interfering with program execution. Several alias analyses have been proposed over the past few decades, with varying degrees of precision and time and space complexity, but few studies have compared these techniques with one another or validated their accuracy against actual program data. Normally, this is out of scope for alias analyses because these processes are static and can rely only upon the input source code. We address these limitations by instrumenting several benchmarks and combining their data with commonly used alias analyses to objectively measure the accuracy of those analyses. Additionally, we gather further program statistics to help determine which programs are the most suitable for evaluating subsequent alias analysis techniques.
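
    A minimal sketch of the dynamic side of such a study might look as follows: instrumentation logs the concrete addresses each pointer takes, and the overlap of those address sets gives ground truth against which a static may-alias verdict can be scored. All names, the trace format, and the static results are hypothetical.

```python
# Dynamic alias ground truth: log addresses per pointer at dereference
# sites, then check which statically reported may-alias pairs were
# actually observed to alias at run time.
from collections import defaultdict

observed = defaultdict(set)   # pointer name -> concrete addresses seen

def record(ptr_name, address):
    """Called from instrumentation at every dereference site."""
    observed[ptr_name].add(address)

def did_alias(p, q):
    return bool(observed[p] & observed[q])

# Replayed trace from an instrumented benchmark (addresses made up).
for name, addr in [("p", 0x1000), ("q", 0x2000), ("q", 0x1000), ("r", 0x3000)]:
    record(name, addr)

static_may_alias = {("p", "q"): True, ("p", "r"): True}  # analysis output
for (a, b), may in static_may_alias.items():
    truth = did_alias(a, b)
    print(f"{a},{b}: static may-alias={may}, observed alias={truth}",
          "(imprecise)" if may and not truth else "")
```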