
    Using ACL2 to Verify Loop Pipelining in Behavioral Synthesis

    Behavioral synthesis involves compiling an Electronic System-Level (ESL) design into its Register-Transfer Level (RTL) implementation. Loop pipelining is one of the most critical and complex transformations employed in behavioral synthesis. Certifying the loop pipelining algorithm is challenging because there is a huge semantic gap between the input sequential design and the output pipelined implementation, making it infeasible to verify their equivalence with automated sequential equivalence checking techniques. We discuss our ongoing effort to use ACL2 to certify the loop pipelining transformation. The proof is still a work in progress, but some of the insights developed so far may already be of value to the ACL2 community. In particular, we discuss the key invariant we formalized, which is very different from the one used in most pipeline proofs. We discuss the need for this invariant, its formalization in ACL2, and our envisioned proof using the invariant. We also discuss some trade-offs, challenges, and insights developed in the course of the project.
    Comment: In Proceedings ACL2 2014, arXiv:1406.123
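
    To make the verification obligation concrete, the following minimal Python sketch (not ACL2, and not the paper's formalization) runs a toy three-stage loop body both sequentially and in an overlapped, pipelined schedule and checks that the results agree. The stage functions and the schedule are hypothetical illustrations only.

```python
# Minimal sketch (not ACL2, not the paper's formalization): a toy three-stage
# loop body executed sequentially and in an overlapped, pipelined schedule,
# with a check that both yield the same result.

def stage_read(x):        # stage 1: fetch the operand
    return x + 1

def stage_compute(v):     # stage 2: do the arithmetic
    return v * 2

def stage_write(acc, v):  # stage 3: commit the result
    return acc + v

def run_sequential(inputs):
    acc = 0
    for x in inputs:
        acc = stage_write(acc, stage_compute(stage_read(x)))
    return acc

def run_pipelined(inputs):
    """Overlap iterations: each 'clock cycle' every stage holds a different iteration."""
    acc = 0
    in_flight = []                     # (stage, value) pairs for iterations in the pipe
    pending = list(inputs)
    while pending or in_flight:
        advanced = []
        for stage, v in in_flight:     # advance iterations already in the pipeline
            if stage == 1:
                advanced.append((2, stage_compute(v)))
            else:                      # stage 2 -> commit
                acc = stage_write(acc, v)
        if pending:                    # issue one new iteration per cycle
            advanced.append((1, stage_read(pending.pop(0))))
        in_flight = advanced
    return acc

assert run_sequential(range(8)) == run_pipelined(range(8))
```

    The assertion holds here because the toy body has no problematic loop-carried dependency; certifying that the actual synthesis transformation preserves such equivalence for arbitrary designs is what requires the invariant the authors describe.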

    Ten virtues of structured graphs

    This paper extends the invited talk by the first author about the virtues of structured graphs. The motivation behind the talk and this paper comes from our experience in the development of ADR, a formal approach for the design of style-conformant, reconfigurable software systems. ADR is based on hierarchical graphs with interfaces and was conceived in an attempt to reconcile software architectures and process calculi by means of graphical methods. We have tried to write an ADR-agnostic paper in which we point out some drawbacks of flat, unstructured graphs for the design and analysis of software systems and argue that hierarchical, structured graphs can alleviate those drawbacks.
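
    As an illustration of the kind of structure advocated here (a sketch only, not ADR's actual formalism), the Python fragment below models a hierarchical graph whose nodes may contain nested subgraphs and expose only interface nodes, and shows how flattening it discards the architectural structure.

```python
# Illustrative sketch only (not ADR's actual formalism): a hierarchical graph
# whose nodes may contain nested subgraphs and expose only interface nodes,
# contrasted with the flat edge set obtained by collapsing the hierarchy.
from dataclasses import dataclass, field

@dataclass
class HGraph:
    nodes: set = field(default_factory=set)
    edges: set = field(default_factory=set)      # pairs of node names
    interface: set = field(default_factory=set)  # nodes visible for composition
    children: dict = field(default_factory=dict) # node name -> nested HGraph

    def flatten(self, prefix=""):
        """Collapse the hierarchy into a flat edge set (the 'unstructured' view)."""
        flat = {(prefix + a, prefix + b) for a, b in self.edges}
        for name, sub in self.children.items():
            flat |= sub.flatten(prefix + name + ".")
        return flat

# A server component whose internal wiring is hidden behind its interface.
server = HGraph(nodes={"in", "worker", "db"},
                edges={("in", "worker"), ("worker", "db")},
                interface={"in"})
system = HGraph(nodes={"client", "S"},
                edges={("client", "S")},   # the client attaches to the component S
                children={"S": server})

print(system.flatten())
# The flat view mixes internal wiring with architecture-level connections,
# which is one of the drawbacks of unstructured graphs discussed in the paper.
```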

    On the Serialisation of Parallel Programs


    Transformations of High-Level Synthesis Codes for High-Performance Computing

    Specialized hardware architectures promise a major step in performance and energy efficiency over the traditional load/store devices currently employed in large-scale computing systems. The adoption of high-level synthesis (HLS) from languages such as C/C++ and OpenCL has greatly increased programmer productivity when designing for such platforms. While this has enabled a wider audience to target specialized hardware, the optimization principles known from traditional software design are no longer sufficient to implement high-performance codes. Fast and efficient codes for reconfigurable platforms are thus still challenging to design. To alleviate this, we present a set of optimizing transformations for HLS, targeting scalable and efficient architectures for high-performance computing (HPC) applications. Our work provides a toolbox for developers, where we systematically identify classes of transformations, the characteristics of their effect on the HLS code and the resulting hardware (e.g., increasing data reuse or resource consumption), and the objectives that each transformation can target (e.g., resolving interface contention or increasing parallelism). We show how these can be used to efficiently exploit pipelining, on-chip distributed fast memory, and on-chip streaming dataflow, allowing for massively parallel architectures. To quantify the effect of our transformations, we use them to optimize a set of throughput-oriented FPGA kernels, demonstrating that our enhancements are sufficient to scale up parallelism within the hardware constraints. With the transformations covered, we hope to establish a common framework for performance engineers, compiler developers, and hardware developers to tap into the performance potential offered by specialized hardware architectures using HLS.
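
    As a hedged illustration of why such transformations matter (a back-of-the-envelope model assumed for illustration, not taken from the paper), a pipelined loop takes roughly depth + (trip_count - 1) × II cycles, so transformations that restore an initiation interval (II) of 1, for example by interleaving independent accumulations, dominate the achievable throughput.

```python
# Back-of-the-envelope model (assumed for illustration, not taken from the
# paper): a pipelined loop takes roughly depth + (trip_count - 1) * II cycles,
# so transformations that lower the initiation interval (II) dominate.

def pipelined_cycles(trip_count, depth, ii):
    """First-order cycle estimate for one pipelined loop."""
    return depth + (trip_count - 1) * ii

n = 1_000_000
baseline    = pipelined_cycles(n, depth=50, ii=8)  # loop-carried dependency forces II = 8
transformed = pipelined_cycles(n, depth=60, ii=1)  # a transformation restores II = 1

print(f"baseline:    {baseline:>12,} cycles")
print(f"transformed: {transformed:>12,} cycles (~{baseline / transformed:.1f}x fewer)")
```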

    AT-GIS: highly parallel spatial query processing with associative transducers

    Users in many domains, including urban planning, transportation, and environmental science, want to execute analytical queries over continuously updated spatial datasets. Current solutions for large-scale spatial query processing either rely on extensions to RDBMS, which entail expensive loading and indexing phases when the data changes, or on distributed map/reduce frameworks running on resource-hungry compute clusters. Both solutions struggle with the sequential bottleneck of parsing complex, hierarchical spatial data formats, which frequently dominates query execution time. Our goal is to fully exploit the parallelism offered by modern multicore CPUs for parsing and query execution, thus providing the performance of a cluster with the resources of a single machine. We describe AT-GIS, a highly parallel spatial query processing system that scales linearly to a large number of CPU cores. AT-GIS integrates the parsing and querying of spatial data using a new computational abstraction called associative transducers (ATs). ATs can form a single data-parallel pipeline for computation without requiring the spatial input data to be split into logically independent blocks. Using ATs, AT-GIS can execute, in parallel, spatial query operators on the raw input data in multiple formats, without any pre-processing. On a single 64-core machine, AT-GIS provides 3× the performance of an 8-node Hadoop cluster with 192 cores for containment queries, and 10× for aggregation queries.
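
    The flavor of parsing in parallel without pre-splitting the input into logically independent blocks can be illustrated with a much-simplified Python sketch (not the AT-GIS implementation): each chunk is summarized under every possible parser start state, and the per-chunk summaries compose associatively. The toy task here is counting records while ignoring separators inside quoted fields.

```python
# Simplified illustration of the associative-transducer idea, not the AT-GIS
# implementation: summarize each chunk under every possible start state, then
# compose the summaries associatively to get the sequential result in parallel.
from concurrent.futures import ProcessPoolExecutor
from functools import reduce

STATES = ("outside", "inside")   # outside / inside a quoted field

def summarize(chunk):
    """For each possible start state, return (end state, records seen)."""
    result = {}
    for start in STATES:
        state, records = start, 0
        for ch in chunk:
            if ch == '"':
                state = "inside" if state == "outside" else "outside"
            elif ch == "\n" and state == "outside":
                records += 1
        result[start] = (state, records)
    return result

def compose(left, right):
    """Associative composition of two adjacent chunk summaries."""
    combined = {}
    for start in STATES:
        mid, n1 = left[start]
        end, n2 = right[mid]
        combined[start] = (end, n1 + n2)
    return combined

if __name__ == "__main__":
    text = 'a,b\n"x\n,y",c\nd,e\n'
    chunks = [text[i:i + 6] for i in range(0, len(text), 6)]
    with ProcessPoolExecutor() as pool:
        summaries = list(pool.map(summarize, chunks))
    _, records = reduce(compose, summaries)["outside"]
    print(records)   # 3 -- the newline inside quotes is not a record separator
```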

    Segue: Overviewing Evolution Patterns of Egocentric Networks by Interactive Construction of Spatial Layouts

    Getting the overall picture of how a large number of ego-networks evolve is a common yet challenging task. Existing techniques often require analysts to inspect the evolution patterns of ego-networks one after another. In this study, we explore an approach that allows analysts to interactively create spatial layouts in which each dot is a dynamic ego-network. These spatial layouts provide overviews of the evolution patterns of ego-networks, thereby revealing different global patterns such as trends, clusters, and outliers in evolution patterns. To let analysts interactively construct interpretable spatial layouts, we propose a data transformation pipeline with which analysts can adjust the spatial layouts and convert dynamic ego-networks into event sequences to aid interpretation of the spatial positions. Based on this transformation pipeline, we developed Segue, a visual analysis system that supports thorough exploration of the evolution patterns of ego-networks. Through two usage scenarios, we demonstrate how analysts can gain insights into the overall evolution patterns of a large collection of ego-networks by interactively creating different spatial layouts.
    Comment: Published at the IEEE Conference on Visual Analytics Science and Technology (IEEE VAST 2018).
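
    A hypothetical Python sketch of such a pipeline follows (not Segue's actual implementation; the features and the fixed PCA projection are assumptions, since Segue lets analysts build the layouts interactively): each dynamic ego-network is summarized as a per-timestep feature sequence and projected into 2D so that every dot represents one evolving ego-network.

```python
# Hypothetical sketch only, not Segue's pipeline: per-timestep features for
# each dynamic ego-network, projected into a 2D layout of one dot per network.
import numpy as np

rng = np.random.default_rng(0)

def ego_features(snapshots):
    """Per timestep: (#alters, #alters newly added since the previous step)."""
    feats, prev = [], set()
    for alters in snapshots:
        feats.append((len(alters), len(alters - prev)))
        prev = alters
    return np.asarray(feats, dtype=float).ravel()

# Toy data: 100 egos observed over 5 timesteps, alters drawn at random.
egos = [[set(rng.integers(0, 50, size=rng.integers(2, 10)).tolist()) for _ in range(5)]
        for _ in range(100)]
X = np.stack([ego_features(snapshots) for snapshots in egos])

# Cheap fixed 2D projection via PCA, used here only to keep the sketch self-contained.
Xc = X - X.mean(axis=0)
_, _, vt = np.linalg.svd(Xc, full_matrices=False)
layout = Xc @ vt[:2].T    # one (x, y) position per ego-network
print(layout.shape)       # (100, 2)
```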

    Symbolic QED Pre-silicon Verification for Automotive Microcontroller Cores: Industrial Case Study

    We present an industrial case study that demonstrates the practicality and effectiveness of Symbolic Quick Error Detection (Symbolic QED) in detecting logic design flaws (logic bugs) during pre-silicon verification. Our study focuses on several microcontroller core designs (~1,800 flip-flops, ~70,000 logic gates) that have been extensively verified using an industrial verification flow and used for various commercial automotive products. The results of our study are as follows: 1. Symbolic QED detected all logic bugs in the designs that were detected by the industrial verification flow (which includes various flavors of simulation-based verification and formal verification). 2. Symbolic QED detected additional logic bugs that were not recorded as detected by the industrial verification flow. (These additional bugs may also have been detected by the industrial verification flow but not recorded.) 3. Symbolic QED enables significant design productivity improvements: (a) 8X improved (i.e., reduced) verification effort for a new design (8 person-weeks for Symbolic QED vs. 17 person-months using the industrial verification flow); (b) 60X improved verification effort for subsequent designs (2 person-days for Symbolic QED vs. 4-7 person-months using the industrial verification flow); (c) quick bug detection (runtime of 20 seconds or less), together with short counterexamples (10 or fewer instructions) for quick debug, using Symbolic QED.
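
    For readers unfamiliar with QED, the self-checking idea can be sketched in a few lines of Python (a toy model with a hypothetical instruction set and an artificially injected flaw; Symbolic QED checks the corresponding property symbolically against the RTL using bounded model checking): duplicate a short instruction sequence into shadow registers and flag a logic bug whenever the original and shadow results disagree.

```python
# Toy model of the QED self-checking idea; instruction set and injected flaw
# are hypothetical. The real technique checks this property symbolically.

def execute(program, buggy=False):
    regs = {}
    for op, dst, a, b in program:
        x, y = regs.get(a, a), regs.get(b, b)   # operands: register names or literals
        if op == "add":
            regs[dst] = x + y
        elif op == "mul":
            regs[dst] = x * y
            if buggy and dst == "r2":           # injected flaw tied to one physical register
                regs[dst] = x * y + 1
    return regs

def qed_check(program, buggy=False):
    """Duplicate the sequence into 'shadow' (primed) registers and compare copies."""
    prime = lambda r: r + "'" if isinstance(r, str) else r
    shadow = [(op, prime(dst), prime(a), prime(b)) for op, dst, a, b in program]
    regs = execute(program + shadow, buggy)
    return all(regs[dst] == regs[dst + "'"] for _, dst, _, _ in program)

prog = [("add", "r1", 2, 4), ("mul", "r2", "r1", 7)]
print(qed_check(prog, buggy=False))  # True  -- both copies agree
print(qed_check(prog, buggy=True))   # False -- the mismatch exposes the injected bug
```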

    New Generation of Educators Initiative: Reform Focus at Comprehensive Grant Sites

    This first analysis of the early NGEI work at comprehensive grant campuses shows that collectively campuses are working across points on the pipeline to address the need for teachers who are better prepared to effectively teach to the new standards. While the bulk of the NGEI reform efforts are targeted at teacher preparation program reform, we see NGEI campuses reaching as far back as high school to cultivate early interest in, and preparedness for, teaching in response to local conditions such as limited candidate pools.
    Within teacher preparation, the early NGEI work of campuses is primarily clustered around the reform of the teacher preparation program coursework and clinical work (reflecting the first and third Key Transformation Elements). Partnerships with districts are at various stages of development and, in several cases, are focused primarily at the school level. A few campuses are reforming the formative feedback process for candidates through their NGEI work (Element 4). Work with district partners on the identification of the key skills, knowledge, and dispositions of well-prepared new teachers (Element 2) and work on continuous improvement based on data on candidates and program completers (Element 5) are less prominent in the NGEI work to date.
    As campuses clear the hurdle of launching their reforms in the summer and fall and look toward the next phase of NGEI funding, the evaluation (WestEd/SRI) and the facilitation (ConnectEd) teams are poised to provide support to grantees on the Key Transformation Elements that are not yet fully developed across all comprehensive sites, that is:
    * Partnerships with K–12 district partners to align programming as much as possible.
    * Shared understandings with K–12 district partners about the key knowledge, skills, and dispositions of a well-prepared new teacher that are used to inform teacher preparation program elements.
    * Feedback to candidates on their mastery of prioritized skills during preparation.
    * Data on candidate progress toward mastery of identified knowledge and practices during their training and after program completion.
    Specifically, ConnectEd is available to assist with implementation coaching and support for comprehensive campus teams and can support the work with K–12 partners.
    In addition to providing ongoing formative evaluation work across the comprehensive grant sites, the WestEd/SRI team can provide technical support for grantees to assist with the development of high-quality data on candidate progress toward mastery of identified knowledge and practices during their training and after program completion. The data inventories that the evaluation team developed for each campus show that there are opportunities to: a) enhance the quality of existing data, b) improve access to those data, and c) develop new data sources targeted toward the measurement of prioritized skills and knowledge for formative feedback to candidates. In the coming months, the evaluation team will also be seeking opportunities to bridge the system-level work described above in Box 1 with campus efforts to strengthen systems for continuous improvement.