37,371 research outputs found

    Towards Porting Operating Systems with Program Synthesis

    The end of Moore's Law has ushered in a diversity of hardware not seen in decades. Operating system (and system software) portability is accordingly becoming increasingly critical. Simultaneously, there has been tremendous progress in program synthesis. We set out to explore the feasibility of using modern program synthesis to generate the machine-dependent parts of an operating system. Our ultimate goal is to generate new ports automatically from descriptions of new machines. One of the issues involved is writing specifications, both for machine-dependent operating system functionality and for instruction set architectures. We designed two domain-specific languages: Alewife for machine-independent specifications of machine-dependent operating system functionality and Cassiopea for describing instruction set architecture semantics. Automated porting also requires an implementation. We developed a toolchain that, given an Alewife specification and a Cassiopea machine description, specializes the machine-independent specification to the target instruction set architecture and synthesizes an implementation in assembly language with a customized symbolic execution engine. Using this approach, we demonstrate successful synthesis of a total of 140 OS components from two pre-existing OSes for four real hardware platforms. We also developed several optimization methods for OS-related assembly synthesis to improve scalability. The effectiveness of our languages and ability to synthesize code for all 140 specifications is evidence of the feasibility of program synthesis for machine-dependent OS code. However, many research challenges remain; we also discuss the benefits and limitations of our synthesis-based approach to automated OS porting.

    Comment: ACM Transactions on Programming Languages and Systems. Accepted on August 202
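As a rough, hedged illustration of the underlying idea (this is not the paper's Alewife/Cassiopea languages or toolchain; the machine, instruction pool, and specification below are invented), the following toy Haskell sketch enumerates bounded instruction sequences over a two-register machine and keeps the first one whose result satisfies a postcondition on a few concrete test states. The real system instead uses symbolic execution and solver-backed verification rather than testing.

```haskell
-- Toy synthesis-against-a-specification sketch (illustrative only).
import Data.Bits (xor)
import Data.Maybe (listToMaybe)

data Reg = R0 | R1 deriving (Eq, Show, Enum, Bounded)

data Instr = Mov Reg Reg | AddI Reg Int | Xor Reg Reg
  deriving Show

-- A machine state maps registers to values.
type State = Reg -> Int

-- Concrete semantics of each instruction (a stand-in for a machine description).
step :: Instr -> State -> State
step (Mov d s)  st r = if r == d then st s            else st r
step (AddI d k) st r = if r == d then st d + k        else st r
step (Xor d s)  st r = if r == d then st d `xor` st s else st r

run :: [Instr] -> State -> State
run prog st = foldl (flip step) st prog

-- All candidate programs of exactly length n, drawn from a small instruction pool.
candidates :: Int -> [[Instr]]
candidates n = sequence (replicate n pool)
  where
    regs = [minBound .. maxBound]
    pool = [Mov d s  | d <- regs, s <- regs]
        ++ [AddI d k | d <- regs, k <- [0, 1]]
        ++ [Xor d s  | d <- regs, s <- regs]

-- Return the first candidate that satisfies the pre/post specification on all tests.
synthesize :: Int -> [State] -> (State -> State -> Bool) -> Maybe [Instr]
synthesize n tests spec =
  listToMaybe [p | p <- candidates n, all (\st -> spec st (run p st)) tests]

main :: IO ()
main = do
  let tests = [\r -> if r == R0 then a else b | (a, b) <- [(3, 7), (0, 9), (5, 5)]]
      -- Postcondition: after running, R0 holds the value R1 had before.
      spec pre post = post R0 == pre R1
  print (synthesize 1 tests spec)   -- expected: Just [Mov R0 R1]
```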

    A mathematical approach towards hardware design

    Today the hardware for embedded systems is often specified in VHDL. However, VHDL describes the system at a rather low level, which is cumbersome and may lead to design faults in large real-life applications. There is a need for higher-level abstraction mechanisms. In the embedded systems group of the University of Twente we are working on systematic and transformational methods to design hardware architectures, both multi-core and single-core. The main line in this approach is to start with a straightforward (often mathematical) specification of the problem. The next step is to find adequate transformations on this specification, in particular specific optimizations, to be able to distribute the application over different cores. The result of these transformations is then translated into the functional programming language Haskell, since Haskell is close to mathematics and such a translation often is straightforward. Moreover, the Haskell code is executable, so one immediately has a simulation of the intended system. Next, the resulting Haskell specification is given to a compiler, called CλaSH (for CAES Language for Synchronous Hardware), which translates the specification into VHDL. The resulting VHDL is synthesizable, so from there on standard VHDL tooling can be used for synthesis. In this work we primarily focus on streaming applications, i.e., applications that can be modeled as data-flow graphs. At the moment the CλaSH system is ready in prototype form, and in the presentation we will give several examples of how it can be used. These examples will show that the specification code is clear and concise. Furthermore, it is possible to use powerful abstraction mechanisms, such as polymorphism, higher-order functions, pattern matching, lambda abstraction, and partial application. These features allow a designer to describe circuits in a more natural and concise way than is possible with the language elements found in traditional hardware description languages. In addition we will give some examples of transformations that are possible in a mathematical specification, and which do not suffer from the problems encountered in, e.g., automatic parallelization of nested for-loops in C programs.
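As a hedged illustration of this specification style (plain Haskell over ordinary lists, written for this summary; the CλaSH dialect itself uses its own fixed-size vector and signal types), a FIR filter can be expressed as a small composition of higher-order functions, and the same function doubles as an executable simulation:

```haskell
-- Sliding-window FIR filter: each output is the dot product of the coefficient
-- list with the current input window. Plain Haskell lists, for illustration only.
fir :: Num a => [a] -> [a] -> [a]
fir hs xs = [dot hs w | w <- windows (length hs) xs]
  where
    dot as bs = sum (zipWith (*) as bs)
    windows n ys
      | length ys < n = []
      | otherwise     = take n ys : windows n (tail ys)

-- Running the specification is already a simulation of the intended circuit:
-- >>> fir [1, 2, 1] [1, 0, 0, 1, 0, 0]
-- [1,1,2,1]
```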

    Applying Formal Methods to Networking: Theory, Techniques and Applications

    Despite its great importance, modern network infrastructure is remarkable for the lack of rigor in its engineering. The Internet, which began as a research experiment, was never designed to handle the users and applications it hosts today. The lack of formalization of the Internet architecture meant limited abstractions and modularity, especially for the control and management planes, thus requiring a new protocol built from scratch for every new need. This led to an unwieldy, ossified Internet architecture resistant to any attempts at formal verification, and an Internet culture where expediency and pragmatism are favored over formal correctness. Fortunately, recent work in the space of clean-slate Internet design---especially the software-defined networking (SDN) paradigm---offers the Internet community another chance to develop the right kind of architecture and abstractions. This has also led to a great resurgence of interest in applying formal methods to the specification, verification, and synthesis of networking protocols and applications. In this paper, we present a self-contained tutorial of the formidable amount of work that has been done in formal methods, and present a survey of its applications to networking.

    Comment: 30 pages, submitted to IEEE Communications Surveys and Tutorial
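As a small, hedged taste of the kind of property checking such surveys cover (purely illustrative; the topology and node names below are made up, and real tools encode such questions for model checkers or SMT solvers rather than walking states by hand), the following Haskell snippet checks reachability and loop-freedom over a static forwarding function:

```haskell
import qualified Data.Set as Set

type Node = String

-- A toy, static forwarding function: at each node, where is the packet sent next?
forwards :: Node -> Maybe Node
forwards "h1" = Just "s1"
forwards "s1" = Just "s2"
forwards "s2" = Just "h2"
forwards _    = Nothing

-- Follow the forwarding function from src toward dst, reporting either the
-- path taken or the reason delivery fails (drop or forwarding loop).
deliver :: Node -> Node -> Either String [Node]
deliver src dst = go src (Set.singleton src) [src]
  where
    go n _ path | n == dst = Right (reverse path)
    go n seen path =
      case forwards n of
        Nothing -> Left ("packet dropped at " ++ n)
        Just n'
          | n' `Set.member` seen -> Left ("forwarding loop at " ++ n')
          | otherwise            -> go n' (Set.insert n' seen) (n' : path)

-- >>> deliver "h1" "h2"
-- Right ["h1","s1","s2","h2"]
```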

    Transformations of High-Level Synthesis Codes for High-Performance Computing

    Specialized hardware architectures promise a major step in performance and energy efficiency over the traditional load/store devices currently employed in large scale computing systems. The adoption of high-level synthesis (HLS) from languages such as C/C++ and OpenCL has greatly increased programmer productivity when designing for such platforms. While this has enabled a wider audience to target specialized hardware, the optimization principles known from traditional software design are no longer sufficient to implement high-performance codes. Fast and efficient codes for reconfigurable platforms are thus still challenging to design. To alleviate this, we present a set of optimizing transformations for HLS, targeting scalable and efficient architectures for high-performance computing (HPC) applications. Our work provides a toolbox for developers, where we systematically identify classes of transformations, the characteristics of their effect on the HLS code and the resulting hardware (e.g., increased data reuse or resource consumption), and the objectives that each transformation can target (e.g., resolve interface contention or increase parallelism). We show how these can be used to efficiently exploit pipelining, on-chip distributed fast memory, and on-chip streaming dataflow, allowing for massively parallel architectures. To quantify the effect of our transformations, we use them to optimize a set of throughput-oriented FPGA kernels, demonstrating that our enhancements are sufficient to scale up parallelism within the hardware constraints. With the transformations covered, we hope to establish a common framework for performance engineers, compiler developers, and hardware developers to tap into the performance potential offered by specialized hardware architectures using HLS.
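One concrete flavour of such a transformation, sketched here as a hedged functional analogue in Haskell (the paper's transformations apply to C/C++/OpenCL HLS code; the function names below are invented): a reduction whose single accumulation chain limits pipelining can be rewritten into several independent partial accumulations that are combined at the end, trading a loop-carried dependency for parallelism.

```haskell
import Data.List (foldl')

-- Naive reduction: one long dependency chain over the whole input, so each
-- step must wait for the previous accumulator value.
reduceNaive :: Num a => [a] -> a
reduceNaive = foldl' (+) 0

-- Transformed reduction: deal elements round-robin into k independent
-- accumulators (the analogue of k pipelined accumulation units), then
-- combine the k partial results in a short final reduction.
reduceInterleaved :: Num a => Int -> [a] -> a
reduceInterleaved k xs = sum (map (foldl' (+) 0) lanes)
  where
    lanes = [ [x | (i, x) <- zip [(0 :: Int) ..] xs, i `mod` k == lane]
            | lane <- [0 .. k - 1] ]

-- Both versions agree whenever (+) is associative and commutative:
-- >>> (reduceNaive [1 .. 10 :: Int], reduceInterleaved 4 [1 .. 10 :: Int])
-- (55,55)
```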

    From FPGA to ASIC: A RISC-V processor experience

    This work documents a correct design flow using these tools on the Lagarto RISC-V processor, and the RTL design considerations that must be taken into account when moving from an FPGA design to an ASIC design.