
    A least-disruptive mechanism to compile integral and pointer types of unknown compile-time size to EFI Byte Code

    Unified Extensible Firmware Interface (UEFI) is a specification that describes an interface between the operating system (OS) and the platform firmware. UEFI is a replacement for BIOS. The UEFI Specification describes how to write drivers for a device, and one of its design goals is to keep driver images as small as possible. The primary disadvantage, however, is that such device drivers are platform- and processor-dependent. In addition to the standard architecture-specific (native) device drivers, the specification also provides for a processor-independent device-driver environment, known as the EFI Byte Code (EBC) Virtual Machine, which enables execution of device drivers written in EFI Byte Code independently of the platform and architecture. It follows that the target machine architecture is unknown during the compilation of an EBC image, since the compiled image can potentially run on any architecture and platform that supports the EBC Virtual Machine. To support writing EBC code that can be executed unchanged on both 32- and 64-bit architectures, the UEFI specification describes a natural indexing mechanism and introduces a new integral type, INTN, whose size depends on the runtime target architecture. This implies that the size of the pointer type (void*) and of the newly introduced natural types (INTN/UINTN) is unknown at compile time, being dependent on the runtime architecture of the platform on which the EBC driver is executed. This creates several challenges in the implementation of an EBC compiler. One of the basic assumptions in any compiler implementation is that the size and alignment of all types are known at compile time, enabling the compiler to generate correct object layouts, structure field offsets, array indexing, and load/store alignments, and to correctly compute function-call stack-frame allocations. Accordingly, compilers for static languages such as C and C++ expect the type system to be fully known at compile time, with no concept of types that depend on the runtime platform. Unfortunately, the UEFI specification defines the source language for EBC device drivers to be the C language with certain restrictions, implying that a compiler for the static C language must be able to handle such runtime-platform-dependent types. It follows that a well-defined technique for handling pointer (void*) and natural (INTN/UINTN) types whose size depends on the runtime target is critical to implementing a C compiler that can compile to platform- and processor-independent EBC. Unfortunately, there are no documented and well-defined techniques to implement a compiler with the ability to support such types with unknown compile-time size and alignment.
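
    The layout problem described above can be made concrete with an ordinary C-style structure. The sketch below is illustrative only and not from the paper: INTN/UINTN are modelled with hypothetical host typedefs so it compiles with a conventional compiler, which freezes at compile time exactly the quantities an EBC compiler must leave open.

```cpp
// Illustrative sketch only: INTN/UINTN are the UEFI natural-width types,
// modelled here with hypothetical host typedefs so the example builds with
// an ordinary C/C++ compiler.
#include <cstddef>
#include <cstdint>
#include <cstdio>

typedef std::intptr_t  INTN;   // assumption: natural width == host pointer width
typedef std::uintptr_t UINTN;

struct EfiRecord {
    std::uint32_t Tag;     // fixed-width field: always 4 bytes
    UINTN         Length;  // natural-width field: 4 bytes on IA-32, 8 on x64
    void         *Buffer;  // pointer: also natural width
};

int main() {
    // A conventional compiler freezes these values at compile time; an EBC
    // compiler cannot, because sizeof(UINTN), sizeof(void*), the offset of
    // Buffer and the overall record size all depend on the machine that
    // eventually runs the image, hence the spec's natural-indexing encoding.
    std::printf("sizeof(EfiRecord)          = %zu\n", sizeof(EfiRecord));
    std::printf("offsetof(EfiRecord,Buffer) = %zu\n", offsetof(EfiRecord, Buffer));
    return 0;
}
```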

    C++ Templates as Partial Evaluation

    This paper explores the relationship between C++ templates and partial evaluation. Templates were designed to support generic programming, but unintentionally provided the ability to perform compile-time computations and code generation. These features are completely accidental, and as a result their syntax is awkward. By recasting these features in terms of partial evaluation, a much simpler syntax can be achieved. C++ may be regarded as a two-level language in which types are first-class values. Template instantiation resembles an offline partial evaluator. This paper describes preliminary work toward a single mechanism based on partial evaluation that unifies generic programming, compile-time computation, and code generation. The language Catat is introduced to illustrate these ideas. Comment: 13 pages.
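
    As a concrete illustration of the "accidental" compile-time computation described above, here is a familiar template-metaprogramming sketch (not taken from the paper): instantiating the template specialises an exponentiation function with respect to a statically known exponent, which is exactly the job of an offline partial evaluator.

```cpp
#include <iostream>

// Ordinary run-time version: both arguments are dynamic.
double power(double base, int n) {
    double result = 1.0;
    for (int i = 0; i < n; ++i) result *= base;
    return result;
}

// Template version: the exponent is a static value, so instantiation unrolls
// the recursion completely -- the compiler acts as a partial evaluator that
// residualises power specialised to N.
template <int N>
double power_t(double base) { return base * power_t<N - 1>(base); }

template <>
double power_t<0>(double) { return 1.0; }

int main() {
    std::cout << power(2.0, 10) << "\n";    // computed by a run-time loop
    std::cout << power_t<10>(2.0) << "\n";  // residual code: ten multiplications
    return 0;
}
```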

    An engineering approach to atomic transaction verification: use of a simple object model to achieve semantics-based reasoning at compile-time

    In this paper, we take an engineering approach to atomic transaction verification. We discuss the design and implementation of a verification tool that can reason about the semantics of atomic database operations. To bridge the gap between language design and automated reasoning, we make use of a simple model of objects that imitates the type-tagged memory structure of an implementation. This simple model is sufficient to describe the operational semantics of the typical features of an object-oriented database programming language, such as bounded iteration, heterogeneity, object creation, and nil values. The same model lends itself to automated reasoning with a theorem prover system. We are thus able to apply theorem prover technology to verification problems that address transaction semantics. The work has applications in the areas of transaction safety, semantics-based concurrency control, and cooperative work. The approach is illustrated by a graph editing example, with heterogeneous node structures.
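
    For illustration only, a rough sketch of what such a type-tagged object model might look like (the names below are hypothetical, not taken from the paper): every stored value carries an explicit tag, so heterogeneous node structures, nil values, and freshly created objects can all be described in one uniform representation.

```cpp
#include <memory>
#include <string>
#include <vector>

// Hypothetical type-tagged object model: each cell records what it holds.
enum class Tag { Nil, Int, Text, Node };

struct Object {
    Tag tag = Tag::Nil;                            // type tag stored with the value
    long number = 0;                               // meaningful when tag == Int
    std::string text;                              // meaningful when tag == Text
    std::vector<std::shared_ptr<Object>> fields;   // meaningful when tag == Node
};

// Object creation yields a tagged cell, mirroring allocation in the
// type-tagged memory of an implementation.
std::shared_ptr<Object> make_int(long v) {
    auto o = std::make_shared<Object>();
    o->tag = Tag::Int;
    o->number = v;
    return o;
}

// Heterogeneity: a graph node may reference children of any tag, and a
// missing child is simply a Nil-tagged object.
std::shared_ptr<Object> make_node(std::vector<std::shared_ptr<Object>> children) {
    auto o = std::make_shared<Object>();
    o->tag = Tag::Node;
    o->fields = std::move(children);
    return o;
}

int main() {
    auto graph_node = make_node({ make_int(42), std::make_shared<Object>() /* nil child */ });
    return graph_node->fields.size() == 2 ? 0 : 1;
}
```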

    Compile-Time Optimisation of Store Usage in Lazy Functional Programs

    Functional languages offer a number of advantages over their imperative counterparts. However, a substantial amount of the time spent on processing functional programs is due to the large amount of storage management which must be performed. Two apparent reasons for this are that the programmer is prevented from including explicit storage management operations in programs which have a purely functional semantics, and that more readable programs are often far from optimal in their use of storage. Correspondingly, two alternative approaches to the optimisation of store usage at compile-time are presented in this thesis. The first approach is called compile-time garbage collection. This approach involves determining at compile-time which cells are no longer required for the evaluation of a program, and making these cells available for further use. This overcomes the problem of a programmer not being able to indicate explicitly that a store cell can be made available for further use. Three different methods for performing compile-time garbage collection are presented in this thesis: compile-time garbage marking, explicit deallocation, and destructive allocation. Of these three methods, it is found that destructive allocation is the only method which is of practical use. The second approach to the optimisation of store usage is called compile-time garbage avoidance. This approach involves transforming programs at compile-time into semantically equivalent programs which produce less garbage. This attempts to overcome the problem of more readable programs being far from optimal in their use of storage. In this thesis, it is shown how to guarantee that the process of compile-time garbage avoidance will terminate. Both of the described approaches to the optimisation of store usage make use of the information obtained by usage counting analysis. This involves counting the number of times each value in a program is used. In this thesis, a reference semantics is defined against which the correctness of usage counting analyses can be proved. A usage counting analysis is then defined and proved to be correct with respect to this reference semantics. The information obtained by this analysis is used to annotate programs for compile-time garbage collection, and to guide the transformation when compile-time garbage avoidance is performed. It is found that compile-time garbage avoidance produces greater increases in efficiency than compile-time garbage collection, but much of the garbage which can be collected by compile-time garbage collection cannot be avoided at compile-time. The two approaches are therefore complementary, and the expressions resulting from compile-time garbage avoidance transformations can be annotated for compile-time garbage collection to further optimise the use of storage.
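
    The thesis concerns lazy functional programs, but the effect of compile-time garbage avoidance can be sketched with a small C++ analogue (assumed, not taken from the thesis): the readable version materialises an intermediate structure that immediately becomes garbage, while the transformed, semantically equivalent version never allocates it.

```cpp
#include <numeric>
#include <vector>

// Readable but garbage-producing: builds a temporary vector of squares that
// is consumed once and then discarded.
long sum_of_squares_naive(const std::vector<int>& xs) {
    std::vector<long> squares;
    squares.reserve(xs.size());
    for (int x : xs) squares.push_back(static_cast<long>(x) * x);
    return std::accumulate(squares.begin(), squares.end(), 0L);
}

// The form a garbage-avoidance transformation aims for: semantically
// equivalent, but the intermediate structure is never created.
long sum_of_squares_fused(const std::vector<int>& xs) {
    long total = 0;
    for (int x : xs) total += static_cast<long>(x) * x;
    return total;
}

int main() {
    const std::vector<int> xs = {1, 2, 3, 4};
    return sum_of_squares_naive(xs) == sum_of_squares_fused(xs) ? 0 : 1;
}
```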

    Compile-time meta-programming in a dynamically typed OO language.

    Compile-time meta-programming allows programs to be constructed by the user at compile-time. Although LISP-derived languages have long had such facilities, few modern languages are capable of compile-time meta-programming, and of those that are, many of the most powerful are statically typed functional languages. In this paper I present the dynamically typed object-oriented language Converge, which allows compile-time meta-programming in the spirit of Template Haskell. Converge demonstrates that integrating powerful, safe compile-time meta-programming features into a dynamic language requires few restrictions to the flexible development style facilitated by the paradigm. In this paper I detail Converge's compile-time meta-programming facilities, which, though largely adapted from Template Haskell, contain several features new to the paradigm. Finally, I explain how such a facility might be integrated into similar languages.

    Implementation of Query Optimization for Reducing Run Time

    Query optimization is the process of selecting the most efficient query-evaluation plan from many possible strategies. In this paper we develop a technique that performs query optimization at compile-time, reducing the burden of optimization at run-time and improving the performance of code execution. Histograms computed from the data are used to estimate the selectivity of query joins and predicates at compile-time. With these estimates, a query plan is constructed at compile-time and executed at run-time. Keywords: run-time, query optimization, compile-time, histogram
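
    A minimal sketch of the kind of selectivity estimate this relies on (the structure and names below are assumptions; the paper's exact histogram format is not given here): an equi-width histogram computed from the data is consulted at compile-time to estimate what fraction of rows satisfy a predicate such as value < bound.

```cpp
#include <cstddef>
#include <vector>

// Hypothetical equi-width histogram over one column.
struct Histogram {
    double lo, hi;                    // value range covered by the buckets
    std::vector<std::size_t> bucket;  // row count per equi-width bucket
    std::size_t rows;                 // total number of rows
};

// Estimated selectivity of the predicate "column < bound".
double estimate_less_than(const Histogram& h, double bound) {
    if (h.rows == 0 || h.bucket.empty() || bound <= h.lo) return 0.0;
    if (bound >= h.hi) return 1.0;
    const double width = (h.hi - h.lo) / static_cast<double>(h.bucket.size());
    double covered = 0.0;
    double bucket_lo = h.lo;
    for (std::size_t b = 0; b < h.bucket.size(); ++b, bucket_lo += width) {
        if (bucket_lo + width <= bound) {
            covered += static_cast<double>(h.bucket[b]);  // whole bucket qualifies
        } else {
            // Partially covered bucket: assume values are uniform within it.
            covered += static_cast<double>(h.bucket[b]) * (bound - bucket_lo) / width;
            break;
        }
    }
    return covered / static_cast<double>(h.rows);
}

int main() {
    // Example: 100 rows spread uniformly over [0, 100); estimate "column < 25".
    Histogram h{0.0, 100.0, std::vector<std::size_t>(10, 10), 100};
    return estimate_less_than(h, 25.0) > 0.2 ? 0 : 1;
}
```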

    Combining Static and Dynamic Contract Checking for Curry

    Static type systems are usually not sufficient to express all requirements on function calls. Hence, contracts with pre- and postconditions can be used to express more complex constraints on operations. Contracts can be checked at run time to ensure that operations are only invoked with reasonable arguments and return intended results. Although such dynamic contract checking provides more reliable program execution, it requires execution time and could lead to program crashes that might be detected with more advanced methods at compile time. To improve this situation for declarative languages, we present an approach to combine static and dynamic contract checking for the functional logic language Curry. Based on a formal model of contract checking for functional logic programming, we propose an automatic method to verify contracts at compile time. If a contract is successfully verified, dynamic checking of it can be omitted. This method decreases execution time without degrading reliable program execution. In the best case, when all contracts are statically verified, it provides trust in the software since crashes due to contract violations cannot occur during program execution. Comment: Pre-proceedings paper presented at the 27th International Symposium on Logic-Based Program Synthesis and Transformation (LOPSTR 2017), Namur, Belgium, 10-12 October 2017 (arXiv:1708.07854).
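
    The paper's setting is the functional logic language Curry, but the division of labour it proposes can be sketched in a language-agnostic way (C++ here, with hypothetical names): an operation is wrapped with its pre- and postcondition and checked dynamically, and once a contract has been verified at compile time the corresponding dynamic check can simply be omitted.

```cpp
#include <cassert>
#include <stdexcept>

// Operation under contract: integer square root by linear search.
int isqrt(int n) {
    int r = 0;
    while ((r + 1) * (r + 1) <= n) ++r;
    return r;
}

// Dynamically checked wrapper: precondition n >= 0, postcondition
// r*r <= n < (r+1)*(r+1). In the combined scheme, any contract the static
// verifier has already proved would be compiled without these checks.
int isqrt_checked(int n) {
    if (n < 0) throw std::invalid_argument("precondition violated: n >= 0");
    const int r = isqrt(n);
    assert(r * r <= n && n < (r + 1) * (r + 1));  // postcondition check
    return r;
}

int main() {
    return isqrt_checked(10) == 3 ? 0 : 1;
}
```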

    Run-time parallelization and scheduling of loops

    Run-time methods are studied to automatically parallelize and schedule iterations of a do loop in certain cases where compile-time information is inadequate. The methods presented involve execution-time preprocessing of the loop. At compile-time, these methods set up the framework for performing a loop dependency analysis. At run-time, wavefronts of concurrently executable loop iterations are identified, and using this wavefront information, loop iterations are reordered for increased parallelism. Symbolic transformation rules are used to produce inspector procedures, which perform the execution-time preprocessing, and executors, which are transformed versions of the source code loop structures that carry out the calculations planned in the inspectors. Performance results are presented from experiments conducted on the Encore Multimax. These results illustrate that run-time reordering of loop indices can have a significant impact on performance. Furthermore, the overheads associated with this type of reordering are amortized when the loop is executed several times with the same dependency structure.
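
    Below is a compact sketch (assumed, not the paper's code) of the inspector/executor split for a loop of the form a[w[i]] = g(a[r[i]]) whose index arrays w and r are only known at run time: the inspector assigns each iteration to a wavefront that respects the dependences it discovers, and the executor runs the wavefronts in order, with the iterations inside one wavefront free to run on different processors.

```cpp
#include <algorithm>
#include <cstddef>
#include <vector>

// Inspector: assign a wavefront number to every iteration of the loop
//     for i in 0..n-1:  a[w[i]] = g(a[r[i]])
// so that flow (read-after-write), anti (write-after-read) and output
// (write-after-write) dependences are respected.
std::vector<int> inspect(const std::vector<int>& w, const std::vector<int>& r,
                         std::size_t n_elems) {
    std::vector<int> last_write(n_elems, -1), last_read(n_elems, -1);
    std::vector<int> wf(w.size());
    for (std::size_t i = 0; i < w.size(); ++i) {
        wf[i] = std::max({last_write[r[i]], last_write[w[i]], last_read[w[i]]}) + 1;
        last_read[r[i]]  = std::max(last_read[r[i]], wf[i]);
        last_write[w[i]] = wf[i];
    }
    return wf;  // iterations with the same wf[i] are mutually independent
}

// Executor: run wavefront 0, then 1, ...; inside a wavefront each iteration
// could be handed to a different processor.
void execute(std::vector<double>& a, const std::vector<int>& w,
             const std::vector<int>& r, const std::vector<int>& wf) {
    if (wf.empty()) return;
    const int max_wf = *std::max_element(wf.begin(), wf.end());
    for (int level = 0; level <= max_wf; ++level)
        for (std::size_t i = 0; i < w.size(); ++i)
            if (wf[i] == level)
                a[w[i]] = 2.0 * a[r[i]] + 1.0;  // stand-in for the loop body g
}

int main() {
    std::vector<double> a = {1, 2, 3, 4};
    std::vector<int> w = {1, 2, 3, 1};  // run-time write indices
    std::vector<int> r = {0, 1, 2, 3};  // run-time read indices
    const std::vector<int> wf = inspect(w, r, a.size());
    execute(a, w, r, wf);
    return 0;
}
```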