    The JStar language philosophy

    This paper introduces the JStar parallel programming language, a Java-based declarative language aimed at discouraging sequential programming, encouraging massively parallel programming, and giving the compiler and runtime maximum freedom to try alternative parallelisation strategies. We describe the execution semantics and runtime support of the language and several optimisations and parallelism strategies, with some benchmark results.

    Automatic Parallelization of Data-Driven JStar Programs

    Data-driven problems have common characteristics: a large number of small objects with complex dependencies. This makes traditional parallel programming approaches more difficult to apply, as pipelining the task dependencies may require rewriting or recompiling the program into an efficient parallel implementation. This thesis focuses on data-driven JStar programs whose rules are triggered by tuples from a bulky CSV file or from other sources of complex data, and on making those programs run fast in parallel. JStar is a new declarative language for parallel programming that encourages programmers to write their applications with implicit parallelism. The thesis briefly introduces the JStar language and the implicit default parallelism of the JStar compiler. It describes the root causes of the poor performance of naive parallel JStar programs and defines a performance tuning process to increase the speed of JStar programs as the number of cores increases and to minimize memory usage in the Java heap. Several graphical analysis tools were developed to allow easier analysis of bottlenecks in parallel programs. The JStar compiler and runtime were extended so that a variety of optimisations can easily be applied to a JStar program without changing the JStar source code. This process was applied to four case studies, which were benchmarked on different multi-core machines to measure the performance and scalability of JStar programs.
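    The abstract does not show JStar's rule syntax. As a rough analogue only (hypothetical names throughout, not JStar code), the data-driven pattern it describes — many small tuples read from a CSV file, each triggering a rule that can fire independently — can be sketched with Java parallel streams, where the runtime is likewise free to distribute the per-tuple work across cores:

    ```java
    import java.util.List;
    import java.util.Map;
    import java.util.stream.Collectors;

    public class TupleRules {
        // A "tuple" parsed from one CSV row (hypothetical record, not JStar syntax).
        record Tuple(String key, double value) {}

        // The rule body runs once per tuple with no shared mutable state,
        // so the runtime may parallelise freely across tuples -- the
        // implicit-parallelism idea the abstract attributes to JStar.
        static Map<String, Double> sumByKey(List<Tuple> tuples) {
            return tuples.parallelStream()
                    .collect(Collectors.groupingBy(Tuple::key,
                            Collectors.summingDouble(Tuple::value)));
        }

        public static void main(String[] args) {
            List<Tuple> tuples = List.of(
                    new Tuple("a", 1.0), new Tuple("b", 2.0), new Tuple("a", 3.0));
            System.out.println(sumByKey(tuples).get("a")); // 4.0
        }
    }
    ```

    A declarative rule language goes further than this sketch: the compiler, not the programmer, decides the grouping and scheduling strategy, which is what gives it the freedom to try alternative parallelisations.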

    Preparation of nano-liposome enveloping Flos Magnoliae volatile oil

    Fabrication of Large-Grain Thick Polycrystalline Silicon Thin Films via Aluminum-Induced Crystallization for Application in Solar Cells

    The fabrication of large-grain, 1.25 μm thick polycrystalline silicon (poly-Si) films via two-stage aluminum-induced crystallization (AIC) for application in thin-film solar cells is reported. The 250 nm thick poly-Si film induced in the first stage is used as the seed layer for the crystallization of a 1 μm thick amorphous silicon (a-Si) film in the second stage. The annealing temperature in both stages is 500°C. The effect of annealing time (15, 30, 60, and 120 minutes) in the second stage on the crystallization of the a-Si film is investigated using X-ray diffraction (XRD), scanning electron microscopy, and Raman spectroscopy. XRD and Raman results confirm that poly-Si films are successfully formed by the proposed process.

    (5-n-Hexyl-2-hydroxymethyl-1,3-dioxan-2-yl)methanol

    In the title compound, C12H24O4, the dioxane ring adopts a chair conformation; the n-hexyl chain, which occupies an equatorial position, has an extended zigzag conformation. In the crystal, molecules are connected by O—H⋯O hydrogen bonds into a zigzag chain running along the b axis, giving rise to a herringbone pattern.

    Efficient compilation of a verification-friendly programming language

    This thesis develops a compiler to convert a program written in the verification-friendly programming language Whiley into an efficient implementation in C. Our compiler uses a mixture of static analysis, run-time monitoring, and a code generator to find faster integer types, eliminate unnecessary array copies, and de-allocate unused memory without garbage collection, so that Whiley programs can be translated into C code that runs fast and for long periods on general operating systems as well as resource-limited embedded devices. We also present manual and automatic proofs to verify the memory safety of our implementations, and benchmark them on a variety of test cases for practical use. Our benchmark results show that, in our test suite, our compiler effectively reduces the time complexity to the lowest possible level and stops all memory leaks without causing double-freeing problems. The performance of the implementations can be further improved by choosing proper integer types within the inferred ranges and by exploiting parallelism in the programs.
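    The abstract does not give the analysis itself. As a hypothetical sketch of one ingredient it mentions — choosing a proper integer type from a statically known value range — the following (illustrative names, not the thesis's actual algorithm) maps an inferred [min, max] range to the narrowest fixed-width C type that can hold it:

    ```java
    public class IntTypeSelector {
        // Given the statically inferred [min, max] range of an integer
        // variable, pick the narrowest C99 fixed-width type that holds it.
        // Hypothetical sketch of range-based type selection, not the
        // compiler's actual analysis.
        static String cTypeFor(long min, long max) {
            if (min >= Byte.MIN_VALUE && max <= Byte.MAX_VALUE) return "int8_t";
            if (min >= Short.MIN_VALUE && max <= Short.MAX_VALUE) return "int16_t";
            if (min >= Integer.MIN_VALUE && max <= Integer.MAX_VALUE) return "int32_t";
            return "int64_t";
        }

        public static void main(String[] args) {
            System.out.println(cTypeFor(0, 100));    // int8_t
            System.out.println(cTypeFor(0, 70000));  // int32_t
        }
    }
    ```

    Narrower types reduce memory traffic and let the C compiler use cheaper arithmetic, which is one plausible reason range-directed type selection speeds up the generated code.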