30 research outputs found

    Automatically Harnessing Sparse Acceleration

    Sparse linear algebra is central to many scientific programs, yet compilers fail to optimize it well. High-performance libraries are available, but adoption costs are significant. Moreover, libraries tie programs into vendor-specific software and hardware ecosystems, creating non-portable code. In this paper, we develop a new approach based on our specification Language for implementers of Linear Algebra Computations (LiLAC). Rather than requiring the application developer to (re)write every program for a given library, the burden is shifted to a one-off description by the library implementer. The LiLAC-enabled compiler uses this description to insert appropriate library routines without source code changes. LiLAC provides automatic data marshaling, maintaining state between calls and minimizing data transfers. Appropriate places for library insertion are detected in the compiler intermediate representation, independently of the source language. We evaluated the approach on large-scale scientific applications written in FORTRAN; standard C/C++ and FORTRAN benchmarks; and C++ graph analytics kernels. Across heterogeneous platforms, applications and data sets we show speedups of 1.1× to over 10× without user intervention. Comment: Accepted to CC 202
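
    For illustration only (not taken from the paper): the sketch below shows the kind of hand-written compressed sparse row (CSR) sparse matrix-vector product that a LiLAC-enabled compiler could plausibly recognise in intermediate representation and replace with a call to a tuned vendor library; all names and the CSR layout are assumptions made here for the example.

        // Hypothetical user code: y = A * x, with A in CSR form. A tool like the
        // one described would detect this loop nest in IR and substitute a
        // library SpMV routine, handling data marshaling automatically.
        #include <vector>

        void spmv_csr(const std::vector<int>& row_ptr,     // size nrows + 1
                      const std::vector<int>& col_idx,     // column index per nonzero
                      const std::vector<double>& vals,     // nonzero values
                      const std::vector<double>& x,
                      std::vector<double>& y) {
            const int nrows = static_cast<int>(row_ptr.size()) - 1;
            for (int i = 0; i < nrows; ++i) {
                double sum = 0.0;
                for (int k = row_ptr[i]; k < row_ptr[i + 1]; ++k) {
                    sum += vals[k] * x[col_idx[k]];
                }
                y[i] = sum;
            }
        }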

    A region-based compilation technique for a Java just-in-time compiler

    Method inlining and data flow analysis are two major optimization components for effective program transformations; however, they often suffer from the existence of rarely or never executed code contained in the target method. One major problem lies in the assumption that the compilation unit is partitioned at method boundaries. This paper describes the design and implementation of a region-based compilation technique in our dynamic compilation system, in which the compiled regions are selected as code portions without rarely executed code. The key parts of this technique are region selection, partial inlining, and region exit handling. For region selection, we employ both static heuristics and dynamic profiles to identify rare sections of code. The region selection process and method inlining decisions are interwoven, so that method inlining exposes other targets for region selection, while region selection in the inline target conserves the inlining budget, leading to more method inlining. Thus the inlining process can be performed for parts of a method rather than for the entire body of the method. When the program attempts to exit from a region boundary, we trigger recompilation and then rely on on-stack replacement to continue the execution from the corresponding entry point in the recompiled code. We have implemented these techniques in our Java JIT compiler and conducted a comprehensive evaluation. The experimental results show that the approach of region-based compilation achieves approximately 5% performance improvement on average, while reducing the compilation overhead by 20 to 30%, in comparison to traditional function-based compilation techniques.
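
    A minimal sketch of the general idea of profile-guided region selection, assuming a simple basic-block representation; the data structures, field names, and threshold heuristic below are hypothetical and are not the paper's implementation.

        #include <cstdint>
        #include <utility>
        #include <vector>

        // One basic block of the method being compiled (hypothetical structure).
        struct BasicBlock {
            std::uint64_t exec_count;        // dynamic profile count
            bool statically_rare;            // static heuristic: e.g. exception path
            std::vector<int> successors;     // indices of successor blocks
        };

        struct RegionSelection {
            std::vector<bool> in_region;                    // blocks kept in the region
            std::vector<std::pair<int, int>> region_exits;  // edges leaving the region
        };

        // Keep a block only if neither the static heuristics nor the profile mark
        // it as rarely executed; any edge from a kept block to an excluded block
        // becomes a region exit, where execution would later trigger recompilation
        // and on-stack replacement.
        RegionSelection selectRegion(const std::vector<BasicBlock>& blocks,
                                     std::uint64_t rare_threshold) {
            RegionSelection sel;
            sel.in_region.resize(blocks.size(), false);
            for (int i = 0; i < static_cast<int>(blocks.size()); ++i) {
                sel.in_region[i] = !blocks[i].statically_rare &&
                                   blocks[i].exec_count >= rare_threshold;
            }
            for (int i = 0; i < static_cast<int>(blocks.size()); ++i) {
                if (!sel.in_region[i]) continue;
                for (int succ : blocks[i].successors) {
                    if (!sel.in_region[succ]) {
                        sel.region_exits.emplace_back(i, succ);
                    }
                }
            }
            return sel;
        }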

    Design, Implementation, and Evaluation of Optimizations in a Just-In-Time Compiler

    The Java language incurs a runtime overhead for exception checks and object accesses, which are executed without an interior pointer, in order to ensure safety. It also requires type inclusion tests, dynamic class loading, and dynamic method calls in order to ensure flexibility. A "Just-In-Time" (JIT) compiler generates native code from Java byte code at runtime. It must improve the runtime performance without compromising the safety and flexibility of the Java language. We designed and implemented effective optimizations for the JIT compiler, such as exception check elimination, common subexpression elimination, a simple type inclusion test, method inlining, and resolution of dynamic method calls. We evaluate the performance benefits of these optimizations based on various statistics collected using SPECjvm98 and two JavaSoft applications with byte code sizes ranging from 20000 to 280000 bytes. Each optimization contributes to an improvement in the performance of the programs.
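
    For illustration, a sketch of one common way a "simple type inclusion test" can be made constant time, using a per-class display of ancestors indexed by depth; this particular encoding is an assumption for the example and is not necessarily the one used in the described compiler.

        #include <vector>

        // Hypothetical class descriptor: display[d] holds the ancestor at depth d
        // (including the class itself at its own depth).
        struct ClassDesc {
            int depth;                              // depth in the inheritance chain
            std::vector<const ClassDesc*> display;  // ancestors indexed by depth
        };

        // "obj instanceof T" for classes reduces to one bounds check plus one
        // array load and pointer compare, instead of walking the superclass chain.
        inline bool isSubclassOf(const ClassDesc* cls, const ClassDesc* target) {
            return target->depth < static_cast<int>(cls->display.size()) &&
                   cls->display[target->depth] == target;
        }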

    Design, Implementation, and Evaluation of Optimizations in a Java™ Just-In-Time Compiler

    The Java language incurs a runtime overhead for exception checks and object accesses, which are executed without an interior pointer, in order to ensure safety. It also requires type inclusion tests, dynamic class loading, and dynamic method calls in order to ensure flexibility. A “Just-In-Time” (JIT) compiler generates native code from Java byte code at runtime. It must improve the run-time performance without compromising the safety and flexibility of the Java language. We designed and implemented effective optimizations for a JIT compiler, such as exception check elimination, common subexpression elimination, a simple type inclusion test, method inlining, and devirtualization of dynamic method calls. We evaluate the performance benefits of these optimizations based on various statistics collected using SPECjvm98, its candidates, and two JavaSoft applications with byte code sizes ranging from 23000 to 280000 bytes. Each optimization contributes to an improvement in the performance of the programs.
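
    A hedged sketch of guarded devirtualization of a dynamic method call: if the receiver is almost always one concrete class, the compiled code can test the class and call (or inline) the target directly, falling back to normal virtual dispatch otherwise. The class and method names here are purely illustrative and not from the paper.

        #include <typeinfo>

        struct Shape {
            virtual ~Shape() = default;
            virtual double area() const = 0;
        };
        struct Circle : Shape {
            double r;
            explicit Circle(double r) : r(r) {}
            double area() const override { return 3.141592653589793 * r * r; }
        };

        double areaDevirtualized(const Shape* s) {
            // Guard: exact-class test for the expected (profiled) receiver type.
            if (typeid(*s) == typeid(Circle)) {
                return static_cast<const Circle*>(s)->area();  // direct, inlinable call
            }
            return s->area();  // fallback: normal virtual dispatch
        }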

    Clinical Outcome by AMES Risk Definition in Japanese Differentiated Thyroid Carcinoma Patients

    This study aimed to analyse whether the age, metastasis, extrathyroidal invasion and size (AMES) risk definition is valuable for Japanese patients with differentiated thyroid carcinoma (DTC). Methods: Two hundred and fifteen Japanese DTC patients (43 men, 172 women; mean age, 51.0 years; mean follow-up, 102 months) treated surgically at our institutions between 1981 and 2001 were retrospectively analysed. Clinicopathological features were compared between high-risk and low-risk patients by AMES criteria. Various risk factors were also evaluated for each group of patients. Results: There were 57 high-risk and 158 low-risk patients. Recurrence and mortality rates were 43.9% and 24.6% in high-risk patients and 7.6% and 0.6% in low-risk patients, respectively (p < 0.0001). Disease-specific survival rates at 5, 10 and 15 years were 84.3%, 74.0% and 63.5% in high-risk patients and 100%, 100% and 98.3% in low-risk patients, respectively (p < 0.0001). Univariate analysis revealed that curative resection, local recurrence and distant metastasis were risk factors for mortality in the high-risk group. Multivariate analysis revealed that curative resection (hazard ratio [HR], 4.68; 95% confidence interval [CI], 1.23-17.83; p = 0.024) and distant metastasis (HR, 4.79; 95% CI, 1.24-18.40; p = 0.023) were significantly related to mortality in high-risk patients. Conclusion: The AMES risk definition can distinguish high-risk and low-risk Japanese patients. Distant metastasis and curative resection are prognostic factors for disease-specific death.