
    Best-Effort Lazy Evaluation for Python Software Built on APIs

    This paper focuses on an important optimization opportunity in Python-hosted domain-specific languages (DSLs): the use of laziness for optimization, whereby multiple API calls are deferred and then optimized together prior to execution, rather than executed eagerly, with each call handled in isolation. In existing support for lazy evaluation, laziness is "terminated" as soon as control passes back to the host language in any way, limiting the opportunities for optimization. This paper presents Cunctator, a framework that extends this laziness to more of the Python language, allowing intermediate values from DSLs like NumPy or Pandas to flow back to host Python code without triggering evaluation. This exposes more opportunities for optimization and, more generally, allows larger computation graphs to be built, producing 1.03-14.2X speedups on a set of programs built on common libraries and frameworks.
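The defer-then-evaluate pattern the abstract describes can be illustrated with a minimal sketch. This is a hypothetical illustration of the laziness concept only, not Cunctator's actual API; all class and function names here are invented:

```python
# Minimal sketch of deferring operations into a computation graph and
# evaluating only when a concrete value is demanded. Hypothetical code,
# not Cunctator's real interface.

class Lazy:
    def __init__(self, op, args):
        self.op, self.args = op, args

    def __add__(self, other):      # build a graph node instead of computing
        return Lazy("add", (self, other))

    def __mul__(self, other):
        return Lazy("mul", (self, other))

    def evaluate(self):            # forced only at the point of actual use
        args = [a.evaluate() if isinstance(a, Lazy) else a for a in self.args]
        if self.op == "const":
            return args[0]
        if self.op == "add":
            return args[0] + args[1]
        if self.op == "mul":
            return args[0] * args[1]
        raise ValueError(f"unknown op: {self.op}")

def const(value):
    return Lazy("const", (value,))

# Intermediate values flow through host code without forcing evaluation:
x = const(2) + const(3)    # no arithmetic performed yet
y = x * const(4)           # the graph keeps growing
print(y.evaluate())        # -> 20, evaluated once, over the whole graph
```

The point the paper makes is that a framework which keeps `x` lazy even after it returns to host Python code can fuse or reorder the whole graph before executing it.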

    Towards Ontology-Based Program Analysis

    Program analysis is fundamental for program optimization, debugging, and many other tasks, but developing program analyses has been a challenging and error-prone process for general users. Declarative program analysis has shown promise to dramatically improve productivity in the development of program analyses. Current declarative program analysis, however, is subject to some major limitations: it poorly supports cooperation among analysis tools, offers little guidance for program optimizations, and often requires much effort for repeated program preprocessing. In this work, we advocate the integration of ontology into declarative program analysis. As a way to standardize the definitions of concepts in a domain and the representation of knowledge in that domain, ontology offers a promising way to address these limitations. We develop a prototype framework named PATO for conducting program analysis on an ontology-based program representation. Experiments on six program analyses confirm the potential of ontology for complementing existing declarative program analysis: PATO supports multiple analyses without separate program preprocessing, promotes cooperative liveness analysis between two compilers, and effectively guides a data placement optimization for graphics processing units (GPUs).

    Study on the Influence of Ultrasonic Vibration on the Specific Energy of Sawing Ceramic

    Hard and brittle materials are typically difficult to machine, which raises machining cost. Many non-traditional machining methods have been developed to improve cost-effectiveness. Ultrasonic vibration-assisted grinding has improved the processing performance of a variety of brittle materials and achieved good results in practical applications. In this study, an engineering ceramic was precisely sawn with a thin diamond blade, with and without ultrasonic vibration. During the sawing process, the specific sawing energy was investigated through measurement of the sawing forces to explore the influence of ultrasonic vibration. The results showed that ultrasonic vibration significantly reduced the specific sawing energy. The specific sawing energy decreased with increasing maximum undeformed chip thickness under both sawing conditions; however, ultrasonic vibration changed the trend of the specific sawing energy in the normal cutting mode from an exponential decrease to a nearly linear decrease. Under ultrasonic vibration-assisted sawing, the impact of the diamond grains on the engineering ceramic caused much more material to be removed in the brittle-fracture mode. The reduced plastic deformation also lowered the energy consumption during the sawing of the engineering ceramic.
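Specific sawing energy is conventionally obtained as cutting power divided by material removal rate, which is how force measurements translate into the energy figures the abstract discusses. The sketch below shows that standard textbook calculation; the parameter values are illustrative placeholders, not data from the study:

```python
# Specific energy u = P / Q_w = (F_t * v_s) / (v_w * a_p * b)
# Standard grinding/sawing relation: cutting power over volumetric
# removal rate. All numbers below are illustrative, not the paper's data.

def specific_sawing_energy(tangential_force_N, blade_speed_m_s,
                           feed_rate_m_s, depth_of_cut_m, kerf_width_m):
    power_W = tangential_force_N * blade_speed_m_s           # cutting power P
    removal_rate_m3_s = feed_rate_m_s * depth_of_cut_m * kerf_width_m
    return power_W / removal_rate_m3_s                       # J/m^3

u = specific_sawing_energy(
    tangential_force_N=2.0,    # measured tangential sawing force
    blade_speed_m_s=30.0,      # peripheral blade speed
    feed_rate_m_s=0.001,       # workpiece feed rate
    depth_of_cut_m=0.0005,     # depth of cut
    kerf_width_m=0.0012,       # kerf (blade) width
)
print(f"specific energy = {u:.3e} J/m^3")
```

A lower tangential force at the same removal rate, as reported under ultrasonic vibration, directly lowers this ratio.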

    Efficient Large Language Models Fine-Tuning On Graphs

    Learning from Text-Attributed Graphs (TAGs) has attracted significant attention due to its wide range of real-world applications. The rapid evolution of large language models (LLMs) has revolutionized the way we process textual data and indicates strong potential to replace the shallow text embeddings commonly used in Graph Neural Networks (GNNs). However, we find that existing LLM approaches that exploit text information in graphs suffer from inferior computation and data efficiency. In this work, we introduce a novel and efficient approach, named LEADING, for the end-to-end fine-tuning of LLMs on TAGs. The proposed approach keeps computation cost and memory overhead comparable to graph-less fine-tuning of LLMs. Moreover, it effectively transfers the rich knowledge in LLMs to downstream graph learning tasks with limited labeled data in semi-supervised settings. Its superior computation and data efficiency are demonstrated through comprehensive experiments, offering a promising solution for a wide range of LLMs and graph learning tasks on TAGs.