32 research outputs found

    Flow-Sensitive Type-Based Heap Cloning (Artifact)

    Get PDF
    This artifact contains our implementation of a new flow-sensitive type-based points-to analysis, described in "Flow-Sensitive Type-Based Heap Cloning" by Mohamad Barbar, Yulei Sui, and Shiping Chen (ECOOP 2020). This analysis performs heap cloning based on C and C++ types rather than calling contexts. Packaged as a Docker image, the artifact allows users to reproduce the claims made in the "Evaluation" section of the associated paper (Section 5.2) and to build and analyse arbitrary software.
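    As a purely illustrative aside (this is not the artifact's code, and every name below is made up), the essence of type-based heap cloning can be sketched by how an abstract heap object is named: by its allocation site alone, versus by the pair of allocation site and the type it is used as:

        from dataclasses import dataclass
        from typing import Optional

        # Sketch only: an abstract heap object named by its allocation site
        # alone (no cloning) or by (allocation site, type it is used as),
        # the essence of type-based cloning -- no calling contexts needed.
        @dataclass(frozen=True)
        class AbstractObject:
            alloc_site: str             # e.g. a malloc() call at some source line
            cloned_type: Optional[str]  # None = no cloning; else the C/C++ type

        def heap_object(alloc_site, used_as, type_based):
            return AbstractObject(alloc_site, used_as if type_based else None)

        # One malloc() wrapper serving two unrelated types: without cloning,
        # both allocations collapse into one abstract object (spurious alias).
        assert heap_object("wrapper.c:10", "struct Node", False) == \
               heap_object("wrapper.c:10", "struct Edge", False)

        # With type-based cloning they stay distinct, improving precision.
        assert heap_object("wrapper.c:10", "struct Node", True) != \
               heap_object("wrapper.c:10", "struct Edge", True)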

    Earning Extra Performance from Restrictive Feedbacks

    Full text link
    Many machine learning applications encounter a situation where model providers are required to further refine a previously trained model so as to satisfy the specific needs of local users. This problem reduces to the standard model tuning paradigm if the target data can be fed to the model. However, in a wide range of practical cases the target data is not shared with model providers, while some evaluations of the model are commonly accessible. In this paper, we formally set up a challenge named Earning eXtra PerformancE from restriCTive feEDbacks (EXPECTED) to describe this form of model tuning problem. Concretely, EXPECTED allows a model provider to access the operational performance of the candidate model multiple times via feedback from a local user (or a group of users). The goal of the model provider is to eventually deliver a satisfactory model to the local user(s) by utilizing this feedback. Unlike existing model tuning methods, where the target data is always ready for calculating model gradients, model providers in EXPECTED only see feedback that can be as simple as scalars, such as inference accuracy or usage rate. To enable tuning in this restrictive circumstance, we propose to characterize the geometry of the model performance with regard to model parameters by exploring the parameters' distribution. In particular, for deep models whose parameters are distributed across multiple layers, a more query-efficient algorithm is further tailored that conducts layerwise tuning, paying more attention to those layers that pay off better. Our theoretical analyses justify the proposed algorithms in terms of both efficacy and efficiency. Extensive experiments on different applications demonstrate that our work provides a sound solution to the EXPECTED problem. Comment: Accepted by IEEE TPAMI in April 202
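    To make the setting concrete, here is a hedged sketch of tuning with nothing but scalar feedback, in the spirit of EXPECTED. This is not the paper's algorithm: the evolution-strategy update and the sigma/lr choices are illustrative assumptions standing in for the paper's distribution-based, layerwise methods.

        import numpy as np

        # Generic zeroth-order tuning: only feedback(theta) -- e.g. accuracy
        # reported back by a user -- is observable, never gradients.
        def tune_by_feedback(theta, feedback, iters=200, pop=20,
                             sigma=0.05, lr=0.1, seed=0):
            rng = np.random.default_rng(seed)
            for _ in range(iters):
                eps = rng.standard_normal((pop, theta.size))    # probes
                scores = np.array([feedback(theta + sigma * e) for e in eps])
                scores = (scores - scores.mean()) / (scores.std() + 1e-8)
                theta = theta + lr / (pop * sigma) * (eps.T @ scores)
            return theta

        # Toy usage: "accuracy" peaks at [1, -2]; the provider only ever sees
        # the returned scalar, yet the estimate moves toward the peak.
        target = np.array([1.0, -2.0])
        print(tune_by_feedback(np.zeros(2), lambda t: -np.sum((t - target)**2)))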

    A Tale of Two Cities: Data and Configuration Variances in Robust Deep Learning

    Full text link
    Deep neural networks (DNNs) are widely used in many industries, such as image recognition, supply chain, medical diagnosis, and autonomous driving. However, prior work has shown that the high accuracy of a DNN model does not imply high robustness (i.e., consistent performance on new and future datasets), because the input data and external environment (e.g., software and model configurations) for a deployed model are constantly changing. Hence, ensuring the robustness of deep learning is not an option but a priority for enhancing business and consumer confidence. Previous studies mostly focus on the data aspect of model variance. In this article, we systematically summarize DNN robustness issues and formulate them in a holistic view through two important aspects, i.e., data and software configuration variances in DNNs. We also provide a predictive framework to generate representative variances (counterexamples) by considering both data and configurations for robust learning, through the lens of search-based optimization.
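    A hedged sketch of the search-based idea follows; the variance spaces and the evaluate oracle are placeholders of my own, not the article's actual framework.

        import itertools, random

        # Search jointly over data variances and configuration variances for a
        # pair on which accuracy drops below a threshold -- a representative
        # counterexample for robust learning.
        def find_counterexample(evaluate, data_vs, config_vs,
                                threshold=0.9, budget=200, seed=0):
            rng = random.Random(seed)
            space = list(itertools.product(data_vs, config_vs))
            rng.shuffle(space)
            for data_v, config_v in space[:budget]:
                acc = evaluate(data_v, config_v)
                if acc < threshold:
                    return data_v, config_v, acc
            return None

        # Toy oracle: robust to noise alone, but noisy inputs combined with an
        # int8 deployment configuration expose a failure.
        def evaluate(data_v, config_v):
            return 0.95 - (0.1 if (data_v, config_v) == ("noise", "int8") else 0)

        print(find_counterexample(evaluate, ["clean", "noise", "rotation"],
                                  ["fp32", "fp16", "int8"]))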

    Multi-Modal Attention Network Learning for Semantic Source Code Retrieval

    Full text link
    Code retrieval techniques and tools play a key role in helping software developers retrieve existing code fragments from available open-source repositories given a user query. Despite existing efforts to improve the effectiveness of code retrieval, two main issues hinder current approaches from accurately retrieving satisfactory code fragments from large-scale repositories when answering complicated queries. First, existing approaches consider only shallow features of source code, such as method names and code tokens, while ignoring structured features such as abstract syntax trees (ASTs) and control-flow graphs (CFGs), which contain rich and well-defined semantics. Second, although deep learning-based approaches perform well at representing source code, they lack explainability, making it hard to interpret retrieval results and almost impossible to understand which features of the source code contribute more to the final results. To tackle these two issues, this paper proposes MMAN, a novel Multi-Modal Attention Network for semantic source code retrieval. A comprehensive multi-modal representation is developed for the unstructured and structured features of source code, with an LSTM for the sequential tokens of code, a Tree-LSTM for the AST, and a GGNN (Gated Graph Neural Network) for the CFG. Furthermore, a multi-modal attention fusion layer is applied to assign weights to different parts of each modality of source code and then integrate them into a single hybrid representation. Comprehensive experiments and analysis on a large-scale real-world dataset show that our proposed model can accurately retrieve code snippets and outperforms the state-of-the-art methods.
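    A hedged sketch of the fusion step alone: the per-modality encoders are stubbed out, the parameter shapes and names are my assumptions, and MMAN's real layer also attends over parts within each modality rather than only across modalities.

        import numpy as np

        # Given one embedding per modality -- token LSTM, AST Tree-LSTM and
        # CFG GGNN outputs, produced elsewhere -- score each modality, softmax
        # the scores, and take the weighted sum as the hybrid representation.
        def attention_fuse(modalities, W, v):
            H = np.stack(modalities)                 # (m, d): one row each
            scores = np.tanh(H @ W) @ v              # (m,): unnormalised
            alpha = np.exp(scores) / np.exp(scores).sum()  # softmax weights
            return alpha @ H                         # (d,): fused vector

        d = 8
        rng = np.random.default_rng(0)
        tok, ast, cfg = rng.standard_normal((3, d))  # stand-in encoder outputs
        fused = attention_fuse([tok, ast, cfg],
                               rng.standard_normal((d, d)),
                               rng.standard_normal(d))
        print(fused.shape)   # (8,): matched against the encoded query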

    Controllable Synthesis of Na3V2(PO4)3/C Nanofibers as Cathode Material for Sodium-Ion Batteries by Electrostatic Spinning

    Get PDF
    Na3V2(PO4)3/C nanofibers are prepared by a pre-reduction-assisted electrospinning method. In order to maintain the perfect fibrous architecture of the Na3V2(PO4)3/C samples after calcining, a series of heat-treatment parameters are studied in detail. It is found that the heat-treatment process has an important influence on the morphology and electrochemical performance of the Na3V2(PO4)3/C composite nanofibers. Under calcining conditions of 800°C for 10 h with a heating rate of 2.5°C min−1, well-crystallized, uniform Na3V2(PO4)3/C nanofibers with excellent electrochemical performance are successfully obtained. The initial discharge specific capacities of the nanofibers at 0.05, 1, and 10C are 114.0, 106.0, and 77.9 mAh g−1, respectively. The capacity retention remains 97.0% after 100 cycles at 0.05C. These smooth, uniform, and continuous Na3V2(PO4)3/C composite nanofibers, prepared by a simple electrospinning method, are expected to be a superior cathode material for sodium-ion batteries.
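    A quick arithmetic check of the cycling numbers, assuming the 97.0% retention is quoted against the initial 0.05C capacity:

        initial_capacity = 114.0        # mAh/g, first discharge at 0.05C
        retention = 0.970               # after 100 cycles at 0.05C
        print(initial_capacity * retention)   # ~110.6 mAh/g still delivered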

    Value-Flow-Based Demand-Driven Pointer Analysis for C and C++

    No full text

    Query-directed adaptive heap cloning for optimizing compilers

    No full text
    Andersen's pointer analysis becomes more precise when applied with full heap cloning but unscalable for large, heap-intensive programs. In contrast, k-callsite-sensitive heap cloning can be faster but less precise for some programs. In this paper, we make one step forward by enhancing Andersen's analysis with QUery-Directed Adaptive (QUDA) heap cloning for optimizing compilers. The novelty of our analysis, called QUDA, lies in performing k-callsite-sensitive heap cloning iteratively, starting with k = 0 (without heap cloning), so that an abstract heap object is cloned at iteration k = i + 1 only if some may-alias queries that are not answered positively at iteration k = i may now be answered more precisely. QUDA, which is implemented in Open64, has the same precision as the state of the art, FULCRA, a version of QUDA with exhaustive heap cloning, but is significantly more scalable. For the 10 SPEC2000 C benchmarks and 5 C applications (totalling 840 KLOC) evaluated, QUDA takes only 4+ minutes, while exhaustive heap cloning takes 42+ minutes to complete. QUDA takes only 75.1% of the time that Open64 takes on average to compile these 15 programs under "-O2".
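    A hedged sketch of the query-directed loop the abstract describes (not Open64/QUDA code; the analyze hook is a placeholder standing in for a full k-callsite-sensitive Andersen's analysis):

        # Start with no heap cloning (k = 0) and, at each round, deepen the
        # cloning only for abstract heap objects implicated in may-alias
        # queries that are still answered imprecisely -- so effort is spent
        # exactly where extra context may turn "may alias" into "no alias".
        def quda(queries, analyze, k_max=3):
            cloned = {}                    # heap object -> cloning depth
            unresolved = set(queries)
            for k in range(1, k_max + 1):
                # analyze answers the remaining queries under the current
                # cloning and reports the heap objects implicated in the
                # queries it could not resolve.
                unresolved, culprits = analyze(unresolved, cloned)
                if not unresolved:         # all queries answered precisely
                    break
                for obj in culprits:       # adaptive, per-object deepening
                    cloned[obj] = k
            return cloned

    This laziness is what lets a query-directed analysis match exhaustive cloning's precision on the queries that matter while avoiding its cost, consistent with the abstract's reported speedups.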