
    Large Language Models of Code Fail at Completing Code with Potential Bugs

    Large language models of code (Code-LLMs) have recently brought tremendous advances to code completion, a fundamental feature of programming assistance and code intelligence. However, most existing works ignore the possible presence of bugs in the code context for generation, even though bugs are inevitable in software development. We therefore introduce and study the buggy-code completion problem, inspired by the realistic scenario of real-time code suggestion where the code context contains potential bugs -- anti-patterns that can become bugs in the completed program. To study the task systematically, we introduce two datasets: one with synthetic bugs derived from semantics-altering operator changes (buggy-HumanEval) and one with realistic bugs derived from user submissions to coding problems (buggy-FixEval). We find that the presence of potential bugs significantly degrades the generation performance of high-performing Code-LLMs. For instance, the passing rates of CodeGen-2B-mono on test cases of buggy-HumanEval drop by more than 50% given a single potential bug in the context. Finally, we investigate several post-hoc methods for mitigating the adverse effect of potential bugs and find that a large gap remains in post-mitigation performance. Comment: 25 pages
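The "semantics-altering operator change" behind the synthetic bugs can be illustrated with a small sketch. This is an assumed toy implementation in the spirit of buggy-HumanEval, not the authors' actual transformation pipeline: a single comparison or arithmetic operator in otherwise-correct code is swapped for a near-miss, planting one potential bug in the completion context.

```python
# Hypothetical sketch of a semantics-altering operator change, in the spirit
# of the synthetic bugs in buggy-HumanEval (the paper's actual transformation
# set and tooling may differ).
import ast

# Illustrative near-miss operator swaps.
SWAPS = {ast.Lt: ast.LtE, ast.Gt: ast.GtE, ast.Add: ast.Sub}

class OperatorSwapper(ast.NodeTransformer):
    """Swap the first eligible operator, injecting one potential bug."""

    def __init__(self):
        self.done = False

    def visit_Compare(self, node):
        self.generic_visit(node)
        if not self.done and type(node.ops[0]) in SWAPS:
            node.ops[0] = SWAPS[type(node.ops[0])]()
            self.done = True
        return node

    def visit_BinOp(self, node):
        self.generic_visit(node)
        if not self.done and type(node.op) in SWAPS:
            node.op = SWAPS[type(node.op)]()
            self.done = True
        return node

def inject_bug(source: str) -> str:
    """Return `source` with a single operator swapped for a near-miss."""
    return ast.unparse(OperatorSwapper().visit(ast.parse(source)))

clean = "def is_adult(age):\n    return age > 18"
buggy = inject_bug(clean)  # ">" becomes ">=", silently altering semantics
```

A Code-LLM asked to continue code after such a context must notice and work around the planted anti-pattern rather than propagate it, which is what the benchmark measures.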

    Searching Toward Pareto-Optimal Device-Aware Neural Architectures

    Recent breakthroughs in Neural Architecture Search (NAS) have achieved state-of-the-art performance in many tasks such as image classification and language understanding. However, most existing works optimize only for model accuracy and largely ignore other important factors imposed by the underlying hardware and devices, such as latency and energy consumption at inference time. In this paper, we first introduce the NAS problem and survey recent works. We then take a deep dive into two recent advances that extend NAS into multiple-objective frameworks: MONAS and DPP-Net. Both MONAS and DPP-Net are capable of optimizing accuracy alongside other objectives imposed by devices, searching for neural architectures that can be best deployed on a wide spectrum of devices: from embedded systems and mobile devices to workstations. Experimental results show that architectures found by MONAS and DPP-Net achieve Pareto optimality w.r.t. the given objectives for various devices. Comment: ICCAD'18 Invited Paper
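The Pareto optimality criterion used by such multiple-objective frameworks can be made concrete with a minimal sketch. The objective pairs and numbers below are illustrative stand-ins, not MONAS or DPP-Net outputs: a candidate architecture survives only if no other candidate is at least as good on every objective and strictly better on at least one.

```python
# Minimal sketch of Pareto-front filtering over device-aware objectives:
# each candidate architecture is scored as (accuracy, latency_ms), where
# higher accuracy and lower latency are better. Values are illustrative.

def dominates(a, b):
    """True if `a` is no worse than `b` on both objectives and strictly
    better on at least one."""
    acc_a, lat_a = a
    acc_b, lat_b = b
    return (acc_a >= acc_b and lat_a <= lat_b) and (acc_a > acc_b or lat_a < lat_b)

def pareto_front(candidates):
    """Keep only candidates not dominated by any other candidate."""
    return [c for c in candidates
            if not any(dominates(o, c) for o in candidates if o is not c)]

archs = [(0.92, 40.0), (0.90, 15.0), (0.91, 50.0), (0.88, 12.0)]
front = pareto_front(archs)  # (0.91, 50.0) is dominated by (0.92, 40.0)
```

Every surviving point represents a distinct accuracy/latency trade-off, which is why the front can serve a spectrum of devices: a workstation might pick the most accurate architecture while a mobile device picks the lowest-latency one.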

    Scalable Reinforcement-Learning-Based Neural Architecture Search for Cancer Deep Learning Research

    Cancer is a complex disease, the understanding and treatment of which are being aided through increases in the volume of collected data and in the scale of deployed computing power. Consequently, there is a growing need for the development of data-driven and, in particular, deep learning methods for various tasks such as cancer diagnosis, detection, prognosis, and prediction. Despite recent successes, however, designing high-performing deep learning models for nonimage and nontext cancer data is a time-consuming, trial-and-error, manual task that requires both cancer-domain and deep learning expertise. To address this challenge, we develop a reinforcement-learning-based neural architecture search to automate deep-learning-based predictive model development for a class of representative cancer data. We develop custom building blocks that allow domain experts to incorporate cancer-data-specific characteristics. We show that our approach discovers deep neural network architectures that have significantly fewer trainable parameters, shorter training time, and accuracy similar to or higher than those of manually designed architectures. We study and demonstrate the scalability of our approach on up to 1,024 Intel Knights Landing nodes of the Theta supercomputer at the Argonne Leadership Computing Facility. Comment: SC '19: IEEE/ACM International Conference on High Performance Computing, Networking, Storage and Analysis, November 17--22, 2019, Denver, CO
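The core loop of reinforcement-learning-based architecture search can be sketched in a few lines. Everything below is a toy stand-in, not the paper's controller, search space, cancer-data building blocks, or reward function: a controller keeps a preference score per candidate choice, samples architectures, and shifts probability mass toward choices that earn higher reward.

```python
# Toy sketch of an RL-style architecture-search loop: a softmax controller
# over candidate layer widths, updated with a REINFORCE-like preference rule
# against a moving-average baseline. Search space and reward are illustrative.
import math
import random

random.seed(0)
CHOICES = [16, 32, 64, 128]   # candidate hidden-layer widths (illustrative)
prefs = [0.0] * len(CHOICES)  # controller preferences (softmax logits)

def sample():
    """Sample a choice index from the softmax over preferences."""
    weights = [math.exp(p) for p in prefs]
    r, acc = random.random() * sum(weights), 0.0
    for i, w in enumerate(weights):
        acc += w
        if r <= acc:
            return i
    return len(CHOICES) - 1

def reward(width):
    # Toy surrogate: an accuracy-like term minus a parameter-count penalty,
    # standing in for "train the model, then score accuracy and size".
    return 1.0 - abs(width - 64) / 128 - 0.001 * width

baseline, lr = 0.0, 0.5
for step in range(200):
    i = sample()
    r = reward(CHOICES[i])
    baseline += 0.1 * (r - baseline)  # moving-average reward baseline
    prefs[i] += lr * (r - baseline)   # reinforce above-baseline choices

best = CHOICES[max(range(len(CHOICES)), key=lambda i: prefs[i])]
```

In the paper's setting each reward evaluation is a full model training run, which is what makes scaling the search across many supercomputer nodes worthwhile.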

    Shape Representation via Elementary Symmetric Polynomials: A Complete Invariant Inspired by the Bispectrum

    We address the representation of two-dimensional shape in its most general form, i.e., arbitrary sets of points, as may arise in multiple situations, e.g., sparse sets of specific landmarks or dense sets of image edge points. Our goal is recognition tasks, where the key is balancing two contradictory demands: shapes that differ by rigid transformations or point re-labeling should have the same representation (invariance), but geometrically distinct shapes should have different representations (completeness). In this paper, we introduce a new shape representation that marries properties of the elementary symmetric polynomials and the bispectrum. Like the power spectrum, the bispectrum is insensitive to signal shifts; however, unlike the power spectrum, the bispectrum is complete. The elementary symmetric polynomials are complete and invariant to variable relabeling. We show that the elementary symmetric polynomials of the shape points depend on the shape orientation in a way that enables interpreting them in the frequency domain and building a bispectrum from them. The result is a shape representation that is complete and invariant to rigid transformations and point relabeling. The paper also reports experiments that illustrate the proved properties.
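The relabeling-invariance half of the argument can be checked directly. Encoding 2D points as complex numbers x + iy, the elementary symmetric polynomials e_1, ..., e_n are (up to alternating signs) the coefficients of the product (z - p_1)...(z - p_n), so permuting the points cannot change them. This sketch shows only that property; full rigid-transformation invariance needs the paper's bispectrum construction and is not reproduced here.

```python
# Elementary symmetric polynomials of a point set, via the coefficients of
# prod_k (z - p_k). Points are 2D coordinates encoded as complex numbers.

def elementary_symmetric(points):
    """Return [e_1, ..., e_n] for the given points."""
    coeffs = [1.0 + 0.0j]  # coefficients of the expanding product
    for p in points:
        nxt = [0.0 + 0.0j] * (len(coeffs) + 1)
        for i, c in enumerate(coeffs):
            nxt[i] += c          # term not multiplied by this factor
            nxt[i + 1] -= c * p  # term picking up (-p)
        coeffs = nxt
    # coefficient of z^(n-k) is (-1)^k * e_k
    return [(-1) ** k * coeffs[k] for k in range(1, len(coeffs))]

shape = [1 + 2j, 3 - 1j, -2 + 0.5j]
relabeled = [3 - 1j, -2 + 0.5j, 1 + 2j]  # same points, different labels
e1 = elementary_symmetric(shape)
e2 = elementary_symmetric(relabeled)
# e1 and e2 agree elementwise; e_1 is simply the sum of the points.
```

Because the polynomials are also a complete encoding of the multiset of points (they determine the product's roots), they supply exactly the relabeling-invariant yet complete building block the abstract describes.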

    Data on minute DNA quantification on microvolumetric solutions: comparison of mathematical models and effect of some compounds on the DNA quantification accuracy

    This article contains data related to the research article entitled “Novel approach for accurate minute DNA quantification on microvolumetric solutions” (Carvalho et al., 2018). The combination of PicoGreen® with a microvolume fluorospectrometer is a popular DNA quantification method due to its high sensitivity and minimal sample consumption, and it is commonly used to evaluate the performance of microfluidic devices designed for DNA purification. In this study, the authors present data related to the effect of the DNA fragmentation level. The present data article includes the data used in the precision evaluation, in terms of repeatability, of the mathematical models developed to obtain the standard curves for salmon sperm DNA (low molecular weight). In addition, results related to the effect of some compounds on the DNA quantification accuracy using λDNA are presented.
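The standard-curve idea behind fluorometric quantification can be sketched generically. The linear model and all numbers below are illustrative textbook stand-ins, not the article's data or the specific mathematical models it compares: fluorescence readings of known standards are fitted against concentration, and the fit is inverted to quantify an unknown sample.

```python
# Generic standard-curve sketch: ordinary least-squares fit of fluorescence
# vs. known DNA concentration, then inversion to estimate an unknown sample.
# All values are illustrative, not measured data.

def fit_line(xs, ys):
    """Return (slope, intercept) of the ordinary least-squares line."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return slope, my - slope * mx

# Known standards: DNA concentration (ng/uL) vs. fluorescence (a.u.);
# made perfectly linear here for clarity.
conc = [0.0, 0.5, 1.0, 2.0, 4.0]
fluo = [2.0, 12.0, 22.0, 42.0, 82.0]

slope, intercept = fit_line(conc, fluo)
unknown_fluorescence = 32.0
estimated_conc = (unknown_fluorescence - intercept) / slope  # -> 1.5 ng/uL
```

Repeatability, as evaluated in the data article, amounts to refitting such a curve over replicate standard series and comparing the resulting model parameters and back-calculated concentrations.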