Context2Name: A Deep Learning-Based Approach to Infer Natural Variable Names from Usage Contexts
Most of the JavaScript code deployed in the wild has been minified, a process
in which identifier names are replaced with short, arbitrary and meaningless
names. Minified code occupies less space, but also makes the code extremely
difficult to manually inspect and understand. This paper presents Context2Name,
a deep learning-based technique that partially reverses the effect of
minification by predicting natural identifier names for minified names. The
core idea is to predict, from the usage context of a variable, a name that
captures the variable's meaning. The approach combines a lightweight,
token-based static analysis with an auto-encoder neural network that summarizes
usage contexts and a recurrent neural network that predicts natural names for a
given usage context. We evaluate Context2Name with a large corpus of real-world
JavaScript code and show that it successfully predicts 47.5% of all minified
identifiers while taking only 2.9 milliseconds on average to predict a name. A
comparison with the state-of-the-art tools JSNice and JSNaughty shows that our
approach performs comparably in terms of accuracy while improving in terms of
efficiency. Moreover, Context2Name complements the state-of-the-art by
predicting 5.3% additional identifiers that are missed by both existing tools.
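The core pipeline lends itself to a compact illustration. Below is a minimal Python sketch of the usage-context idea only, not the paper's actual model: each variable is represented by the tokens around its uses, and a name is predicted by nearest-neighbor overlap against a small corpus of known (context, name) pairs. The real system replaces this lookup with an auto-encoder plus a recurrent network; the function names and the toy corpus here are hypothetical.

```python
# Minimal sketch of the usage-context idea (not the paper's model):
# represent a variable by the tokens around its uses, then pick the
# known name whose recorded context overlaps most. Hypothetical names.
import re
from collections import Counter

def usage_context(source: str, var: str, window: int = 2) -> Counter:
    """Bag of tokens appearing within `window` tokens of each use of `var`."""
    tokens = re.findall(r"\w+|\S", source)
    ctx = Counter()
    for i, tok in enumerate(tokens):
        if tok == var:
            lo, hi = max(0, i - window), min(len(tokens), i + window + 1)
            ctx.update(t for t in tokens[lo:hi] if t != var)
    return ctx

def predict_name(minified_src: str, var: str, corpus: dict) -> str:
    """Return the corpus name whose context overlaps most with `var`'s."""
    ctx = usage_context(minified_src, var)
    return max(corpus, key=lambda name: sum((ctx & corpus[name]).values()))

# Toy corpus: contexts previously observed for natural variable names.
corpus = {
    "index":  usage_context("for ( index = 0 ; index < n ; index ++ )", "index"),
    "result": usage_context("result = [ ] ; result . push ( x )", "result"),
}
print(predict_name("for ( a = 0 ; a < n ; a ++ )", "a", corpus))  # -> index
```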
Achieving High Performance and High Productivity in Next Generational Parallel Programming Languages
Processor design has turned toward parallelism and heterogeneous
cores to achieve performance and energy efficiency. Developers
find high-level languages attractive because they use abstraction
to offer productivity and portability over hardware complexities.
To achieve performance, some modern implementations of high-level
languages use work-stealing scheduling for load balancing of
dynamically created tasks. Work-stealing is a promising approach
for effectively exploiting software parallelism on parallel
hardware. A programmer who uses work-stealing explicitly
identifies potential parallelism and the runtime then schedules
work, keeping otherwise idle hardware busy while relieving
overloaded hardware of its burden.
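The work-stealing discipline itself is simple to sketch. The following Python toy (not the dissertation's runtime, which targets managed language implementations) shows the essential structure: each worker owns a deque, pushing and popping tasks at one end, while idle workers steal from the opposite end. Production runtimes use lock-free deques such as Chase-Lev; a plain lock suffices for illustration.

```python
# Toy work-stealing scheduler (not the dissertation's runtime): each
# worker owns a deque, popping its own newest task, while idle workers
# steal the oldest task from a victim. Real runtimes use lock-free
# deques (e.g., Chase-Lev); a plain lock suffices for illustration.
import random
import threading
from collections import deque

class Worker:
    def __init__(self):
        self.tasks = deque()
        self.lock = threading.Lock()

    def push(self, task):                # owner enqueues new work
        with self.lock:
            self.tasks.append(task)

    def pop(self):                       # owner takes newest task (LIFO)
        with self.lock:
            return self.tasks.pop() if self.tasks else None

    def steal(self):                     # thief takes oldest task (FIFO)
        with self.lock:
            return self.tasks.popleft() if self.tasks else None

def run(worker, others):
    while True:
        task = worker.pop()
        if task is None:                 # local deque empty: try one steal
            task = random.choice(others).steal()
        if task is None:
            break                        # give up after one failed steal
        task()

workers = [Worker() for _ in range(4)]
for i in range(20):                      # all work starts on worker 0
    workers[0].push(lambda i=i: print(f"task {i}"))
threads = [threading.Thread(target=run, args=(w, [v for v in workers if v is not w]))
           for w in workers]
for t in threads: t.start()
for t in threads: t.join()
```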
However, work-stealing comes with substantial overheads. These
overheads arise as a necessary side effect of the implementation
and hamper parallel performance. In addition to runtime-imposed
overheads, there is a substantial cognitive load associated with
ensuring that parallel code is data-race free. This dissertation
explores the overheads associated with achieving high performance
parallelism in modern high-level languages.
My thesis is that, by exploiting existing underlying mechanisms
of managed runtimes, and by extending existing language design,
high-level languages will be able to deliver productivity and
parallel performance at the levels necessary for widespread
uptake.
The key contributions of my thesis are: 1) a detailed analysis of
the key sources of overhead associated with a work-stealing
runtime, namely sequential and dynamic overheads; 2) novel
techniques to reduce these overheads that use rich features of
managed runtimes such as the yieldpoint mechanism, on-stack
replacement, dynamic code-patching, exception handling support,
and return barriers; 3) a comprehensive analysis of the resulting
benefits, which demonstrate that work-stealing overheads can be
significantly reduced, leading to substantial performance
improvements; and 4) a small set of language extensions that
achieve both high performance and high productivity with minimal
programmer effort.
A managed runtime forms the backbone of any modern implementation
of a high-level language. Managed runtimes enjoy the benefits of
a long history of research and their implementations are highly
optimized. My thesis demonstrates that combining these highly
optimized features with the expressiveness of high-level
languages gives further hope for achieving high performance and
high productivity on modern parallel hardware.
Master of Science thesis
Operating system (OS) kernel extensions, particularly device drivers, are one of the primary sources of vulnerabilities in commodity OS kernels. Vulnerabilities in driver code are often exploited by attackers, leading to attacks like privilege escalation, denial-of-service, and arbitrary code execution. Today, kernel extensions are fully trusted and operate within the core kernel without any form of isolation. But history suggests that this trust is often misplaced, emphasizing a need for some isolation in the kernel. We develop a new framework for isolating device drivers in the Linux kernel. Our work builds on three fundamental principles: (1) strong isolation of the driver code; (2) reuse of existing driver code with no or minimal changes to the source; and (3) achieving the same or better performance compared to the non-isolated driver. In comparison to existing driver isolation schemes like driver virtual machines and user-level device driver implementations, our work strives to avoid modifying existing code and implements an I/O path without incurring substantial performance overhead. We demonstrate our approach by isolating an unmodified driver for a null block device in the Linux kernel, achieving near-native throughput for block sizes ranging from 512B to 256KB and outperforming the non-isolated driver for block sizes of 1MB and higher.
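To make the interposition idea concrete, here is a conceptual Python sketch; the actual framework is kernel C, not a software proxy like this, and the class, method, and interface names below are hypothetical. Driver invocations cross a narrow boundary that validates and copies arguments before they reach the untrusted driver code.

```python
# Conceptual sketch only: the actual framework isolates real Linux
# drivers in kernel C; this Python proxy merely illustrates a narrow,
# validating call gate between kernel and driver. Names hypothetical.
class NullBlockDriver:
    """Stand-in for an unmodified driver: discards writes, reads zeros."""
    def submit_io(self, op: str, offset: int, data: bytes) -> bytes:
        if op == "write":
            return b""                   # null device: data is discarded
        return bytes(len(data))          # read: return all-zero bytes

class IsolationBoundary:
    """Call gate: validates and copies arguments before they cross."""
    def __init__(self, driver, max_io=1 << 20):
        self._driver = driver
        self._max_io = max_io            # cap request size at the boundary

    def submit_io(self, op: str, offset: int, data: bytes) -> bytes:
        if op not in ("read", "write"):
            raise ValueError("rejected at boundary: unknown op")
        if offset < 0 or len(data) > self._max_io:
            raise ValueError("rejected at boundary: bad arguments")
        return self._driver.submit_io(op, offset, bytes(data))  # copy in

gate = IsolationBoundary(NullBlockDriver())
print(len(gate.submit_io("read", 0, b"\x00" * 512)))  # -> 512
```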
Tiramisu: A Polyhedral Compiler for Expressing Fast and Portable Code
This paper introduces Tiramisu, a polyhedral framework designed to generate
high performance code for multiple platforms including multicores, GPUs, and
distributed machines. Tiramisu introduces a scheduling language with novel
extensions to explicitly manage the complexities that arise when targeting
these systems. The framework is designed for the areas of image processing,
stencils, linear algebra and deep learning. Tiramisu has two main features: it
relies on a flexible representation based on the polyhedral model and it has a
rich scheduling language allowing fine-grained control of optimizations.
Tiramisu uses a four-level intermediate representation that allows full
separation between the algorithms, loop transformations, data layouts, and
communication. This separation simplifies targeting multiple hardware
architectures with the same algorithm. We evaluate Tiramisu by writing a set of
image processing, deep learning, and linear algebra benchmarks and compare them
with state-of-the-art compilers and hand-tuned libraries. We show that Tiramisu
matches or outperforms existing compilers and libraries on different hardware
architectures, including multicore CPUs, GPUs, and distributed machines.
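The separation Tiramisu draws between an algorithm and its schedule can be shown in miniature. Tiramisu itself exposes a C++ scheduling API; the Python sketch below is not that API, only a demonstration that a tiling "schedule" can restructure loops without changing what the fixed "algorithm" computes.

```python
# Illustration of algorithm/schedule separation (Tiramisu's real API is
# C++; this is only the concept). The algorithm is fixed; a tiling
# schedule changes loop structure, not the computed result.
def blur_algorithm(inp):
    """Algorithm: 1-D blur, defined independently of any loop order."""
    n = len(inp)
    return [(inp[max(i-1, 0)] + inp[i] + inp[min(i+1, n-1)]) / 3
            for i in range(n)]

def blur_tiled(inp, tile=4):
    """Same computation under a tiled schedule (better cache locality)."""
    n, out = len(inp), [0.0] * len(inp)
    for t in range(0, n, tile):              # outer loop over tiles
        for i in range(t, min(t + tile, n)): # inner loop within a tile
            out[i] = (inp[max(i-1, 0)] + inp[i] + inp[min(i+1, n-1)]) / 3
    return out

data = [float(x) for x in range(10)]
assert blur_algorithm(data) == blur_tiled(data)  # schedule preserves semantics
```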
NoC Design Flow for TDMA and QoS Management in a GALS Context
This paper proposes a new approach to the difficult problem of guaranteeing NoC traffic under the GALS constraints imposed by upcoming large Systems-on-Chip with multiple clock domains. Our solution is designed to strike a tradeoff between synchronous and clockless asynchronous techniques. By means of smart interfaces between synchronous sub-NoCs, Quality-of-Service (QoS) for guaranteed traffic is assured over the entire chip despite clock heterogeneity. This methodology can be easily integrated as an extension to traditional synchronous NoC design flows. We present a real implementation obtained with our tool for a 4G telecom application.
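The TDMA principle behind such guaranteed traffic is easy to sketch. The following Python toy (not the paper's design flow; the flow names and slot counts are invented) reserves fixed slots per flow in a repeating table, so each flow's bandwidth share is independent of all other traffic on the link.

```python
# Toy TDMA table (not the paper's design flow; flows and slot counts are
# invented): each flow reserves fixed slots per period, so its bandwidth
# share is independent of all other traffic on the link.
def build_tdma_table(reservations: dict, period: int) -> list:
    """Assign each flow its reserved number of slots per period."""
    assert sum(reservations.values()) <= period, "link over-committed"
    table, slot = [None] * period, 0
    for flow, slots in reservations.items():
        for _ in range(slots):
            table[slot] = flow
            slot += 1
    return table  # unassigned (None) slots can carry best-effort traffic

table = build_tdma_table({"cpu_to_dsp": 3, "dsp_to_mem": 2}, period=8)
for flow in ("cpu_to_dsp", "dsp_to_mem"):
    share = table.count(flow) / len(table)
    print(f"{flow}: guaranteed {share:.0%} of link bandwidth")
```

A real allocation tool would additionally interleave each flow's slots across the period to bound per-packet latency, not just bandwidth.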
Too Few Bug Reports? Exploring Data Augmentation for Improved Changeset-based Bug Localization
Modern Deep Learning (DL) architectures based on transformers (e.g., BERT,
RoBERTa) are exhibiting performance improvements across a number of natural
language tasks. While such DL models have shown tremendous potential for use in
software engineering applications, they are often hampered by insufficient
training data. Particularly constrained are applications that require
project-specific data, such as bug localization, which aims at recommending
code to fix a newly submitted bug report. Deep learning models for bug
localization require a substantial training set of fixed bug reports, which are
in limited supply even in popular and actively developed software projects.
In this paper, we examine the effect of using synthetic training data on
transformer-based DL models that perform a more complex variant of bug
localization, which has the goal of retrieving bug-inducing changesets for each
bug report. To generate high-quality synthetic data, we propose novel data
augmentation operators that act on different constituent components of bug
reports. We also describe a data balancing strategy that aims to create a
corpus of augmented bug reports that better reflects the entire source code
base, because existing bug reports used as training data usually reference a
small part of the code base.
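To convey the flavor of such operators without reproducing the paper's actual ones (which act on specific constituent components of bug reports), here is a hedged Python sketch of two generic text-level augmentations: random token dropout and sentence reordering, each producing a new plausible variant of a report for training.

```python
# Hedged sketch of the flavor of augmentation operators (the paper's
# actual operators act on specific bug-report components and are not
# reproduced here): two generic text transformations producing new,
# plausible report variants for training.
import random

def drop_tokens(text: str, p: float = 0.1, seed: int = 0) -> str:
    """Randomly delete a fraction of tokens, keeping the report readable."""
    rng = random.Random(seed)
    return " ".join(tok for tok in text.split() if rng.random() >= p)

def swap_sentences(text: str, seed: int = 0) -> str:
    """Reorder sentences; bug symptoms often read independently."""
    rng = random.Random(seed)
    sentences = [s for s in text.split(". ") if s]
    rng.shuffle(sentences)
    return ". ".join(sentences)

report = ("Crash on startup when the config file is missing. "
          "Stack trace points to ConfigLoader.parse. "
          "Reproduces on version 2.3 only.")
for op in (drop_tokens, swap_sentences):
    print(op.__name__, "->", op(report))
```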
Parallel Multi-Objective Evolutionary Algorithms: A Comprehensive Survey
Multi-Objective Evolutionary Algorithms (MOEAs) are powerful search techniques that have been extensively used to solve difficult problems in a wide variety of disciplines. However, they can be very demanding in terms of computational resources. Parallel implementations of MOEAs (pMOEAs) provide considerable gains in performance and scalability, and are therefore relevant for tackling computationally expensive applications. This paper presents a survey of pMOEAs, describing a refined taxonomy, an up-to-date review of methods, and the key contributions to the field. Furthermore, some of the open questions that require further research are briefly discussed.
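As one concrete example of the parallel models such a survey covers, the classic master-slave pMOEA runs the evolutionary loop on a master while farming the (typically expensive) fitness evaluations out to worker processes. The Python sketch below uses a toy bi-objective problem and made-up operators; it is illustrative only, not any specific algorithm from the surveyed literature.

```python
# Toy master-slave pMOEA: the master evolves the population; fitness
# evaluations run in worker processes. Problem and operators are toy
# stand-ins, not any specific algorithm from the literature.
import random
from concurrent.futures import ProcessPoolExecutor

def evaluate(x):
    """Schaffer's bi-objective problem: minimize both objectives."""
    return (x ** 2, (x - 2) ** 2)

def dominates(a, b):
    """Pareto dominance: a is no worse everywhere and better somewhere."""
    return all(ai <= bi for ai, bi in zip(a, b)) and a != b

def step(population, fitness):
    """Keep non-dominated individuals; refill by mutating survivors."""
    survivors = [x for x, f in zip(population, fitness)
                 if not any(dominates(g, f) for g in fitness)]
    n_children = len(population) - len(survivors)
    children = [x + random.gauss(0, 0.3)
                for x in random.choices(survivors, k=n_children)]
    return survivors + children

if __name__ == "__main__":
    pop = [random.uniform(-4, 6) for _ in range(40)]
    with ProcessPoolExecutor() as pool:              # the "slaves"
        for _ in range(30):
            fitness = list(pool.map(evaluate, pop))  # parallel evaluation
            pop = step(pop, fitness)                 # sequential master step
    print(sorted(round(x, 2) for x in pop)[:10])     # clusters toward [0, 2]
```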