TreatJS: Higher-Order Contracts for JavaScript
TreatJS is a language-embedded, higher-order contract system for JavaScript
that enforces contracts by run-time monitoring. Beyond providing the standard
abstractions for building higher-order contracts (base, function, and object
contracts), TreatJS's novel contributions are its guarantee of non-interfering
contract execution, its systematic approach to blame assignment, its support
for contracts in the style of union and intersection types, and its notion of a
parameterized contract scope, which is the building block for composable
run-time generated contracts that generalize dependent function contracts.
TreatJS is implemented as a library so that all aspects of a contract can be
specified using the full JavaScript language. The library relies on JavaScript
proxies to guarantee full interposition for contracts. It further exploits
JavaScript's reflective features to run contracts in a sandbox environment,
which guarantees that the execution of contract code does not modify the
application state. No source code transformation or change in the JavaScript
run-time system is required.
The impact of contracts on execution speed is evaluated using the Google
Octane benchmark.
Comment: Technical Report
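The core mechanism the abstract describes, enforcing a function contract through a proxy that intercepts every call and assigns blame, can be sketched as follows. This is a minimal hypothetical illustration, not TreatJS's actual API: the `base` and `fun` constructors and the blame labels are invented for exposition.

```javascript
// Sketch of higher-order contracts via Proxy (hypothetical API, not TreatJS's):
// a base contract checks a predicate; a function contract wraps the target in a
// Proxy so the domain contract is checked on every call (blaming the caller)
// and the range contract on every return (blaming the wrapped callee).
const base = (pred, name) => ({
  check(value, blame) {
    if (!pred(value)) throw new Error(`contract ${name} violated, blame ${blame}`);
    return value;
  },
});

const fun = (domain, range) => ({
  check(target, blame) {
    return new Proxy(target, {
      apply(f, thisArg, args) {
        const checkedArgs = args.map(a => domain.check(a, "caller"));
        return range.check(Reflect.apply(f, thisArg, checkedArgs), blame);
      },
    });
  },
});

const isNumber = base(x => typeof x === "number", "Number");
// Wrap a function under the contract Number -> Number.
const inc = fun(isNumber, isNumber).check(x => x + 1, "callee");
```

Because the Proxy traps every application, no source transformation is needed; the real system additionally sandboxes the predicate code so contract checking cannot mutate application state.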
Requirements Traceability: Recovering and Visualizing Traceability Links Between Requirements and Source Code of Object-oriented Software Systems
Requirements traceability is an important activity for achieving effective
requirements management in requirements engineering.
Requirement-to-Code Traceability Links (RtC-TLs) capture the relations between
requirement and source code artifacts. RtC-TLs help engineers identify which
parts of the code implement a specific requirement. In addition, these links
help engineers maintain a correct mental model of the software and reduce the
risk of code quality degradation when requirements change over time,
especially in large and complex software. However, manually recovering and
maintaining these links places an additional burden on engineers and is an
error-prone, tedious, and costly task. This paper introduces YamenTrace, an
automatic approach and implementation to recover and visualize RtC-TLs in
object-oriented software based on Latent Semantic Indexing (LSI) and Formal
Concept Analysis (FCA). The originality of YamenTrace is that it exploits all
code identifier names, comments, and relations in the recovery process.
YamenTrace uses LSI to find textual similarity across software code and
requirements, while FCA is employed to cluster similar code and requirements
together. Furthermore, YamenTrace provides a visualization of the recovered
TLs. To validate YamenTrace, it was applied to three case studies. The
findings of this evaluation demonstrate the relevance and performance of the
YamenTrace proposal, as most RtC-TLs were correctly recovered and visualized.
Comment: 17 pages, 14 figures
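The textual-similarity step the abstract attributes to LSI can be illustrated with the raw term-vector stage that LSI refines. The sketch below is hypothetical (the requirement and class texts are invented, and full LSI would additionally apply an SVD dimensionality reduction, omitted here): it scores a requirement against code identifiers and comments by cosine similarity.

```javascript
// Simplified view of the similarity step: tokenize a requirement and a code
// artifact (identifiers + comments), build term-frequency vectors, and score
// them by cosine similarity. Real LSI projects these vectors through an SVD
// before comparing; that reduction is omitted in this sketch.
const tokenize = text =>
  text.toLowerCase().split(/[^a-z]+/).filter(t => t.length > 2);

const termVector = tokens => {
  const v = new Map();
  for (const t of tokens) v.set(t, (v.get(t) ?? 0) + 1);
  return v;
};

const cosine = (a, b) => {
  let dot = 0;
  for (const [t, w] of a) dot += w * (b.get(t) ?? 0);
  const norm = v => Math.sqrt([...v.values()].reduce((s, w) => s + w * w, 0));
  return dot === 0 ? 0 : dot / (norm(a) * norm(b));
};

// Hypothetical inputs: one requirement, two candidate classes.
const requirement = "The system shall validate the user login password";
const classA = "class LoginValidator // checks the user password on login";
const classB = "class ReportPrinter // renders monthly sales reports";

const simA = cosine(termVector(tokenize(requirement)), termVector(tokenize(classA)));
const simB = cosine(termVector(tokenize(requirement)), termVector(tokenize(classB)));
```

A link would be reported for the class whose similarity exceeds a threshold; FCA then groups requirements and classes that share such links.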
Putting the Semantics into Semantic Versioning
The long-standing aspiration for software reuse has made astonishing strides
in the past few years. Many modern software development ecosystems now come
with rich sets of publicly-available components contributed by the community.
Downstream developers can leverage these upstream components, boosting their
productivity.
However, components evolve at their own pace. This imposes obligations on, and
yields benefits for, downstream developers, especially since changes can be
breaking, requiring additional downstream work to adapt. Upgrading too late
leaves downstream developers vulnerable to security issues and missing out on
useful improvements; upgrading too early results in excess work. Semantic versioning
has been proposed as an elegant mechanism to communicate levels of
compatibility, enabling downstream developers to automate dependency upgrades.
While it is questionable whether a version number can adequately characterize
version compatibility in general, we argue that developers would greatly
benefit from tools such as semantic version calculators to help them upgrade
safely. The time is now for the research community to develop such tools: large
component ecosystems exist and are accessible, component interactions have
become observable through automated builds, and recent advances in program
analysis make the development of relevant tools feasible. In particular,
contracts (both traditional and lightweight) are a promising input to semantic
versioning calculators, which can suggest whether an upgrade is likely to be
safe.
Comment: to be published as Onward! Essays 202
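The essay's proposed "semantic version calculator" can be sketched in miniature. The following is a hypothetical toy, not a tool from the paper: it diffs two simplified API surfaces (name-to-signature maps, invented here) and suggests a bump level under the standard semantic versioning rules; a real calculator would also compare contracts and behavior, not just signatures.

```javascript
// Toy semantic version calculator: removed or changed entries are breaking
// (major); purely additive changes are backwards-compatible (minor);
// otherwise only internals changed (patch).
const suggestBump = (oldApi, newApi) => {
  const removedOrChanged = Object.keys(oldApi)
    .some(name => newApi[name] !== oldApi[name]);
  if (removedOrChanged) return "major";  // downstream must adapt
  const added = Object.keys(newApi).some(name => !(name in oldApi));
  return added ? "minor" : "patch";      // additions don't break callers
};

// Hypothetical API surfaces of three releases of a component.
const v1 = { parse: "(s: string) => Ast", print: "(a: Ast) => string" };
const v2 = { ...v1, lint: "(a: Ast) => Diag[]" };            // adds lint
const v3 = { ...v1, parse: "(s: string, o: Options) => Ast" }; // changes parse
```

Feeding such a tool with contract information, as the essay suggests, would let it distinguish signature-compatible but behavior-breaking changes.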
Call Graphs for Languages with Parametric Polymorphism
The performance of contemporary object-oriented languages depends on optimizations such as devirtualization, inlining, and specialization, and these in turn depend on precise call graph analysis. Existing call graph analyses do not take advantage of the information provided by the rich type systems of contemporary languages, in particular generic type arguments. Many existing approaches analyze Java bytecode, in which generic types have been erased. This paper shows that this discarded information is actually very useful as the context in a context-sensitive analysis, where it significantly improves precision and keeps the running time small. Specifically, we propose and evaluate call graph construction algorithms in which the contexts of a method are (i) the type arguments passed to its type parameters, and (ii) the static types of the arguments passed to its term parameters. The use of static types from the caller as context is effective because it allows more precise dispatch of call sites inside the callee. Our evaluation indicates that the average number of contexts required per method is small. We implement the analysis in the Dotty compiler for Scala, and evaluate it on programs that use the type-parametric Scala collections library and on the Dotty compiler itself. The context-sensitive analysis runs 1.4x faster than a context-insensitive one and discovers 20% more monomorphic call sites at the same time. When applied to method specialization, the imprecision in a context-insensitive call graph would require the average method to be cloned 22 times, whereas the context-sensitive call graph indicates a much more practical 1.00 to 1.50 clones per method.
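The precision gain from using type arguments as context can be illustrated with a toy model. This is not the paper's Dotty implementation; the class and method names are invented. Consider a generic method `display[T]` that calls `box.show()` on its parameter: a context-insensitive analysis must merge every possible `show` implementation into one call-graph node, whereas analyzing `display` once per type argument makes the call site monomorphic in each context.

```javascript
// Toy call-graph model (hypothetical names): two Box classes, each with its
// own show() implementation that a generic display[T] method may dispatch to.
const showImpls = { IntBox: "IntBox.show", StrBox: "StrBox.show" };

// Context-insensitive: one node for display; all show() targets are merged,
// so the call site looks polymorphic and cannot be devirtualized.
const insensitive = { method: "display", targets: Object.values(showImpls) };

// Context-sensitive: one node per (method, typeArgument) pair; in each
// context the call site has exactly one target.
const analyzeWithContext = typeArg =>
  ({ method: "display", context: typeArg, targets: [showImpls[typeArg]] });

const intCtx = analyzeWithContext("IntBox");
const strCtx = analyzeWithContext("StrBox");
```

Each monomorphic context is a candidate for devirtualization or specialization, which is why the paper's context-sensitive graph needs so few clones per method.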
API Usage Recommendation via Multi-View Heterogeneous Graph Representation Learning
Developers often need to decide which APIs to use for the functions being
implemented. With the ever-growing number of APIs and libraries, it becomes
increasingly difficult for developers to find appropriate APIs, indicating the
necessity of automatic API usage recommendation. Previous studies adopt
statistical models or collaborative filtering methods to mine implicit API
usage patterns for recommendation. However, they rely on the occurrence
frequencies of APIs to mine usage patterns and are thus prone to fail for
low-frequency APIs. Moreover, prior studies generally treat the API call
interaction graph as a homogeneous graph, ignoring the rich information
(e.g., edge types) in the structure graph. In this work, we propose a novel
method named MEGA for improving recommendation accuracy, especially for
low-frequency APIs. Specifically, besides the call interaction graph, MEGA
considers two additional heterogeneous graphs: a global API co-occurrence
graph enriched with API frequency information and a hierarchical structure
graph enriched with project component information. With these three
multi-view heterogeneous graphs, MEGA can capture API usage patterns more
accurately. Experiments on three Java benchmark datasets demonstrate that
MEGA significantly outperforms the baseline models by at least 19% with
respect to the Success Rate@1 metric. For low-frequency APIs in particular,
MEGA improves over the baselines by at least 55% on Success Rate@1.
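One of the three views the abstract names, the global API co-occurrence graph, can be sketched concretely. The construction below is a hypothetical simplification (invented API names; MEGA itself learns graph representations rather than ranking raw edge weights): it counts how often two APIs appear in the same method body and recommends the most frequent co-occurring neighbors.

```javascript
// Build a weighted co-occurrence graph from API call sequences: an edge
// "a|b" counts how many method bodies contain both a and b.
const buildCooccurrence = methods => {
  const edges = new Map();
  for (const apis of methods)
    for (let i = 0; i < apis.length; i++)
      for (let j = i + 1; j < apis.length; j++) {
        const key = [apis[i], apis[j]].sort().join("|");
        edges.set(key, (edges.get(key) ?? 0) + 1);
      }
  return edges;
};

// Rank the neighbors of a query API by edge weight, heaviest first.
const recommend = (edges, api) =>
  [...edges.entries()]
    .filter(([k]) => k.split("|").includes(api))
    .sort((a, b) => b[1] - a[1])
    .map(([k]) => k.split("|").find(n => n !== api));

// Hypothetical corpus: each inner array is one method body's API calls.
const methods = [
  ["File.open", "File.read", "File.close"],
  ["File.open", "File.close"],
  ["Socket.connect", "Socket.send"],
];
const graph = buildCooccurrence(methods);
const top = recommend(graph, "File.open");
```

A frequency-only view like this is exactly what fails for rare APIs; MEGA's contribution is combining it with the call interaction and hierarchical structure views in a learned representation.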
Programming Languages à la Carte
Code reuse in computer language development is an open research problem. Feature-oriented programming is a vision of computer programming in which features can be implemented separately and then combined to build a variety of software products; the idea of combining feature orientation and language development is relatively recent. Many frameworks for modular language development have been proposed over the years, but, although there is a strong connection between modularity and feature-oriented development, only a few of these frameworks provide primitives to combine the two concepts. This work presents a model of modular language development that is directed towards feature orientation. We describe its implementation in the Neverlang framework. The model has been evaluated through several case studies: among others, we present a code generator for a state machine language, which we use to compare our approach with other state-of-the-art frameworks, and a JavaScript interpreter implementation that further illustrates the capabilities of our solution.
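The feature-oriented composition model the abstract describes can be sketched in miniature. This is an illustrative toy, not Neverlang's actual module syntax: each language feature is a separate "slice" contributing evaluation rules for its own node kinds, and a language variant is assembled by composing slices.

```javascript
// Each slice implements one language feature in isolation: a map from node
// kind to an evaluation rule. Rules receive the node and a recursive
// evaluator, so slices stay independent of one another.
const numbers  = { num: (n, _ev) => n.value };
const addition = { add: (n, ev) => ev(n.left) + ev(n.right) };
const negation = { neg: (n, ev) => -ev(n.expr) };

// Compose a language variant from a chosen set of feature slices.
const makeLanguage = (...slices) => {
  const rules = Object.assign({}, ...slices);
  const evalNode = node => {
    const rule = rules[node.kind];
    if (!rule) throw new Error(`feature for '${node.kind}' not composed in`);
    return rule(node, evalNode);
  };
  return evalNode;
};

const arith = makeLanguage(numbers, addition, negation);
const result = arith({ kind: "add",
  left:  { kind: "num", value: 2 },
  right: { kind: "neg", expr: { kind: "num", value: 5 } } });
```

Dropping a slice yields a smaller language product that rejects the removed feature's constructs, which is the product-line flexibility the model aims for.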