A flexible model for dynamic linking in Java and C#
Dynamic linking supports flexible code deployment, allowing partially linked code to link further code on the fly, as needed.
Thus, end-users enjoy the advantage of automatically receiving updates, without needing any explicit actions on their side,
such as re-compilation or re-linking. On the down side, two executions of a program may link in different versions of code, which
in some cases causes subtle errors and may mystify end-users.
Dynamic linking in Java and C# is similar: the same linking phases are involved, soundness is based on similar ideas, and
executions which do not throw linking errors give the same result. The two are, however, not identical: the linking phases are combined
differently and take place in a different order. Consequently, linking errors may be detected at different times by the Java and C# runtime
systems.
We develop a non-deterministic model, which describes the behaviour of both Java and C# program executions. The nondeterminism
allows us to describe the design space, to distill the similarities between the two languages, and to use one proof of
soundness for both. We also prove that all execution strategies are equivalent with respect to terminating executions that do not
throw link errors: they give the same results.
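The equivalence of execution strategies can be illustrated with a toy model (a deliberately simplified sketch, not the paper's formal calculus): a straight-line program whose symbol references are resolved either eagerly, all up front, or lazily, on first use. For executions that raise no link errors, both strategies yield the same result; only the point at which a missing symbol is detected differs.

```python
# Toy model of eager vs. lazy reference resolution (illustrative only).
LIBRARY = {"square": lambda x: x * x, "inc": lambda x: x + 1}

class LinkError(Exception):
    pass

def run(program, strategy):
    """Execute a straight-line 'program' (a list of symbol names), starting from 1."""
    if strategy == "eager":
        # Resolve every reference before execution: a missing symbol fails up front.
        for name in program:
            if name not in LIBRARY:
                raise LinkError(name)
    value = 1
    for name in program:
        if name not in LIBRARY:  # lazy: a missing symbol fails only when reached
            raise LinkError(name)
        value = LIBRARY[name](value)
    return value

# An error-free execution gives the same result under both strategies.
assert run(["inc", "square"], "eager") == run(["inc", "square"], "lazy") == 4
```

A program referencing a missing symbol fails under both strategies; the eager run fails before any step executes, mirroring the different detection times of the Java and C# runtimes.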
Recovering Trace Links Between Software Documentation And Code
Introduction Software development involves creating various artifacts at different levels of abstraction, and establishing relationships between them is essential. Traceability link recovery (TLR) automates this process, enhancing software quality by aiding tasks like maintenance and evolution. However, automating TLR is challenging due to semantic gaps resulting from different levels of abstraction. While automated TLR approaches exist for requirements and code, architecture documentation lacks tailored solutions, hindering the preservation of architecture knowledge and design decisions.
Methods This paper presents our approach TransArC for TLR between architecture documentation and code, using component-based architecture models as intermediate artifacts to bridge the semantic gap. We create transitive trace links by combining the existing approach ArDoCo for linking architecture documentation to models with our novel approach ArCoTL for linking architecture models to code.
Results We evaluate our approaches with five open-source projects, comparing our results to baseline approaches. The model-to-code TLR approach achieves an average F1-score of 0.98, while the documentation-to-code TLR approach achieves a promising average F1-score of 0.82, significantly outperforming baselines.
Conclusion Combining two specialized approaches with an intermediate artifact shows promise for bridging the semantic gap. In future research, we will explore further possibilities for such transitive approaches.
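The core transitive idea can be sketched in a few lines (an assumption about the general composition, not TransArC's actual algorithm or data model): documentation-to-model links, as ArDoCo produces, are joined with model-to-code links, as ArCoTL produces, on the shared architecture-model element. The section names and file paths below are invented for illustration.

```python
# Hypothetical documentation-to-model and model-to-code link sets.
doc_to_model = {("Sec. 2.1", "LoadBalancer"), ("Sec. 3.4", "Cache")}
model_to_code = {("LoadBalancer", "lb/Router.java"),
                 ("Cache", "cache/LRU.java"),
                 ("Cache", "cache/Store.java")}

def compose(ab, bc):
    """Transitive links a -> c wherever a -> b and b -> c share the element b."""
    return {(a, c) for (a, b) in ab for (b2, c) in bc if b == b2}

doc_to_code = compose(doc_to_model, model_to_code)
assert ("Sec. 3.4", "cache/LRU.java") in doc_to_code
```

The join is where the intermediate artifact earns its keep: neither link set alone crosses the semantic gap, but their composition yields documentation-to-code links.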
Robusta: Taming the Native Beast of the JVM
Java applications often need to incorporate native-code components for efficiency and for reusing legacy code. However, it is well known that the use of native code defeats Java's security model. We describe the design and implementation of Robusta, a complete framework that provides safety and security to native code in Java applications. Starting from software-based fault isolation (SFI), Robusta isolates native code into a sandbox where dynamic linking/loading of libraries is supported and unsafe system modification and confidentiality violations are prevented. It also mediates native system calls according to a security policy by connecting to Java's security manager. Our prototype implementation of Robusta is based on Native Client and OpenJDK. Experiments with this prototype demonstrate that Robusta is effective and efficient, with modest runtime overhead on a set of JNI benchmark programs. Robusta can be used to sandbox native libraries used in Java's system classes to prevent attackers from exploiting bugs in the libraries. It can also enable trustworthy execution of mobile Java programs with native libraries. The design of Robusta should also be applicable when other type-safe languages (e.g., C#, Python) want to ensure safe interoperation with native libraries.
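The mediation step can be pictured with a toy sketch (invented names and policy, not Robusta's SFI implementation): a sandboxed library's system calls are checked against a security policy before they are allowed to execute, in the spirit of hooking the sandbox into Java's security manager.

```python
# Default-deny policy table: only listed call kinds may be permitted.
POLICY = {"read": True, "write": False, "connect": False}

class SecurityException(Exception):
    pass

def mediated_call(syscall, handler, *args):
    """Run 'handler' only if the policy permits 'syscall'; deny unknown calls."""
    if not POLICY.get(syscall, False):
        raise SecurityException(f"denied: {syscall}")
    return handler(*args)

# A permitted call goes through; a denied one raises before the handler runs.
assert mediated_call("read", lambda p: f"contents of {p}", "/tmp/x") == "contents of /tmp/x"
try:
    mediated_call("write", lambda *_: None, "/etc/passwd")
    assert False
except SecurityException:
    pass
```

The design point the sketch captures is default-deny: a call kind absent from the policy is treated as forbidden rather than permitted.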
Verilog-A Compact Semiconductor Device Modelling and Circuit Macromodelling with the QucsStudio-ADMS "Turn-Key" Modelling System
The Verilog-A “Analogue Device Model Synthesizer” (ADMS) has in recent years become an established modelling tool for GNU General Public License circuit simulator development; Qucs and ngspice are two examples of open-source circuit simulators that use ADMS. This paper presents a “turn-key” compact device modelling and circuit macromodelling system based on ADMS and implemented in the QucsStudio circuit design, simulation and manufacturing environment. A core feature of the new system is a modelling procedure which does not require users to manually patch the circuit simulator C++ code. At the start of a QucsStudio simulation, the software automatically detects any changes in Verilog-A model code, re-compiling and dynamically linking the modified code to the body of the QucsStudio code. The inherent flexibility of the “turn-key” system encourages rapid experimentation with analogue and RF compact device models. In this paper, QucsStudio “turn-key” modelling is illustrated by the design of a single-stage RF amplifier circuit.
Enhanced modeling features within TREETOPS
The original motivation for TREETOPS was to build a generic multi-body simulation and remove the burden of writing multi-body equations from the engineers. The motivation for the enhancement was twofold: (1) to extend the menu of built-in features (sensors, actuators, constraints, etc.) that do not require user code; and (2) to extend the control system design capabilities by linking with other government-funded software (NASTRAN and MATLAB). These enhancements also serve to bridge the gap between structures and controls groups. It is common on large space programs for the structures groups to build high-fidelity models of the structure using NASTRAN, and for the controls group to build lower-order models because they lack the tools to incorporate the former into their analysis. Now the controls engineers can accept the high-fidelity NASTRAN models into TREETOPS, add sensors and actuators, perform model reduction, and couple the result directly into MATLAB to perform their design. The controller can then be imported directly into TREETOPS for non-linear, time-history simulation.
Link-INVENT: generative linker design with reinforcement learning
In this work, we present Link-INVENT as an extension to the existing de novo molecular design platform REINVENT. We provide illustrative examples on how Link-INVENT can be applied to fragment linking, scaffold hopping, and PROTAC design case studies where the desirable molecules should satisfy a combination of different criteria. With the help of reinforcement learning, the agent used by Link-INVENT learns to generate favourable linkers connecting molecular subunits that satisfy diverse objectives, facilitating practical application of the model for real-world drug discovery projects. We also introduce a range of linker-specific objectives in the Scoring Function of REINVENT. The code is freely available at https://github.com/MolecularAI/Reinvent
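The reward signal driving such an agent can be pictured with a toy scoring function (the objective names and thresholds here are invented for illustration; REINVENT's actual Scoring Function API is not shown in the abstract): several linker-specific objectives are aggregated into a single reward in [0, 1].

```python
def linker_length_score(n_atoms, ideal=5, tol=3):
    """Score 1.0 at the ideal linker length, falling off linearly (invented metric)."""
    return max(0.0, 1.0 - abs(n_atoms - ideal) / tol)

def ring_penalty(n_rings):
    """Invented preference for acyclic linkers."""
    return 1.0 if n_rings == 0 else 0.5

def score(linker):
    # Geometric mean keeps the reward low if any single objective fails badly.
    a = linker_length_score(linker["n_atoms"])
    b = ring_penalty(linker["n_rings"])
    return (a * b) ** 0.5

assert score({"n_atoms": 5, "n_rings": 0}) == 1.0   # ideal on both objectives
assert score({"n_atoms": 11, "n_rings": 2}) == 0.0  # far off-length dominates
```

A multiplicative aggregate of this shape is one common way to force a generative agent to satisfy all objectives jointly rather than trading one off to zero.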
Exploranative Code Quality Documents
Good code quality is a prerequisite for efficiently developing maintainable
software. In this paper, we present a novel approach to generate exploranative
(explanatory and exploratory) data-driven documents that report code quality in
an interactive, exploratory environment. We employ a template-based natural
language generation method to create textual explanations about the code
quality, dependent on data from software metrics. The interactive document is
enriched by different kinds of visualization, including parallel coordinates
plots and scatterplots for data exploration and graphics embedded into text. We
devise an interaction model that allows users to explore code quality with
consistent linking between text and visualizations; through integrated
explanatory text, users are taught background knowledge about code quality
aspects. Our approach to interactive documents was developed in a design study
process that included software engineering and visual analytics experts.
Although the solution is specific to the software engineering scenario, we
discuss how the concept could generalize to multivariate data and report
lessons learned in a broader scope.
Comment: IEEE VIS VAST 201
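The template-based generation step can be sketched as follows (the thresholds, metric, and phrasings are invented for illustration; the paper's templates and metric set are not shown in the abstract): a metric value is mapped to a qualitative phrase via ordered thresholds and substituted into a sentence template.

```python
def qualify(value, thresholds):
    """Map a metric value to a qualitative phrase via ordered (limit, phrase) pairs."""
    for limit, phrase in thresholds:
        if value <= limit:
            return phrase
    return thresholds[-1][1]

# Hypothetical banding for cyclomatic complexity.
COMPLEXITY = [(5, "low"), (10, "moderate"), (float("inf"), "high")]

def explain(module, cyclomatic):
    level = qualify(cyclomatic, COMPLEXITY)
    return f"Module {module} has {level} cyclomatic complexity ({cyclomatic})."

assert explain("parser", 4) == "Module parser has low cyclomatic complexity (4)."
assert explain("linker", 12) == "Module linker has high cyclomatic complexity (12)."
```

Keeping the raw value in the generated sentence alongside the qualitative phrase is what lets such text link back to the visualizations it explains.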
Model Style Guidelines for Embedded Code Generation
Embedded systems are increasingly being developed using models. These models may have started with the system engineer or algorithm developer as an executable specification or algorithm description. However, these models now also serve as the entry point for software engineering, thanks to automatic embedded code generation. As a result, software engineers want to take advantage of these same models, adding constraints on system behavior; describing characteristics that are needed for implementation, such as fixed-point details; or linking components in the design to relevant parts of requirements and specification documents. This paper describes model style guidelines for automatically generating fixed-point and floating-point code for embedded systems. The guidelines are based on best practices and techniques derived from actual industry examples in aerospace and automotive companies worldwide.
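To make "fixed-point details" concrete, here is a small sketch of the kind of decision such guidelines address: quantizing a floating-point coefficient to a signed Q1.15 fixed-point value with saturation (the choice of Q1.15 is an invented example, not taken from the guidelines themselves).

```python
def to_q15(x):
    """Round-to-nearest Q1.15 with saturation to the representable range."""
    raw = int(round(x * (1 << 15)))
    return max(-(1 << 15), min((1 << 15) - 1, raw))

def from_q15(q):
    """Recover the approximate real value of a Q1.15 integer."""
    return q / (1 << 15)

q = to_q15(0.5)
assert q == 16384
assert from_q15(q) == 0.5
assert to_q15(1.0) == 32767   # saturates: 1.0 is just outside the Q1.15 range
```

Whether to saturate or wrap on overflow is exactly the sort of implementation characteristic a model-level style guideline would pin down before code generation.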
Privacy-Preserving Reengineering of Model-View-Controller Application Architectures Using Linked Data
When a legacy system’s software architecture cannot be redesigned, implementing
additional privacy requirements is often complex, unreliable and
costly to maintain. This paper presents a privacy-by-design approach to
reengineer web applications as linked data-enabled and implement access
control and privacy preservation properties. The method is based on the
knowledge of the application architecture, which for the Web of data is
commonly designed on the basis of a model-view-controller pattern. Whereas
wrapping techniques commonly used to link data of web applications duplicate
the security source code, the new approach allows for the controlled
disclosure of an application’s data, while preserving non-functional properties
such as privacy preservation. The solution has been implemented
and compared with existing linked data frameworks in terms of reliability,
maintainability, and complexity.
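The controlled-disclosure idea can be pictured with a toy model-view-controller sketch (the policy, roles, and field names are invented; this is not the paper's implementation): the controller filters a model's fields against an access-control policy before handing data to the view, instead of duplicating security code in a wrapper.

```python
# Hypothetical role-based disclosure policy: role -> set of visible fields.
POLICY = {"public": {"name", "title"},
          "admin": {"name", "title", "email", "ssn"}}

def controller(model, role):
    """Return only the model fields the role is allowed to see (default: none)."""
    allowed = POLICY.get(role, set())
    return {k: v for k, v in model.items() if k in allowed}

record = {"name": "Ada", "title": "Engineer", "email": "ada@example.org", "ssn": "123"}
assert controller(record, "public") == {"name": "Ada", "title": "Engineer"}
assert "ssn" in controller(record, "admin")
assert controller(record, "anonymous") == {}   # unknown roles see nothing
```

Centralizing the check in the controller is what avoids the duplicated security source code that the abstract attributes to wrapping techniques.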