200 research outputs found
Type Regression Testing to Detect Breaking Changes in Node.js Libraries
The npm repository contains JavaScript libraries that are used by millions of software developers. Its semantic versioning system relies on the ability to distinguish between breaking and non-breaking changes when libraries are updated. However, the dynamic nature of JavaScript often causes unintended breaking changes to be detected too late, which undermines the robustness of dependent applications.
We present a novel technique, type regression testing, to automatically determine whether an update of a library implementation affects the types of its public interface, according to how the library is being used by other npm packages. By leveraging available test suites of clients, type regression testing uses a dynamic analysis to learn models of the library interface. Comparing the models before and after an update effectively amplifies the existing tests by revealing changes that may affect the clients.
Experimental results on 12 widely used libraries show that the technique can identify type-related breaking changes with high accuracy. Fully automatically, it correctly classifies at least 90% of the updates as either major or minor/patch, and it detects 26 breaking changes among the minor and patch updates.
Type Regression Testing to Detect Breaking Changes in Node.js Libraries (Artifact)
This artifact provides an implementation of a novel technique, type regression testing, to automatically determine whether an update of an npm library implementation affects the types of its public interface, according to how the library is being used by other npm packages. Type regression testing is implemented in the tool NoRegrets. A run of NoRegrets is parameterized with a pre-update and a post-update version of the library, and it consists of three fully automatic phases. First, NoRegrets fetches a list of clients that depend upon the pre-update library and whose test suites succeed on the pre-update version. Second, NoRegrets uses ECMAScript 6 proxy-based instrumentation to generate API models of both the pre-update and post-update libraries, based on observations of how the client test suites interact with the library. Third, the two models are compared, and inconsistencies are reported as type regressions.
This artifact contains the source code and an installation of NoRegrets, with a guide for how to use the tool and reproduce the experimental results presented in the paper.
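The core idea above can be illustrated in miniature. The following is a hedged sketch of the proxy-based approach, not the actual NoRegrets implementation: client code exercises a library through an ES6 Proxy, a crude "API model" maps access paths to typeof tags, and the models of two versions are diffed.

```javascript
// Minimal sketch (not the NoRegrets code): learn an API model by observing
// client accesses through an ES6 Proxy, then diff two versions' models.

function learnModel(value, model, path) {
  const tag = typeof value;
  model.set(path, tag);
  if (value === null || (tag !== "object" && tag !== "function")) return value;
  return new Proxy(value, {
    get(target, prop, receiver) {
      // Wrap nested values so deeper property accesses are observed too.
      return learnModel(Reflect.get(target, prop, receiver), model, `${path}.${String(prop)}`);
    },
    apply(target, thisArg, args) {
      // Record the type of values the library returns to the client.
      return learnModel(Reflect.apply(target, thisArg, args), model, `${path}()`);
    },
  });
}

function typeRegressions(pre, post) {
  const diffs = [];
  for (const [path, tag] of pre) {
    if (post.has(path) && post.get(path) !== tag) {
      diffs.push(`${path}: ${tag} -> ${post.get(path)}`);
    }
  }
  return diffs;
}

// Toy "library" before and after an update: parse() now returns a bare string.
const v1 = { parse: (s) => ({ ok: true, value: s }) };
const v2 = { parse: (s) => s };

// The same "client test suite" exercises both versions.
const preModel = new Map(), postModel = new Map();
const clientTests = (lib) => { lib.parse("x"); };
clientTests(learnModel(v1, preModel, "lib"));
clientTests(learnModel(v2, postModel, "lib"));

console.log(typeRegressions(preModel, postModel)); // [ 'lib.parse(): object -> string' ]
```

The reported regression ("lib.parse(): object -> string") is exactly the kind of type-level breaking change that plain client tests of the pre-update version would not surface until an upgrade.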
A Large-Scale Empirical Study on Semantic Versioning in Golang Ecosystem
Third-party libraries (TPLs) have become an essential component of software,
accelerating development and reducing maintenance costs. However, breaking
changes often occur during the upgrades of TPLs and prevent client programs
from moving forward. Semantic versioning (SemVer) has been applied to
standardize the versions of releases according to compatibility, but not all
releases follow SemVer compliance. Much work has focused on SemVer compliance
in ecosystems such as Java and JavaScript, but not in Golang (Go for short).
Due to the lack of breaking-change detection tools and datasets for Go,
developers of TPLs do not know whether breaking changes occur and affect client
programs, and developers of client programs may hesitate to upgrade
dependencies for fear of breaking changes.
To bridge this gap, we conduct the first large-scale empirical study in the
Go ecosystem to study SemVer compliance in terms of breaking changes and their
impact. In detail, we propose GoSVI (Go Semantic Versioning Insight) to detect
breaking changes and analyze their impact by resolving identifiers in client
programs and comparing their types with breaking changes. Moreover, we collect
the first large-scale Go dataset with a dependency graph from GitHub, including
124K TPLs and 532K client programs. Based on the dataset, our results show that
86.3% of library upgrades follow SemVer compliance and 28.6% of non-major
upgrades introduce breaking changes. Furthermore, the tendency to comply with
SemVer has improved over time from 63.7% in 2018/09 to 92.2% in 2023/03.
Finally, we find 33.3% of downstream client programs may be affected by
breaking changes. These findings provide developers and users of TPLs with
valuable insights to help make decisions related to SemVer.
Comment: 11 pages, 4 figures
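The compliance notion studied above can be stated concretely. The sketch below is an assumed, simplified formalization (not GoSVI's code): given a pre- and post-upgrade version and whether breaking changes were detected, classify the bump and check it against SemVer rules.

```javascript
// Sketch of SemVer compliance classification (illustrative, not GoSVI).

function bumpLevel(pre, post) {
  const [a, b] = [pre, post].map((v) => v.split(".").map(Number));
  if (b[0] !== a[0]) return "major";
  if (b[1] !== a[1]) return "minor";
  return "patch";
}

// Compliance rule: breaking changes are only permitted in major bumps.
// Versions below 1.0.0 make no stability promises under SemVer.
function isCompliant(pre, post, hasBreakingChanges) {
  if (pre.startsWith("0.")) return true;
  return !hasBreakingChanges || bumpLevel(pre, post) === "major";
}

console.log(bumpLevel("1.4.2", "1.5.0"));         // "minor"
console.log(isCompliant("1.4.2", "1.5.0", true)); // false: breaking change in a minor bump
console.log(isCompliant("1.4.2", "2.0.0", true)); // true
```

Under this rule, the "28.6% of non-major upgrades introduce breaking changes" finding corresponds to upgrades where `isCompliant` would return false.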
Putting the Semantics into Semantic Versioning
The long-standing aspiration for software reuse has made astonishing strides
in the past few years. Many modern software development ecosystems now come
with rich sets of publicly-available components contributed by the community.
Downstream developers can leverage these upstream components, boosting their
productivity.
However, components evolve at their own pace. This imposes obligations on and
yields benefits for downstream developers, especially since changes can be
breaking and require additional downstream adaptation work. Upgrading too late
leaves downstream developers vulnerable to security issues and missing out on
useful improvements; upgrading too early results in excess work. Semantic versioning
has been proposed as an elegant mechanism to communicate levels of
compatibility, enabling downstream developers to automate dependency upgrades.
While it is questionable whether a version number can adequately characterize
version compatibility in general, we argue that developers would greatly
benefit from tools such as semantic version calculators to help them upgrade
safely. The time is now for the research community to develop such tools: large
component ecosystems exist and are accessible, component interactions have
become observable through automated builds, and recent advances in program
analysis make the development of relevant tools feasible. In particular,
contracts (both traditional and lightweight) are a promising input to semantic
versioning calculators, which can suggest whether an upgrade is likely to be
safe.
Comment: to be published as Onward! Essays 202
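A "semantic version calculator" of the kind the essay calls for can be sketched in a few lines. This is a toy illustration under strong assumptions: the "contract" of each exported function is reduced to its name and arity, and any contract change is conservatively treated as breaking.

```javascript
// Toy semantic version calculator: diff two API descriptions (name -> arity)
// and suggest the minimum compliant bump. Real calculators would compare
// richer contracts (types, pre/postconditions), as the essay argues.

function requiredBump(oldApi, newApi) {
  for (const [name, arity] of Object.entries(oldApi)) {
    // A removed or changed entry is conservatively treated as breaking.
    if (!(name in newApi) || newApi[name] !== arity) return "major";
  }
  for (const name of Object.keys(newApi)) {
    if (!(name in oldApi)) return "minor"; // backwards-compatible addition
  }
  return "patch"; // public interface unchanged
}

console.log(requiredBump({ get: 1, set: 2 }, { get: 1, set: 2, has: 1 })); // "minor"
console.log(requiredBump({ get: 1, set: 2 }, { get: 1 }));                 // "major"
```

Such a tool would let downstream developers automate the "safe to upgrade?" decision instead of trusting the version number alone.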
Set up of automated user interface testing system
Automation has evolved dramatically in the past decade, one aspect of this being automated software testing. The system used in this thesis is Nightwatch.js, a Node.js-based framework for testing web applications and websites. With the demonstration conducted in this paper, the author aims to show the vital role of automation testing frameworks in the software industry.
The commissioning party was Quux Oy, a software company located in Valkeakoski (Finland). Testing at the company is still done manually, and there is a pressing need to set up an automated user interface testing system.
The thesis project included a theoretical review using online sources such as articles, forums, electronic sources, and the official Nightwatch website. The theory focused on the definition of automation and on the Nightwatch.js system, along with all the features required to set it up. The implementation part examined and recorded the whole setup process.
The outcome of the thesis project is a test suite covering all the components that required testing. The targets of the thesis were achieved, together with a justification for using this testing system at Quux Oy and a record of its whole setup process.
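For orientation, a Nightwatch setup of the kind described centers on a configuration file. The fragment below is a minimal, assumed example (folder names, port, and browser choice are illustrative), using configuration keys from the Nightwatch documentation; it presumes ChromeDriver is installed as a dependency.

```javascript
// nightwatch.conf.js — minimal configuration sketch (paths/browser assumed)
module.exports = {
  src_folders: ["tests"], // where the test specs live

  webdriver: {
    start_process: true,                        // let Nightwatch manage the driver
    server_path: require("chromedriver").path,  // assumes chromedriver is installed
    port: 9515,
  },

  test_settings: {
    default: {
      desiredCapabilities: { browserName: "chrome" },
    },
  },
};
```

With this in place, test specs placed under `tests/` are discovered and run against the configured browser.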
Application to Security Testing
In a world where software plays an increasingly central role in daily life, a failure may bring unpleasant
consequences for its users. A serious example was the Apple iCloud security exploit of 2014,
in which several private photos of celebrities were accessed without permission [icl14a][icl14b].
Apart from economic and commercial implications, such faults lead users to lose trust
in the software, search for alternatives, and even abandon the old software for a new one.
To address these shortcomings, the software industry has adopted software testing to ensure
that software contains as few failures as possible before its deployment.
Software tests are used to analyse a program, in particular to search for bugs. This
analysis can be done without executing the program (static analysis) or during execution
(dynamic analysis). Static analysis tools can be used to check for potential executions of
the program that may fail at runtime due to unexpected events, producing an incorrect
result or even crashing. We studied several static analysis tools that analyse JavaScript
code: JSFlow, JSPrime, and TAJS. These tools have been modified so that they can be
integrated into the Nibiru framework.
Nibiru is a modular framework that aims to help in the implementation of software
testing. It uses a micro-services architecture, enabling the use of multiple programming
languages in its modules, and it can run its modules on multiple machines. So far, Nibiru
has three operating modules and is ready to start growing with the community, which can
contribute new modules or make small adjustments to existing testing software to
integrate it into the Nibiru framework.
Progressive Network Deployment, Performance, and Control with Software-defined Networking
The inflexible nature of traditional computer networks has led to tightly-integrated systems that are inherently difficult to manage and secure. New designs move low-level network control into software, creating software-defined networks (SDN). Augmenting an existing network with these enhancements can be expensive and complex. This research investigates solutions to these problems. It is hypothesized that an add-on device, or shim, could be used to make a traditional switch behave as an OpenFlow SDN switch while maintaining reasonable performance. A design prototype is found to cause approximately a 1.5% reduction in throughput for one flow and a less-than-twofold increase in latency, showing that such a solution may be feasible. It is hypothesized that a new design built on event-loop and reactive programming may yield a controller that is higher-performing and easier to program. The library node-openflow is found to have performance approaching that of professional controllers; however, it exhibits higher variability in response rate. The framework rxdn is found to exceed the performance of two comparable controllers by at least 33% with statistical significance in latency mode with 16 simulated switches, but is slower than the library node-openflow or professional controllers (e.g., Libfluid, ONOS, and NOX). Collectively, this work enhances the tools available to researchers, enabling experimentation and development toward more sustainable and secure infrastructure.
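As context for what controllers like node-openflow and rxdn handle, every OpenFlow 1.0 message begins with a fixed 8-byte header, and a controller's first exchange with a switch is a HELLO. The sketch below encodes and decodes that header with Node's Buffer; it is a minimal illustration, not code from the libraries named above.

```javascript
// Sketch: the fixed 8-byte OpenFlow 1.0 message header that an SDN controller
// must encode/decode on every message (version, type, length, transaction id).

const OFP_VERSION_1_0 = 0x01;
const OFPT_HELLO = 0; // message type 0 = HELLO

function encodeHeader(type, xid, payloadLength = 0) {
  const buf = Buffer.alloc(8);
  buf.writeUInt8(OFP_VERSION_1_0, 0);      // protocol version
  buf.writeUInt8(type, 1);                 // message type
  buf.writeUInt16BE(8 + payloadLength, 2); // total length, header included
  buf.writeUInt32BE(xid, 4);               // transaction id, echoed in replies
  return buf;
}

function decodeHeader(buf) {
  return {
    version: buf.readUInt8(0),
    type: buf.readUInt8(1),
    length: buf.readUInt16BE(2),
    xid: buf.readUInt32BE(4),
  };
}

// A controller's first action on a new switch connection is a HELLO exchange.
const hello = encodeHeader(OFPT_HELLO, 42);
console.log(decodeHeader(hello)); // { version: 1, type: 0, length: 8, xid: 42 }
```

An event-loop controller of the kind benchmarked above would parse this header from each incoming TCP segment and dispatch on the `type` field.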