Assessing Black-box Test Case Generation Techniques for Microservices
Testing of microservice architectures (MSA), today a popular software architectural style, demands automation of several of its tasks, such as test generation, prioritization, and execution. Automated black-box generation of test cases for MSA currently borrows techniques and tools from the testing of RESTful Web Services.
This paper: i) proposes uTest, a stateless pairwise combinatorial technique (and its automation tool) for generating test cases for functional and robustness microservice testing, and ii) experimentally compares, on three open-source MSA used as subjects, four state-of-the-art black-box tools conceived for Web Services, adopting evolutionary-, dependency-, and mutation-based generation techniques, against the proposed uTest combinatorial tool.
The comparison shows small differences in coverage values; uTest's pairwise testing achieves a better average failure rate with a considerably lower number of tests. Web Services tools do not perform as well for MSA as a tester might expect, highlighting the need for MSA-specific techniques.
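The paper's uTest tool itself is not detailed here; as a generic, hypothetical sketch of the pairwise combinatorial idea it builds on — covering every pair of parameter values with far fewer tests than the full Cartesian product — one might write (all parameter names below are illustrative, not from the paper):

```python
from itertools import combinations, product

def pairwise_tests(params):
    """Greedy all-pairs test generation: return a small list of test
    configurations such that every value pair of every two parameters
    appears in at least one test."""
    names = list(params)
    # All (param-index, value) pairs that must be covered together.
    uncovered = {
        ((i, vi), (j, vj))
        for i, j in combinations(range(len(names)), 2)
        for vi in params[names[i]]
        for vj in params[names[j]]
    }
    tests = []
    while uncovered:
        # Pick the candidate test covering the most still-uncovered pairs
        # (exhaustive candidate scan: fine for small parameter spaces).
        best, best_pairs = None, set()
        for combo in product(*(params[n] for n in names)):
            covered = {
                ((i, combo[i]), (j, combo[j]))
                for i, j in combinations(range(len(names)), 2)
            } & uncovered
            if len(covered) > len(best_pairs):
                best, best_pairs = combo, covered
        tests.append(dict(zip(names, best)))
        uncovered -= best_pairs
    return tests

# Hypothetical microservice request parameters: 2x2x2 = 8 exhaustive
# combinations, but pairwise coverage needs only 4 tests.
suite = pairwise_tests({
    "method": ["GET", "POST"],
    "auth": ["none", "token"],
    "body": ["json", "xml"],
})
```

The greedy set-cover heuristic above is only one way to build a pairwise suite; dedicated covering-array algorithms (e.g., IPOG-style construction) scale better when parameters have many values.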
An Empirical Evaluation of the Energy and Performance Overhead of Monitoring Tools on Docker-Based Systems
Context. Energy efficiency is gaining importance in the design of software systems, but is still marginally addressed in the area of microservice-based systems. Energy-related aspects often get neglected in favor of other software quality attributes, such as performance, service composition, maintainability, and security. Goal. The aim of this study is to identify, synthesize, and empirically evaluate the energy and performance overhead of monitoring tools employed in the microservices and DevOps context. Method. We selected four representative monitoring tools in the microservices and DevOps context. These were evaluated via a controlled experiment on an open-source Docker-based microservice benchmark system. Results. The results highlight: i) the specific frequency and workload conditions under which energy consumption and performance metrics are impacted by the tools; ii) the differences between the tools; iii) the relation between energy and performance overhead.
Reasoning-Based Software Testing
With software systems becoming increasingly pervasive and autonomous, our
ability to test their quality is severely challenged. Many systems are
called to operate in uncertain and highly changing environments, and are
not rarely required to make intelligent decisions by themselves. This
easily results in an intractable state space to explore at testing time.
State-of-the-art techniques try to keep pace, e.g., by augmenting the
tester's intuition with some form of (explicit or implicit) learning from
observations to search this space efficiently. For instance, they exploit
historical data to drive the search (e.g., ML-driven testing) or the test
execution data itself (e.g., adaptive or search-based testing). Despite
the indubitable advances, the need for smartening the search in such a
huge space remains pressing.
We introduce Reasoning-Based Software Testing (RBST), a new way of thinking
about the testing problem as a causal reasoning task. Compared to mere
intuition-based or state-of-the-art learning-based strategies, we claim that
causal reasoning more naturally emulates the process a human would follow to
"smartly" search the space. RBST aims to mimic and amplify, with the power of
computation, this ability. This conceptual leap can pave the way for a new
wave of techniques, which can be variously instantiated from the proposed
framework by exploiting the numerous tools for causal discovery and inference.
The preliminary results reported in this paper are promising.
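RBST is presented here only as a conceptual framework; as a loose, hypothetical toy of what "testing as causal inference" could mean in practice — estimating, from logged test runs, how strongly an input feature causally raises the failure rate (so the search can be steered toward it) — one might sketch the following. All names (`cache_off`, `load`, `failed`) are invented for illustration, and the estimator is plain backdoor adjustment over one confounder, not the paper's method:

```python
from collections import defaultdict

def ate_on_failure(runs, treatment, confounder):
    """Estimate the average treatment effect of a boolean input feature
    on test failure via backdoor adjustment over a single confounder.
    `runs` is a list of dicts with keys: treatment, confounder, 'failed'."""
    # Stratify runs by confounder value (the backdoor adjustment set).
    strata = defaultdict(list)
    for r in runs:
        strata[r[confounder]].append(r)
    effect, total = 0.0, len(runs)
    for group in strata.values():
        treated = [r["failed"] for r in group if r[treatment]]
        control = [r["failed"] for r in group if not r[treatment]]
        if not treated or not control:
            continue  # stratum offers no treated/control contrast; skip it
        diff = sum(treated) / len(treated) - sum(control) / len(control)
        effect += diff * len(group) / total  # weight by stratum size
    return effect

# Toy execution log: disabling the cache always fails, under low and
# high load alike, so its estimated causal effect on failure is 1.0.
log = [
    {"cache_off": True,  "load": "low",  "failed": 1},
    {"cache_off": False, "load": "low",  "failed": 0},
    {"cache_off": True,  "load": "high", "failed": 1},
    {"cache_off": False, "load": "high", "failed": 0},
]
```

A testing loop in this spirit would rank candidate input features by such effect estimates and spend its budget on the highest-ranked ones, rather than sampling the state space uniformly.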