eCrash: a framework for performing evolutionary testing on third-party Java components
This paper presents a tool for generating test data by employing evolutionary search techniques, based on the information provided by the structural analysis and interpretation of the Java bytecode of third-party Java components, and on the dynamic execution of the instrumented test object.
The main objective of this approach is to evolve a set of test cases that yields full structural code coverage of the test object. Such a test set can be used to perform the testing activity effectively, providing confidence in the quality and robustness of the test object.
The rationale for working at the bytecode level is that, even when the source code is unavailable, structural testing requirements can still be derived and used to assess the quality of a test set and to guide the evolutionary search towards reaching specific test goals.
Evolutionary unit-testing of third party object-oriented Java software
Dissertation presented to the Universidad de Extremadura for the Diploma de Estudios Avanzados, supervised by Francisco Fernández de Vega.
Evolutionary Testing is an emerging methodology for automatically generating high-quality test data. The focus of this work is on presenting a search-based approach for the unit testing of third-party object-oriented Java software.
Test cases are represented and evolved using the Strongly Typed Genetic Programming paradigm, which effectively mimics the inheritance and polymorphic properties of object-oriented programs and enables the maintenance of call dependences when applying tree construction, mutation, or crossover.
Our strategy for evaluating the quality of test cases includes instrumenting the test object for basic-block analysis and structural event dispatch, and executing the instrumented test object using the generated test cases as "inputs", in order to collect trace information and derive coverage metrics.
Static analysis, instrumentation, and execution tracing are performed solely on the basis of the high-level information extracted from the Java bytecode of the test object. Given that the test object's source code is often unavailable, working at the bytecode level broadens the scope of applicability of our approach; it can be used, for instance, to perform structural testing on third-party Java components.
Test objects are represented internally by weighted control-flow graphs; strategies involving dynamic weight reevaluation are introduced for favouring test cases that exercise problematic structures and difficult control-flow paths. The aim is to efficiently guide the search process towards achieving full structural coverage, which often involves promoting the definition of complex and intricate test cases that establish elaborate state scenarios.
The work performed so far has allowed us to develop a prototype of a test case generation tool, called eCrash. Experiments have been carried out and quality solutions have been found, demonstrating the pertinence of our methodology and encouraging further studies.
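The weighted control-flow-graph strategy above can be sketched in a few lines. This is an illustrative reconstruction, not the actual eCrash implementation: the class and method names (`WeightedCfgFitness`, `fitness`, `reevaluate`) and the multiplicative decay rule are assumptions made for the example.

```java
import java.util.*;

// Hypothetical sketch of weighted-CFG fitness with dynamic weight
// reevaluation: every basic block starts with weight 1.0, a test case's
// fitness is the sum of the weights of the blocks it covers, and the
// weight of a block decays each time it is covered, so the search comes
// to favour test cases that reach rarely exercised blocks.
public class WeightedCfgFitness {
    private final Map<Integer, Double> blockWeight = new HashMap<>();

    public WeightedCfgFitness(Set<Integer> blockIds) {
        for (int id : blockIds) blockWeight.put(id, 1.0);
    }

    /** Fitness of a test case = sum of weights of the blocks its trace covers. */
    public double fitness(Set<Integer> coveredBlocks) {
        double f = 0.0;
        for (int id : coveredBlocks) f += blockWeight.getOrDefault(id, 0.0);
        return f;
    }

    /** Dynamic weight reevaluation: decay the weight of every covered block. */
    public void reevaluate(Set<Integer> coveredBlocks, double decay) {
        for (int id : coveredBlocks)
            blockWeight.computeIfPresent(id, (k, w) -> w * decay);
    }
}
```

With a decay factor below 1.0, blocks that every generation covers contribute less and less to fitness, which is one simple way to realise the "favour difficult control-flow paths" idea described above.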
Using dynamic analysis of Java bytecode for evolutionary object-oriented unit testing
This paper presents a methodology for generating and optimizing test data by employing evolutionary search techniques, based on the information provided by the analysis and interpretation of Java bytecode and on the dynamic execution of the instrumented test object.
The main reason to work at the bytecode level is that, even when the source code is unavailable, structural testing requirements can still be derived and used to assess the quality of a given test set and to guide the evolutionary search towards reaching specific test goals.
Java bytecode retains enough high-level information about the original source code for an underlying model for program representation to be built. The observations required to select or generate test data are obtained by employing dynamic analysis techniques, i.e. by instrumenting, tracing, and analysing Java bytecode.
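The dynamic-analysis side described above can be illustrated with a minimal trace collector. This is a hedged sketch under assumed names (`Tracer`, `hit`, `coverage`); the actual instrumentation described in the paper rewrites bytecode, whereas here we only show the runtime-side bookkeeping such instrumentation would call into.

```java
import java.util.*;

// Hypothetical runtime trace collector: instrumentation is assumed to
// insert a call to Tracer.hit(blockId) at the entry of each basic
// block, and coverage metrics are derived from the collected trace.
public class Tracer {
    private static final Set<Integer> covered = new HashSet<>();

    /** Called by instrumented code whenever a basic block is entered. */
    public static void hit(int blockId) { covered.add(blockId); }

    /** Fraction of the test object's basic blocks exercised so far. */
    public static double coverage(int totalBlocks) {
        return (double) covered.size() / totalBlocks;
    }

    /** Clear the trace between test-case evaluations. */
    public static void reset() { covered.clear(); }
}
```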
Detecting, Isolating and Enforcing Dependencies Between and Within Test Cases
Testing stateful applications is challenging, as it can be difficult to identify hidden dependencies on program state. These dependencies may manifest between several test cases, or simply within a single test case. When it is left to developers to document, understand, and respond to these dependencies, a mistake can result in unexpected and invalid test results. Although current testing infrastructure does not leverage state dependency information, we argue that it could, and that by doing so testing can be improved. Our results thus far show that by recovering dependencies between test cases and modifying the popular testing framework JUnit to utilize this information, we can optimize the testing process, reducing the time needed to run tests by 62% on average. Our ongoing work is to apply similar analyses to improve existing state-of-the-art test suite prioritization techniques and state-of-the-art test case generation techniques. This work is advised by Professor Gail Kaiser.
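Once dependencies between test cases have been recovered, scheduling them amounts to ordering a dependency graph. The sketch below shows one standard way to do that (Kahn's topological sort); the class name and map-based representation are assumptions for illustration, not the modified JUnit implementation the abstract describes.

```java
import java.util.*;

// Minimal sketch: recovered test-case dependencies form a DAG, and a
// topological order of that DAG is a valid execution order in which
// every test runs after the tests whose state it depends on.
public class TestScheduler {
    /** deps.get(t) = set of tests that must run before t. */
    public static List<String> order(Map<String, Set<String>> deps) {
        Map<String, Integer> indegree = new HashMap<>();
        Map<String, List<String>> dependents = new HashMap<>();
        for (String t : deps.keySet()) indegree.putIfAbsent(t, 0);
        for (Map.Entry<String, Set<String>> e : deps.entrySet()) {
            for (String before : e.getValue()) {
                indegree.putIfAbsent(before, 0);
                indegree.merge(e.getKey(), 1, Integer::sum);
                dependents.computeIfAbsent(before, k -> new ArrayList<>())
                          .add(e.getKey());
            }
        }
        Deque<String> ready = new ArrayDeque<>();
        for (Map.Entry<String, Integer> e : indegree.entrySet())
            if (e.getValue() == 0) ready.add(e.getKey());
        List<String> result = new ArrayList<>();
        while (!ready.isEmpty()) {
            String t = ready.poll();
            result.add(t);
            for (String d : dependents.getOrDefault(t, List.of()))
                if (indegree.merge(d, -1, Integer::sum) == 0) ready.add(d);
        }
        if (result.size() != indegree.size())
            throw new IllegalStateException("cyclic test dependency");
        return result;
    }
}
```

Independent tests (indegree zero throughout) can also be partitioned and run in parallel, which is one plausible source of the kind of speedup reported above.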
Automated Unit Testing of Evolving Software
As software programs evolve, developers need to ensure that new changes do not affect the originally intended functionality of the program. To increase their confidence, developers commonly write unit tests along with the program, and execute them after a change is made. However, manually writing these unit tests is difficult and time-consuming, and as their number increases, so does the cost of executing and maintaining them.
Automated test generation techniques have been proposed in the literature to assist developers in the endeavour of writing these tests. However, it remains an open question how well these tools can help with fault finding in practice, and maintaining these automatically generated tests may require extra effort compared to human-written ones.
This thesis evaluates the effectiveness of a number of existing automatic unit test generation techniques at detecting real faults, and explores how these techniques can be improved. In particular, we present a novel multi-objective search-based approach for generating tests that reveal changes across two versions of a program. We then investigate whether these tests can be used such that no maintenance effort is necessary.
Our results show that, overall, state-of-the-art test generation tools can indeed be effective at detecting real faults: collectively, the tools revealed more than half of the bugs we studied. We also show that our proposed alternative technique, which is better suited to the problem of revealing changes, can detect more faults, and does so more frequently. However, we also find that for a majority of object-oriented programs, even a random search can achieve good results. Finally, we show that such change-revealing tests can be generated on demand in practice, without requiring them to be maintained over time.
Assessing Cognitive Complexity in Java-Based Object-Oriented Systems: Metrics and Tool Support
Software cognitive complexity refers to how demanding the mental process of performing tasks such as coding, testing, debugging, or modifying source code is. Achieving low levels of cognitive complexity is crucial for ensuring high levels of software maintainability, which is one of the most rewarding software quality attributes. Therefore, in order to control and ensure software maintainability, it is first necessary to accurately quantify software cognitive complexity. Along these lines, this paper presents a software metric to assess cognitive complexity in Object-Oriented (OO) systems, and particularly those developed in the Java language, which is very popular among OO programming languages. The proposed metric is based on a characterization of basic control structures present in Java systems. Several algorithms to compute the metric and their materialization in the Eclipse IDE are also introduced. Finally, a theoretical validation of the metric against a framework specially designed to validate software complexity metrics is presented, and the applicability of the tool is shown by illustrating the metric in the context of ten real-world Java projects and relevant metrics from the well-known Chidamber-Kemerer metric suite.
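To make the "characterization of basic control structures" idea concrete, here is a minimal sketch of a cognitive-complexity-style count over an abstract tree of a method's control structures, using the common convention that nesting makes structures costlier. The representation, names, and exact increment rule (1 plus nesting depth per structure) are assumptions for the example; the paper's own metric and weights may differ.

```java
import java.util.List;

// Illustrative cognitive-complexity-style score: each control structure
// (if, for, while, switch, ...) is a Node, with the control structures
// nested inside it as children. A structure contributes 1 plus its
// nesting depth, so deeply nested code scores higher than flat code.
public class CognitiveComplexity {
    public record Node(String kind, List<Node> children) {}

    /** Sum contributions of all structures at the given nesting depth. */
    public static int score(List<Node> structures, int depth) {
        int total = 0;
        for (Node n : structures) {
            total += 1 + depth;                    // this structure
            total += score(n.children(), depth + 1); // nested structures
        }
        return total;
    }
}
```

For example, an `if` containing a `for` scores 1 + 2 = 3, while the same two structures written sequentially score 1 + 1 = 2, capturing the intuition that nesting is harder to understand than sequence.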
mCrash: a framework for the evaluation of mobile devices' trustworthiness properties
Dissertation presented to the Faculdade de Ciências e Tecnologia da Universidade de Coimbra for the degree of Master in Informatics and Systems, supervised by Mário Alberto Zenha-Rela. Only an abstract of the dissertation is available.
Mobile devices, such as smartphones, are being used by virtually every modern individual. Such devices are expected to work continuously and flawlessly for years, despite having been designed without criticality requirements. However, the requirements of mobility, digital identification and authentication lead to an increasing dependence of societies on the correct behaviour of these "proxies for the individual".
The Windows Mobile 5.0 release has delivered a new set of internal state monitoring services, centralized into the State and Notifications Broker. This API was designed to be used by context-aware applications, providing comprehensive monitoring of the internal state and resources of mobile devices. We propose using this service to increase the dependability of mobile applications by showing, through a series of fault-injection campaigns, that this novel API is very effective for error propagation profiling and monitoring.
Architectures v/s Microservices
As technology evolves, it has always found better ways to build applications and improve their efficiency. New techniques have been learned by adapting old technologies and observing how markets shift towards new trends to satisfy their customers and shareholders. By taking Service-Oriented Architecture (SOA) and evolving techniques in cloud computing, Web 2.0 emerged with a new pattern for designing architectures that evolved from the conventional monolithic approach, known as microservice architecture (MSA). This new pattern develops an application by breaking a substantial application into a group of smaller applications, which run in their own processes and communicate through an API. This style of application development is suitable for many infrastructures, especially within a cloud environment. These new patterns advanced to satisfy the concepts of domain-driven design, continuous integration, and automated infrastructure more effectively. MSA has created a way to develop and deploy small scalable applications, which allows enterprise-level applications to dynamically adjust to their resources.
This paper discusses what this architecture is, what makes it necessary, what factors affect best-fit architecture choices, how microservices-based architecture has evolved, and what factors are driving service-based architectures, in addition to comparing SOA and microservices. By analyzing a few popular architectures, the factors that help in choosing an architecture design will be compared with MSA to show the benefits and challenges that may arise as an enterprise shifts its development architecture to microservices.