Tracing Execution of Software for Design Coverage
Test suites are designed to validate the operation of a system against
requirements. One important aspect of a test suite design is to ensure that
system operation logic is tested completely. A test suite should drive a system
through all abstract states to exercise all possible cases of its operation.
This is a difficult task. Code coverage tools support test suite designers by
providing information about which parts of the source code are covered during
system execution. Unfortunately, code coverage tools produce only source code
coverage information. For a test engineer, it is often hard to understand what
the uncovered parts of the source code do and how they relate to requirements.
We propose a generic approach that provides design coverage of the executed
software simplifying the development of new test suites. We demonstrate our
approach on common design abstractions such as statecharts, activity diagrams,
message sequence charts and structure diagrams. We implement the design
coverage using the Third Eye tracing and trace analysis framework. Using design
coverage, test suites can be created faster by focusing on untested design
elements.
Comment: Short version of this paper to be published in Proceedings of the 16th
IEEE International Conference on Automated Software Engineering (ASE 2001).
13 pages, 9 figures.
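The core idea — mapping runtime trace events back onto design-level abstractions such as statechart states — can be sketched as follows. This is a minimal illustration under assumed names, not the Third Eye API; the state set and trace record format are hypothetical.

```python
# Illustrative design-coverage sketch for a statechart. ALL_STATES and
# the trace-event format are hypothetical, not the Third Eye framework.
ALL_STATES = {"Idle", "Connecting", "Active", "Error", "Closed"}

def statechart_coverage(trace_events):
    """Map traced state-entry events onto the design's state set and
    report which design elements remain untested."""
    visited = {e["state"] for e in trace_events if e["kind"] == "enter"}
    covered = visited & ALL_STATES
    uncovered = ALL_STATES - covered
    return covered, uncovered, len(covered) / len(ALL_STATES)

trace = [
    {"kind": "enter", "state": "Idle"},
    {"kind": "enter", "state": "Connecting"},
    {"kind": "enter", "state": "Active"},
]
covered, uncovered, ratio = statechart_coverage(trace)
# `uncovered` points the test designer at the untested design elements.
```

Unlike plain line coverage, the `uncovered` set speaks the designer's vocabulary (states, not source lines), which is the gap the paper targets.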
Automatic Detection of Defects in Applications without Test Oracles
In application domains that do not have a test oracle, such as machine learning and scientific computing, quality assurance is a challenge because it is difficult or impossible to know in advance what the correct output should be for general input. Previously, metamorphic testing has been shown to be a simple yet effective technique in detecting defects, even without an oracle. In metamorphic testing, the application's "metamorphic properties" are used to modify existing test case input to produce new test cases in such a manner that, when given the new input, the new output can easily be computed based on the original output. If the new output is not as expected, then a defect must exist. In practice, however, metamorphic testing can be a manually intensive technique for all but the simplest cases. The transformation of input data can be laborious for large data sets, and errors can occur in comparing the outputs when they are very complex. In this paper, we present a tool called Amsterdam that automates metamorphic testing by allowing the tester to easily set up and conduct metamorphic tests with little manual intervention, merely by specifying the properties to check, configuring the framework, and running the software. Additionally, we describe an approach called Heuristic Metamorphic Testing, which addresses issues related to false positives and non-determinism, and we present the results of new empirical studies that demonstrate the effectiveness of metamorphic testing techniques at detecting defects in real-world programs without test oracles.
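As a minimal illustration of the technique itself (not the Amsterdam tool), consider a `mean` function: we cannot say what the "correct" mean of arbitrary input is without an oracle, but we can state properties relating outputs on transformed inputs to the original output.

```python
import math
import random

def mean(values):
    # Function under test: no oracle gives the expected output for
    # arbitrary input, but metamorphic properties must still hold.
    return sum(values) / len(values)

def check_metamorphic_properties(values):
    original = mean(values)

    # Property 1: permuting the input must not change the output.
    shuffled = values[:]
    random.shuffle(shuffled)
    assert math.isclose(mean(shuffled), original)

    # Property 2: scaling every element by k scales the mean by k.
    k = 3.0
    assert math.isclose(mean([v * k for v in values]), original * k)

    # Property 3: appending the mean itself leaves the mean unchanged.
    assert math.isclose(mean(values + [original]), original)

check_metamorphic_properties([1.5, 2.0, 8.25, -4.0])
```

A violated assertion signals a defect even though no single "expected output" was ever specified — exactly the situation the paper automates at scale.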
Log-based software monitoring: a systematic mapping study
Modern software development and operations rely on monitoring to understand
how systems behave in production. The data provided by application logs and
runtime environment are essential to detect and diagnose undesired behavior and
improve system reliability. However, despite the rich ecosystem around
industry-ready log solutions, monitoring complex systems and getting insights
from log data remains a challenge.
Researchers and practitioners have been actively working to address several
challenges related to logs, e.g., how to effectively provide better tooling
support for logging decisions to developers, how to effectively process and
store log data, and how to extract insights from log data. A holistic view of
the research effort on logging practices and automated log analysis is key to
provide directions and disseminate the state-of-the-art for technology
transfer.
In this paper, we study 108 papers (72 research track papers, 24 journal papers,
and 12 industry track papers) from different communities (e.g., machine
learning, software engineering, and systems) and structure the research field
in light of the life-cycle of log data.
Our analysis shows that (1) logging is challenging not only in open-source
projects but also in industry, (2) machine learning is a promising approach to
enable a contextual analysis of source code for log recommendation but further
investigation is required to assess the usability of those tools in practice,
(3) few studies approached efficient persistence of log data, and (4) there are
open opportunities to analyze application logs and to evaluate state-of-the-art
log analysis techniques in a DevOps context.
Policy Driven Software Monitoring
Software monitoring and logging are among the most important tools a software
engineer has when faced with the challenge of auditing or analysing a software
system. However, the difficulty of effectively monitoring a system, managing its
logs, and cross-referencing them with source code makes software re-engineering a
laborious and complex task. This thesis aims to address this issue by providing
a framework that enables pattern matching between a software log and an event
pattern expression derived from a monitoring policy. The framework consists of
parsers and annotators that facilitate the transformation of a monitoring policy into
a Petri net, as well as source code annotation for gathering data through logged
events. The thesis further expands upon this work by proposing an adaptive logging
framework that will greatly improve the quality of log management by autonomically
adjusting the amount of information logged based on the application's operational
environment. Finally, a prototype of the policy-driven monitoring framework
is implemented and tested with applications of different scales as a proof of
concept for the proposed framework.
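A drastically simplified sketch of the underlying idea — checking a log against an event-pattern policy — might look like the following. The log format, policy (every `open` must eventually be matched by a `close`), and function names are all illustrative; the thesis formalizes such policies as Petri nets rather than this ad-hoc check.

```python
import re

# Hypothetical log format: lines contain "open resource=<id>" or
# "close resource=<id>". The policy checked here is illustrative.
EVENT_RE = re.compile(r"(?P<action>open|close)\s+resource=(?P<rid>\w+)")

def check_open_close_policy(log_lines):
    """Return a list of policy violations found in the log."""
    open_resources = set()
    violations = []
    for line in log_lines:
        m = EVENT_RE.search(line)
        if not m:
            continue  # lines outside the policy's event alphabet are ignored
        action, rid = m.group("action"), m.group("rid")
        if action == "open":
            open_resources.add(rid)
        elif rid in open_resources:
            open_resources.discard(rid)
        else:
            violations.append(f"close without open: {rid}")
    violations.extend(f"never closed: {rid}" for rid in sorted(open_resources))
    return violations
```

A Petri-net encoding generalizes this: places track resource states, and a log event that cannot fire any enabled transition is a policy violation.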
Requirement-based Root Cause Analysis Using Log Data
Root cause analysis for software systems is a challenging diagnostic task due to complexity emanating from the interactions between system components. Furthermore, the sheer size of the logged data often makes it difficult for human operators and administrators to perform problem diagnosis and root cause analysis. The diagnostic task is further complicated by the lack of models that could be used to support the diagnostic process. Traditionally, this diagnostic task is conducted by human experts who create mental models of systems in order to generate hypotheses and conduct the analysis, even in the presence of incomplete logged data. A challenge in this area is to provide the necessary concepts, tools, and techniques for the operators to focus their attention on specific parts of the logged data and ultimately to automate the diagnostic process.
The work described in this thesis proposes a framework that includes techniques, formalisms, and algorithms for automating the process of root cause analysis. In particular, this work uses annotated requirement goal models to represent the monitored systems' requirements and runtime behavior. The goal models are used in combination with log data to generate a ranked set of diagnostics that represent the combination of tasks whose failure led to the observed failure. In addition, the framework uses a combination of word-based and topic-based information retrieval techniques to reduce the size of the log data by filtering out a subset of it to facilitate the diagnostic process. The process of log data filtering and reduction is based on goal model annotations and generates a sequence of logical literals that represent the possible system observations. A second level of investigation consists of looking for evidence of any malicious activity (i.e., intentionally caused by a third party) leading to task failures. This analysis uses annotated anti-goal models that denote possible actions that can be taken by an external user to threaten a given system task. The framework uses a novel probabilistic approach based on Markov Logic Networks. Our experiments show that our approach improves over existing proposals by handling uncertainty in observations, using natively generated log data, and providing ranked diagnoses. The proposed framework has been evaluated using a test environment based on commercial off-the-shelf software components, a publicly available Java-based ATM machine, and the large, publicly available DARPA 2000 dataset.
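The word-based half of the log-reduction step can be sketched very simply: keep only the log lines that share vocabulary with a failed task's goal-model annotation. The annotation keywords and log lines below are hypothetical, and the thesis combines this word-based idea with topic-based retrieval and Markov Logic Networks, none of which this sketch attempts.

```python
# Illustrative word-based log reduction: keep only lines that share
# vocabulary with a goal-model annotation for the failed task.
def filter_log(log_lines, annotation_keywords):
    keywords = {k.lower() for k in annotation_keywords}
    return [line for line in log_lines
            if set(line.lower().replace("=", " ").split()) & keywords]

logs = [
    "2023-01-01 withdraw failed account=42",
    "2023-01-01 heartbeat ok",
    "2023-01-01 dispense cash account=42",
]
relevant = filter_log(logs, {"withdraw", "dispense", "cash"})
# Only the two withdrawal-related lines survive the reduction,
# shrinking the search space for the subsequent diagnostic step.
```

Filtering before diagnosis matters because the ranking step is combinatorial in the number of candidate observations; shrinking the literal set first keeps it tractable.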