INFUSE Test Management
This technical report consists of two papers discussing testing technology. INFUSE: Integration Testing with Crowd Control describes the test management facilities provided by the INFUSE change management system. INFUSE partially automates the construction of test harnesses and regression test suites at each level of the integration hierarchy from components available from lower levels. Adequate Testing and Object-Oriented Programming applies the axioms of adequate testing to object-oriented programming languages and examines their implications. Contrary to our original expectations, we discover that in the general case classes must be retested in every context of reuse.
Automatic Test Data Generation Using Data Flow Information
This paper presents a tool for automatically generating test data for Pascal programs that satisfies the data flow criteria. Unlike existing tools, our tool is not limited to Pascal programs whose program flow graph contains read statements in only one node, but rather deals with read statements appearing in any node in the program flow graph. Moreover, our tool handles loops and arrays, two features that are traditionally difficult to handle in test data generation systems. This allows us to generate tests for larger programs than those previously reported in the literature.
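The data flow criteria mentioned in this abstract are built on def-use associations: pairs of a statement that defines a variable and a statement that later uses it along a definition-clear path. As an illustration only (not the paper's tool), such pairs can be enumerated on a toy flow graph; the graph encoding and node numbering below are hypothetical.

```python
# Sketch: enumerating def-use pairs on a toy control-flow graph, the
# associations a data-flow-adequate test set must exercise.
# Illustrative only; not the tool described in the abstract.

# Each node maps to (variables defined, variables used); EDGES gives
# successors. Roughly models:
#   1: read(x)  2: if x > 0  3: y := x  4: y := -x  5: write(y)
NODES = {
    1: ({"x"}, set()),
    2: (set(), {"x"}),
    3: ({"y"}, {"x"}),
    4: ({"y"}, {"x"}),
    5: (set(), {"y"}),
}
EDGES = {1: [2], 2: [3, 4], 3: [5], 4: [5], 5: []}

def def_use_pairs(nodes, edges):
    """Enumerate (var, def_node, use_node) triples reachable from each
    definition along a definition-clear path, by DFS."""
    pairs = set()
    for d, (defs, _) in nodes.items():
        for var in defs:
            stack, seen = list(edges[d]), set()
            while stack:
                n = stack.pop()
                if n in seen:
                    continue
                seen.add(n)
                if var in nodes[n][1]:        # use of var at node n
                    pairs.add((var, d, n))
                if var not in nodes[n][0]:    # path is still def-clear
                    stack.extend(edges[n])
    return pairs

print(sorted(def_use_pairs(NODES, EDGES)))
```

A data-flow-adequate test set would then need inputs driving execution through each of these pairs; the hard part the abstract alludes to, solving for inputs in the presence of loops and arrays, is precisely what the sketch leaves out.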
Comparing test sets and criteria in the presence of test hypotheses and fault domains
A number of authors have considered the problem of comparing test sets and criteria. Ideally, test sets are compared using a preorder with the property that test set T1 is at least as strong as T2 if, whenever T2 determines that an implementation p is faulty, T1 will also determine that p is faulty. This notion can be extended to test criteria. However, it has been noted that very few test sets and criteria are comparable under such an ordering; instead, orderings are based on weaker properties such as the subsumes relation. This paper explores an alternative approach, in which comparisons are made in the presence of a test hypothesis or fault domain. This approach allows strong statements about fault-detecting ability to be made while still allowing a number of test sets and criteria to be compared. It may also drive incremental test generation.
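The preorder described above becomes decidable once the fault domain is an explicit, finite set of candidate faulty implementations: T1 is at least as strong as T2 when every implementation T2 detects is also detected by T1. A minimal sketch, with an invented specification (absolute value) and invented faulty variants, not taken from the paper:

```python
# Sketch of the "at least as strong" preorder over an explicit fault
# domain. The oracle, fault domain, and test sets are illustrative.

def detects(test_set, impl, oracle):
    """A test set detects impl iff some input yields a wrong output."""
    return any(impl(x) != oracle(x) for x in test_set)

def at_least_as_strong(t1, t2, fault_domain, oracle):
    """Preorder: whenever t2 flags an implementation in the fault
    domain as faulty, t1 must flag it too."""
    return all(detects(t1, p, oracle)
               for p in fault_domain if detects(t2, p, oracle))

oracle = abs                          # specification: absolute value
faults = [lambda x: x,                # forgets to negate negatives
          lambda x: -x,               # always negates
          lambda x: max(x, 0)]        # clamps instead of reflecting
T1 = [-2, 0, 3]
T2 = [3]

print(at_least_as_strong(T1, T2, faults, oracle))  # prints True
print(at_least_as_strong(T2, T1, faults, oracle))  # prints False
```

Here T2 only detects the always-negate fault, which T1 also detects, so T1 is at least as strong as T2; the converse fails because T1 additionally detects the two faults that behave correctly on positive inputs. Restricting attention to a fault domain is exactly what makes such pairs comparable.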
The development of a program analysis environment for Ada
A unit-level Ada software module testing system, called Query Utility Environment for Software Testing of Ada (QUEST/Ada), is described. The project calls for the design and development of a prototype system. QUEST/Ada design began with a definition of the overall system structure and a description of component dependencies. The project team was divided into three groups to resolve the preliminary designs of the parser/scanner, the test data generator, and the test coverage analyzer. The Phase 1 report is a working document from which the system documentation will evolve. It provides history, a guide to report sections, a literature review, the definition of the system structure and high-level interfaces, descriptions of the prototype scope, the three major components, and the plan for the remainder of the project. The appendices include specifications, statistics, two papers derived from the current research, a preliminary users' manual, and the proposal and work plan for Phase 2.
Equality to equals and unequals: a revisit of the equivalence and nonequivalence criteria in object-oriented software testing
Improving the Dependability of Machine Learning Applications
As machine learning (ML) applications become prevalent in various aspects of everyday life, their dependability takes on increasing importance. It is challenging to test such applications, however, because they are intended to learn properties of data sets where the correct answers are not already known. Our work is not concerned with testing how well an ML algorithm learns, but rather seeks to ensure that an application using the algorithm implements the specification correctly and fulfills the users' expectations. These are critical to ensuring the application's dependability. This paper presents three approaches to testing these types of applications. In the first, we create a set of limited test cases for which it is, in fact, possible to predict what the correct output should be. In the second approach, we use random testing to generate large data sets according to parameterization based on the application's equivalence classes. Our third approach is based on metamorphic testing, in which properties of the application are exploited to define transformation functions on the input, such that the new output can easily be predicted based on the original output. Here we discuss these approaches and our findings from testing the dependability of three real-world ML applications.
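The metamorphic idea in the third approach can be illustrated without any oracle for the "correct" answer: pick an input transformation whose effect on the output is known in advance, such as permuting the input, and check that the implementation honors it. The function under test below is a stand-in chosen for brevity, not one of the paper's applications:

```python
# Sketch of a metamorphic test. No oracle says what the correct output
# is; instead, a known metamorphic relation (permutation invariance)
# must hold. The function under test is an illustrative stand-in.

import random

def mean(xs):
    """Application under test: should not depend on input order."""
    return sum(xs) / len(xs)

def metamorphic_permutation_test(f, xs, trials=100, tol=1e-9):
    """Check that f is invariant under random permutations of its
    input, the transformation function of this metamorphic relation."""
    baseline = f(xs)
    rng = random.Random(0)          # seeded for reproducible trials
    for _ in range(trials):
        shuffled = xs[:]
        rng.shuffle(shuffled)
        if abs(f(shuffled) - baseline) > tol:
            return False            # relation violated: likely a fault
    return True

print(metamorphic_permutation_test(mean, [3.0, 1.0, 4.0, 1.0, 5.0]))
```

Other relations follow the same shape: for example, scaling every input by k should scale the mean by k. A buggy implementation that, say, dropped the last element after sorting would typically violate one of these relations even though no single "expected output" was ever specified.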
End-user testing for the Lyee methodology using the screen transition paradigm and WYSIWYT
End-user specification of Lyee programs is one goal envisioned by the Lyee methodology. But with any software development effort comes the possibility of faults. Thus, providing end users a means to enter their own specifications is not enough; they must also be provided with the means to find faults in their specifications, in a manner that is appropriate not only for the end user's programming environment but also for his or her background. In this paper, we present an approach to solving this problem that marries two proven technologies for end users. One methodology for enabling end users to program is the screen transition paradigm. One useful visual testing methodology is "What you see is what you test" (WYSIWYT). In this paper, we show that WYSIWYT test adequacy criteria can be used with the screen transition paradigm, and present a systematic translation from this paradigm to the formal model underlying WYSIWYT.
Key words: WYSIWYT, screen transition diagram, end-user software testing
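One way to picture a test adequacy criterion over a screen transition diagram is as coverage of the diagram's transitions: each test run is a sequence of screens, and adequacy grows as more transitions are exercised. This is only a simplified sketch in the spirit of the abstract; the screen names, diagram, and the coverage measure itself are invented here, not the paper's formal model.

```python
# Sketch: transition coverage over a (made-up) screen transition
# diagram. Each test run is the sequence of screens a user visits;
# adequacy is the fraction of diagram transitions exercised.

TRANSITIONS = {           # screen -> screens reachable by one action
    "login":   ["menu"],
    "menu":    ["search", "login"],
    "search":  ["results"],
    "results": ["search", "menu"],
}

ALL_EDGES = {(s, t) for s, succs in TRANSITIONS.items() for t in succs}

def coverage(runs):
    """Fraction of diagram transitions exercised by the test runs."""
    exercised = set()
    for run in runs:
        exercised |= {e for e in zip(run, run[1:]) if e in ALL_EDGES}
    return len(exercised) / len(ALL_EDGES)

runs = [["login", "menu", "search", "results", "menu"],
        ["login", "menu", "login"]]
print(f"{coverage(runs):.2f}")    # prints 0.83 (5 of 6 transitions)
```

In a WYSIWYT-style interface, this measure would be surfaced visually, with untested transitions highlighted so the end user can see where further testing is needed, rather than reported as a number.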