Fault-based regression testing in a reactive environment
Regression testing is the process of retesting software after modification and is a major factor contributing to the high cost of software maintenance. To control this cost, regression testing must be accomplished efficiently through effective reuse of test cases and judicious generation of new test cases. Fault-based testing focuses on the detection of particular classes of faults. RELAY is a fault-based testing technique that guarantees the detection of errors caused by any fault in a chosen fault classification. RELAY can be used as a regression testing technique to generate the test cases required to demonstrate that a modification is properly made. In addition, the information related to a test case chosen to detect a potential fault guides the selection of previously selected test cases that should be reused for a given modification. This paper presents the concepts behind RELAY and discusses how RELAY could be used as a regression testing technique. It also describes a testing environment, based on integrating the RELAY model with other testing techniques, that supports reactive regression testing as well as testing throughout the development lifecycle.
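As an illustrative sketch (hypothetical names and data, not the RELAY selection algorithm itself), one common way to reuse prior test cases is to rerun only those whose recorded coverage touches the modified statements, generating new tests only for modifications no reused test exercises:

```python
# Illustrative regression test selection: rerun only the previously
# selected tests whose recorded coverage includes a modified statement.

existing_tests = {
    "t1": {1, 2, 3},      # statements each test is known to execute
    "t2": {1, 4, 5},
    "t3": {1, 2, 6},
}

def select_for_rerun(coverage, modified):
    """Pick tests whose coverage intersects the modified statements."""
    return sorted(t for t, stmts in coverage.items() if stmts & modified)

modified = {2, 6}
reused = select_for_rerun(existing_tests, modified)
print(reused)        # ['t1', 't3']

# Modified statements exercised by no reused test still need new tests.
uncovered = modified - set().union(*(existing_tests[t] for t in reused))
print(uncovered)     # set() -- here every modified statement is covered
```

The sketch captures only the reuse-versus-generate split the abstract describes; RELAY additionally uses the fault-detection conditions behind each test to decide which ones remain relevant.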
Testing based on the RELAY model of error detection
RELAY, a model for error detection, defines revealing conditions that guarantee that a fault originates an error during execution and that the error transfers through computations and data flow until it is revealed. This model of error detection provides a fault-based criterion for test data selection. The model is applied by choosing a fault classification, instantiating the conditions for the classes of faults, and applying them to the program being tested. Such an application guarantees the detection of errors caused by any fault of the chosen classes. As a formal model of error detection, RELAY provides the basis for an automated testing tool. This paper presents the concepts behind RELAY, describes why it is better than other fault-based testing criteria, and discusses how RELAY could be used as the foundation for a testing system.
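A minimal sketch of the origination/transfer idea (hypothetical example, not RELAY's formal conditions): a test reveals a fault only if the faulty subexpression first computes a wrong intermediate value (origination) and that wrong value then survives the surrounding computation to reach observable output (transfer):

```python
# Illustrative revealing conditions for a single operator fault:
# the correct program uses a + b, the faulty variant uses a - b.

def correct(a, b):
    return max(a + b, 0)      # correct subexpression: a + b

def faulty(a, b):
    return max(a - b, 0)      # fault: '+' replaced by '-'

def originates(a, b):
    # origination: the faulty subexpression computes a different value
    return (a + b) != (a - b)

def transfers(a, b):
    # transfer: the erroneous value survives to the program's output
    return correct(a, b) != faulty(a, b)

# (a=1, b=2): originates (3 != -1) and transfers (3 != 0) -> revealed
assert originates(1, 2) and transfers(1, 2)
# (a=1, b=0): no origination (1 == 1), so this test cannot reveal the fault
assert not originates(1, 0)
# (a=-5, b=2): originates (-3 != -7) but max() masks it (0 == 0),
# so the error does not transfer -> fault not revealed by this test
assert originates(-5, 2) and not transfers(-5, 2)
```

The third case is the interesting one: criteria that only force an erroneous intermediate value, without a transfer condition, would accept a test like (-5, 2) even though it detects nothing.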
An analysis of test data selection criteria using the RELAY model of fault detection
RELAY is a model of faults and failures that defines failure conditions, which describe test data for which execution will guarantee that a fault originates erroneous behavior that also transfers through computations and information flow until a failure is revealed. This model of fault detection provides a framework within which the capabilities of other testing criteria can be evaluated. In this paper, we analyze three test data selection criteria that attempt to detect faults in six fault classes. This analysis shows that none of these criteria is capable of guaranteeing detection for these fault classes, and it points out two major weaknesses of these criteria. The first weakness is that the criteria do not consider the potential unsatisfiability of their rules; each criterion includes rules that are sufficient to cause potential failures for some fault classes, yet when such rules are unsatisfiable, many faults may remain undetected. The second weakness is a failure to integrate their proposed rules; although a criterion may cause a subexpression to take on an erroneous value, no effort is made to guarantee that the intermediate values cause observable, erroneous behavior. This paper shows how the RELAY model overcomes these weaknesses.
A formal evaluation of data flow path selection criteria
A number of path selection criteria have been proposed over the years. Unfortunately, little work has been done on comparing these criteria. To determine what would be an effective path selection criterion for revealing errors in programs, we have undertaken an evaluation of these criteria. This paper reports the results of our evaluation of path selection criteria based on data flow relationships. We show how these criteria relate to each other, thereby demonstrating some of their strengths and weaknesses. In addition, we suggest minor changes to some criteria that improve their performance. We conclude with a discussion of the major limitations of these criteria and directions for future research.
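As an illustrative sketch (a hypothetical five-node program, not an example from the paper), data-flow criteria such as all-uses require tests to cover definition-use associations: pairs where a definition of a variable reaches a use along some path with no intervening redefinition:

```python
# Enumerating definition-use (du) pairs for a tiny control-flow graph.
# nodes: 1: x = input()  2: if x > 0  3: y = x * 2  4: y = -x  5: print(y)
defs = {"x": [1], "y": [3, 4]}
uses = {"x": [2, 3, 4], "y": [5]}
paths = [[1, 2, 3, 5], [1, 2, 4, 5]]   # the two feasible paths

def du_pairs(var):
    """du pairs: a definition reaches a use on some def-clear path."""
    pairs = []
    for d in defs[var]:
        for u in uses[var]:
            for p in paths:
                if d in p and u in p and p.index(d) < p.index(u):
                    # def-clear: no redefinition strictly between d and u
                    between = p[p.index(d) + 1:p.index(u)]
                    if not any(n in defs[var] for n in between):
                        pairs.append((d, u))
                        break
    return pairs

print(du_pairs("x"))  # [(1, 2), (1, 3), (1, 4)]
print(du_pairs("y"))  # [(3, 5), (4, 5)]
```

Criteria then differ in how many of these associations, and how many paths per association, a test suite must exercise; the paper's comparison orders such criteria by the associations they force tests to cover.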
Prototyping a process-centered environment
This paper describes an experimental system developed and used as a vehicle for prototyping the Arcadia-1 software development environment. Prototyping is viewed as a knowledge acquisition process and is used to reduce risks in software development by gaining rapid feedback about the suitability of a production system before the system is completed. Prototyping a software development environment is particularly important due to the lack of experience with such environments. There is an acute need to acquire knowledge about user interaction requirements for software environments. These needs are especially important for the Arcadia project, as it is one of the first attempts to construct a process-centered environment. Our prototyping effort addresses questions about effective interaction with a process-centered environment by simulating how Arcadia-1 would interact with users in a representative range of usage scenarios. We built a prototyping system, called PRODUCER, and used it to generate a variety of prototypes simulating user interactions with Arcadia-1 process programs. Experience with PRODUCER indicates that our approach is effective at risk reduction. The prototypes greatly improved communication with our customer. They confirmed some of our design decisions but also redirected our research efforts as a result of unexpected insight. We also found that prototyping usage scenarios provides conceptual guides and design information for process programmers. Most of the benefits of our prototyping effort derive from developing and interacting with usage scenarios, so our approach is generalizable to other prototyping systems. This paper reports on our prototyping approach and our experience in prototyping a process-centered environment.
Cancer Biology Data Curation at the Mouse Tumor Biology Database (MTB)
Many advances in the field of cancer biology have been made using mouse models of human cancer. The Mouse Tumor Biology (MTB, http://tumor.informatics.jax.org) database provides web-based access to data on spontaneous and induced tumors from genetically defined mice (inbred, hybrid, mutant, and genetically engineered strains of mice). These data include standardized tumor names and classifications, pathology reports and images, mouse genetics, genomic and cytogenetic changes occurring in the tumor, strain names, tumor frequency and latency, and literature citations.

Although the primary source for the data represented in MTB is the peer-reviewed scientific literature, an increasing amount of data is derived from disparate sources. MTB includes annotated histopathology images and cytogenetic assay images for mouse tumors where these data are available from The Jackson Laboratory's mouse colonies and from outside contributors. MTB encourages direct submission of mouse tumor data and images from the cancer research community and provides investigators with a web-accessible tool for image submission and annotation.

Integrated searches of the data in MTB are facilitated by the use of several controlled vocabularies and by adherence to standard nomenclature. MTB also provides links to other related online resources such as the Mouse Genome Database, Mouse Phenome Database, the Biology of the Mammary Gland Web Site, Festing's Listing of Inbred Strains of Mice, the JAX® Mice Web Site, and the Mouse Models of Human Cancers Consortium's Mouse Repository. 

MTB provides access to data on mouse models of cancer via the internet and has been designed to facilitate the selection of experimental models for cancer research, the evaluation of mouse genetic models of human cancer, the review of patterns of mutations in specific cancers, and the identification of genes that are commonly mutated across a spectrum of cancers.

MTB is supported by NCI grant CA089713.
Creating Teaching Opportunities for STEM Future Faculty Development
Graduate school is an important time for future faculty to develop teaching skills, but teaching opportunities are limited. Discipline-related course work and research do not provide the pedagogy, strategies, and skills needed to teach effectively and compete for higher education jobs. As future faculty, graduate students will influence the future of science, technology, engineering, and mathematics (STEM) education through their teaching. The purpose of this case study was to examine future faculty's (graduate students') perceived teaching development during a semester-long STEM teaching development course. Findings included STEM future faculty's teaching confidence and skill development in instructional design, preparation, and facilitation; greater development in skill awareness than student awareness and self-awareness; and a focus on knowledge-centered learning environments for future classroom teaching experiences.
The i-School movement
No abstract. Peer reviewed. http://deepblue.lib.umich.edu/bitstream/2027.42/57317/1/14504301131_ftp.pd