    The development of a program analysis environment for Ada

    A unit-level Ada software module testing system, called Query Utility Environment for Software Testing of Ada (QUEST/Ada), is described. The project calls for the design and development of a prototype system. QUEST/Ada design began with a definition of the overall system structure and a description of component dependencies. The project team was divided into three groups to produce the preliminary designs of the parser/scanner, the test data generator, and the test coverage analyzer. The Phase 1 report is a working document from which the system documentation will evolve. It provides the project history, a guide to the report sections, a literature review, the definition of the system structure and high-level interfaces, descriptions of the prototype scope, the three major components, and the plan for the remainder of the project. The appendices include specifications, statistics, two papers derived from the current research, a preliminary users' manual, and the proposal and work plan for Phase 2.

    Using Rule-based Structure to Evaluate Rule-based System Testing Completeness: A Case Study of Loci and Quick Test

    Rule-based systems are tested by developing a set of inputs that will produce already-known outputs. The problem with this form of testing is that the system code is not considered when generating test cases, which makes software testing completeness difficult to measure. This matters because all of the computational models are constructed within the code; to show that the models of the system are tested, it must be shown that the code is tested. Chem uses the Loci rule-based application framework to build computational fluid dynamics models, and these models are tested using the Quick Test suite. The data flow structure built by Loci, along with Quick Test, provided a case study for the research. The test suite was compared against three levels of coverage, and the measures indicated that the lowest level of coverage was not achieved. This shows that structural coverage measures can be used to measure rule-based system testing completeness.
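
    The underlying idea carries beyond Loci: once the rule dependency graph is known, coverage can be computed as the fraction of rules (or of producer-to-consumer edges) that a test suite actually fires. The Python sketch below is a hypothetical illustration under that assumption; the rule graph, the fired-rule trace, and the function names are invented, not the paper's Quick Test tooling, and the two measures stand in for two of the coverage levels the abstract mentions.

        # Hypothetical structural-coverage measurement for a rule-based system.
        # 'rules' maps each rule name to the rules whose outputs it consumes.
        rules = {
            "density": [],
            "pressure": ["density"],
            "flux": ["density", "pressure"],
            "residual": ["flux"],
        }

        # Rules observed firing while the test suite ran (e.g., from a trace log).
        fired = {"density", "pressure", "flux"}

        def rule_coverage(rules, fired):
            """Fraction of rules exercised at least once (the lowest level)."""
            return len(fired & rules.keys()) / len(rules)

        def edge_coverage(rules, fired):
            """Fraction of producer->consumer dependencies exercised."""
            edges = [(src, dst) for dst, srcs in rules.items() for src in srcs]
            covered = [e for e in edges if e[0] in fired and e[1] in fired]
            return len(covered) / len(edges)

        print(f"rule coverage: {rule_coverage(rules, fired):.0%}")  # 75%
        print(f"edge coverage: {edge_coverage(rules, fired):.0%}")  # 75%

    Here the uncovered "residual" rule immediately shows why the suite falls short of even rule-level coverage, which is the kind of gap the case study's measures exposed.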

    RECORDING AND EVALUATING INDUSTRY BLACK BOX COVERAGE MEASURES

    Software testing is an indispensable part of the software development process. The main goal of a test engineer is to choose a subset of test cases which reveal most of the faults in a program. A coverage measure can be used to evaluate how good the selected subset of test cases is. Test case coverage for a program was traditionally calculated from the white box (internal structure) perspective. However, test cases are usually constructed to test particular functionality of a program, so a technique to calculate coverage from the functionality (black box) perspective will be beneficial for a test engineer. In this thesis we discuss a methodology for recording and evaluating black box coverage for a program. We also implement a black box coverage calculation tool and perform experiments with it using three subject programs. We then collect and analyze the experimental data and show the relationship between the two types of coverage and the fault-finding ability of a test suite.
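
    As an illustration of the idea, black box coverage can be recorded by pairing each functional requirement with a predicate that decides whether a test exercises it, then counting the requirements hit by the suite. The following Python sketch is hypothetical; the requirements, predicates, and test suite are invented and do not reflect the thesis's tool or subject programs.

        # Hypothetical black box (functional) coverage measurement.
        # Each requirement is paired with a predicate over a test input that
        # decides whether the test exercises that piece of functionality.
        requirements = {
            "empty-input":   lambda s: s == "",
            "single-token":  lambda s: len(s.split()) == 1,
            "multi-token":   lambda s: len(s.split()) > 1,
            "numeric-input": lambda s: s.strip().isdigit(),
        }

        test_suite = ["hello", "hello world", "42"]

        covered = {name for name, hits in requirements.items()
                   if any(hits(t) for t in test_suite)}

        print(f"black box coverage: {len(covered)}/{len(requirements)} "
              f"({len(covered) / len(requirements):.0%})")
        print("uncovered:", sorted(requirements.keys() - covered))

    Pairing this measure with a white box tool's statement or branch coverage on the same suite is what allows the two perspectives to be compared against fault-finding ability.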

    Swarm testing

    Swarm testing is a novel and inexpensive way to improve the diversity of test cases generated during random testing. Increased diversity leads to improved coverage and fault detection. In swarm testing, the usual practice of potentially including all features in every test case is abandoned. Rather, a large "swarm" of randomly generated configurations, each of which omits some features, is used, with configurations receiving equal resources. We have identified two mechanisms by which feature omission leads to better exploration of a system's state space. First, some features actively prevent the system from executing interesting behaviors; e.g., "pop" calls may prevent a stack data structure from executing a bug in its overflow detection logic. Second, even when there is no active suppression of behaviors, test features compete for space in each test, limiting the depth to which logic driven by features can be explored. Experimental results show that swarm testing increases coverage and can improve fault detection dramatically; for example, in a week of testing it found 42% more distinct ways to crash a collection of C compilers than did the heavily hand-tuned default configuration of a random tester.
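
    A minimal Python sketch of the configuration-generation step, using the stack example from the abstract; the feature names, omission probability, and test length are illustrative assumptions, not the paper's C-compiler harness.

        # Hypothetical swarm testing of a stack under test.
        # Instead of every test using all operations, each swarm configuration
        # enables a random subset of features, so tests that omit "pop" can
        # drive the stack deep enough to reach overflow-handling code.
        import random

        FEATURES = ["push", "pop", "peek", "clear"]

        def random_config(p_omit=0.5):
            """A swarm configuration: each feature independently omitted."""
            cfg = [f for f in FEATURES if random.random() >= p_omit]
            return cfg or FEATURES  # guard against an empty configuration

        def random_test(config, length=50):
            """A random test drawing operations only from enabled features."""
            return [random.choice(config) for _ in range(length)]

        # Equal resources per configuration: same number of tests for each.
        swarm = [random_config() for _ in range(100)]
        suite = [random_test(cfg) for cfg in swarm for _ in range(10)]

        deep = sum("pop" not in cfg for cfg in swarm)
        print(f"{deep} of {len(swarm)} configurations omit 'pop'")

    Roughly half the configurations omit any given feature, so behaviors suppressed by that feature get a sustained share of the testing budget instead of almost none.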

    Forest disturbance and recovery: A general review in the context of spaceborne remote sensing of impacts on aboveground biomass and canopy structure

    Abrupt forest disturbances generating gaps >0.001 km² impact roughly 0.4–0.7 million km² a⁻¹. Fire, windstorms, logging, and shifting cultivation are dominant disturbances; minor contributors are land conversion, flooding, landslides, and avalanches. All can have substantial impacts on canopy biomass and structure. Quantifying disturbance location, extent, severity, and the fate of disturbed biomass will improve carbon budget estimates and lead to better initialization, parameterization, and/or testing of forest carbon cycle models. Spaceborne remote sensing maps large-scale forest disturbance occurrence, location, and extent, particularly with moderate- and fine-scale resolution passive optical/near-infrared (NIR) instruments. High-resolution remote sensing (e.g., ∼1 m passive optical/NIR, or small footprint lidar) can map crown geometry and gaps, but has rarely been systematically applied to study small-scale disturbance and natural mortality gap dynamics over large regions. Reducing uncertainty in disturbance and recovery impacts on global forest carbon balance requires quantification of (1) predisturbance forest biomass; (2) disturbance impact on standing biomass and its fate; and (3) rate of biomass accumulation during recovery. Active remote sensing data (e.g., lidar, radar) are more directly indicative of canopy biomass and many structural properties than passive instrument data; a new generation of instruments designed to generate global coverage/sampling of canopy biomass and structure can improve our ability to quantify the carbon balance of Earth's forests. Generating a high-quality quantitative assessment of disturbance impacts on canopy biomass and structure with spaceborne remote sensing requires comprehensive, well designed, and well coordinated field programs collecting high-quality ground-based data and linkages to dynamical models that can use this information.

    A cloud classification scheme applied to the breakup region of marine stratocumulus

    A major goal of the marine stratocumulus (MSc) segment of FIRE is to describe and explain the temporal and spatial variability in fractional cloud cover. The challenge from a theoretical standpoint is to correctly represent the mechanisms leading to the transitions between solid stratus, stratocumulus, and trade wind cumulus. The development and testing of models accounting for fractional cloudiness require an observational data base that will come primarily from satellites. This, of course, is one of the missions of the ISCCP. There are a number of satellite cloud analysis programs being undertaken as part of FIRE. One that has already produced data from the FIRE MSc experiment is the spatial coherence method (Coakley and Baldwin, 1984). This method produces information on fractional cloud coverage and cloud heights. It may be possible, however, to extract more information on cloud structure from satellite data that might be of use in describing the transitions in the marine stratocumulus cloud deck. Potential applications of a cloud analysis scheme relying on more detailed analysis of visible and infrared cloud radiance statistics are explored. For this preliminary study, data are examined from three days during the 1987 FIRE MSc field work. These case studies provide a basis for comparison and evaluation of the technique.

    WEB Based Applications Testing: Analytical Approach towards Model Based Testing and Fuzz Testing

    Web-based applications are structurally complex and face an immense number of exploitation attacks, so testing should be done proactively in order to identify threats in the application. An intruder can discover these security loopholes and exploit the application, resulting in economic loss, so testing the application becomes a crucial phase of development. The main objective of testing is to secure the contents of applications, through either a static or an automated approach. Software houses usually follow fuzz-based testing, in which flaws are exposed by randomly inputting invalid data; model-based testing (MBT), on the other hand, is an automated approach that tests the application from all perspectives on the basis of an abstract model of the application. The main theme of this research is to study the difference between fuzz-based testing and MBT in terms of test coverage, performance, cost, and time. This work guides the web application practitioner in selecting a suitable methodology for different testing scenarios, saving the effort spent on testing and producing a better, breach-free product.
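
    To make the contrast concrete, here is a hypothetical Python sketch of both approaches applied to a toy request handler; the handler, the input model, and the planted flaw are invented for illustration. Random fuzzing is unlikely to synthesize the structured input that triggers the flaw, while the model-derived cases hit it directly, illustrating the kind of coverage difference the study examines.

        # Hypothetical contrast of fuzz testing vs. model-based testing.
        import random
        import string

        def handle_query(q: str) -> int:
            """Toy stand-in for a web endpoint with an input-handling flaw."""
            if "<script>" in q:
                raise ValueError("unsanitized markup reached the handler")
            return len(q)

        def fuzz_test(trials=10_000, max_len=20):
            """Fuzz testing: random, often invalid, inputs."""
            failures = []
            for _ in range(trials):
                q = "".join(random.choice(string.printable)
                            for _ in range(random.randrange(max_len)))
                try:
                    handle_query(q)
                except Exception as exc:
                    failures.append((q, exc))
            return failures

        def model_based_test():
            """MBT: cases derived from an abstract model of input classes."""
            model = ["", "ok", "a" * 4096, "<script>alert(1)</script>"]
            failures = []
            for q in model:
                try:
                    handle_query(q)
                except Exception as exc:
                    failures.append((q, exc))
            return failures

        print("fuzzing failures:", len(fuzz_test()))             # almost certainly 0
        print("model-based failures:", len(model_based_test()))  # finds the flaw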