
    Term testing: a case study

    Purpose and background: The litigation world has many examples of cases where the volume of Electronically Stored Information (ESI) demands that litigators use automated means to assist with document identification, classification, and filtering. This case study describes one such process for one case; it is not a comprehensive analysis of the entire case, only of the Term Testing portion. Term Testing is an analytical practice of refining match terms by running in-depth analysis on a sample of documents. The goal of term testing is to reduce, as far as possible, the number of false negatives (relevant or privileged documents with no match, also known as “misdetections”) and false positives (documents matched but not actually relevant or privileged).

    The case was an employment discrimination suit against a government agency. The collection effort turned up common sources of ESI: hard drives, network shares, CDs and DVDs, and routine e-mail storage and backups. Initial collection, interviews, and reviews revealed that a few key documents, such as old versions of policies, had not been retained or collected. Then an unexpected source of information was unearthed: one network administrator had been running an unauthorized “just-in-case” tracer on the email system, outside the agency’s document retention policies, which had created dozens of tapes holding millions of encrypted, compressed emails and covering more years than the agency’s routine email backups. The agency decided to process and review these tracer emails for the missing key documents, even though doing so would greatly increase the overall volume of relevant documents.

    The agency therefore had clear motivation to reduce the volume of documents flowing into relevancy and privilege reviews, but had concerns about the defensibility of using an automated process to determine which documents would never be reviewed. The case litigators and Subject Matter Experts (SMEs) decided to use a process of Term Testing to ensure that the automated filtering was both defensible and as accurate as possible.
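    The abstract does not spell out the case study's actual procedure, but the measurement that term testing implies is easy to sketch. Below is a minimal Python illustration of scoring one candidate match term against a hand-reviewed sample; the document structure and all names are our assumptions, not details from the case.

        # Sketch of term-testing metrics against a reviewed sample.
        # Assumes each sample document carries a ground-truth label;
        # all names here are illustrative, not from the case study.

        def term_test(sample, term):
            """Tally hits and misses for one candidate match term."""
            counts = {"tp": 0, "fp": 0, "fn": 0, "tn": 0}
            for doc in sample:
                hit = term.lower() in doc["text"].lower()
                if hit and doc["relevant"]:
                    counts["tp"] += 1
                elif hit:
                    counts["fp"] += 1   # matched, but not actually relevant
                elif doc["relevant"]:
                    counts["fn"] += 1   # relevant but missed ("misdetection")
                else:
                    counts["tn"] += 1
            return counts

        sample = [
            {"text": "Draft retention policy, 1998 revision", "relevant": True},
            {"text": "Holiday party lunch schedule", "relevant": False},
        ]
        print(term_test(sample, "policy"))  # {'tp': 1, 'fp': 0, 'fn': 0, 'tn': 1}

    Iterating this over each proposed term, then adjusting the terms and re-running against the same sample, is the refinement loop the abstract calls Term Testing.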

    Dynamics of Playa Lakes in the Texas High Plains, Progress Report 1 Dec. 1973 - 31 Jan. 1974

    There are no author-identified significant results in this report.

    Dynamics of Playa Lakes in the Texas High Plains

    Use of ERTS-1 imagery for a census of lake basins in the Texas High Plains.

    Revising Z: part I - logic and semantics

    This is the first of two related papers. We introduce a simple specification logic ZC, comprising a logic and a semantics (in ZF set theory) within which the logic is sound. We then provide an interpretation for (a rational reconstruction of) the specification language Z within ZC. As a result, we obtain a sound logic for Z, including a basic schema calculus.
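    To give a flavour of what such an interpretation must handle (our own illustration, not an example taken from the paper), a basic Z schema can be read as the set of bindings of its declared variables that satisfy its predicate:

        \[
        Counter \;\mathrel{\widehat=}\; [\, count : \mathbb{N} \mid count \leq 100 \,]
        \quad\text{denotes}\quad
        \{\, count : \mathbb{N} \mid count \leq 100 \,\}
        \]

    A semantics in ZF set theory assigns such a set to every schema, and the logic is then shown sound with respect to that assignment.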

    Revising Z: part II - logical development

    This is the second of two related papers. In "Revising Z: Part I - logic and semantics" (this journal) we introduced a simple specification logic ZC, comprising a logic and a semantics (in ZF set theory). We then provided an interpretation for (a rational reconstruction of) the specification language Z within ZC. As a result, we obtained a sound logic for Z, including the basic schema calculus. In this paper we extend the basic framework with more sophisticated features (including schema operations), and we mount a critique of a number of concepts used in Z. We further demonstrate that the complications and confusions which these concepts introduce can be avoided without compromising expressibility.
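    As a reminder of the kind of schema operation at issue (a standard textbook illustration, not one drawn from the paper), schema conjunction merges the declarations of its operands and conjoins their predicates:

        \[
        S \;\mathrel{\widehat=}\; [\, x : \mathbb{N} \mid x > 0 \,]
        \qquad
        T \;\mathrel{\widehat=}\; [\, x : \mathbb{N};\; y : \mathbb{N} \mid y < x \,]
        \]
        \[
        S \land T \;=\; [\, x : \mathbb{N};\; y : \mathbb{N} \mid x > 0 \,\land\, y < x \,]
        \]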

    Results on formal stepwise design in Z

    Stepwise design involves the process of deriving a concrete model of a software system from a given abstract one. This process is sometimes known as refinement. There are numerous refinement theories proposed in the literature, each of which stipulates the nature of the relationship between an abstract specification and its concrete counterpart. This paper considers six refinement theories in Z that have been proposed by various people over the years. However, no systematic investigation of these theories, or of the relationships between them, has been published before. This paper shows that these theories fall into two important categories and proves that the theories in each category are equivalent.
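    For orientation, a commonly used formulation of operation refinement in Z (a theory of this general shape; we are not asserting it is any particular one of the six considered) requires that the concrete operation COp be applicable wherever the abstract operation AOp is, and that its results then be ones AOp allows:

        \[
        \forall\, State \,\bullet\, \operatorname{pre} AOp \Rightarrow \operatorname{pre} COp
        \]
        \[
        \forall\, State;\ State' \,\bullet\, \operatorname{pre} AOp \land COp \Rightarrow AOp
        \]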

    An analysis of total correctness refinement models for partial relation semantics I

    This is the first of a series of papers devoted to a thorough investigation of (total correctness) refinement based on an underlying partial relational model. In this paper we restrict attention to operation refinement. We explore four theories of refinement based on an underlying partial relation model for specifications, and we show that they are all equivalent. In particular, this sheds some light on the relational completion operator (lifted-totalisation) due to Woodcock, which underlies data refinement in, for example, the specification language Z. It further leads to two simple alternative models which are also equivalent to the others.
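    For readers unfamiliar with the operator, lifted-totalisation of a partial relation R between X and Y (the standard definition as we recall it; the paper may present it differently) extends each set with a distinguished element bottom (⊥) and relates everything outside the domain of R, including ⊥ itself, to every possible outcome:

        \[
        X_\bot = X \cup \{\bot\}, \qquad
        \overset{\bullet}{R} \;=\; R \,\cup\, \{\, (x, y) \mid x \in X_\bot \setminus \operatorname{dom} R,\ y \in Y_\bot \,\}
        \]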

    Multispectral scanner data processing over Sam Houston National Forest

    A computer processing technique applied to the Edit 9 forest scene, and its capability to map timber types in the Sam Houston National Forest, are evaluated. Special efforts were made to evaluate existing computer processing techniques for mapping timber types using ERTS-1 and aircraft data, and to open up new research and development areas in forestry data.

    Discriminating coastal rangeland production and improvements with computer aided techniques

    The feasibility and utility of using satellite data and computer-aided remote sensing analysis techniques to conduct range inventories were tested. This pilot study focused on a 250,000-acre site in Galveston and Brazoria Counties along the Texas Gulf Coast. Rectified, enlarged aircraft color infrared photographs of the site served as the ground-truth base: the different land categories were identified, delineated, and measured. Multispectral scanner (MSS) bulk data from LANDSAT-1 were received and analyzed with the Image 100 pattern recognition system. Features of interest were delineated on the image console, giving the number of picture elements classified; the picture elements were converted to acreages, and the accuracy of the technique was evaluated by comparison with the data-base results for three test sites. The accuracies of computer-aided classification of coastal marshes ranged from 89% to 96%.
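    The pixel-to-acreage step the abstract mentions is simple arithmetic. The sketch below assumes the nominal LANDSAT-1 MSS footprint of roughly 79 m by 57 m per picture element; the counts and names are illustrative, not figures from the study.

        # Convert classified picture-element counts to acreage and compare
        # against the photo-interpreted ground-truth base. Assumes the
        # nominal LANDSAT-1 MSS pixel footprint of ~79 m x 57 m; all
        # numbers below are illustrative, not taken from the study.

        PIXEL_AREA_M2 = 79 * 57        # ~4503 square metres per MSS pixel
        M2_PER_ACRE = 4046.86

        def pixels_to_acres(pixel_count):
            return pixel_count * PIXEL_AREA_M2 / M2_PER_ACRE

        def areal_agreement(classified_acres, truth_acres):
            """Fractional agreement with the ground-truth acreage."""
            return 1 - abs(classified_acres - truth_acres) / truth_acres

        marsh_acres = pixels_to_acres(42_000)     # pixels classified "marsh"
        print(f"{marsh_acres:,.0f} acres, "
              f"{areal_agreement(marsh_acres, 45_000):.0%} agreement")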