12,339 research outputs found

    Estimating reliability by application of matrix representation

    Get PDF
    A technique based upon matrix representation and matrix collapsing calculates the probability of successfully completing manned missions and of returning the spacecrew safely to Earth. This technique provides analytic expressions for each subsystem, making it possible to relate changes in subsystem reliability directly to mission success and crew safety.
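    The abstract's matrix-collapsing technique is not reproduced here; as a minimal sketch of the basic relationship it exploits, the following assumes independent subsystems in series, so overall mission success is the product of subsystem reliabilities (function names and numbers are illustrative only):

    ```python
    # Hypothetical sketch: mission success probability for independent
    # subsystems in series. The matrix-representation technique in the
    # abstract is more general; this only illustrates how a change in one
    # subsystem's reliability maps directly to mission success.

    def mission_success(subsystem_reliabilities):
        """Product of independent in-series subsystem reliabilities."""
        p = 1.0
        for r in subsystem_reliabilities:
            p *= r
        return p

    # Improving one subsystem from 0.95 to 0.99 raises mission success directly:
    baseline = mission_success([0.95, 0.99, 0.98])
    improved = mission_success([0.99, 0.99, 0.98])
    ```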

    Registration verification of SEA/AR fields

    Get PDF
    A method of field registration verification for 20 SEA/AR sites for the 1979 crop year is evaluated. Field delineations for the sites were entered into the database, and their registration was verified using single-channel gray-scale computer printout maps of LANDSAT data taken over the site.

    Experiments on Visual Acuity and the Visibility of Markings on the Ground in Long-duration Earth-Orbital Space Flight

    Get PDF
    Visual acuity and visibility of markings on the ground in long-duration Earth-orbital space flight.

    Towards practical classical processing for the surface code: timing analysis

    Full text link
    Topological quantum error correction codes have high thresholds and are well suited to physical implementation. The minimum weight perfect matching algorithm can be used to efficiently handle errors in such codes. We perform a timing analysis of our current implementation of the minimum weight perfect matching algorithm. Our implementation performs the classical processing associated with an n x n lattice of qubits realizing a square surface code storing a single logical qubit of information in a fault-tolerant manner. We empirically demonstrate that our implementation requires only O(n^2) average time per round of error correction for code distances ranging from 4 to 512 and a range of depolarizing error rates. We also describe tests we have performed to verify that it always obtains a true minimum weight perfect matching. Comment: 13 pages, 13 figures, version accepted for publication
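    For readers unfamiliar with the object being computed: a minimum weight perfect matching pairs up all vertices (here, detection events) so that the total edge weight is minimal. The sketch below is a brute-force definition on a small complete graph, not the paper's implementation, which achieves O(n^2) average time per round; all names and the example weights are illustrative:

    ```python
    from itertools import permutations

    # Hypothetical brute-force sketch of minimum weight perfect matching.
    # Exponential-time; it only defines the object an efficient decoder
    # (e.g. the paper's implementation) computes quickly.

    def min_weight_perfect_matching(weights):
        """weights maps edges (i, j) with i < j on vertices 0..n-1 (n even).
        Returns (best_weight, best_matching)."""
        n = 1 + max(max(edge) for edge in weights)
        best_weight, best_matching = float("inf"), None
        for order in permutations(range(n)):
            # Pair consecutive vertices; sort each pair to match the i < j keys.
            pairs = [tuple(sorted(order[k:k + 2])) for k in range(0, n, 2)]
            w = sum(weights[p] for p in pairs)
            if w < best_weight:
                best_weight, best_matching = w, sorted(pairs)
        return best_weight, best_matching

    # Four detection events with pairwise error-chain weights:
    w = {(0, 1): 1, (0, 2): 4, (0, 3): 3, (1, 2): 2, (1, 3): 5, (2, 3): 1}
    weight, matching = min_weight_perfect_matching(w)  # weight 2: (0,1), (2,3)
    ```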

    SeaWiFS technical report series. Volume 5: Ocean optics protocols for SeaWiFS validation

    Get PDF
    Protocols are presented for measuring optical properties, and other environmental variables, to validate the radiometric performance of the Sea-viewing Wide Field-of-view Sensor (SeaWiFS), and to develop and validate bio-optical algorithms for use with SeaWiFS data. The protocols are intended to establish foundations for a measurement strategy to verify the challenging SeaWiFS accuracy goals of 5 percent in water-leaving radiances and 35 percent in chlorophyll a concentration. The protocols first specify the variables that must be measured, and briefly review the rationale. Subsequent chapters cover detailed protocols for instrument performance specifications, characterizing and calibrating instruments, methods of making measurements in the field, and methods of data analysis. These protocols were developed at a workshop sponsored by the SeaWiFS Project Office (SPO) and held at the Naval Postgraduate School in Monterey, California (9-12 April, 1991). This report is the proceedings of that workshop, as interpreted and expanded by the authors and reviewed by workshop participants and other members of the bio-optical research community. The protocols are a first prescription to approach the unprecedented measurement accuracies implied by the SeaWiFS goals, and research and development are needed to improve the state of the art in specific areas. The protocols should be periodically revised to reflect technical advances during the SeaWiFS Project cycle.

    Towards practical classical processing for the surface code

    Full text link
    The surface code is unarguably the leading quantum error correction code for 2-D nearest neighbor architectures, featuring a high threshold error rate of approximately 1%, low overhead implementations of the entire Clifford group, and flexible, arbitrarily long-range logical gates. These highly desirable features come at the cost of significant classical processing complexity. We show how to perform the processing associated with an n x n lattice of qubits, each being manipulated in a realistic, fault-tolerant manner, in O(n^2) average time per round of error correction. We also describe how to parallelize the algorithm to achieve O(1) average processing per round, using only constant computing resources per unit area and local communication. Both of these complexities are optimal. Comment: 5 pages, 6 figures, published version with some additional text
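    A rough sense of why the ~1% threshold matters: below threshold, increasing the code distance d suppresses the logical error rate exponentially. The sketch below uses the commonly quoted heuristic p_L ~ (p / p_th)^(d/2); the exponent and the absence of a prefactor are standard approximations, not results stated in this abstract:

    ```python
    # Hypothetical sketch of the standard surface-code scaling heuristic
    # p_L ~ (p / p_th)^(d / 2), with p the physical error rate, p_th the
    # threshold (~1% per the abstract), and d the code distance. The
    # exponent and unit prefactor are approximations, not this paper's data.

    def logical_error_rate(p, d, p_th=0.01):
        """Heuristic logical error rate for a distance-d surface code."""
        return (p / p_th) ** (d / 2)

    # Well below threshold, each step up in distance buys rapid suppression:
    rates = [logical_error_rate(0.001, d) for d in (3, 5, 7)]
    ```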