    Parallelized Inference for Gravitational-Wave Astronomy

    Bayesian inference is the workhorse of gravitational-wave astronomy, for example, determining the masses and spins of merging black holes, revealing the neutron star equation of state, and unveiling the population properties of compact binaries. The science enabled by these inferences comes with a computational cost that can limit the questions we are able to answer. This cost is expected to grow. As detectors improve, the detection rate will go up, leaving less time to analyze each event. Improvement in low-frequency sensitivity will yield longer signals, increasing the number of computations per event. The growing number of entries in the transient catalog will drive up the cost of population studies. While Bayesian inference calculations are not entirely parallelizable, key components are embarrassingly parallel: calculating the gravitational waveform and evaluating the likelihood function. Graphics processing units (GPUs) are adept at such parallel calculations. We report on progress porting gravitational-wave inference calculations to GPUs. Using a single code, which takes advantage of GPU architecture if it is available, we compare computation times using modern GPUs (NVIDIA P100) and CPUs (Intel Gold 6140). We demonstrate speed-ups of ∼50× for compact binary coalescence gravitational waveform generation and likelihood evaluation, and more than 100× for population inference within the lifetime of current detectors. Further improvement is likely with continued development. Our Python-based code is publicly available and can be used without familiarity with the parallel computing platform CUDA.
    Comment: 5 pages, 4 figures, submitted to PRD. Code: https://github.com/ColmTalbot/gwpopulation, https://github.com/ColmTalbot/GPUCBC, https://github.com/ADACS-Australia/ADACS-SS18A-RSmith. Add demonstration of improvement in BNS spi
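
    As a hedged illustration of the "single code" idea described above, the sketch below evaluates a Gaussian log-likelihood with CuPy when a GPU is available and falls back to NumPy otherwise. It is a minimal sketch of the general technique, not the released gwpopulation/GPUCBC implementation; the function name and array sizes are illustrative.

        import numpy as np

        # Use the GPU via CuPy when available; otherwise fall back to NumPy.
        # Array code written against the common NumPy API runs on either backend.
        try:
            import cupy as xp
        except ImportError:
            xp = np

        def gaussian_log_likelihood(residuals, sigma):
            # Whitened-residual log-likelihood; the sum over frequency bins is
            # embarrassingly parallel, which is what makes GPUs attractive here.
            sigma = xp.asarray(sigma)
            return -0.5 * xp.sum((residuals / sigma) ** 2
                                 + xp.log(2 * xp.pi * sigma ** 2))

        # Example: a million frequency bins, evaluated on whichever device is active.
        r = xp.ones(1_000_000)
        print(float(gaussian_log_likelihood(r, 2.0)))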

    Structural Verification of the Redesigned Space Shuttle Bipod Foam Closeout

    This document outlines the structural verification approach for the Space Shuttle External Tank Forward Bipod Foam Closeout. Since the Space Shuttle Columbia accident, debris has become a major concern, and the intent of the structural verification is to ensure that any debris shed from the bipod is within acceptable limits. Because cohesive failure due to internal defects was identified as the most likely cause of the STS-107 bipod ramp foam failure, verification for this failure mode receives particular emphasis. However, all failure modes for TPS are considered, and appropriate verification rationale is developed for each. Figure 1 depicts the structural verification of a production design, where analysis and test are the primary methods of verification. Successful completion of structural verification depends on three main areas:
    1. Production process control and quality assurance must ensure that test articles and/or analytical models are representative of (or conservatively envelope) production hardware in terms of geometry, materials, and processing. Variability and defects must be considered.
    2. Flight environments must be sufficiently characterized to bound the driving environments for all failure modes. Applied environments, whether test or analytical, must be representative of flight environments and have a load factor that satisfies design requirements.
    3. Structural verification must include all failure modes. A comprehensive list of failure modes and the underlying failure mechanisms has been generated based on flight and test experience. Verification tests and/or analyses must address each failure mode.
    ET TPS verification is accomplished by a combination of analysis, test, and similarity.
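
    To make the load-factor check concrete, here is a minimal sketch of a margin-of-safety screen over a set of failure modes. All mode names, loads, and the factor of safety below are hypothetical, not values from the document; only the standard margin formula MS = allowable / (factor × applied) − 1 is assumed.

        # Hypothetical margin-of-safety screen for TPS failure modes.
        FACTOR_OF_SAFETY = 1.4  # illustrative design factor, not from the document

        failure_modes = {
            # mode: (allowable_load, applied_flight_load) -- made-up numbers
            "cohesive_failure": (100.0, 55.0),
            "adhesive_debond": (80.0, 50.0),
        }

        for mode, (allowable, applied) in failure_modes.items():
            margin = allowable / (FACTOR_OF_SAFETY * applied) - 1.0
            status = "PASS" if margin >= 0.0 else "FAIL"
            print(f"{mode}: MS = {margin:+.2f} ({status})")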

    Small sample multiple testing with application to cDNA microarray data

    Many tests have been developed for comparing means in a two-sample scenario, and microarray experiments lead to thousands of such comparisons in a single study. Several multiple testing procedures are available to control the experiment-wise error rate or the false discovery rate. In this dissertation, individual two-sample tests are compared on the basis of accuracy, correctness, and power. Four multiple testing procedures are compared via simulation, based on data from the lab of Dr. Rajesh Miranda. The effect of sample size on power is also carefully examined. The two-sample t-test followed by the Benjamini and Hochberg (1995) false discovery rate controlling procedure yields the highest power.
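
    As a hedged illustration of that winning combination, the sketch below runs a per-gene two-sample t-test on simulated expression data and applies the Benjamini-Hochberg (1995) step-up procedure. The data, gene counts, and FDR level are invented for the example; this is not the dissertation's code.

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(0)
        n_genes, n_per_group = 1000, 5           # many tests, small samples
        group1 = rng.normal(0.0, 1.0, (n_genes, n_per_group))
        group2 = rng.normal(0.0, 1.0, (n_genes, n_per_group))
        group2[:50] += 2.0                       # 50 truly differential genes

        # Two-sample t-test for each gene.
        _, pvals = stats.ttest_ind(group1, group2, axis=1)

        def benjamini_hochberg(p, q=0.05):
            # Reject the k smallest p-values, where k is the largest index
            # with p_(k) <= (k / m) * q.
            m = len(p)
            order = np.argsort(p)
            below = p[order] <= q * np.arange(1, m + 1) / m
            k = np.max(np.nonzero(below)[0]) + 1 if below.any() else 0
            rejected = np.zeros(m, dtype=bool)
            rejected[order[:k]] = True
            return rejected

        print(f"{benjamini_hochberg(pvals).sum()} genes declared significant")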

    What is the Final Verification of Engineering Requirements?

    This slide presentation reviews the development process through the final verification of engineering requirements. The definition of the requirements is driven by basic needs and should be reviewed by both the supplier and the customer. All involved need to agree upon a formal requirements document, including any changes to the original requirements. After the requirements have been developed, the engineering team begins to design the system, and the final design is reviewed by other organizations. The final operational system must satisfy the original requirements, though many verifications should be performed during the process. The verification methods used are test, inspection, analysis, and demonstration. The verification plan should be created once the system requirements are documented. The plan should ensure that every requirement is formally verified, that the methods and the responsible organizations are specified, and that the plan is reviewed by all parties. The presentation also discusses the option of keeping the engineering team involved in all phases of development versus having another organization continue the process once the design is complete.
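
    The completeness checks named in the plan (every requirement formally verified, with a method and a responsible organization specified) lend themselves to a simple table scan. The sketch below is a hypothetical illustration of such a verification matrix; the requirement IDs, owners, and field names are invented.

        # Hypothetical requirements verification matrix.
        METHODS = {"test", "inspection", "analysis", "demonstration"}

        verification_plan = [
            {"req": "REQ-001", "method": "test", "owner": "Propulsion"},
            {"req": "REQ-002", "method": "analysis", "owner": "Structures"},
            {"req": "REQ-003", "method": None, "owner": None},  # not yet assigned
        ]

        def unverified(plan):
            # Flag rows missing a recognized method or a responsible organization.
            return [row["req"] for row in plan
                    if row["method"] not in METHODS or not row["owner"]]

        print("Unverified requirements:", unverified(verification_plan))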