
    Improving the chi-squared approximation for bivariate normal tolerance regions

    Let X be a two-dimensional random variable distributed according to N2(μ, Σ), and let X̄ and S be the sample mean and covariance matrix calculated from N observations of X. Given a containment probability β and a confidence level γ, we seek a number c, depending only on N, β, and γ, such that the ellipsoid R = {x : (x − X̄)′S⁻¹(x − X̄) ≤ c} is a tolerance region of content β and level γ; i.e., R has probability γ of containing at least 100β percent of the distribution of X. Various approximations for c exist in the literature, but one of the simplest to compute, a multiple of the ratio of certain chi-squared percentage points, is badly biased for small N. For the bivariate normal case, most of the bias can be removed by a simple adjustment using a factor A that depends on β and γ. This paper provides values of A for various β and γ so that the simple approximation for c can be made viable for any reasonable sample size. The methodology provides an illustrative example of how a combination of Monte Carlo simulation and simple regression modelling can be used to improve an existing approximation
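
    As an illustrative sketch of the Monte Carlo side of this methodology (hypothetical code, not taken from the paper), the factor c can be calibrated directly by simulation; by affine invariance we may take μ = 0 and Σ = I, so the estimate depends only on N, β, and γ:

```python
import numpy as np

def mc_tolerance_factor(N, beta, gamma, n_rep=2000, n_eval=4000, seed=0):
    """Monte Carlo estimate of the factor c such that the ellipsoid
    {x : (x - xbar)' S^{-1} (x - xbar) <= c} contains at least
    100*beta % of the N2(mu, Sigma) distribution with confidence gamma.
    By affine invariance we simulate from N2(0, I)."""
    rng = np.random.default_rng(seed)
    p = 2                                   # bivariate case
    c_rep = np.empty(n_rep)
    for r in range(n_rep):
        sample = rng.standard_normal((N, p))
        xbar = sample.mean(axis=0)
        S = np.cov(sample, rowvar=False)    # sample covariance (ddof=1)
        Sinv = np.linalg.inv(S)
        # fresh draws from the true distribution to measure content
        X = rng.standard_normal((n_eval, p))
        d = X - xbar
        q = np.einsum('ij,jk,ik->i', d, Sinv, d)   # squared Mahalanobis distances
        # smallest c whose ellipsoid captures content beta for this sample
        c_rep[r] = np.quantile(q, beta)
    # c must succeed for a fraction gamma of the sampled (xbar, S) pairs
    return np.quantile(c_rep, gamma)
```

    For small N the resulting c is noticeably larger than the chi-squared quantile χ²₂(0.90) ≈ 4.61 that a naive plug-in would suggest, which is exactly the small-sample bias the paper's adjustment factor A is meant to absorb.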

    A statistical test procedure for detecting multiple outliers in a data set

    There are no author-identified significant results in this report

    Crop identification technology assessment for remote sensing. (CITARS) Volume 9: Statistical analysis of results

    Results of CITARS data processing are presented in raw form. Tables of descriptive statistics are given, along with descriptions and results of inferential analyses. The inferential results are organized by the questions that CITARS was designed to answer

    Error analysis of leaf area estimates made from allometric regression models

    Biological net productivity, measured in terms of the change in biomass with time, affects global productivity and the quality of life through biochemical and hydrological cycles and by its effect on the overall energy balance. Estimating leaf area for large ecosystems is one of the more important means of monitoring this productivity. For a particular forest plot, the leaf area is often estimated by a two-stage process. In the first stage, known as dimension analysis, a small number of trees are felled so that their leaf areas can be measured as accurately as possible. These leaf areas are then related to non-destructive, easily measured features such as bole diameter and tree height by using a regression model. In the second stage, the non-destructive features are measured for all or for a sample of trees in the plot and then used as input to the regression model to estimate the total leaf area. Because both stages of the estimation process are subject to error, it is difficult to evaluate the accuracy of the final plot leaf area estimates. This paper illustrates how a complete error analysis can be made, using an example from a study of aspen trees in northern Minnesota. The study was a joint effort by NASA and the University of California at Santa Barbara known as COVER (Characterization of Vegetation with Remote Sensing)
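
    A minimal sketch of how such a two-stage error analysis can be carried out by simulation (all numbers and the log-log allometric form below are hypothetical stand-ins, not values from the study): stage-1 regression uncertainty and stage-2 residual error are both propagated into the plot total.

```python
import numpy as np

rng = np.random.default_rng(42)

# --- Stage 1: dimension analysis (hypothetical felled-tree data) ---
# assumed log-log allometry: log(leaf area) = a + b*log(bole diameter) + error
dbh_felled = np.array([5., 8., 12., 15., 20., 25., 30.])        # cm
log_area   = 0.5 + 1.8 * np.log(dbh_felled) + rng.normal(0, 0.15, 7)

X = np.column_stack([np.ones_like(dbh_felled), np.log(dbh_felled)])
coef, *_ = np.linalg.lstsq(X, log_area, rcond=None)
resid = log_area - X @ coef
s2 = resid @ resid / (len(log_area) - 2)             # residual variance
cov_coef = s2 * np.linalg.inv(X.T @ X)               # parameter covariance

# --- Stage 2: plot-wide prediction with both error sources ---
dbh_plot = rng.uniform(5, 30, 200)                   # diameters of plot trees
Xp = np.column_stack([np.ones_like(dbh_plot), np.log(dbh_plot)])

n_sim = 2000
totals = np.empty(n_sim)
for i in range(n_sim):
    # redraw coefficients to reflect stage-1 (regression) uncertainty
    b = rng.multivariate_normal(coef, cov_coef)
    # add stage-2 residual error per tree, back-transform from log scale
    la = np.exp(Xp @ b + rng.normal(0, np.sqrt(s2), len(dbh_plot)))
    totals[i] = la.sum()

print(f"plot leaf area: {totals.mean():.0f} +/- {totals.std():.0f}")
```

    The spread of `totals` reflects both stages of error at once, which is the kind of combined accuracy assessment the paper argues is otherwise difficult to obtain.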

    Combining Information on Multiple Detection Techniques to Estimate the Effect of Patent Foramen Ovale on Susceptibility to Decompression Illness

    The assembly and maintenance of the International Space Station is expected to require hundreds of extravehicular excursions (EVAs) in the next 10 years. During an EVA, in order to allow movement and bending of limbs, spacesuit pressures are reduced to about 4.3 psi, compared with about 14.7 psi for normal atmospheric pressure at sea level. However, the exposure of astronauts to reduced pressures in spacesuits is conducive to formation and growth of gas bubbles within venous blood or tissues, which could cause decompression illness (DCI), a pathology best known to occur among deep-sea divers when they return to the surface. To reduce the risk of DCI, astronauts adjust to the reduced pressure in stages over a prolonged interval known as a "pre-breathe" period prior to their extravehicular activity. Despite the use of pre-breathe protocols, an increased risk of DCI can arise for the roughly 25% of humans who have a small hole, known as a patent foramen ovale (PFO), between two chambers of the heart. The atrial septum's fossa ovalis, an embryological remnant of a flap between the septum primum and septum secundum, allows fetal right atrial blood to pass into the left atrium and usually closes after birth (Hagen et al., 1984). If fusion does not occur, a valve-like opening, the foramen ovale, persists between the two atria. It has been suggested that astronauts with PFOs might be at greater risk of stroke or other serious neurological DCI because bubbles from a venous site may traverse a PFO, travel to the aorta, and then enter the cerebral circulatory system, causing a stroke (Figure 1). Astronauts are not now screened for PFOs; however, consideration is being given to doing so. Here, we study three main methods, abbreviated "TTE", "TCD", and "TEE", for detecting PFOs in living subjects. All involve the introduction of bubbles into a vein, immediately after which a sensory probe attempts to detect the bubbles in systemic circulation. Presence of the injected bubbles in the systemic circulation is indicative of a PFO. More detailed descriptions are given after the explanation of PFOs under Figure 1. Even if a true PFO affects the risk of DCI, there remains a question of how effective screening would be if the detection method has errors of omission and/or commission. Of the three methods studied here, TEE is the "gold standard", matching autopsy results with near-perfect sensitivity and specificity (Schneider et al., 1996). However, TEE is also the most difficult method to implement, requiring an internal esophageal probe, and is therefore not widely used. Currently, the easiest to use and most common PFO detection method is TTE, which uses an external chest probe. This method has a specificity of near 100%, but suffers from a low sensitivity rate (about 30%). More recently, TCD has been developed, which uses ultrasound probes to detect the presence of bubbles in cerebral arteries. Studies indicate that TCD is quite effective when applied correctly, having a sensitivity of about 91% and a specificity of about 93% (Droste et al., 1999); however, implementation is difficult and requires considerable training
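
    The screening question raised here turns on how sensitivity and specificity combine with prevalence. Using the figures quoted in the abstract (PFO prevalence about 25%, TCD sensitivity about 91%, specificity about 93%), Bayes' rule gives the predictive values of a test result:

```python
def predictive_values(sens, spec, prev):
    """Positive and negative predictive value via Bayes' rule."""
    ppv = sens * prev / (sens * prev + (1 - spec) * (1 - prev))
    npv = spec * (1 - prev) / (spec * (1 - prev) + (1 - sens) * prev)
    return ppv, npv

# figures quoted in the abstract: prevalence ~25%, TCD ~91% / ~93%
ppv, npv = predictive_values(0.91, 0.93, 0.25)   # -> PPV = 0.8125, NPV = 0.96875
```

    By the same arithmetic, TTE's near-100% specificity makes its positive results highly reliable, while its ~30% sensitivity means a negative TTE result leaves a substantial residual probability of a PFO.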

    Astronaut Bone Medical Standards Derived from Finite Element (FE) Models of QCT Scans from Population Studies

    This work was accomplished in support of the Finite Element [FE] Strength Task Group, NASA Johnson Space Center [JSC], Houston, TX. This group was charged with developing rules for using FE bone-strength measures to construct operating bands for bone health that are relevant to astronauts following exposure to spaceflight. FE modeling is a computational tool used by engineers to estimate the failure loads of complex structures. Recently, some engineers have used this tool to characterize the failure loads of the hip in population studies that also monitored fracture outcomes. A Directed Research Task was authorized in July 2012 to investigate FE data from these population studies to derive the proposed standards of bone health as a function of age and gender. The proposed standards make use of an FE-based index that integrates multiple contributors to bone strength, an expanded evaluation that is critical after an astronaut is exposed to spaceflight. The current index of bone health used by NASA is the measurement of areal BMD. A research and clinical advisory panel voiced concern that the sole use of areal BMD would be insufficient to fully evaluate the effects of spaceflight on the hip. Hence, NASA may not have a full understanding of fracture risk, both during and after a mission, and may be poorly estimating in-flight countermeasure efficacy. The FE Strength Task Group, composed of principal investigators of the aforementioned population studies and of FE modelers, donated some of its population QCT data to estimate hip bone strength by FE modeling for this specific purpose. Consequently, Human Health Countermeasures [HHC] has compiled a dataset of FE hip strengths, generated by a single FE modeling approach, from human subjects (approx. 1060) with ages covering the age range of the astronauts. The dataset has been analyzed to generate a set of FE strength cutoffs for the following scenarios: a) qualify an applicant for astronaut candidacy, b) qualify an astronaut for a long-duration (LD) mission, c) qualify a veteran LD astronaut for a second LD mission, and d) establish a non-permissible, minimum hip strength following a given mission architecture. This abstract will present the FE-based standards accepted by the FE Strength Task Group for its recommendation to HHC in January 2015
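
    One simple way such age- and gender-dependent operating bands can be expressed, sketched here with simulated stand-in data (the actual FE strength dataset, its units, and its cutoff methodology are not given in this abstract), is as percentile bands by age group:

```python
import numpy as np

rng = np.random.default_rng(7)

# hypothetical stand-in for a ~1060-subject FE hip-strength dataset:
# strength declining with age, with between-subject scatter (arbitrary units)
age = rng.uniform(25, 75, 1060)
strength = rng.normal(10000 - 60 * (age - 25), 1200)

# percentile bands by age decade, one possible form for "operating bands"
bands = {}
for lo in range(25, 75, 10):
    mask = (age >= lo) & (age < lo + 10)
    p5, p50 = np.percentile(strength[mask], [5, 50])
    bands[f"{lo}-{lo+9}"] = (p5, p50)   # e.g. lower cutoff and median
```

    A cutoff for a given scenario (candidacy, LD mission, etc.) could then be read off as a chosen percentile within the relevant age band; the percentiles and decade grouping here are purely illustrative.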

    Estimation of percentage points and the construction of tolerance limits

    Estimation of percentage points and construction of tolerance limits

    Informal Statistics Help Desk

    Back by popular demand, the JSC Biostatistics Lab is offering an opportunity for informal conversation about challenges you may have encountered with issues of experimental design, analysis, data visualization or related topics. Get answers to common questions about sample size, repeated measures, violation of distributional assumptions, missing data, multiple testing, time-to-event data, when to trust the results of your analyses (reproducibility issues) and more

    Characterizing the Joint Effect of Diverse Test-Statistic Correlation Structures and Effect Size on False Discovery Rates in a Multiple-Comparison Study of Many Outcome Measures

    In their 2009 Annals of Statistics paper, Gavrilov, Benjamini, and Sarkar report the results of a simulation assessing the robustness of their adaptive step-down procedure (GBS) for controlling the false discovery rate (FDR) when normally distributed test statistics are serially correlated. In this study we extend the investigation to the case of multiple comparisons involving correlated non-central t-statistics, in particular when several treatments or time periods are compared to a control in a repeated-measures design with many dependent outcome measures. In addition, we consider several dependence structures other than serial correlation and illustrate how the FDR depends on the interaction between effect size and the type of correlation structure, as indexed by Foerstner's distance metric from the identity. The relationship between the correlation matrix R of the original dependent variables and R̃, the correlation matrix of the associated t-statistics, is also studied. In general, R̃ depends not only on R but also on the sample size and the signed effect sizes of the multiple comparisons
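
    As a hedged illustration of this kind of simulation setup (using the standard Benjamini-Hochberg step-up procedure rather than the GBS adaptive step-down procedure studied in the paper, and one-sample t-statistics rather than treatment-versus-control contrasts), one can generate equicorrelated outcomes, form t-statistics, and apply FDR control:

```python
import numpy as np
from scipy import stats

def benjamini_hochberg(pvals, q=0.05):
    """Standard BH step-up procedure; returns a boolean rejection mask."""
    m = len(pvals)
    order = np.argsort(pvals)
    thresh = q * np.arange(1, m + 1) / m
    passed = pvals[order] <= thresh
    k = np.max(np.nonzero(passed)[0]) + 1 if passed.any() else 0
    reject = np.zeros(m, bool)
    reject[order[:k]] = True
    return reject

rng = np.random.default_rng(3)
n, m, rho = 30, 50, 0.5                 # subjects, outcome measures, correlation
# equicorrelated outcomes: shared factor + idiosyncratic noise (unit variance)
shared = rng.standard_normal((n, 1))
data = np.sqrt(rho) * shared + np.sqrt(1 - rho) * rng.standard_normal((n, m))
data[:, :10] += 0.8                     # first 10 outcomes carry a true effect

t = data.mean(0) / (data.std(0, ddof=1) / np.sqrt(n))
p = 2 * stats.t.sf(np.abs(t), df=n - 1)
reject = benjamini_hochberg(p, q=0.05)
```

    Repeating this over many replications and correlation structures, and recording the fraction of rejections that are false, is the basic mechanism for studying how the realized FDR varies with effect size and with the correlation of the t-statistics.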