
    A note on the use of the non-parametric Wilcoxon-Mann-Whitney test in the analysis of medical studies

    Background: Although non-normal data are widespread in biomedical research, parametric tests unnecessarily predominate in statistical analyses.
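The contrast the abstract draws can be illustrated with a minimal sketch (not from the paper; the data and parameters are my own choices): the same skewed samples analysed with a parametric t-test and with the non-parametric Wilcoxon-Mann-Whitney test.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
control = rng.lognormal(mean=0.0, sigma=1.0, size=30)  # skewed, non-normal data
treated = rng.lognormal(mean=0.6, sigma=1.0, size=30)

# Parametric test: assumes (approximate) normality within groups.
t_stat, t_p = stats.ttest_ind(control, treated)

# Non-parametric Wilcoxon-Mann-Whitney test: rank-based, no normality assumption.
u_stat, u_p = stats.mannwhitneyu(control, treated, alternative="two-sided")
```

For strongly skewed data such as these log-normal samples, the rank-based test is typically the more appropriate choice.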

    Striving for Simple but Effective Advice for Comparing the Central Tendency of Two Populations

    Nguyen et al. (2016) offered advice to researchers in the commonly encountered situation where they are interested in testing for a difference in central tendency between two populations. Their data and the available literature support very simple advice that strikes the best balance between ease of implementation, power and reliability. Specifically, apply Satterthwaite's test, with preliminary ranking of the data if a strong deviation from normality is expected or is suggested by visual inspection of the data. This simple guideline will serve well except when dealing with small samples of discrete data, when more sophisticated treatment may be required.
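The recommended procedure can be sketched as follows (a rough reading of the advice, with function names and data my own): Satterthwaite's test is Welch's unequal-variance t-test, and "preliminary ranking" means replacing the pooled observations by their ranks before applying it.

```python
import numpy as np
from scipy import stats

def compare_central_tendency(x, y, rank_first=False):
    """Welch/Satterthwaite unequal-variance t-test, optionally on pooled ranks."""
    x, y = np.asarray(x, dtype=float), np.asarray(y, dtype=float)
    if rank_first:
        # Rank the pooled sample, then split the ranks back into the two groups.
        ranks = stats.rankdata(np.concatenate([x, y]))
        x, y = ranks[:len(x)], ranks[len(x):]
    return stats.ttest_ind(x, y, equal_var=False)  # Satterthwaite approximation to df

rng = np.random.default_rng(1)
x = rng.exponential(scale=1.0, size=40)  # strongly skewed data, so rank first
y = rng.exponential(scale=2.0, size=40)
stat, p = compare_central_tendency(x, y, rank_first=True)
```

With `rank_first=False` this is the plain Satterthwaite test for data that look roughly normal.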

    Adaptive designs based on the truncated product method

    BACKGROUND: Adaptive designs are becoming increasingly important in clinical research. One approach subdivides the study into several (two or more) stages and combines the p-values of the different stages using Fisher's combination test. METHODS: As an alternative to Fisher's test, the recently proposed truncated product method (TPM) can be applied to combine the p-values. The TPM uses the product of only those p-values that do not exceed some fixed cut-off value. Here, these two competing analyses are compared. RESULTS: When an early termination due to insufficient effects is not appropriate, such as in dose-response analyses, the probability of stopping the trial early with a rejection of the null hypothesis is increased when the TPM is applied. The expected total sample size is therefore decreased, and this decrease is not accompanied by a loss in power. The TPM turns out to be less advantageous when an early termination of the study due to insufficient effects is possible, because the probability of stopping the trial early then decreases. CONCLUSION: It is recommended to apply the TPM rather than Fisher's combination test whenever an early termination due to insufficient effects is not suitable within the adaptive design.
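A minimal illustration of the TPM statistic described in the abstract (the cut-off value and the Monte Carlo calibration are my own choices, not the paper's procedure): the combined statistic is the product of only those p-values at or below a cut-off τ, and its null distribution is approximated here by simulating independent Uniform(0,1) p-values.

```python
import numpy as np

def tpm_statistic(pvals, tau=0.05):
    """Product of only those p-values that do not exceed the cut-off tau."""
    pvals = np.asarray(pvals, dtype=float)
    kept = pvals[pvals <= tau]
    return kept.prod() if kept.size else 1.0  # empty product -> 1

def tpm_pvalue(pvals, tau=0.05, n_sim=100_000, seed=0):
    """Monte Carlo p-value: under H0, each stage p-value is Uniform(0,1)."""
    rng = np.random.default_rng(seed)
    w_obs = tpm_statistic(pvals, tau)
    sims = rng.uniform(size=(n_sim, len(pvals)))
    w_null = np.where(sims <= tau, sims, 1.0).prod(axis=1)
    # A smaller product is stronger evidence against the null hypothesis.
    return (np.sum(w_null <= w_obs) + 1) / (n_sim + 1)

# Two-stage example: only the first stage p-value survives the cut-off.
p_combined = tpm_pvalue([0.01, 0.20], tau=0.05)
```

Note how the 0.20 stage contributes nothing to the product, which is the point of the truncation: clearly non-significant stages cannot dilute the combined evidence.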

    Multiplicity Study of Exoplanet Host Stars

    We present recent results of our ongoing multiplicity study of exoplanet host stars. Comment: 3 pages, 3 figures.

    Narrow absorption features in the co-added XMM-Newton RGS spectra of isolated Neutron Stars

    We co-added the available XMM-Newton RGS spectra for each of the isolated X-ray pulsars RX J0720.4-3125, RX J1308.6+2127 (RBS 1223), RX J1605.3+3249 and RX J1856.4-3754 (four members of the "Magnificent Seven") and the "Three Musketeers" Geminga, PSR B0656+14 and PSR B1055-52. We confirm the detection of a narrow absorption feature at 0.57 keV in the co-added RGS spectra of RX J0720.4-3125 and RX J1605.3+3249 (including the most recent observations). In addition, we found similar absorption features in the spectra of RX J1308.6+2127 (at 0.53 keV) and possibly PSR B1055-52 (at 0.56 keV). The absorption feature in the spectrum of RX J1308.6+2127 is broader than the feature in, e.g., RX J0720.4-3125. The narrow absorption features are detected with 2σ to 5.6σ significance. Although they are very bright and frequently observed, no absorption features are visible in the spectra of RX J1856.4-3754 and PSR B0656+14, while the co-added XMM-Newton RGS spectrum of Geminga does not have enough counts to detect such a feature. We discuss possible origins of these absorption features: lines caused by the presence of highly ionised oxygen (in particular O VII and/or O VI at 0.57 keV) in the interstellar medium, and absorption in the neutron star atmosphere, namely the features at 0.57 keV as gravitationally redshifted (g_r = 1.17) O VIII. Comment: 14 pages, 10 figures and 10 tables. Accepted for publication by MNRAS (Sep 12th, 2011).

    Two-part permutation tests for DNA methylation and microarray data

    BACKGROUND: One important application of microarray experiments is to identify differentially expressed genes. Often, small and negative expression levels are clipped off to equal an arbitrarily chosen cutoff value before a statistical test is carried out. This yields two types of data: truncated values and original observations. The truncated values are not just another point on the continuum of possible values; it is therefore appropriate to combine two statistical tests in a two-part model rather than use standard statistical methods. A similar situation occurs when DNA methylation data are investigated: there are null values (undetectable methylation) and observed positive values. For these data, we propose a two-part permutation test. RESULTS: The proposed permutation test leads to smaller p-values than the original two-part test. We found this for both DNA methylation data and microarray data. A simulation study confirmed this result and showed that the two-part permutation test is, on average, more powerful. The new test also reduces, without any loss of power, to a standard test when there are no null or truncated values. CONCLUSION: The two-part permutation test can be used in routine analyses since it reduces to a standard test when there are positive values only. Further advantages of the new test are that it opens the possibility of using other test statistics to construct the two-part test and that it avoids the use of any asymptotic distribution. The latter advantage is particularly important for the analysis of microarrays, since sample sizes are usually small.
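The idea of a two-part statistic with a permutation reference distribution can be sketched as follows. This is a generic construction in the spirit of the abstract, not the paper's exact statistic: one component compares the fraction of null (zero/truncated) values between groups, the other compares the positive values, and group labels are permuted to obtain the p-value.

```python
import numpy as np
from scipy import stats

def two_part_stat(x, y):
    """Sum of squared z-scores: zero-proportion part plus positive-values part."""
    n1, n2 = len(x), len(y)
    z1 = 0.0
    p_pool = (np.sum(x == 0) + np.sum(y == 0)) / (n1 + n2)
    if 0 < p_pool < 1:  # binomial z-test on the proportion of zeros
        z1 = (np.mean(x == 0) - np.mean(y == 0)) / np.sqrt(
            p_pool * (1 - p_pool) * (1 / n1 + 1 / n2))
    z2 = 0.0
    xp, yp = x[x > 0], y[y > 0]
    if len(xp) > 1 and len(yp) > 1:  # Welch t-statistic on the positive values
        z2 = stats.ttest_ind(xp, yp, equal_var=False).statistic
    return z1 ** 2 + z2 ** 2

def two_part_perm_test(x, y, n_perm=999, seed=0):
    """Permutation p-value for the two-part statistic (labels shuffled)."""
    rng = np.random.default_rng(seed)
    obs = two_part_stat(x, y)
    pooled = np.concatenate([x, y])
    hits = sum(
        two_part_stat(perm[:len(x)], perm[len(x):]) >= obs
        for perm in (rng.permutation(pooled) for _ in range(n_perm))
    )
    return (hits + 1) / (n_perm + 1)

# Simulated methylation-like data: a point mass at zero plus positive values.
rng = np.random.default_rng(7)
x = np.where(rng.random(25) < 0.5, 0.0, rng.lognormal(0.0, 1.0, 25))
y = np.where(rng.random(25) < 0.2, 0.0, rng.lognormal(0.5, 1.0, 25))
p_two_part = two_part_perm_test(x, y)
```

Because the reference distribution comes from permuting labels, no asymptotic distribution is needed, which matches the advantage the abstract highlights for small samples.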

    Discovery of an OB Runaway Star Inside SNR S147

    We present the first results of a long-term study searching for OB-type runaway stars inside supernova remnants (SNRs). We identified spectral types and measured radial velocities (RV) by optical spectroscopic observations, and we found an early-type runaway star inside SNR S147. HD 37424 is a B0.5V-type star with a peculiar velocity of 74±8 km s^{-1}. Tracing back the past trajectories via Monte Carlo simulations, we found that HD 37424 was located at the same position as the central compact object, PSR J0538+2817, 30±4 kyr ago. This position is only ~4 arcmin away from the geometrical center of the SNR. We therefore suggest that HD 37424 was the pre-supernova binary companion to the progenitor of the pulsar and the SNR. We found a distance of 1333^{+103}_{-112} pc to the SNR. The zero-age main-sequence progenitor mass should be greater than 13 M_⊙. The age is 30±4 kyr and the total visual absorption towards the center is 1.28±0.06 mag. For different progenitor masses, we calculated the pre-supernova binary parameters. The Roche lobe radii suggest that it was an interacting binary in the late stages of the progenitor. Comment: Accepted for publication in MNRAS, 10 pages, 5 figures.

    Unequal sample sizes according to the square-root allocation rule are useful when comparing several treatments with a control

    A common situation in experimental science involves comparing a number of treatment groups each with a single reference (control) group. For example, we might compare diameters of fungal colonies subject to a range of inhibitory agents with those from a control group to which no agent was applied. In this situation, the most commonly applied test is Dunnett's test, which compares each treatment group separately with the reference while controlling the experiment-wise Type I error rate. For analyses where all groups are treated equivalently, statistical power is generally optimised by dividing subjects equally across groups. Researchers often still use balanced groups in the situation where a single reference group is compared with each of the others. In this case, it is in fact optimal to spread subjects unequally, with the reference group getting a higher number of subjects (n0) than each of the k treatment groups (n in each case). It has been previously suggested that a simple rule of thumb, the so-called square-root allocation rule n0 = √k·n, offers better power than a balanced design, without necessarily being optimal. Here, we show that this simple-to-apply rule offers substantial power gains (over a balanced design) across a broad range of circumstances and that the more-challenging-to-calculate optimal design often offers only minimal extra gain. Thus, we urge researchers to consider using the square-root allocation rule whenever one control group is compared with a number of treatments in the same experiment.
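Applying the square-root rule to a fixed total sample size is simple arithmetic: with k treatment groups of n subjects each and a control of n0 = √k·n, the total is N = n0 + k·n, so n = N/(k + √k). A short sketch (the helper function and rounding are my own):

```python
import math

def square_root_allocation(total_n, k):
    """Split total_n subjects between k treatment groups and one control
    according to the square-root allocation rule n0 = sqrt(k) * n."""
    n = total_n / (k + math.sqrt(k))  # subjects per treatment group
    n0 = math.sqrt(k) * n             # subjects in the control group
    return round(n0), round(n)

# Example: 120 subjects, 4 treatments -> 20 per treatment, 40 in the control.
n0, n = square_root_allocation(120, k=4)
```

For k = 4 the control group is twice the size of each treatment group, rather than equal to it as in a balanced design.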