
    On the consistency of a spatial-type interval-valued median for random intervals

    The sample dθ-median is a robust estimator of the central tendency or location of an interval-valued random variable. While the interval-valued sample mean can be highly influenced by outliers, this spatial-type interval-valued median remains much more reliable. In this paper, we show that under general conditions the sample dθ-median is a strongly consistent estimator of the dθ-median of an interval-valued random variable.
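The sample dθ-median minimises the mean dθ distance to the observed intervals. Assuming the usual mid/spread form of the generalized Bertoluzza metric, d_θ(A,B)² = (mid A − mid B)² + θ(spr A − spr B)², the metric is Euclidean in (mid, √θ·spread) coordinates, so the estimator can be sketched as a geometric median via Weiszfeld's algorithm (function names and the toy data are illustrative, not from the paper):

```python
import numpy as np

def sample_dtheta_median(mids, sprs, theta=1.0, iters=200):
    """Sample d_theta-median via Weiszfeld's algorithm.

    Assumption of this sketch: in (mid, sqrt(theta) * spread) coordinates the
    d_theta metric is Euclidean, so the minimiser of the mean d_theta distance
    is the geometric median of the transformed points.
    """
    pts = np.column_stack([np.asarray(mids, float),
                           np.sqrt(theta) * np.asarray(sprs, float)])
    x = pts.mean(axis=0)                      # start from the sample mean
    for _ in range(iters):
        d = np.linalg.norm(pts - x, axis=1)
        d = np.where(d < 1e-12, 1e-12, d)     # guard against zero distances
        w = 1.0 / d
        x = (pts * w[:, None]).sum(axis=0) / w.sum()
    return x[0], max(x[1] / np.sqrt(theta), 0.0)   # (mid, spread)

# toy sample: four similar intervals plus one gross outlier
mids = np.array([1.0, 1.1, 0.9, 1.05, 50.0])
sprs = np.array([0.5, 0.4, 0.6, 0.5, 10.0])
m_mid, m_spr = sample_dtheta_median(mids, sprs)
```

Unlike the sample mean of the mids (above 10 here), the dθ-median stays close to the bulk of the data, which is the robustness the abstract refers to.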

    Effects of Intermittent Reinforcement Upon Fixed-Ratio Discrimination

    Four pigeons had discrimination training that required the choice of a left side-key following completion of a fixed-ratio 10 on the center key, and a right side-key response after fixed-ratio 20. Correct choices were reinforced on various fixed-interval, fixed-ratio, random-interval, and random-ratio schedules. When accuracy was examined across quarters of intervals (fixed-interval schedules) or quarters of median interreinforcement intervals (fixed-ratio schedules), accuracy was usually lower in the second quarter than in the first, third, or fourth quarters. When accuracy was examined across quarters of ratios (fixed-ratio schedules) or quarters of the median number of correct interreinforcement trials (fixed-interval schedules), accuracy increased across quarters. These accuracy patterns did not occur on random-interval or random-ratio schedules. The results indicate that, when choice patterns differed on fixed-interval and fixed-ratio schedules, these differences were due to the methods of data analysis.
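The quarters analysis described above can be sketched as follows: each choice trial is binned by where it fell within the interval (or ratio) and accuracy is computed per quarter. The data here are made up purely for illustration:

```python
import numpy as np

def accuracy_by_quarter(positions, correct, total):
    """Accuracy in each quarter of an interval (or ratio) of length `total`.

    positions : where in the interval (or which response in the ratio) each
                choice trial occurred; correct : 1 if the choice was right.
    """
    positions = np.asarray(positions, dtype=float)
    correct = np.asarray(correct, dtype=float)
    edges = np.linspace(0.0, total, 5)        # four equal quarters
    quarter = np.clip(np.searchsorted(edges, positions, side="right") - 1, 0, 3)
    return np.array([correct[quarter == q].mean() for q in range(4)])

# hypothetical data: eight choice trials within a 60-s fixed interval
pos = [5, 10, 20, 25, 35, 40, 50, 55]
acc = [1, 1, 0, 0, 1, 1, 1, 1]
by_quarter = accuracy_by_quarter(pos, acc, 60)
```

A dip in the second entry of the result would correspond to the second-quarter accuracy drop reported for the fixed schedules.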

    On smoothed analysis of quicksort and Hoare's find

    We provide a smoothed analysis of Hoare's find algorithm, and we revisit the smoothed analysis of quicksort. Hoare's find algorithm, often called quickselect or one-sided quicksort, is an easy-to-implement algorithm for finding the k-th smallest element of a sequence. While the worst-case number of comparisons that Hoare's find needs is Theta(n^2), the average-case number is Theta(n). We analyze what happens between these two extremes by providing a smoothed analysis. In the first perturbation model, an adversary specifies a sequence of n numbers from [0,1], and then, to each number in the sequence, we add a random number drawn independently from the interval [0,d]. We prove that Hoare's find needs Theta(n/(d+1) sqrt(n/d) + n) comparisons in expectation if the adversary may also specify the target element (even after seeing the perturbed sequence) and slightly fewer comparisons for finding the median. In the second perturbation model, each element is marked with a probability of p, and then a random permutation is applied to the marked elements. We prove that the expected number of comparisons to find the median is Omega((1-p) n/p log n). Finally, we provide lower bounds for the smoothed number of comparisons of quicksort and Hoare's find for the median-of-three pivot rule, which usually yields faster algorithms than always selecting the first element: the pivot is the median of the first, middle, and last element of the sequence. We show that median-of-three does not yield a significant improvement over the classic rule.
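The first perturbation model and Hoare's find with both pivot rules can be sketched as follows; comparison counting is simplified to one comparison per non-pivot element in each partitioning step, and the demo values are arbitrary:

```python
import random

def hoares_find(seq, k, rule="first"):
    """Return the k-th smallest element (1-based) and a comparison count.

    Counting is simplified: len(a) - 1 per partitioning step; the
    median-of-three candidate sort itself is not counted.
    """
    comparisons = 0

    def select(a, k):
        nonlocal comparisons
        if len(a) == 1:
            return a[0]
        if rule == "median3":                  # median-of-three pivot rule
            pivot = sorted([a[0], a[len(a) // 2], a[-1]])[1]
        else:                                  # classic rule: first element
            pivot = a[0]
        comparisons += len(a) - 1
        less = [x for x in a if x < pivot]
        equal = [x for x in a if x == pivot]
        if k <= len(less):
            return select(less, k)
        if k <= len(less) + len(equal):
            return pivot
        return select([x for x in a if x > pivot], k - len(less) - len(equal))

    return select(list(seq), k), comparisons

def perturb(adversarial, d):
    """Model 1: add independent uniform noise from [0, d] to each number."""
    return [x + random.uniform(0.0, d) for x in adversarial]

random.seed(0)
n, d = 200, 0.5
worst = [i / n for i in range(n)]              # adversarial sequence in [0, 1]
_, c_worst = hoares_find(worst, n // 2)        # sorted input is bad for "first"
_, c_smooth = hoares_find(perturb(worst, d), n // 2)
```

On the already-sorted input the classic rule degenerates to quadratic behaviour; after perturbation the comparison count drops sharply, which is the effect the smoothed bounds quantify.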

    A spatial-type interval-valued median for random intervals

    © 2018 Informa UK Limited, trading as Taylor & Francis Group. To estimate the central tendency or location of a sample of interval-valued data, a standard statistic is the interval-valued sample mean. Its strong sensitivity to outliers or data changes motivates the search for more robust alternatives. In this respect, a more robust location statistic is studied in this paper. This measure is inspired by the concept of the spatial median and makes use of the versatile generalized Bertoluzza metric between intervals, the so-called dθ distance. The problem of minimizing the mean dθ distance to the values the random interval takes, which defines the spatial-type dθ-median, is analysed. Existence and uniqueness of the sample version are shown. Furthermore, the robustness of this proposal is investigated by deriving its finite sample breakdown point. Finally, a real-life example from the Economics field illustrates the robustness of the sample dθ-median, and simulation studies show some comparisons with respect to the mean and several recently introduced robust location measures for interval-valued data.

    Quality measures for soil surveys by lognormal kriging

    If we know the variogram of a random variable then we can compute the prediction error variances (kriging variances) for kriged estimates of the variable at unsampled sites from sampling grids of different design and density. In this way the kriging variance is a useful pre-survey measure of the quality of statistical predictions, which can be used to design sampling schemes to achieve target quality requirements at minimal cost. However, many soil properties are lognormally distributed, and must be transformed to logarithms before geostatistical analysis. The predicted values on the log scale are then back-transformed. It is possible to compute the prediction error variance for a prediction by this lognormal kriging procedure. However, it does not depend only on the variogram of the variable and the sampling configuration, but also on the conditional mean of the prediction. We therefore cannot use the kriging variance directly as a pre-survey measure of quality for geostatistical surveys of lognormal variables. In this paper we present an alternative. First we show how the limits of a prediction interval for a variable predicted by lognormal kriging can be expressed as dimensionless quantities, proportions of the unknown median of the conditional distribution. This scaled prediction interval can be used as a pre-survey quality measure since it depends only on the sampling configuration and the variogram of the log-transformed variable. Second, we show how a similar scaled prediction interval can be computed for the median value of a lognormal variable across a block, in the case of block kriging. This approach is then illustrated using variograms of lognormally distributed data on concentration of elements in the soils of a part of eastern England.
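The scaled interval has a direct form: if kriging on the log scale gives prediction variance sigma^2, and errors on the log scale are Gaussian, the (1 - alpha) prediction limits for the back-transformed variable are the conditional median times exp(+/- z * sigma), which depends only on sigma. A minimal sketch under that Gaussian assumption:

```python
import math
from statistics import NormalDist

def scaled_prediction_interval(kriging_var, alpha=0.05):
    """Limits of a (1 - alpha) prediction interval for a lognormal variable,
    expressed as multiples of the (unknown) conditional median.

    kriging_var is the kriging variance on the log scale, which depends only
    on the variogram of the log-transformed variable and the sampling
    configuration, so these limits can be computed before the survey.
    """
    z = NormalDist().inv_cdf(1.0 - alpha / 2.0)
    sigma = math.sqrt(kriging_var)
    return math.exp(-z * sigma), math.exp(z * sigma)

lo, hi = scaled_prediction_interval(0.25)  # illustrative log-scale variance
# the 95% interval runs from lo x median to hi x median
```

Because the limits are symmetric on the log scale, lo and hi are reciprocals, and tightening the sampling grid (smaller kriging variance) pulls both toward 1.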

    Parametric Model Based on Imputations Techniques for Partly Interval Censored Data

    The term 'survival analysis' has been used in a broad sense to describe a collection of statistical procedures for data analysis in which the outcome variable of interest is the time until an event occurs, where the time to failure of a specific experimental unit might be censored: right, left, interval, or partly interval censored (PIC). In this paper, analysis was conducted based on a parametric Cox model with PIC data. Moreover, several imputation techniques were used: midpoint, left and right point, random, mean, and median. Maximum likelihood estimation was used to obtain the estimated survival function. These estimates were then compared with existing models, such as the Turnbull and Cox models, based on clinical trial data (breast cancer data), which showed the validity of the proposed model. Results indicated that the parametric Cox model proved superior in terms of estimation of survival functions, likelihood ratio tests, and their p-values. Moreover, among the imputation techniques, the midpoint, random, mean, and median methods showed better results with respect to the estimation of the survival function. Published under licence by IOP Publishing Ltd.
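The simpler imputation rules can be sketched as follows; the mean- and median-based rules depend on definitions not given in the abstract and are left out, and the function name and data are illustrative only:

```python
import numpy as np

def impute_interval_censored(left, right, method="midpoint", seed=None):
    """Turn interval-censored observations [left_i, right_i] into exact times.

    Sketch of the simpler rules named in the abstract: midpoint, left point,
    right point, and random (a uniform draw inside each interval).
    """
    left = np.asarray(left, dtype=float)
    right = np.asarray(right, dtype=float)
    if method == "midpoint":
        return (left + right) / 2.0
    if method == "left":
        return left.copy()
    if method == "right":
        return right.copy()
    if method == "random":
        rng = np.random.default_rng(seed)
        return rng.uniform(left, right)       # one draw per interval
    raise ValueError(f"unknown method: {method}")

L = [2.0, 5.0, 1.0]                           # hypothetical censoring intervals
R = [4.0, 9.0, 3.0]
mid = impute_interval_censored(L, R)          # midpoints of each interval
```

Once the intervals are reduced to single times, standard survival estimators (here, the parametric Cox fit used in the paper) can be applied to the imputed data.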

    Clinical intervals and diagnostic characteristics in a cohort of prostate cancer patients in Spain: a multicentre observational study

    Background: Little is known about the healthcare process for patients with prostate cancer, mainly because hospital-based data are not routinely published. The main objective of this study was to determine the clinical characteristics of prostate cancer patients, the diagnostic process and the factors that might influence intervals from consultation to diagnosis and from diagnosis to treatment. Methods: We conducted a multicentre, cohort study in seven hospitals in Spain. Patients' characteristics and diagnostic and therapeutic variables were obtained from hospital records and patients' structured interviews from October 2010 to September 2011. We used a multilevel logistic regression model to examine the association between patient care intervals and various variables influencing these intervals (age, BMI, educational level, ECOG, first specialist consultation, tumour stage, PSA, Gleason score, and presence of symptoms) and calculated the odds ratio (OR) and the interquartile range (IQR). To estimate the random inter-hospital variability, we used the median odds ratio (MOR). Results: 470 patients with prostate cancer were included. Mean age was 67.8 (SD: 7.6) years and 75.4 % were physically active. Tumour size was classified as T1 in 41.0 % and as T2 in 40 % of patients, their median Gleason score was 6.0 (IQR:1.0), and 36.1 % had low risk cancer according to the D'Amico classification. The median interval between first consultation and diagnosis was 89 days (IQR:123.5) with no statistically significant variability between centres. Presence of symptoms was associated with a significantly longer interval between first consultation and diagnosis than no symptoms (OR:1.93, 95%CI 1.29-2.89). The median time between diagnosis and first treatment (therapeutic interval) was 75.0 days (IQR:78.0) and significant variability between centres was found (MOR:2.16, 95%CI 1.45-4.87). 
This interval was shorter in patients with a high PSA value (p = 0.012) and a high Gleason score (p = 0.026). Conclusions: Most incident prostate cancer patients in Spain are diagnosed at an early stage of an adenocarcinoma. The period to complete the diagnostic process is approximately three months, whereas the therapeutic intervals vary among centres and are shorter for patients with a worse prognosis. The presence of prostatic symptoms, PSA level, and Gleason score influence all the clinical intervals differently.
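The median odds ratio used to quantify inter-hospital variability has a closed form (Merlo et al.): MOR = exp(sqrt(2 * sigma^2) * Phi^-1(0.75)), where sigma^2 is the between-hospital variance of the random intercept in the multilevel logistic model. A small sketch; the variance value below is chosen only to show that it reproduces an MOR near the reported 2.16, and is not taken from the study:

```python
import math
from statistics import NormalDist

def median_odds_ratio(between_cluster_var):
    """Median odds ratio from the between-cluster (here, between-hospital)
    variance of the random intercept in a multilevel logistic model."""
    z75 = NormalDist().inv_cdf(0.75)          # ~0.6745, the 75% normal quantile
    return math.exp(math.sqrt(2.0 * between_cluster_var) * z75)

mor = median_odds_ratio(0.65)                 # illustrative variance, MOR ~ 2.16
```

An MOR of 1 would mean no hospital effect; the reported MOR of 2.16 says that, for two otherwise identical patients at different hospitals, the odds of the outcome typically differ by about a factor of two.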

    Effects of Magnesium Supplementation on Blood Pressure: A Meta-Analysis of Randomized Double-Blind Placebo-Controlled Trials

    The antihypertensive effect of magnesium (Mg) supplementation remains controversial. We aimed to quantify the effect of oral Mg supplementation on blood pressure (BP) by synthesizing available evidence from randomized, double-blind, placebo-controlled trials. We searched trials of Mg supplementation on normotensive and hypertensive adults published up to February 1, 2016 from MEDLINE and EMBASE databases; 34 trials involving 2028 participants were eligible for this meta-analysis. Weighted mean differences of changes in BP and serum Mg were calculated by random-effects meta-analysis. Mg supplementation at a median dose of 368 mg/d for a median duration of 3 months significantly reduced systolic BP by 2.00 mm Hg (95% confidence interval, 0.43-3.58) and diastolic BP by 1.78 mm Hg (95% confidence interval, 0.73-2.82); these reductions were accompanied by a 0.05 mmol/L (95% confidence interval, 0.03-0.07) elevation of serum Mg compared with placebo. Using a restricted cubic spline curve, we found that Mg supplementation with a dose of 300 mg/d or duration of 1 month is sufficient to elevate serum Mg and reduce BP, and that serum Mg was negatively associated with diastolic BP but not systolic BP (all P<0.05). In the stratified analyses, a greater reduction in BP tended to be found in trials with high quality or low dropout rate (all P values for interaction <0.05). However, residual heterogeneity may still exist after considering these possible factors. Our findings indicate a causal effect of Mg supplementation on lowering BP in adults. Further well-designed trials are warranted to validate the BP-lowering efficacy of optimal Mg treatment.
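The random-effects pooling of weighted mean differences can be sketched with the standard DerSimonian-Laird estimator; the trial numbers below are hypothetical, not from the meta-analysis:

```python
import math

def dersimonian_laird(effects, variances):
    """Random-effects pooled weighted mean difference (DerSimonian-Laird).

    effects: per-trial mean BP changes (treatment minus placebo, mm Hg);
    variances: their squared standard errors. Returns (pooled, 95% CI).
    """
    w = [1.0 / v for v in variances]
    fixed = sum(wi * e for wi, e in zip(w, effects)) / sum(w)
    q = sum(wi * (e - fixed) ** 2 for wi, e in zip(w, effects))
    df = len(effects) - 1
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - df) / c)             # between-trial variance estimate
    w_re = [1.0 / (v + tau2) for v in variances]
    pooled = sum(wi * e for wi, e in zip(w_re, effects)) / sum(w_re)
    se = math.sqrt(1.0 / sum(w_re))
    return pooled, (pooled - 1.96 * se, pooled + 1.96 * se)

# hypothetical systolic-BP differences (mm Hg) and variances from three trials
pooled, ci = dersimonian_laird([-2.5, -1.0, -3.0], [1.0, 0.8, 1.2])
```

When the trials disagree more than chance predicts (Q above its degrees of freedom), tau2 grows, the weights flatten, and the confidence interval widens, which is how this model absorbs the heterogeneity the abstract mentions.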