
    Second order ancillary: A differential view from continuity

    Second order approximate ancillaries have evolved as the primary ingredient for recent likelihood development in statistical inference. This uses quantile functions rather than the equivalent distribution functions, and the intrinsic ancillary contour is given explicitly as the plug-in estimate of the vector quantile function. The derivation uses a Taylor expansion of the full quantile function, and the linear term gives a tangent to the observed ancillary contour. For the scalar parameter case, there is a vector field that integrates to give the ancillary contours, but for the vector case, there are multiple vector fields and the Frobenius conditions for mutual consistency may not hold. We demonstrate, however, that the conditions hold in a restricted way and that this verifies the second order ancillary contours in moderate deviations. The methodology can generate an appropriate exact ancillary when such exists or an approximate ancillary for the numerical or Monte Carlo calculation of p-values and confidence quantiles. Examples are given, including nonlinear regression and several enigmatic examples from the literature. Comment: Published at http://dx.doi.org/10.3150/10-BEJ248 in Bernoulli (http://isi.cbs.nl/bernoulli/) by the International Statistical Institute/Bernoulli Society (http://isi.cbs.nl/BS/bshome.htm).
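To make the construction concrete, here is a minimal sketch of the quantile-based expansion the abstract describes; the notation (q, z, V) is assumed for illustration and is not taken from the paper.

```latex
% Minimal sketch (notation assumed): coordinate-wise quantile
% representation of the data, with z_i a fixed pivotal quantity.
\[
  y_i = q_i(z_i;\theta), \qquad z_i = F_i(y_i;\theta).
\]
% Plug-in ancillary contour through the observed data y^0, using the
% observed MLE \hat\theta^0 and z^0 = F(y^0;\hat\theta^0):
\[
  \mathcal{A}(y^0) = \{\, q(z^0;\theta) : \theta \in \Theta \,\}.
\]
% First-order Taylor expansion in \theta; the linear term gives the
% tangent directions V to the observed ancillary contour:
\[
  q(z^0;\theta) = y^0 + V\,(\theta - \hat\theta^0)
    + O(\|\theta-\hat\theta^0\|^2),
  \qquad
  V = \left.\frac{\partial q(z;\theta)}{\partial \theta^{\mathsf T}}
      \right|_{(z^0,\hat\theta^0)}.
\]
```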

    The Company You Keep: Qualitative Uncertainty in Providing Club Goods

    Clubs are typically experience goods: potential members cannot precisely ascertain their quality beforehand (quality depends endogenously on the club's facility investment and its number of users, which in turn depends on its pricing policy). Members with unsatisfactory initial experiences discontinue visits. We show that a monopoly profit maximiser never offers a free trial period for such goods but, for a quality function homogeneous of any feasible degree, a welfare maximiser always does. When the quality function is homogeneous of degree zero, the monopolist provides a socially excessive level of quality to repeat buyers. In other possible regimes, the monopolist permits too little club usage. Keywords: clubs, qualitative uncertainty, monopoly, welfarist.
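For readers unfamiliar with the homogeneity condition, a short illustration; the symbols Q, I, N are assumed names for quality, facility investment, and number of users, not the paper's notation.

```latex
% Homogeneity of degree k in (I, N):
\[
  Q(\lambda I, \lambda N) = \lambda^{k}\, Q(I, N)
  \quad \text{for all } \lambda > 0.
\]
% Degree zero (k = 0): quality depends only on investment per user,
\[
  Q(I, N) = Q(I/N,\, 1),
\]
% so scaling up the facility and the membership together leaves quality unchanged.
```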

    An evolutionarily stable joining policy for group foragers

    For foragers that exploit patchily distributed resources that are challenging to locate, detecting discoveries made by others with a view to joining them and sharing the patch may often be an attractive tactic, and such behavior has been observed across many taxa. If, as will commonly be true, the time taken to join another individual on a patch increases with the distance to that patch, then we would expect foragers to be selective in accepting joining opportunities, preferentially joining nearby discoveries. If competition occurs on patches, then the profitability of joining (and of not joining) will be influenced by the strategies adopted by others. Here we present a series of models designed to illuminate the evolutionarily stable joining strategy. We confirm rigorously the previous suggestion that there should be a critical joining distance, with all joining opportunities within that distance being accepted and all others declined. Further, we predict that this distance should be unaffected by the total availability of food in the environment, but should increase with decreasing density of other foragers, increasing speed of movement towards joining opportunities, increased difficulty in finding undiscovered food patches, and decreasing speed with which discovered patches can be harvested. We are further able to predict how fully discovered patches should be exploited before being abandoned as unprofitable: patches are more heavily exploited when undiscovered patches are hard to find, when patches can be searched for remaining food more quickly, when forager density is low, and when foragers are relatively slow in traveling to discovered patches.
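A toy sketch of the predicted threshold rule follows. The payoff forms and all parameter values are illustrative assumptions, not the paper's model; they only show how a unique critical joining distance emerges when the joining payoff declines with distance while the payoff from continued search does not.

```python
# Toy model: join a discovered patch iff the expected gain beats
# continued independent search. All numbers are illustrative.

def joining_payoff(d, speed=1.0, patch_value=10.0, depletion_rate=0.5):
    """Expected food gained by joining a patch discovered at distance d.
    Travel takes d / speed; competitors deplete the patch meanwhile."""
    travel_time = d / speed
    return patch_value * max(0.0, 1.0 - depletion_rate * travel_time)

def search_payoff(find_rate=0.05, patch_value=10.0, horizon=5.0):
    """Expected food from declining and continuing independent search."""
    return find_rate * patch_value * horizon  # = 2.5 with these defaults

def should_join(d, **kw):
    """Accept a joining opportunity iff it beats continued search."""
    return joining_payoff(d, **kw) > search_payoff()

# The resulting policy is a distance threshold (here d* = 1.5):
# accept all nearby discoveries, decline all distant ones.
for d in (0.5, 1.0, 2.0, 4.0):
    print(d, should_join(d))
```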

    Effect of Preventive Home Visits by a Nurse on the Outcomes of Frail Elderly People in the Community: a randomized controlled trial

    Background: Timely recognition and prevention of health problems among elderly people have been shown to improve their health. In this randomized controlled trial the authors examined the impact of preventive home visits by a nurse, compared with usual care, on the outcomes of frail elderly people living in the community. Methods: A screening questionnaire identified eligible participants (those aged 70 years or more at risk of sudden deterioration in health). Those randomly assigned to the visiting nurse group were assessed and followed up in their homes for 14 months. The primary outcome measure was the combined rate of deaths and admissions to an institution, and the secondary outcome measure was the rate of health services utilization, during the 14 months; these rates were determined through a medical chart audit by a research nurse who was blind to group allocation. Results: The questionnaire was mailed to 415 elderly people, of whom 369 (88.9%) responded. Of these, 198 (53.7%) were eligible, and 142 consented to participate and were randomly assigned to either the visiting nurse group (73) or the usual care group (69). The combined rate of deaths and admissions to an institution was 10.0% in the visiting nurse group and 5.8% in the usual care group (p = 0.52). The rate of health services utilization did not differ significantly between the 2 groups. Influenza and pneumonia vaccination rates were significantly higher in the visiting nurse group (90.1% and 81.9%) than in the usual care group (53.0% and 0%) (p < 0.001). Interpretation: The trial failed to show any effect of the visiting nurse other than vastly improved vaccination coverage.
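As a rough check of the primary-outcome comparison, the sketch below runs a Fisher exact test on event counts reconstructed from the reported percentages. The abstract does not give exact counts, so 7/70 (10.0%) and 4/69 (5.8%) are assumed values, and the result should only approximately match the reported p = 0.52.

```python
# Illustrative reconstruction of the primary-outcome comparison;
# event counts are assumptions inferred from the reported percentages.
from scipy.stats import fisher_exact

events_visit, n_visit = 7, 70   # deaths + admissions, visiting nurse group
events_usual, n_usual = 4, 69   # deaths + admissions, usual care group

table = [[events_visit, n_visit - events_visit],
         [events_usual, n_usual - events_usual]]
oddsratio, p = fisher_exact(table, alternative="two-sided")
print(f"combined event rates: {events_visit/n_visit:.1%} "
      f"vs {events_usual/n_usual:.1%}")
print(f"Fisher exact two-sided p = {p:.2f}")
```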

    Higher Accuracy for Bayesian and Frequentist Inference: Large Sample Theory for Small Sample Likelihood

    Recent likelihood theory produces p-values that have remarkable accuracy and wide applicability. The calculations use familiar tools such as maximum likelihood values (MLEs), observed information and parameter rescaling. The usual evaluation of such p-values is by simulations, and such simulations do verify that the global distribution of the p-values is uniform(0, 1), to high accuracy in repeated sampling. The derivation of the p-values, however, asserts a stronger statement, that they have a uniform(0, 1) distribution conditionally, given identified precision information provided by the data. We take a simple regression example that involves exact precision information and use large sample techniques to extract highly accurate information as to the statistical position of the data point with respect to the parameter: specifically, we examine various p-values and Bayesian posterior survivor s-values for validity. With observed data we numerically evaluate the various p-values and s-values, and we also record the related general formulas. We then assess the numerical values for accuracy using Markov chain Monte Carlo (McMC) methods. We also propose some third-order likelihood-based procedures for obtaining means and variances of Bayesian posterior distributions, again followed by McMC assessment. Finally we propose some adaptive McMC methods to improve the simulation acceptance rates. All these methods are based on asymptotic analysis that derives from the effect of additional data, and they use simple calculations based on familiar maximizing values and related informations. The example illustrates the general formulas and the ease of calculations, while the McMC assessments demonstrate the numerical validity of the p-values as the percentage position of a data point. The example, however, is very simple and transparent, and thus gives little indication that in a wide generality of models the formulas do accurately separate information for almost any parameter of interest, and then do give accurate p-value determinations from that information. As illustration, an enigmatic problem in the literature is discussed and simulations are recorded; various examples in the literature are cited. Comment: Published at http://dx.doi.org/10.1214/07-STS240 in Statistical Science (http://www.imstat.org/sts/) by the Institute of Mathematical Statistics (http://www.imstat.org).
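The abstract mentions adaptive McMC tuned for acceptance rates. The sketch below shows one generic version of that idea, a random-walk Metropolis sampler with a Robbins-Monro-style step-size adaptation toward a target acceptance rate; it is not the authors' specific procedure.

```python
# Generic adaptive random-walk Metropolis (illustrative, not the paper's method).
import numpy as np

def adaptive_metropolis(logpost, x0, n_iter=5000, target_accept=0.44, seed=0):
    rng = np.random.default_rng(seed)
    x, lp = x0, logpost(x0)
    step = 1.0
    samples = []
    for i in range(1, n_iter + 1):
        prop = x + step * rng.normal()
        lp_prop = logpost(prop)
        accepted = np.log(rng.uniform()) < lp_prop - lp
        if accepted:
            x, lp = prop, lp_prop
        # shrinking adaptation: grow the step after acceptances, shrink it
        # after rejections, so the empirical rate drifts toward the target
        step *= np.exp((accepted - target_accept) / np.sqrt(i))
        samples.append(x)
    return np.array(samples), step

# usage: standard normal target density (up to a constant)
draws, final_step = adaptive_metropolis(lambda x: -0.5 * x**2, x0=0.0)
print(final_step, draws.mean(), draws.std())
```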

    Mass segregation trends in SDSS galaxy groups

    It has been shown that galaxy properties depend strongly on their host environment. In order to understand the relevant physical processes driving galaxy evolution, it is important to study the observed properties of galaxies in different environments. Mass segregation in bound galaxy structures is an important indicator of evolutionary history and dynamical friction timescales. Using group catalogues derived from the Sloan Digital Sky Survey Data Release 7 (SDSS DR7), we investigate mass segregation trends in galaxy groups at low redshift. We examine average galaxy stellar mass as a function of group-centric radius and find evidence for weak mass segregation in SDSS groups. The magnitude of the mass segregation depends on both the galaxy stellar mass limits and the group halo mass. We show that the inclusion of low-mass galaxies tends to strengthen mass segregation trends, and that the strength of mass segregation tends to decrease with increasing group halo mass. We find the same trends if we use the fraction of massive galaxies as a function of group-centric radius as an alternative probe of mass segregation. The magnitude of mass segregation that we measure, particularly in high-mass haloes, indicates that dynamical friction is not acting efficiently. Comment: 6 pages, 2 figures, accepted for publication in MNRAS Letters.
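A schematic version of the paper's basic measurement, mean stellar mass in bins of group-centric radius, is sketched below. The mock catalogue and the normalised radius convention are assumptions for illustration; the actual analysis uses SDSS DR7 group catalogues.

```python
# Schematic radial-binning measurement on a mock galaxy catalogue.
import numpy as np

rng = np.random.default_rng(1)
n = 2000
radius = rng.uniform(0.0, 1.0, n)   # assumed normalised group-centric radius
# toy mass segregation: mean log stellar mass declines mildly with radius
log_mstar = 10.3 - 0.2 * radius + rng.normal(0.0, 0.5, n)

bins = np.linspace(0.0, 1.0, 6)
idx = np.digitize(radius, bins) - 1
for b in range(len(bins) - 1):
    sel = idx == b
    print(f"{bins[b]:.1f}-{bins[b+1]:.1f}: "
          f"<log M*> = {log_mstar[sel].mean():.2f}  (N = {sel.sum()})")
```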