    Fletcher-Turek Model Averaged Profile Likelihood Confidence Intervals

    We evaluate the model averaged profile likelihood confidence intervals proposed by Fletcher and Turek (2011) in a simple situation in which there are two linear regression models over which we average. We obtain exact expressions for the coverage and the scaled expected length of the intervals and use these expressions to compute both quantities in particular situations. We show that the Fletcher-Turek confidence intervals can have coverage well below the nominal coverage and expected length greater than that of a standard confidence interval whose coverage equals this same minimum coverage. In these situations, the Fletcher-Turek confidence intervals are unfortunately no better than the standard confidence interval used after model selection but ignoring the model selection process.
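    The paper derives coverage and scaled expected length exactly; as a rough illustration of what these two quantities measure, the sketch below estimates them by Monte Carlo for the ordinary full-model t-interval in a two-covariate regression. This is a hypothetical stand-in, not the Fletcher-Turek construction itself.

```python
# Monte Carlo estimate of coverage and expected length of a confidence
# interval procedure. Illustration only: we use the standard full-model
# t-interval for a regression coefficient, not the Fletcher-Turek
# model-averaged construction, whose properties the paper derives exactly.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n, beta1, sigma = 30, 0.5, 1.0
x1 = rng.normal(size=n)
x2 = rng.normal(size=n)
X = np.column_stack([np.ones(n), x1, x2])  # full model: intercept, x1, x2

covered, lengths = 0, []
for _ in range(5000):
    y = beta1 * x1 + sigma * rng.normal(size=n)   # true beta2 = 0
    XtX_inv = np.linalg.inv(X.T @ X)
    bhat = XtX_inv @ X.T @ y
    resid = y - X @ bhat
    s2 = resid @ resid / (n - 3)
    se = np.sqrt(s2 * XtX_inv[1, 1])              # s.e. of beta1-hat
    tcrit = stats.t.ppf(0.975, df=n - 3)
    lo, hi = bhat[1] - tcrit * se, bhat[1] + tcrit * se
    covered += (lo <= beta1 <= hi)
    lengths.append(hi - lo)

print(f"coverage ~ {covered / 5000:.3f}, mean length ~ {np.mean(lengths):.3f}")
```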

    Designing experiments for an application in laser and surface Chemistry

    We consider the design used to collect data for a Second Harmonic Generation (SHG) experiment, in which the behaviour of interfaces between two phases, for example the surface of a liquid, is investigated. Such studies have implications for surfactants, catalysis, membranes and electrochemistry. We describe ongoing work on designing experiments to investigate the nonlinear models used to represent the data, which relate the intensity of the SHG signal to the polarisation angles of the polarised light beam. We investigate the choice of design points and their effect on parameter estimates. Various designs, including the current practice of using equally spaced levels, are compared on the basis of the overall aim of the chemical study.
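    As a sketch of how candidate designs for a nonlinear model can be compared, the snippet below scores two sets of polarisation angles by local D-optimality, det(F'F), where F is the Jacobian of the mean function at a prior parameter guess. The intensity model, parameter guesses and candidate designs are all assumptions made for illustration; the abstract does not specify the actual SHG model.

```python
# Compare candidate designs for a nonlinear model by local D-optimality,
# i.e. det(F'F) where F is the Jacobian of the model at the design points.
# The intensity model below is a hypothetical stand-in, not the SHG model
# used in the paper.
import numpy as np

def jacobian(angles, a, b):
    """Jacobian of I(g) = (a*cos^2 g + b*sin^2 g)^2 w.r.t. (a, b)."""
    c2, s2 = np.cos(angles) ** 2, np.sin(angles) ** 2
    base = 2 * (a * c2 + b * s2)
    return np.column_stack([base * c2, base * s2])

a0, b0 = 1.0, 0.4                      # prior guesses for the parameters
equal = np.linspace(0, np.pi / 2, 8)   # current practice: equally spaced
clustered = np.array([0.0, 0.1, 0.2, 0.75, 0.8, 1.35, 1.45, np.pi / 2])

for name, design in [("equally spaced", equal), ("clustered", clustered)]:
    F = jacobian(design, a0, b0)
    print(f"{name:15s} det(F'F) = {np.linalg.det(F.T @ F):.4f}")
```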

    Likelihood inference for small variance components

    The authors explore likelihood-based methods for making inferences about the components of variance in a general normal mixed linear model. In particular, they use local asymptotic approximations to construct confidence intervals for the components of variance when the components are close to the boundary of the parameter space. In the process, they explore the question of how to profile the restricted likelihood (REML). They also show that general REML estimates are less likely to fall on the boundary of the parameter space than maximum likelihood estimates, and that the likelihood ratio test based on the local asymptotic approximation has higher power than the likelihood ratio test based on the usual chi-squared approximation. They examine the finite-sample properties of the proposed intervals by means of a simulation study.
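    A minimal sketch of profiling a REML criterion near the boundary: for a balanced one-way random-effects model with the error variance held fixed at its true value (an assumed simplification, not the paper's general setup), we maximise the REML log-likelihood over sigma_b^2 >= 0 and check whether the estimate lands on the boundary.

```python
# Profile the REML log-likelihood of the between-group variance in a
# balanced one-way random-effects model y_ij = mu + b_i + e_ij, and check
# whether the maximiser falls on the boundary sigma_b^2 = 0.
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(1)
g, r = 8, 5                                # groups, replicates per group
y = (0.1 * rng.normal(size=(g, 1))         # small sigma_b: near boundary
     + rng.normal(size=(g, r))).ravel()
groups = np.repeat(np.arange(g), r)

def reml_nll(sb2, se2=1.0):
    """Negative REML log-likelihood; sigma_e^2 fixed for simplicity."""
    n = len(y)
    Z = (groups[:, None] == np.arange(g)[None, :]).astype(float)
    V = se2 * np.eye(n) + sb2 * Z @ Z.T
    Vi = np.linalg.inv(V)
    X = np.ones((n, 1))
    XtViX = X.T @ Vi @ X
    beta = np.linalg.solve(XtViX, X.T @ Vi @ y)
    resid = y - X @ beta
    _, logdetV = np.linalg.slogdet(V)
    _, logdetX = np.linalg.slogdet(XtViX)
    return 0.5 * (logdetV + logdetX + resid @ Vi @ resid)

res = minimize_scalar(reml_nll, bounds=(0.0, 5.0), method="bounded")
print(f"REML estimate of sigma_b^2: {res.x:.4f}"
      + ("  (effectively on the boundary)" if res.x < 1e-3 else ""))
```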

    Properties of a square root transformation regression model

    We consider the problem of modelling the conditional distribution of a response given a vector of covariates x when the response is a compositional data vector u. That is, u is defined on the unit simplex [...] This definition of the unit simplex differs subtly from that of Aitchison (1982), as we relax the condition that the components of u must be strictly positive. Under this scenario, use of the ratio (or logratio) to compare different compositions is not ideal since it is undefined in some instances, and subcompositional analysis is also not appropriate due to the possibility of division by zero. It has long been recognised that the square root transformation [...] transforms compositional data (including zeros) onto the surface of the (p-1)-dimensional hypersphere.
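    The key property is easy to verify numerically: if u lies on the unit simplex, then sqrt(u) has unit Euclidean norm, even when some components of u are exactly zero. A minimal sketch:

```python
# The square-root transform sends a composition u on the unit simplex
# (nonnegative components summing to 1) to a point on the surface of the
# unit hypersphere, and it stays well defined when some components are
# exactly zero -- the case that breaks ratio-based (Aitchison) methods.
import numpy as np

def sqrt_transform(u):
    u = np.asarray(u, dtype=float)
    assert np.all(u >= 0) and np.isclose(u.sum(), 1.0), "not on the simplex"
    return np.sqrt(u)

for u in ([0.2, 0.3, 0.5], [0.0, 0.4, 0.6]):   # second one contains a zero
    z = sqrt_transform(u)
    print(u, "->", z, "| norm =", np.linalg.norm(z))  # norm is always 1
```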

    The Fallacy of Averages

    This is the publisher's version, also available electronically from http://www.jstor.org/stable/2461871?seq=1#page_scan_tab_contents. No abstract is available for this item.

    Logarithmic corrections in the free energy of monomer-dimer model on plane lattices with free boundaries

    Using exact computations, we study the classical hard-core monomer-dimer model on m x n plane lattice strips with free boundaries. For an arbitrary number v of monomers (or vacancies), we find a logarithmic correction term in the finite-size correction of the free energy. The coefficient of the logarithmic correction term depends on the number of monomers present (v) and the parity of the width n of the lattice strip: the coefficient equals v when n is odd, and v/2 when n is even. These results generalize previous results for a single monomer in an otherwise fully packed lattice of dimers. Comment: 4 pages, 2 figures
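    For intuition, the partition function of the monomer-dimer model can be computed exactly on tiny strips by brute-force recursion, as in the sketch below; this is only an illustration, and the paper's exact computations use far more efficient methods.

```python
# Exact partition function of the monomer-dimer model on a small m x n
# strip with free boundaries, by direct recursion: the first uncovered
# cell (in row-major order) is either a monomer or the left/top end of a
# dimer, so every covering is counted exactly once. The free energy per
# site is log(Z) / (m * n).

def Z(m, n, x=1):
    """Sum over all monomer-dimer coverings, weight x per monomer."""
    def rec(covered):
        free = [c for c in range(m * n) if c not in covered]
        if not free:
            return 1
        c = free[0]
        i, j = divmod(c, n)
        total = x * rec(covered | {c})            # place a monomer
        if j + 1 < n and c + 1 not in covered:    # dimer to the right
            total += rec(covered | {c, c + 1})
        if i + 1 < m and c + n not in covered:    # dimer downward
            total += rec(covered | {c, c + n})
        return total
    return rec(frozenset())

for m, n in [(2, 2), (3, 3), (4, 4)]:
    print(f"{m}x{n}: Z = {Z(m, n)}")
```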

    A random-effects hurdle model for predicting bycatch of endangered marine species

    Understanding and reducing the incidence of accidental bycatch, particularly for vulnerable species such as sharks, is a major challenge for contemporary fisheries management worldwide. Bycatch data, most often collected by at-sea observers during fishing trips, are clustered by trip and/or vessel and typically involve a large number of zero counts and very few positive counts. Though hurdle models are very popular for count data with excess zeros, their extensions to clustered data have received far less attention. Here we present a novel random-effects hurdle model for bycatch data that provides accurate estimates of bycatch probabilities as well as other cluster-specific targets. These are essential for informing conservation and management decisions and for identifying bycatch hotspots, often considered the first step in attempting to protect endangered marine species. We validate our methodology through simulation and use it to analyze bycatch data on critically endangered hammerhead sharks from the U.S. National Marine Fisheries Service Pelagic Observer Program.
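    As an illustration of the data-generating process such a model describes, the sketch below simulates clustered bycatch-style counts with a shared trip-level random intercept feeding both the hurdle and the positive-count components. All parameter values are invented for illustration and are not taken from the paper.

```python
# Simulate counts from a random-effects hurdle model: a per-trip random
# intercept enters both the Bernoulli "hurdle" (any bycatch on this set?)
# and a zero-truncated Poisson for the positive counts.
import numpy as np

rng = np.random.default_rng(2)
n_trips, sets_per_trip = 50, 20
sigma_u, alpha, beta = 1.0, -2.0, 0.3       # hurdle and count intercepts

counts = []
for _ in range(n_trips):
    u = sigma_u * rng.normal()              # trip-level random effect
    p_pos = 1 / (1 + np.exp(-(alpha + u)))  # P(positive catch on a set)
    lam = np.exp(beta + u)                  # Poisson rate given positive
    for _ in range(sets_per_trip):
        if rng.random() < p_pos:
            k = 0
            while k == 0:                   # zero-truncated Poisson draw
                k = rng.poisson(lam)
            counts.append(k)
        else:
            counts.append(0)

counts = np.array(counts)
print(f"{np.mean(counts == 0):.1%} zeros, max count {counts.max()}")
```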

    Fitting and Interpreting Occupancy Models

    We show that occupancy models are more difficult to fit than is generally appreciated because the estimating equations often have multiple solutions, including boundary estimates which produce fitted probabilities of zero or one. The estimates are unstable when the data are sparse, making them difficult to interpret, and, even in ideal situations, highly variable. As a consequence, making accurate inference is difficult. When abundance varies over sites (which is the general rule in ecology, because we expect spatial variation in abundance) and detection depends on abundance, the standard analysis suffers bias (attenuation in detection, biased estimates of occupancy and potentially misleading relationships between occupancy and other covariates), asymmetric sampling distributions, and slow convergence of the sampling distributions to normality. The key result of this paper is that the biases are of similar magnitude to those obtained when we ignore non-detection entirely. The fact that abundance is subject to detection error, and hence is not directly observable, means that we cannot tell when bias is present (or, equivalently, how large it is) and we cannot adjust for it. This implies that we cannot tell which fit is better: the fit from the occupancy model or the fit ignoring the possibility of detection error. Therefore, trying to adjust occupancy models for non-detection can be as misleading as ignoring non-detection completely; indeed, ignoring non-detection can actually be better than trying to adjust for it. Funding was received from the Australian Research Council to support this research. The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.
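    The multiple-solution and boundary phenomena are easy to reproduce: the sketch below writes down the basic single-season occupancy likelihood and maximises it from many random starting values on sparse simulated data, collecting the distinct solutions (including boundary fits) that the optimiser can return.

```python
# Fit the basic occupancy model by maximum likelihood from many random
# starting points; with sparse data the optimiser can land on different
# solutions, including boundary estimates with psi-hat or p-hat near 0
# or 1. A minimal sketch of the instability the paper describes.
import numpy as np
from scipy.optimize import minimize
from scipy.stats import binom

rng = np.random.default_rng(3)
S, J, psi, p = 20, 3, 0.4, 0.25        # sites, visits, occupancy, detection
occupied = rng.random(S) < psi
y = rng.binomial(J, p * occupied)      # detections per site

def nll(theta):
    psi_, p_ = theta
    # Per-site likelihood: occupied with prob psi_ (binomial detections),
    # or unoccupied with prob 1 - psi_ (forces zero detections).
    lik = psi_ * binom.pmf(y, J, p_) + (1 - psi_) * (y == 0)
    return -np.sum(np.log(np.maximum(lik, 1e-300)))

solutions = set()
for _ in range(30):
    start = rng.uniform(0.05, 0.95, size=2)
    res = minimize(nll, start, method="L-BFGS-B",
                   bounds=[(1e-6, 1 - 1e-6)] * 2)
    solutions.add((round(res.x[0], 3), round(res.x[1], 3)))

for psi_hat, p_hat in sorted(solutions):
    print(f"psi-hat = {psi_hat:.3f}, p-hat = {p_hat:.3f}")
```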