
    Behaviour of Dickey-Fuller Unit Root Tests Under Trend Misspecification

    We analyse the case where a unit root test is based on a Dickey-Fuller regression whose only deterministic term is a fixed intercept. Suppose, however, as could well be the case, that the actual data generating process includes a broken linear trend. It is shown theoretically, and verified empirically, that under the I(1) null and I(0) alternative hypotheses the Dickey-Fuller test can display a wide range of different characteristics depending on the nature and location of the break.
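The setup described above can be sketched numerically. The snippet below is a minimal illustration, not the paper's simulation design: it runs the intercept-only Dickey-Fuller regression dy_t = alpha + rho*y_{t-1} + e_t on data generated around a broken linear trend, where the omitted break makes the stationary series mimic a unit root.

```python
import numpy as np

def df_stat(y):
    """t-ratio on rho from the Dickey-Fuller regression
    dy_t = alpha + rho * y_{t-1} + e_t (fixed intercept only)."""
    dy = np.diff(y)
    ylag = y[:-1]
    X = np.column_stack([np.ones_like(ylag), ylag])
    beta = np.linalg.lstsq(X, dy, rcond=None)[0]
    resid = dy - X @ beta
    sigma2 = resid @ resid / (len(dy) - 2)
    cov = sigma2 * np.linalg.inv(X.T @ X)
    return beta[1] / np.sqrt(cov[1, 1])

rng = np.random.default_rng(0)
T = 200
t = np.arange(T)
# I(0) noise around a linear trend whose slope breaks at mid-sample
trend = 0.05 * t + 0.15 * np.maximum(t - T // 2, 0)
y = trend + rng.normal(size=T)
print(df_stat(y))
```

Because the regression's deterministic component (a fixed intercept) is misspecified relative to the broken-trend data generating process, the test statistic here is typically far less negative than for genuinely stationary data, so the unit-root null tends not to be rejected.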

    The radial plot in meta-analysis: approximations and applications

    Fixed effects meta-analysis can be thought of as least squares analysis of the radial plot, the plot of standardized treatment effect against precision (reciprocal of the standard deviation) for the studies in a systematic review. For example, the least squares slope through the origin estimates the treatment effect, and a widely used test for publication bias is equivalent to testing the significance of the regression intercept. However, the usual theory assumes that the within-study variances are known, whereas in practice they are estimated. This leads to extra variability in the points of the radial plot which can lead to a marked distortion in inferences that are derived from these regression calculations. This is illustrated by a clinical trials example from the Cochrane database. We derive approximations to the sampling properties of the radial plot and suggest bias corrections to some of the commonly used methods of meta-analysis. A simulation study suggests that these bias corrections are effective in controlling levels of significance of tests and coverage of confidence intervals.
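The radial-plot identities in the first sentences can be verified directly. This is a sketch under the known-variance assumption the abstract criticizes (the study data are made up): the slope through the origin of standardized effect against precision reproduces the inverse-variance pooled estimate, and the intercept of the unrestricted fit is the quantity tested in the Egger-type publication-bias test.

```python
import numpy as np

def radial_fit(effects, ses):
    """Least-squares analysis of the radial plot: z = effect/se vs x = 1/se.
    Returns (slope through origin, unrestricted intercept, unrestricted slope)."""
    effects, ses = np.asarray(effects, float), np.asarray(ses, float)
    z, x = effects / ses, 1.0 / ses
    slope_origin = (x @ z) / (x @ x)           # inverse-variance pooled effect
    X = np.column_stack([np.ones_like(x), x])  # unrestricted: intercept + slope
    intercept, slope = np.linalg.lstsq(X, z, rcond=None)[0]
    return slope_origin, intercept, slope

# Hypothetical five-study review
effects = np.array([0.30, 0.10, 0.25, 0.40, 0.15])
ses = np.array([0.10, 0.05, 0.20, 0.30, 0.08])
pooled, egger_intercept, _ = radial_fit(effects, ses)
w = 1.0 / ses**2
print(pooled, np.sum(w * effects) / np.sum(w))  # identical pooled estimates
```

The paper's point is that treating `ses` as known when they are estimated injects extra variability into the plotted points, distorting both the pooled estimate and the intercept test.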

    Policy Relevant Heterogeneity in the Value of Statistical Life: New Evidence from Panel Data Quantile Regressions

    We examine differences in the value of statistical life (VSL) across potential wage levels in panel data using quantile regressions with intercept heterogeneity. Latent heterogeneity is econometrically important and affects the estimated VSL. Our findings indicate that a reasonable average cost per expected life saved cut-off for health and safety regulations is $7 million to $8 million per life saved, but the VSL varies considerably across the labor force. Our results reconcile the previous discrepancies between hedonic VSL estimates and the values implied by theories linked to the coefficient of relative risk aversion. Because the VSL varies elastically with income, regulatory agencies should regularly update the VSL used in benefit assessments, increasing the VSL proportionally with changes in income over time.

    Keywords: panel data, quantile regression, VSL, value of statistical life, fixed effects, PSID, fatality risk, CFOI
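The arithmetic behind a hedonic VSL estimate, and behind the proportional income updating the abstract recommends, can be made concrete. All numbers and both helper functions below are illustrative assumptions, not values or code from the paper.

```python
def vsl_from_hedonic(wage_premium_per_year, deaths_per_100k):
    """Implied value of statistical life from a compensating wage differential:
    the annual premium workers require for an extra fatality risk of
    deaths_per_100k / 100,000, scaled up to one expected death."""
    return wage_premium_per_year * (100_000 / deaths_per_100k)

def updated_vsl(vsl0, income0, income1, elasticity=1.0):
    """Update a VSL for income growth; elasticity = 1.0 corresponds to the
    proportional updating the abstract recommends."""
    return vsl0 * (income1 / income0) ** elasticity

# A hypothetical $70/year premium for one extra death per 100,000 workers
# implies a $7 million VSL, matching the low end of the abstract's range.
vsl = vsl_from_hedonic(70.0, 1.0)
print(vsl, updated_vsl(vsl, 50_000, 55_000))
```

With unit income elasticity, a 10% rise in income scales the VSL used in benefit assessments by the same 10%.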

    The Probability Distribution of the Response Times in Self-paced Continuous Search Tasks

    When psychologists began to use intelligence tests, they also used simple, overlearned tasks to determine the pattern of individual reaction times (RT). Measures of RT variation were proposed as possible indicators of intelligence. However, a fundamental question has remained partly unanswered: is there an existing theory that explains individual RT variation? In this paper, a theory is proposed for the response times obtained in the Attention Concentration Test. The test consists of two different conditions: a fixed condition and a random condition. For each of these conditions a different RT model was developed, both based on the basic assumption that the individual response times have an approximately shifted exponential distribution. Empirical data were obtained from two different samples (N = 362, N = 334) of Finnish students. The method used to check the validity of each model involved computing the intercept and slope of the linear regression of the standard deviation from the stationary response times on the mean corrected for shift. In this regression analysis, the standard deviation is the dependent variable and the mean corrected for shift the independent variable. The shift parameter was estimated by using the smallest reaction time. The observed intercept and slope were compared with the predicted intercept and slope according to the proposed models. The model for the fixed condition of the test did not hold. The model for the random condition, however, did. The findings were interpreted according to the arrangement of the targets as they occurred in each bar.
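The validity check described above has a simple logic: for a shifted exponential distribution the standard deviation equals the mean minus the shift, so the regression of SD on the shift-corrected mean should have slope 1 and intercept 0. The simulation below is a sketch of that check (the sample sizes and parameter ranges are arbitrary, not the paper's), estimating the shift by the smallest RT as the abstract describes.

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulate shifted-exponential response times for many "subjects":
# RT = shift + Exponential(scale).  Under this model SD = mean - shift.
n_subj, n_trials = 200, 400
shifts = rng.uniform(0.2, 0.4, n_subj)   # per-subject shift (s), arbitrary range
scales = rng.uniform(0.1, 0.5, n_subj)   # per-subject exponential scale 1/lambda
rts = shifts[:, None] + rng.exponential(scales[:, None], (n_subj, n_trials))

shift_hat = rts.min(axis=1)              # shift estimated by the smallest RT
mean_corr = rts.mean(axis=1) - shift_hat # mean corrected for shift
sd = rts.std(axis=1, ddof=1)

# Regression of SD (dependent) on shift-corrected mean (independent)
X = np.column_stack([np.ones(n_subj), mean_corr])
intercept, slope = np.linalg.lstsq(X, sd, rcond=None)[0]
print(intercept, slope)  # close to 0 and 1 when the model holds
```

A condition whose data depart from the shifted-exponential assumption, as the fixed condition apparently did, would show an observed intercept and slope away from (0, 1).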

    Effects of trophic status, water level, and temperature on shallow lake metabolism and metabolic balance: A standardized pan‐European mesocosm experiment

    Important drivers of gross primary production (GPP) and ecosystem respiration (ER) in lakes are temperature, nutrients, and light availability, which are predicted to be affected by climate change. Little is known about how these three factors jointly influence shallow lake metabolism and metabolic status as net heterotrophic or autotrophic. We conducted a pan‐European standardized mesocosm experiment covering a temperature gradient from Sweden to Greece to test the differential temperature sensitivity of GPP and ER at two nutrient levels (mesotrophic or eutrophic) crossed with two water levels (1 m and 2 m) to simulate different light regimes. The findings from our experiment were compared with predictions made according to the metabolic theory of ecology (MTE). GPP and ER were significantly higher in eutrophic mesocosms than in mesotrophic ones, and in shallow mesocosms compared to deep ones, while nutrient status and depth did not interact. The estimated temperature gains for ER of ~ 0.62 eV were comparable with those predicted by MTE. Temperature sensitivity for GPP was slightly higher than expected ~ 0.54 eV, but when corrected for daylight length, it was more consistent with predictions from MTE ~ 0.31 eV. The threshold temperature for the switch from autotrophy to heterotrophy was lower under mesotrophic (~ 11°C) than eutrophic conditions (~ 20°C). Therefore, despite a lack of significant temperature‐treatment interactions in driving metabolism, the mesocosm nutrient level proved to be crucial for how much warming a system can tolerate before it switches from net autotrophy to net heterotrophy.
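The temperature sensitivities quoted in eV come from the Boltzmann-Arrhenius relation used in the metabolic theory of ecology: ln(rate) is linear in -1/(k_B·T), and the slope is the apparent activation energy. The snippet below is a sketch with synthetic data (the temperatures and the rate prefactor are invented), recovering an assumed E = 0.62 eV, the ER sensitivity the abstract reports.

```python
import numpy as np

K_B = 8.617e-5  # Boltzmann constant in eV/K

def activation_energy(temps_c, rates):
    """Apparent activation energy (eV): slope of ln(rate) against
    -1/(k_B * T), with T in kelvin (Boltzmann-Arrhenius fit)."""
    T = np.asarray(temps_c, float) + 273.15
    x = -1.0 / (K_B * T)
    y = np.log(np.asarray(rates, float))
    slope, _ = np.polyfit(x, y, 1)
    return slope

# Synthetic rates generated with a known activation energy of 0.62 eV
temps = np.array([5.0, 10.0, 15.0, 20.0, 25.0, 30.0])
E_true = 0.62
rates = 3.0 * np.exp(-E_true / (K_B * (temps + 273.15)))
print(activation_energy(temps, rates))  # recovers 0.62
```

Comparing such fitted slopes for GPP and ER against the MTE reference values is the kind of check the experiment performs; here the fit is exact only because the synthetic data contain no noise.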

    An approach for jointly modeling multivariate longitudinal measurements and discrete time-to-event data

    In many medical studies, patients are followed longitudinally and interest is on assessing the relationship between longitudinal measurements and time to an event. Recently, various authors have proposed joint modeling approaches for longitudinal and time-to-event data for a single longitudinal variable. These joint modeling approaches become intractable with even a few longitudinal variables. In this paper we propose a regression calibration approach for jointly modeling multiple longitudinal measurements and discrete time-to-event data. Ideally, a two-stage modeling approach could be applied in which the multiple longitudinal measurements are modeled in the first stage and the longitudinal model is related to the time-to-event data in the second stage. Biased parameter estimation due to informative dropout makes this direct two-stage modeling approach problematic. We propose a regression calibration approach which appropriately accounts for informative dropout. We approximate the conditional distribution of the multiple longitudinal measurements given the event time by modeling all pairwise combinations of the longitudinal measurements using a bivariate linear mixed model which conditions on the event time. Complete data are then simulated based on estimates from these pairwise conditional models, and regression calibration is used to estimate the relationship between longitudinal data and time-to-event data using the complete data. We show that this approach performs well in estimating the relationship between multivariate longitudinal measurements and the time-to-event data and in estimating the parameters of the multiple longitudinal process subject to informative dropout. 
    We illustrate this methodology with simulations and with an analysis of primary biliary cirrhosis (PBC) data. Published at http://dx.doi.org/10.1214/10-AOAS339 in the Annals of Applied Statistics (http://www.imstat.org/aoas/) by the Institute of Mathematical Statistics (http://www.imstat.org/).
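The core idea of regression calibration, replacing an error-prone covariate with its conditional expectation before fitting the outcome model, can be shown in a toy univariate sketch. This is not the paper's pairwise bivariate mixed-model procedure: here the calibration factor is computed from the simulated truth purely for illustration, whereas in practice it would come from a measurement model.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy regression-calibration sketch: covariate x is observed with noise as w.
# Naive regression of y on w attenuates the slope; substituting E[x | w]
# (the calibration step) removes the attenuation.
n = 20_000
x = rng.normal(0.0, 1.0, n)              # true longitudinal value
w = x + rng.normal(0.0, 1.0, n)          # error-prone observed measurement
y = 2.0 * x + rng.normal(0.0, 0.5, n)    # outcome depends on the true value

def slope(u, v):
    return np.cov(u, v)[0, 1] / np.var(u, ddof=1)

naive = slope(w, y)                           # attenuated toward 2 * 1/(1+1) = 1
lam = np.var(x, ddof=1) / np.var(w, ddof=1)   # reliability ratio (uses the
                                              # simulated truth; a shortcut)
x_hat = lam * w                               # E[x | w] for jointly normal x, w
calibrated = slope(x_hat, y)
print(naive, calibrated)  # ~1.0 vs ~2.0
```

The paper's contribution is the multivariate, informative-dropout version of this step: the conditional expectations are built from pairwise bivariate mixed models that condition on the event time, and complete data are simulated from them before the calibration regression.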