
    Information-adaptive clinical trials: a selective recruitment design

    We propose a novel adaptive design for clinical trials with time-to-event outcomes and covariates (which may consist of or include biomarkers). Our method is based on the expected entropy of the posterior distribution of a proportional hazards model. The expected entropy is evaluated as a function of a patient's covariates, and the information gained due to a patient is defined as the decrease in the corresponding entropy. Candidate patients are recruited onto the trial only if they are likely to provide sufficient information; patients with covariates that are deemed uninformative are filtered out. A special case is where all patients are recruited, and we determine the optimal treatment arm allocation. This adaptive design has the advantage of potentially elucidating the relationship between covariates, treatments, and survival probabilities using fewer patients, albeit at the cost of rejecting some candidates. We assess the performance of our adaptive design using data from the German Breast Cancer Study Group and numerical simulations of a biomarker validation trial.
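The recruitment rule described above can be sketched in a toy form. This is not the paper's proportional-hazards computation: it assumes a one-parameter model on a discrete grid and a logistic event probability as a stand-in likelihood, with a hypothetical recruitment `threshold`; it only illustrates "information gain = prior entropy minus expected posterior entropy".

```python
import numpy as np

def entropy(p):
    """Shannon entropy (nats) of a discrete distribution."""
    p = p[p > 0]
    return -np.sum(p * np.log(p))

def expected_info_gain(x, grid, prior):
    """Expected decrease in posterior entropy from observing one
    candidate with covariate x.  Simplified stand-in model:
    p(event | x, beta) = sigmoid(beta * x), beta on a grid."""
    p_event = 1.0 / (1.0 + np.exp(-grid * x))     # event prob per beta
    marginal = np.sum(prior * p_event)            # predictive event prob
    post_event = prior * p_event / marginal       # posterior if event seen
    post_none = prior * (1 - p_event) / (1 - marginal)
    exp_post_entropy = (marginal * entropy(post_event)
                        + (1 - marginal) * entropy(post_none))
    return entropy(prior) - exp_post_entropy

grid = np.linspace(-3.0, 3.0, 61)                 # grid over beta
prior = np.full(grid.size, 1.0 / grid.size)       # flat prior
threshold = 0.01                                  # hypothetical cut-off
for x in [0.0, 1.5]:
    gain = expected_info_gain(x, grid, prior)
    print(f"x={x}: gain={gain:.4f}, recruit={gain >= threshold}")
```

A patient with x = 0 is uninformative under this toy model (the outcome distribution is the same for every beta), so the gain is zero and the candidate is filtered out, while x = 1.5 clears the threshold.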

    The Effect of Migration on Earnings and Welfare Benefit Receipt

    This paper analyzes the outcomes of single mothers who move. I find that the earnings of single mother movers decline sharply relative to stayers in the years before moving. Based on this evidence, I propose a model in which individuals migrate in order to break away from persistent negative earnings shocks. On average, wage earner migrants increase their expected earnings and income by nineteen percent by migrating. Of the women who primarily receive welfare benefits, most change their earnings and income little by migrating.

    The Critical Coupling Likelihood Method: A new approach for seamless integration of environmental and operating conditions of gravitational wave detectors into gravitational wave searches

    Any search effort for gravitational waves (GW) using interferometric detectors like LIGO needs to be able to identify if and when noise is coupling into the detector's output signal. The Critical Coupling Likelihood (CCL) method has been developed to characterize potential noise coupling and in the future aid GW search efforts. By testing two hypotheses about pairs of channels, CCL is able to distinguish undesirable coupled instrumental noise from potential GW candidates. Our preliminary results show that CCL can associate up to ∼80% of observed artifacts with SNR ≥ 8 to local noise sources, while reducing the duty cycle of the instrument by ≲15%. An approach like CCL will become increasingly important as GW research moves into the Advanced LIGO era, going from the first GW detection to GW astronomy.
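The two-hypothesis test on channel pairs can be illustrated with a minimal stand-in, not the actual CCL statistic: treat glitches in an auxiliary channel and artifacts in the output channel as independent Poisson streams under the null hypothesis, and flag the pair as coupled when the observed number of time coincidences is inconsistent with chance. All names and numbers below are illustrative assumptions.

```python
import math

def poisson_sf(k, lam):
    """P(K >= k) for K ~ Poisson(lam)."""
    cdf = sum(math.exp(-lam) * lam**j / math.factorial(j) for j in range(k))
    return 1.0 - cdf

def count_coincidences(aux_times, out_times, window):
    """Output artifacts within +/- window of any auxiliary glitch."""
    return sum(1 for t in out_times
               if any(abs(t - a) <= window for a in aux_times))

def coupling_test(aux_times, out_times, window, T, alpha=1e-3):
    """Flag a channel pair as coupled when the coincidence count k is
    improbable under independence (chance rate lam0)."""
    k = count_coincidences(aux_times, out_times, window)
    lam0 = len(aux_times) * len(out_times) * 2 * window / T
    p = poisson_sf(k, lam0)
    return k, p, p < alpha

# Example: two of three output artifacts coincide with aux glitches
# over a 100 s stretch, far more than chance predicts.
k, p, coupled = coupling_test(
    aux_times=[10.0, 20.0, 30.0],
    out_times=[10.05, 20.02, 55.0],
    window=0.1, T=100.0)
print(k, p, coupled)
```

Artifacts flagged this way would be attributed to local noise rather than kept as GW candidates; the non-coincident artifact at t = 55 s survives the veto.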

    RECOVERING LOCALIZED INFORMATION ON AGRICULTURAL STRUCTURE UNDERLYING DATA CONFIDENTIALITY REGULATIONS - POTENTIALS OF DIFFERENT DATA AGGREGATION AND SEGREGATION TECHNIQUES

    The modelling and information system RAUMIS is used for policy impact assessment to measure the impact of agriculture on the environment. The county level resolution often limits the analysis, and a further disaggregation to the municipality level would reduce aggregation bias and improve the assessment. Although the necessary data exist in Germany, data protection rules (DPR) prohibit their direct use. With methods such as Locally Weighted Averages (LWA), and with aggregation that pools individual production activities into larger groups of activities, the data at the municipality level can be made publicly available. However, this reduces the information content and introduces an additional error. This paper's aim is to investigate how much information is necessary to satisfactorily estimate Germany-wide production activity levels at the municipality level and whether the data requirements are still in compliance with the DPR. We apply Highest Posterior Density (HPD) estimation, which can easily include sample information as a prior. We tested different prior information content at the municipality level. However, the goodness of the developed estimation approach can only be evaluated with knowledge of the population. Because the real population is not known to us, we took advantage of the special situation in Bavaria and derived a pseudo population for that region. This is used to draw information conforming to the DPR for our estimation and to evaluate the resulting estimates. We found that the proposed approach is capable of adequately estimating most activities without violating the DPR. These findings allow us to extend the approach towards Germany-wide municipality coverage in RAUMIS.
    Keywords: Highest Posterior Density estimator (HPD), RAUMIS, locally weighted average (LWA), Research Methods / Statistical Methods
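The flavour of HPD estimation used here, a posterior mode under informative priors plus known aggregates, can be sketched in a deliberately minimal form. The actual RAUMIS estimator involves many activities and constraints; this sketch assumes independent Gaussian priors on municipality activity levels and a single adding-up constraint to a published county total, for which the mode has a closed form.

```python
import numpy as np

def hpd_disaggregate(mu, sigma, total):
    """Posterior-mode (HPD) estimate of municipality activity levels
    under independent Gaussian priors N(mu_i, sigma_i^2), constrained
    to add up to the published county total:
        min sum(((x - mu) / sigma)**2)   s.t.   sum(x) = total
    Lagrangian solution: x_i = mu_i + sigma_i^2 * (total - sum(mu)) / sum(sigma^2)."""
    mu = np.asarray(mu, dtype=float)
    sigma = np.asarray(sigma, dtype=float)
    lam = (total - mu.sum()) / np.sum(sigma**2)
    return mu + sigma**2 * lam

# Example: priors for three municipalities, county total 66.
# The discrepancy (66 - 60) is spread in proportion to prior variance.
x = hpd_disaggregate(mu=[10.0, 20.0, 30.0], sigma=[1.0, 2.0, 1.0], total=66.0)
print(x)  # sums to 66, least-trusted prior (sigma=2) moves the most
```

Priors with larger variance absorb more of the adjustment, which is how less reliable prior information (e.g. coarser aggregation) is down-weighted.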

    Salvage the treasure of geographic information in Farm census data

    In Germany, the RAUMIS modelling system has been applied for several decades in policy impact assessments to measure the impact of agriculture on the environment. A disaggregation to the municipality level, with more than 9,600 administrative units instead of the currently used 316 counties, would tremendously improve the environmental impact analysis. Two sets of data are used for this purpose. The first is geo-referenced data, which is, however, incomplete with respect to its coverage of agricultural production activities. The second is the micro census statistic itself, which has full coverage, but data protection rules (DPR) prohibit its straightforward use. The paper shows how this bottleneck can be passed to obtain a reliable modelling data set at the municipality level with complete coverage of the agricultural sector in Germany. We successfully applied a Bayesian estimator that uses prior information derived from a cluster analysis based on the micro census and GIS information. Our test statistics of the estimation, calculated by the statistical office by comparing our estimates with the real protected data, reveal that the proposed approach adequately estimates most activities and can be used to feed the municipality layer in the RAUMIS modelling system for an extended policy analysis.
    Keywords: Highest Posterior Density estimator (HPD), RAUMIS, downscaling, Research Methods / Statistical Methods, C11, C61, C81, Q15

    Nonseparable sample selection models with censored selection rules: an application to wage decompositions

    We consider identification and estimation of nonseparable sample selection models with censored selection rules. We employ a control function approach and discuss different objects of interest based on (1) local effects conditional on the control function, and (2) global effects obtained from integration over ranges of values of the control function. We provide conditions under which these objects are appropriate for the total population. We also present results regarding the estimation of counterfactual distributions. We derive conditions for identification for these different objects and suggest strategies for estimation. We also provide the associated asymptotic theory. These strategies are illustrated in an empirical investigation of the determinants of female wages and wage growth in the United Kingdom.
    https://arxiv.org/abs/1801.08961

    CONSISTENT ESTIMATION OF LONGITUDINAL CENSORED DEMAND SYSTEMS

    In this paper we derive a joint continuous/censored demand system suitable for the analysis of commodity demand relationships using panel data. Unobserved heterogeneity is controlled for using a correlated random effects specification, and a Generalized Method of Moments framework is used to estimate the model in two stages. While relatively small differences in elasticity estimates are found between a flexible specification and one that restricts the relationship between the random effect and budget shares to be time invariant, larger differences are observed between the most flexible random effects model and a pooled cross sectional estimator. The results suggest that the limited ability of such estimators to control for preference heterogeneity and unit value endogeneity leads to parameter bias.
    Keywords: Research Methods / Statistical Methods

    Pattern, Trend and Determinants of Crop Diversification: Empirical Evidence from Smallholders in Eastern Ethiopia

    Crop diversification is one of the most important risk management strategies. The study investigated the pattern, trend and covariates of crop diversification in eastern Ethiopia based on data collected from 167 randomly and proportionately selected households. In order to manage risks of drought, pests and diseases, soil fertility decline and input price variations, farmers in the study areas employ crop diversification as a self-insuring strategy. The farmers are becoming risk-averse, which has implications for technology adoption. Tobit model results indicated that farmers with more extension contacts and larger livestock holdings are likely to specialize, whereas those who have access to market information and irrigation, own machinery, and hold more farm plots are more likely to diversify. In order to promote crop diversification, providing farm machinery through easy loans and improving access to market information and irrigation should be given attention. The extension system should include risk minimization as a strategy.
    Keywords: crop diversification, risk, risk management strategies, risk-averse, Ethiopia
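The Tobit regression behind these results can be sketched abstractly. The paper's exact specification is not given here; this sketch assumes a type-I Tobit with a diversification index left-censored at zero (fully specialized households), simulated data, and illustrative coefficient values.

```python
import numpy as np
from math import erf, log, pi, sqrt

def tobit_loglik(beta, sigma, X, y):
    """Log-likelihood of a type-I Tobit, left-censored at 0:
    uncensored observations contribute a Normal density,
    censored ones the Normal CDF mass at or below zero."""
    xb = X @ beta
    z = (y - xb) / sigma
    ll = 0.0
    for zi, yi, xbi in zip(z, y, xb):
        if yi > 0:   # observed diversification index
            ll += -log(sigma) - 0.5 * log(2 * pi) - 0.5 * zi**2
        else:        # censored: P(y* <= 0) = Phi(-xb / sigma)
            ll += log(0.5 * (1.0 + erf((-xbi / sigma) / sqrt(2.0))))
    return ll

# Demo on simulated data (seed fixed): the likelihood is higher at the
# data-generating coefficients than at an all-zero null.
rng = np.random.default_rng(0)
X = np.column_stack([np.ones(200), rng.normal(size=200)])
y = np.maximum(X @ np.array([0.5, 1.0]) + rng.normal(size=200), 0.0)
ll_true = tobit_loglik(np.array([0.5, 1.0]), 1.0, X, y)
ll_null = tobit_loglik(np.array([0.0, 0.0]), 1.0, X, y)
print(ll_true > ll_null)
```

In practice one would maximize this likelihood over (beta, sigma); the sketch only evaluates it to show the censoring mechanics.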

    Local Variation as a Statistical Hypothesis Test

    The goal of image oversegmentation is to divide an image into several pieces, each of which should ideally be part of an object. One of the simplest and yet most effective oversegmentation algorithms is known as local variation (LV) (Felzenszwalb and Huttenlocher 2004). In this work, we study this algorithm and show that algorithms similar to LV can be devised by applying different statistical models and decisions, thus providing further theoretical justification and a well-founded explanation for the unexpectedly high performance of the LV approach. Some of these algorithms are based on statistics of natural images and on a hypothesis testing decision; we denote these algorithms probabilistic local variation (pLV). The best pLV algorithm, which relies on censored estimation, presents state-of-the-art results while keeping the same computational complexity as the LV algorithm.
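The classic LV merge predicate that pLV reinterprets as a statistical test can be sketched on a plain weighted graph. This follows the Felzenszwalb-Huttenlocher scheme (edges processed by increasing weight, union-find over components, merge when the connecting edge is no larger than both components' internal variation plus a k/|C| slack); the pLV variants would replace that slack with a hypothesis-test threshold, which is not shown here.

```python
class DSU:
    """Union-find tracking component size and internal variation
    (the largest MST edge merged into the component so far)."""
    def __init__(self, n):
        self.parent = list(range(n))
        self.size = [1] * n
        self.internal = [0.0] * n
    def find(self, x):
        while self.parent[x] != x:
            self.parent[x] = self.parent[self.parent[x]]  # path halving
            x = self.parent[x]
        return x

def local_variation(n, edges, k=1.0):
    """LV segmentation of a graph with n nodes.
    edges: list of (weight, a, b).  Returns a component label per node."""
    dsu = DSU(n)
    for w, a, b in sorted(edges):          # Kruskal-style edge order
        ra, rb = dsu.find(a), dsu.find(b)
        if ra == rb:
            continue
        tau_a = dsu.internal[ra] + k / dsu.size[ra]
        tau_b = dsu.internal[rb] + k / dsu.size[rb]
        if w <= min(tau_a, tau_b):         # LV merge predicate
            dsu.parent[ra] = rb
            dsu.size[rb] += dsu.size[ra]
            dsu.internal[rb] = w           # w is the new max MST edge
    return [dsu.find(i) for i in range(n)]

# Toy graph: two tight pairs joined by one heavy edge stay separate.
labels = local_variation(4, [(0.1, 0, 1), (0.2, 2, 3), (5.0, 1, 2)], k=1.0)
print(labels)
```

The heavy edge (weight 5.0) exceeds both components' thresholds, so nodes {0,1} and {2,3} remain distinct segments, which is the oversegmentation behaviour the abstract describes.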

    Agglomeration, related variety and vertical integration

    Several recent studies investigate the relation between geographic concentration of production and vertical integration, based on the hypothesis that spatial agglomeration of firms in the same industry facilitates input procurement, thereby reducing the degree of vertical integration. The present paper contributes to this debate by also considering the effects of industry variety at the local level. Specifically, we consider two forms of variety: unrelated variety and vertically related variety. The latter index is constructed using information drawn from input-output tables and captures the opportunities for outsourcing within the local system. We consider inter-industry vertical integration by taking account of the ownership of activities with input-output linkages. Using a dataset of 24,663 Italian business groups in 2001, we estimate Tobit models to investigate the influence of vertically related variety and other agglomeration forces on the degree of vertical integration of groups. Our evidence confirms that vertical integration is influenced by industry specialization at the local level. We also find that the higher the vertically related variety, the lower the need for firms to integrate activities, since they have more opportunities to acquire intermediate goods and services within the local system.
    Keywords: vertical integration, agglomeration, related variety, business group