
    Panel Smooth Transition Regression Models

    Get PDF
    We develop a non-dynamic panel smooth transition regression model with fixed individual effects. The model is useful for describing heterogeneous panels, with regression coefficients that vary across individuals and over time. Heterogeneity is allowed for by assuming that these coefficients are continuous functions of an observable variable through a bounded function of this variable, and fluctuate between a limited number (often two) of “extreme regimes”. The model can be viewed as a generalization of the threshold panel model of Hansen (1999). We extend the modelling strategy for univariate smooth transition regression models to the panel context. This comprises model specification based on homogeneity tests, parameter estimation, and diagnostic checking, including tests for parameter constancy and no remaining nonlinearity. The new model is applied to describe firms' investment decisions in the presence of capital market imperfections. Keywords: financial constraints; heterogeneous panel; investment; misspecification test; nonlinear modelling; panel data; smooth transition model
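
    As an illustration of the mechanics described above, the following is a minimal Python sketch of a two-regime logistic PSTR: for each candidate pair of transition parameters (slope gamma and location c), fixed effects are removed by a within-transformation and the regime coefficients are estimated by pooled OLS; the pair minimizing the sum of squared residuals is retained. The single regressor, the logistic transition, and the grid search are illustrative assumptions, not the authors' exact specification or estimation procedure.

```python
import numpy as np

def logistic_transition(q, gamma, c):
    """Bounded transition function g(q; gamma, c) in [0, 1]."""
    return 1.0 / (1.0 + np.exp(-gamma * (q - c)))

def pstr_ssr(y, x, q, gamma, c):
    """Concentrated sum of squared residuals for fixed (gamma, c).

    y, x, q are (N, T) arrays; individual fixed effects are removed by
    a within-transformation before pooled OLS on [x, x * g].
    """
    g = logistic_transition(q, gamma, c)
    z = np.stack([x, x * g], axis=-1)            # (N, T, 2) regressors
    y_w = y - y.mean(axis=1, keepdims=True)      # demean over time
    z_w = z - z.mean(axis=1, keepdims=True)
    Z, Y = z_w.reshape(-1, 2), y_w.reshape(-1)
    beta, *_ = np.linalg.lstsq(Z, Y, rcond=None)
    resid = Y - Z @ beta
    return resid @ resid

# Simulate a panel whose slope moves between two regimes with q
rng = np.random.default_rng(0)
N, T = 50, 20
q, x = rng.normal(size=(N, T)), rng.normal(size=(N, T))
mu = rng.normal(size=(N, 1))                     # fixed effects
y = (mu + (0.5 + logistic_transition(q, 4.0, 0.0)) * x
     + 0.1 * rng.normal(size=(N, T)))

# Grid search over the nonlinear parameters (gamma, c)
ssr, gamma, c = min((pstr_ssr(y, x, q, g_, c_), g_, c_)
                    for g_ in (0.5, 1.0, 2.0, 4.0, 8.0)
                    for c_ in np.linspace(-1, 1, 21))
print(f"estimated gamma={gamma}, c={c:.2f}, SSR={ssr:.2f}")
```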

    A Detail Based Method for Linear Full Reference Image Quality Prediction

    Full text link
    In this paper, a novel Full Reference method is proposed for image quality assessment, using the combination of two separate metrics to measure the perceptually distinct impact of detail losses and of spurious details. To this purpose, the gradient of the impaired image is locally decomposed as a predicted version of the original gradient, plus a gradient residual. It is assumed that the detail attenuation identifies the detail loss, whereas the gradient residuals describe the spurious details. It turns out that the perceptual impact of detail losses is roughly linear in the loss of positional Fisher information, while the perceptual impact of the spurious details is roughly proportional to a logarithmic measure of the signal-to-residual ratio. The affine combination of these two metrics forms a new index strongly correlated with the empirical Differential Mean Opinion Score (DMOS) for a significant class of image impairments, as verified on three independent popular databases. The method allowed alignment and merging of DMOS data coming from these different databases to a common DMOS scale by affine transformations. Unexpectedly, the DMOS scale setting is possible by the analysis of a single image affected by additive noise. Comment: 15 pages, 9 figures. Copyright notice: The paper has been accepted for publication in the IEEE Trans. on Image Processing on 19/09/2017 and the copyright has been transferred to the IEEE.
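
    A toy Python sketch of the two-term structure may help fix ideas: the impaired gradient is split into a scaled copy of the reference gradient (the "predicted" part) plus a residual, a detail-loss term is computed from the attenuation of the predicted part, and a spurious-detail term from a log signal-to-residual ratio. The global least-squares gain, the gradient-energy stand-in for positional Fisher information, and the uncalibrated affine weights a, b, c are all simplifying assumptions; the paper's decomposition is local and its weights are fitted to DMOS data.

```python
import numpy as np

def detail_based_index(ref, imp, a=1.0, b=1.0, c=0.0, eps=1e-12):
    """Illustrative two-term quality index on grayscale float images."""
    ry, rx = np.gradient(ref.astype(float))
    iy, ix = np.gradient(imp.astype(float))
    r = np.stack([ry, rx])
    i = np.stack([iy, ix])
    # "Predicted" gradient: least-squares scaled copy of the reference
    k = (r * i).sum() / ((r * r).sum() + eps)
    predicted, residual = k * r, i - k * r
    # Detail-loss term: attenuation of gradient energy (a crude
    # stand-in for the loss of positional Fisher information)
    loss = 1.0 - (predicted ** 2).sum() / ((r ** 2).sum() + eps)
    # Spurious-detail term: log signal-to-residual ratio in dB
    srr = 10.0 * np.log10((predicted ** 2).sum()
                          / ((residual ** 2).sum() + eps))
    # Affine combination; a, b, c would be calibrated against DMOS
    return a * loss - b * srr + c

rng = np.random.default_rng(0)
ref = rng.uniform(size=(64, 64))
blurred = 0.5 * (np.roll(ref, 1, axis=0) + ref)   # mild detail loss
print(detail_based_index(ref, blurred))
```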

    Fuel subsidies versus market power : is there a countervailing second-best optimum?

    Get PDF
    Fuel subsidies distort end-use prices below cost, resulting in overconsumption and large environmental costs. On the other hand, the mark-up over cost due to the exercise of market power results in a social loss of consumer surplus. We open a new line of inquiry into the potential for a market-based solution from these two countervailing forces: can the two offsetting distortions conceivably achieve a second-best optimum? Relying on dynamic panel techniques and gasoline market data for 68 developing countries, we uncover an excessive second-best subsidy offset to the market power mark-up on the order of 4.5. Our results indicate that the potential for policy failure strongly exceeds the potential for market failure in our model, and gasoline prices across our sample may not be aligned with vigorous anti-climate-change policy.
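
    To see what an offset of 4.5 means in back-of-the-envelope terms, consider the following arithmetic sketch (the numbers are invented for illustration, not estimates from the paper): with unit cost, a mark-up m above cost and a subsidy s below cost give a retail price p = cost + m - s; a second-best offset of exactly 1 would set s = m, whereas an offset of about 4.5 implies s ≈ 4.5m and a price far below cost.

```python
# Purely illustrative numbers, not estimates from the paper
cost, markup = 1.00, 0.10
offset_ratio = 4.5                  # subsidy offset to the mark-up
subsidy = offset_ratio * markup     # 0.45
price = cost + markup - subsidy     # 1.00 + 0.10 - 0.45 = 0.65
print(f"retail price {price:.2f} vs. cost {cost:.2f}")
```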

    Distributed Detection and Estimation in Wireless Sensor Networks

    Full text link
    In this article we consider the problems of distributed detection and estimation in wireless sensor networks. In the first part, we provide a general framework aimed at showing how an efficient design of a sensor network requires a joint organization of in-network processing and communication. Then, we recall the basic features of consensus algorithms, a basic tool for reaching globally optimal decisions through a distributed approach. The main part of the paper then addresses the distributed estimation problem. We first show an entirely decentralized approach, where observations and estimations are performed without the intervention of a fusion center. Then, we consider the case where the estimation is performed at a fusion center, showing how to allocate quantization bits and transmit powers in the links between the nodes and the fusion center in order to accommodate the requirement on the maximum estimation variance, under a constraint on the global transmit power. We extend the approach to the detection problem. Also in this case, we consider the distributed approach, where every node can achieve a globally optimal decision, and the case where the decision is taken at a central node. In the latter case, we show how to allocate coding bits and transmit power in order to maximize the detection probability, under constraints on the false alarm rate and the global transmit power. Then, we generalize consensus algorithms by illustrating a distributed procedure that converges to the projection of the observation vector onto a signal subspace. We then address the issue of energy consumption in sensor networks, showing how to optimize the network topology in order to minimize the energy necessary to achieve a global consensus. Finally, we address the problem of matching the topology of the network to the graph describing the statistical dependencies among the observed variables. Comment: 92 pages, 24 figures. To appear in E-Reference Signal Processing, R. Chellapa and S. Theodoridis, Eds., Elsevier, 201
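
    The consensus step mentioned above is easy to sketch. Below is a minimal Python implementation of average consensus with Metropolis weights on an undirected graph: each node repeatedly averages its state with its neighbours', and on a connected graph all states converge to the global mean, which is why purely local exchanges can reach a globally optimal decision without a fusion center. The ring topology and the weight choice are illustrative assumptions.

```python
import numpy as np

def metropolis_weights(adj):
    """Doubly stochastic weights from an undirected adjacency matrix."""
    n = adj.shape[0]
    deg = adj.sum(axis=1)
    W = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            if adj[i, j]:
                W[i, j] = 1.0 / (1 + max(deg[i], deg[j]))
        W[i, i] = 1.0 - W[i].sum()
    return W

def average_consensus(x0, adj, iters=200):
    """Iterate x <- W x; on a connected graph every node's state
    converges to the global average of the initial measurements."""
    W = metropolis_weights(adj)
    x = x0.copy()
    for _ in range(iters):
        x = W @ x
    return x

# Ring of 6 sensors, each holding a noisy local measurement
adj = np.roll(np.eye(6), 1, axis=1) + np.roll(np.eye(6), -1, axis=1)
x0 = np.array([1.0, 2.0, 0.5, 1.5, 2.5, 1.0])
print(average_consensus(x0, adj), "target:", x0.mean())
```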

    Nonparametric covariate-adjusted regression

    Full text link
    We consider nonparametric estimation of a regression curve when the data are observed with multiplicative distortion that depends on an observed confounding variable. We suggest several estimators, ranging from a relatively simple one that relies on restrictive assumptions usually made in the literature, to a sophisticated piecewise approach that involves reconstructing a smooth curve from an estimator of a constant multiple of its absolute value, and which can be applied in much more general scenarios. We show that, although our nonparametric estimators are constructed from predictors of the unobserved undistorted data, they have the same first-order asymptotic properties as the standard estimators that could be computed if the undistorted data were available. We illustrate the good numerical performance of our methods on both simulated and real datasets. Comment: 32 pages, 4 figures
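
    The "relatively simple" estimator the abstract alludes to can be sketched under the standard restrictive assumptions: the observed response is y* = φ(U)·y with E[φ(U)] = 1 and E[y] ≠ 0, so smoothing y* against U recovers φ up to the constant E[y]. The Nadaraya-Watson smoother, the bandwidth, and the simulated distortion below are illustrative choices, not the authors' construction.

```python
import numpy as np

def nw_smooth(x_eval, x, y, h):
    """Nadaraya-Watson estimator with a Gaussian kernel."""
    w = np.exp(-0.5 * ((x_eval[:, None] - x[None, :]) / h) ** 2)
    return (w * y).sum(axis=1) / w.sum(axis=1)

def covariate_adjust(y_obs, u, h=0.1):
    """Undistort y_obs = phi(u) * y when E[phi(U)] = 1 and E[y] != 0:
    smoothing y_obs against u estimates phi(u) * E[y], which is divided
    out and rescaled by the overall mean (an estimate of E[y])."""
    m = nw_smooth(u, u, y_obs, h)
    return y_obs * y_obs.mean() / m

rng = np.random.default_rng(1)
n = 500
u = rng.uniform(0.0, 1.0, n)                 # confounder
x = rng.uniform(-1.0, 1.0, n)                # regressor
y = 2.0 + np.sin(np.pi * x) + 0.1 * rng.normal(size=n)
phi = 1.0 + 0.5 * (u - 0.5)                  # distortion with mean 1
y_hat = covariate_adjust(phi * y, u)
# y_hat can now enter any standard nonparametric regression on x
print("corr(y_hat, y) =", np.corrcoef(y_hat, y)[0, 1].round(3))
```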

    Fast Genome-Wide QTL Association Mapping on Pedigree and Population Data

    Full text link
    Since most analysis software for genome-wide association studies (GWAS) currently exploits only unrelated individuals, there is a need for efficient applications that can handle general pedigree data or mixtures of population and pedigree data. Even data sets thought to consist of only unrelated individuals may include cryptic relationships that can lead to false positives if not discovered and controlled for. In addition, family designs possess compelling advantages. They are better equipped to detect rare variants, control for population stratification, and facilitate the study of parent-of-origin effects. Pedigrees selected for extreme trait values often segregate a single gene with strong effect. Finally, many pedigrees are available as an important legacy from the era of linkage analysis. Unfortunately, pedigree likelihoods are notoriously hard to compute. In this paper we re-examine the computational bottlenecks and implement ultra-fast pedigree-based GWAS analysis. Kinship coefficients can either be based on explicitly provided pedigrees or be automatically estimated from dense markers. Our strategy (a) works for random sample data, pedigree data, or a mix of both; (b) entails no loss of power; (c) allows for any number of covariate adjustments, including correction for population stratification; (d) allows for testing SNPs under additive, dominant, and recessive models; and (e) accommodates both univariate and multivariate quantitative traits. On a typical personal computer (6 CPU cores at 2.67 GHz), analyzing a univariate HDL (high-density lipoprotein) trait from the San Antonio Family Heart Study (935,392 SNPs on 1357 individuals in 124 pedigrees) takes less than 2 minutes and 1.5 GB of memory. Complete multivariate QTL analysis of the three time points of the longitudinal HDL multivariate trait takes less than 5 minutes and 1.5 GB of memory.
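
    The marker-based kinship estimation mentioned above is commonly done with a genetic relationship matrix (GRM). The following Python sketch computes a GCTA-style GRM from a 0/1/2 genotype matrix; this particular estimator and the scaling convention (kinship is often taken as GRM/2) are common-practice assumptions, not necessarily the exact formula used by the paper's software.

```python
import numpy as np

def genetic_relationship_matrix(G):
    """GCTA-style GRM from a genotype matrix G of shape
    (individuals, SNPs) with entries 0/1/2 counting minor alleles:
    centre each SNP by 2p and scale by sqrt(2p(1-p)), then Z Z'/m."""
    p = G.mean(axis=0) / 2.0                 # allele frequencies
    sd = np.sqrt(2.0 * p * (1.0 - p))
    keep = sd > 0                            # drop monomorphic SNPs
    Z = (G[:, keep] - 2.0 * p[keep]) / sd[keep]
    return Z @ Z.T / keep.sum()

rng = np.random.default_rng(2)
G = rng.binomial(2, 0.3, size=(4, 1000))     # 4 individuals, 1000 SNPs
K = genetic_relationship_matrix(G)
print(np.round(K, 2))  # diagonal near 1 for unrelated individuals
```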

    A range unit root test

    Get PDF
    Since the seminal paper by Dickey and Fuller in 1979, unit-root tests have conditioned the standard approaches to analysing time series with strong serial dependence, the focus being placed on the detection of eventual unit roots in an autoregressive model fitted to the series. In this paper we propose a completely different method to test for the type of long-wave patterns observed not only in unit-root time series but also in series following more complex data-generating mechanisms. To this end, our testing device analyses the trend exhibited by the data, without imposing any constraint on the generating mechanism. We call our device the Range Unit Root (RUR) test since it is constructed from the running ranges of the series. These statistics allow a more general characterization of strong serial dependence in the mean behavior, thus endowing our test with a number of desirable properties, among them an error-model-free asymptotic distribution, invariance to nonlinear monotonic transformations of the series, and robustness to the presence of level shifts and additive outliers. In addition, the RUR test outperforms standard unit root tests in power on near-unit-root stationary time series and is asymptotically immune to noise.
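
    A minimal Python sketch of the running-range idea: the running range R_t = max(x_1,...,x_t) - min(x_1,...,x_t) increases only when x_t sets a new extremum; such expansions die out quickly for a stationary series but keep occurring under a unit root, and they are unchanged by any increasing monotonic transformation of the series. The 1/sqrt(T) scaling below is shown for flavour only; the exact statistic and its critical values are those tabulated in the paper.

```python
import numpy as np

def rur_statistic(x):
    """Scaled count of running-range expansions.

    R_t = max(x_1..x_t) - min(x_1..x_t) jumps only when x_t sets a
    new extremum.  The 1/sqrt(T) scaling is illustrative; the exact
    statistic and critical values are those given in the paper.
    """
    T = len(x)
    R = np.maximum.accumulate(x) - np.minimum.accumulate(x)
    jumps = np.count_nonzero(np.diff(R) > 0)
    return jumps / np.sqrt(T)

rng = np.random.default_rng(3)
e = rng.normal(size=2000)
print("random walk:", rur_statistic(np.cumsum(e)))  # many expansions
print("white noise:", rur_statistic(e))             # few expansions
```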
