
    Tail estimates for homogenization theorems in random media

    It is known that a random walk on $\mathbb{Z}^d$ among i.i.d. uniformly elliptic random bond conductances satisfies a central limit theorem. It is also known that approximations of the covariance matrix can be obtained by considering periodic environments. Here we estimate the speed of convergence of this homogenization result. We obtain similar estimates for finite-volume approximations of the effective conductance and of the lowest Dirichlet eigenvalue. A lower bound is also given for the variance of the Green function of a random walk in a random non-negative potential.
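
    As a hedged sketch of the setting (a standard formulation, not quoted from the paper): for i.i.d. conductances $\omega_e \in [\lambda, 1]$ on the edges of $\mathbb{Z}^d$ and the associated random walk $(X_n)$, the central limit theorem says that
    $$ n^{-1/2} X_{\lfloor nt \rfloor} \;\Longrightarrow\; \Sigma^{1/2} B_t \qquad (n \to \infty), $$
    where $B$ is a standard Brownian motion and $\Sigma$ is a deterministic covariance matrix. Periodizing the environment on a torus of side $L$ yields computable approximations $\Sigma_L$; the speed of convergence estimated in the paper concerns how fast such finite-volume approximations approach their homogenized limits.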

    Getting it Right When You Might Be Wrong: The Choice Between Price-Level and Inflation Targeting

    Canada’s 2 percent inflation-targeting program works pretty well – but could targeting the price level work even better, especially when inflation and the price level might not be perfectly observed? Keywords: monetary policy, price-level targeting, inflation targeting, Bank of Canada
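
    A minimal illustration of the distinction (my notation, not the paper's): with $p_t$ the log price level and $\pi_t = p_t - p_{t-1}$, an inflation target aims at
    $$ \mathbb{E}_t[\pi_{t+1}] = \pi^*, $$
    so past misses are bygones, whereas a price-level target aims at a path for the price level,
    $$ \mathbb{E}_t[p_{t+1}] = p_0 + \pi^*(t+1), $$
    so a period of above-target inflation must later be offset by below-target inflation. The paper's question is which rule performs better when $\pi_t$ and $p_t$ are observed with error.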

    Is My Assignment Ready To Be Submitted? Five Essential Questions to Ensure my Work Respects Intellectual Property

    A decision-support tool developed by the Service de développement pédagogique, des programmes et de la recherche of Cégep Marie-Victorin and presented at the 2017 AQPC conference.

    Understanding and Comparing Factor-Based Forecasts

    Forecasting using "diffusion indices" has received a good deal of attention in recent years. The idea is to use the common factors estimated from a large panel of data to help forecast the series of interest. This paper assesses the extent to which the forecasts are influenced by (i) how the factors are estimated and/or (ii) how the forecasts are formulated. We find that for simple data-generating processes and when the dynamic structure of the data is known, no one method stands out as systematically good or bad. All five methods considered have rather similar properties, though some methods are better in long-horizon forecasts, especially when the number of time series observations is small. However, when the dynamic structure is unknown and for more complex dynamics and error structures such as the ones encountered in practice, one method stands out as having smaller forecast errors. This method forecasts the series of interest directly, rather than the common and idiosyncratic components separately, and it leaves the dynamics of the factors unspecified. By imposing fewer constraints and estimating a smaller number of auxiliary parameters, the method appears to be less vulnerable to misspecification, leading to improved forecasts.
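
    A hedged illustration of the "direct" diffusion-index approach singled out above: estimate principal-component factors from a large standardized panel and regress the h-step-ahead target on the current observation and the factors, leaving the factor dynamics unspecified. The following minimal Python sketch is not the authors' code; the simulated data and every name below are illustrative.

```python
# Hedged sketch (not the authors' code): "direct" diffusion-index forecasting.
# Factors are estimated by principal components from a standardized T x N panel,
# and y_{t+h} is regressed on the current y_t and the estimated factors F_t.
import numpy as np

def estimate_factors(X, r):
    """Principal-components factor estimates (T x r) from a T x N panel X."""
    Z = (X - X.mean(axis=0)) / X.std(axis=0)              # standardize each series
    _, eigvec = np.linalg.eigh(Z @ Z.T)                   # T x T eigen-problem
    return np.sqrt(Z.shape[0]) * eigvec[:, ::-1][:, :r]   # top-r factors

def direct_forecast(y, X, r=3, h=1):
    """Forecast y_{T+h} by least squares of y_{t+h} on [1, y_t, F_t]."""
    F = estimate_factors(X, r)
    W = np.column_stack([np.ones_like(y), y, F])          # regressors dated t
    beta, *_ = np.linalg.lstsq(W[:-h], y[h:], rcond=None)
    return W[-1] @ beta                                    # point forecast of y_{T+h}

# Illustrative use on simulated data with a 3-factor structure.
rng = np.random.default_rng(0)
T, N = 200, 100
F_true = rng.standard_normal((T, 3))
X = F_true @ rng.standard_normal((3, N)) + rng.standard_normal((T, N))
y = F_true[:, 0] + 0.2 * rng.standard_normal(T)
print(direct_forecast(y, X, r=3, h=1))
```

    On simulated data like this the sketch returns a point forecast of y_{T+h}; the paper's comparison concerns how such direct forecasts fare against forecasting the common and idiosyncratic components separately.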

    PRESERVING WATER QUALITY IN AGRICULTURE: BIOBED ROTATION TO VERTICAL

    Up to 95% of the contamination of surface water by pesticides comes from on-farm point sources connected with washing and preparation operations. This contamination is a growing concern for the environment and human health. Because of their efficiency, low cost, and simple operation, Biobeds have been recognized as the best tool for treating these pesticide effluents. Assuming a single passage of the effluent through the Biobed followed by release of the percolate, research has focused on the efficiency of depuration after a single percolation. However, to account for unknown hazards such as metabolites and bound residues, local rules in Europe require recycling the effluent until full evaporation, so that nothing is released into the environment. We show that Biobeds managed in this way become waterlogged and no longer eliminate the effluent. This creates a serious risk of direct volatilization or effluent release, and brings increased costs and farmer dissatisfaction or demotivation, thus jeopardizing the development of this solution. Accounting for these new depuration conditions leads to a new Biobed paradigm: optimizing the transpiration of the water rather than the depuration of a single percolation, which implies sharp changes in Biobed form, content, and management. Moreover, the corresponding new system shows better performance, smaller space and maintenance requirements, and improved aesthetics. This is demonstrated in the present study through comparative monitoring of system performance, hydrodynamics, and substrate conditions during use. Keywords: Resource/Energy Economics and Policy

    Are More Data Always Better for Factor Analysis?

    Factors estimated from large macroeconomic panels are being used in an increasing number of applications. However, little is known about how the size and the composition of the data affect the factor estimates. In this paper, we ask whether using more series to extract the factors can make the resulting factors less useful for forecasting; the answer is yes. Such a problem tends to arise when the idiosyncratic errors are cross-correlated. It can also arise if forecasting power is provided by a factor that is dominant in a small dataset but dominated in a larger dataset. In a real-time forecasting exercise, we find that factors extracted from as few as 40 pre-screened series often yield satisfactory or even better results than using all 147 series. Weighting the data by their properties when constructing the factors also leads to improved forecasts. Our simulation analysis is unique in that special attention is paid to cross-correlated idiosyncratic errors, and we also allow the factors to have stronger loadings on some groups of series than others. This allows us to better understand the properties of the principal components estimator in empirical applications.
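
    A hedged sketch of the pre-screening idea mentioned above (illustrative only; the abstract does not spell out the screening rule used): rank the candidate series by their absolute correlation with the target, keep the top k, and extract principal-component factors from that subset rather than from the full panel.

```python
# Hedged sketch: pre-screen a large panel before principal-components factor
# extraction, keeping only the k series most correlated with the target y.
import numpy as np

def prescreen(y, X, k=40):
    """Return the columns of the T x N panel X most correlated with y."""
    Z = (X - X.mean(axis=0)) / X.std(axis=0)
    yz = (y - y.mean()) / y.std()
    corr = np.abs(Z.T @ yz) / len(y)          # |corr(x_i, y)| for each series
    keep = np.argsort(corr)[::-1][:k]         # indices of the top-k series
    return X[:, keep]

def pc_factors(X, r):
    """Principal-components factor estimates (T x r) from a T x N panel X."""
    Z = (X - X.mean(axis=0)) / X.std(axis=0)
    _, eigvec = np.linalg.eigh(Z @ Z.T)
    return np.sqrt(Z.shape[0]) * eigvec[:, ::-1][:, :r]

# Illustrative use: factors from 40 pre-screened series instead of all 147.
rng = np.random.default_rng(1)
T, N = 200, 147
F_true = rng.standard_normal((T, 2))
X = F_true @ rng.standard_normal((2, N)) + rng.standard_normal((T, N))
y = F_true[:, 0] + 0.3 * rng.standard_normal(T)
F_hat = pc_factors(prescreen(y, X, k=40), r=2)
print(F_hat.shape)
```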

    R&D in Markets with Network Externalities

    We study how network externalities affect research and development (R&D) investments by a non-cooperative duopoly that offers compatible products. We find that multiple R&D equilibria may arise when network externalities are nonlinear in the number of consumers. The lowest R&D equilibrium corresponds to the case where network externalities are absent. However, even in the presence of network externalities, firms may be trapped in a low-R&D equilibrium where output, and therefore consumers' valuation of the network size, is low. We derive the conditions under which the highest-R&D equilibrium Pareto dominates. Keywords: Network Externalities
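
    One stylized way to see how nonlinear network externalities can produce multiple equilibria (a textbook-style illustration, not necessarily the authors' model): suppose consumers' willingness to pay includes a network benefit $g(Q^e)$ that increases with expected total output $Q^e$, and let $D(\cdot)$ be demand as a function of the quality-adjusted price. In a fulfilled-expectations equilibrium with $Q^e = Q$, output solves
    $$ Q = D\bigl(p - g(Q)\bigr), $$
    which can have several solutions when $g$ is nonlinear: a low-output equilibrium in which the network is barely valued can coexist with a high-output one. The low-R&D trap described above, where output and hence the valuation of the network are low, has this coordination flavour.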

    Assessing changes in the monetary transmission mechanism: a VAR approach

    Paper for a conference sponsored by the Federal Reserve Bank of New York entitled "Financial Innovation and Monetary Transmission." Keywords: Economic conditions - United States; Monetary policy

    DSGE Models in a Data-Rich Environment.

    Standard practice for the estimation of dynamic stochastic general equilibrium (DSGE) models maintains the assumption that economic variables are properly measured by a single indicator, and that all relevant information for the estimation is summarized by a small number of data series. However, recent empirical research on factor models has shown that information contained in large data sets is relevant for the evolution of important macroeconomic series. This suggests that conventional model estimates and inference based on estimated DSGE models might be distorted. In this paper, we propose an empirical framework for the estimation of DSGE models that exploits the relevant information from a data-rich environment. This framework provides an interpretation of all information contained in a large data set, and in particular of the latent factors, through the lens of a DSGE model. The estimation involves Markov chain Monte Carlo (MCMC) methods. We apply this estimation approach to a state-of-the-art DSGE monetary model. We find evidence of imperfect measurement of the model's theoretical concepts, in particular for inflation. We show that exploiting more information is important for accurate estimation of the model's concepts and shocks, and that it implies different conclusions about key structural parameters and the sources of economic fluctuations. Keywords: DSGE models; Measurement error; Large data sets; Factor models; Forecasting; MCMC; Bayesian estimation
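
    A hedged sketch of the kind of data-rich state-space structure described above (my notation; the authors' exact specification is not given in the abstract): a large panel of indicators $X_t$ is linked to the DSGE model's theoretical concepts $S_t$ through a measurement equation with idiosyncratic errors, while $S_t$ obeys the model's solved transition dynamics,
    $$ X_t = \Lambda S_t + e_t, \qquad S_t = G(\theta)\, S_{t-1} + H(\theta)\, \varepsilon_t, $$
    so that many indicators can load on the same concept (e.g. inflation), the errors $e_t$ capture the imperfect measurement mentioned above, and the loadings $\Lambda$ and structural parameters $\theta$ are estimated jointly by MCMC.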