
    Essays in Macroeconomics and Macroeconometrics

    This thesis contributes to macroeconomics and macroeconometrics. Chapters 2-4 study the role of producer heterogeneity in business cycles and macroeconomic development. Chapters 5-6 provide inference methods for structural vector autoregressions. Chapter 2 examines the role of time to build for business cycles. We document that time to build is volatile and largest during recessions. In a model with producer heterogeneity and capital adjustment frictions, the longer the time to build, the less frequently firms invest and the less firm investment reflects firm productivity. Longer time to build thus worsens the allocation of capital across firms. In the calibrated model, one additional month of time to build lowers GDP by 0.5%. Chapter 3 investigates the role of uncertainty fluctuations. We exploit highly disaggregated industry-level data to study the empirical importance of various transmission channels of uncertainty shocks, and we provide testable implications for the interaction between various frictions and the job-flow responses to uncertainty shocks. Empirically, uncertainty shocks lower job creation and raise job destruction in more than 80% of industries. In line with theory, these responses are significantly magnified by the severity of financial frictions. In contrast, we do not find supportive evidence for other transmission channels. Chapter 4 re-examines the importance of misallocation for macroeconomic development. We ask whether differences in micro-level factor productivities should be understood as the result of frictions in technology choice. We document that the bulk of all productivity differences is persistent and related to highly persistent differences in the capital-labor ratio, which suggests a cost of adjusting this ratio. Indeed, a model with such a friction can explain our findings. At the same time, the loss in productive efficiency from this friction is modest.
Chapter 5 studies structural VAR models that impose equality and/or inequality restrictions on a single shock, e.g., a monetary policy shock. The chapter proposes a computationally convenient algorithm to evaluate the smallest and largest feasible values of the structural impulse response. We further show under which conditions these values are directionally differentiable and propose delta-method inference for the set-identified structural impulse response. We apply our method to set-identify the effect of unconventional monetary policy shocks. In Chapter 6 we study models that impose restrictions on multiple shocks. The projection region is the collection of structural impulse responses compatible with the vectors of reduced-form parameters contained in a Wald ellipsoid. We show that the projection region has both frequentist coverage and robust Bayesian credibility. To address projection conservatism, we propose a feasible calibration algorithm that achieves exact robust Bayesian credibility at the desired credibility level and, additionally, exact frequentist coverage under differentiability assumptions.
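As a rough illustration of the set-identification problem in Chapter 5, the smallest and largest feasible impulse responses under a single sign restriction can be approximated by sampling rotation vectors for a small VAR. Everything below is a made-up bivariate VAR(1) with illustrative numbers; this naive sampling only stands in for the chapter's actual algorithm, which solves the bounds directly.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical reduced-form VAR(1) parameters (illustrative numbers only).
A = np.array([[0.5, 0.1],
              [0.2, 0.4]])        # autoregressive matrix
Sigma = np.array([[1.0, 0.3],
                  [0.3, 0.5]])    # innovation covariance
P = np.linalg.cholesky(Sigma)     # lower-triangular factor of Sigma

def irf(q, horizon):
    """Impulse response of variable 0 to the shock identified by unit vector q."""
    impact = P @ q
    return np.array([(np.linalg.matrix_power(A, h) @ impact)[0]
                     for h in range(horizon + 1)])

# Approximate the smallest/largest feasible response under the sign
# restriction that the shock raises variable 1 on impact.
lo, hi = np.inf, -np.inf
for _ in range(5000):
    q = rng.standard_normal(2)
    q /= np.linalg.norm(q)
    if (P @ q)[1] < 0:
        q = -q                    # flip sign so the restriction holds
    r = irf(q, 4)[2]              # response of variable 0 at horizon 2
    lo, hi = min(lo, r), max(hi, r)

print(lo, hi)                     # approximate bounds of the identified set
```

The interval [lo, hi] approximates the identified set for one impulse-response coordinate; the delta-method inference in the chapter then accounts for sampling uncertainty in the reduced-form parameters, which are treated as known here.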

    Nonlinear exponential autoregressive time series models with conditional heteroskedastic errors with applications to economics and finance

    The analysis of time series has long been a subject of interest in different fields. For decades, time series were analysed with linear models, which have many advantages. Nevertheless, the question has been raised whether other models can explain and forecast real data better than linear ones. In this thesis, new nonlinear time series models are suggested, which combine a nonlinear conditional mean model, such as an ExpAR or an Extended ExpAR, with a nonlinear conditional variance model, such as an ARCH or a GARCH. Since new models are introduced, simulated series from them are presented, to show what characteristics real data should have in order to be explained by these models. In addition, the models are applied to various stationary and nonstationary economic and financial time series and are compared to the classic AR-ARCH and AR-GARCH models in terms of fitting and forecasting. It is shown that, although the AR-ARCH and AR-GARCH models are difficult to beat, the ExpAR and Extended ExpAR models and their special cases, combined with conditionally heteroskedastic errors, can be useful tools for fitting, describing, and forecasting nonlinear behaviour in financial and economic time series, and can provide some improvement over the AR-ARCH and AR-GARCH models in both fitting and forecasting.
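To make the model class concrete, a minimal simulation of an ExpAR(1) conditional mean with ARCH(1) errors can be sketched as follows. The coefficient values are illustrative assumptions, not estimates from the thesis.

```python
import numpy as np

rng = np.random.default_rng(42)

# ExpAR(1) coefficients: the effective AR coefficient is
# phi + pi_ * exp(-gamma * x[t-1]**2), so it varies with the state.
phi, pi_, gamma = 0.6, 0.3, 1.0
# ARCH(1) coefficients (alpha < 1 keeps the variance process stationary).
omega, alpha = 0.1, 0.4

T = 1000
x = np.zeros(T)
eps_prev = 0.0
for t in range(1, T):
    sigma2 = omega + alpha * eps_prev**2           # ARCH(1) conditional variance
    eps = np.sqrt(sigma2) * rng.standard_normal()  # conditionally heteroskedastic error
    # ExpAR(1): the AR coefficient decays toward phi for large |x[t-1]|.
    x[t] = (phi + pi_ * np.exp(-gamma * x[t-1]**2)) * x[t-1] + eps
    eps_prev = eps

print(x.mean(), x.std())
```

Simulated paths like this show the two nonlinearities the thesis combines: state-dependent persistence in the mean and volatility clustering in the errors.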

    Dynamic quantile causal inference and forecasting

    Standard impulse response functions measure the average effect of a shock on a response variable. However, different parts of the distribution of the response variable may react to the shock differently. The first chapter, “Quantile Structural Vector Autoregression”, introduces a framework to measure the dynamic causal effects of shocks on the entire distribution of response variables, not just on the mean. Various identification schemes are considered: short-run and long-run restrictions, external instruments, and their combinations. The asymptotic distribution of the estimators is established. Simulations show our method is robust to heavy tails. Empirical applications reveal causal effects that cannot be captured by the standard approach. For example, the effect of an oil price shock on GDP growth is statistically significant only in the left part of the GDP growth distribution, so a spike in the oil price may cause a recession, but there is no evidence that a drop in the oil price may cause an expansion. Another application reveals that real activity shocks reduce stock market volatility. The second chapter, “Quantile Local Projections: Identification, Smooth Estimation, and Inference”, is devoted to an increasingly popular method to capture heterogeneity of impulse response functions, namely local projections estimated by quantile regression. We study their identification by short-run restrictions, long-run restrictions, and external instruments. To overcome their excessive volatility, we introduce two novel estimators: Smooth Quantile Projections (SQP) and Smooth Quantile Projections with Instruments (SQPI). SQPI inference is valid under weak instruments. We propose information criteria for optimal smoothing and apply the estimators to shocks in financial conditions and monetary policy. We demonstrate that financial conditions affect the entire distribution of future GDP growth, not just its lower part as previously thought.
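The quantile-regression machinery underlying these projections rests on the check (pinball) loss, whose minimizer over a constant is the empirical quantile. A minimal numpy sketch (simulated data, no connection to the thesis's applications):

```python
import numpy as np

def pinball_loss(y, q, tau):
    """Average check (pinball) loss of the constant forecast q at quantile level tau."""
    u = y - q
    return np.mean(np.where(u >= 0, tau * u, (tau - 1) * u))

rng = np.random.default_rng(1)
y = rng.standard_normal(5000)    # stand-in for, e.g., future GDP growth observations

tau = 0.1                        # target the lower 10% of the distribution
grid = np.linspace(y.min(), y.max(), 2001)
losses = [pinball_loss(y, q, tau) for q in grid]
q_hat = grid[int(np.argmin(losses))]

# The minimizer should be close to the empirical 10% quantile.
print(q_hat, np.quantile(y, tau))
```

Quantile local projections replace the constant forecast with a linear function of shocks and controls at each horizon; the smoothing penalties in SQP/SQPI then tie the estimates together across horizons.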
The third chapter, “Smooth Quantile Projections in a Data-Rich Environment”, modifies the estimator from the second chapter to construct distribution forecasts in a setting with potentially many variables. To this end we introduce a novel estimator, Smooth Quantile Projections with Lasso. The estimator involves two penalties: one controls the roughness of the forecasts over forecast horizons, while the other selects the most informative set of predictors. We also introduce information criteria to guide the optimal choice of the two penalties and represent the problem as a linear program in standard form. I gratefully acknowledge funding from the Ministerio de Educación, Cultura y Deporte through its grant Formación de Profesorado Universitario (FPU). Doctoral Program in Economics, Universidad Carlos III de Madrid. Committee: José Olmo Badenas (chair), Victor Emilio Troster (secretary), Mario Alloza Fruto (member).

    Detailed modelling and optimization of crystallization processes

    Advisor: Rubens Maciel Filho. Doctoral thesis, Universidade Estadual de Campinas, Faculdade de Engenharia Química. (The original abstract appears in Portuguese and English; the English version follows.)
    This work focuses on crystallization, a process widely used in industry, especially to obtain high added-value products in the pharmaceutical and fine-chemistry industries. Although the process is in established use, its mechanisms, its modelling, and the real control of its operation still require study. The thesis presents considerations and developments in detailed deterministic modelling of the process and in its optimization with both deterministic and stochastic methods. The modelling is discussed in detail, and the numerical methods developed in the literature for solving the population balance, an integral part of the model, are reviewed with a focus on crystallization processes and on their main advantages and drawbacks. Preliminary studies on improving batch cooling crystallization point to the need to optimize the cooling operating policy.
Since the Sequential Quadratic Programming deterministic optimization method proves inefficient for this problem, the use of a Genetic Algorithm (GA), a stochastic optimization method well established in the literature, is evaluated in the search for the global optimum of this process, in a pioneering application of this optimization technique to crystallization processes. Because the GA requires many runs with different parameter values to increase the probability of reaching the global optimum (or its neighbourhood), an original, general, and relatively simple procedure is developed and proposed to detect the set of algorithm parameters with significant influence on the optimization response. The proposed methodology is applied to general case studies of different complexities and proves very useful in the preliminary GA studies. The procedure is then applied to the cooling-profile optimization problem in a batch cooling crystallization process. The results indicate that deterministic optimization methods struggle with high-dimensional problems, leading to local optima, whereas evolutionary methods can approach the global optimum, albeit slowly. The procedure developed for detecting the significant GA parameters is a relevant contribution of the thesis and can be applied to any optimization problem, of any complexity and dimensionality. Doctorate in Chemical Process Development; Doctor of Chemical Engineering.
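The GA loop the abstract refers to (selection, crossover, mutation over repeated generations) can be sketched minimally. The objective and every parameter value below are illustrative placeholders, not the thesis's crystallization model or its tuned GA settings.

```python
import numpy as np

rng = np.random.default_rng(7)

def fitness(x):
    # Toy objective standing in for a crystallization performance index:
    # maximize -(x - 2)^2, with the global optimum at x = 2.
    return -(x - 2.0) ** 2

# Minimal real-coded genetic algorithm over a scalar decision variable.
pop = rng.uniform(-10, 10, size=50)
for generation in range(100):
    f = fitness(pop)
    # Tournament selection: keep the fitter of two randomly drawn individuals.
    i, j = rng.integers(0, len(pop), size=(2, len(pop)))
    parents = np.where(f[i] > f[j], pop[i], pop[j])
    # Arithmetic crossover between consecutive parents.
    mates = np.roll(parents, 1)
    w = rng.random(len(pop))
    children = w * parents + (1 - w) * mates
    # Gaussian mutation applied with small probability.
    mutate = rng.random(len(pop)) < 0.1
    children = children + mutate * rng.normal(0.0, 0.5, len(pop))
    pop = children

best = pop[np.argmax(fitness(pop))]
print(best)   # should land near the optimum x = 2
```

The thesis's parameter-screening procedure would rerun a loop like this many times while varying settings such as the population size, mutation probability, and mutation scale, then test which of them significantly affect the achieved optimum.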

    Advancing Process Control using Orthonormal Basis Functions

