Optimal welfare-to-work programs
A Welfare-to-Work (WTW) program is a mix of government expenditures on “passive” (unemployment insurance, social assistance) and “active” (job search monitoring, training, wage taxes/subsidies) labor market policies targeted to the unemployed. This paper provides a dynamic principal-agent framework suitable for analyzing the optimal sequence and duration of the different WTW policies, and the dynamic pattern of payments along the unemployment spell and of taxes/subsidies upon re-employment. First, we show that the optimal program endogenously generates an absorbing policy of last resort (that we call “social assistance”) characterized by a constant lifetime payment and no active participation by the agent. Second, human capital depreciation is a necessary condition for policy transitions to be part of an optimal WTW program. Whenever training is not optimally provided, we show that the typical sequence of policies is quite simple: the program starts with standard unemployment insurance, then switches into monitored search and, finally, into social assistance. Only the presence of an optimal training activity may generate richer transition patterns. Third, the optimal benefits are generally decreasing or constant during unemployment, but they must increase after a successful spell of training. In a calibration exercise based on the U.S. labor market and on the evidence from several evaluation studies, we use our model to analyze quantitatively the features of the optimal WTW program for the U.S. economy. With respect to the existing U.S. system, the optimal WTW scheme delivers sizeable welfare gains, by providing more insurance to skilled workers and more incentives to unskilled workers.
Nonlinear VAR: Some Theory and an Application to US GNP and Unemployment
A generalization of the endogenous threshold model is developed by extending this class to a multivariate framework and to cases where the feedback acts at multiple lags. The feedback is specified, following Beaudry and Koop, by a variable which measures the depth of recessions. We give conditions for the ergodicity of the model and prove strong consistency of the maximum likelihood estimator, although the objective function is discontinuous in the threshold parameter. The model is applied to a bivariate VAR of output growth and changes in the unemployment rate for the US economy. The nonlinearity is found to be statistically significant only in the unemployment equation, and it transmits to GNP through the cross-correlation between the series. We also find that, owing to the nonlinear structure, shocks hitting the economy in downturns have lower persistence than those occurring during expansions. Since this dampening effect is stronger for negative than for positive shocks, the feedback from recessions is found to contribute positively to the long-run growth of the economy, and we estimate this contribution to be about 1/6 of the total growth over the sample period. We interpret this result as an empirical validation of those economic theories that model recessions as cleansing times. Finally, we suggest that the state-dependence in persistence is a possible key to interpreting the divergence in the measures of persistence existing in the literature.
Keywords: mathematical analysis; stochastic models; United States; unemployment; production; econometric models; estimation of parameters
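For a sense of the mechanics, the sketch below (with simulated data and purely illustrative parameter choices) constructs the Beaudry–Koop current-depth-of-recession variable and adds it as an extra regressor to a bivariate VAR in output growth and unemployment changes; the paper's actual endogenous threshold specification and maximum likelihood estimator are more involved.

```python
import numpy as np

# Illustrative sketch (not the paper's estimator): build the Beaudry-Koop
# "current depth of recession" (CDR) variable from log output and add it as an
# extra regressor to a bivariate VAR(1) in output growth (dy) and the change in
# the unemployment rate (du), estimated by OLS on simulated data.

def cdr(y):
    """CDR_t = running maximum of log output minus current log output (>= 0)."""
    return np.maximum.accumulate(y) - y

def fit_var_with_cdr(y, u):
    dy, du = np.diff(y), np.diff(u)
    c = cdr(y)[1:]                                   # align with differenced series
    Y = np.column_stack([dy[1:], du[1:]])            # left-hand-side variables
    X = np.column_stack([np.ones(len(dy) - 1),       # constant
                         dy[:-1], du[:-1],           # VAR(1) lags
                         c[:-1]])                    # recession-depth feedback
    coef, *_ = np.linalg.lstsq(X, Y, rcond=None)
    return coef                                      # rows: const, dy lag, du lag, CDR

# Hypothetical data for illustration only:
rng = np.random.default_rng(0)
y = np.cumsum(0.005 + 0.01 * rng.standard_normal(300))   # log GNP
u = 6 + np.cumsum(0.2 * rng.standard_normal(300))         # unemployment rate
print(fit_var_with_cdr(y, u))
```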
Education Decisions, Equilibrium Policies and Wages Dispersion
Keywords: Education; Inequality; Equilibrium; Policy
Vintage capital as an origin of inequalities
Does capital-embodied technological change play an important role in shaping labor market inequalities? This paper addresses the question in a model with vintage capital and search/matching frictions where costly capital investment leads to large heterogeneity in productivity among vacancies in equilibrium. The paper first demonstrates analytically how both technology growth and institutional variables affect equilibrium wage inequality, income shares and unemployment. Next, it applies the model to a quantitative evaluation of capital as an origin of wage inequality: at the current rate of embodied productivity growth, a 10-year vintage differential in capital translates into a 6% wage gap. The model also allows a U.S. – continental Europe comparison: an embodied technological acceleration interacted with different labor market institutions can explain a significant part of the differential rise in unemployment and capital share, and some of the differential dynamics in wage inequality.
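As a rough reading of the quoted figure, a 6% wage gap across a 10-year vintage differential annualizes to about 0.6% lower wages per year of capital-vintage age; the tiny computation below makes the arithmetic explicit (an illustration of the quoted number, not taken from the paper's model).

```python
# A 6% wage gap over a 10-year vintage differential, annualized geometrically:
# (1.06)**(1/10) - 1 is roughly 0.58% per year of vintage age.
print(f"{(1.06 ** (1 / 10) - 1):.4%} per vintage year")
```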
The Demographic Transition in Closed and Open Economies: A Tale of Two Regions
This paper constructs a general equilibrium overlapping generations model to evaluate quantitatively how the demographic transition (falling mortality and fertility rates) affects aggregate variables (wages, the interest rate, output) and inter-generational welfare in closed and open economies. We perform this analysis for two economies calibrated to resemble the North (US and Europe) and Latin America. Our simulations suggest that the demographic transition could have generated income per capita growth of up to 0.5% per year in excess of steady-state growth over the past 50 years in Latin America, and 0.3% in the North.
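To illustrate the channel through which slower population growth raises capital per worker and wages, the sketch below uses a textbook Diamond overlapping generations economy with log utility and Cobb-Douglas production; all functional forms and parameter values are illustrative assumptions, not the paper's calibration.

```python
# Minimal Diamond OLG sketch: lower population growth raises steady-state
# capital per worker and therefore wages. Parameters are illustrative only.

def olg_steady_state(alpha=0.33, beta_annual=0.96, n_annual=0.02, years=30):
    beta = beta_annual ** years                 # discount factor per generation
    n = (1 + n_annual) ** years - 1             # population growth per generation
    s = beta / (1 + beta)                       # savings rate out of wage income (log utility)
    # steady state solves k = s * (1 - alpha) * k**alpha / (1 + n)
    k = (s * (1 - alpha) / (1 + n)) ** (1 / (1 - alpha))
    w = (1 - alpha) * k ** alpha                # wage per worker
    return k, w

for n_annual in (0.02, 0.005):                  # demographic transition: lower fertility
    k, w = olg_steady_state(n_annual=n_annual)
    print(f"annual pop. growth {n_annual:.1%}: k* = {k:.3f}, wage = {w:.3f}")
```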
A Quantitative Study of the Replacement Problem in Frictional Economies
The question of how technological change affects labor markets is a classical one in macroeconomics. A standard framework for addressing this question is the matching model with vintage capital and exogenous technical progress. Within this framework, it has been argued that the impact of technological change on labor market outcomes differs according to the mechanism through which the new technology enters the economy. In particular, it matters whether (1) new capital replaces old capital by destroying the job and displacing the worker (Schumpeterian creative destruction) or old capital can be "upgraded" to the frontier technology (Solowian upgrading); and (2) firms make the technology adoption decision unilaterally (hold-up) or the investment decision is surplus-maximizing (efficient investment). Our main finding is that, for quantitatively reasonable parameter values, the specific details of how technology is introduced and of who decides on investments do not matter for the equilibrium outcomes of our main variables of interest: unemployment, wage inequality, and the labor share. The intuition for this "equivalence result" is that these models yield significantly different implications only if the matching process is very costly and time-consuming, but our calibration shows that this meeting friction is minor.
Keywords: investment-specific technical change; directed technical change; skill premium
Gyrification, cortical and subcortical morphometry in neurofibromatosis type 1: an uneven profile of developmental abnormalities.
Background: Neurofibromatosis type 1 (NF1) is a monogenic disorder associated with cognitive impairments. In order to understand how mutations in the NF1 gene impact brain structure, it is essential to characterize in detail the brain structural abnormalities in patients with NF1. Previous studies have reported contradictory findings and have focused only on volumetric measurements. Here, we investigated the volumes of subcortical structures and the composite dimensions of the cortex through analysis of cortical volume, cortical thickness, cortical surface area and gyrification. Methods: We studied 14 children with NF1 and 14 typically developing children matched for age, gender, IQ and right/left-handedness. Regional subcortical volumes and cortical gyral measurements were obtained using the FreeSurfer software. Between-group differences were evaluated while controlling for the increase in total intracranial volume observed in NF1. Results: Subcortical analysis revealed disproportionately larger thalami, right caudate and middle corpus callosum in patients with NF1. Cortical analyses of volume, thickness and surface area were, however, not indicative of significant alterations in patients. Interestingly, patients with NF1 had significantly lower gyrification indices than typically developing children, primarily in the frontal and temporal lobes, but also affecting the insula, cingulate cortex, parietal and occipital regions. Conclusions: The neuroanatomic abnormalities observed were localized to specific brain regions, indicating that particular areas might constitute selective targets for NF1 gene mutations. Furthermore, the lower gyrification indices were accompanied by a disproportionate increase in brain size without the corresponding increase in folding in patients with NF1. Taken together, these findings suggest that specific neurodevelopmental processes, such as gyrification, are more vulnerable to NF1 dysfunction than others. The identified changes in brain organization are consistent with the patterns of cognitive dysfunction in the NF1 phenotype.
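As an illustration of the kind of between-group comparison described above, the snippet below fits an ANCOVA-style linear model testing for a group difference in a regional volume while controlling for total intracranial volume; the data frame and its values are made-up placeholders, not the study's data.

```python
import pandas as pd
import statsmodels.formula.api as smf

# ANCOVA-style sketch: regress a regional volume on group membership while
# controlling for total intracranial volume (TIV). Illustrative fake data only.
df = pd.DataFrame({
    "group":        ["NF1"] * 5 + ["control"] * 5,
    "thalamus_vol": [16.1, 15.8, 16.4, 15.9, 16.2, 14.9, 15.1, 15.3, 14.8, 15.0],  # cm^3
    "tiv":          [1450, 1420, 1480, 1430, 1460, 1380, 1400, 1410, 1370, 1390],  # cm^3
})
model = smf.ols("thalamus_vol ~ C(group) + tiv", data=df).fit()
print(model.params)   # the C(group) coefficient is the TIV-adjusted group difference
```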
Consistent ranking of multivariate volatility models
A large number of parameterizations have been proposed to model conditional variance dynamics in a multivariate framework. This paper examines the ranking of multivariate volatility models in terms of their ability to forecast out-of-sample conditional variance matrices. We investigate how sensitive the ranking is to alternative statistical loss functions which evaluate the distance between the true covariance matrix and its forecast. The evaluation of multivariate volatility models requires the use of a proxy for the unobservable volatility matrix, which may shift the ranking of the models. Therefore, conditions on the choice of the loss function that preserve the true ranking have to be established. To do this, we extend the conditions defined in Hansen and Lunde (2006) to the multivariate framework. By invoking norm equivalence, we are able to extend the class of loss functions that preserve the true ranking. In a simulation study, we sample data from a continuous-time multivariate diffusion process to illustrate the sensitivity of the ranking to different choices of the loss function and to the quality of the proxy. An application to three foreign exchange rates, where we compare the forecasting performance of 16 multivariate GARCH specifications, is provided.
Keywords: volatility; multivariate GARCH; matrix norm and loss function; norm equivalence
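To make the ranking exercise concrete, the sketch below scores competing covariance-matrix forecasts against a noisy realized-covariance proxy using a squared Frobenius-norm loss and ranks models by average loss; this is a generic illustration of the idea, not the specific loss-function conditions derived in the paper.

```python
import numpy as np

# Rank covariance forecasts by average distance to a noisy proxy of the true
# covariance matrix. Data and model names below are purely illustrative.

def frobenius_loss(proxy, forecast):
    """Squared Frobenius distance between proxy and forecast covariance matrices."""
    d = proxy - forecast
    return np.sum(d * d)

def rank_models(proxies, forecasts_by_model):
    """proxies: list of NxN proxy matrices; forecasts_by_model: dict name -> list of NxN."""
    avg_loss = {name: np.mean([frobenius_loss(p, f) for p, f in zip(proxies, fs)])
                for name, fs in forecasts_by_model.items()}
    return sorted(avg_loss.items(), key=lambda kv: kv[1])   # lowest average loss first

# Hypothetical usage with two competing forecasts of a 3x3 covariance matrix:
rng = np.random.default_rng(1)
true_cov = np.eye(3)
proxies = [(m + m.T) / 2 for m in
           (true_cov + 0.05 * rng.standard_normal((3, 3)) for _ in range(100))]
forecasts = {"model_A": [true_cov] * 100,        # unbiased forecast
             "model_B": [1.2 * true_cov] * 100}  # inflated forecast
print(rank_models(proxies, forecasts))
```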
Joint-search theory: new opportunities and new frictions
Search theory routinely assumes that decisions about the acceptance/rejection of job offers (and, hence, about labor market movements between jobs or across employment states) are made by individuals acting in isolation. In reality, the vast majority of workers are somewhat tied to their partners - in couples and families - and decisions are made jointly. This paper studies, from a theoretical viewpoint, the joint job-search and location problem of a household formed by a couple (e.g., husband and wife) that perfectly pools income. The objective of the exercise, very much in the spirit of standard search theory, is to characterize the reservation wage behavior of the couple and compare it to the single-agent search model in order to understand the ramifications of partnerships for individual labor market outcomes and wage dynamics. We focus on two main cases. First, when couples are risk averse and pool income, joint search yields new opportunities - similar to on-the-job search - relative to single-agent search. Second, when the two spouses in a couple face job offers from multiple locations and a cost of living apart, joint search features new frictions and can lead to significantly worse outcomes than single-agent search.
Keywords: search theory; unemployment; wages
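For reference, the sketch below solves the single-agent benchmark against which the couple's problem is compared: a McCall-style search model whose reservation wage is obtained by iterating on the value of being unemployed. Parameter values and the offer distribution are illustrative assumptions, not the paper's.

```python
import numpy as np

# Single-agent McCall search benchmark: the reservation wage solves
# v_u = b + beta * E[max(w / (1 - beta), v_u)], and equals (1 - beta) * v_u.

def reservation_wage(wages, probs, benefit=0.4, beta=0.99):
    v_u = benefit / (1 - beta)                   # initial guess for unemployment value
    for _ in range(10_000):                      # fixed-point (contraction) iteration
        v_new = benefit + beta * np.sum(probs * np.maximum(wages / (1 - beta), v_u))
        if abs(v_new - v_u) < 1e-10:
            break
        v_u = v_new
    return (1 - beta) * v_u                      # wage making the agent indifferent

wages = np.linspace(0.5, 1.5, 50)                # hypothetical wage-offer grid
probs = np.full(50, 1 / 50)                      # uniform offer distribution
print(f"single-agent reservation wage: {reservation_wage(wages, probs):.3f}")
```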