Resource Wealth, Innovation and Growth in the Global Economy
We analyze the relative growth performance of open economies in a two-country model where different endowments of labor and a natural resource generate asymmetric trade. A resource-rich economy trades resource-based intermediates for final manufacturing goods produced by a resource-poor economy. Productivity growth in both countries is driven by endogenous innovations. The effects of a sudden increase in the resource endowment depend crucially on the elasticity of substitution between resources and labor in intermediates' production. Under substitution (complementarity), the resource boom generates higher (lower) resource income, lower (higher) employment in the resource-intensive sector, higher (lower) knowledge creation and faster (slower) growth in the resource-rich economy. The resource-poor economy adjusts to the shock by raising (reducing) the relative wage, and experiences a positive (negative) growth effect that is exclusively due to trade. Keywords: Endogenous Growth, Endogenous Technological Change, Natural Resources, International Trade.
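The knife-edge role of the elasticity of substitution can be made concrete with a CES aggregate of labor and the resource. The sketch below is only an illustration under an assumed functional form and parameters (it is not the paper's two-country model): with sigma > 1 (substitutes) a resource boom raises the resource's competitive income share, with sigma < 1 (complements) it lowers it.

```python
def ces_output(L, R, alpha=0.5, sigma=1.5):
    """CES aggregate Y = (alpha*L^rho + (1-alpha)*R^rho)^(1/rho), rho = 1 - 1/sigma.
    sigma > 1: labor and resource are substitutes; sigma < 1: complements."""
    rho = 1.0 - 1.0 / sigma
    if abs(rho) < 1e-12:                       # Cobb-Douglas limit (sigma -> 1)
        return L**alpha * R**(1.0 - alpha)
    return (alpha * L**rho + (1.0 - alpha) * R**rho) ** (1.0 / rho)

def resource_income_share(L, R, alpha=0.5, sigma=1.5):
    """Competitive income share of the resource, R * (dY/dR) / Y."""
    rho = 1.0 - 1.0 / sigma
    Y = ces_output(L, R, alpha, sigma)
    marginal_product = (1.0 - alpha) * R**(rho - 1.0) * Y**(1.0 - rho)
    return R * marginal_product / Y

# A "resource boom" doubles R; compare substitutes vs complements.
for sigma in (1.5, 0.5):
    before = resource_income_share(L=1.0, R=1.0, sigma=sigma)
    after = resource_income_share(L=1.0, R=2.0, sigma=sigma)
    print(f"sigma={sigma}: resource income share {before:.3f} -> {after:.3f}")
```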
Growth on a Finite Planet: Resources, Technology and Population in the Long Run
We study the interactions between technological change, resource scarcity and population dynamics in a Schumpeterian model with endogenous fertility. There exists a pseudo-Malthusian equilibrium in which population is constant and income grows exponentially: the equilibrium population level is determined by resource scarcity but is independent of technology. The stability properties are driven by (i) the income reaction to increased resource scarcity and (ii) the fertility response to income dynamics. If labor and resources are substitutes in production, income and fertility dynamics are self-balancing and the pseudo-Malthusian equilibrium is the global attractor of the system. If labor and resources are complements, income and fertility dynamics are self-reinforcing and drive the economy towards either demographic explosion or human extinction. Introducing a minimum resource requirement, we obtain a second steady state implying constant population even under complementarity. The standard result of exponential population growth appears as a rather special case of our model. Keywords: Endogenous Innovation, Resource Scarcity, Population Growth, Fertility Choices
Value at risk models in finance
The main objective of this paper is to survey and evaluate the performance of the most popular univariate VaR methodologies, paying particular attention to their underlying assumptions and to their logical flaws. In the process, we show that the Historical Simulation method and its variants can be considered as special cases of the CAViaR framework developed by Engle and Manganelli (1999). We also provide two original methodological contributions. The first one introduces the extreme value theory into the CAViaR model. The second one concerns the estimation of the expected shortfall (the expected loss, given that the return exceeded the VaR) using a regression technique. The performance of the models surveyed in the paper is evaluated using a Monte Carlo simulation. We generate data using GARCH processes with different distributions and compare the estimated quantiles to the true ones. The results show that CAViaR models perform best with heavy-tailed DGP. JEL Classification: C22, G22. Keywords: CAViaR, extreme value theory, Value at Risk
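The Historical Simulation baseline and the expected shortfall definition used in the survey are simple to state in code. The sketch below is a minimal illustration (empirical quantile plus tail average on a toy Student-t sample), not the paper's CAViaR-based regression estimator of the expected shortfall; the confidence level and the simulated sample are assumptions.

```python
import numpy as np

def historical_var_es(returns, alpha=0.01):
    """Historical Simulation VaR and expected shortfall at level alpha.
    VaR is the alpha-quantile of past returns reported as a positive loss;
    ES is the average loss on days when the return breaches that quantile."""
    returns = np.asarray(returns, dtype=float)
    q = np.quantile(returns, alpha)        # left-tail return quantile (negative)
    var = -q
    es = -returns[returns <= q].mean()
    return var, es

# Toy heavy-tailed sample standing in for a real return history.
rng = np.random.default_rng(0)
sample = 0.01 * rng.standard_t(df=4, size=2500)
var, es = historical_var_es(sample, alpha=0.01)
print(f"1% VaR = {var:.4f}, expected shortfall = {es:.4f}")
```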
CAViaR: Conditional Value at Risk by Quantile Regression
Value at Risk has become the standard measure of market risk employed by financial institutions for both internal and regulatory purposes. Despite its conceptual simplicity, its measurement is a very challenging statistical problem and none of the methodologies developed so far give satisfactory solutions. Interpreting Value at Risk as a quantile of future portfolio values conditional on current information, we propose a new approach to quantile estimation which does not require any of the extreme assumptions invoked by existing methodologies (such as normality or i.i.d. returns). The Conditional Value at Risk or CAViaR model moves the focus of attention from the distribution of returns directly to the behavior of the quantile. We postulate a variety of dynamic processes for updating the quantile and use regression quantile estimation to determine the parameters of the updating process. Tests of model adequacy utilize the criterion that each period the probability of exceeding the VaR must be independent of all the past information. We use a differential evolutionary genetic algorithm to optimize an objective function which is non-differentiable and hence cannot be optimized using traditional algorithms. Applications to simulated and real data provide empirical support to our methodology and illustrate the ability of these algorithms to adapt to new risk environments.
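As an illustration of the approach, one of the CAViaR specifications proposed in the paper, the Symmetric Absolute Value process, can be fitted by minimizing the regression-quantile ("tick") objective. The sketch below uses SciPy's differential_evolution as a stand-in for the authors' differential evolutionary genetic algorithm; the warm start, parameter bounds, and toy data are assumptions, not the paper's settings.

```python
import numpy as np
from scipy.optimize import differential_evolution

def sav_caviar(beta, returns, theta=0.01):
    """Symmetric Absolute Value CAViaR recursion for the theta-quantile q_t:
    q_t = b0 + b1*q_{t-1} + b2*|y_{t-1}|."""
    b0, b1, b2 = beta
    q = np.empty_like(returns)
    q[0] = np.quantile(returns[:100], theta)      # warm start on an initial window
    for t in range(1, len(returns)):
        q[t] = b0 + b1 * q[t - 1] + b2 * abs(returns[t - 1])
    return q

def tick_loss(beta, returns, theta=0.01):
    """Regression-quantile ('tick'/pinball) objective minimized over beta."""
    u = returns - sav_caviar(beta, returns, theta)
    return np.mean((theta - (u < 0)) * u)

rng = np.random.default_rng(1)
y = 0.01 * rng.standard_t(df=5, size=2000)        # toy heavy-tailed returns

bounds = [(-0.1, 0.1), (0.0, 1.0), (-1.0, 0.0)]   # illustrative search bounds
result = differential_evolution(tick_loss, bounds, args=(y, 0.01),
                                seed=1, maxiter=50)
print("fitted beta:", result.x, "objective:", result.fun)
```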
Dynamically localized systems: entanglement exponential sensitivity and efficient quantum simulations
We study the pairwise entanglement present in a quantum computer that
simulates a dynamically localized system. We show that the concurrence is
exponentially sensitive to changes in the Hamiltonian of the simulated system.
Moreover, concurrence is exponentially sensitive to the "logic" position of
the qubits chosen. These sensitivities could be experimentally checked
efficiently by means of quantum simulations with less than ten qubits. We also
show that the feasibility of efficient quantum simulations is deeply connected
to the dynamical regime of the simulated system. Comment: 5 pages, 6 figures
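The pairwise entanglement measure studied here, the concurrence, has a closed form for two qubits (the Wootters formula). The sketch below computes it for an arbitrary two-qubit density matrix; it is a generic helper, not the authors' dynamically localized quantum-map simulation, and the test states are just sanity checks.

```python
import numpy as np

def concurrence(rho):
    """Wootters concurrence of a two-qubit density matrix rho (4x4, trace 1)."""
    sy = np.array([[0.0, -1.0j], [1.0j, 0.0]])
    Y = np.kron(sy, sy)
    rho_tilde = Y @ rho.conj() @ Y
    # Eigenvalues of rho @ rho_tilde are real and non-negative in exact arithmetic.
    evals = np.linalg.eigvals(rho @ rho_tilde)
    lam = np.sort(np.sqrt(np.abs(evals.real)))[::-1]
    return max(0.0, lam[0] - lam[1] - lam[2] - lam[3])

# Sanity checks: a Bell state is maximally entangled, a product state is not.
bell = np.array([1.0, 0.0, 0.0, 1.0], dtype=complex) / np.sqrt(2.0)
prod = np.array([1.0, 0.0, 0.0, 0.0], dtype=complex)
print(concurrence(np.outer(bell, bell.conj())))   # ~1.0
print(concurrence(np.outer(prod, prod.conj())))   # ~0.0
```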
unWISE tomography of Planck CMB lensing
CMB lensing tomography, or the cross-correlation between CMB lensing maps and
large-scale structure tracers over a well-defined redshift range, has the
potential to map the amplitude and growth of structure over cosmic time,
provide some of the most stringent tests of gravity, and break important
degeneracies between cosmological parameters. In this work, we use the unWISE
galaxy catalog to provide three samples at median redshifts out to 1.5, fully
spanning the Dark Energy dominated era, together with the most recent Planck
CMB lensing maps. We obtain a highly significant combined cross-correlation
detection over a broad range of scales. We
measure the redshift distribution of unWISE sources by a combination of
cross-matching with the COSMOS photometric catalog and cross-correlation with
BOSS galaxies and quasars and eBOSS quasars. We also show that magnification
bias must be included in our analysis and perform a number of null tests. In a
companion paper, we explore the derived cosmological parameters by modeling the
non-linearities and propagating the redshift distribution uncertainties. Comment: 51 pages, 22 figures. Comments welcome! Revisions reflect version accepted by JCAP.
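A cross-correlation significance of the kind quoted above is commonly estimated by summing the per-multipole signal-to-noise of the lensing-galaxy cross-spectrum under a Gaussian (Knox-type) covariance. The sketch below shows that bookkeeping with toy power-law spectra and noise levels; the spectra, noise, sky fraction and multipole range are placeholders, not the unWISE/Planck values.

```python
import numpy as np

def cross_snr(ell, cl_kg, cl_kk, cl_gg, nl_kk, nl_gg, fsky):
    """Cumulative S/N of a CMB-lensing x galaxy cross-spectrum, assuming a
    Gaussian (Knox-type) per-multipole covariance."""
    var = ((cl_kg**2 + (cl_kk + nl_kk) * (cl_gg + nl_gg))
           / ((2 * ell + 1) * fsky))
    return np.sqrt(np.sum(cl_kg**2 / var))

# Toy power-law spectra standing in for real theory curves and noise.
ell = np.arange(100, 1000)
cl_kg = 1e-8 * (ell / 100.0) ** -2
cl_kk = 1e-7 * (ell / 100.0) ** -2
cl_gg = 1e-6 * (ell / 100.0) ** -1
nl_kk = 2e-7 * np.ones_like(ell, dtype=float)
nl_gg = 1e-7 * np.ones_like(ell, dtype=float)

snr = cross_snr(ell, cl_kg, cl_kk, cl_gg, nl_kk, nl_gg, fsky=0.6)
print(f"toy cumulative S/N ~ {snr:.1f}")
```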
On the Virtual Element Method for Topology Optimization on polygonal meshes: a numerical study
It is well known that the solution of topology optimization problems may be
affected both by the geometric properties of the computational mesh, which can
steer the minimization process towards local (and non-physical) minima, and by
the accuracy of the method employed to discretize the underlying differential
problem, which may not be able to correctly capture the physics of the problem.
In light of the above remarks, in this paper we consider polygonal meshes and
employ the virtual element method (VEM) to solve two classes of paradigmatic
topology optimization problems, one governed by nearly-incompressible and
compressible linear elasticity and the other by Stokes equations. Several
numerical results show the virtues of our polygonal VEM-based approach with
respect to more standard methods.
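For readers unfamiliar with the optimization loop that sits on top of the discretization, the density update most commonly used in compliance-minimization topology optimization is the optimality-criteria step with a volume constraint. The sketch below shows only that generic update on toy sensitivities; the VEM analysis on polygonal meshes that the paper actually studies (and that would supply the sensitivities) is not reproduced here.

```python
import numpy as np

def oc_update(x, dc, volfrac, move=0.2, damping=0.5):
    """One optimality-criteria update for density-based compliance minimization.
    x: element densities in [0,1]; dc: compliance sensitivities (dc <= 0);
    volfrac: target volume fraction. The multiplier is found by bisection."""
    lo, hi = 1e-9, 1e9
    while (hi - lo) / (hi + lo) > 1e-4:
        lmid = 0.5 * (lo + hi)
        x_new = np.clip(x * (-dc / lmid) ** damping,
                        np.maximum(0.0, x - move),
                        np.minimum(1.0, x + move))
        if x_new.mean() > volfrac:   # too much material -> raise the multiplier
            lo = lmid
        else:
            hi = lmid
    return x_new

# Toy sensitivities standing in for those a VEM/FEM analysis would supply.
rng = np.random.default_rng(2)
x = 0.5 * np.ones(200)               # 200 "elements" at the target volume fraction
dc = -rng.random(200)                # fake negative compliance sensitivities
x = oc_update(x, dc, volfrac=0.5)
print("mean density after update:", round(x.mean(), 3))
```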
Discontinuous Galerkin approximation of linear parabolic problems with dynamic boundary conditions
In this paper we propose and analyze a Discontinuous Galerkin method for a
linear parabolic problem with dynamic boundary conditions. We present the
formulation and prove stability and optimal a priori error estimates for the
fully discrete scheme. More precisely, using polynomials of degree p on meshes
with granularity h, along with a backward Euler time-stepping scheme with
time-step Δt, we prove that the fully discrete solution is bounded by the data
and converges, in a suitable (mesh-dependent) energy norm, to the exact
solution with optimal order in both the mesh size and the time step. The
sharpness of the theoretical estimates is verified through several numerical
experiments.
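The backward Euler scheme referred to in the estimates is easy to exercise on a model problem. The sketch below applies it to the 1D heat equation with a standard finite-difference space discretization, a deliberate stand-in for the paper's discontinuous Galerkin method and dynamic boundary conditions, and prints errors that shrink roughly linearly with the time step, consistent with first-order accuracy in Δt.

```python
import numpy as np

def backward_euler_heat(n=100, T=0.1, dt=1e-3):
    """Backward Euler for u_t = u_xx on (0,1) with u(0)=u(1)=0, using a
    second-order finite-difference semi-discretization in space."""
    h = 1.0 / n
    x = np.linspace(0.0, 1.0, n + 1)[1:-1]           # interior nodes
    # Tridiagonal matrix approximating -u_xx.
    A = (np.diag(2.0 * np.ones(n - 1))
         - np.diag(np.ones(n - 2), 1)
         - np.diag(np.ones(n - 2), -1)) / h**2
    M = np.eye(n - 1) + dt * A                        # (I + dt*A) u^{k+1} = u^k
    u = np.sin(np.pi * x)                             # initial condition
    for _ in range(int(round(T / dt))):
        u = np.linalg.solve(M, u)
    exact = np.exp(-np.pi**2 * T) * np.sin(np.pi * x)
    return np.max(np.abs(u - exact))

for dt in (4e-3, 2e-3, 1e-3):                         # error ~ O(dt)
    print(f"dt={dt:.0e}  max error={backward_euler_heat(dt=dt):.2e}")
```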