The Microeconomics of Poverty Traps in Mexico
Macroeconomists, development scholars, and policy makers have long recognized the importance of poverty traps as a major cause of persistent inequality and a serious limitation to growth. A poverty trap may be defined as a threshold level below which individuals or households will not increase their well-being regardless of the conditions of the economy. While the importance of poverty traps is widely accepted, their microfoundations (the rationality behind them) are not well understood. In the Mexican setting, this paper contributes in two ways. First, we assume that income depends on the capital (both physical and human) that a household possesses. Hence, if a household is poor and unable to accumulate capital, it will remain poor (unless there is a sudden increase in the returns to its existing capital), and a poverty trap is generated. Following Chavas (2004, 2005), we explicitly model the preferences, consumption, and physical and human capital accumulation of Mexican households. We argue that the typical dynamic model with additive utilities and constant discount rates cannot capture poverty traps: because survival motives are involved, endogenous discounting is needed. Second, employing the same model, we test the impact of the Mexican government's most important social policy program (Progresa-Oportunidades) on alleviating poverty traps. For households with children, the program provides funds conditional on the children attending school, which effectively pushes participants to increase their human capital. A comparison between participating households and non-participants should shed some light on the effectiveness of the program and on the sensitivity of persistent poverty to cash transfers.
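The first contribution lends itself to a compact numerical illustration. The sketch below (Python; the functional forms and parameter values are purely illustrative assumptions, not the paper's estimated model) iterates a household's value function when the discount factor rises with consumption, which is the endogenous-discounting ingredient the abstract argues is needed for a poverty trap.

```python
import numpy as np

# Toy illustration (not the paper's model): a household with capital k produces
# f(k) = A * k**alpha, splits output between consumption c and next-period
# capital, and discounts the future with a factor beta(c) that rises with
# consumption (the survival motive). All parameters here are assumptions.

A, alpha = 1.2, 0.5
k_grid = np.linspace(0.05, 4.0, 200)

def u(c):
    return np.log(c)

def beta(c):
    # endogenous discounting: near-subsistence households are very impatient
    return 0.95 / (1.0 + np.exp(-5.0 * (c - 0.3)))

V = np.zeros_like(k_grid)
policy = np.zeros_like(k_grid)
for _ in range(400):                               # value-function iteration
    V_new = np.empty_like(V)
    for i, k in enumerate(k_grid):
        y = A * k ** alpha
        c = np.linspace(1e-3, y - 1e-3, 200)       # feasible consumption levels
        k_next = y - c
        vals = u(c) + beta(c) * np.interp(k_next, k_grid, V)
        j = int(np.argmax(vals))
        V_new[i], policy[i] = vals[j], k_next[j]
    V = V_new

# Steady states are approximate fixed points of the capital policy k' = g(k);
# with beta increasing in c, a low "trap" fixed point can coexist with a higher one.
steady = k_grid[np.isclose(policy, k_grid, atol=0.03)]
print("approximate steady-state capital levels:", np.round(steady, 2))
```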
Exhaustive enumeration unveils clustering and freezing in random 3-SAT
We study geometrical properties of the complete set of solutions of the
random 3-satisfiability problem. We show that even for moderate system sizes
the number of clusters corresponds surprisingly well with the theoretic
asymptotic prediction. We locate the freezing transition in the space of
solutions which has been conjectured to be relevant in explaining the onset of
computational hardness in random constraint satisfaction problems.
Comment: 4 pages, 3 figures
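For concreteness, a minimal sketch of the exhaustive-enumeration approach (small instances only, since the full solution set is listed; the instance size and clause density below are illustrative assumptions): generate a random 3-SAT formula, enumerate every satisfying assignment, group solutions into clusters connected by single-variable flips, and count the variables that are frozen within each cluster.

```python
import itertools
import random

# Enumerate all solutions of a small random 3-SAT instance, cluster them by
# Hamming-distance-1 connectivity, and flag frozen variables per cluster.

def random_3sat(n, m, rng):
    clauses = []
    for _ in range(m):
        variables = rng.sample(range(n), 3)
        clauses.append([(v, rng.random() < 0.5) for v in variables])  # (var, negated)
    return clauses

def satisfies(assign, clauses):
    # a clause is satisfied if at least one of its literals evaluates to True
    return all(any(assign[v] != neg for v, neg in cl) for cl in clauses)

def clusters(solutions):
    sol_set, seen, comps = set(solutions), set(), []
    for s in solutions:
        if s in seen:
            continue
        comp, stack = [], [s]
        seen.add(s)
        while stack:
            cur = stack.pop()
            comp.append(cur)
            for i in range(len(cur)):                 # neighbours: flip one variable
                nb = cur[:i] + (1 - cur[i],) + cur[i + 1:]
                if nb in sol_set and nb not in seen:
                    seen.add(nb)
                    stack.append(nb)
        comps.append(comp)
    return comps

rng = random.Random(0)
n, alpha = 18, 4.0                                    # alpha = m/n, clause density
formula = random_3sat(n, int(alpha * n), rng)
sols = [a for a in itertools.product((0, 1), repeat=n) if satisfies(a, formula)]
print(f"{len(sols)} solutions")
for comp in clusters(sols):
    frozen = [i for i in range(n) if len({s[i] for s in comp}) == 1]
    print(f"cluster of {len(comp)} solutions, {len(frozen)} frozen variables")
```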
Survey-propagation decimation through distributed local computations
We discuss the implementation of two distributed solvers of the random K-SAT
problem, based on some development of the recently introduced
survey-propagation (SP) algorithm. The first solver, called the "SP diffusion
algorithm", diffuses as dynamical information the maximum bias over the system,
so that variable nodes can decide to freeze in a self-organized way, each
variable making its decision on the basis of purely local information. The
second solver, called the "SP reinforcement algorithm", makes use of
time-dependent external forcing messages on each variable, which let the
variables get completely polarized in the direction of a solution at the end of
a single convergence. Both methods allow us to find a solution of the random
3-SAT problem in a range of parameters comparable with the best previously
described serialized solvers. The simulated time of convergence towards a
solution (if these solvers were implemented on a distributed device) grows as
log(N).
Comment: 18 pages, 10 figures
Phase Transitions and Computational Difficulty in Random Constraint Satisfaction Problems
We review the understanding of the random constraint satisfaction problems,
focusing on the q-coloring of large random graphs, that has been achieved using
the cavity method of the physicists. We also discuss the properties of the
phase diagram in temperature, the connections with the glass transition
phenomenology in physics, and the related algorithmic issues.
Comment: 10 pages, Proceedings of the International Workshop on Statistical-Mechanical Informatics 2007, Kyoto (Japan), September 16-19, 2007
On the cavity method for decimated random constraint satisfaction problems and the analysis of belief propagation guided decimation algorithms
We introduce a version of the cavity method for diluted mean-field spin
models that allows the computation of thermodynamic quantities similar to the
Franz-Parisi quenched potential in sparse random graph models. This method is
developed in the particular case of partially decimated random constraint
satisfaction problems. This allows us to develop a theoretical understanding of a
class of algorithms for solving constraint satisfaction problems, in which
elementary degrees of freedom are sequentially assigned according to the
results of a message passing procedure (belief propagation). We compare this
theoretical analysis with the results of extensive numerical simulations.
Comment: 32 pages, 24 figures
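A stripped-down sketch of belief-propagation guided decimation, illustrated on random-graph q-coloring rather than the decimated ensembles analysed here (graph size, average degree, q, damping and the sweep schedule are assumptions): run BP, permanently fix the most strongly biased free variable to its most likely value, and repeat until every variable is assigned.

```python
import random
import numpy as np

# BP-guided decimation on random-graph q-coloring. All sizes and schedules
# below are illustrative choices.

def random_graph(n, avg_deg, rng):
    adj = [[] for _ in range(n)]
    edges = set()
    while len(edges) < int(avg_deg * n / 2):
        i, j = rng.randrange(n), rng.randrange(n)
        if i != j and (min(i, j), max(i, j)) not in edges:
            edges.add((min(i, j), max(i, j)))
            adj[i].append(j)
            adj[j].append(i)
    return adj

def bp_guided_decimation(adj, q=3, sweeps=20, damping=0.5):
    n = len(adj)
    # nu[i][j][c]: BP estimate of P(node i takes color c) in the absence of j
    nu = {i: {j: np.ones(q) / q for j in adj[i]} for i in range(n)}
    fixed = {}

    def cavity(i, exclude=None):
        prod = np.ones(q)
        for k in adj[i]:
            if k != exclude:
                prod = prod * (1.0 - nu[k][i])        # neighbor k must avoid color c
        return prod if prod.sum() > 0 else np.ones(q)  # reset on contradiction

    while len(fixed) < n:
        for _ in range(sweeps):                        # BP sweeps over free variables
            for i in range(n):
                if i in fixed:
                    continue
                for j in adj[i]:
                    new = cavity(i, j)
                    new = new / new.sum()
                    nu[i][j] = damping * nu[i][j] + (1.0 - damping) * new
        # decimation step: freeze the most strongly biased free variable
        best, best_bias, best_color = None, -1.0, 0
        for i in range(n):
            if i in fixed:
                continue
            b = cavity(i)
            b = b / b.sum()
            if b.max() - 1.0 / q > best_bias:
                best, best_bias, best_color = i, b.max() - 1.0 / q, int(b.argmax())
        fixed[best] = best_color
        nu[best] = {j: np.eye(q)[best_color] for j in adj[best]}  # pinned messages
    bad = sum(1 for i in range(n) for j in adj[i] if i < j and fixed[i] == fixed[j])
    return fixed, bad

rng = random.Random(1)
adj = random_graph(60, 4.0, rng)
coloring, monochromatic = bp_guided_decimation(adj, q=3)
print("monochromatic edges after decimation:", monochromatic)
```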
Heterogeneity in Preferences towards Complexity
We analyze lottery-choice data in a way that separately estimates the effects of risk aversion and complexity aversion. Complexity is represented by the number of different outcomes in the lottery. A finite mixture random effects model is estimated which assumes that a proportion of the population are complexity-neutral. We find that around 33% of the population are complexity-neutral, around 50% complexity-averse, and the remaining 17% are complexity-loving. Subjects who do react to complexity appear to have a bias towards complexity aversion at the start of the experiment, but complexity aversion reduces with experience, to the extent that the average subject is (almost) complexity-neutral by the end of the experiment. Complexity aversion is found to increase with age and to be higher for non-UK students than for UK students. We also find some evidence that, when evaluating complex lotteries, subjects perceive probabilities in accordance with Prospective Reference Theory.
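As a toy version of the estimation strategy (simulated data, a two-type mixture rather than the paper's three-type random-effects specification, and an assumed logistic choice rule), the sketch below mixes a complexity-neutral type with a complexity-averse type and recovers the mixing proportion and the aversion parameter by maximum likelihood.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import expit

# Illustrative two-type finite mixture: a share p of subjects is
# complexity-neutral (theta = 0), the rest share a common aversion theta.
# Data, functional forms and parameters are all assumptions.

rng = np.random.default_rng(0)
n_subj, n_task = 200, 40

# Per task: expected-value difference between lotteries A and B, and the
# difference in their number of outcomes (the complexity measure).
ev_diff = rng.normal(0.0, 1.0, (n_subj, n_task))
cx_diff = rng.integers(-3, 4, (n_subj, n_task)).astype(float)

true_p, true_theta, noise = 0.4, 0.5, 1.0
is_neutral = rng.random(n_subj) < true_p
theta_i = np.where(is_neutral, 0.0, true_theta)
choose_A = rng.random((n_subj, n_task)) < expit(
    (ev_diff - theta_i[:, None] * cx_diff) / noise)

def neg_loglik(params):
    p_logit, theta = params
    p = expit(p_logit)
    def subj_loglik(th):
        pr = expit((ev_diff - th * cx_diff) / noise)
        ll = np.where(choose_A, np.log(pr + 1e-12), np.log(1 - pr + 1e-12))
        return ll.sum(axis=1)                         # per-subject log-likelihood
    mix = np.logaddexp(np.log(p) + subj_loglik(0.0),
                       np.log(1 - p) + subj_loglik(theta))
    return -mix.sum()

res = minimize(neg_loglik, x0=[0.0, 0.1], method="Nelder-Mead")
print("estimated share complexity-neutral:", expit(res.x[0]))
print("estimated complexity-aversion parameter:", res.x[1])
```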
Tropical cyclone genesis potential using a ventilated potential intensity
Genesis potential indices (GPIs) are widely used to understand the
climatology of tropical cyclones (TCs). However, the sign of projected future
changes depends on how they incorporate environmental moisture. Recent theory
combines potential intensity and mid-tropospheric moisture into a single
quantity called the ventilated potential intensity, which removes this
ambiguity. This work proposes a new GPI () that is proportional to the
product of the ventilated potential intensity and the absolute vorticity raised
to a power. This power is estimated to be approximately 5 by fitting observed
tropical cyclone best-track and ECMWF Reanalysis v5 (ERA5) data. Fitting the
model with separate exponents yields nearly identical values, indicating that
their product likely constitutes a single joint parameter. Likewise, results
are nearly identical for a Poisson model as for the power law. performs
comparably well to existing indices in reproducing the climatological
distribution of tropical cyclone genesis and its covariability with El
Niño-Southern Oscillation, while only requiring a single fitting exponent.
When applied to Coupled Model Intercomparison Project Phase 6 (CMIP6)
projections, predicts that environments globally will become gradually
more favorable for TC genesis with warming, consistent with prior work based on
the normalized entropy deficit, though significant changes emerge only at
higher latitudes under relatively strong warming. helps resolve the
debate over the treatment of the moisture term and its implication for changes
in TC genesis favorability with warming, and its clearer physical
interpretation may offer a step forward towards a theory for genesis across
climate states.
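In symbols (the index's own symbol is elided in the text above, so the notation here is assumed), the proposed index is of the form

$\mathrm{GPI} \propto \left( V_{\mathrm{vent}}\, |\eta| \right)^{a}, \qquad a \approx 5,$

with $V_{\mathrm{vent}}$ the ventilated potential intensity, $\eta$ the absolute vorticity, and $a$ the single fitted exponent; the observation that fitting separate exponents yields nearly equal values is what justifies treating the product as one joint parameter.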
A simple model for predicting tropical cyclone minimum central pressure from intensity and size
Minimum central pressure () is an integrated measure of the tropical
cyclone wind field and is known to be a useful indicator of storm damage
potential. A simple model that predicts from routinely-estimated
quantities, including storm size, would be of great value. Here we present a
simple linear empirical model for predicting from maximum wind speed,
the radius of 34-knot winds (), storm-center latitude, and the
environmental pressure. An empirical model for the pressure deficit is first
developed that takes as predictors specific combinations of these quantities
that are derived directly from theory, based on gradient wind balance and a
modified-Rankine-type wind profile known to capture storm structure inside of
. Model coefficients are estimated using data from the southwestern
North Atlantic and eastern North Pacific from 2004–2022 using aircraft-based
estimates of , Extended Best Track data, and estimates of
environmental pressure from Global Forecast System (GFS) analyses. The model
has near-zero conditional bias even for low , explaining 94.4% of the
variance. Performance is superior to a variety of other model formulations,
including a standard wind-pressure model that does not account for storm size
or latitude (89.4% variance explained). Model performance is also strong when
applied to high-latitude data and data near coastlines. Finally, the model is
shown to perform comparably well in an operations-like setting based solely on
routinely-estimated variables, including the pressure of the outermost closed
isobar. Case study applications to five impactful historical storms are
discussed. Overall, the model offers a simple and fast prediction for
for practical use in operations and research.
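The physical backbone of the theory-derived predictors can be sketched numerically (Python; the wind-profile shape, decay exponent, air density and example inputs are illustrative assumptions, not the paper's fitted model): integrating gradient wind balance over a modified-Rankine-type profile converts maximum wind speed, storm size and latitude into a pressure deficit.

```python
import numpy as np

# Integrate gradient wind balance dp/dr = rho * (v^2/r + f*v) outward over a
# modified-Rankine-type wind profile to estimate the pressure deficit from
# maximum wind speed, radius of maximum wind, 34-kt wind radius and latitude.

RHO = 1.15           # kg m^-3, near-surface air density (assumed)
OMEGA = 7.292e-5     # s^-1, Earth's rotation rate

def modified_rankine(r, v_max, r_max, x):
    """Solid-body rotation inside r_max, v ~ (r_max/r)^x decay outside."""
    r = np.asarray(r, dtype=float)
    return np.where(r <= r_max, v_max * r / r_max, v_max * (r_max / r) ** x)

def min_central_pressure(v_max, r_max, r34, lat_deg, x=0.5, p_env=1010e2):
    """Minimum central pressure (Pa) from gradient wind balance, integrated
    from near the centre out to r34."""
    f = 2.0 * OMEGA * np.sin(np.radians(lat_deg))
    r = np.linspace(1e3, r34, 5000)                   # 1 km out to r34
    v = modified_rankine(r, v_max, r_max, x)
    dpdr = RHO * (v ** 2 / r + f * v)
    deficit = np.sum(0.5 * (dpdr[1:] + dpdr[:-1]) * np.diff(r))  # trapezoid rule
    return p_env - deficit

# Example: a 60 m/s storm, r_max = 30 km, r34 = 300 km, at 25 degrees latitude
p_min = min_central_pressure(v_max=60.0, r_max=30e3, r34=300e3, lat_deg=25.0)
print("estimated minimum central pressure: %.0f hPa" % (p_min / 100.0))
```

The larger the 34-kt radius and the higher the latitude, the larger the Coriolis contribution to the integrated deficit, which is one way to see why size and latitude enter the empirical model alongside maximum wind speed.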
Superdeformed and Triaxial States in 42Ca
Shape parameters of a weakly deformed ground-state band and a highly deformed, slightly triaxial sideband in 42Ca were determined from E2 matrix elements measured in the first low-energy Coulomb excitation experiment performed with AGATA. The picture of two coexisting structures is well reproduced by new state-of-the-art large-scale shell-model and beyond-mean-field calculations. Experimental evidence for superdeformation of the band built on the 0_2^+ state has been obtained, and the role of triaxiality in the A ~ 40 mass region is discussed. Furthermore, the potential of Coulomb excitation as a tool to study superdeformation has been demonstrated for the first time.
Physically-based Assessment of Hurricane Surge Threat under Climate Change
Storm surges are responsible for much of the damage and loss of life associated with landfalling hurricanes. Understanding how global warming will affect hurricane surges thus holds great interest. As general circulation models (GCMs) cannot simulate hurricane surges directly, we couple a GCM-driven hurricane model with hydrodynamic models to simulate large numbers of synthetic surge events under projected climates and assess surge threat, as an example, for New York City (NYC). Struck by many intense hurricanes in recorded history and prehistory, NYC is highly vulnerable to storm surges. We show that the change of storm climatology will probably increase the surge risk for NYC; results based on two GCMs show the distribution of surge levels shifting to higher values by a magnitude comparable to the projected sea-level rise (SLR). The combined effects of storm climatology change and a 1 m SLR may cause the present NYC 100-yr surge flooding to occur every 3–20 yr and the present 500-yr flooding to occur every 25–240 yr by the end of the century.
United States. National Oceanic and Atmospheric Administration (Postdoctoral Fellowship Program); National Science Foundation (U.S.)
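The return-period bookkeeping behind statements like "the present 100-yr flood occurring every 3–20 yr" can be sketched as follows (Python; the surge distributions and storm frequency are synthetic placeholders, not the paper's simulated events).

```python
import numpy as np

# Sketch of return-period arithmetic: given surge heights for individual
# storms and an annual storm arrival rate, the return period of a level is
# 1 / (rate * P(exceedance)). All numbers below are synthetic placeholders.

rng = np.random.default_rng(0)

def return_period(threshold, surges, annual_rate):
    """Return period (yr) of exceeding `threshold`."""
    p_exceed = np.mean(surges > threshold)
    return np.inf if p_exceed == 0 else 1.0 / (annual_rate * p_exceed)

# Placeholder per-storm surge distributions (metres), present vs warmer climate.
present = rng.gumbel(loc=1.0, scale=0.6, size=50_000)
future = rng.gumbel(loc=1.4, scale=0.7, size=50_000) + 1.0   # shift plus 1 m SLR

rate = 0.3                                        # storms per year affecting the site
level_100yr = np.quantile(present, 1.0 - 1.0 / (100 * rate))  # present 100-yr level
print("present 100-yr surge level (m):", round(level_100yr, 2))
print("its return period in the future climate (yr):",
      round(return_period(level_100yr, future, rate), 1))
```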
