Optimal Rules under Adjustment Cost and Infrequent Information
A large number of microeconomic decision variables such as investments, prices, inventories, or employment are characterized by intermittent large adjustments. The behavior of these variables has often been modeled as following state-dependent rules. The optimality of such state-dependent rules depends crucially on continuous observation of the relevant state, an assumption which is far from being fulfilled in practice. We propose an alternative model, where at least part of the information about the relevant state variable arrives infrequently. We study several alternatives. We start with the special case where innovations are infrequent but readily observed. Only in this case are optimal rules state-dependent. We then explore the common case of infrequent and delayed information. It may arrive at deterministic times, like periodic macroeconomic statistics, or stochastically, when some events trigger announcements. Part of the relevant information may be continuously observed, while the other part is only observed infrequently. The resulting rules are time- and state-dependent, characterized by trigger and target points that are functions of the time elapsed since the last information arrival. We derive the conditions which characterize the optimal rules and provide numerical algorithms for each case.
Keywords: Adjustment costs, Infrequent information, Optimal rules
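The trigger/target structure described in the abstract can be sketched as a toy rule. This is purely illustrative and not the paper's algorithm: the band functions `lower`, `upper`, `target` and their particular time-dependence are hypothetical choices made here for concreteness.

```python
# Illustrative sketch (not from the paper): a time- and state-dependent
# adjustment rule. Between information arrivals, the inaction band and the
# target point are functions of the time tau elapsed since the last signal.

def adjust(x, tau, lower, upper, target):
    """Apply a trigger/target rule: adjust only when the state estimate x
    leaves the band [lower(tau), upper(tau)]; then reset it to target(tau).

    x      -- current (estimated) deviation of the decision variable
    tau    -- time elapsed since the last information arrival
    lower, upper, target -- hypothetical band/target functions of tau
    """
    if x < lower(tau) or x > upper(tau):
        return target(tau)  # large, intermittent adjustment
    return x                # no adjustment inside the inaction band

# Example: a band that widens with elapsed time, reflecting growing
# uncertainty about the unobserved state (an illustrative choice).
lo = lambda tau: -1.0 - 0.1 * tau
hi = lambda tau: 1.0 + 0.1 * tau
tgt = lambda tau: 0.0

adjust(1.5, 0.0, lo, hi, tgt)  # outside the band -> reset to 0.0
adjust(0.5, 0.0, lo, hi, tgt)  # inside the band  -> unchanged, 0.5
```

Because the band depends on tau, the same state value can trigger an adjustment right after an information arrival but lie inside the (wider) band later on, which is the sense in which the optimal rule is both time- and state-dependent.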
Gravitational Wave signatures of inflationary models from Primordial Black Hole Dark Matter
Primordial Black Holes (PBH) could be the cold dark matter of the universe.
They could have arisen from large (order one) curvature fluctuations produced
during inflation that reentered the horizon in the radiation era. At reentry,
these fluctuations source gravitational waves (GW) via second order anisotropic
stresses. These GW, together with those (possibly) sourced during inflation by
the same mechanism responsible for the large curvature fluctuations, constitute
a primordial stochastic GW background (SGWB) that unavoidably accompanies the
PBH formation. We study how the amplitude and the range of frequencies of this
signal depend on the statistics (Gaussian versus χ²) of the primordial
curvature fluctuations, and on the evolution of the PBH mass function due to
accretion and merging. We then compare this signal with the sensitivity of
present and future detectors, at PTA and LISA scales. We find that this SGWB
will help to probe, or strongly constrain, the early universe mechanism of PBH
production. The comparison between the peak mass of the PBH distribution and
the peak frequency of this SGWB will provide important information on the
merging and accretion evolution of the PBH mass distribution from their
formation to the present era. Different assumptions on the statistics and on
the PBH evolution also result in different amounts of CMB μ-distortions.
Therefore the above results can be complemented by the detection (or the
absence) of μ-distortions with an experiment such as PIXIE.
Comment: 32 pages, 12 figures
Reconstruction Algorithms for Sums of Affine Powers
In this paper we study sums of powers of affine functions in (mostly) one
variable. Although quite simple, this model is a generalization of two
well-studied models: Waring decomposition and sparsest shift. For these three
models there are natural extensions to several variables, but this paper is
mostly focused on univariate polynomials. We present structural results which
compare the expressive power of the three models; and we propose algorithms
that find the smallest decomposition of f in the first model (sums of affine
powers) for an input polynomial f given in dense representation. We also begin
a study of the multivariate case. This work could be extended in several
directions. In particular, just as for Sparsest Shift and Waring decomposition,
one could consider extensions to "supersparse" polynomials and attempt a fuller
study of the multi-variate case. We also point out that the basic univariate
problem studied in the present paper is far from completely solved: our
algorithms all rely on some assumptions for the exponents in an optimal
decomposition, and some algorithms also rely on a distinctness assumption for
the shifts. It would be very interesting to weaken these assumptions, or even
to remove them entirely. Another related and poorly understood issue is that of
the bit size of the constants appearing in an optimal decomposition: is it
always polynomially related to the bit size of the input polynomial given in
dense representation?
Comment: This version improves on several algorithmic results
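For concreteness, the three univariate models compared in the abstract can be written as follows (the notation is ours, chosen for illustration, and may differ from the paper's):

```latex
% Sums of affine powers: both the shifts a_i and the exponents e_i vary.
f(x) \;=\; \sum_{i=1}^{s} \alpha_i \, (x - a_i)^{e_i}

% Waring decomposition: all exponents equal the degree d of f.
f(x) \;=\; \sum_{i=1}^{s} \alpha_i \, (x - a_i)^{d}

% Sparsest shift: a single common shift a, with varying exponents.
f(x) \;=\; \sum_{i=1}^{s} \alpha_i \, (x - a)^{e_i}
```

Both Waring decomposition and sparsest shift are obtained from the first model by freezing one of its two degrees of freedom, which is the sense in which sums of affine powers generalize the two well-studied models.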
Evaluation of tertiary treatments for the reuse of water from industrial effluents