
    Uniformly self-justified equilibria

    We consider dynamic stochastic economies with heterogeneous agents and introduce the concept of uniformly self-justified equilibria (USJE), temporary equilibria for which expectations satisfy the following rationality requirements: (i) individuals' forecasting functions for the next period's endogenous variables are assumed to lie in a compact, finite-dimensional set of functions, and (ii) the forecasts constitute the best uniform approximation to a selection of the equilibrium correspondence. We show that, in contrast to rational expectations equilibria, USJE always exist, and we develop a simple algorithm to compute them. As an application, we discuss a stochastic overlapping generations exchange economy. We give an example where recursive (rational expectations) equilibria fail to exist and explain how to construct USJE for that example. In addition, we provide numerical examples to illustrate our computational method.
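
    Requirement (ii) is a best uniform (sup-norm) approximation within a finite-dimensional family. As a minimal, self-contained sketch of what such a minimax fit looks like in practice (not the authors' algorithm), the snippet below computes the best uniform polynomial approximation to a sampled function via a linear program; the function `g` is only a placeholder standing in for a selection of the equilibrium correspondence.

```python
# Hypothetical sketch: best uniform (sup-norm) approximation of a sampled
# forecasting rule by a finite-dimensional polynomial family, via a linear
# program. The economic objects are stand-ins, not the paper's model.
import numpy as np
from scipy.optimize import linprog

x = np.linspace(0.1, 1.0, 200)   # grid of today's state
g = np.log(1.0 + x)              # placeholder for the equilibrium selection

degree = 3
A = np.vander(x, degree + 1)     # polynomial basis, one column per coefficient
n = degree + 1

# Minimize t subject to -t <= A c - g <= t  (discrete minimax fit).
c_obj = np.zeros(n + 1); c_obj[-1] = 1.0
A_ub = np.block([[A, -np.ones((len(x), 1))],
                 [-A, -np.ones((len(x), 1))]])
b_ub = np.concatenate([g, -g])
res = linprog(c_obj, A_ub=A_ub, b_ub=b_ub, bounds=[(None, None)] * (n + 1))
coeffs, sup_err = res.x[:n], res.x[-1]
print("best uniform approximation error:", sup_err)
```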

    Distributed Learning with Biogeography-Based Optimization

    We present hardware testing of an evolutionary algorithm known as biogeography-based optimization (BBO) and extend it to distributed learning. BBO is an evolutionary algorithm based on the theory of biogeography, which describes how nature geographically distributes organisms. We introduce a new BBO algorithm that does not use a centralized computer, which we call distributed BBO. BBO and distributed BBO have been developed by mimicking nature to obtain an algorithm that optimizes solutions for different situations and problems. We use fourteen common benchmark functions to obtain results from BBO and distributed BBO, and we also use both algorithms to optimize robot control algorithms. We present not only simulation results but also experimental results using BBO to optimize the control algorithms of mobile robots. The results show that centralized BBO generally gives better optimization results and would usually be a better choice than any of the newly proposed forms of distributed BBO. However, distributed BBO allows the user to find a somewhat less optimal solution while avoiding the need for centralized, coordinated control.
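
    For readers unfamiliar with BBO, the following is a minimal sketch of the centralized variant: habitats (candidate solutions) are ranked by cost, good habitats emigrate features with high probability, and poor habitats immigrate them. The benchmark (sphere function), population size, and mutation rate below are illustrative choices, not those used in the paper.

```python
# Minimal centralized BBO sketch; all parameter choices are illustrative.
import numpy as np

rng = np.random.default_rng(0)

def sphere(x):                                  # benchmark: minimize sum of squares
    return np.sum(x**2)

pop_size, dim, generations = 20, 5, 100
pop = rng.uniform(-5, 5, (pop_size, dim))

for _ in range(generations):
    cost = np.array([sphere(h) for h in pop])
    pop = pop[np.argsort(cost)]                 # best habitat first
    mu = np.linspace(1, 0, pop_size)            # emigration: high for good habitats
    lam = 1 - mu                                # immigration: high for poor habitats
    new_pop = pop.copy()
    for i in range(pop_size):
        for d in range(dim):
            if rng.random() < lam[i]:           # immigrate this feature
                j = rng.choice(pop_size, p=mu / mu.sum())
                new_pop[i, d] = pop[j, d]
            if rng.random() < 0.01:             # small mutation probability
                new_pop[i, d] = rng.uniform(-5, 5)
    new_pop[0] = pop[0]                         # elitism: keep the best habitat
    pop = new_pop

print("best cost:", min(sphere(h) for h in pop))
```

    Distributed BBO, as the abstract notes, replaces the centralized ranking step with peer-to-peer exchanges between habitats, trading some solution quality for decentralization.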

    The climate in climate economics

    To analyze climate change mitigation strategies, economists rely on simplified climate models, so-called climate emulators. We propose a generic and transparent calibration and evaluation strategy for these climate emulators that is based on the Coupled Model Intercomparison Project, Phase 5 (CMIP5). We demonstrate that the appropriate choice of the free model parameters can be of key relevance for the predicted social cost of carbon. We propose to use four different test cases: two tests to separately calibrate and evaluate the carbon cycle and temperature response, a test to quantify the transient climate response, and a final test to evaluate the performance for scenarios close to those arising from economic models. We re-calibrate the climate part of the widely used DICE-2016 model to the multi-model mean as well as to extreme, but still permissible, climate sensitivities and carbon cycle responses. We demonstrate that the functional form of the climate emulator of the DICE-2016 model is fit for purpose, despite its simplicity, but that its carbon cycle and temperature equations are miscalibrated. We examine the importance of the calibration for the social cost of carbon both in a partial equilibrium setting where interest rates are exogenous and in the simple general equilibrium setting from DICE-2016. We find that the model uncertainty from different consistent calibrations of the climate system can change the social cost of carbon by a factor of four if one assumes a quadratic damage function. When calibrated to the multi-model mean, our model predicts values for the social cost of carbon similar to those of the original DICE-2016, but with a strongly reduced sensitivity to the discount rate and about one degree less long-term warming. The social cost of carbon in DICE-2016 is oversensitive to the discount rate, leading to extreme comparative-statics responses to changes in preferences.
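
    The DICE-2016 climate emulator referred to here has a simple structure: a three-reservoir linear carbon cycle coupled to a two-box temperature model. The sketch below shows that structure; the parameter values are commonly cited DICE-2016R numbers entered from memory and should be treated as illustrative assumptions, not the paper's re-calibration.

```python
# Sketch of a DICE-2016-style climate emulator: linear carbon cycle plus
# two-box temperature model. Parameters are illustrative assumptions.
import numpy as np

dt_years = 5                                  # DICE uses 5-year time steps

# Carbon cycle: atmosphere (AT), upper ocean (UP), lower ocean (LO), in GtC.
M = np.array([851.0, 460.0, 1740.0])          # initial masses (2015)
B = np.array([[0.880, 0.196, 0.000],          # per-step transfer matrix;
              [0.120, 0.797, 0.001465],       # columns sum to one, so carbon
              [0.000, 0.007, 0.998535]])      # is conserved between reservoirs

t2xco2, fco22x = 3.1, 3.6813                  # climate sensitivity, 2xCO2 forcing
lam = fco22x / t2xco2                         # feedback parameter (W/m^2/K)
c1, c3, c4 = 0.1005, 0.088, 0.025             # two-box model coefficients
T_at, T_lo = 0.85, 0.0068                     # initial temperatures (K)

emissions = 10.0                              # GtC per year, held constant here

for step in range(20):                        # 100 years
    M = B @ M
    M[0] += emissions * dt_years
    forcing = fco22x * np.log2(M[0] / 588.0)  # 588 GtC = pre-industrial stock
    T_at = T_at + c1 * (forcing - lam * T_at - c3 * (T_at - T_lo))
    T_lo = T_lo + c4 * (T_at - T_lo)

print(f"atmospheric temperature after 100 years: {T_at:.2f} K")
```

    The paper's point is that this functional form is adequate, but that the numbers in the transfer matrix and temperature equations need re-calibration against CMIP5.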

    Metrology of Rydberg states of the hydrogen atom

    We present a method to precisely measure the frequencies of transitions to high-$n$ Rydberg states of the hydrogen atom which are not subject to uncontrolled systematic shifts caused by stray electric fields. The method consists in recording Stark spectra of the field-insensitive $k=0$ Stark states and the field-sensitive $k=\pm2$ Stark states, which are used to calibrate the electric field strength. We illustrate this method with measurements of transitions from the $2\,\mathrm{s}(f=0 \text{ and } 1)$ hyperfine levels in the presence of intentionally applied electric fields with strengths in the range between 0.4 and 1.6 V cm$^{-1}$. The slightly field-dependent $k=0$ level energies are corrected with a precisely calculated shift to obtain the corresponding Bohr energies $\left(-cR_{\mathrm{H}}/n^2\right)$. The energy difference between $n=20$ and $n=24$ obtained with our method agrees with Bohr's formula within the 10 kHz experimental uncertainty. We also determined the hyperfine splitting of the $2\,\mathrm{s}$ state by taking the difference between transition frequencies from the $2\,\mathrm{s}(f=0 \text{ and } 1)$ levels to the $n=20, k=0$ Stark states. Our results demonstrate the possibility of carrying out precision measurements in high-$n$ hydrogenic quantum states.
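
    As a small worked check of the Bohr-formula comparison mentioned above, the snippet below evaluates the $n=20 \rightarrow n=24$ interval from $E_n = -cR_{\mathrm{H}}/n^2$. The constants are CODATA-style values entered by hand, so treat the exact digits as approximate.

```python
# Bohr interval between n=20 and n=24 from E_n = -c R_H / n^2.
c_R_inf = 3.2898419603e15               # Rydberg frequency c*R_inf in Hz
m_e_over_m_p = 1.0 / 1836.15267343      # electron-to-proton mass ratio
c_R_H = c_R_inf / (1.0 + m_e_over_m_p)  # reduced-mass correction for hydrogen

def bohr_energy_hz(n):
    """Bohr level in frequency units: E_n / h = -c R_H / n**2."""
    return -c_R_H / n**2

interval = bohr_energy_hz(24) - bohr_energy_hz(20)
print(f"n=20 -> n=24 Bohr interval: {interval / 1e9:.3f} GHz")
# roughly 2512 GHz; the measurement quoted above agrees within 10 kHz.
```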

    Machine learning for dynamic incentive problems

    We propose a generic method for solving infinite-horizon, discrete-time dynamic incentive problems with hidden states. We first combine set-valued dynamic programming techniques with Bayesian Gaussian mixture models to determine irregularly shaped equilibrium value correspondences. Second, we generate training data from those pre-computed feasible sets to recursively solve the dynamic incentive problem by a massively parallelized Gaussian process machine learning algorithm. This combination enables us to analyze models of a complexity previously considered intractable. To demonstrate the broad applicability of our framework, we compute solutions for models of repeated agency with history dependence, many types, and varying preferences.
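
    The core approximation step the abstract describes is fitting a Gaussian process surrogate to sampled value-function points so that the value can be queried anywhere inside a recursive loop. The fragment below illustrates that step only; the state grid and value samples are stand-ins, and this is not the paper's parallel solver.

```python
# Illustrative GP surrogate for sampled value-function points (stand-in data).
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel

rng = np.random.default_rng(1)

# Stand-in "training data from a pre-computed feasible set": sampled states
# and the values a dynamic-programming step would assign to them.
states = rng.uniform(0.0, 1.0, size=(50, 2))
values = np.sin(3 * states[:, 0]) + states[:, 1] ** 2   # placeholder values

kernel = ConstantKernel(1.0) * RBF(length_scale=0.3)
gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True)
gp.fit(states, values)

# The surrogate can now be evaluated, with uncertainty, at arbitrary states,
# which is what makes it usable inside a recursive value-iteration loop.
test = np.array([[0.5, 0.5]])
mean, std = gp.predict(test, return_std=True)
print(f"predicted value {mean[0]:.3f} +/- {std[0]:.3f}")
```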

    Modeling temperature-dependent population dynamics in the excited state of the nitrogen-vacancy center in diamond

    The nitrogen-vacancy (NV) center in diamond is well known in quantum metrology and quantum information for its favorable spin and optical properties, which span a wide temperature range from near zero to over 600 K. Despite its prominence, the NV center's photo-physics is incompletely understood, especially at intermediate temperatures between 10 and 100 K where phonons become activated. In this work, we present a rate model able to describe the cross-over from the low-temperature to the high-temperature regime. Key to the model is a phonon-driven hopping between the two orbital branches in the excited state (ES), which accelerates spin relaxation via an interplay with the ES spin precession. We extend our model to include magnetic and electric fields as well as crystal strain, allowing us to simulate the population dynamics over a wide range of experimental conditions. Our model recovers existing descriptions for the low- and high-temperature limits and successfully explains various sets of literature data. Further, the model allows us to predict experimental observables, in particular the photoluminescence (PL) emission rate, spin contrast, and spin initialization fidelity relevant for quantum applications. Lastly, our model allows probing the electron-phonon interaction of the NV center and reveals a gap between the current understanding and recent experimental findings.
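
    The structure of such a rate model is a linear master equation for level populations. The sketch below is far simpler than the paper's model: just two excited-state orbital branches connected by a temperature-activated hopping rate, plus decay to the ground state. All rates are made-up round numbers for illustration only.

```python
# Schematic rate-equation integration for phonon-driven orbital hopping;
# rates and the activation form are illustrative assumptions.
import numpy as np
from scipy.linalg import expm

def hopping_rate(T, k0=1e9, T_act=50.0):
    """Toy activated hopping rate (1/s); k0 and T_act are assumptions."""
    return k0 * np.exp(-T_act / T)

def populations(T, t=10e-9):
    k_hop = hopping_rate(T)
    k_dec = 1 / 12e-9                  # ~12 ns excited-state lifetime
    # States: [branch Ex, branch Ey, ground]. Each column of M sums to
    # zero, so total population is conserved under dp/dt = M p.
    M = np.array([[-(k_hop + k_dec),  k_hop,            0.0],
                  [ k_hop,           -(k_hop + k_dec),  0.0],
                  [ k_dec,            k_dec,            0.0]])
    p0 = np.array([1.0, 0.0, 0.0])     # start in one orbital branch
    return expm(M * t) @ p0            # propagate for time t

for T in (5, 50, 300):
    print(T, "K ->", np.round(populations(T), 3))
```

    At low temperature the hopping freezes out and the branch populations stay distinct; at high temperature hopping equilibrates the branches within the excited-state lifetime, the cross-over behavior the model addresses.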

    Imaging-assisted single-photon Doppler-free laser spectroscopy and the ionization energy of metastable triplet helium

    Skimmed supersonic beams provide intense, cold, collision-free samples of atoms and molecules and are one of the most widely used tools in atomic and molecular laser spectroscopy. High-resolution optical spectra are typically recorded in a perpendicular arrangement of laser and supersonic beams to minimize Doppler broadening. Typical Doppler widths are nevertheless limited to tens of MHz by the residual transverse-velocity distribution in the gas-expansion cones. We present an imaging method to overcome this limitation which exploits the correlation between the positions of the atoms and molecules in the supersonic expansion and their transverse velocities, and thus their Doppler shifts. With the example of spectra of $(1\mathrm{s})(n\mathrm{p})\,^3\mathrm{P}_{0-2} \leftarrow (1\mathrm{s})(2\mathrm{s})\,^3\mathrm{S}_1$ transitions to high Rydberg states of metastable triplet He, we demonstrate the suppression of the residual Doppler broadening and a reduction of the full linewidths at half maximum to only about 1 MHz in the UV. Using a retro-reflection arrangement for the laser beam and a cross-correlation method, we determine Doppler-free spectra without any signal loss from the imaging-based selection of atoms within ultranarrow transverse-velocity classes. As an illustration, we determine the ionization energy of triplet metastable He and confirm the significant discrepancy between recent experimental (Clausen et al., Phys. Rev. Lett. 127, 093001 (2021)) and high-level theoretical (Patkós et al., Phys. Rev. A 103, 042809 (2021)) values of this quantity.
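
    A back-of-the-envelope calculation shows why selecting narrow transverse-velocity classes narrows the lines: the first-order Doppler shift is $\Delta\nu = \nu_0\, v/c$. The 260 nm wavelength below is a rough stand-in for the UV transitions studied.

```python
# First-order Doppler width versus transverse-velocity spread.
c = 2.99792458e8                    # speed of light (m/s)
nu0 = c / 260e-9                    # ~1.15e15 Hz for a 260 nm UV transition

for v in (10.0, 1.0, 0.25):         # transverse-velocity spreads in m/s
    print(f"v = {v:5.2f} m/s -> Doppler width ~ {nu0 * v / c / 1e6:.2f} MHz")
# A tens-of-MHz width at ~10 m/s collapses toward ~1 MHz once imaging selects
# sub-m/s transverse-velocity classes, consistent with the linewidths above.
```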

    Temperature dependence of photoluminescence intensity and spin contrast in nitrogen-vacancy centers

    We report on measurements of the photoluminescence (PL) properties of single nitrogen-vacancy (NV) centers in diamond at temperatures between 4 and 300 K. We observe a strong reduction of the PL intensity and spin contrast between ca. 10 and 100 K, which recover to high levels below and above this range. Further, we find a rich dependence on magnetic bias field and crystal strain. We develop a comprehensive model based on spin mixing and orbital hopping in the electronic excited state that quantitatively explains the observations. Beyond a more complete understanding of the excited-state dynamics, our work provides a novel approach for probing electron-phonon interactions and a predictive tool for optimizing experimental conditions for quantum applications.
    Comment: Companion paper: arXiv:2304.02521 | Model: https://github.com/sernstETH/nvratemode

    Statistical Mechanics of the Chinese Restaurant Process: lack of self-averaging, anomalous finite-size effects and condensation

    The Pitman-Yor process, also known as the Chinese Restaurant Process, is a stochastic process that generates distributions following a power law with exponents lower than two, as found in numerous physical, biological, technological and social systems. We discuss its rich behavior with the tools and viewpoint of statistical mechanics. We show that this process invariably gives rise to condensation, i.e. a distribution dominated by a finite number of classes. We also evaluate thoroughly the finite-size effects, finding that the lack of a stationary state and of self-averaging creates realization-dependent cutoffs and behavior of the distributions with no equivalent in other statistical mechanical models.
    Comment: 5 pages, 1 figure
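
    The process itself is easy to simulate: customer $n+1$ joins an existing table $j$ with probability $(n_j - \sigma)/(n + \theta)$ and opens a new table with probability $(\theta + \sigma K)/(n + \theta)$, where $K$ is the current number of tables. A minimal simulation follows; the parameter values $\theta = 1$, $\sigma = 0.5$ are arbitrary illustrations (power-law exponents below two correspond to $0 < \sigma < 1$).

```python
# Minimal Pitman-Yor (Chinese Restaurant) process simulation.
import numpy as np

rng = np.random.default_rng(42)

def pitman_yor(n, theta=1.0, sigma=0.5):
    """Return table occupancies (class sizes) after seating n customers."""
    counts = []                          # customers per table
    for i in range(n):                   # i = customers seated so far
        k = len(counts)
        # Existing table j w.p. (counts[j] - sigma)/(i + theta),
        # new table w.p. (theta + sigma*k)/(i + theta); these sum to one.
        probs = np.array([c - sigma for c in counts] + [theta + sigma * k])
        probs /= i + theta
        choice = rng.choice(k + 1, p=probs)
        if choice == k:
            counts.append(1)
        else:
            counts[choice] += 1
    return np.array(counts)

sizes = pitman_yor(50_000)
print("classes:", len(sizes),
      "| largest class fraction:", sizes.max() / sizes.sum())
```

    Running this for several seeds makes the paper's point about the lack of self-averaging tangible: the fraction of customers at the largest table varies substantially from realization to realization.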

    Methods comparison for detecting trends in herbicide monitoring time-series in streams

    An inadvertent consequence of pesticide use is aquatic pesticide pollution, which has prompted the implementation of mitigation measures in many countries. Water quality monitoring programs are an important tool to evaluate the efficacy of these mitigation measures. However, the large interannual variability of pesticide losses makes it challenging to detect significant improvements in water quality and to attribute these improvements to the application of specific mitigation measures. Thus, there is a gap in the literature that informs researchers and authorities regarding the number of years of aquatic pesticide monitoring or the effect size (e.g., loss reduction) that is required to detect significant trends in water quality. Our research addresses this issue by combining two exceptional empirical data sets with modelling to explore the relationships between the achieved pesticide reduction levels due to mitigation measures and the length of the observation period needed to establish statistically significant trends. Our study includes both a large catchment (Rhine at Basel, ∼36,300 km²) and a small one (Eschibach, 1.2 km²), which represent spatial scales at either end of the spectrum that would be realistic for monitoring programs designed to assess water quality. Our results highlight several requirements for a monitoring program to allow for trend detection. Firstly, sufficient baseline monitoring is required before implementing mitigation measures. Secondly, the availability of pesticide use data helps account for the interannual variability and temporal trends, but such data are usually lacking. Finally, the timing and magnitude of hydrological events relative to pesticide application can obscure the observable effects of mitigation measures (especially in small catchments). Our results indicate that a strong reduction (i.e., 70–90 %) is needed to detect a change within 10 years of monitoring data. The trade-off in applying a more sensitive method for change detection is that it may be more prone to false positives. Our results suggest that it is important to consider the trade-off between the sensitivity of trend detection and the risk of false positives when selecting an appropriate method, and that applying more than one method can provide more confidence in trend detection.
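
    One common choice among the kinds of trend tests such comparisons cover is a Mann-Kendall-style test, which can be run via Kendall's tau between annual loads and time. The snippet below applies it to a synthetic series (a 70 % stepwise reduction after year 10 plus log-normal interannual variability); the series and noise level are assumptions for illustration, not the study's data.

```python
# Mann-Kendall-style trend test on a synthetic pesticide-load series.
import numpy as np
from scipy.stats import kendalltau

rng = np.random.default_rng(7)
years = np.arange(20)
baseline = np.where(years < 10, 1.0, 0.3)   # 70 % reduction after year 10
loads = baseline * rng.lognormal(mean=0.0, sigma=0.5, size=years.size)

tau, p_value = kendalltau(years, loads)
print(f"Kendall tau = {tau:.2f}, p = {p_value:.3f}")
# Whether the trend reaches p < 0.05 varies from realization to realization;
# rerunning with different seeds illustrates the detection problem quantified
# in the study.
```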