
    Unlikely Estimates of the Ex Ante Real Interest Rate: Another Dismal Performance from the Dismal Science

    The ex ante real rate of interest is one of the most important concepts in economics and finance. Because the universally used Fisher theory of interest requires positive ex ante real interest rates, empirical estimates of the ex ante real interest rate derived from the Fisher theory should also be positive. Unfortunately, virtually all estimates of the ex ante real interest rate published in economic journals and textbooks or used in macroeconomic models and policy discussions over the past 35 years contain negative values for extended time periods and are thus theoretically flawed. Moreover, the procedures generally used to estimate ex ante real interest rates were shown over 30 years ago to produce biased estimates of the ex ante real rate. In this article, we document this puzzling chasm between the Fisherian theory, which mandates positive ex ante real interest rates, and the practice of macroeconomists, who generate and use ex ante real interest rate estimates that violate this theory. We explore the reasons that this problem exists and assess some alternative approaches for estimating the ex ante real interest rate to determine whether they might resolve it.
    Keywords: ex ante real interest rate, Fisher theory of interest, biased real interest rate estimates
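    As a quick illustration of the estimation practice the article critiques, the sketch below inverts the Fisher relation r_t = i_t - E_t[pi_{t+1}] and, as is common under rational expectations, substitutes realized future inflation for expected inflation. All series, values, and variable names here are hypothetical, chosen only to show how negative estimates arise.

```python
# Sketch of the standard (critiqued) estimation approach: invert the
# Fisher relation i_t = r_t + E_t[pi_{t+1}], proxying expected inflation
# with realized future inflation. All values below are hypothetical.

nominal_rates = [0.050, 0.045, 0.040, 0.035]        # i_t, annualized
realized_inflation = [0.060, 0.030, 0.020, 0.045]   # pi_{t+1}, proxy for E_t[pi_{t+1}]

# Ex ante real rate estimate under rational expectations: r_t = i_t - pi_{t+1}.
ex_ante_real = [i - pi for i, pi in zip(nominal_rates, realized_inflation)]

# Negative values illustrate the puzzle the article documents: Fisherian
# theory requires r_t > 0, yet standard estimates routinely violate it.
for t, r in enumerate(ex_ante_real):
    print(f"t={t}: estimated ex ante real rate = {r:+.3f}")
```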

    Group Size Effect on the Success of Wolves Hunting

    Social foraging shows unexpected features, such as the existence of a group size threshold for accomplishing a successful hunt. Above this threshold, additional individuals do not increase the probability of capturing the prey. Recent direct observations of wolves in Yellowstone Park show that the group size threshold when hunting their most formidable prey, bison, is nearly three times greater than when hunting elk, a prey that is considerably less challenging to capture. These observations provide empirical support for a computational particle model of group hunting that was previously shown to be effective in explaining why hunting success peaks at apparently small pack sizes when hunting elk. The model is based on two critical distances between wolves and prey: the minimal safe distance at which wolves stand from the prey, and the avoidance distance at which wolves move away from each other as they approach the prey. The minimal safe distance is longer when the prey is more dangerous to hunt. We show that the model effectively explains why the group size threshold is greater when the minimal safe distance is longer. Although both distances are longer when the prey is more dangerous, they contribute oppositely to the value of the group size threshold: the threshold is smaller when the avoidance distance is longer. This unexpected mechanism gives rise to a global increase of the group size threshold for bison with respect to elk, but other prey more dangerous than elk can have specific critical distances that give rise to the same group size threshold. Our results show that the computational model can guide further research on group size effects, suggesting that more experimental observations should be obtained for other kinds of prey, e.g. moose.
    Comment: 20 pages, 4 figures, 8 references. The author's other papers can be downloaded at http://www.denys-dutykh.com
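    To make the two-distance mechanism concrete, here is a minimal toy sketch of the rule the abstract describes: wolves are attracted toward the prey down to a minimal safe distance and repel each other inside an avoidance distance. The update rule, parameter values, and names (D_SAFE, D_AVOID, STEP) are illustrative assumptions, not the published particle model.

```python
import math
import random

# Toy sketch of the two-distance hunting rule: attraction toward the prey
# outside the safe distance, retreat inside it, and wolf-wolf repulsion
# within the avoidance distance. Parameters are hypothetical.
D_SAFE = 5.0    # minimal safe distance from the prey (longer for dangerous prey)
D_AVOID = 3.0   # wolf-wolf avoidance distance
STEP = 0.1      # displacement per update

def step(wolves, prey):
    """Advance each wolf one step under the two-distance rule."""
    new_positions = []
    for i, (x, y) in enumerate(wolves):
        dx, dy = prey[0] - x, prey[1] - y
        d = math.hypot(dx, dy) or 1e-9
        # Move toward the prey until reaching D_SAFE, back away inside it.
        sign = 1.0 if d > D_SAFE else -1.0
        vx, vy = sign * dx / d, sign * dy / d
        # Pairwise avoidance between wolves that come too close.
        for j, (ox, oy) in enumerate(wolves):
            if i == j:
                continue
            rx, ry = x - ox, y - oy
            r = math.hypot(rx, ry)
            if 0 < r < D_AVOID:
                vx += rx / r
                vy += ry / r
        v = math.hypot(vx, vy) or 1e-9
        new_positions.append((x + STEP * vx / v, y + STEP * vy / v))
    return new_positions

# Usage: iterate a small pack. Intuitively, a longer D_SAFE enlarges the
# ring around the prey, so more wolves fit before D_AVOID crowding
# saturates it, which points toward a larger group size threshold.
random.seed(0)
pack = [(random.uniform(-10, 10), random.uniform(-10, 10)) for _ in range(5)]
for _ in range(200):
    pack = step(pack, (0.0, 0.0))
print([(round(x, 2), round(y, 2)) for x, y in pack])
```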

    Optimizing Neural Networks with Gradient Lexicase Selection

    One potential drawback of using aggregated performance measurement in machine learning is that models may learn to accept higher errors on some training cases as compromises for lower errors on others, with the lower errors actually being instances of overfitting. This can lead to both stagnation at local optima and poor generalization. Lexicase selection is an uncompromising selection method developed in evolutionary computation, which selects models on the basis of sequences of individual training case errors instead of aggregated metrics such as loss and accuracy. In this paper, we investigate how lexicase selection, in its general form, can be integrated into deep learning to enhance generalization. We propose Gradient Lexicase Selection, an optimization framework that combines gradient descent and lexicase selection in an evolutionary fashion. Our experimental results demonstrate that the proposed method improves the generalization performance of various widely used deep neural network architectures across three image classification benchmarks. Additionally, qualitative analysis suggests that our method assists networks in learning more diverse representations. Our source code is available on GitHub: https://github.com/ld-ing/gradient-lexicase
    Comment: ICLR 202
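    To make the selection mechanism concrete, below is a minimal sketch of lexicase selection in its general form: candidates are filtered case by case, in a random case order, keeping only those with the best error on each case, rather than comparing aggregated losses. The function name and the per-case error data are hypothetical; the paper's full gradient-based framework, which combines this kind of selection with gradient descent, lives in the linked repository.

```python
import random

# Minimal sketch of general lexicase selection over per-case errors.
def lexicase_select(case_errors, rng=random):
    """case_errors[i][c] is candidate i's error on training case c.
    Returns the index of the selected candidate."""
    survivors = list(range(len(case_errors)))
    cases = list(range(len(case_errors[0])))
    rng.shuffle(cases)              # random case ordering per selection event
    for c in cases:
        best = min(case_errors[i][c] for i in survivors)
        survivors = [i for i in survivors if case_errors[i][c] == best]
        if len(survivors) == 1:
            break
    return rng.choice(survivors)    # break any remaining ties at random

# Usage with hypothetical 0/1 errors for three candidates on four cases:
errors = [
    [0, 1, 0, 0],  # candidate 0
    [0, 0, 1, 0],  # candidate 1
    [1, 0, 0, 1],  # candidate 2
]
random.seed(42)
print("selected candidate:", lexicase_select(errors))
```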
