Smoothening block rewards: How much should miners pay for mining pools?
The rewards a blockchain miner earns vary over time. Most of the time is
spent mining without receiving any reward, and only occasionally does the
miner win a block and earn one. Mining pools smooth the stochastic flow of
rewards and, in the ideal case, provide a steady stream of rewards over time.
Smooth block rewards allow miners to choose an optimal mining power growth
strategy that will result in a higher reward yield for a given investment. We
quantify the economic advantage for a given miner of having smooth rewards, and
use this to define a maximum percentage of rewards that a miner should be
willing to pay for mining pool services.
Comment: 15 pages, 1 figure
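The smoothing claim can be illustrated with a toy simulation (all parameters below are hypothetical, not taken from the paper): a solo miner wins whole blocks at random, while a pool member receives a pro-rata share of every block the pool finds, so expected rewards are nearly equal but the variance collapses.

```python
import random

random.seed(0)

BLOCKS = 10_000     # blocks observed (hypothetical)
REWARD = 6.25       # reward per block
P_SOLO = 0.001      # miner's share of total hash power (hypothetical)
POOL_SHARE = 0.10   # pool's share of total hash power (hypothetical)
FEE = 0.02          # pool fee, 2% (hypothetical)

# Solo mining: the miner wins each block with probability P_SOLO.
solo = [REWARD if random.random() < P_SOLO else 0.0 for _ in range(BLOCKS)]

# Pool mining: the pool wins with probability POOL_SHARE and pays the
# miner a pro-rata share (P_SOLO / POOL_SHARE) of the reward, minus the fee.
pro_rata = P_SOLO / POOL_SHARE
pool = [REWARD * pro_rata * (1 - FEE) if random.random() < POOL_SHARE else 0.0
        for _ in range(BLOCKS)]

def mean(xs):
    return sum(xs) / len(xs)

def var(xs):
    m = mean(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)

print(f"solo: mean={mean(solo):.5f} var={var(solo):.6f}")
print(f"pool: mean={mean(pool):.5f} var={var(pool):.6f}")
```

The pool's per-block payout is small but frequent, so its variance is orders of magnitude below the solo miner's while the mean is only reduced by the fee; the gap between the two variances is what the paper prices.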
What is Fair Pay for Executives? An Information Theoretic Analysis of Wage Distributions
The high pay packages of U.S. CEOs have raised serious concerns about what
would constitute fair pay.
Comment: 16 pages
Paradoxes, prices, and preferences: essays on decision making under risk and economic outcomes
This doctoral thesis contains three theoretical essays on the predictive power of leading descriptive decision theories and one empirical essay on the impact of stock market investors' probability distortion on future economic growth. Chapter 1 provides an extensive summary and motivation of all essays. The first essay (Chapter 2, co-authored with Maik Dierkes) shows that Cumulative Prospect Theory cannot explain both the St. Petersburg paradox and the common ratio version of the Allais paradox simultaneously if probability weighting and value functions are continuous. This result holds independently of parametrizations of the value and probability weighting function. Using both paradoxes as litmus tests, Cumulative Prospect Theory with the majority of popular weighting functions loses its superior predictive power over Expected Utility Theory. However, neo-additive weighting functions (which are discontinuous) do solve the Allais - St. Petersburg
conflict. The second essay in Chapter 3 (co-authored with Maik Dierkes) shows that Salience Theory explains both a low willingness to pay (for example, $12.33) for playing the St. Petersburg lottery (truncated at around $1 trillion) and reasonable preference reversal probabilities around 0.33 in Allais' common ratio paradox. Typical calibrations of other prominent theories (for example, Cumulative Prospect Theory or Expected Utility Theory) cannot solve both paradoxes simultaneously. With unbounded payoffs, however, Salience Theory's ranking-based probability distortion prevents such a solution - regardless of parametrizations. Furthermore, the probability distortion in Salience Theory can be significantly stronger than in Cumulative Prospect Theory, fully overriding the value function's risk attitude. The third essay in Chapter 4 (co-authored with Maik Dierkes) proves that subproportionality as a property of the probability weighting function alone does not automatically imply the common ratio effect in the framework of Cumulative Prospect Theory. Specifically, the issue occurs in the case of equal-mean lotteries because both risk-averse and risk-seeking behavior have to be predicted there. As a solution, we propose three simple properties of the probability weighting function which are sufficient to accommodate the empirical evidence of the common ratio effect for equal-mean lotteries for any S-shaped value function. These are (1) subproportionality, (2) indistinguishability of small probabilities, and (3) an intersection point with the diagonal lower than 0.5. While subproportionality and a fixed point lower than 0.5 are common assumptions in the literature, the property indistinguishability of small probabilities is introduced for the first time. The ratio of decision weights for infinitesimally small probabilities characterizes indistinguishability and is also an informative measure for the curvature of the probability weighting function at zero.
The intuition behind indistinguishability is that, even though the ratio of probabilities stays constant at a moderate level, individuals tend to neglect this relative difference when probabilities get smaller. Finally, the fourth essay in Chapter 5 (co-authored with Maik Dierkes and Stephan Germer) links stock market investors' probability distortion to future economic growth. The empirical challenge is to quantify the optimality of today's decision making to test for its impact on future economic growth. Fortunately, risk preferences can be estimated from stock markets. Using monthly aggregate stock prices from 1926 to 2015, we estimate risk preferences via an asset pricing model with Cumulative Prospect Theory agents and distill a recently proposed probability distortion index. This index negatively predicts GDP growth in-sample and out-of-sample. Predictability is stronger and more reliable over longer horizons. Our results suggest that distorted asset prices may lead to significant welfare losses.
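The weighting-function properties discussed in this abstract can be sketched numerically with the Prelec (1998) weighting function w(p) = exp(-(-ln p)^α), a common parametric choice (the α = 0.65 below is illustrative, not a parameter from the thesis). For a fixed probability ratio, the ratio of decision weights rises toward 1 as probabilities shrink (indistinguishability of small probabilities), and the function crosses the diagonal at 1/e ≈ 0.37, below 0.5:

```python
import math

def prelec(p, alpha=0.65):
    """Prelec (1998) probability weighting: w(p) = exp(-(-ln p)^alpha)."""
    return math.exp(-((-math.log(p)) ** alpha))

LAM = 0.25  # common-ratio factor (hypothetical)

# Subproportionality: for a fixed ratio LAM, the ratio of decision
# weights w(LAM * p) / w(p) grows as p shrinks, so small probabilities
# become progressively harder to tell apart.
for p in (0.8, 0.08, 0.008, 0.0008, 0.00008):
    ratio = prelec(LAM * p) / prelec(p)
    print(f"p={p:<8g} w(LAM*p)/w(p) = {ratio:.4f}")

# Fixed point: the Prelec function intersects the diagonal at p = 1/e,
# matching property (3), an intersection point below 0.5.
print(f"w(1/e) = {prelec(math.exp(-1)):.6f}  (1/e = {math.exp(-1):.6f})")
```

This is only a sanity check of the three properties on one standard parametric family, not a reproduction of the thesis's proofs.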
A Problem Best Put Off Until Tomorrow
Effective Altruism has led a recent renaissance for utilitarian theory. However, it seems that despite its surge in popularity, Effective Altruism is still vulnerable to many of the critiques that plague utilitarianism. The most significant amongst these is the utility monster. I use Longtermism, a mode of thinking that has evolved from Effective Altruism and prioritizes the far future over the present in decision-making processes, as an example of how the unborn millions of the future might constitute a utility monster as a corporate mass. I investigate three main avenues of resolving the utility monster objection to Effective Altruism: reconsidering the use of expected value, adopting temporal discounting, and adopting average utilitarianism. I demonstrate that at best there are significant problems with these responses, and at worst, they completely fail to resolve the utility monster objection. I then conclude that if situations do exist in which the costs to the present do not intuitively justify the benefits to the far future, we must reject utilitarianism altogether.
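The temporal-discounting response mentioned in this abstract can be made concrete with a toy calculation (all numbers hypothetical): under exponential discounting with annual factor δ, even an astronomically large far-future utility is scaled by δ^t and eventually falls below any modest present cost.

```python
DELTA = 0.99            # annual discount factor (hypothetical)
FUTURE_UTILITY = 1e12   # utility accruing to the far future (hypothetical)

# Exponential discounting: present value = FUTURE_UTILITY * DELTA ** years.
for years in (100, 1000, 5000):
    pv = FUTURE_UTILITY * DELTA ** years
    print(f"{years:>5} years: present value = {pv:.3e}")
```

At 5,000 years the discounted value of a trillion units of utility drops below one unit, which is exactly why discounting blunts the far-future utility monster, and also why Longtermists tend to reject it.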
Neuronomics and Rationality
The assumption of rationality is both one of the most important and most controversial assumptions of modern economics. This article discusses what current experimental economics and neuroscience research tells us about the relationship between rationality and the mechanisms of human decision-making. The article explores the meaning of rationality, with a discussion of the distinction between traditional constructivist rationality and more ecological concepts of rationality. The article argues that ecological notions of rationality more accurately describe both human neural mechanisms and a wider variety of human behavior than do constructivist notions of rationality.
Cognitive Models as Simulators: The Case of Moral Decision-Making
To achieve desirable performance, current AI systems often require huge
amounts of training data. This is especially problematic in domains where
collecting data is both expensive and time-consuming, e.g., where AI systems
require having numerous interactions with humans, collecting feedback from
them. In this work, we substantiate the idea of cognitive models as
simulators, which is to have AI systems interact with, and collect feedback
from, cognitive models instead of humans, thereby making their training process
both less costly and faster. Here, we leverage this idea in the context of
moral decision-making, by having reinforcement learning (RL) agents learn about
fairness through interacting with a cognitive model of the Ultimatum Game (UG),
a canonical task in behavioral and brain sciences for studying fairness.
Interestingly, these RL agents learn to rationally adapt their behavior
depending on the emotional state of their simulated UG responder. Our work
suggests that using cognitive models as simulators of humans is an effective
approach for training AI systems, presenting an important way for computational
cognitive science to make contributions to AI.
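A minimal sketch of the idea under simplifying assumptions (the responder model and its emotion-dependent thresholds below are hypothetical illustrations, not the paper's actual cognitive model): an epsilon-greedy bandit proposer learns its Ultimatum Game offer purely from feedback simulated by the responder model, and adapts to the responder's emotional state.

```python
import random

random.seed(1)

PIE = 10  # amount to split in the Ultimatum Game

def responder(offer, emotion="neutral"):
    """Toy responder model (hypothetical): rejects offers below a
    fairness threshold that rises when the responder is 'angry'."""
    threshold = {"calm": 2, "neutral": 3, "angry": 4}[emotion]
    return offer >= threshold  # True = accept

def train_proposer(emotion, episodes=5000, eps=0.1):
    """Epsilon-greedy bandit over the discrete offers 0..PIE."""
    q = [0.0] * (PIE + 1)  # value estimate per offer
    n = [0] * (PIE + 1)    # visit count per offer
    for _ in range(episodes):
        offer = (random.randrange(PIE + 1) if random.random() < eps
                 else max(range(PIE + 1), key=lambda a: q[a]))
        # Proposer keeps PIE - offer if the simulated responder accepts.
        reward = PIE - offer if responder(offer, emotion) else 0
        n[offer] += 1
        q[offer] += (reward - q[offer]) / n[offer]  # incremental mean
    return max(range(PIE + 1), key=lambda a: q[a])

for emotion in ("calm", "neutral", "angry"):
    print(emotion, "-> learned offer:", train_proposer(emotion))
```

The agent never queries a human: every reward comes from the simulated responder, and the learned offer tracks the responder's acceptance threshold, mirroring the adaptation-to-emotional-state result the abstract describes.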
Decision making in Engineering Projects
Even though risk management is a vital aspect of project management, the way that risk-based decisions are taken in projects is not well documented. Economic theory employs the concept of utility and assumes that decision makers are rational. Behavioural economics and prospect theory challenge this idea, making a number of specific claims about how decision-making behaviour deviates from rationality in practice. Based on a focus group discussion with project managers, this research highlights the importance of risk management in underpinning decision making and investigates the extent of rationality and the applicability of prospect theory in an engineering project context. Prospect theory's claims of reference dependence and loss aversion are found to be important, but the claims of diminishing sensitivity and probability weighting appear to be less relevant.
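The reference dependence and loss aversion found to matter here can be illustrated with the Tversky-Kahneman (1992) value function, using the standard published parameters rather than anything estimated from this focus-group study:

```python
def value(x, alpha=0.88, lam=2.25):
    """Tversky-Kahneman (1992) value function over gains and losses
    relative to a reference point (standard published parameters)."""
    return x ** alpha if x >= 0 else -lam * (-x) ** alpha

# Loss aversion: a loss relative to the reference point is weighted
# roughly 2.25 times as heavily as an equal-sized gain.
print("v(+100) =", value(100))
print("v(-100) =", value(-100))
```

Outcomes are coded as gains or losses against a project baseline (reference dependence), and the asymmetry between the two printed values is the loss aversion the focus group's behaviour reflected.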
Intuition as a philosophical reflection at the time of COVID-19 pandemic
By January 2021, the number of Internet users amounted to 4.7 billion, while the social media audience hit the 4.2 billion mark. Two-thirds of the world's population use mobile phones daily. The average Internet user spends 42% of their time online. These figures prove convincingly that the Internet has become an integral part of human life. However, using the Internet increasingly exposes users to the risk of cybercrime. The purpose of the research is to assess the potential of Big Data technologies to combat cyber fraud as a form of cybercrime. The study used the statistical data provided by the Prosecutor General's Office of the Russian Federation and publications in scientific journals. The methodological basis of the research is a combination of general scientific and special scientific methods, with analysis, the statistical method and the systemic approach being the major tools. It was found in the course of the research that fraud constitutes the majority of crimes on the Internet. To counteract it, mobile operators and banks use anti-fraud techniques based on Big Data analysis. The paper provides an overview of services and programmes based on artificial intelligence and Big Data technologies, aimed at detecting and preventing telephone and Internet fraud, used by law enforcement agencies in various countries. The paper concludes that Big Data has changed the vector of law enforcement activity from reactive to proactive.