Fair advice
Millions of investors place their trust in financial advisors who may have incentives to give them bad advice. That trust may indicate that advisors behave more fairly than economic theory predicts. In this paper, we present results from a large-scale experiment studying advice-giving under conflicting interests. We use a binary dictator game as a baseline and transform it into a situation where the dictator gives advice that may or may not be followed. Our results show that people are averse to giving bad advice. When subjects are given the role of advisor, they behave less selfishly, even when the economic incentives and considerations remain the same as in the baseline dictator game.
Do People Recover from Algorithm Aversion? An Experimental Study of Algorithm Aversion over Time
Optimal decision making requires appropriate evaluation of advice. Recent literature reports that algorithm aversion reduces the effectiveness of predictive algorithms. However, it remains unclear how people recover from bad advice given by an otherwise good advisor. Previous work has focused on algorithm aversion at a single time point. We extend this work by examining successive decisions in a time series forecasting task using an online between-subjects experiment (N = 87). Our empirical results do not confirm algorithm aversion immediately after bad advice. The estimated effect suggests increasing algorithm appreciation over time. Our work extends the current knowledge on algorithm aversion with insights into how weight on advice is adjusted over consecutive tasks. Since most forecasting tasks are not one-off decisions, this also has implications for practitioners.
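The "weight on advice" measure referenced in the abstract is a standard quantity in the judge-advisor literature: the fraction of the distance between a judge's initial estimate and the advice that the final estimate moves. A minimal sketch of how it is computed per trial (the trial numbers below are illustrative, not the study's data):

```python
# Weight on advice (WOA): (final - initial) / (advice - initial).
# 0 means the advice was ignored; 1 means it was fully adopted.
def weight_on_advice(initial, advice, final):
    if advice == initial:
        return None  # WOA is undefined when advice equals the initial estimate
    return (final - initial) / (advice - initial)

# Illustrative consecutive forecasts as (initial, advice, final) tuples.
# Tracking WOA across trials shows whether reliance on an algorithmic
# advisor recovers after a bad recommendation.
trials = [(100, 120, 110), (100, 90, 99), (100, 130, 103)]
for initial, advice, final in trials:
    print(round(weight_on_advice(initial, advice, final), 2))  # 0.5, 0.1, 0.1
```

Averaging WOA within blocks of consecutive tasks gives the kind of time-course measure the experiment analyzes.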
Misinformation making a disease outbreak worse: Outcomes compared for influenza, monkeypox and norovirus
Health misinformation can exacerbate infectious disease outbreaks. Especially pernicious advice can be classified as "fake news": manufactured with no respect for accuracy and often integrated with emotive or conspiracy-framed narratives. We built an agent-based model that simulated a circulating contagious disease and the sharing of health advice (classified as useful or harmful) as separate but linked processes. Such advice can influence human risk-taking behavior and therefore the risk of acquiring infection, especially as people in observed social networks are more likely to share bad advice. We test strategies proposed in the recent literature for countering misinformation. Reducing harmful advice from 50% to 40% of circulating information, or making at least 20% of the population unable to share or believe harmful advice, mitigated the influence of bad advice on the disease outbreak outcomes. How feasible it is to make people "immune" to misinformation or to control the spread of harmful advice should be explored.
An agent-based model about the effects of fake news on a norovirus outbreak
Concern about health misinformation is longstanding, especially on the Internet. Using agent-based models, we considered the effects of such misinformation on a norovirus outbreak, and some methods for countering the possible impacts of "good" and "bad" health advice. The work explicitly models the spread of physical disease and of information (both online and offline) as two separate but interacting processes. The models have multiple stochastic elements; repeat model runs were made to identify parameter values that most consistently produced the desired target baseline scenario. Next, parameters were found that most consistently led to a scenario in which outbreak severity was clearly made worse by circulating poor-quality disease-prevention advice. Strategies to counter "fake" health news were tested. A 10% reduction in circulating bad advice, or making at least 20% of people fully resistant to believing in and sharing bad health advice, were effective thresholds for counteracting the negative impacts of bad advice during a norovirus outbreak. How feasible it is to achieve these targets within communication networks (online and offline) should be explored.
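The two separate but interacting processes described in this abstract can be illustrated with a toy simulation. This is not the authors' model: the network structure, state transitions, and all parameter values below are illustrative assumptions only.

```python
# Toy coupled simulation: an SIR-style disease and good/bad advice spreading
# on the same contact network. Agents holding bad advice take more risk and
# are assumed (illustratively) to face higher per-contact infection risk.
import random

random.seed(42)
N = 200                       # number of agents (assumed)
K = 6                         # contacts per agent (assumed)
neighbors = {i: random.sample([j for j in range(N) if j != i], K)
             for i in range(N)}

state = ["S"] * N             # disease state: S, I, or R
advice = [None] * N           # information state: None, "good", or "bad"
state[0] = "I"                # seed one infection
advice[1], advice[2] = "bad", "good"   # seed one piece of each advice type

P_SHARE = 0.3                 # chance an agent shares its advice per step
P_BASE = 0.05                 # baseline per-contact infection probability
RISK = {"bad": 2.0, "good": 0.5, None: 1.0}  # advice modifies infection risk
P_RECOVER = 0.1               # recovery probability per step

for step in range(100):
    new_state, new_advice = state[:], advice[:]
    for i in range(N):
        # Information process: advice spreads regardless of health state.
        if advice[i] and random.random() < P_SHARE:
            j = random.choice(neighbors[i])
            if new_advice[j] is None:
                new_advice[j] = advice[i]
        # Disease process: infected agents expose susceptible contacts.
        if state[i] == "I":
            for j in neighbors[i]:
                if state[j] == "S" and random.random() < P_BASE * RISK[advice[j]]:
                    new_state[j] = "I"
            if random.random() < P_RECOVER:
                new_state[i] = "R"
    state, advice = new_state, new_advice

ever_infected = sum(s != "S" for s in state)
print("agents ever infected:", ever_infected, "of", N)
```

Comparing `ever_infected` across runs that vary the initial share of bad advice, or that mark a fraction of agents as unable to adopt `"bad"` advice, reproduces the style of threshold experiment the abstract describes.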
An agent-based model about the effects of fake news on a norovirus outbreak
Background: Concern about health misinformation is longstanding, especially on the Internet. Methods: Using agent-based models, we considered the effects of such misinformation on a norovirus outbreak, and some methods for countering the possible impacts of "good" and "bad" health advice. The work explicitly models the spread of physical disease and of information (both online and offline) as two separate but interacting processes. The models have multiple stochastic elements; repeat model runs were made to identify parameter values that most consistently produced the desired target baseline scenario. Next, parameters were found that most consistently led to a scenario in which outbreak severity was clearly made worse by circulating poor-quality disease-prevention advice. Strategies to counter "fake" health news were tested. Results: Reducing bad advice to 30% of total information, or making at least 30% of people fully resistant to believing in and sharing bad health advice, were effective thresholds for counteracting the negative impacts of bad advice during a norovirus outbreak. Conclusion: How feasible it is to achieve these targets within communication networks (online and offline) should be explored.
Show and Tell
"Show don't tell." Teachers preach these words. Style guides endorse them. And you'd be hard pressed to find any editor or law firm partner who hasn't offered them as feedback in the last year, month, week, maybe even day. There's only one problem: "Show don't tell" is bad advice. Or at least, it is incomplete advice.