Predictions as statements and decisions
Nothing But the Truth? Experiments on Adversarial Competition, Expert Testimony, and Decision Making
Many scholars debate whether a competition between experts in legal, political, or economic contexts elicits truthful information and, in turn, enables people to make informed decisions. Thus, we analyze experimentally the conditions under which competition between experts induces the experts to make truthful statements and enables jurors listening to these statements to improve their decisions. Our results demonstrate that, contrary to game theoretic predictions and contrary to critics of our adversarial legal system, competition induces enough truth telling to allow jurors to improve their decisions. Then, when we impose additional institutions (such as penalties for lying or the threat of verification) on the competing experts, we observe even larger improvements in the experts' propensity to tell the truth and in jurors' decisions. We find similar improvements when the competing experts are permitted to exchange reasons for why their statements may be correct
Financial Statement Comparability and Information Risk
This study extends the prior research on comparability benefits and examines the relation between financial statement comparability and information risk. I expect that higher accounting comparability of financial statements enhances the utility of accounting data for investors by helping them to identify the similarities and differences between economic events, and thus decreases the information risk of firms with higher accounting comparability. Consistent with my predictions, I find that firms with higher financial statement comparability have lower information risk, and this effect is more pronounced for firms with high earnings volatility. The findings suggest that accounting comparability helps users of financial statements better understand firms' accounting data and therefore increases the usefulness of financial statement information; it helps investors make better judgements of firms' performances and, as a result, make better investment decisions
Predicting Audit Opinion by a new Metaheuristic Algorithm: Water Cycle Algorithm
An auditor evaluates whether the financial statements that firms issue publicly are presented fairly and are free from material misstatement. The audit report is a written letter containing independent verification of the quality of the financial statements used for making economic decisions. Hence, the issuance of such a report can transmit news and information about the firm and enhance the degree of confidence in its financial statements. This study predicts the audit opinions of firms listed on the Tehran Stock Exchange during 2018-2020 with a new metaheuristic algorithm named the Water Cycle Algorithm (WCA) and compares its results with one of the most popular methods, logistic regression (LG). Twenty-four variables were extracted from the literature and used for this prediction, and four evaluation criteria were used to compare the predictions of the two methods. According to the findings, the WCA's superiority on these criteria was confirmed in comparison to LG. Since the WCA was more appropriate, users of financial reports can use it to predict the type of audit opinion in unaudited interim financial statements, and auditors can use it as a quality control tool when evaluating and accepting clients and achieving an acceptable level of audit risk
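The abstract above does not name its four evaluation criteria, so as a purely hypothetical sketch (the labels, predictions, and criteria below are invented, and the WCA itself is not reproduced), a comparison of two classifiers on four common criteria could look like:

```python
# Hypothetical sketch: comparing two audit-opinion classifiers on four common
# evaluation criteria (accuracy, precision, recall, F1). The paper's actual
# criteria, variables, and data are not reproduced here.

def evaluate(y_true, y_pred, positive=1):
    """Return (accuracy, precision, recall, f1) for binary labels."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p == positive)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t != positive and p == positive)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p != positive)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t != positive and p != positive)
    accuracy = (tp + tn) / len(y_true)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return accuracy, precision, recall, f1

# Invented toy labels: 1 = unqualified opinion, 0 = modified opinion
y_true   = [1, 1, 0, 1, 0, 0, 1, 0]
wca_pred = [1, 1, 0, 1, 0, 1, 1, 0]   # hypothetical WCA output
lg_pred  = [1, 0, 0, 1, 1, 1, 1, 0]   # hypothetical LG output

print("WCA:", evaluate(y_true, wca_pred))
print("LG: ", evaluate(y_true, lg_pred))
```

The method with the higher values across all four criteria would then be judged superior, as the study reports for the WCA.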
Competition in the Courtroom: When Does Expert Testimony Improve Jurors' Decisions?
Many scholars lament the increasing complexity of jury trials and question whether the testimony of competing experts helps unsophisticated jurors to make informed decisions. In this article, we analyze experimentally the effects that the testimony of competing experts has on (1) sophisticated versus unsophisticated subjects' decisions and (2) subjects' decisions on difficult versus easy problems. Our results demonstrate that competing expert testimony, by itself, does not help unsophisticated subjects to behave as though they are sophisticated, nor does it help subjects make comparable decisions on difficult and easy problems. When we impose additional institutions (such as penalties for lying or a threat of verification) on the competing experts, we observe such dramatic improvements in unsophisticated subjects' decisions that the gap between their decisions and those of sophisticated subjects closes. We find similar results when the competing experts exchange reasons for why their statements may be correct. However, additional institutions and the experts' exchange of reasons are less effective at closing the gap between subjects' decisions on difficult versus easy problems
On Human Predictions with Explanations and Predictions of Machine Learning Models: A Case Study on Deception Detection
Humans are the final decision makers in critical tasks that involve ethical and legal concerns, ranging from recidivism prediction, to medical diagnosis, to fighting against fake news. Although machine learning models can sometimes achieve impressive performance in these tasks, these tasks are not amenable to full automation. To realize the potential of machine learning for improving human decisions, it is important to understand how assistance from machine learning models affects human performance and human agency.
In this paper, we use deception detection as a testbed and investigate how we can harness explanations and predictions of machine learning models to improve human performance while retaining human agency. We propose a spectrum between full human agency and full automation, and develop varying levels of machine assistance along the spectrum that gradually increase the influence of machine predictions. We find that without showing predicted labels, explanations alone slightly improve human performance in the end task. In comparison, human performance is greatly improved by showing predicted labels (>20% relative improvement) and can be further improved by explicitly suggesting strong machine performance. Interestingly, when predicted labels are shown, explanations of machine predictions induce a similar level of accuracy as an explicit statement of strong machine performance. Our results demonstrate a tradeoff between human performance and human agency and show that explanations of machine predictions can moderate this tradeoff.
Comment: 17 pages, 19 figures, in Proceedings of ACM FAT* 2019, dataset & demo available at https://deception.machineintheloop.co
Stated belief and play in normal form games
Using data on one-shot games, we investigate the assumption that players respond to underlying expectations about their opponent's behavior. In our laboratory experiments, subjects play a set of 14 two-person 3x3 games, and state first-order beliefs about their opponent's behavior. The sets of responses in the two tasks are largely inconsistent. Rather, we find evidence that the subjects perceive the games differently when they (i) choose actions, and (ii) state beliefs: they appear to pay more attention to the opponent's incentives when they state beliefs than when they play the games. On average, they fail to best respond to their own stated beliefs in almost half of the games. The inconsistency is confirmed by estimates of a unified statistical model that jointly uses the actions and the belief statements. There, we can control for noise, and formulate a statistical test that rejects consistency. Effects of the belief elicitation procedure on subsequent actions are mostly insignificant
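The best-response test applied to each subject can be illustrated with a minimal sketch (the 3x3 payoff matrix and stated belief below are invented, not taken from the paper): an action is a best response to a stated belief if it maximizes the expected payoff under that belief.

```python
# Minimal sketch of a best-response check in a 3x3 game. The payoff matrix
# and the stated first-order belief are invented for illustration.

def expected_payoffs(payoffs, belief):
    """payoffs[i][j] = row player's payoff for own action i vs opponent action j;
    belief[j] = stated probability that the opponent plays action j."""
    return [sum(p * b for p, b in zip(row, belief)) for row in payoffs]

def is_best_response(payoffs, belief, action):
    """True if `action` maximizes expected payoff under the stated belief."""
    ev = expected_payoffs(payoffs, belief)
    return ev[action] == max(ev)

payoffs = [[3, 0, 1],
           [1, 2, 2],
           [0, 4, 0]]
belief = [0.2, 0.5, 0.3]  # stated probabilities over the opponent's actions

# Under this belief, action 2 yields the highest expected payoff, so a
# subject who plays action 0 fails to best respond to their own stated belief.
print(is_best_response(payoffs, belief, 2))
print(is_best_response(payoffs, belief, 0))
```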
Bringing the Four-Eyes-Principle to the Lab
The "Four-Eyes-Principle" (4EP) is considered one of the most potent measures against corruption, although it lacks both theoretical and empirical justification. We show in a laboratory experiment using a standard corruption game that introducing the 4EP increases corrupt behaviour, casting doubt on its usefulness as a general recommendation. Combining data on final choices with observations of the decision-making processes in teams, including a content analysis of exchanged messages, provides insights into the dynamics of team decision making and shows that the individual profit-maximizing motive dominates group decision making and crowds out altruistic arguments
Goal-based structuring in a recommender system
Recommender systems help people to find information that is interesting to them. However, current recommendation techniques only address the user's short-term and long-term interests, not their immediate interests. This paper describes a method to structure information (with or without using recommendations) taking into account the users' immediate interests: a goal-based structuring method. Goal-based structuring is based on the fact that people experience certain gratifications from using information, which should match with their goals. An experiment using an electronic TV guide shows that structuring information using a goal-based structure makes it easier for users to find interesting information, especially if the goals are used explicitly; this is independent of whether recommendations are used or not. It also shows that goal-based structuring has more influence on how easy it is for users to find interesting information than recommendations
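The structuring idea described above can be sketched in a few lines (the goals and TV programmes below are invented examples, not the paper's electronic TV guide): items are grouped by the immediate user goal they serve, so a user browsing with a goal in mind sees only matching items.

```python
# Hypothetical sketch of goal-based structuring: group TV-guide items by the
# viewer goal they serve. Goals and programmes are invented examples.
from collections import defaultdict

programmes = [
    {"title": "Evening News", "goal": "stay informed"},
    {"title": "Nature Documentary", "goal": "learn"},
    {"title": "Sitcom Marathon", "goal": "relax"},
    {"title": "Science Explained", "goal": "learn"},
]

def structure_by_goal(items):
    """Group item titles under the goal each item serves."""
    groups = defaultdict(list)
    for item in items:
        groups[item["goal"]].append(item["title"])
    return dict(groups)

guide = structure_by_goal(programmes)
print(guide["learn"])  # ['Nature Documentary', 'Science Explained']
```

Recommendations, when available, could then be applied within each goal group rather than replacing the structure.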