Player experience and deceptive expectations of difficulty adaptation in digital games
Increasingly, digital games include adaptive features that adjust the level of difficulty to match the skills of individual players. The intention is to improve and prolong the player experience by allowing the player to feel challenged without being overwhelmed into repeated failure and frustration. Previous work has shown that player experience is indeed improved by such adaptations, but also that it can be improved by simply claiming an adaptation is present even when it is not. It is therefore possible that claims about adaptations and the actual adaptations interact, failing to produce the intended outcomes for players or, worse, disappointing them. This paper reports on two studies that experimentally investigated the interaction between game adaptations and player information about adaptations on the player experience, specifically the sense of immersion in the game. For this, two games were developed using two different kinds of adaptations to adjust difficulty based on players' performance in the game. Participants were provided with information about game adaptations independently of whether the adaptations were present. The results suggest that players felt more immersed in the game when told that the game adapts to them, regardless of whether the adaptation was actually present. This effect was observed in both games despite their different adaptations, and it remained prominent even during longer gaming sessions. These findings demonstrate that players' knowledge of adaptations influences their experience independently of the adaptations themselves. In this particular context, the knowledge reinforced the experience of the adaptations. This suggests that, at least in some circumstances, developers need not be concerned about negative effects of telling players about in-game adaptations.
Smile asymmetries and reputation as reliable indicators of likelihood to cooperate: An evolutionary analysis
Cooperating with individuals whose altruism is not motivated by genuine prosocial emotions could have been costly in ancestral division-of-labour partnerships. How do humans "know" whether or not an individual has the prosocial emotions committing future cooperation? Frank (1988) has hypothesized two pathways for altruist detection: (a) facial expressions of emotion signalling character, and (b) gossip regarding the target individual's reputation. Detecting non-verbal cues signalling commitment to cooperate may be one way to avoid the costs of exploitation. Spontaneous smiles while cooperating may be reliable index cues because of the physiological constraints on the neural pathways mediating involuntary emotional expressions. Specifically, it is hypothesized that individuals whose help is mediated by genuine sympathy will express involuntary smiles (which are observably different from posed smiles). To investigate this idea, 38 participants played dictator games (i.e. a unilateral resource allocation task) against cartoon faces with a benevolent emotional expression (i.e. concern furrows and a smile). The faces were presented with information regarding reputation (e.g. descriptions of an altruistic character vs. a non-altruistic character). Half of the sample played against icons with symmetrical smiles (representing a spontaneous smile), while the other half played against asymmetrically smiling icons (representing a posed smile). Icons described as having altruistic motives received more resources than icons described as self-interested helpers. Faces with symmetrical smiles received more resources than faces with asymmetrical smiles. These results suggest that reputation and smile asymmetry influence the likelihood of cooperation and thus may be reliable cues to altruism. These cues may allow altruists to garner more resources in division-of-labour situations.
Perverse effects of other-referenced performance goals in an information exchange context
A values-centered leadership model comprising leader stakeholder and economic values, follower values congruence, and responsible leadership outcomes was tested using data from 122 organizational leaders and 458 of their direct reports. Alleviating same-source bias concerns in leadership survey research, follower ratings of leadership style and follower ratings of values congruence and responsible leadership outcomes were collected from separate sources via the split-sample methodology. Results of structural equation modeling analyses demonstrated that leader stakeholder values predicted transformational leadership, whereas leader economic values were associated with transactional leadership. Follower values congruence was strongly associated with transformational leadership, unrelated to transactional leadership, and partially mediated the relationships between transformational leadership and both follower organizational citizenship behaviors and follower beliefs in the stakeholder view of corporate social responsibility. Implications for responsible leadership and transformational leadership theory, practice, and future research are discussed.
Are Liars Ethical? On the Tension between Benevolence and Honesty
We demonstrate that some lies are perceived to be more ethical than honest statements. Across three studies, we find that individuals who tell prosocial lies, lies told with the intention of benefiting others, are perceived to be more moral than individuals who tell the truth. In Study 1, we compare altruistic lies to selfish truths. In Study 2, we introduce a stochastic deception game to disentangle the influence of deception, outcomes, and intentions on perceptions of moral character. In Study 3, we demonstrate that moral judgments of lies are sensitive to the consequences of lying for the deceived party, but insensitive to the consequences of lying for the liar. Both honesty and benevolence are essential components of moral character. We find that when these values conflict, benevolence may be more important than honesty. More broadly, our findings suggest that the moral foundation of care may be more important than the moral foundation of justice.
Prosocial Lies: When Deception Breeds Trust
Philosophers, psychologists, and economists have long asserted that deception harms trust. We challenge this claim. Across four studies, we demonstrate that deception can increase trust. Specifically, prosocial lies increase the willingness to pass money in the trust game, a behavioral measure of benevolence-based trust. In Studies 1a and 1b, we find that altruistic lies increase trust when deception is directly experienced and when it is merely observed. In Study 2, we demonstrate that mutually beneficial lies also increase trust. In Study 3, we disentangle the effects of intentions and deception; intentions are far more important than deception for building benevolence-based trust. In Study 4, we examine how prosocial lies influence integrity-based trust. We introduce a new economic game, the Rely-or-Verify game, to measure integrity-based trust. Prosocial lies increase benevolence-based trust, but harm integrity-based trust. Our findings expand our understanding of deception and deepen our insight into the mechanics of trust.
Navigating the Tension Between Benevolence and Honesty: Essays on the Consequences of Prosocial Lies
Many of our most common and difficult ethical dilemmas involve balancing honesty and benevolence. For example, when we deliver unpleasant news, such as negative feedback or terminal prognoses, we face an implicit tradeoff between being completely honest and being completely kind. Using a variety of research methods, in both the laboratory and the field, I study how individuals navigate this tension. Each chapter in this dissertation addresses the tension between honesty and benevolence at a different level. In Chapters One and Two, I examine how honesty and benevolence influence moral judgment. In Chapter Three, I explore how honesty and benevolence influence interpersonal trust. In Chapter Four, I explore how honesty and benevolence influence psychological well-being. Finally, in Chapter Five, I examine how different stakeholders view tradeoffs between honesty and benevolence in an important domain: healthcare. Across these chapters, I identify three key themes. First, for moral judgment and interpersonal trust, benevolence is often more important than honesty. As a result, those who prioritize benevolence over honesty by telling prosocial lies, lies that are intended to help others, are deemed to be moral and trustworthy. Second, despite philosophers' assumption that individuals would rarely consent to deception, I demonstrate that individuals frequently want to be deceived. Individuals want others to deceive them when it protects them from harm. This desire manifests itself in systematic circumstances and during individuals' most fragile moments. Third, honesty and benevolence are associated with interpersonal and intrapersonal tradeoffs. Although benevolence seems to be more central for interpersonal judgments and relationships, honesty seems to be more central for creating personal meaning. Throughout these chapters, I discuss the implications of these findings for the study of ethics, organizational behavior, and interpersonal communication.
Strange Loops: Apparent versus Actual Human Involvement in Automated Decision-Making
The era of AI-based decision-making fast approaches, and anxiety is mounting about when, and why, we should keep "humans in the loop" ("HITL"). Thus far, commentary has focused primarily on two questions: whether, and when, keeping humans involved will improve the results of decision-making (making them safer or more accurate), and whether, and when, non-accuracy-related values (legitimacy, dignity, and so forth) are vindicated by the inclusion of humans in decision-making. Here, we take up a related but distinct question, which has eluded the scholarship thus far: does it matter if humans appear to be in the loop of decision-making, independent from whether they actually are? In other words, what is at stake in the disjunction between whether humans in fact have ultimate authority over decision-making versus whether humans merely seem, from the outside, to have such authority?
Our argument proceeds in four parts. First, we build our formal model, enriching the HITL question to include not only whether humans are actually in the loop of decision-making, but also whether they appear to be so. Second, we describe situations in which the actuality and appearance of HITL align: those that seem to involve human judgment and actually do, and those that seem automated and actually are. Third, we explore instances of misalignment: situations in which systems that seem to involve human judgment actually do not, and situations in which systems that hold themselves out as automated actually rely on humans operating "behind the curtain." Fourth, we examine the normative issues that result from HITL misalignment, arguing that it challenges individual decision-making about automated systems and complicates collective governance of automation.
Trust and deception in multi-agent trading systems: a logical viewpoint
Trust and deception have been of concern to researchers since the earliest research into multi-agent trading systems (MATS). In an open trading environment, trust can be established by external mechanisms, e.g. secret keys or digital signatures, or by internal mechanisms, e.g. learning and reasoning from experience. However, in a MATS where distrust exists among the agents and deception might be used between them, recognizing and removing fraud and deception becomes a significant issue for maintaining a trustworthy MATS environment. This paper proposes an architecture for a MATS and explores how fraud and deception change the trust required in a multi-agent trading system and its environment. It also illustrates several forms of logical reasoning that involve trust and deception in a MATS. The research is significant for deception recognition and trust sustainability in e-business and e-commerce.
Paternalist deception in the Lotus Sūtra: A normative assessment
The Lotus Sūtra repeatedly asserts the moral permissibility, in certain circumstances, of deceiving others for their own benefit. The examples it uses to illustrate this view have the features of weak paternalism, but the real-world applications it endorses would today be considered strong paternalism. We can explain this puzzling feature of the text by noting that, according to Mahayana Buddhists, normal, ordinary people are so irrational that they are relevantly similar to the insane. Kant's determined anti-paternalism, by contrast, relies on an obligation to see others as rational, which can be read in several ways. Recent work in psychology provides support for the Lotus Sūtra's philosophical anthropology while undermining the plausibility of Kant's version. But this result does not necessarily lead to an endorsement of political paternalism, since politicians are not qualified to wield such power. Some spiritual teachers, however, may be morally permitted to benefit their students by deceiving them.