
    Perverse effects of other-referenced performance goals in an information exchange context

    A values-centered leadership model comprising leader stakeholder and economic values, follower values congruence, and responsible leadership outcomes was tested using data from 122 organizational leaders and 458 of their direct reports. To alleviate same-source bias concerns in leadership survey research, follower ratings of leadership style and follower ratings of values congruence and responsible leadership outcomes were collected from separate sources via a split-sample methodology. Results of structural equation modeling analyses demonstrated that leader stakeholder values predicted transformational leadership, whereas leader economic values were associated with transactional leadership. Follower values congruence was strongly associated with transformational leadership, unrelated to transactional leadership, and partially mediated the relationships between transformational leadership and both follower organizational citizenship behaviors and follower beliefs in the stakeholder view of corporate social responsibility. Implications for responsible leadership and transformational leadership theory, practice, and future research are discussed.

    Are Liars Ethical? On the Tension between Benevolence and Honesty

    We demonstrate that some lies are perceived to be more ethical than honest statements. Across three studies, we find that individuals who tell prosocial lies, lies told with the intention of benefiting others, are perceived to be more moral than individuals who tell the truth. In Study 1, we compare altruistic lies to selfish truths. In Study 2, we introduce a stochastic deception game to disentangle the influence of deception, outcomes, and intentions on perceptions of moral character. In Study 3, we demonstrate that moral judgments of lies are sensitive to the consequences of lying for the deceived party, but insensitive to the consequences of lying for the liar. Both honesty and benevolence are essential components of moral character. We find that when these values conflict, benevolence may be more important than honesty. More broadly, our findings suggest that the moral foundation of care may be more important than the moral foundation of justice.

    Prosocial Lies: When Deception Breeds Trust

    Philosophers, psychologists, and economists have long asserted that deception harms trust. We challenge this claim. Across four studies, we demonstrate that deception can increase trust. Specifically, prosocial lies increase the willingness to pass money in the trust game, a behavioral measure of benevolence-based trust. In Studies 1a and 1b, we find that altruistic lies increase trust when deception is directly experienced and when it is merely observed. In Study 2, we demonstrate that mutually beneficial lies also increase trust. In Study 3, we disentangle the effects of intentions and deception; intentions are far more important than deception for building benevolence-based trust. In Study 4, we examine how prosocial lies influence integrity-based trust. We introduce a new economic game, the Rely-or-Verify game, to measure integrity-based trust. Prosocial lies increase benevolence-based trust, but harm integrity-based trust. Our findings expand our understanding of deception and deepen our insight into the mechanics of trust.

    Navigating the Tension Between Benevolence and Honesty: Essays on the Consequences of Prosocial Lies

    Many of our most common and difficult ethical dilemmas involve balancing honesty and benevolence. For example, when we deliver unpleasant news, such as negative feedback or terminal prognoses, we face an implicit tradeoff between being completely honest and being completely kind. Using a variety of research methods, in both the laboratory and the field, I study how individuals navigate this tension. Each chapter in this dissertation addresses the tension between honesty and benevolence at a different level. In Chapters One and Two, I examine how honesty and benevolence influence moral judgment. In Chapter Three, I explore how honesty and benevolence influence interpersonal trust. In Chapter Four, I explore how honesty and benevolence influence psychological well-being. Finally, in Chapter Five, I examine how different stakeholders view tradeoffs between honesty and benevolence in an important domain: healthcare. Across these chapters, I identify three key themes. First, for moral judgment and interpersonal trust, benevolence is often more important than honesty. As a result, those who prioritize benevolence over honesty by telling prosocial lies, lies that are intended to help others, are deemed to be moral and trustworthy. Second, despite philosophers’ assumption that individuals would rarely consent to deception, I demonstrate that individuals frequently want to be deceived. Individuals want others to deceive them when it protects them from harm. This desire manifests itself in systematic circumstances and during individuals’ most fragile moments. Third, honesty and benevolence are associated with interpersonal and intrapersonal tradeoffs. Although benevolence seems to be more central for interpersonal judgments and relationships, honesty seems to be more central for creating personal meaning. Throughout these chapters, I discuss the implications of these findings for the study of ethics, organizational behavior, and interpersonal communication.

    Strange Loops: Apparent versus Actual Human Involvement in Automated Decision-Making

    The era of AI-based decision-making fast approaches, and anxiety is mounting about when, and why, we should keep “humans in the loop” (“HITL”). Thus far, commentary has focused primarily on two questions: whether, and when, keeping humans involved will improve the results of decision-making (making them safer or more accurate), and whether, and when, non-accuracy-related values—legitimacy, dignity, and so forth—are vindicated by the inclusion of humans in decision-making. Here, we take up a related but distinct question, which has eluded the scholarship thus far: does it matter if humans appear to be in the loop of decision-making, independent from whether they actually are? In other words, what is at stake in the disjunction between whether humans in fact have ultimate authority over decision-making versus whether humans merely seem, from the outside, to have such authority? Our argument proceeds in four parts. First, we build our formal model, enriching the HITL question to include not only whether humans are actually in the loop of decision-making, but also whether they appear to be so. Second, we describe situations in which the actuality and appearance of HITL align: those that seem to involve human judgment and actually do, and those that seem automated and actually are. Third, we explore instances of misalignment: situations in which systems that seem to involve human judgment actually do not, and situations in which systems that hold themselves out as automated actually rely on humans operating “behind the curtain.” Fourth, we examine the normative issues that result from HITL misalignment, arguing that it challenges individual decision-making about automated systems and complicates collective governance of automation.

    Coordination in software agent systems


    Trust and deception in multi-agent trading systems: a logical viewpoint

    Trust and deception have been of concern to researchers since the earliest research into multi-agent trading systems (MATS). In an open trading environment, trust can be established by external mechanisms, e.g. secret keys or digital signatures, or by internal mechanisms, e.g. learning and reasoning from experience. However, in a MATS where distrust exists among the agents and deception might be used between them, recognizing and removing fraud and deception becomes a significant issue for maintaining a trustworthy MATS environment. This paper proposes an architecture for a multi-agent trading system and explores how fraud and deception change the trust required in such a system or environment. It also illustrates several forms of logical reasoning that involve trust and deception in a MATS. The research is of significance for deception recognition and trust sustainability in e-business and e-commerce.

    Paternalist deception in the Lotus Sūtra: A normative assessment

    The Lotus Sūtra repeatedly asserts the moral permissibility, in certain circumstances, of deceiving others for their own benefit. The examples it uses to illustrate this view have the features of weak paternalism, but the real-world applications it endorses would today be considered strong paternalism. We can explain this puzzling feature of the text by noting that according to Mahayana Buddhists, normal, ordinary people are so irrational that they are relevantly similar to the insane. Kant's determined anti-paternalism, by contrast, relies on an obligation to see others as rational, which can be read in several ways. Recent work in psychology provides support for the Lotus Sūtra's philosophical anthropology while undermining the plausibility of Kant's version. But this result does not necessarily lead to an endorsement of political paternalism, since politicians are not qualified to wield such power. Some spiritual teachers, however, may be morally permitted to benefit their students by deceiving them.