    From moral concern to moral constraint

    Current research into the neural basis of moral decision-making endorses a common theme: The mechanisms we use to make value-guided decisions concerning each other are remarkably similar to those we use to make value-guided decisions for ourselves. In other words, moral decisions are just another kind of ordinary decision. Yet, there is something unsettling about this conclusion: We often feel as if morality places an absolute constraint on our behavior, in a way unlike ordinary personal concerns. What is the neural and psychological basis of this feeling of moral constraint? Several models are considered and outstanding questions highlighted.

    Moral decisions are hard to make and fun to study. Suppose a woman notices $20 lying by the shoes of a stranger at the front of a checkout line. Her eyes linger on the orphaned bill. Will she point out the money to the customer who may have dropped it, or wait a moment until it can be discreetly pocketed? Watching this moment of uncertainty imparts a vicarious thrill because, to varying degrees, her competing motives are shared by us all. Psychology and neuroscience have much to say about her motive to keep the money. In fact, the integration of computational, neurobiological and psychological models to explain value-guided learning and choice stands out as one of the foremost accomplishments of contemporary behavioral research.

    Attempts to define morality typically focus on two candidate features. The first is concern for others' welfare, which is emphasized in utilitarian or consequentialist philosophical theories. The second key feature is the concept of an absolute constraint, rule or law. This approach finds its philosophical apogee in the work of Kant. Following the lead of some philosophers, we could seek to refine a single and exact definition of the moral domain. This is a promising avenue if we wish to spend centuries gridlocked in intractable and often arcane debate. Recently, however, psychologists have charted a different course by arguing that moral cognition comprises multiple distinct but interrelated mechanisms.

    Thus, research into the neuroscience of morality faces at least two big questions. First, what mechanisms acquire and encode moral concern: the value of others' welfare, ultimately allowing us to make decisions that flexibly trade off between interests when they collide? Second, what mechanisms acquire and encode the sense of moral constraint: the representation and value of a moral rule, or law? We have an impressive grip on the first issue, but are startlingly empty-handed on the second.

    Moral concern

    There are two principal literatures on the neuroscience of other-oriented concern. One interrogates the neural substrates of the perception of pain or reward in others, that is, the basis of empathy. The second interrogates the neural substrates of decision-making on behalf of others. Both of these literatures converge on a common conclusion: The mechanisms we use to encode value and make decisions for ourselves are largely overlapping with those we use for others. The affective experience of pain or otherwise unpleasant experience activates a characteristic network of brain regions including anterior cingulate cortex and anterior insula, along with brainstem and regions of the cerebellum.
    Numerous studies show a similar (although not perfectly identical) network of activation when people observe pain in others.

    Moral constraint

    In contrast to the well-developed literature on welfare concerns, we know little about how the brain represents moral rules as absolute constraints on behavior. Current research does, however, offer two promising approaches. One possibility is that our sense of inviolable moral rules comes from a unique kind of value representation principally designed to guide our own decision-making. Another possibility is that moral rules are grounded in psychological mechanisms principally designed to judge the actions of others.

    Model-free moral values

    A dominant theme of research in the last decade is that our sense of moral constraint derives from a unique kind of value representation: that strong rules are grounded in strong feelings. According to one early and influential proposal, the dual process model, controlled cognitive processes are responsible for utilitarian-like assignment of value to welfare, while affective processes are responsible for the sense of inviolable constraint on 'up-close and personal' harms. Two recent proposals attempt to translate this insight into the language of contemporary computational cognitive models of decision-making.

    A key test for model-based versus model-free control is to assess whether a person continues to value an action even when its connection to reward has been broken. A model-based system immediately devalues the action because it plays no productive role in maximizing expected outcomes, whereas a model-free learning system continues to assign value to the action based on its prior history of reward. In this sense, model-free algorithms assign value directly to actions, whereas model-based algorithms assign value to outcomes and then derive action values via online planning. Many moral norms exhibit this signature property of model-free valuation. For instance, some American travelers feel compelled to tip foreign waiters 20% even when there is no such local norm. Presumably this does not reflect an underlying concern for the relevant outcome (well-funded foreign waitstaffs), but rather the habit-like internalization of an action-based value: Good service requires a tip. Indeed, evidence suggests that such altruistic actions are supported by internalized norms deployed automatically.

    Research on habit learning has centered largely on the computational role of dopaminergic targets in the basal ganglia. Current neuropsychological research provides little association, however, between abnormal moral behavior and insult to the basal ganglia. Moreover, motor habits triggered by the basal ganglia are typically not accompanied by the subjective experience of value in the way that morals are: Tying your shoes feels automatic, but not desperately important. A more likely candidate for the encoding of action-based moral values is the ventromedial prefrontal cortex (vmPFC).

    There is, however, one major shortcoming of using model-free value assignment as a basis for understanding our sense of 'moral constraint' as inviolable or absolute: These values are designed precisely in order to trade off against each other.
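    To make the model-free/model-based distinction concrete, here is a minimal sketch of the outcome-devaluation test described above. It is purely illustrative: the tipping scenario, reward size, learning rate and trial count are assumptions, not taken from any study reviewed here.

```python
# Minimal sketch: outcome devaluation separates model-free from model-based
# valuation. The scenario (tipping), reward size, learning rate and trial
# count are all illustrative assumptions.

ALPHA = 0.5                          # learning rate (assumed)

# Training: the action 'tip' reliably leads to the outcome 'norm_met',
# which is initially worth 1.0.
outcome_value = {"norm_met": 1.0}    # learned value of outcomes
transition = {"tip": "norm_met"}     # learned world model: action -> outcome
q_model_free = 0.0                   # value cached directly on the action

for _ in range(20):
    reward = outcome_value[transition["tip"]]
    q_model_free += ALPHA * (reward - q_model_free)   # prediction-error update

# Devaluation: the outcome no longer carries any reward.
outcome_value["norm_met"] = 0.0

# A model-based agent derives the action's value by planning over its world
# model, so the devaluation is reflected immediately.
q_model_based = outcome_value[transition["tip"]]

print(f"model-based value after devaluation: {q_model_based:.2f}")   # 0.00
print(f"model-free value after devaluation:  {q_model_free:.2f}")    # ~1.00
```

    Note that even the persistent model-free value is just a cached scalar: nothing in the representation marks an action as forbidden rather than merely undesirable, which is the shortcoming at issue here.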
    Put colloquially, the application of reinforcement learning principles to the moral domain can help us to understand why murder always feels highly undesirable, but it is challenged to explain why murder would ever feel strictly forbidden (for instance, when the alternative is associated with an even less desirable model-free value). There are three ways out of this dilemma. One is to insist that the value assignments to moral concerns are simply very, very strong, so strong that they feel like inviolable constraints. The second is to suppose that a moral rule ('murder is wrong') feels inviolable not because the value assigned to it is extremely great, but rather because the content of the rule takes a law-like form. The third possibility is that our sense of inviolability comes from somewhere else entirely. These possibilities are not strictly exclusive of each other, and each deserves further research.

    Third-party evaluation

    One of the reasons that moral rules might feel inviolable is because we apply them universally: not just to ourselves, but to others. A rich tradition of psychological research maps the criteria we use to decide whether others have acted rightly or wrongly. Two particular criteria play a foundational role: Who did what (i.e., the causal role that a person plays in bringing about harm), and whether they meant to (i.e., their intent or foresight of that harm). Intent-based moral judgment depends on a network of brain regions that have long been implicated in mental state reasoning, including medial prefrontal cortex (MPFC), posterior cingulate and right and left temporoparietal junction (TPJ). In contrast, research into the neural basis of the 'harm/causation' criterion is underdeveloped. At least two studies suggest that the amygdala may play a key role in encoding the negative value associated with harmful outcomes [5,44]. It is less clear what neural substrates contribute to the perception of moral responsibility: The causal link between an agent and a harm that supports our sense of condemnation and revenge. Some evidence indicates a role for the frontoparietal control network, and especially the dorsolateral prefrontal cortex.

    A distinct line of research, however, provides some support for the application of common mechanisms to third-party judgment and first-person decision-making. In addition to condemning harmful action, people also condemn actions that are unfair. Studies of responder behavior in the ultimatum game find that the anterior insula (AI) responds more to unfair offers than to fair offers, and that responses of a greater magnitude are associated with an increased likelihood of spiteful punishment.

    Moral rules

    How can cognitive neuroscience address the origin and application of moral rules? As this review attests, we have made great progress by treating non-moral cognition as a blueprint that exhaustively details the constituent mechanisms available to moral cognition. But, we may need to think of non-moral cognition not as a complete blueprint, but instead as an underlying scaffold: A framework of common elements that supports a structure of more unique design. What kind of structure are we looking for? We have tended to take as our object of study the moral decision: A determination of what to do, whom to trust, what is wrong, and so forth. Perhaps it is an apt moment to introduce an additional object of study: moral rules. This would position us to understand morality not only as a collection of concerns, but also as a source of constraint.

    Impulsive Choice and Altruistic Punishment Are Correlated and Increase in Tandem With Serotonin Depletion

    Human cooperation may partly depend on the presence of individuals willing to incur personal costs to punish noncooperators. The psychological factors that motivate such 'altruistic punishment' are not fully understood; some have argued that altruistic punishment is a deliberate act of norm enforcement that requires self-control, while others claim that it is an impulsive act driven primarily by emotion. In the current study, we addressed this question by examining the relationship between impulsive choice and altruistic punishment in the ultimatum game. As the neurotransmitter serotonin has been implicated in both impulsive choice and altruistic punishment, we investigated the effects of manipulating serotonin on both measures. Across individuals, impulsive choice and altruistic punishment were correlated and increased following serotonin depletion. These findings imply that altruistic punishment reflects the absence rather than the presence of self-control, and suggest that impulsive choice and altruistic punishment share common neural mechanisms.
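    As an illustration of why rejecting an unfair offer counts as 'altruistic punishment', here is a minimal sketch of the responder's decision in the ultimatum game. The stake and the inequity-aversion utility (in the spirit of standard Fehr-Schmidt-style models) are illustrative assumptions, not the present study's analysis.

```python
# Minimal sketch of the responder's decision in the ultimatum game.
# The stake and the inequity-aversion weight are illustrative assumptions.

STAKE = 10.0   # total amount the proposer splits (assumed)

def accept_utility(offer: float, envy: float = 0.8) -> float:
    """Utility of accepting: own payoff minus a penalty for coming
    out behind the proposer (disadvantageous inequity)."""
    proposer_share = STAKE - offer
    return offer - envy * max(proposer_share - offer, 0.0)

def respond(offer: float) -> str:
    # Rejecting leaves both players with nothing, so punishing an
    # unfair proposer costs the responder their own payoff.
    return "accept" if accept_utility(offer) > 0.0 else "reject"

for offer in (5.0, 3.0, 1.0):
    print(f"offer {offer:.0f} of {STAKE:.0f}: {respond(offer)}")
# offer 5 of 10: accept
# offer 3 of 10: reject   <- costly ('altruistic') punishment
# offer 1 of 10: reject
```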

    The influence of social preferences and reputational concerns on intergroup prosocial behaviour in gains and losses contexts

    To what extent do people help ingroup members based on a social preference to improve ingroup members’ outcomes, versus strategic concerns about preserving their reputation within their group? And do these motives manifest differently when a prosocial behaviour occurs in the context of helping another gain a positive outcome (study 1), versus helping another to avoid losing a positive outcome (study 2)? In both contexts, we find that participants are more prosocial towards ingroup (versus outgroup) members and more prosocial when decisions are public (versus private), but find no interaction between group membership and either anonymity of the decision or expected economic value of helping. Therefore, consistent with a preference-based account of ingroup favouritism, people appear to prefer to help ingroup members more than outgroup members, regardless of whether helping can improve their reputation within their group. Moreover, this preference appears to take the form of an intuitive social heuristic to help ingroup members, regardless of the economic incentives or possibility of reputation management. Theoretical and practical implications for the study of intergroup prosocial behaviour are discussed.

    Neural and Cognitive Signatures of Guilt Predict Hypocritical Blame

    A common form of moral hypocrisy occurs when people blame others for moral violations that they themselves commit. It is assumed that hypocritical blamers act in this manner to falsely signal that they hold moral standards that they do not really accept. We tested this assumption by investigating the neurocognitive processes of hypocritical blamers during moral decision-making. Participants (62 adult UK residents; 27 males) underwent functional MRI scanning while deciding whether to profit by inflicting pain on others and then judged the blameworthiness of others’ identical decisions. Observers (188 adult U.S. residents; 125 males) judged participants who blamed others for making the same harmful choice to be hypocritical, immoral, and untrustworthy. However, analyzing hypocritical blamers’ behaviors and neural responses shows that hypocritical blame was positively correlated with conflicted feelings, neural responses to moral standards, and guilt-related neural responses. These findings demonstrate that hypocritical blamers may hold the moral standards that they apply to others.

    Vaccine Nationalism Counterintuitively Erodes Public Trust in Leaders

    Global access to resources like vaccines is key for containing the spread of infectious diseases. However, wealthy countries often pursue nationalistic policies, stockpiling doses rather than redistributing them globally. One possible motivation behind vaccine nationalism is a belief among policymakers that citizens will mistrust leaders who prioritize global needs over domestic protection. In seven experiments (total N = 4215), we demonstrate that such concerns are misplaced: nationally representative samples across multiple countries with large vaccine surpluses (Australia, Canada, U.K., and U.S.) trusted redistributive leaders more than nationalistic leaders, even among the more nationalistic participants. This preference generalized across different diseases, and manifested in both self-reported and behavioral measures of trust. Professional civil servants, however, had the opposite intuition and predicted higher trust in nationalistic leaders, and a non-expert sample also failed to predict higher trust in redistributive leaders. We discuss how policymakers’ inaccurate intuitions might originate from overestimating others’ self-interest.

    Neural mechanisms for learning self and other ownership

    The sense of ownership (of which objects belong to us and which to others) is an important part of our lives, but how the brain keeps track of ownership is poorly understood. Here, the authors show that specific brain areas are involved in ownership acquisition for the self, friends, and strangers.

    Dreading the pain of others? Altruistic responses to others' pain underestimate dread

    A dislike of waiting for pain, aptly termed ‘dread’, is so great that people will increase pain to avoid delaying it. However, despite many accounts of altruistic responses to pain in others, no previous studies have tested whether people take delay into account when attempting to ameliorate others’ pain. We examined the impact of delay in 2 experiments where participants (total N = 130) specified the intensity and delay of pain either for themselves or another person. Participants were willing to increase the experimental pain of another participant to avoid delaying it, indicative of dread, though they did so to a lesser extent than for their own pain. We observed a similar attenuation in dread when participants chose the timing of a hypothetical painful medical treatment for a close friend or relative, but no such attenuation when participants chose for a more distant acquaintance. A model in which altruism is biased to privilege pain intensity over the dread of pain parsimoniously accounts for these findings. We refer to this underestimation of others’ dread as a ‘Dread Empathy Gap’.
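    One simple way to express the dread account above, purely as an assumed functional form rather than the authors' model: total disutility grows with both the intensity of the pain and the time spent waiting for it, and the dread weight is smaller when choosing for another person.

```python
# Assumed functional form, not the authors' model: disutility of a painful
# event = intensity plus a dread term that scales with the delay before it.

def disutility(intensity: float, delay: float, dread_weight: float) -> float:
    return intensity + dread_weight * intensity * delay

SELF_DREAD = 0.5    # assumed dread weight when choosing for oneself
OTHER_DREAD = 0.05  # assumed smaller weight when choosing for another

sooner_stronger = {"intensity": 6.0, "delay": 1.0}
later_milder = {"intensity": 5.0, "delay": 4.0}

for label, w in (("self", SELF_DREAD), ("other", OTHER_DREAD)):
    options = (sooner_stronger, later_milder)
    choice = min(options, key=lambda o: disutility(o["intensity"], o["delay"], w))
    name = "sooner/stronger" if choice is sooner_stronger else "later/milder"
    print(f"choosing for {label}: {name}")
# choosing for self: sooner/stronger   (dread makes waiting costly)
# choosing for other: later/milder     (underweighted dread: the 'gap')
```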

    Serotonin depletion impairs both Pavlovian and instrumental reversal learning in healthy humans.

    Funders: Gates Cambridge Trust (doi: https://doi.org/10.13039/501100005370); DH | National Institute for Health Research (NIHR) (doi: https://doi.org/10.13039/501100000272).

    Serotonin is involved in updating responses to changing environmental circumstances. Optimising behaviour to maximise reward and minimise punishment may require shifting strategies upon encountering new situations. Likewise, autonomic responses to threats are critical for survival yet must be modified as danger shifts from one source to another. Whilst numerous psychiatric disorders are characterised by behavioural and autonomic inflexibility, few studies have examined the contribution of serotonin in humans. We modelled both processes, respectively, in two independent experiments (N = 97). Experiment 1 assessed instrumental (stimulus-response-outcome) reversal learning, whereby individuals learned through trial and error which action was most optimal for obtaining reward or avoiding punishment initially, and the contingencies subsequently reversed serially. Experiment 2 examined Pavlovian (stimulus-outcome) reversal learning assessed by the skin conductance response: one innately threatening stimulus predicted receipt of an uncomfortable electric shock and another did not; these contingencies swapped in a reversal phase. Upon depleting the serotonin precursor tryptophan, in a double-blind randomised placebo-controlled design, healthy volunteers showed impairments in updating both actions and autonomic responses to reflect changing contingencies. Reversal deficits in each domain, furthermore, were correlated with the extent of tryptophan depletion. Initial Pavlovian conditioning, moreover, which involved innately threatening stimuli, was potentiated by depletion. These results translate findings in experimental animals to humans and have implications for the neurochemical basis of cognitive inflexibility.
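    For readers unfamiliar with the task structure, here is a minimal sketch of a serial-reversal instrumental learning task of the kind described. The agent, parameters, and reversal schedule are illustrative assumptions, not the study's actual task or model.

```python
# Minimal sketch of a serial-reversal instrumental task. Parameters
# (learning rate, softmax temperature, reward probability, reversal
# schedule) are illustrative assumptions, not the study's task or model.
import math
import random

ALPHA = 0.3      # learning rate
BETA = 5.0       # softmax inverse temperature
P_REWARD = 0.8   # probability that the currently correct action pays off

random.seed(1)
q = [0.0, 0.0]   # learned values of the two actions
correct = 0      # index of the currently optimal action

for trial in range(1, 241):
    if trial % 40 == 0:
        correct = 1 - correct                  # serial contingency reversal
    # Softmax choice between the two actions.
    p_choose_0 = 1.0 / (1.0 + math.exp(-BETA * (q[0] - q[1])))
    action = 0 if random.random() < p_choose_0 else 1
    # Probabilistic feedback: the correct action is usually rewarded.
    p_r = P_REWARD if action == correct else 1.0 - P_REWARD
    reward = 1.0 if random.random() < p_r else 0.0
    q[action] += ALPHA * (reward - q[action])  # prediction-error update

# Lowering ALPHA slows re-learning after each reversal, producing more
# perseverative errors: one simple way an updating impairment like the
# one reported after tryptophan depletion could be modelled.
print("final action values:", [round(v, 2) for v in q])
```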