
    Aid in the Aftermath of Hurricane Katrina: Inferences of Secondary Emotions and Intergroup Helping

    This research examines inferences about the emotional states of ingroup and outgroup victims after a natural disaster, and whether these inferences predict intergroup helping. Two weeks after Hurricane Katrina struck the southern United States, White and non-White participants were asked to infer the emotional states of an individualized Black or White victim and to report their intentions to help such victims. Overall, participants believed that an outgroup victim experienced fewer secondary, ‘uniquely human’ emotions (e.g. anguish, mourning, remorse) than an ingroup victim. The extent to which participants did infer secondary emotions about outgroup victims, however, predicted their helping intentions; in other words, those participants who did not dehumanize outgroup victims were the individuals most likely to report intentions to volunteer for hurricane relief efforts. This investigation extends prior research by: (1) demonstrating infra-humanization of individualized outgroup members (as opposed to aggregated outgroups); (2) examining infra-humanization via inferred emotional states (as opposed to attributions of emotions as stereotypic traits); and (3) identifying a relationship between infra-humanization of outgroup members and reduced intergroup helping.

    From moral concern to moral constraint

    Current research into the neural basis of moral decision-making endorses a common theme: the mechanisms we use to make value-guided decisions concerning each other are remarkably similar to those we use to make value-guided decisions for ourselves. In other words, moral decisions are just another kind of ordinary decision. Yet there is something unsettling about this conclusion: we often feel as if morality places an absolute constraint on our behavior, in a way unlike ordinary personal concerns. What is the neural and psychological basis of this feeling of moral constraint? Several models are considered and outstanding questions highlighted.

    Moral decisions are hard to make and fun to study. Suppose a woman notices $20 lying by the shoes of a stranger at the front of a checkout line. Her eyes linger on the orphaned bill. Will she point out the money to the customer who may have dropped it, or wait a moment until it can be discreetly pocketed? Watching this moment of uncertainty imparts a vicarious thrill because, to varying degrees, her competing motives are shared by us all. Psychology and neuroscience have much to say about her motive to keep the money. In fact, the integration of computational, neurobiological and psychological models to explain value-guided learning and choice stands out as one of the foremost accomplishments of contemporary behavioral research.

    Attempts to define morality typically focus on two candidate features. The first is concern for others' welfare, which is emphasized in utilitarian or consequentialist philosophical theories. The second key feature is the concept of an absolute constraint, rule or law. This approach finds its philosophical apogee in the work of Kant. Following the lead of some philosophers, we could seek to refine a single and exact definition of the moral domain. This is a promising avenue if we wish to spend centuries gridlocked in intractable and often arcane debate. Recently, however, psychologists have charted a different course by arguing that moral cognition comprises multiple distinct but interrelated mechanisms. Thus, research into the neuroscience of morality faces at least two big questions. First, what mechanisms acquire and encode moral concern: the value of others' welfare, ultimately allowing us to make decisions that flexibly trade off between interests when they collide? Second, what mechanisms acquire and encode the sense of moral constraint: the representation and value of a moral rule, or law? We have an impressive grip on the first issue, but are startlingly empty-handed on the second.

    Moral concern. There are two principal literatures on the neuroscience of other-oriented concern. One interrogates the neural substrates of the perception of pain or reward in others, that is, the basis of empathy. The second interrogates the neural substrates of decision-making on behalf of others. Both of these literatures converge on a common conclusion: the mechanisms we use to encode value and make decisions for ourselves largely overlap with those we use for others. The affective experience of pain, or otherwise unpleasant experience, activates a characteristic network of brain regions including the anterior cingulate cortex and anterior insula, along with the brainstem and regions of the cerebellum. Numerous studies show a similar (although not perfectly identical) network of activation when people observe pain in others.

    Moral constraint. In contrast to the well-developed literature on welfare concerns, we know little about how the brain represents moral rules as absolute constraints on behavior. Current research does, however, offer two promising approaches. One possibility is that our sense of inviolable moral rules comes from a unique kind of value representation principally designed to guide our own decision-making. Another possibility is that moral rules are grounded in psychological mechanisms principally designed to judge the actions of others.

    Model-free moral values. A dominant theme of research in the last decade is that our sense of moral constraint derives from a unique kind of value representation: that strong rules are grounded in strong feelings. According to one early and influential proposal, the dual process model, controlled cognitive processes are responsible for utilitarian-like assignment of value to welfare, while affective processes are responsible for the sense of inviolable constraint on 'up-close and personal' harms. Two recent proposals attempt to translate this insight into the language of contemporary computational cognitive models of decision-making. A key test for model-based versus model-free control is to assess whether a person continues to value an action even when its connection to reward has been broken. A model-based system immediately devalues the action because it plays no productive role in maximizing expected outcomes, whereas a model-free learning system continues to assign value to the action based on its prior history of reward. In this sense, model-free algorithms assign value directly to actions, whereas model-based algorithms assign value to outcomes and then derive action values via online planning. Many moral norms exhibit this signature property of model-free valuation. For instance, some American travelers feel compelled to tip foreign waiters 20% even when there is no such local norm. Presumably this does not reflect an underlying concern for the relevant outcome (well-funded foreign waitstaffs), but rather the habit-like internalization of an action-based value: good service requires a tip. Indeed, evidence suggests that such altruistic actions are supported by internalized norms deployed automatically. Research on habit learning has centered largely on the computational role of dopaminergic targets in the basal ganglia. Current neuropsychological research, however, provides little association between abnormal moral behavior and insult to the basal ganglia. Moreover, motor habits triggered by the basal ganglia are typically not accompanied by the subjective experience of value in the way that morals are: tying your shoes feels automatic, but not desperately important. A more likely candidate for the encoding of action-based moral values is the ventromedial prefrontal cortex (vmPFC). There is, however, one major shortcoming of using model-free value assignment as a basis for understanding our sense of 'moral constraint' as inviolable or absolute: these values are designed precisely in order to trade off against each other. Put colloquially, the application of reinforcement learning principles to the moral domain can help us to understand why murder always feels highly undesirable, but it is challenged to explain why murder would ever feel strictly forbidden (for instance, when the alternative is associated with an even less desirable model-free value). There are three ways out of this dilemma. One is to insist that the value assignments to moral concerns are simply very, very strong, so strong that they feel like inviolable constraints. The second is to suppose that a moral rule ('murder is wrong') feels inviolable not because the value assigned to it is extremely great, but rather because the content of the rule takes a law-like form. The third possibility is that our sense of inviolability comes from somewhere else entirely. These possibilities are not strictly exclusive of each other, and each deserves further research.

    Third-party evaluation. One of the reasons that moral rules might feel inviolable is that we apply them universally, not just to ourselves but to others. A rich tradition of psychological research maps the criteria we use to decide whether others have acted rightly or wrongly. Two particular criteria play a foundational role: who did what (i.e., the causal role that a person plays in bringing about harm), and whether they meant to (i.e., their intent or foresight of that harm). Intent-based moral judgment depends on a network of brain regions that have long been implicated in mental state reasoning, including the medial prefrontal cortex (MPFC), posterior cingulate and right and left temporoparietal junction (TPJ). In contrast, research into the neural basis of the 'harm/causation' criterion is underdeveloped. At least two studies suggest that the amygdala may play a key role in encoding the negative value associated with harmful outcomes [5,44]. It is less clear what neural substrates contribute to the perception of moral responsibility: the causal link between an agent and a harm that supports our sense of condemnation and revenge. Some evidence indicates a role for the frontoparietal control network, and especially the dorsolateral prefrontal cortex. A distinct line of research, however, provides some support for the application of common mechanisms to third-party judgment and first-person decision-making. In addition to condemning harmful action, people also condemn actions that are unfair. Studies of responder behavior in the ultimatum game find that the anterior insula (AI) responds more to unfair offers than to fair offers, and that responses of greater magnitude are associated with an increased likelihood of spiteful punishment.

    Moral rules. How can cognitive neuroscience address the origin and application of moral rules? As this review attests, we have made great progress by treating non-moral cognition as a blueprint that exhaustively details the constituent mechanisms available to moral cognition. But we may need to think of non-moral cognition not as a complete blueprint, but instead as an underlying scaffold: a framework of common elements that supports a structure of more unique design. What kind of structure are we looking for? We have tended to take as our object of study the moral decision: a determination of what to do, whom to trust, what is wrong, and so forth. Perhaps it is an apt moment to introduce an additional object of study: moral rules. This would position us to understand morality not only as a collection of concerns, but also as a source of constraint.
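
    The model-free versus model-based contrast that the review leans on can be made concrete with a toy devaluation test. The sketch below is illustrative only, not code from the paper; the class names, learning rate and one-step task are assumptions. It shows the signature described above: once the link between action and reward is broken, a model-based learner's action value drops immediately because it is recomputed from the outcome, whereas a model-free learner keeps acting on its cached, action-based value.

```python
# Illustrative sketch (assumed toy task, not the authors' model): the
# devaluation signature separating model-free from model-based valuation.

class ModelFreeLearner:
    """Caches a value directly on the action from its history of reward."""
    def __init__(self, learning_rate=0.1):
        self.cached_action_value = 0.0
        self.learning_rate = learning_rate

    def update(self, reward):
        # Delta-rule update toward the observed reward.
        self.cached_action_value += self.learning_rate * (reward - self.cached_action_value)


class ModelBasedLearner:
    """Stores what the action leads to and derives its value by planning."""
    def __init__(self):
        self.outcome_value = 0.0

    def update(self, reward):
        self.outcome_value = reward  # learn the current worth of the outcome

    def action_value(self):
        # Recomputed online from the outcome the action produces.
        return self.outcome_value


mf, mb = ModelFreeLearner(), ModelBasedLearner()

# Training: the action reliably yields a rewarding outcome.
for _ in range(100):
    mf.update(reward=1.0)
    mb.update(reward=1.0)

# Devaluation: the outcome is now worthless; no further training occurs.
mb.outcome_value = 0.0
print(round(mf.cached_action_value, 2))  # ~1.0: cached action value persists
print(mb.action_value())                 # 0.0: derived value drops at once
```

    On this reading, a habit-like moral value ("good service requires a tip") behaves like the first learner: it keeps its force even when the outcome it once served no longer applies.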

    On wealth and the diversity of friendships: High social class people around the world have fewer international friends

    This is the final published version. It first appeared from Elsevier via http://dx.doi.org/10.1016/j.paid.2015.07.040
    Having international social ties carries many potential advantages, including access to novel ideas and greater commercial opportunities. Yet little is known about who forms more international friendships. Here, we propose that social class plays a key role in determining people’s internationalism. We conducted two studies to test whether social class is related positively to internationalism (the building social class hypothesis) or negatively to internationalism (the restricting social class hypothesis). In Study 1, we found that among individuals in the United States, social class was negatively related to the percentage of friends on Facebook who are outside the United States. In Study 2, we extended these findings to the global level by analyzing country-level data on Facebook friendships formed in 2011 (nearly 50 billion friendships) across 187 countries. We found that people from higher social class countries (as indexed by GDP per capita) had lower levels of internationalism—that is, they made more friendships domestically than abroad.
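
    As an illustration of the country-level measure (not the authors' code; the country names, friendship matrix and GDP figures below are made-up placeholders), internationalism can be operationalized as the share of a country's friendship ties that cross its border and then related to GDP per capita:

```python
# Hypothetical sketch of the country-level operationalization described above.
import numpy as np

countries = ["A", "B", "C"]
# friendships[i][j]: count of ties between residents of country i and country j
friendships = np.array([
    [900,  50,  50],   # country A: mostly domestic ties
    [200, 600, 200],   # country B: a larger share of ties abroad
    [100, 300, 600],   # country C
])
gdp_per_capita = np.array([55_000, 12_000, 8_000])  # placeholder class index

# Internationalism: proportion of each country's ties formed across its border.
domestic = np.diag(friendships)
total = friendships.sum(axis=1)
internationalism = (total - domestic) / total

# A negative correlation would mirror the "restricting social class" pattern.
r = np.corrcoef(gdp_per_capita, internationalism)[0, 1]
print(dict(zip(countries, internationalism.round(2))), round(r, 2))
```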

    Speed breeding is a powerful tool to accelerate crop research and breeding

    The growing human population and a changing environment have raised significant concern for global food security, with the current improvement rate of several important crops inadequate to meet future demand [1]. This slow improvement rate is attributed partly to the long generation times of crop plants. Here, we present a method called ‘speed breeding’, which greatly shortens generation time and accelerates breeding and research programmes. Speed breeding can be used to achieve up to 6 generations per year for spring wheat (Triticum aestivum), durum wheat (T. durum), barley (Hordeum vulgare), chickpea (Cicer arietinum) and pea (Pisum sativum), and 4 generations for canola (Brassica napus), instead of 2–3 under normal glasshouse conditions. We demonstrate that speed breeding in fully enclosed, controlled-environment growth chambers can accelerate plant development for research purposes, including phenotyping of adult plant traits, mutant studies and transformation. The use of supplemental lighting in a glasshouse environment allows rapid generation cycling through single seed descent (SSD) and offers the potential for adaptation to larger-scale crop improvement programmes. Cost savings through light-emitting diode (LED) supplemental lighting are also outlined. We envisage great potential for integrating speed breeding with other modern crop breeding technologies, including high-throughput genotyping, genome editing and genomic selection, to accelerate the rate of crop improvement.

    Effects of antiplatelet therapy on stroke risk by brain imaging features of intracerebral haemorrhage and cerebral small vessel diseases: subgroup analyses of the RESTART randomised, open-label trial

    Background: Findings from the RESTART trial suggest that starting antiplatelet therapy might reduce the risk of recurrent symptomatic intracerebral haemorrhage compared with avoiding antiplatelet therapy. Brain imaging features of intracerebral haemorrhage and cerebral small vessel diseases (such as cerebral microbleeds) are associated with greater risks of recurrent intracerebral haemorrhage. We did subgroup analyses of the RESTART trial to explore whether these brain imaging features modify the effects of antiplatelet therapy.

    Favouritism: exploring the 'uncontrolled' spaces of the leadership experience

    In this paper, we argue that a focus on favouritism magnifies a central ethical ambiguity in leadership, both conceptually and in practice. The social process of favouritism can even go unnoticed, or be misrecognised, if it does not manifest in a form in which it can be either included in or excluded from what is (collectively interpreted as) leadership. The leadership literature presents a tension between an embodied and relational account of the ethical, on the one hand, and a more dispassionate organisational ‘justice’ emphasis, on the other. We conducted 23 semi-structured interviews in eight consultancy companies: four multinationals and four internationals. Ethical issues were at play in the way interviewees thought about favouritism in leadership episodes; this emerged in their concern with visibility and conduct before engaging in favouritism. Our findings illustrate a bricolage of ethical justifications for favouritism, namely utilitarian, justice, and relational. Such findings point to the ethical ambiguity that lies at the heart of leadership as a concept and a practice.