Original Article
The psychology of deterrence explains why group membership matters for third-party punishment

https://doi.org/10.1016/j.evolhumbehav.2017.07.003

Abstract

Humans regularly intervene in others' conflicts as third-parties. This has been studied using the third-party punishment game: A third-party can pay a cost to punish another player (the “dictator”) who treated someone else poorly. Because the game is anonymous and one-shot, punishers are thought to have no strategic reasons to intervene. Nonetheless, punishers often punish dictators who treat others poorly. This result is central to a controversy over human social evolution: Did third-party punishment evolve to maintain group norms or to deter others from acting against one's interests? This paper provides a critical test. We manipulate the ingroup/outgroup composition of the players while simultaneously measuring the inferences punishers make about how the dictator would treat them personally. The group norm predictions were falsified, as outgroup defectors were punished most harshly, not ingroup defectors (as predicted by ingroup fairness norms) and not outgroup members generally (as predicted by norms of parochialism). The deterrence predictions were validated: Punishers punished the most when they inferred that they would be treated the worst by dictators, especially when better treatment would be expected given ingroup/outgroup composition.

Introduction

We are often opinionated about others' conflicts and occasionally even intervene. From Twitter wars raging around a celebrity's infidelity, to boycotts of businesses, states, or entire countries for their treatment of sexual minorities, to the good Samaritan detaining a mugger trying to make off with a stolen purse, third-parties are often provoked by the bad actions of others.

In humans, researchers have usually studied one particular type of such third-party intervention: third-party punishment. Third-party punishment involves third parties punishing someone for treating another person poorly (Fehr & Fischbacher, 2004). Third-party punishment has been seen in industrialized societies, in small-scale societies, in both laboratory experiments and field experiments, and among children as young as 6 (Fehr et al., 2002, Henrich et al., 2010, Kurzban et al., 2007, McAuliffe et al., 2015).

Third-party punishment is also a group-based phenomenon (McAuliffe & Dunham, 2016). People often punish more when the victimizer is an outgroup member or when the victim is an ingroup member (Bernhard et al., 2006, Lieberman and Linke, 2007). Group-based third-party punishment occurs both for real-world groups and for artificial laboratory groups (Goette et al., 2006, Jordan et al., 2014, Schiller et al., 2014). But, why does ingroup/outgroup status matter for third-party punishment?

Different theories of third-party punishment make different predictions about why group membership should matter. One theory, group norm maintenance theory, suggests that people engage in third-party punishment to enforce ingroup norms. Group norm researchers have primarily studied two such norms. The norm of fairness requires ingroup members to split resources fairly with other ingroup members. The norm of parochialism requires that ingroup members treat outgroup members poorly when possible. Another theory, deterrence theory, suggests that people engage in third-party punishment as the output of a cue-driven, evolved psychology designed to deter poor treatment of oneself and one's allies. Deterrence theory suggests that one driver of punishment is the inferences punishers draw: Punishers should punish more when they infer that poor treatment of third parties reflects a disposition by the actor to treat the self or valued others poorly.

Despite the differences between the theories, testing between them has proved difficult, and only a few studies have attempted to do so (Bone et al., 2014, Jordan et al., 2016, Jordan et al., 2014, Krasnow et al., 2012, Krasnow et al., 2016). The goal of the present study is to investigate how differential group membership affects third-party punishment by observing the inferences punishers draw from dictator behavior. If the deterrence view is correct, group membership should matter because of how it changes the inference punishers draw about how the dictator would treat them or those they value personally. For example, after seeing an outgroup dictator treat an ingroup member poorly, a punisher should infer that the dictator will also treat her poorly; such cases license the inference that the poor treatment was due to the victim's group membership, a property which the punisher shares, causing the inference to generalize. In contrast, this inference should be much weaker when seeing an ingroup dictator treat an outgroup member poorly. If the group norm view regarding the fairness norm is correct, group membership should matter because the fairness norm most properly applies to behavior within the group. If the parochialism norm is operative, we should expect general poor treatment of outgroup members. Notably, neither norm specifies how punishment should relate to inferred personal treatment, particularly in contrast to inferred treatment of others. We elaborate on these theories below.

One class of theories explains third-party punishment as flowing from a human ability to create and maintain group norms. On this group norm maintenance view, humans have an evolved psychology designed to acquire social norms from the local social environment, act on them, and enforce them in others (Chudek and Henrich, 2011, Fehr and Fischbacher, 2004, Henrich et al., 2006, Henrich et al., 2010, Richerson and Boyd, 2005). A social norm is a learned rule that specifies both an action to be taken (or not) and punishment for people who do not obey the norm.

Norms are shared within groups, but might differ between groups—they are rules applied by a community on people within the community. This is important for making concrete predictions from group norm maintenance theory. As Chudek and Henrich (2011, p. 218) write, “By norms, we mean learned behavioral standards shared and enforced by a community.” Again illustrating that norms are an ingroup phenomenon, Richerson and Boyd (2005, p. 219) write that humans “are inclined to punish fellow group members who violate social norms, even when such punishment is costly.” A given norm, whatever it is, regulates behavior within a community. By punishing people who violate a norm, punishment has at least two effects: changing the norm violator so they follow the norm in the future and cueing other members of the group that norm violations will be punished.

Group norm maintenance theory also holds that people enforce norms regardless of personal benefits—punishing a norm breaker need not be in service of any anticipated direct benefits from punishing. This feature is often called “strong reciprocity” (Gintis, 2000). As Fehr and Henrich (2003, p. 57 emphasis original) write, “The essential feature of strong reciprocity is a willingness to sacrifice resources in … punishing unfair behavior, even if this is costly and provides neither present nor future economic rewards for the reciprocator.”

There are many variations on group norm maintenance theory and many potential norms. A single paper cannot possibly investigate them all. Instead, we focus on the most prominent version of the theory—cultural group selection—and the most commonly studied norms—fairness and parochialism. On theories of cultural group selection, virtually any norm is possible. This is because, on this theory, norm psychology uses moralistic punishment: not only are people who break the norm punished, but people who do not punish norm breakers are also punished (and, in principle, people who do not punish those who do not punish are punished, ad infinitum). Moralistic punishment can sustain any norm, even ones deleterious for the group or individual (Boyd & Richerson, 1992). So, if a group norm specifies burning down group members' homes, people who do not commit arson should be punished. Moreover, people who commit arson but do not punish non-arsonists should also be punished (and so on up through higher levels).

Although any norm, useful or harmful, is possible, cultural group selection theory holds that the distribution of norms will not be random. Instead, group-beneficial norms should tend to predominate. In part, this is because a process of cultural selection happens between groups. Groups with norms favoring ingroup prosociality will tend to replace groups without such norms. This could happen because groups with more effective norms grow and reproduce faster or survive longer than other groups (Boyd, Gintis, Bowles, & Richerson, 2003). Or such norms could allow an ingroup to directly compete with outgroups, such as in war, and thereby replace those outgroups (Choi and Bowles, 2007, Gintis, 2000). This does not necessarily require that individual group members be killed; merely that members of dissipated groups join more effective groups or adopt their norms (Chudek and Henrich, 2011, Richerson and Boyd, 2005).

By far the most commonly studied potential norm is the fairness norm (Fehr and Fischbacher, 2004, Fehr et al., 2002, Henrich et al., 2006, Henrich et al., 2010). This norm specifies that ingroup members should treat each other “fairly,” typically meaning that a windfall gain should be split (more or less) evenly. For instance, if an experimenter randomly hands one subject of a pair $10, then the fairness norm specifies that this subject should give $5 to the other subject. Fairness norms have been suggested to underpin the amazing economic success of Western cultures (Henrich et al., 2010).

The other most consistently studied norm is parochialism (Choi & Bowles, 2007). Parochialism is often conceptualized as having two components: ingroup altruism or fairness (essentially the fairness norm discussed above) and outgroup aggression, spite, or derogation (Rusch, 2014). Parochialism's norm of outgroup derogation requires that ingroup members hurt, injure, or otherwise inflict costs on outgroup members when possible. (From this point on, whenever we refer to “parochialism,” we will be referring to the outgroup derogation side.) Because norms are about ingroup members regulating other ingroup members' behavior, parochialism is not a norm that specifies how outgroup members should behave. Instead, the parochialism norm specifies how ingroup members should behave towards outgroup members.

Proponents of the view that human third-party punishment flows from group norm maintenance point to a number of sources of evidence (Richerson et al., 2016). A chief source of evidence is that third-party punishment occurs when punishers cannot seemingly expect any material returns. For instance, third-parties will punish in anonymous, one-shot laboratory experiments. Typically, these experiments involve the third-party punishment game. One player, the dictator, is given (e.g.) a $10 stake. The dictator can divide the stake any way she sees fit between herself and another player, the recipient. The recipient has no say over this allocation. Finally, a third player, the punisher, knows how much the dictator allocated to the recipient. The punisher has a separate stake of (e.g.) $5 and can spend it to reduce the dictator's earnings. Dictators are aware in advance that punishers exist and can punish.
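
For concreteness, the payoff logic of this game can be sketched in a few lines of code. This is a minimal illustration only: the $10 and $5 stakes follow the example above, but the 3:1 fine ratio is an assumption added for illustration (fine ratios vary across studies), and none of these numbers are the parameters of the present experiment.

    # Minimal sketch of third-party punishment game payoffs.
    # The fine_ratio is an illustrative assumption, not a parameter of this study.
    def tpp_payoffs(transfer, punishment_spent,
                    dictator_stake=10.0, punisher_stake=5.0, fine_ratio=3.0):
        """Return (dictator, recipient, punisher) earnings."""
        assert 0 <= transfer <= dictator_stake
        assert 0 <= punishment_spent <= punisher_stake
        dictator = dictator_stake - transfer - fine_ratio * punishment_spent
        recipient = transfer
        punisher = punisher_stake - punishment_spent
        return dictator, recipient, punisher

    # A selfish dictator keeps everything; the punisher spends $2, which costs
    # the dictator $6 under the assumed 3:1 fine ratio.
    print(tpp_payoffs(transfer=0.0, punishment_spent=2.0))  # (4.0, 0.0, 3.0)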

Because the experiment is one-shot and anonymous, punishers have no material incentive to punish: They do not know the recipient's or dictator's identity, nor will punishers knowingly interact with either again. Thus, punishers have no strategic reasons to spend on punishment. Dictators, realizing this, have no material incentive to give anything to recipients for similar reasons. Nonetheless, people regularly punish in these experiments (Fehr and Fischbacher, 2004, Fehr et al., 2002, Goette et al., 2006, Henrich et al., 2010, Jordan et al., 2015, Jordan et al., 2014, Jordan et al., 2016, Krasnow et al., 2016, McAuliffe et al., 2015, Schiller et al., 2014).

On group norm maintenance theory, this reveals that third-party punishment has been organized to maintain group norms. Consistent with this account, third parties punish more when an ingroup member treats another ingroup member poorly than when an outgroup member treats another outgroup member poorly (Bernhard et al., 2006); this follows because norms regulate ingroup members' behavior, not outgroup members' behavior.

A different perspective holds that third-party punishment flows from an evolved deterrence psychology designed to deter poor treatment of oneself or valued others (Krasnow et al., 2012, Krasnow et al., 2016, Lieberman and Linke, 2007, McCullough et al., 2013, Sell et al., 2009). Many resources are rivalrous. This leads to conflicts over who will consume such resources—conflicts of interests. Animals often defend their interests with force, or threats of force, and anticipate that others will do the same. But there will usually be uncertainty about what another animal's interests are and how strongly that animal is able to defend them. Thus, miscoordinations will be common, for example, between how much someone respects your interests and the level of respect you feel entitled to.

The existence of others with a disposition to act against your interests is an adaptive problem; should they continue to act as they have, they will continue to impose fitness costs on you. Getting them to improve their behavior requires recalibrating representations in their mind, for example, representations about your ability to use force to defend your interests. Many animals use punishment or the threat of punishment to change the behavior of others (Clutton-Brock and Parker, 1995, Raihani et al., 2012). Punishment among animals straightforwardly increases inclusive fitness by causing the punisher to be treated better, or causing close kin to be treated better. For instance, a mother might use threats or aggression to drive predators away from her offspring.

Unlike many animals, however, humans create enduring friendships, alliances, and coalitions. Can deterrence straightforwardly extend to these cases? We believe it can. Friends are often irreplaceable and effective coalitions are often difficult to recreate. These features make them intrinsically valuable, much in the same way genetic relatives are intrinsically valuable (Tooby and Cosmides, 1996, Tooby et al., 2006), making it valuable to punish on their behalf (Lieberman and Linke, 2007, Roos et al., 2014). Moreover, friends and allies may reciprocate deterrence: I help deter poor treatment of you now, you help me in the future. Just as it is easy to understand why pastoralists would deter poachers from stealing their animals, it is easy to understand why people would deter poor treatment of their valuable relationship partners. This predicts that humans should, at least in some instances, engage in group-based third-party punishment: If I benefit from the continued existence of a strong coalition, I am directly incentivized to defend its interests.

Punishment can also defend or secure the punisher's own reputation to bystanders. Such punishment could signal to observers who are not part of the dispute that the punisher is willing to enforce her interests (Krasnow et al., 2016) or has other valuable traits (Kurzban et al., 2007), such as trustworthiness (Jordan et al., 2016).

How does deterrence psychology work? At the deep level of evolutionary game theory, the potential cost of being punished must outweigh the benefits of treating others poorly; otherwise, continued poor treatment would still pay. Proximally, however, any given act of punishment or sanctioning does not usually deter through its effect on immediate payoffs (Ostrom, 1998). Instead, it functions as a signal to the malefactor that such behavior must stop, or else later sanctions will be more severe. Moreover, this signaling logic is not unique to theories of deterrence in humans. Bluffs and ritualized strength displays are common across animals.

Punishment as signaling is consistent with recent evolutionary models of anger (McCullough et al., 2013, Sell, 2011, Sell et al., 2009). Typically, the expression of anger is not costly aggression, but talking or arguing with the person who caused the anger, to get them to change their behavior (Averill, 1983). Similar results obtain in political science and economic research. Based on substantial fieldwork, Ostrom (1998, p. 8) writes that punishment involves “graduated sanctions for enforcing compliance” and that “by paying a modest fine, they [malefactors] rejoin the community in good standing and learn that rule infractions are observed and sanctioned.” This is consistent with experimental economic research showing that purely nominal “disapproval points,” which were cost-free to give and receive, were nearly as effective as costly punishment in maintaining cooperation in a public goods game (Masclet, Noussair, Tucker, & Villeval, 2003). To accomplish its deterrence function, punishment or sanctioning must change the malefactor's behavior; it is not strictly required that the malefactor be damaged—the threat of future aggression or withdrawal of benefits can be sufficient to change behavior.

Of course, talk is cheap and bad actors should not find threats universally credible. When bad actors appear to need a bigger signal to change their behavior, people switch to cost-imposing deterrence: “Repeated rule breakers are severely sanctioned and eventually excluded from the group” (Ostrom, 1998, p. 8). Alternatively, if bad behavior signals a severe enough disposition towards future bad behavior, punishment may immediately escalate to higher levels of severity (Kurzban and DeScioli, 2013, McCullough et al., 2013, Sell et al., 2009). Although in many cases punishment or sanctioning might be a pure signal—that is, merely a threat of future harm or withdrawal of benefits—in more extreme cases it can involve immediate costs. Thus, a key prediction of deterrence theory is that the greater the disparity between how much you infer someone values you (given how they acted) and how much you feel entitled to from them, the greater the predicted punishment (Kurzban and DeScioli, 2013, McCullough et al., 2013, Sell et al., 2009).
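
One way to make this prediction concrete is the following hypothetical formalization (our illustration, not a model reported in the papers cited): punishment is zero when the inferred valuation meets or exceeds what the punisher feels entitled to, and increases with the size of the shortfall.

    # Hypothetical formalization of the deterrence prediction (illustrative only).
    def predicted_punishment(inferred_valuation, entitled_valuation, scale=1.0):
        """Zero when the inferred valuation meets entitlement; increasing in the shortfall.
        `scale` is an arbitrary free parameter, not estimated from any data."""
        shortfall = max(0.0, entitled_valuation - inferred_valuation)
        return scale * shortfall

    # Someone inferred to value you at 0.1 when you feel entitled to 0.5 should
    # draw more punishment than someone inferred to value you at 0.4.
    print(predicted_punishment(0.1, 0.5))  # 0.4
    print(predicted_punishment(0.4, 0.5))  # 0.1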

Generally, in the third-party punishment game, punishers do not have direct access to how they would be personally treated by the dictator. But they can infer this disposition from dictators' treatment of the recipient. In support of this, a previous study showed that punishers use dictators' treatment of recipients to infer how dictators would treat the punishers themselves (Krasnow et al., 2016). This inference was ecologically valid: Dictators' treatment of recipients predicted their treatment of punishers.

Deterrence theory, therefore, offers two mechanisms whereby third-party punishment would be differentiated by group membership. First, because ingroup members are intrinsically valuable, people should be more willing to punish on their behalf than on behalf of outgroup members. Second, relative group membership should change the inference the punisher makes about how much the dictator values the punisher, based on how the dictator treats the recipient. If an ingroup member has been mistreated, you can more reasonably infer that this mistreatment would extend to you if the culprit is an outgroup member. This inference in and of itself should license punishment. In contrast, if an ingroup member mistreats an outgroup member, that likely does not predict how you will be treated by that ingroup member.

But how does a deterrence account explain the apparent irrationality of one-shot, anonymous third-party punishment? After all, in an anonymous, one-shot game punishment can have no rational deterrent effect. Deterrence theory assumes that, in the small-scale social worlds of human ancestors, a person who treats someone else poorly now might later treat you, your kin, or your allies poorly in the future (Krasnow et al., 2016). Much like craving sugar-rich foods was adaptive in the past, but may be harmful in abundant modern environments, an evolved punishment psychology may treat the anomalous situations of anonymous, one-shot laboratory games as if they represented more typical conditions where relationships and reputations persist over time (Delton et al., 2011, Hagen and Hammerstein, 2006, Krasnow et al., 2013, West et al., 2007). Deterrence theory does not argue that all third-party punishment will be beneficial, nor that it was always beneficial in the past. Rather, the argument is that because the long-run average returns from attempting to deter bad treatment were positive, our present psychology bears this design.

In our view, deterrence theory is more parsimonious than group norm maintenance theory (see the Discussion). Thus, even if both theories' predictions and explanatory power entirely overlapped, we believe that deterrence theory should be favored. However, we also believe the two theories can be empirically distinguished and doing so is more productive than mere argument. After describing our methods, we lay out specific predictions from the theories.

Section snippets

Participants

We analyzed data from 275 punishers (39% women; 60% liberal) and 303 dictators (44% women; 60% liberal) recruited from Amazon Mechanical Turk (Mturk), an online labor force. Participants came from Mturk's US worker pool. Responses to economic games played on Mturk are similar to responses in laboratory settings (Horton, Rand, & Zeckhauser, 2011), including in the third-party punishment game (Krasnow et al., 2016). All participants received $0.50 merely for playing and earned bonuses based on

Basics of division and punishment

As in past research on the third-party punishment game, dictators sometimes gave money to recipients in the division task: 61% transferred at least $0.25 and 37% transferred half of the stake (Fig. 2). Dictators valued both recipients and punishers at about 0.27, meaning dictators would forgo up to $0.27 to give $1.00 to punishers and recipients.
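
To unpack this number, we read the ~0.27 figure as a welfare tradeoff ratio in the standard sense (an interpretive sketch, not the paper's estimation procedure): a dictator with ratio w gives up a cost c to deliver a benefit b to the other player whenever w × b ≥ c, so the indifference point for delivering $1.00 is about $0.27.

    # Reading the ~0.27 valuation as a welfare tradeoff ratio (illustrative sketch).
    def willing_to_forgo(w, cost_to_self, benefit_to_other):
        # Forgo the cost whenever the weighted benefit to the other is at least as large.
        return w * benefit_to_other >= cost_to_self

    w = 0.27
    print(willing_to_forgo(w, cost_to_self=0.25, benefit_to_other=1.00))  # True
    print(willing_to_forgo(w, cost_to_self=0.30, benefit_to_other=1.00))  # False
    # Indifference point: c = w * b = $0.27 forgone per $1.00 delivered.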

Punishers sometimes punished: For example, when dictators transferred nothing, 36% spent some money on punishment and 13% spent their complete $0.50

Discussion

Laboratory third-party punishment, and the real-world processes it is meant to model, often depend on group membership. In unframed laboratory games, subjects punish the mistreatment of others differently when the offender (or victim) is ingroup or outgroup. In the real world, our moral sentiments are not identically engaged when a community member is victimized (e.g., outrage following the San Bernardino Massacre) as when these victims are foreign citizens a world away (e.g., apathy to the

Conclusion

Understanding deterrence psychology is only possible by taking the psychology of small-scale social life seriously. The last two decades have seen waves of research purporting to rule out the possibility that this kind of psychology—a psychology concerned with costs and benefits reliably present in the small-scale social world of the human past—could explain third-party punishment. The experimental protocols of one-shot, anonymous interactions were meant to rule out the possibility that this

References (50)

  • J.R. Averill (1983). Studies on anger and aggression: Implications for theories of emotion. American Psychologist.

  • H. Bernhard et al. (2006). Parochial altruism in humans. Nature.

  • J. Bone et al. (2014). Defectors, not norm violators, are punished by third-parties. Biology Letters.

  • R. Boyd et al. (2003). The evolution of altruistic punishment. Proceedings of the National Academy of Sciences of the United States of America.

  • J.-K. Choi et al. (2007). The coevolution of parochial altruism and war. Science.

  • T.H. Clutton-Brock et al. (1995). Punishment in animal societies. Nature.

  • A.W. Delton et al. (2010). Evolution of fairness: Rereading the data. Science.

  • A.W. Delton et al. (2011). The evolution of direct reciprocity under uncertainty can explain human generosity in one-shot encounters. Proceedings of the National Academy of Sciences of the United States of America.

  • E. Fehr et al. (2002). Strong reciprocity, human cooperation, and the enforcement of social norms. Human Nature.

  • E. Fehr et al. (2003). Is strong reciprocity a maladaptation? On the evolutionary foundations of human altruism.

  • L. Goette et al. (2006). The impact of group membership on cooperation and norm enforcement: Evidence using random assignment to real social groups. The American Economic Review.

  • J. Henrich et al. (2010). Markets, religion, community size, and the evolution of fairness and punishment. Science.

  • J. Henrich et al. (2006). Costly punishment across human societies. Science.

  • J.J. Horton et al. (2011). The online laboratory: Conducting experiments in a real labor market. Experimental Economics.

  • J.J. Jordan et al. (2016). Third-party punishment as a costly signal of trustworthiness. Nature.

    The raw data for this paper can be found on the Open Science Framework at: https://osf.io/6gyyd/.

    1 Both authors contributed equally to this manuscript.
