
The robustness of the “Raise-The-Stakes” strategy: Coping with exploitation in noisy Prisoner's Dilemma Games

Bram Van den Bergh, Siegfried Dewitte


1. Introduction

The evolutionary origin of cooperative behavior is an enduring puzzle. For several decades, various disciplines have tried to explain why an organism performs an act that is costly to itself and beneficial to another (e.g., Axelrod, 1984, Axelrod & Dion, 1988, Fehr & Fischbacher, 2003, Gintis et al., 2003, Hamilton, 1964a, Hamilton, 1964b, Trivers, 1971). The Prisoner's Dilemma Game (PDG) is the prevailing and orthodox framework for studying the evolution of cooperation towards nonkin, but it suffers from a procedural flaw when compared with the real-life situations it is designed to model. In a standard PDG, players face a binary choice: to cooperate or to defect. Cooperation is, however, seldom an “all-or-nothing” affair (Frean, 1996, Killingback et al., 1999, Wahl & Nowak, 1999a, Wahl & Nowak, 1999b). Individuals can decide on the extent of their cooperation. Bats may vary cooperativeness in the quantity of a meal shared (Wilkinson, 1984), impala may vary grooming time (Hart & Hart, 1992), and fish may vary cooperation by increasing or decreasing the distance at which they inspect a predator (Milinski, Lüthi, Eggler, & Parker, 1997). In line with Roberts and Renwick (2003), we explore in this study how human participants gradually adjust their level of cooperativeness throughout a series of interactions. We specifically investigate whether the strategy pursued by the opponent (subtle cheating vs. no cheating) affects cooperation and how this effect is moderated by communication noise.

If interactions with variable investment levels are considered, a new kind of cheating is possible because some cooperators may be less generous than others are (Sherratt & Roberts, 1998). Rather than a plain defection (in the dichotomous iterated PDG), individuals may invest a little less than their partners (in an iterated PDG that allows variable investment). In principle, this “subtle cheating” could gradually erode cooperation. Natural selection should, however, favor the ability to detect and discriminate against subtle cheaters (Trivers, 1971). Cosmides and Tooby argued that humans have evolved domain-specific cognitive capacities that allow them to detect cheaters (Cosmides, 1989, Cosmides & Tooby, 1992). There is even some evidence that cheaters may look different from cooperators (Yamagishi, Tanida, Mashima, Shimoma, & Kanazawa, 2003). The capacity to distinguish defectors and cooperators allows humans to select partners that will reciprocate.

How do evolutionary strategies (and hence human cooperativeness) survive the potentially destabilizing effects of this type of subtle exploitation? Roberts and Sherratt (1998) simulated interactions between a number of simple strategies (including cheater strategies) and found that cooperation emerged. It did so through a strategy of “Raise-The-Stakes” (RTS). RTS invests a minimal amount at first (“testing the water”) and then gradually increases the investment by a small amount in subsequent interactions, if and only if the partner matches or betters RTS's investment. RTS allows individuals to take advantage of cooperative opportunities while minimizing the risk of being exploited. Compare this strategy with the Tit-for-Tat (TFT) strategy, which typically outperforms other strategies in the standard dichotomous PDG (Axelrod, 1984). TFT is less cautious on its initial move, starting with a cooperative choice. Such a strategy makes a giant “leap of faith” on the first move and is vulnerable to exploitation by cheaters in a PDG with variable investment.
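To make the contrast concrete, both strategies reduce to simple update rules. The following Python sketch is illustrative only: the step size, the investment cap, and RTS's fallback to matching after being undercut are our assumptions, not parameters taken from Roberts and Sherratt (1998).

```python
def rts_move(my_last, partner_last, step=1, cap=10):
    """Raise-The-Stakes (sketch): open minimally ("test the water"), then
    raise the investment by a small step if and only if the partner matched
    or bettered our previous investment."""
    if my_last is None:                  # first round
        return step
    if partner_last >= my_last:          # partner matched or bettered us
        return min(my_last + step, cap)
    return partner_last                  # assumed fallback: match the partner

def tft_move(partner_last, opening=10):
    """Tit-for-Tat (sketch): open with a fully cooperative "leap of faith",
    then simply copy the partner's previous investment."""
    return opening if partner_last is None else partner_last
```

Against a consistent reciprocator, rts_move climbs 1, 2, 3, … and forgoes early gains, whereas tft_move risks its entire opening investment against a cheater.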

Roberts and Renwick (2003) tested whether the RTS strategy is adopted by human participants in an iterated PDG with variable investment. The experiment supported the prediction of Roberts and Sherratt (1998) that investment in cooperation increases over the course of an interaction. Participants increased donations over successive rounds when the partner reciprocated the investment. On the first move, however, participants donated considerably more than the minimal amount required to “test the water”. This aspect of human behavior differed from the simulations but has an interesting implication. According to the simulations of Roberts and Sherratt (1998), RTS's major advantage over TFT was its low initial cooperativeness, which immunizes the RTS strategy against exploitation. Therefore, an important question is whether the “faithful” RTS strategy adopted by human participants is robust against subtle cheating. The question of what would happen when they are confronted with cheaters is legitimate, because natural populations are likely to contain individuals who do not cooperate (Doebeli et al., 2004, Kurzban & Houser, 2005, Sherratt & Roberts, 2001). Starting at an intermediate level makes the faithful RTS-like strategy vulnerable to exploitation. Killingback and Doebeli (1998) have questioned the robustness of RTS, and there has been no direct experimental test of whether the faithful RTS-like strategy adopted by the participants (Roberts & Renwick, 2003) is robust against subtle cheating.

Humans following a faithful RTS strategy have three options when subtly cheated on. They can ignore the subtle cheating and maintain their cooperation level without raising the stakes, they can punish harshly by investing nothing on the subsequent round, or they can switch to TFT and match the partner's moves. Maintaining cooperation levels indefinitely leads to a large relative fitness disadvantage, because exploitation by the subtle cheater would endure indefinitely. Turning to noncooperation in reaction to a subtle defection carries relatively high opportunity costs, but this “punishment” could turn some subtle cheaters into cooperators. Theory predicts, however, that a partner's low investment will be punished by matching the partner's choice (Killingback & Doebeli, 2002, Roberts & Sherratt, 1998, Sherratt & Roberts, 2002). Playing TFT signals that exploitation will be futile and, at the same time, reduces the actor's opportunity costs to a minimum. We therefore expect human participants to pursue a TFT-like strategy when confronted with a cheater, but an RTS strategy when confronted with a reciprocator.

The robustness of RTS is determined not only by the strategy pursued by the opponent, but also by situational and contextual variables. Errors are likely to occur in reality, and a single mistake can be calamitous for reciprocal altruism (e.g., Van Lange, Ouwerkerk, & Tazelaar, 2002). The escalation of retaliation among TFT players after an occasional mistake gradually breaks down cooperation. A good way to cope with noise is to allow some percentage of the other player's defections to go unpunished (Nowak & Sigmund, 1992, Nowak & Sigmund, 1993, Wu & Axelrod, 1995). Generosity might prevent a single error from eliciting a sequence of defections. Barrett (1999) and Cosmides and Tooby (in press) predict that people are tolerant towards a nonintentional act of defection and distinguish an inadvertent mistake from intentional cheating. Interaction with a cheater strategy in a situation in which the perceived likelihood of mistakes is higher could therefore result in higher levels of cooperation, compared with a game in which noise is absent. Evolutionary simulations (Sherratt & Roberts, 1999), however, have demonstrated that RTS-like strategies tend to become less generous as the probability of mistakes increases. To disentangle these two hypotheses, we explore the effect of noise on the RTS strategies adopted by human participants. Based on modeling studies (Sherratt & Roberts, 1999), we predict that human participants will be less willing to raise the stakes in noisy situations (both against subtle cheaters and noncheaters). Based on the cheater detection literature, we could expect human participants to display more generosity towards subtle cheaters as the perceived likelihood of inadvertent mistakes rises.
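One common way to formalize such generosity is to let a fixed fraction of apparent defections go unpunished. A hypothetical sketch of how a matching rule in the variable-investment game could be made generous; the forgiveness probability is an illustrative assumption, not a value from any of the cited models.

```python
import random

def generous_match(my_last, partner_last, forgiveness=0.33):
    """Match the partner's previous investment, but with probability
    `forgiveness` treat a shortfall as a transmission error and repeat
    our own previous move instead of lowering the stakes."""
    if partner_last < my_last and random.random() < forgiveness:
        return my_last           # let the apparent defection go unpunished
    return partner_last          # otherwise match the partner
```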

2. Material and methods

2.1. Participants

Students (N=57) from a large European university were recruited via the Internet and participated in an iterated PDG with variable investment (adapted from Roberts & Renwick, 2003, and Van Lange et al., 2002). They were informed that they participated in return for a participation fee (between €6 and €8.50) that varied with their payoff in the experimental game, which ensured that their decisions in the experiment had real consequences. Five to eight participants attended each research session. The participants were seated in one of eight partially enclosed cubicles, which prevented them from communicating with each other.

2.2. Procedure

We used an iterated PDG that allowed for variable investment. The instructions were given by a computer program. In each round of the game, participants were allocated 10 coins, which they could either keep for themselves or donate to the other player. Each coin kept was added to the participant's account. Each coin donated was doubled by the experimenter and added to the partner's account. Participants were informed about the partner's decisions before making their next choice. After the game had been explained, participants became acquainted with the choice procedure in five practice trials, which showed them that choices were made simultaneously. They were told that their PC would be connected with another PC on a random basis and that choices were made anonymously. Participants were unaware of the number of iterations that would be played.
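The payoff rule can be stated as a one-line formula: a player who donates d of the 10 coins earns (10 − d) from that round plus twice the partner's donation. A minimal sketch of the per-round accounting:

```python
def round_payoffs(donation_a, donation_b, endowment=10):
    """Per-round earnings: each player keeps the coins not donated, and
    every donated coin is doubled by the experimenter and credited to
    the partner's account."""
    payoff_a = (endowment - donation_a) + 2 * donation_b
    payoff_b = (endowment - donation_b) + 2 * donation_a
    return payoff_a, payoff_b

print(round_payoffs(10, 10))  # (20, 20): mutual full cooperation
print(round_payoffs(0, 0))    # (10, 10): mutual defection
print(round_payoffs(0, 10))   # (30, 0):  unilateral defection pays best
```

As in any PDG, mutual full donation beats mutual defection, yet each player is individually tempted to withhold coins.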

2.3. Dependent variable

To investigate the prediction of Roberts and Sherratt (1998) that investment in cooperation increases over the course of an interaction, an RTS coefficient was computed for each participant by correlating round number with the number of coins the participant gave to the partner. (We preferred the correlation coefficient over the slope because the correlation coefficient reflects the graduality and consistency of the rise, cf. raising the stakes, whereas the slope reflects only the average increase.) When participants did not vary their cooperation (n=10) and a correlation coefficient therefore could not be computed, an RTS coefficient of 0 was assigned. Because the sampling distribution of Pearson's r is not normal, a Fisher z′ transformation was employed to convert the RTS coefficients to the normally distributed variable z′.
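A minimal sketch of this computation using NumPy and SciPy; the donation series below is hypothetical.

```python
import numpy as np
from scipy import stats

def rts_coefficient(donations):
    """Fisher z'-transformed correlation between round number and the
    number of coins donated; participants whose donations never varied
    are assigned a coefficient of 0, as in the paper."""
    rounds = np.arange(1, len(donations) + 1)
    if np.std(donations) == 0:           # cooperation did not vary
        return 0.0
    r, _ = stats.pearsonr(rounds, donations)
    return np.arctanh(r)                 # Fisher z' transformation

# Hypothetical participant who gradually raises the stakes over 10 rounds:
print(rts_coefficient([4, 5, 5, 6, 7, 7, 8, 9, 9, 10]))  # large positive z'
```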

2.4. Conditions

2.4.1. Noise

In the noise condition, the computer “checked” whether the network connection between the cubicles was stable. Before the start of the iterated PDG, a pop-up window informed participants that choices would not always be sent correctly to the other cubicle (“due to a problem with the computer server”). The computers used were outdated, which made this warning believable. The computer automatically checked the “server reliability” and informed participants, before the interaction in the PDG, that mistakes could occur. Another pop-up window after Round 6 briefly interrupted the interaction between the two players; the interaction continued as soon as the participant clicked a button on the pop-up window. None of the participant's choices were actually changed. In the “no-noise” condition, the interaction was never interrupted and the PDG was not preceded by a “warning about mistakes”.

2.4.2. Strategy

Participants played, unbeknownst to them, against a preprogrammed computer strategy. This procedure allowed us to record the variability in cooperation with a minimal amount of interference. Consistent with the conceptual definition of “Tit-For-Tat”, TFT was programmed to begin by giving six coins to the participant, a nice and cooperative choice (Van Lange et al., 2002). In subsequent rounds, TFT was programmed to give exactly the same number of coins that the participant had given in the previous interaction round. The “TFT-Minus-1” (TFT-1) strategy started by giving six coins on the first round and was programmed to subtract one coin from the participant's previous choice in the subsequent round.
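Both opponents reduce to one-line rules. A sketch assuming the six-coin opening described above; the floor of zero coins for TFT-1 is our assumption, as the paper does not state what TFT-1 did after a participant donated nothing.

```python
def tft(participant_last):
    """Tit-For-Tat opponent: open with 6 coins, then return exactly the
    participant's previous donation."""
    return 6 if participant_last is None else participant_last

def tft_minus_1(participant_last):
    """TFT-1, the "subtle cheater": open with 6 coins, then return one
    coin less than the participant's previous donation (assumed floor: 0)."""
    return 6 if participant_last is None else max(participant_last - 1, 0)
```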

2.5. Design

A within-subjects design (each participant subjected to each condition in a random order) was employed, with strategy and noise as two independent within-subject variables. When participants finished their first condition, they worked on a filler task until all participants had finished the first interaction phase, then went to another cubicle with a different computer (to avoid carryover effects with regard to the noise manipulation) and interacted in a second PDG with a different opponent. This procedure was repeated until all four conditions were finished. We included “order of conditions” in the analyses, but this variable was inconsequential for the results and is ignored henceforth.

3. Results

A 2 (Strategy)×2 (Noise) within-subjects analysis of variance revealed a main effect of strategy, F(1,168)=180.79, P<.0001. The positive correlation (M=0.57, Fisher-transformed Pearson correlation coefficient) between round number and level of cooperation among participants interacting with the TFT strategy indicates that participants increased cooperation against a strategy that matched the player's investment. Interaction with the TFT-1 strategy yielded negative correlations (M=−0.71) between round number and level of cooperation, indicating that participants decreased cooperation over the course of an interaction with a strategy that undercut the player's level of investment. The analysis also revealed a significant main effect of noise, F(1,168)=4.78, P<.05. In the noise conditions, participants were less likely to raise the stakes (M=−0.17) than when noise was absent (M=0.03); participants became less generous as the perceived probability of mistakes increased. The strategy-by-noise interaction was not significant, F(1,168)=0.16, ns. Fig. 1 shows the median cooperation level across the 10 rounds for the four conditions. Fig. 2 shows the average correlations between round number and cooperation in the four conditions.
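The reported analysis corresponds to a 2 (Strategy) × 2 (Noise) repeated-measures ANOVA on the Fisher-transformed RTS coefficients. A sketch of how such a test could be run with statsmodels; the data file and column names are hypothetical.

```python
import pandas as pd
from statsmodels.stats.anova import AnovaRM

# Assumed long format: one Fisher-transformed RTS coefficient per
# participant per condition (columns: subject, strategy, noise, z_rts).
df = pd.read_csv("rts_coefficients.csv")  # hypothetical data file

model = AnovaRM(df, depvar="z_rts", subject="subject",
                within=["strategy", "noise"])
print(model.fit())  # main effects of strategy and noise, plus interaction
```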



Fig. 1. Amounts donated per round in the four conditions: (A) TFT and no noise; (B) TFT and noise; (C) TFT-1 and no noise; (D) TFT-1 and noise. Data are plotted as medians across 57 participants, with 25th and 75th percentiles.




Fig. 2. RTS coefficients for all conditions. Bars represent the mean normalized correlation between round number and amount donated; error bars are standard errors.


Additionally, we compared the total number of coins gathered throughout the experiment by the participant versus the preprogrammed computer strategy. Playing against TFT, participants (M=162.6) earned significantly less than the opponent did (M=165.1), t(113)=−2.76, P<.01. In contrast, playing against TFT-1, participants' earnings (M=131.9) did not differ significantly from the opponent's (M=131.1), t(113)=0.88, ns.
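Each earnings comparison is a paired t-test of participant versus opponent totals. A minimal SciPy sketch with made-up totals for illustration:

```python
from scipy import stats

# Hypothetical per-participant totals against the TFT opponent:
participant_totals = [160, 158, 171, 166, 159]  # illustrative values only
opponent_totals = [163, 160, 172, 170, 161]

t, p = stats.ttest_rel(participant_totals, opponent_totals)
print(f"t = {t:.2f}, p = {p:.3f}")  # negative t: participants earned less
```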

4. Discussion

In line with the findings of Roberts and Renwick (2003), we found that cooperation increased throughout the interaction when participants' investments were matched. Participants raised the stakes when they played against a preprogrammed computer strategy that gave exactly the same number of coins that the participant gave in the previous interaction round (i.e., TFT). However, if participants employ RTS against a matching strategy, they receive a small sucker's payoff for each rise in cooperation and will gain less than the preprogrammed strategy. RTS is thus “willing to be exploited” for eventual greater gains.

While previous research has shown that RTS is capable of taking advantage of cooperative opportunities, RTS's reaction to defectors is understudied. The question is relevant because humans seem to adopt a faithful RTS strategy: they start at an intermediate level, which makes them vulnerable to exploitation. Populations are likely to contain a class of individuals that temporarily or permanently lack the time, energy, resources, ability, or willingness to cooperate (Sherratt & Roberts, 2001). Theory predicts that low investments by the partner are punished by matching the partner's level of cooperation (Sherratt & Roberts, 2002). Such a transformation of RTS into a TFT-like strategy when human participants interact with a “subtle cheater” has, to our knowledge, not been studied before. We found that cooperation decreased gradually throughout the interaction whenever the partner undercut the participant's cooperation. Rather than investing zero or maintaining an intermediate level of cooperation when confronted with a cheater, participants adjusted their level of cooperation gradually downwards. This shift in strategy suggests that participants discriminate between cheaters and reciprocators, as proposed by Cosmides and Tooby (1992) and Trivers (1971). Indeed, on average, human participants were not exploited by the subtle cheater, whereas they increased cooperation against a cooperator. RTS is a robust strategy.

In our experiment, we simulated network problems with the computers on which the experimental task was completed. Modeling studies (Sherratt & Roberts, 1999) suggest that individuals become less generous as the probability of mistakes increases. Cheater detection theory, in contrast, suggests that noise would reduce the negative effects of subtle cheating strategies. In this way, humans would suffer lower opportunity costs. Our data are in line with the first prediction. As participants perceived a greater likelihood of mistakes, they were less willing to raise stakes against a reciprocator and less willing to give a cheater a new chance. In line with Sherratt and Roberts (1999), we interpret the noncooperative behavior in noisy situations as participants being more cautious when the perceived probability of errors rises. It seems as if the potential exploitation costs outweigh the opportunity costs in noisy situations.

In contrast with the initial simulations of Roberts and Sherratt (1998), but consistent with Roberts and Renwick (2003) and Van Lange et al. (2002), the initial move in the PDG was substantially higher than the minimal amount needed to “test the water”. RTS-like strategies forgo and delay potential gains by investing only a small amount on the first move, whereas TFT-like strategies are vulnerable to exploitation by starting out with a cooperative move. The potential cost of an initial cooperative move is underestimated in a standard PDG but could be quite substantial in a nondiscrete PDG. A strategy cancelling out the opportunity cost carried by RTS strategies while minimizing the risk of exploitation carried by TFT strategies should start out with a moderately cooperative move. The initial moves of human participants could result from a combination of concerns about forgone gains (i.e., opportunity costs) and exploitation, which could explain the moderately high levels of cooperation on the first round. A strategy that is informed about the probabilities of meeting a cheater versus a reciprocator and adapts its initial move to these probabilities might be more successful than low-starting RTS or high-starting TFT strategies. Recent experimental evidence has shown that people do adjust cooperation according to their expectations (Smeesters, Warlop, Van Avermaet, Corneille, & Yzerbyt, 2003).

In the present study, the opponent was programmed to play TFT. But as people adopt an RTS strategy, the question arises as to what extent human participants would double their stakes when interacting with RTS, creating an even more rapid upward spiral of increasing cooperation. Some cooperators, however, may be less generous than others (Sherratt & Roberts, 1998), and participants might differ in their willingness to raise stakes: personality variables, such as social value orientation (e.g., Van Lange, 1999), could explain interindividual differences in the willingness to raise the stakes when the opponent matches. Additionally, future research could investigate whether other situational factors affect the adoption of RTS. Participants who know that their choices in a PDG will be made public at the end of the game might act more generously than participants whose choices remain anonymous, because they may perceive that being cooperative would enhance their reputation (e.g., Barclay, 2004, Nowak & Sigmund, 1998). The possibility of indirect reciprocity might thus boost RTS. Finally, future research could explore why situational noise does not buffer the negative effects of cheating, as cheater detection theory suggests it should. Possibly, noise buffers a decrease in cooperation only when the cheating is even more subtle: occasional undercutting of the opponent's contribution, rather than continuous undercutting, might provide a fairer test of the predictions derived from cheater detection theory.

Acknowledgments

We thank Bram Van Moorter for help with the experimental design, and two anonymous reviewers, Barbara Briers and Kobe Millet, for their very helpful comments on an earlier draft of this manuscript. The research was supported by grant G.0391.03 from the Fund for Scientific Research, Flanders. Financial support by Censydiam is gratefully acknowledged.

References

Axelrod R. The evolution of cooperation. New York: Basic Books; 1984.

Axelrod R, Dion D. The further evolution of cooperation. Science. 1988;242:1385–1390.

Barclay P. Trustworthiness and competitive altruism can also solve the “tragedy of the commons”. Evolution and Human Behavior. 2004;25:209–220.

Barrett HC. Guilty minds: How perceived intent, incentive, and ability to cheat influence social contract reasoning. Paper presented at the 11th Annual Meeting of the Human Behavior and Evolution Society, Salt Lake City, Utah; 1999.

Cosmides L. The logic of social exchange: Has natural selection shaped how humans reason? Studies with the Wason selection task. Cognition. 1989;31:187–276.

Cosmides L, Tooby J. Cognitive adaptations for social exchange. In: Barkow JH, Cosmides L, Tooby J, editors. The adapted mind: Evolutionary psychology and the generation of culture. New York: Oxford University Press; 1992. p. 163–228.

Cosmides L, Tooby J. Neurocognitive adaptations designed for social exchange. In: Buss DM, editor. Evolutionary Psychology Handbook. New York: Wiley; in press.

Doebeli M, Hauert C, Killingback T. The evolutionary origin of cooperators and defectors. Science. 2004;306:859–862.

Fehr E, Fischbacher U. The nature of human altruism. Nature. 2003;425:785–791.

Frean MR. The evolution of degrees of cooperation. Journal of Theoretical Biology. 1996;182:549–559.

Gintis H, Bowles S, Boyd R, Fehr E. Explaining altruistic behavior in humans. Evolution and Human Behavior. 2003;24:153–172.

Hamilton WD. The genetical evolution of social behaviour I. Journal of Theoretical Biology. 1964;7:1–16.

Hamilton WD. The genetical evolution of social behaviour II. Journal of Theoretical Biology. 1964;7:17–52.

Hart BL, Hart LA. Reciprocal allogrooming in impala, Aepyceros melampus. Animal Behaviour. 1992;44:1073–1083.

Killingback T, Doebeli M. “Raise the stakes” evolves into a defector. Nature. 1998;400:518.

Killingback T, Doebeli M. The continuous prisoner's dilemma and the evolution of cooperation through reciprocal altruism with variable investment. American Naturalist. 2002;160:421–438.

Killingback T, Doebeli M, Knowlton N. Variable investment, the continuous prisoner's dilemma, and the origin of cooperation. Proceedings of the Royal Society of London (B). 1999;266:1723.

Kurzban R, Houser D. Experiments investigating cooperative types in humans: A complement to evolutionary theory and simulations. Proceedings of the National Academy of Sciences. 2005;102(5):1803–1807.

Milinski M, Lüthi JH, Eggler R, Parker GA. Cooperation under predation risk: Experiments on costs and benefits. Proceedings of the Royal Society of London (B). 1997;264:831–837.

Nowak MA, Sigmund K. Tit for tat in heterogeneous populations. Nature. 1992;355:250–253.

Nowak MA, Sigmund K. A strategy of win–stay, lose–shift that outperforms tit for tat in the Prisoner's Dilemma game. Nature. 1993;364:56–58.

Nowak MA, Sigmund K. Evolution of indirect reciprocity by image scoring. Nature. 1998;393:573–577.

Roberts G, Renwick J. The development of cooperative relationships: An experiment. Proceedings of the Royal Society of London (B). 2003;270:2279–2284.

Roberts G, Sherratt TN. Development of cooperative relationships through increasing investment. Nature. 1998;394:175–179.

Sherratt TN, Roberts G. The evolution of generosity and choosiness in cooperative exchanges. Journal of Theoretical Biology. 1998;193:167–177.

Sherratt TN, Roberts G. The evolution of quantitatively responsive cooperative trade. Journal of Theoretical Biology. 1999;200:419–426.

Sherratt TN, Roberts G. The role of phenotypic defectors in stabilizing reciprocal altruism. Behavioral Ecology. 2001;12:313–317.

Sherratt TN, Roberts G. The stability of cooperation involving variable investment. Journal of Theoretical Biology. 2002;215:47–56.

Smeesters DHRV, Warlop L, Van Avermaet E, Corneille O, Yzerbyt V. Do not prime hawks with doves: The interplay of construct activation and consistency of social value orientation on cooperative behavior. Journal of Personality and Social Psychology. 2003;84:972–987.

Trivers RL. The evolution of reciprocal altruism. Quarterly Review of Biology. 1971;46:35–57.

Van Lange PAM. The pursuit of joint outcomes and equality in outcomes: An integrative model of social value orientation. Journal of Personality and Social Psychology. 1999;77:337–349.

Van Lange PAM, Ouwerkerk J, Tazelaar M. How to overcome the detrimental effects of noise in social interaction: The benefits of generosity. Journal of Personality and Social Psychology. 2002;82:768–780.

Wahl L, Nowak M. The continuous prisoner's dilemma: I. Linear reactive strategies. Journal of Theoretical Biology. 1999;200:307–321.

Wahl L, Nowak M. The continuous prisoner's dilemma: II. Linear reactive strategies with noise. Journal of Theoretical Biology. 1999;200:323–338.

Wilkinson G. Reciprocal food sharing in vampire bats. Nature. 1984;308:181–184.

Wu J, Axelrod R. How to cope with noise in the iterated prisoner's dilemma. Journal of Conflict Resolution. 1995;39:183–189.

Yamagishi T, Tanida S, Mashima R, Shimoma E, Kanazawa S. You can judge a book by its cover: Evidence that cheaters may look different from cooperators. Evolution and Human Behavior. 2003;24:290–301.

Faculty of Economics, Katholieke Universiteit Leuven, Naamsestraat 69, Leuven B-3000, Belgium

Corresponding author. Tel.: +32 16 326947; fax: +32 16 326732

PII: S1090-5138(05)00038-3

doi:10.1016/j.evolhumbehav.2005.04.006


