Both cooperation and competition are integral aspects of scientific interaction. Joint projects combine diverse, specialized expertise to promote research success. For many scientists, competition provides a motivation to excel. This drive to win is particularly effective for those researchers who can pace themselves, putting out a burst of extra effort on those occasions when it can make the decisive difference between being a discoverer and being a confirmer of others’ discoveries.
The choice between scientific cooperation and competition is a daily one, involving conscious or unconscious decisions on style of interactions with scientific peers. Most scientists simplify this decision-making by adopting a strategy that makes the decision for them. Perhaps the strategy is to cooperate with all other scientists; perhaps it is to compete with everyone over everything. More likely, the individual always cooperates with a few selected scientists and competes with others. Whatever viable strategy is selected, we should recognize its consequences.
The survival value of successful competition is almost an axiom of evolutionary theory. Why, then, has cooperation survived evolutionary pressure, in humans as well as in many other species? Kinship theory is the usual explanation. According to kinship theory, a genetically influenced strategy such as cooperation is evolutionarily viable if it helps a substantial portion of one’s gene pool to survive and reproduce, even if the cooperator dies. A classic example is the sterile worker honeybee, which commits suicide by stinging. Altruism of parents for offspring is easy to explain, but kinship theory also successfully predicts that altruism would be high among all members of an immediate family and present throughout an inbred tribe. Sacrifice for an unrelated tribe member may improve future treatment of one’s children by tribe members.
Modified kinship theory can account for many manifestations of cooperation and competition among scientists. An us/them perspective can develop among members of a company, university, or research group. Thus a member of a National Science Foundation proposal-review panel must leave the room whenever a proposal from their home institution is under discussion. Here the health or reputation of an institution is an analogue for genetic survival. Similarly, a clique of scientists with the same opinion on a scientific issue may cooperate to help defeat a competing theory.
For scientists facing the decision of cooperation or competition with a fellow scientist, kinship theory is not a particularly useful guide. A more helpful perspective is provided by the concept of an evolutionarily stable cooperation/competition strategy. Evolution of a cooperation/competition strategy, like other genetic and behavioral evolutions, is successful only if it fulfills three conditions:
Initial viability. The strategy must be able to begin by gaining an initial foothold against established strategies.
Robustness. Once established, the strategy must be able to survive repeated encounters with many other types of strategy.
Stability. Once established, the strategy must be able to resist encroachment by any new strategy.
Axelrod and Hamilton [1981] evaluated these three criteria for many potential cooperative/competitive strategies by means of the simple game of Prisoner’s Dilemma [Rapoport and Chammah, 1965]. At each play of the game, two players simultaneously choose whether to cooperate or defect. Both players’ payoffs depend on comparison of their responses: if both cooperate, each receives the reward R; if both defect, each receives N; and if one defects while the other cooperates, the defector receives C and the cooperator receives the sucker’s payoff S.
When the game ends after a certain number of plays (e.g., 200), one wants to have a higher score than the opponent. More crucial, however, if the game is to be an analogue for real-life competition and cooperation, is achieving the highest average score in round-robin games among many individuals with potentially varied strategies.
The optimum strategy in Prisoner’s Dilemma depends on both the score assignments and the number of plays against each opponent. The conclusions below hold as long as:
S < N < R < C, i.e., my defection pays more than cooperation on any one encounter, and cooperation by the opponent pays more to me than his or her defection does;
R > (C+S)/2, i.e., cooperation by both pays more than alternating exploitation; and
I neither gain nor lose from my opponent’s scoring (e.g., if I were to gain even partially from his gains, then continuous cooperation would be favored).
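The two inequalities above can be made concrete with the payoff values used in Axelrod’s tournaments (5, 3, 1, 0), mapped here onto the article’s symbols; this sketch is illustrative, and any values satisfying both conditions would serve:

```python
# Illustrative payoff values (the assignment used in Axelrod's tournaments),
# in this article's notation:
#   C = my defection while the opponent cooperates (temptation)
#   R = mutual cooperation (reward)
#   N = mutual defection
#   S = my cooperation while the opponent defects (sucker's payoff)
S, N, R, C = 0, 1, 3, 5

# Condition 1: defection dominates on any single encounter, and the
# opponent's cooperation pays me more than his or her defection does.
assert S < N < R < C

# Condition 2: steady mutual cooperation beats alternating exploitation.
assert R > (C + S) / 2

def payoff(my_move, their_move):
    """Return my score for one play; moves are 'C' (cooperate) or 'D' (defect)."""
    table = {('C', 'C'): R, ('C', 'D'): S, ('D', 'C'): C, ('D', 'D'): N}
    return table[(my_move, their_move)]
```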
If one expects to play only a single round against a specific opponent, then the optimum strategy in Prisoner’s Dilemma is to always defect. Similarly, in a population of individuals with no repeat encounters, or within a species incapable of recognizing that an encounter is a repeat encounter, constant competition is favored over cooperation. More relevant to interactions among scientists, however, is the case of many repeat encounters where one remembers previous encounters with a given ‘opponent’. It is this situation that Axelrod and Hamilton [1981] modeled by a computer round-robin tournament, first among 14 and then among 62 entries of algorithmic strategies submitted by people of various professions. Subsequent computer studies by various investigators simulated the process of biological evolution more closely, incorporating variables such as natural selection (higher birth rate among more successful strategies) and mutation.
In nearly all simulations, the winner was one of the simplest of strategies: tit for tat.
Tit for tat cooperates on the first move, then on all subsequent moves duplicates the opponent’s preceding move. Axelrod and Hamilton [1981] call tit for tat “a strategy of cooperation based on reciprocity.” When tit for tat encounters a strategy of all defect, it gets burned on its first cooperative move but thereafter becomes a strategy of all defect, the only viable response to an all-defector. Tit for tat does much better against itself than all defect does against itself, and tit for tat also does much better against various other strategies, because mutual cooperation pays off more than mutual defection.
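The behavior described above is easy to simulate. A minimal sketch in Python (not from the original; payoff values are Axelrod’s 5, 3, 1, 0): over 200 rounds, tit for tat against itself earns the mutual-cooperation reward every round, while against all defect it loses only the opening round before settling into mutual defection.

```python
def tit_for_tat(history_self, history_opp):
    """Cooperate on the first move, then copy the opponent's preceding move."""
    return 'C' if not history_opp else history_opp[-1]

def all_defect(history_self, history_opp):
    """Defect on every move."""
    return 'D'

def play(strategy_a, strategy_b, rounds=200):
    """Play two strategies against each other; return their total scores."""
    payoffs = {('C', 'C'): (3, 3), ('C', 'D'): (0, 5),
               ('D', 'C'): (5, 0), ('D', 'D'): (1, 1)}
    ha, hb, score_a, score_b = [], [], 0, 0
    for _ in range(rounds):
        a, b = strategy_a(ha, hb), strategy_b(hb, ha)
        pa, pb = payoffs[(a, b)]
        score_a, score_b = score_a + pa, score_b + pb
        ha.append(a)
        hb.append(b)
    return score_a, score_b
```

Against itself, tit for tat scores 3 points per round (600 over 200 rounds); against all defect it scores 0 on the first round and 1 thereafter (199), while all defect against itself scores only 1 per round (200), illustrating why mutual cooperation pays off more than mutual defection.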
Axelrod and Hamilton [1981] prove that tit for tat meets the success criteria of initial viability, robustness, and stability for Prisoner’s Dilemma, and they argue that tit for tat is also a successful evolutionary strategy in various species from human to microbe (its reactive element does not require a brain). Some of their examples are highly speculative, while others such as territoriality ring true. Individuals in adjacent territories develop stable boundaries (‘cooperation’), but any attempt by one individual to encroach is met by aggression by the other. In contrast to this dominantly tit for tat behavior with the same individual, one-time encounters with encroaching strangers are consistently met by aggression (all defect).
Tit for tat does have two weaknesses. First, a single accidental defection between two tit for tat players initiates an endless, destructive sequence of mutual defections. Second, a tit for tat population can be invaded temporarily by persistent cooperators. An alternative strategy win-stay, lose-shift copes with these situations more successfully [Nowak and Sigmund, 1993]. This strategy repeats its former move if it was rewarded by a high score (opponent’s cooperation); otherwise, it changes its move. The strength of this strategy stems from the fact that cooperation by the opponent is more beneficial than their defection. Win-stay, lose-shift quickly corrects mistakes, and it exploits chronic cooperators.
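Win-stay, lose-shift (sometimes called Pavlov) can likewise be sketched in a few lines of Python. The demonstration below, an illustration assumed from the strategy’s description rather than taken from Nowak and Sigmund, shows how a single accidental defection against a chronic cooperator ‘wins’ and is therefore repeated indefinitely:

```python
def win_stay_lose_shift(history_self, history_opp):
    """Repeat the previous move if the opponent cooperated (a high payoff);
    otherwise switch moves. Cooperate on the first move."""
    if not history_self:
        return 'C'
    if history_opp[-1] == 'C':                       # win: stay
        return history_self[-1]
    return 'D' if history_self[-1] == 'C' else 'C'   # lose: shift

def always_cooperate(history_self, history_opp):
    """A chronic cooperator."""
    return 'C'

# Round 1: an accidental defection against a chronic cooperator.
mine, theirs = ['D'], ['C']
for _ in range(5):
    mine.append(win_stay_lose_shift(mine, theirs))
    theirs.append(always_cooperate(theirs, mine))
# The defection paid off, so win-stay, lose-shift keeps defecting,
# exploiting the cooperator on every subsequent round.
```

Between two win-stay, lose-shift players, by contrast, an accidental defection resolves itself: both players defect once in response, both are then punished, and both shift back to cooperation, avoiding tit for tat’s endless echo of mutual defections.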
It is remarkable that we scientists make decisions, sometimes difficult and sometimes emotion-laden, based on strategies similar to those used by some single-celled organisms. Success of tit for tat and win-stay, lose-shift in computer games of Prisoner’s Dilemma does not imply that these strategies are appropriate guides for interactions with fellow scientists. Experience shows that the extremes of total cooperation and total competition are also viable for some scientists, although the ‘hawks’ do take advantage of the ‘doves’. Some doves react to being repeatedly taken advantage of by becoming either bitter or hawkish. Tit for tat seems like a more mature reaction to being exploited than does rejection of all cooperation.
Which strategy is best for science? Both cooperation and competition are stimulating to scientific productivity, and in different individuals either appears to be able to give job satisfaction by fulfilling personal needs. Communication of scientific ideas is clearly a win-win, or non-zero-sum game [Wright, 2000]. On the other hand, academic science is being forced into an overly competitive mode by the increasing emphasis on publication records for both funding and promotion decisions [Maddox, 1993]. Personally, I enjoy cooperation more and I unconsciously seem to use tit for tat, achieving cooperation most of the time without the sucker’s disadvantage.
“And though I have the gift of prophecy, and understand all mysteries, and all knowledge; and though I have all faith, so that I could remove mountains, and have not charity, I am nothing.”
References:
Jarrard, R. D. (2001). Scientific methods. Online book, URL: http://emotionalcompetency.com/sci/booktoc.html.
Article References:
- Axelrod, R., and W. D. Hamilton, 1981, The evolution of cooperation, Science, 211, pp. 1390-1396.
- Maddox, J., 1993, Competition and the death of science, Nature, 363, p. 667.
- Nowak, M., and K. Sigmund, 1993, A strategy of win-stay, lose-shift that outperforms tit-for-tat in the Prisoner’s Dilemma game, Nature, 364, pp. 56-58.
- Rapoport, A., and A. M. Chammah, 1965, Prisoner’s Dilemma: A Study in Conflict and Cooperation, Univ. of Michigan Press: Ann Arbor.
- Wright, R., 2000, Nonzero: The Logic of Human Destiny, Pantheon Books: New York.