The last chapter of Philosophy of Biology is about information and communication. A topic not discussed there is “costly signaling” models, or the “Handicap Principle.” This family of ideas has been very influential in biology over the last twenty years or more. It begins with a paper by Amotz Zahavi, published in 1975. Alan Grafen took up the idea in a series of papers, and a lot of people now regard the principle as pivotal in the evolutionary explanation of communication. I’ve always been a bit unconvinced, without being enough of an expert to be confident in my judgments.† But a few days ago, while working through a signaling model for an empirical application (which I’ll discuss here some other time), I worked out one thing that I think really is a problem – a mistake – in how these ideas are applied. Perhaps this argument has been covered by others before – it’s a huge literature – but I’ll go through it here in my own terms.
The story is often told like this. Early discussions of signaling by animals tended to buy into a rather naive and cooperative picture of animal life, in which information exchange was not surprising. People like John Maynard Smith, Richard Dawkins, and John Krebs criticized this work in the 1970s as part of a shift towards a more rigorous and gene-centered view of evolution. When interests between animals do not coincide, we should expect signaling systems to be undermined by dishonesty, bluffing, or withholding of information. Zahavi, thinking about empirical cases, realized that one way for honesty to be maintained is through a cost associated with signals. If signal use is costly and there is some reason why dishonest senders of signals pay more than honest ones, then informative signaling can be stable. Honest signalers voluntarily “handicap” themselves in their communicative behaviors, in a way that dishonest ones can’t afford to do.
The model I was looking for a few days ago was a model of signaling between opponents in fights. For example, how could honest signaling of aggressive intention, or fighting prowess, be stable in the face of temptations to exaggerate or bluff? William Searcy and Stephen Nowicki, in their very good 2005 book The Evolution of Animal Communication, sketch the problem to be solved in the familiar way, and then they say: “Honesty in aggressive signaling can be rescued by Zahavi’s handicap principle. The first theoretical model to show explicitly how this rescue can be effected was provided by Enquist (1985).” They note that Grafen, in his seminal 1990 paper, also interpreted the Enquist paper in this way. So how does Magnus Enquist’s model work? The main ideas are simple (and here I follow Searcy and Nowicki’s presentation). Suppose a population contains strong and weak individuals who compete for some resource. Two signals are available, A and B. The value of the resource is v, the cost of losing a fight if you were equally matched with your opponent is c, and the cost of losing a fight if you are a weak individual who fought a strong one is d. (Losses of that second kind are assumed to be more damaging, so d > c.)
Consider the following behavioral rule:
If you are strong, then when you encounter another individual, initially produce signal A. If the other animal produces A, then attack. If the other produces B, repeat your A signal and attack if the opponent does not concede.
If you are weak, then when you encounter another individual, produce B. If the opponent produces A, then back down. If the opponent produces B, then attack.
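For concreteness, the two-part rule can be written as a small decision function. This is just a sketch; the names (“strong”, “A”, and so on) and the function itself are mine, not Enquist’s notation.

```python
# A sketch of the honest behavioral rule described above. The type and
# signal names are illustrative, not from the original model.

def honest_response(my_type, opponent_signal):
    """Return (signal produced, action taken) for an honest individual."""
    if my_type == "strong":
        # Strong: produce A, and attack whether the opponent signals A or B
        # (repeating A first if the opponent signalled B and did not concede).
        return ("A", "attack")
    else:
        # Weak: produce B; back down on seeing A, attack on seeing B.
        action = "back_down" if opponent_signal == "A" else "attack"
        return ("B", action)
```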
This rule is “honest” because signals are reliably correlated with strength.* Can a population playing this strategy be invaded by a “bluffing” type which produces A whether strong or weak? These individuals, when weak, would produce A and hence always beat the other “honest” weak individuals, but they would also end up in fights with strong individuals, who do not back down.
Assume, for simplicity, that strong and weak types are equally common in the population. (It’s best to assume that strength is not inherited – it’s a consequence of something like food supply in your early years.) Assume that if two individuals of the same type end up in a fight, their chance of winning is 1/2. If you win such a fight you gain v – c. If you win against an opponent who backs down without fighting, you gain v. If you back down, you neither gain nor lose anything. If you lose a fight, your loss is c or d, depending on the nature of the fight, as described above. If you work through the algebra, it turns out that the honest behavioral profile can resist invasion by the dishonest one provided that: d – c > v/2. Honesty can be maintained if it is very dangerous for a weak individual to get into fights with strong individuals. If that cost (d) is high in relation to c and v, then honesty is better.
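That condition can be checked directly. Here is a minimal sketch in Python, under some assumptions the text leaves open: a strong individual always beats a weak one, the bluffing mutant is rare (so it only ever meets honest opponents), and only the payoffs of weak individuals matter for the comparison, since strong individuals behave identically under both strategies.

```python
# Comparing the expected payoffs of honest and bluffing weak individuals
# in an honest population, following the payoff scheme in the text.
# Assumptions (mine, where the text is silent): strong always beats weak,
# the bluffer is rare, and strong/weak types are equally common.

def honest_weak_payoff(v, c, d):
    """Expected payoff of an honest weak individual in an honest population."""
    vs_strong = 0.0                        # sees A, backs down: no gain, no loss
    vs_weak = 0.5 * (v - c) + 0.5 * (-c)   # both signal B, equal fight
    return 0.5 * vs_strong + 0.5 * vs_weak

def bluffing_weak_payoff(v, c, d):
    """Expected payoff of a rare weak bluffer, who always signals A."""
    vs_strong = -d   # the strong opponent attacks; the weak bluffer loses badly
    vs_weak = v      # the honest weak opponent sees A and backs down
    return 0.5 * vs_strong + 0.5 * vs_weak

def honesty_stable(v, c, d):
    """Honesty resists invasion when honest weak types do strictly better."""
    return honest_weak_payoff(v, c, d) > bluffing_weak_payoff(v, c, d)

# The direct comparison agrees with the algebraic condition d - c > v/2:
for (v, c, d) in [(4, 1, 4), (4, 1, 2), (2, 0.5, 2), (10, 1, 5)]:
    assert honesty_stable(v, c, d) == (d - c > v / 2)
```

The arithmetic behind this: an honest weak individual expects v/4 – c/2 (it backs down against strong opponents and has an even fight against weak ones), while a rare bluffer expects v/2 – d/2; the first exceeds the second exactly when d – c > v/2.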
That is a nice simple model. I think it has nothing to do with the idea that “signal cost” can maintain honesty. The signals themselves, A and B, are assumed to be free (as Grafen notes). It’s true that the dishonest type “pays a cost” that the honest type does not pay, as the dishonest type risks those dangerous fights. But that is just an ordinary part of the payoffs governing the situation; it’s not a feature of signaling. That d-versus-c asymmetry would still exist if there was no signaling going on at all, as long as weak individuals sometimes end up fighting strong ones, and suffer more when they lose. The Zahavi idea was that populations will evolve signal systems that are intrinsically costly to use, because dishonest individuals can’t afford to use them: colorful plumage, huge antlers. That is one possible way for honesty to be maintained, but not the only way. Another way is for the risks of being caught bluffing to be too high – that is the essence of the Enquist model.
Is this just a verbal matter, one that has to do with how the word “cost” is interpreted in the phrase “costly signaling”? To some extent it is, but let me make the case that it matters. Here are some of the final paragraphs of Grafen’s 1990 paper. First, an interesting word of introduction:
“Some readers of an earlier version of this paper have flatteringly suggested that the signalling games are my own invention, and that the connection with Zahavi’s writings on the handicap principle is rather remote…. To show that the connection is strong, I want to emphasize how simple the basic arguments are.”
“Granted that a signalling system exists, and that receivers are behaving selfishly, it must be that signalling is honest. Receivers could evolve a different rule of interpretation, but, at the equilibrium, a different rule could not be advantageous. This argument for honesty is extremely general.”
OK so far.**
“Now suppose the interests of the signaller are not served by such an accurate interpretation of the signal. How can it be that the signaller does not choose to alter his signal to exploit the interpretation of the receivers? It must be that it would be costly to do so. Hence the only guarantee of honesty on the part of the signalers can be that giving what would otherwise be “advantageously untruthful” signals must be costly.”
That is true, so long as the word “costly” is understood very generally – so it refers to anything about the situation that penalizes a dishonest sender. Anything that makes a dishonest sender worse off is sufficient. That can reasonably be called a “cost” for dishonesty – or a penalty, or a disadvantage; these mean the same thing when understood so broadly. But the production of signals themselves need not be costly for anyone.
“Suppose further that the signallers lie on a one-dimensional continuum of the quality signalled, and that to be assessed as of higher quality is advantageous. Then for a lower quality of signaller not to gain by ‘pretending’ to be of higher quality, it must be that the signal that means ‘I am of high quality’ is more costly to the low quality than to the high quality male. Hence signalling more must be more costly to worse males.”
Again, what follows is just that something about the situation – which might be a special set of interactions that the dishonest individuals are more likely to get into – penalizes dishonest senders.
“These verbal arguments are really just as convincing as all the mathematics, and their language makes clear the strong connection with Zahavi’s arguments. This shows that the models given in this paper really are models of Zahavi’s handicap principle.”
This is where I disagree. Zahavi thought he had found a particular mechanism that would enforce honesty. He did not think he was just restating the truism: “if honesty is maintained then something must be penalizing dishonest individuals.” But the verbal argument above is just a version of that truism.
I do not say that Grafen’s models are trivial – far from it. Nor do I think Zahavi’s ideas are trivial, or of no value. Again, far from it. But I do disagree with Grafen’s interpretation of these models, and also with Searcy and Nowicki’s. And the attempt to regard Zahavi’s “Handicap Principle” as a completely general solution to the problem of honest signaling in situations of conflicting interests is a mistake. In particular, the Enquist model is a smoking gun: the only way to broaden Zahavi’s principle so it covers the Enquist model is to make Zahavi’s principle trivial.
• • • • •
Perhaps what I have above is enough. But here’s a bit more material, from both the sources I’m discussing.
Searcy and Nowicki say of the Enquist model: “the requirements of the handicap principle are met, in the sense that the more effective signal is costly, and the cost falls more heavily on individuals of lower quality.” When they say “the more effective signal is costly,” I assume they mean this: if you send message A – whoever you are – you are more likely to get into fights, and those fights tend to go badly for weak individuals. I think it’s misleading to say here that “the more effective signal is costly.” The signals themselves are free. What’s not “free” is what happens if you say the wrong thing to the wrong people.
Grafen, on his first page, says that his paper “affirms Zahavi’s (1987) claim that natural selection on a wide class of signals necessarily incurs waste in accordance with the handicap principle.” Again, I don’t think there’s “waste” in the signaling behaviors of the Enquist model. There’s just the risk of saying too much.
† In a couple of papers written with Manolo Martinez, we show that informative signaling is possible at equilibrium despite massive divergence of interests and no signal costs. These are 3-state games, and fairly quirky in their structure. I think they suffice (along with others discussed by other people) to show that signal cost is not strictly necessary for the maintenance of signaling, but these games might be seen as biologically unrealistic and unimportant. The Enquist model, discussed here, is not like that.
There’s a bit of discussion of costly signaling in another of my papers, here.
* You might say it’s also “honest” in another sense. Given the way signals are being interpreted (back off if you are weak and you see A, for example), these signals have content or significance for receivers, and this content is never deceptive. Receivers always produce actions well-suited to the underlying state of the sender.
** OK I think…. I am not completely sure how to interpret that bit of the passage. This might connect to the note just above (*).
*** People sometimes see the task of philosophy of science as “clarifying” scientific concepts. That’s not what I think (see Chapter 1 of Phil. Bio, and this paper). In this post, though, I am doing some clarifying. However, I don’t think what I do here is particularly philosophical. It’s just taking a close look at some scientific arguments.