Incentive Schemes as a Route to Scientific Verity

I was reading this very interesting story in the book “Why Zebras Don’t Have Ulcers” by Robert Sapolsky.

Two scientists, Roger Guillemin and Andrew Schally, were searching for a hormone produced by the brain that would shed light on how the pituitary gland functions. But the two disliked each other so much that one fateful night they split up and went their separate ways, continuing the same research in fierce competition with each other.

“Schally and crew were the first to submit a paper for publication saying, in effect, “There really does exist a hormone in the brain that regulates thyroid hormone release, and its chemical structure is X.” In a photo finish, Guillemin’s team submitted a paper reaching the identical conclusion five weeks later. One might wonder why something obvious wasn’t done a few years into this insane competition, like the National Institutes of Health sitting the two down and saying, “Instead of us giving you all of this extra taxpayers’ money to work separately, why don’t you two work together?” Surprisingly, this wouldn’t necessarily be all that great for scientific progress. The competition served an important purpose. Independent replication of results is essential in science. Years into a chase, a scientist triumphs and publishes the structure of a new hormone or brain chemical. Two weeks later the other guy comes forward. He has every incentive on earth to prove that the first guy was wrong. Instead, he is forced to say, ‘I hate that son of a bitch, but I have to admit he’s right. We get the identical structure.’ That is how you know that your evidence is really solid, from independent confirmation by a hostile competitor. When everyone works together, things usually do go faster, but everyone winds up sharing the same assumptions, leaving them vulnerable to small, unexamined mistakes that can grow into big ones.” — pp. 26–27

I found it super cool that we can have more confidence in the conclusions the two scientists drew precisely because they had every reason to disagree with each other, yet they begrudgingly ended up sharing the 1977 Nobel Prize in Physiology or Medicine.

Inferring intent from actions is a common practice of mine, since I care a lot about the intentions behind them. The main method I use to extract intent is to place an individual in a scenario (like the one above) in which every incentive but one points toward not doing action A, yet they do action A anyway. Such an observation best isolates the individual’s most important motivation for doing A.

For example, consider a particular behavior: why is person A nice to person B? We would like to say, “Well, it’s because person A is a good friend!” However, if person B were rich, another possible explanation is that person A is nice to person B because person B is rich. We cannot differentiate between the two explanations because both could account for the behavior. We can try to weigh the motivations, but everything past that point is speculation.

If you set up the scenario so that person A has every reason not to be nice to person B, but is anyway, then the only explanation left is that person A is a good friend. The setup removes all competing explanations, because the behavior runs against every incentive except the one motive being tested.

Of course, this method has its flaws; incomplete enumeration of motivators is one of them. But if the enumeration is complete, or approximately so, the test skews toward fewer false positives (attributing a motive someone doesn’t have) at the cost of more false negatives (failing to detect a motive they do have).
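The filtering logic above can be sketched as a toy model. Everything here (the function name, the motive labels, the dictionary structure) is my own hypothetical illustration, not anything from the post: each candidate motive either supports or opposes the action in a given scenario, and observing the action rules out every motive the scenario opposes.

```python
def infer_motive(candidate_motives, scenario_supports, action_taken):
    """Return the candidate motives that survive the isolation test.

    candidate_motives: list of motive names
    scenario_supports: dict mapping each motive to True if the scenario
        gives that motive a reason to support the action, False if the
        scenario makes that motive point against the action
    action_taken: whether the person did the action anyway
    """
    if not action_taken:
        # No action observed: the test says nothing about motives
        # (this is where the false negatives live).
        return []
    # Only motives the scenario supports can explain the observed action.
    return [m for m in candidate_motives if scenario_supports[m]]


motives = ["good friend", "B is rich"]

# Everyday scenario: both friendship and wealth support being nice,
# so the explanations cannot be told apart.
ambiguous = infer_motive(
    motives, {"good friend": True, "B is rich": True}, action_taken=True
)
print(ambiguous)  # both motives survive

# Isolating scenario: every incentive except friendship opposes niceness,
# yet A is nice anyway, so friendship is the lone surviving explanation.
isolated = infer_motive(
    motives, {"good friend": True, "B is rich": False}, action_taken=True
)
print(isolated)  # only "good friend" survives
```

The model also makes the flaw in the post explicit: if a real motive is missing from `candidate_motives`, the test can never surface it, which is exactly the incomplete-enumeration problem.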

All this discussion gave me two takeaways:

  1. Incentive schemes are good ways to figure out intentions and verify data, with a low false positive rate.
  2. Given all the subtleties, figuring out intentions in practice is much more complicated than this, since we rarely get to observe such cleanly isolating scenarios.
