Math lessons from an economist

One of the most astounding aspects of mathematics is how far-reaching it can be: the tools are largely the same, with only small adaptations from one field to another, and the principles and mechanics stay the same throughout all their applications. This can lead to funny situations, like someone with an allegedly "lesser" mathematical education teaching a lesson to hard-core mathematicians.

A few days ago I had one of these moments listening to an interview with economist John List (not to be confused with the serial killer, as he often points out) on the Freakonomics Radio podcast. List rose to prominence by proposing (and demonstrating) a very experiment-centered approach to economics in general and public policy in particular. For decades, if not centuries, economics as a science was largely "axiomatic": there were a number of defining laws that "stood to reason" (but had never been demonstrated experimentally), and economists derived their conclusions from them, often with surprisingly poor results. List championed the need to take these theories into the real world and verify how much of them was true and where an amendment would be in order. And just by this apparently simple statement he opened up a whole new field, now known as "behavioral economics" to distinguish it from "classical economics", which is formally much cleaner but terribly wrong from a practical point of view.

Photo: Vincent Desjardins

The interview circled around the effectiveness of public interventions, and I was deeply impressed by one assessment that I have been oddly aware of for years but never managed to formulate so clearly. His insight is that we can only measure the change (and therefore the effect of any measure) at the edge: whatever change we want to promote, there will always be a certain fraction of people who already do it (e.g. not smoking), so they do not have to change to achieve the desired behavior; on the other end of the spectrum, some hard-liners will be absolutely refractory to any kind of measure and will not change no matter what we do to try to convince them; the change as such can only happen amongst those who have yet to change and are willing to do so under the right circumstances.

This combination of being both able and willing to change is what economists call "the margin", not only in public policy but everywhere else: if a factory has produced 100 items and decides to produce 10 more, we call that a marginal increase of production by 10 units. Similarly, the costs will increase by a certain amount, the marginal cost. In industrial terms, the first unit to be produced might cost thousands or millions, but the following ones will be significantly cheaper, so much so that the difference eventually offsets the initial cost and the factory can turn a profit.
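
To make the arithmetic concrete, here is a minimal sketch of the factory example in Python; all the figures (initial cost, marginal cost, selling price) are invented for illustration:

```python
# Toy break-even illustration: a large one-off cost to get production going,
# then a much smaller marginal cost for every additional unit.
initial_cost = 1_000_000   # tooling, design, setting up the line (hypothetical)
marginal_cost = 50         # cost of producing one more unit (hypothetical)
price = 150                # selling price per unit (hypothetical)

for units in (100, 5_000, 10_000, 20_000):
    total_cost = initial_cost + marginal_cost * units
    profit = price * units - total_cost
    print(f"{units:>6} units: average cost per unit {total_cost / units:9.2f}, profit {profit:>12,}")
```

With these made-up numbers the factory breaks even at 10,000 units: below that the initial cost dominates, beyond it every extra unit adds 100 of profit.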

In public policy the math is the same. If 60% of your population does not smoke, they are certainly not the target of a smoking cessation campaign. When weighing options against one another we have to pay attention to the marginal gain versus the cost: it is obviously much more efficient to reach 62% non-smokers for 2 million than 64% for 10 million. The comparison should be about the increment we get for the price we pay, and in this example the second program costs 2.5 times more per percentage point gained than the first (2.5 million versus 1 million per point). Of course there is always the question of what happens to the additional 2% who do not give up smoking with the cheaper campaign. Would they react to a second campaign of similar characteristics?
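
A quick back-of-the-envelope check of that comparison, using the hypothetical campaign figures from the paragraph above:

```python
# Marginal cost per percentage point of non-smokers gained.
baseline = 60.0  # % of non-smokers before any campaign (hypothetical)

campaigns = {
    "campaign A": {"non_smokers": 62.0, "cost": 2_000_000},
    "campaign B": {"non_smokers": 64.0, "cost": 10_000_000},
}

for name, c in campaigns.items():
    gain = c["non_smokers"] - baseline   # marginal gain in percentage points
    cost_per_point = c["cost"] / gain    # marginal cost per point
    print(f"{name}: +{gain:.0f} points for {c['cost']:,} -> {cost_per_point:,.0f} per point")
```

Campaign A works out to 1,000,000 per percentage point and campaign B to 2,500,000, which is where the factor of 2.5 comes from.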

Perhaps one of the best-known applications of this marginal thinking is the "law of diminishing returns". If we are hungry and happen to obtain an apple, it is a great improvement in our situation (we get a big "utility" or "return"). Getting a second apple might also be helpful in case we are very hungry or want a reserve, and yet the return for the second apple (i.e. the marginal return) is smaller than for the first one. The marginal return for a third apple is smaller still, because we are certainly not going to eat it, so we will now have to carry it. It does not take many apples to land in a situation where the marginal return becomes negative: the additional benefit of getting one more apple is less than the increase in the cost and trouble of carrying it or disposing of it.
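
A small sketch of how that plays out with a made-up utility curve: the benefit of each extra apple shrinks while the cost of carrying them keeps growing, so the marginal return eventually turns negative. The exact function and numbers are arbitrary, chosen only to show the shape of the effect:

```python
import math

def utility(apples: int) -> float:
    benefit = 10 * math.log1p(apples)  # each extra apple adds less benefit
    carry_cost = 1.5 * apples          # carrying/disposal cost grows steadily
    return benefit - carry_cost

previous = utility(0)
for n in range(1, 8):
    current = utility(n)
    print(f"apple {n}: marginal return {current - previous:+.2f}")
    previous = current
```

With these numbers the first apple is worth +5.43, the sixth barely +0.04, and the seventh is already a net loss.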

As an engineer I have always been painfully aware of this circumstance: when developing an application it is easy to catch the first errors, but as soon as you are done with the "easy" ones, detecting and fixing the remaining ones starts to become prohibitively expensive in terms of both time and effort, so it is almost unavoidable that the product reaches the market with a few rare and hard-to-fix problems. Still, I had never looked at it as the marginal cost of fixing the next error, which is a very insightful way of framing it.

The other interesting insight provided by List is the so-called "return to the mean" (more commonly known as regression to the mean). He mentioned it in the context of pilot projects that work really well because, in the experimental setting, one tends to make all the right decisions. Statistically, however, this is a rarity, and once the project gets scaled up there will be occasions where some of the decisions are less than optimal, so the scaled-up implementation will perform worse than the pilot or even fail completely. The phenomenon is called "return to the mean" because, on average, one will typically make a certain proportion of bad calls, and that results in what we call the mean performance. Of course, if in one instance we take special care not to make mistakes, the performance is likely to be better, but it will be a statistical fluke that cannot be maintained in the long run. Once we fall back to the usual level of attention we will start to make the normal amount of mistakes and our performance will literally return to the mean.
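
The effect is easy to reproduce with a toy simulation: score a batch of pilot projects as a fixed baseline plus random noise, keep only those that happened to score well, and then re-run them. The baseline, noise level and selection threshold below are arbitrary:

```python
import random

random.seed(42)
baseline, noise = 50.0, 10.0

# Every project draws its score from the same distribution.
pilots = [random.gauss(baseline, noise) for _ in range(1_000)]

# We only scale up the pilots that looked great...
selected = [p for p in pilots if p > baseline + noise]

# ...but the scaled-up runs are fresh draws from the same process.
rollouts = [random.gauss(baseline, noise) for _ in selected]

print(f"average of selected pilots: {sum(selected) / len(selected):.1f}")
print(f"average of their rollouts:  {sum(rollouts) / len(rollouts):.1f}")
```

The selected pilots average well above 60 while their rollouts land back around 50: nothing about the projects changed, only the luck ran out.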

There are two aspects of this concept that I find really interesting. The first is that it does not deny talent as such: for a given level of effort, the number of mistakes will differ between two people, and the one with fewer errors is the more talented. The second is that it also accounts for the effect of effort: by putting in more work almost everybody's performance can improve, even to the point of overcoming a certain "talent gap". The obvious question is whether that level of effort can be sustained in the long run because, as with the natural rate of mistakes, the capacity to push oneself varies from one person to another.

One direct corollary of these assertions is that the best performance is bound to come from talented, committed people, who not only have a naturally high rate of success but are also able to put in the additional work to boost it. The second corollary links back to the concept of marginal effort: the better we are, the higher the work investment needed to improve by a given amount. And there will always be a point where the increment in performance is just not worth the effort.

The two concepts I have described here are somehow embedded and implicit in the kind of advanced math expected of an engineer, but it is quite funny to see them at work in a social setting, where we are so used to fuzzy statements and mind-numbing statistics.

Historically, the academic approach has had a tendency to compartmentalize, splitting knowledge into areas of "exclusive rights" for different disciplines, but these days it is becoming all the more evident not only that the tools can be applied across very different domains, but also that some of the results can be fed back into what we could call their "scientific precursors". And the surprising aspect of it is that one never knows when an interesting result is going to come out, so it is generally a good approach to stay humble and keep an open mind for opportunities to learn. Have a nice week.
