When the model breaks... or does not work at all

Anyone who has ever tried to predict the future (even the near one) has been forced to build, implicitly or explicitly, a model of reality: a description of an object or a chain of events, focused on the matter of interest, that intentionally leaves out elements thought to be of little or no import. For objects this model is called a "concept", like the concept of a table, which consists of a horizontal surface with some kind of support (e.g. legs) strong enough to hold the surface and whatever you may deposit on top of it. Under this definition, a desk, a nightstand or a dining table are tables, but so are the counter of a bar, a wall-mounted shelf or the foldaway tray in your airplane seat. It is remarkable how well we manage to define exceptions, and even exceptions to the exceptions, in order to make these concepts work. The archetypal table has four legs, but we have no problem recognizing single-legged tall bar tables, even if they might look very similar to some bar stools. In fact, I have seen my kids using tall stools as their own size-adapted tables.

Similarly, the moment we start to perceive the reality around us we also start to build models of it and of ourselves. Understanding that our hands are our own and that, in most cases, we get to decide what happens with them is a model of our hand: it does not take into account the weather, the state of the economy, or the words you are uttering at the same time, just your mind, your hand, and the possible mechanical restrictions around it that might prevent it from doing what you want it to. These models get enriched and refined as we increase our experience of the world around us, but also second-hand, through the accounts that other people give of their own experiences: if we see one awful review for a hotel we might dismiss it as biased and still give it a chance, but if we see a lot of them we will definitely incorporate that information into our model of the hotel and search for other lodging for our holidays (whenever they are possible again).

Photo: BBC World Service

Even if it is not all that obvious, every model makes a number of assumptions that are often not explicit but are still there. Going back to the hotel reviews, one assumes that a customer reporting that the sheets were not clean is not a germophobe (for whom every sheet is dirty). We also expect, possibly mistakenly, that all the rooms in the hotel receive a similar level of maintenance, so if one shower failed it is likely that many showers can fail. The same applies to the staff, whom we expect to be more or less equally well disposed to help us, discounting the fact that they are, in the end, human beings who can have a bad night, a fight with their spouse or a toothache that sours their mood.

One type of model that I find particularly interesting is the one for repeated actions. It is generally a safe assumption that, if something has worked a few times, it will happen the same way every time we try. This is something we learn very quickly with gravity: if I drop my pencil once, twice, three times, it falls to the floor every time and then stops, and it is likely to keep happening like that regardless of how many times I do it. However, if I drop the pencil a sufficient number of times it is likely that it will eventually break, forcing me to add an exception to the rule: if you drop an object, it falls to the ground, but it might break on impact if it is already damaged. The underlying truth is that we are discounting the hidden damage that happens on every drop under the (implicit) assumption that it is small enough to be safely ignored. And that is correct in most cases.

A few years ago Karen and I witnessed a very interesting example of how someone's mental model breaks in the face of the evidence. We were at a wedding and, very early into the ceremony and in spite of our protests, a friend of ours, a very avid photographer, insisted on stepping up and taking a picture, even though there was an official photographer. She quickly came back to her seat, delighted with her treasure and flaunting the fact that nothing had happened. Sure enough, a couple of minutes later she went ahead and took a second shot, with an identical lack of response from anyone. Soon she was not even bothering to return to her seat between shots, but after twenty pictures or so the priest became too distracted and asked, "Would that young lady please take her seat and let the photographer and myself do our jobs?". She came back to her seat beside us, doubly infuriated: by being singled out in front of the whole church and by us being right in our misgivings about her photographic exploits. Later on, at the banquet, we were discussing the event and she complained, "I do not understand it. Nobody complained about the first two or three pictures. What is so special about the last one that got everybody so upset?". Of course, as in the previous example with the pencil, she was discounting the "damage", the irritation she was causing the priest, which, when accumulated, made him protest. Clearly, her model was not detailed enough, and it stopped working after a sufficiently high number of repetitions.

However, this experience of accumulated events turning bad is not limited to negative events. Even positive events can turn negative when accumulated. Take the example of having an orange: it is certainly better than having no oranges at all; if you are hungry you can eat it, otherwise you can exchange it for something else. If you have two oranges, you are better off than if you had one, because now you can eat one and exchange the other. Unfortunately, if you are not hungry you will have two oranges to exchange, and that is already harder to do than exchanging just one. If you go up to ten oranges, you start to have a storage or transport problem, because you cannot carry them in your bare hands and you are certainly not going to eat them all at once. With a hundred oranges you might be forced to buy a couple of crates to hold them, and they might even start to rot before you can make use of them. This is what economists call the law of diminishing returns: beyond a certain point, each additional unit of any good that you possess brings less and less utility, to the point that it might start to cost you something (negative utility).
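To make the orange example a bit more concrete, here is a minimal sketch of it as a toy utility function. The numbers are entirely made up for illustration (they do not come from any real economic model): each orange is worth one unit of value, but only the first few can realistically be eaten or exchanged, and every orange in the pile carries a small storage and handling cost.

    # Toy model of diminishing (and eventually negative) returns.
    # Invented numbers: an orange is worth 1 unit, only the first 5
    # can be eaten or traded, and each orange costs 0.15 to store.

    def total_utility(n, value=1.0, usable_limit=5, storage_cost=0.15):
        usable = min(n, usable_limit)   # oranges you actually get value from
        return usable * value - storage_cost * n

    for n in range(1, 11):
        marginal = total_utility(n) - total_utility(n - 1)
        print(f"orange #{n:>2}: marginal utility {marginal:+.2f}, "
              f"total {total_utility(n):+.2f}")

Under these made-up numbers, the first five oranges each add value, every orange after that actively subtracts a little, and with a large enough pile the total utility itself dips below zero, which is exactly the "it might start to cost you something" part of the law.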

But there are many situations where the returns diminish really fast: eating one meal is great, but eating two meals in a single sitting might be beyond your physical capacity or, at least, leave you very uncomfortable. The same thing happens with anything that is costly to own, like a car or a house: having one is very helpful, but having a second one will cost you double even if you can only use one at a time. If you have so much money that the cost of running the house does not dent your estate, you get the additional benefit of choice: shall I drive the sports car or the sedan? Shall I stay in town or at the beach? But for us mere mortals, the limit for this kind of thing is one. And for others, like yachts, helicopters, football teams or private islands, the limit is zero because, fun as it would be to have any of these things, their cost simply falls outside our reach. And you cannot do much flying with 0.01 helicopters.

So next time you find yourself with a prediction that has not been accurate, take a look at the hidden assumptions you might have made. Perhaps you broke the model by using it outside its conditions of applicability. Or perhaps there is a hidden factor somewhere that can strongly affect the outcomes. In the meantime, I hope that your weekend turns out as good as your expectations. See you on Monday.

