Some praise and vilification of rationality in economics


Often, when I explain what I do in my studies, i.e. mathematically formalised social and economic models based on the hypothesis that agents are rational, I am told that the concept of rationality is outdated and that, consequently, models based on this hypothesis are not valid. I used to have a tendency, perhaps a little pedantic, to mechanically quote the famous statistician George Box: ‘All models are wrong but some are useful’, a line that is always handy to bring out in society. However, after a few such discussions, some reflections emerged that I discuss here.

Rationality for economists

The concept of rationality in economics is old but still debated and controversial. A rather incredible theorem on which the whole of modern economic theory rests is the following: a preference relation can be represented by a utility function only if it is rational (and, on a countable set of alternatives, rationality is also sufficient). For my non-economist friends, let us detail here what a rational preference relation is. Rational preferences over a set of alternatives, say $X$, are defined by a complete and transitive binary relation, say $\succeq$. Formally:

  • (i) completeness: $\forall x,y \in X$, either $x \succeq y$ (i.e. $x$ is at least as good as $y$), or $y \succeq x$ (i.e. $y$ is at least as good as $x$), or both, in which case $x \sim y$ (i.e. I am indifferent between $x$ and $y$)
  • (ii) transitivity: $\forall x,y,z \in X$, if $x \succeq y$ and $y \succeq z$, then $x \succeq z$
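
To make these two axioms concrete, here is a minimal Python sketch (my own illustration, not a standard implementation) that checks completeness and transitivity of a relation encoded as a set of ordered pairs:

```python
from itertools import product

# A preference relation on a finite set X, encoded as ordered pairs:
# (x, y) in R means "x is at least as good as y".
def is_complete(X, R):
    # Completeness: for every pair, at least one direction must hold.
    return all((x, y) in R or (y, x) in R for x, y in product(X, X))

def is_transitive(X, R):
    # Transitivity: x >= y and y >= z must imply x >= z.
    return all((x, z) in R
               for x, y, z in product(X, X, X)
               if (x, y) in R and (y, z) in R)

X = {"apple", "banana", "cherry"}
R = {("apple", "banana"), ("banana", "cherry"), ("apple", "cherry"),
     ("apple", "apple"), ("banana", "banana"), ("cherry", "cherry")}

print(is_complete(X, R) and is_transitive(X, R))  # True: R is rational
```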

This is an extremely strong definition. Indeed, the completeness axiom implies that an agent can always compare any two alternatives, i.e. they always have some relative opinion on each of the alternatives offered to them. As it happens, we do not always have an opinion on possible alternatives, and it is difficult to produce rankings, however approximate, of certain alternatives. Secondly, the axiom of transitivity is not so often verified experimentally (cf. the Condorcet paradox, 1785), surprising as that is given how trivial the intuition behind this axiom seems. We have known, in fact, since Allais and his famous paradox (1953), that homo sapiens sapiens does not always make decisions in accordance with what orthodox theories predict. One of the ambitions of experimental game theory is to build a sort of data bank on the real behaviour of individuals, allowing the effective testing of the alternative theories that theorists develop to compensate for the shortcomings of the theory of von Neumann and Morgenstern (the fathers of game theory, who made a sort of remix of the above-mentioned theorem by adding some other fancy axioms, such as independence, so that a stylish utility function can be found to represent a preference relation over lotteries).
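
To see the Allais paradox in numbers, here is a hedged sketch (the lottery payoffs and probabilities are the standard ones from the literature; the two utility functions are arbitrary illustrations of mine): whichever utility function you plug in, an expected-utility maximiser chooses consistently across the two pairs, whereas most real subjects pick 1A together with 2B.

```python
# Standard Allais lotteries, encoded as (probability, prize) pairs.
L1A = [(1.00, 1_000_000)]
L1B = [(0.10, 5_000_000), (0.89, 1_000_000), (0.01, 0)]
L2A = [(0.11, 1_000_000), (0.89, 0)]
L2B = [(0.10, 5_000_000), (0.90, 0)]

def expected_utility(lottery, u):
    return sum(p * u(prize) for p, prize in lottery)

# Two arbitrary utility functions: risk-neutral and very risk-averse.
for name, u in [("risk-neutral", lambda x: x),
                ("very concave", lambda x: x ** 0.01)]:
    first = "1A" if expected_utility(L1A, u) > expected_utility(L1B, u) else "1B"
    second = "2A" if expected_utility(L2A, u) > expected_utility(L2B, u) else "2B"
    print(name, first, second)
# risk-neutral -> 1B 2B; very concave -> 1A 2A. No utility function
# yields the (1A, 2B) pattern most experimental subjects exhibit,
# because the common 0.89 component cancels out under expected utility.
```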

Now that the reader knows what a rational preference relation is, he or she will have no difficulty in realising that the implications of the above theorem are absolutely crazy. As a reminder, all of modern economic theory, including game theory of course, makes extremely regular, if not systematic, use of these famous utility functions (for my non-econ friends, a utility function simply represents the satisfaction of an economic agent). I invite the reader to stop reading this post for two minutes to realise the power of these assumptions.

The branch of economics that analyses why and how individuals make their choices is called decision theory. However, social interactions between individuals are not directly studied in decision theory; one must turn instead to the fabulous field of game theory to analyse these interactive decisions. A question the reader will naturally ask is whether Bentham’s utilitarianism can provide a rational basis for the collective evaluation of decisions. This is indeed an interesting question, and a fabulous economist by the name of Kenneth Arrow thought pretty much the same, wondering whether it is possible to neglect the intensity of utilities and retain only their comparative informational content, as we have seen above (‘better than’, ‘worse than’). He demonstrated in his book Social Choice and Individual Values (1951) that if individuals’ evaluations of the possible options are simple preferences, i.e. purely qualitative and relative (in practice: a ranking of the possible options), thereby neglecting intensities, then collective choices cannot satisfy the criteria of rationality. Specifically, through Arrow’s impossibility theorem, he mathematically demonstrated that social preferences constructed by aggregating transitive binary relations are not always transitive! As a result, it is impossible to find a preference aggregation rule based on rankings alone that simultaneously satisfies universality (unrestricted domain), unanimity and independence of irrelevant alternatives, while being non-dictatorial. Again, the reader probably needs to pause for two minutes to appreciate the folly and genius of this impossibility theorem.
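
For the sceptical reader, the heart of this intransitivity can be seen in a few lines of code. This is just the Condorcet cycle mentioned above, with three made-up voters whose individual rankings are all perfectly transitive:

```python
# Three voters with transitive (rational) rankings over {a, b, c}.
rankings = [("a", "b", "c"),   # voter 1: a > b > c
            ("b", "c", "a"),   # voter 2: b > c > a
            ("c", "a", "b")]   # voter 3: c > a > b

def majority_prefers(x, y):
    # x beats y if a majority of voters rank x above y.
    votes = sum(r.index(x) < r.index(y) for r in rankings)
    return votes > len(rankings) / 2

for x, y in [("a", "b"), ("b", "c"), ("c", "a")]:
    print(f"{x} beats {y}: {majority_prefers(x, y)}")
# a beats b, b beats c, yet c beats a: the majority relation cycles,
# so it is not transitive even though every individual ranking is.
```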

But let’s go back to what has been driving us since the dawn of time, namely game theory. I may be speaking with more jargon here (hoping that wasn’t already the case, though on a first re-reading I unfortunately doubt it) and I apologise to the uninformed reader, but rehashing a course in strategic game theory is not the objective of this blog post. In strategic games, something of the order of common knowledge (i.e. of each player’s payoffs in all possible configurations) is necessary for equilibria to emerge. But common knowledge potentially requires infinite computing power from each player, which is obviously not realistic. Just one example to illustrate my point: chess. By applying backward induction, it is theoretically possible to compute a winning strategy for one of the two players, or at least a strategy that guarantees a draw. The problem is that the number of nodes to be traversed in the tree induced by the extensive form of a chess game vastly exceeds the number of atoms in the observable universe (the Shannon number puts the game tree at around $10^{120}$, against roughly $10^{80}$ atoms)! This is where bounded rationality comes in.
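
Before we get there, here is what backward induction looks like on a toy tree (a sketch of the textbook minimax procedure, not anything close to a chess engine; on chess, this same recursion would need on the order of $10^{120}$ node visits):

```python
# Backward induction (minimax) on a tiny two-player zero-sum game tree.
# A node is either a payoff to player 1 (leaf) or a list of child nodes.
def backward_induction(node, maximising):
    if isinstance(node, (int, float)):   # leaf: payoff to player 1
        return node
    values = [backward_induction(child, not maximising) for child in node]
    return max(values) if maximising else min(values)

# Player 1 moves first, player 2 replies; payoffs are to player 1.
tree = [[3, 5],      # after each player 1 move, player 2 picks the min
        [2, 9],
        [4, 1]]
print(backward_induction(tree, maximising=True))  # 3: play the first move
```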

Bounded rationality

The main idea of bounded rationality theory (due to Herbert Simon) is that in a real situation, an agent is not able to find an optimal alternative or solution to a decision problem (e.g., chess, an animal’s search for food, etc.) in the strict mathematical sense, because of limited resources (mainly time and energy, but also information, intelligence, memory, etc.); instead, the search for a solution stops as soon as the agent has found a satisficing solution, i.e. one bringing a sufficiently high level of satisfaction. Quoting Samuel Eilon:

Optimizing is the science of the ultimate, satisficing is the art of the feasible.
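
A minimal sketch of the contrast (the alternatives, the value function and the aspiration threshold are toy choices of mine): an optimiser evaluates every alternative, while a satisficer stops at the first one that clears its aspiration level.

```python
import random

random.seed(0)

def satisfice(candidates, value, threshold):
    # Simonian search: examine options one by one and stop at the
    # first alternative that is "good enough".
    examined = 0
    for c in candidates:
        examined += 1
        if value(c) >= threshold:
            return c, examined
    return None, examined

# Toy decision problem: find a high-valued option among 10,000 draws.
options = [random.random() for _ in range(10_000)]
value = lambda x: x

best = max(options, key=value)   # optimising: scan everything
good_enough, seen = satisfice(options, value, threshold=0.9)
print(f"optimum {best:.4f} after 10,000 evaluations; "
      f"satisficing found {good_enough:.4f} after {seen} evaluations")
```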

Paradoxically - and tautologically - the way we model bounded rationality today may not be the best; nevertheless, it already yields satisfactory results, whose importance economists will no doubt soon recognise in their modelling.

However, bounded rationality is not really how AI optimizers work (I’m rather uncertain about the next few sentences; they reflect my current understanding of an optimizer, but I may be wrong). The trouble with optimizers is that they will never be satisfied with their goals being 99% achieved. In fact, even if it is 99.9% certain that the goal is achieved, the AGI will restlessly try to increase the probability further, even if this means using the computing power of the whole earth to drive the probability it assigns to its goal being achieved from 99.9% to 99.91%. Perhaps it would be interesting to make future very powerful, optimising AI systems operate in a bounded-rationality mode in the Simonian sense. It could be something to work on, as it could potentially lower the likelihood that an optimizer AGI leads to catastrophic scenarios. Idk, just droppin’ the idea, but I don’t know the technical tractability at all. However, careful readers might object that, in any case, powerful and complex AI systems such as TAI and AGI will not necessarily be well modelled as rational optimizer agents. For instance, black-box AI models might be governed by a complex network of decision-making heuristics that are not easily captured by a utility function to maximise.
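
With the same caveat that all of this is speculative, here is a toy model of that worry (the diminishing-returns curve and all numbers are pure assumptions of mine, not a model of any real system): a satisficer with a target stops, while an optimiser with no stopping rule never does.

```python
# Toy model: an unbounded optimiser keeps paying compute for
# ever-smaller probability gains; a Simonian satisficer stops
# at an aspiration level.
def success_probability(compute_units):
    return 1 - 0.5 ** compute_units   # diminishing returns to compute

def satisficer_compute(target):
    c = 0
    while success_probability(c) < target:
        c += 1
    return c

print(satisficer_compute(0.999))  # stops after 10 units of compute
# An optimiser with no stopping rule would increment c forever:
# success_probability approaches 1 but never reaches it exactly.
```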

Finally, to finish what I started in the first paragraphs of this blog post, I think it is appropriate to go back quickly to what economists generally think of this conceptual tool, the rationality of agents. Actually, most economists, even the most liberal ones, abandoned the myth of homo oeconomicus decades ago. The perfect rationality of agents has since been discarded (recall e.g. the Allais paradox), and even though many models are still based on this hypothesis, many refinements have been introduced to better account for reality: for example, taking into account uncertainty, agents’ risk aversion, their beliefs about the state of the world (Bayesian updating), etc. A closer look at the fabulous field of game theory suffices to show that rationality can be formalised in quite different ways, and that it is not necessarily understood in the naive sense of maximisation of expected value or of expected utility (if you don’t know the difference between the two: (1) omg it’s horrible, (2) I suggest you take a look at the St Petersburg paradox to understand); it can instead involve taking into account possible errors on the part of each player’s opponents, and factoring this consideration itself into the calculations attributed to each of those opponents, as in the definition of perfect or sequential equilibria. Thus, the assumption of rationality should not be totally discarded simply because it rests on axioms that are not always representative of reality. For example, in voting theory, laboratory experiments show that agents tend to behave rather like rational agents in the sense of von Neumann and Morgenstern (at least relatively more so than in other experimental game theory settings). As I mentioned above, most models in economics rely on some sophistication beyond the simple rationality of agents to come as close as possible to reality.
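
For readers meeting the St Petersburg paradox for the first time, a quick numerical sketch of that difference: the expected value of the game diverges, while expected log-utility (Bernoulli’s resolution) converges.

```python
import math

# St Petersburg game: flip a fair coin until the first heads; if heads
# first appears on flip k (probability 2**-k), the prize is 2**k.
def partial_sum(u, terms=50):
    return sum(2.0 ** -k * u(2.0 ** k) for k in range(1, terms + 1))

print(partial_sum(lambda x: x))   # expected value: 50.0, and it keeps
                                  # growing linearly as terms increases
print(partial_sum(math.log))      # expected log-utility: ~1.386,
                                  # converging to 2*ln(2)
```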

However, the trouble is that mathematicians and economists are somewhat castigated for coming to unpleasant or unrealistic conclusions because the models they develop are based on reductive assumptions, and therefore, by implication, these same mathematicians supposedly promote a model of the organisation of the social world based on a kind of Machiavellian cynicism. I think there is a kind of confusion here between a positive and a normative vision of what we are supposed to take away from economic models, whereas it seems to me that the two are more independent than they appear. An example to illustrate my point: in classical physics, many systems tend towards a situation of equilibrium. Since mathematical economics borrows a substantial part of its analytical tools from physics, most of the dynamic theories it develops strive to show how an economic system would evolve towards a given equilibrium situation if the hypotheses used to describe it were correct. However, this does not mean that the equilibrium towards which our system would converge in the absence of any external intervention (by the State, for example) is desirable! I think this debate ultimately comes down to the question of whether rationality is merely instrumental, i.e. agnostic about the logic of human action and its motivations (instrumental rationality), or whether it substantially informs them (substantive rationality). Instrumental rationality is simply another formulation of Nick Bostrom’s orthogonality thesis: an agent’s intelligence and its final goal, i.e. its utility function, are independent of each other. This stands in contrast to the belief that, because of their intelligence, most intelligent agents would converge to the same goal.

That’s all for me here, thanks for reading up to this point!