Website homepage feed (Jekyll, generated 2023-12-02, https://angelinagentaz.github.io/feed.xml) by Angélina Gentaz (angelina.gentaz@gmail.com): artificial intelligence, effective altruism, animal welfare, game theory, economics, physics, world modelling.

Funny and crazy mathematical beings (2023-05-31, https://angelinagentaz.github.io/posts/2023/05/maths)

<p>This post is just a small collection of mathematical concepts that struck me as crazy when I first heard about them. They are levels of abstraction that are sometimes purely aesthetic in themselves. Enjoy! I will probably keep updating this post as I discover other crazy mathematical beings.</p>
<h3 id="metrics-spaces">Metric spaces</h3>
<p>Imagine you’re on a trip. You’ve got a map, you’re trying to figure out how far you are from your destination, or perhaps you’re looking for the shortest path to it. Well, in a way, that’s exactly what a metric space does - it generalizes the concept of ‘distance’ or ‘closeness’ between points in a set. It’s not just for our usual three-dimensional world, but for more abstract spaces as well.</p>
<p>Thing is, we’ve always taken for granted that the distance between two points is the length of the straight segment between them. It’s just super intuitive: you apply Pythagoras’ theorem and you’re done. But in fact we could very well imagine another definition of distance. You could imagine, for example, measuring distance along a curve instead of a straight line. I mean, why not? Well, that’s where the usefulness of defining what we mean by distance comes in, i.e. establishing a metric. A metric space, fundamentally, is a set together with this ‘distance function’ (then called a metric) that, much like our trusty road map, gives us the ‘distance’ between any two points in the set. And realise that this is quite funny in itself, because a set is an ultra-vast mathematical concept that can contain almost anything and everything in terms of properties or mathematical beings. So defining the metric is basically the key starting point; otherwise you can’t do that much. The notion of distance is probably one of the most basic and most important things in maths: just look at the definition of continuity. You can’t handle many mathematical concepts if you don’t properly understand why distance is such a useful thing to define, I guess. Without distance, the beautifully intricate world of calculus would simply collapse.</p>
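<p>To make this concrete, here is a small sketch (function names and points are mine) of three different metrics on $\mathbb{R}^2$. Each one satisfies the metric axioms, yet they assign different ‘distances’ to the same pair of points:</p>

```python
import math

# Three different metrics on R^2 -- each satisfies non-negativity,
# symmetry, identity of indiscernibles, and the triangle inequality.
def euclidean(p, q):
    return math.hypot(p[0] - q[0], p[1] - q[1])

def manhattan(p, q):  # "taxicab" distance: you may only move along a grid
    return abs(p[0] - q[0]) + abs(p[1] - q[1])

def chebyshev(p, q):  # the number of moves of a chess king
    return max(abs(p[0] - q[0]), abs(p[1] - q[1]))

a, b = (0, 0), (3, 4)
print(euclidean(a, b))  # 5.0  (Pythagoras)
print(manhattan(a, b))  # 7
print(chebyshev(a, b))  # 4
```

<p>Same set, same two points, three different geometries: which balls are ‘open’, which sequences converge, and what continuity means all depend on this choice.</p>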
<h3 id="functional-operators">Functional operators</h3>
<p>Simply put, a functional operator takes one or more functions as input and produces a new function as output. It’s like a function for functions! It’s a step up in abstraction from our usual functions, which take numbers or other values and return numbers or other values. You might thus easily recognize that derivatives and integrals are functional operators. Some of the most important results here are the various fixed point theorems. The best-known fixed point theorem is probably the Brouwer fixed point theorem, which applies to continuous functions from a compact convex set to itself. Without fixed point theorems, game theory would basically collapse, since almost every theorem Nash proved involves a fixed point theorem at some point (lol).</p>
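<p>As a tiny illustration of fixed points: Brouwer’s theorem is non-constructive, but its cousin, the Banach fixed-point theorem for contractions, comes with an algorithm — just iterate. A minimal sketch (names are mine):</p>

```python
import math

# Fixed-point iteration: repeatedly apply f and watch x converge to a
# point where f(x) == x.  For a contraction mapping, the Banach
# fixed-point theorem guarantees this converges to the unique fixed point.
def fixed_point(f, x0, tol=1e-12, max_iter=10_000):
    x = x0
    for _ in range(max_iter):
        x_next = f(x)
        if abs(x_next - x) < tol:
            return x_next
        x = x_next
    raise RuntimeError("did not converge")

# cos is a contraction near its fixed point (the "Dottie number")
print(fixed_point(math.cos, 1.0))  # ~0.7390851332
```

<p>This is exactly what you do when you press the cosine key on a calculator over and over.</p>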
<h3 id="fuzzy-topological-space">Fuzzy topological space</h3>
<p>To understand Fuzzy Topological Spaces, we need to start with traditional topology. In classic topology, we look at mathematical ‘spaces’ and their properties - think of how things connect, cluster, and interact. We’re interested in concepts like continuity, compactness, and convergence, among others. In these spaces, something either definitely belongs to a set or definitely does not - it’s binary, a world of crisp, clear distinctions. Thing is, we often deal with ambiguity, vagueness, and uncertainty in life, right? Fuzzy logic allows us to mathematically handle ‘degrees of truth’, that is, we can say that something belongs to a set to some degree, between 0 and 1. A fuzzy topological space is just like a regular topological space, but instead of classic sets, we deal with fuzzy sets, where membership is a matter of degree. This new kind of space is defined by a fuzzy topology, a collection of fuzzy sets satisfying properties similar to those in classic topology, but nuanced by the fuzziness.</p>
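<p>A minimal sketch of fuzzy membership (the ‘tall’ example and thresholds are mine; the max/min operations are the standard Zadeh ones). Note how the law of the excluded middle quietly fails:</p>

```python
# A fuzzy set is just a membership function into [0, 1] instead of {0, 1}.
# Standard (Zadeh) fuzzy operations: union = max, complement = 1 - membership.
def tall(height_cm):  # degree to which a person counts as "tall"
    return min(1.0, max(0.0, (height_cm - 160) / 40))

def union(mu, nu):
    return lambda x: max(mu(x), nu(x))

def complement(mu):
    return lambda x: 1.0 - mu(x)

print(tall(160))  # 0.0  -- definitely not tall
print(tall(180))  # 0.5  -- tall "to degree one half"
print(tall(200))  # 1.0  -- fully tall

# In crisp logic, "tall or not tall" is always true; in fuzzy logic:
print(union(tall, complement(tall))(180))  # 0.5, not 1!
```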
<h3 id="fourier-series">Fourier Series</h3>
<p>Imagine a vast, tranquil lake. Now toss a pebble into it. What happens? Ripples, right? Waves start spreading out from the point where the pebble hit the water. These ripples are basically a beautiful metaphor for understanding the magic of Fourier Series.</p>
<p>Fourier discovered that you can break down any periodic function (which repeats itself over time) into a sum of simple sine and cosine waves, each ‘playing’ at a different frequency and amplitude. It’s as if your complex function is a grand musical symphony, and the Fourier Series reveals each individual instrument playing its part!</p>
<p>Formally, for a function $f$ with period $2\pi$, $f(x) = a_0 + \sum_{n=1}^{\infty}(a_n\cos(nx) + b_n\sin(nx))$</p>
<p>$a_0$ is the constant term in the Fourier series. It is equal to the average value of the function over one period.
$a_n$ is the nth coefficient of the cosine terms in the series. It measures how much of the cosine wave with frequency $n$ is present in the function $f(x)$. If $a_n$ is large, then the cosine wave with frequency $n$ is a significant part of $f(x)$. Likewise for $b_n$ and sine.</p>
<p>Ok, let’s better understand why I think Fourier series are an incredibly powerful concept. As you can see above, the function $f$ has been decomposed into an infinite sum of cosines and sines. You can literally write any (reasonably well-behaved) periodic function as an infinite sum of cosine and sine waves. It’s like having a mathematical X-ray machine that lets us see inside a function and understand its intricate structure in terms of simpler, harmonic components.</p>
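<p>Here is a rough numerical sketch of the X-ray machine at work (a plain Riemann sum for the coefficient integrals; a serious implementation would use the FFT). For a square wave, theory says $a_n = 0$ for all $n$ and $b_n = 4/(n\pi)$ for odd $n$, $0$ for even $n$:</p>

```python
import math

# Numerically estimate the Fourier coefficients a_n, b_n of a
# 2*pi-periodic function via a simple Riemann sum over [-pi, pi).
def fourier_coeffs(f, n, samples=100_000):
    a = b = 0.0
    for k in range(samples):
        x = -math.pi + 2 * math.pi * k / samples
        a += f(x) * math.cos(n * x)
        b += f(x) * math.sin(n * x)
    dx = 2 * math.pi / samples
    return a * dx / math.pi, b * dx / math.pi  # (a_n, b_n)

square = lambda x: 1.0 if x >= 0 else -1.0  # a square wave on [-pi, pi)

print(fourier_coeffs(square, 1))  # ~(0.0, 1.2732)  -- and 4/pi = 1.2732...
print(fourier_coeffs(square, 2))  # ~(0.0, 0.0)
```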
<h3 id="decision-trees-with-continuous-action-spaces-in-extensive-form-game">Decision trees with continuous action spaces in extensive form game</h3>
<p>In a traditional extensive form game, we generally deal with discrete strategy spaces (and usually, to make things even easier, only 2 strategies are considered, like, idk, defect or cooperate). At every decision node, a player has a finite set of actions to choose from. Thing is, we often face many more strategies, so many that the set of strategies can eventually be considered infinite. How do we take that into account? Well, in an extensive form game, a continuous action space (contrary to the discrete case we mentioned above) is typically drawn using an arc connecting two branches representing the upper and lower bounds of the action space.</p>
<p>The use of an arc to represent a continuous strategy space in an extensive form game is a powerful way to capture a whole spectrum of strategic possibilities that a player can choose from. It’s a marvelous way to represent and analyze complex strategic interactions where the players have an infinite number of choices.</p>
<p>This difference is incredibly powerful as it allows us to model and understand a broader range of real-world strategic interactions. Whether it’s negotiations, pricing strategies, or any scenario where decisions aren’t merely discrete choices, the use of continuous strategies in extensive form games truly broadens the horizons of strategic decision making in GT.</p>
<p><img src="/images/tree.png" alt="decision tree" /></p>
<h3 id="stochastic-differential-equations">Stochastic differential equations</h3>
<p>One day I was wandering around my university library, glancing casually at some of the book titles on the shelves. Suddenly, one title struck me: ‘Stochastic differential equations’. I stopped immediately, shocked by the title, which seemed to me to be an oxymoron. I had always learned that differential equations were, by definition, something purely deterministic: given the initial state of the system (the initial conditions), the differential equation provides a precise and purely deterministic formula for how the system evolves over time, i.e. the future state of a system is entirely determined by its current state and the laws of motion.</p>
<p>Thus, how on earth do you have the audacity to add the qualifier ‘stochastic’ to a differential equation?? Well, thing is, uncertainty and variability are almost everywhere in models of the world. An SDE is a differential equation in which one or more of the terms is a stochastic process, resulting in a solution which is also a stochastic process, capturing that uncertainty or variability. This means the future state of the process isn’t determined solely by the current state, but also involves a random element. SDEs are instrumental in many fields, especially physics, economics, and engineering, where systems are subjected to random influences. In finance, for instance, they’re used to model asset or instrument prices because they capture both the general trend (like interest rates) and the random market fluctuations. The canonical driving process for SDEs is Brownian motion.</p>
<p><em>example</em> :</p>
<p><img src="/images/stochasticeqdiff.png" alt="SED" /></p>
<p>This graph shows four sample paths $X_t\left(\omega_1\right), X_t\left(\omega_2\right), X_t\left(\omega_3\right)$ and $X_t\left(\omega_4\right)$ of a geometric Brownian motion $X_t(\omega)$, i.e. of the solution of a (1-dimensional) stochastic differential equation of the form:</p>
<p>$\frac{d X_t}{d t}=\left(r+\alpha \cdot W_t\right) X_t \quad t \geq 0 ; X_0=x$</p>
<p>where $x, r$ and $\alpha$ are constants and $W_t=W_t(\omega)$ is white noise. This process is often used to model ‘exponential growth under uncertainty’ (example from <em>Stochastic Differential Equations: An Introduction with Applications</em>, Bernt Øksendal, 2003).</p>
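<p>As a toy illustration (parameter values are mine), the white-noise equation above is usually written rigorously as $dX_t = rX_t\,dt + \alpha X_t\,dB_t$, with $B_t$ a Brownian motion, and you can simulate sample paths of it with the Euler–Maruyama scheme:</p>

```python
import math
import random

# Euler-Maruyama simulation of geometric Brownian motion
#   dX_t = r * X_t dt + alpha * X_t dB_t
# Each seed gives a different realization omega, hence a different path.
def simulate_gbm(x0, r, alpha, T=1.0, steps=1000, seed=None):
    rng = random.Random(seed)
    dt = T / steps
    x = x0
    path = [x]
    for _ in range(steps):
        dB = rng.gauss(0.0, math.sqrt(dt))  # Brownian increment ~ N(0, dt)
        x += r * x * dt + alpha * x * dB
        path.append(x)
    return path

# A few sample paths X_t(omega_1), ..., X_t(omega_4):
for seed in range(4):
    print(simulate_gbm(1.0, r=0.5, alpha=0.3, seed=seed)[-1])
```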
<h3 id="multi-objective-optimization">Multi-objective Optimization</h3>
<p>OK, so we’re used to solving optimisation problems that involve maximising or minimising a single function, whether the problem is multivariate, static, dynamic, discrete or continuous. But in my bachelor’s and even the first year of my master’s in economics, we hardly talked about what to do if we want to optimize several functions, i.e. find the max or the min of several functions almost simultaneously. Each of these objective functions typically represents a different goal that needs to be satisfied.</p>
<p>As you may intuitively understand, in such scenarios it’s usually very hard to find a solution that optimally satisfies all objectives simultaneously. Hence, rather than finding a single “optimal” solution, multi-objective optimization aims to find a set of “Pareto optimal” solutions. A solution is Pareto optimal if there’s no other solution that improves one objective without worsening at least one other objective.</p>
<p><em>General form:</em> Let’s assume that we have $n$ decision variables, $K$ objective functions, $J$ inequality constraints, and $M$ equality constraints.</p>

<p>Minimize/Maximize: $f_i(x)$, for all $i$ in $\{1, 2, \dots, K\}$</p>

<p>Subject to:</p>

<p>$g_j(x) \leq 0$, for all $j$ in $\{1, 2, \dots, J\}$</p>

<p>$h_m(x) = 0$, for all $m$ in $\{1, 2, \dots, M\}$</p>

<p>With bounds:</p>

<p>$L_i \leq x_i \leq U_i$, for all $i$ in $\{1, 2, \dots, n\}$</p>
<p>Just as said before, the solution to this optimization problem may not be a single point, but a set of Pareto optimal points.</p>
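<p>A brute-force sketch of that idea (the two toy objectives are my own choice): a candidate is Pareto optimal if no other candidate weakly beats it everywhere and strictly beats it somewhere.</p>

```python
# Brute-force Pareto filter for a minimization problem.
def dominates(u, v):
    """u dominates v: no worse in every objective, strictly better in one."""
    return all(a <= b for a, b in zip(u, v)) and any(a < b for a, b in zip(u, v))

def pareto_front(points):
    return [p for p in points if not any(dominates(q, p) for q in points)]

# Two toy objectives to minimize: f1(x) = x^2 and f2(x) = (x - 2)^2.
# They pull in opposite directions, so there is no single optimum.
xs = [i / 10 for i in range(-10, 31)]
pts = [(x**2, (x - 2) ** 2) for x in xs]
front = pareto_front(pts)
# Every x in [0, 2] is Pareto optimal here: improving f1 worsens f2.
print(len(front))  # 21
```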
<p>Multi-objective decision making is a fascinating field arising from multi-objective optimization problems. In multi-objective games, each player doesn’t have just one single objective (or utility function); instead, they have multiple objectives (multiple utility functions) they want to optimize. As such, the payoff for each player for each strategy is not a single number, but a vector, where each element represents the payoff for one objective! Here’s an example with a specific 2x2 game with vector payoffs:</p>
<table>
<thead>
<tr>
<th> </th>
<th>Player 2 Strategy 1</th>
<th>Player 2 Strategy 2</th>
</tr>
</thead>
<tbody>
<tr>
<td>Player 1 Strategy 1</td>
<td>(3, 4), (2, 5)</td>
<td>(1, 6), (7, 2)</td>
</tr>
<tr>
<td>Player 1 Strategy 2</td>
<td>(5, 2), (6, 3)</td>
<td>(4, 1), (5, 4)</td>
</tr>
</tbody>
</table>
<p>As you might guess, finding the Nash equilibrium in this kind of game is more complicated. How do you know if (3,4) is better than (1,6)? Determining these kinds of equilibria typically requires more advanced techniques. But for the record, typically, we might say that a payoff vector is better if it’s better in at least one objective and no worse in any other objective.</p>
<h3 id="randomized-social-choice">Randomized Social Choice</h3>
<p>Randomized Social Choice comes into play when deterministic social choice functions just can’t cut the mustard, or when there’s simply no one-size-fits-all solution. Rather than forcing a single choice, we allow for a distribution of choices, like tossing a weighted die that can land on different outcomes, each with a probability reflecting the players’ preferences. Instead of each person voting for one alternative, they vote for a probability distribution over the alternatives.</p>
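<p>A minimal sketch of one classic randomized rule, random dictatorship (the ballots here are made up): draw one ballot uniformly at random and implement it, so the induced lottery over alternatives simply matches the vote shares.</p>

```python
import random

# "Random dictatorship": each voter reports a favorite alternative;
# one ballot is drawn uniformly at random and implemented.
def random_dictatorship(ballots, rng=random):
    return rng.choice(ballots)

ballots = ["A", "A", "B", "C"]

# The induced probability distribution over alternatives:
shares = {alt: ballots.count(alt) / len(ballots) for alt in sorted(set(ballots))}
print(shares)  # {'A': 0.5, 'B': 0.25, 'C': 0.25}
print(random_dictatorship(ballots))  # 'A', 'B', or 'C', at random
```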
<h3 id="vector-fields">Vector fields</h3>
<p>A vector field, in its simplest form, is an assignment of a vector to every point in a space. Super useful in physics, as you may imagine (e.g. through visualizing flows and forces). Formally, a vector field on a subset $U$ of $\mathbb{R}^n$ is a function that assigns to each point in $U$ a vector in $\mathbb{R}^n$. In mathematical notation, this can be expressed as:</p>
<p>$F: U \rightarrow \mathbb{R}^n$</p>
<p>Vector fields can have interesting topological features, such as zeros (where the vector field is zero), sources and sinks (where vectors are diverging or converging, respectively), and vortexes and saddles (where the behavior of the field is more complex). Vector operators, also known as differential operators, are tools used to analyze vector fields and scalar fields. Two of the most commonly used vector operators are the divergence (a scalar field that provides a measure of the vector field’s source or sink at a given point) and the curl (a vector field that describes the infinitesimal rotation of the field in three dimensions; in other words, it measures how much and in what direction the field ‘rotates’).</p>
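<p>A quick numerical sketch (all names are mine) using central finite differences: the field $(x, y, 0)$ is a pure source, with divergence 2 everywhere, while the rotational field $(-y, x, 0)$ is divergence-free.</p>

```python
# Estimate the divergence of a 3D vector field with central differences:
# div F = dF_x/dx + dF_y/dy + dF_z/dz.
def partial_i(F, i, p, h=1e-6):
    up = list(p); up[i] += h
    dn = list(p); dn[i] -= h
    return (F(*up)[i] - F(*dn)[i]) / (2 * h)  # d F_i / d x_i at p

def divergence(F, p):
    return sum(partial_i(F, i, p) for i in range(3))

def source(x, y, z):    # vectors point away from the origin: a "source"
    return (x, y, 0.0)

def rotation(x, y, z):  # vectors circle the z-axis: pure rotation
    return (-y, x, 0.0)

print(divergence(source, (1.0, 1.0, 0.0)))    # ~2.0
print(divergence(rotation, (1.0, 1.0, 0.0)))  # ~0.0
```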
<p>And here’s the truly mind-bending part: you can have vector fields not just in two or three dimensions, but in any number of dimensions. For instance, in the field of differential geometry, vector fields on manifolds (which can be thought of as multi-dimensional surfaces) are a central object of study.</p>
<p><em>I thank ChatGPT for helping me write (hopefully) this blog post more clearly.</em></p>

Some fun theoretical models of risky technological innovation race (2023-01-28, https://angelinagentaz.github.io/posts/2023/01/blog-post-1)

<p><strong>Keywords</strong>: risky technological innovation and competition; safety-performance trade-off; R&D and innovation race; public safety risks</p>
<h2 id="introduction">Introduction</h2>
<p>[Knight, 1936] viewed contests, or races, as a fundamental element of economic life, stating:</p>
<blockquote>
<p>“The activity which we call economic, whether of production or of consumption or of the two together, is also, if we look below the surface, to be interpreted largely by the motives of the competitive contest or game, rather than those of mechanical utility functions to be maximized.”</p>
</blockquote>
<p>Races are situations in which an individual’s reward depends on his performance relative to others. In a race, the largest - and perhaps only - prize is awarded to the first participant to cross a well-defined finish line. Investment in research and development (R&D) for a given technological innovation usually has several of the characteristics of a race. In such a race to be the first innovator, firms have to decide about the optimal timing of their R&D investments taking into account the dynamic interactions between competitors.</p>
<p>In his classic book <em>Capitalism, Socialism, and Democracy</em>, [Schumpeter, 1942] emphasized the connection between market structure and R&D. While he argued that allowing the formation of monopolies is necessary to incentivize the innovation process, I reverse the point of view in the present analysis by investigating how innovation may influence market structure, rather than the converse, which has already been largely studied in the classic industrial organization literature ([Arrow, 1962]; [Kamien and Schwartz, 1982]; [Aghion and Howitt, 2005]). In fact, in recent decades, the innovation race as a field of research has undergone profound changes, both in terms of analytical interpretations and in terms of application areas. These upheavals in the field are the starting point for my reflection.</p>
<p>Specifically, I take as given the target of innovation and do not study the innovation’s impact on the broader economy from a macroeconomic perspective, such as its impact on growth ([Aghion and Jaravel, 2015]; [Aghion et al., 2001]). The rationale behind taking a purely game-theoretical point of view – together with its adjacent fields such as market design, e.g. mechanism design or auction and contest theory – is that, given the basic features of a race, game theory provides a suitable framework to study such strategic decision-making and is a powerful tool to analyze the risks and rewards of competitive innovation, both for the individual players and for society as a whole. Beyond this first consideration, the incentive to cut safety in a technology race obviously mirrors the temptation to defect in the prisoner’s dilemma. Indeed, while risk in the economics of innovation has almost always been seen through the firm’s interest ([Mata and Woerter, 2013]), competitive technological innovation can also create public safety risks, as the pursuit of innovation and profit can sometimes lead companies to ignore or downplay potential risks associated with their products or services.</p>
<p>Starting from these distinctions, I break down the analysis into separate issues. First, in section one, I review the economics literature that gives insights on (i) the link between innovation race, market structure and its different interesting models (usually game-theoretic ones) and (ii) on public risk externalities created by innovation race - and in particular by AI race. In section two, I give some basis to model simple technological innovation race and focus once again on an AI race through a Do It Yourself (DIY) proposition.</p>
<h2 id="1-literature-review-on-innovation-race">1. Literature review on innovation race</h2>
<h3 id="11-classical-game-theoric-innovation-race">1.1 Classical game theoric innovation race</h3>
<p>There has been a significant amount of research conducted by economists since the 1970s on the topic of technological competition between firms, with the development of various “racing” models to explain this phenomenon. The classical industrial organization literature usually predicts that, in an innovation race, asymptotically, the economy converges to concentration or dominance of a single firm, i.e a monopoly. However, depending on the theoretical conceptualization of rivalry and on whether the model is deterministic or stochastic, several papers offer sharply different predictions by showing that this increasing dominance outcome can be challenged by an “action-reaction” effect.</p>
<p>[Horner, 2004] finds that it is not necessarily true that competition is fiercest when competitors are close, as could be suggested by some existing models of [Aghion et al., 1997]. Horner’s model seeks to extend the first classical model of [Aoki, 1991] by assuming that the technology is not restricted to being deterministic, i.e. an investment level generates a probability distribution over outcomes! The most significant factor is the joint profit effect, which refers to the idea that, on average, the joint profits from the product market are higher when the gap between the firms grows rather than shrinks. This leads the leader to put in more effort than the laggard. This joint profit effect is relatively well understood and is the driving force behind traditional analyses of patent races ([Grossman and Shapiro, 1987]; [Harris and Vickers, 1987]; [Loury, 1979]).</p>
<p>Indeed, patent races are a very important part of the literature when it comes to competitive innovation. In these papers, the joint profit effect typically supports increasing dominance, with the leader tending to get further ahead over time. The classical work of [Loury, 1979] on patent races has been largely extended ([Lee and Wilde, 1980]; [Reinganum, 1982]). In such memoryless R&D race models, the knowledge that firms have acquired as a result of their past R&D efforts is irrelevant to their current R&D efforts. The rationale behind this strong assumption is that the time of a successful innovation is exponentially distributed.</p>
<p>However, some models aim to overcome the memorylessness property of traditional R&D race models. Multistage race models are designed to account for the influence of past events on the competition by introducing intermediate steps in the research process. In these models, a firm must complete all stages of the R&D project in order to win the race. Deterministic multistage race models assume that firms move from one stage to the next in a predictable way ([Fudenberg et al., 1983]; [Lippman and McCardle, 1988]). Another important contribution is due to [Doraszelski, 2003], whose model does not impose the memoryless property and finds that, under some conditions, the firm that is behind in the race engages in catch-up behavior.</p>
<p>Another part of the literature that studies innovation races uses economic models of auctions ([Grossman and Shapiro, 1987]). [Siegel, 2009] studies “all-pay contests”, which capture the general asymmetries and sunk investments inherent in scenarios such as R&D races. All-pay contests (or auctions) are contests in which every bidder must pay regardless of whether they win the prize, which is awarded to the highest bidder. In many settings, economic agents compete by making irreversible investments before the outcome of the competition is known. Innovation races are a case in point.</p>
<p>In a nutshell, when comparing the deterministic, auction and stochastic racing models, the resulting equilibrium outcomes can sometimes differ significantly. Given these potential variations in results, it is crucial to select the appropriate paradigm. The stochastic racing model appears to better capture the nature of R&D, which may or may not yield a successful outcome and can require more or less time and resources than anticipated. In the case of developing or introducing a new product, the auction model may be the preferred choice if the technological uncertainties have already been resolved.</p>
<h3 id="12-innovation-race-with-safety-compliance">1.2 Innovation race with safety compliance</h3>
<p>The public safety risks created by competitive innovation are understudied in the economics literature. There are two aspects of innovation that together create a risk externality: (i) innovating creates a risk of widespread negative consequences if agents under-invest in safety, but (ii) developing safely is costly for the agent by placing it behind in an innovation race. Under certain conditions, the twin effects of widespread risk and costly safety measures may cause a “race to the bottom” in the level of safety investment. In a race to the bottom, each competitor skimps on safety to accelerate their rate of development progress.</p>
<p>The <em>Collingridge Dilemma</em> highlights the challenge of predicting and controlling the impact of new technologies, especially in the field of AI. Due to the lack of available data and the inherent unpredictability of this technology, it may be beneficial to use a modeling approach to gain a better understanding of the potential consequences of a race for powerful AI systems. Thus, the case of an AI race is interesting for several reasons. Firstly, the risk of negative externalities is inherent to the development and implementation process itself ([Armstrong et al., 2015]). Thus, one can think of this as an endogenous deadline rather than an exogenous one (i.e. a well-defined deadline), as is almost always the case in the traditional race literature. Secondly, risky technological races, and in particular the AI race, differ from traditional arms races in several ways. Arms races typically conclude with the declaration of war, while technology races may end either (i) when one party successfully develops the technology or (ii) when too much risk has been taken and a disaster has occurred. Furthermore, the AI race and its associated risks are particularly distinctive from any other technological race (cf. the work of the Centre for the Governance of AI).</p>
<p>Most of the race models we have seen so far only allow for two players, which is obviously quite restrictive. Thus, some papers in the literature stand out by investigating innovation races through network games. [Cimpeanu et al., 2022] highlight that it is important to understand how diversity in the network (different collaboration networks, e.g. among firms, AI researchers, stakeholders, etc.) influences race dynamics, as some players might play a pivotal role in the global outcome. They examine various network structures, including homogeneous ones like complete graphs and square lattices, as well as various scale-free networks that represent varying levels of diversity in the number of races a player can participate in. As in the case of climate change games ([Santos and Pacheco, 2011]), the perception of risk is a key factor driving the evolutionary outcome (whether or not disaster happens).</p>
<p>The role of information, as already briefly stressed above, is crucial in determining the race dynamics. [Armstrong et al., 2015] find, counter-intuitively, that increasing the information available to all the teams (about their own capability or progress towards AGI, or that of the other teams) increases the risk of a catastrophe. Their results also show that competition might spur a race to the bottom if there are too many teams.</p>
<p>[Naudé and Dimitri, 2018] try to see what kind of public policy mechanisms (introducing intermediate prize, taxation, public procurement of innovation, patent) to implement in a risky technological race such as an AI race in order to safely control the nature of AI. The authors found that the intention of teams competing in an AI race as well as the possibility of an intermediate outcome (prize) is important when considering safety levels.</p>
<p>In a nutshell, it seems that even though there has been minimal focus on analyzing the dynamics of emerging risky technological race such as an AI race, such models offer sharply different predictions on (i) how a firm’s tendency to innovate in a race is impacted by its technological position relative to the others competitors in a given network structure and the level of information openness in the race and on (ii) how the “safety-performance” trade-off might evolve over the race.</p>
<h2 id="2-diy-your-own-ai-race-game">2. DIY your own AI race game</h2>
<p>What are the main characteristics, parameters and variables to consider in an AI race? One could think of different hypothetical scenarios and conditions of the race concerning:</p>
<ul>
<li>disaster risk,</li>
<li>risk-perception behaviours,</li>
<li>level of information openness,</li>
<li>number of racing teams,</li>
<li>level of heterogeneity of teams’ development capacity,</li>
<li>incentives deployment,</li>
<li>entry cost,</li>
<li>types of teams (AI gov lab, private firms, collective eg. Stable Diffusion),</li>
<li>network of teams (i.e from a more macro and structural perspective, the different ties racing teams might have through the network topology)</li>
</ul>
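<p>As a purely illustrative toy (every parameter name and value below is my own assumption, not taken from any of the papers cited above), some of these ingredients can be sketched in a tiny agent-based simulation where lower safety means faster progress but a higher chance of a race-ending disaster:</p>

```python
import random

# Toy agent-based sketch of a risky technology race: each round, every
# team makes progress that is faster when its safety level is lower,
# but lower safety also raises the per-step probability of a disaster
# that ends the race for everyone.
def simulate_race(n_teams=5, target=10.0, safety=0.5, risk_scale=0.01, seed=0):
    rng = random.Random(seed)
    progress = [0.0] * n_teams
    for round_ in range(1000):
        for i in range(n_teams):
            progress[i] += rng.random() * (2.0 - safety)    # speed/safety trade-off
            if rng.random() < risk_scale * (1.0 - safety):  # disaster check
                return ("disaster", round_)
            if progress[i] >= target:
                return ("winner", i)
    return ("timeout", None)

# Sweep the common safety level and watch the disaster share change:
for safety in (0.1, 0.9):
    runs = [simulate_race(safety=safety, seed=s)[0] for s in range(500)]
    print(safety, runs.count("disaster") / len(runs))
```

<p>Endogenizing the safety choice (each team picking its own level strategically) is exactly where the game-theoretic models reviewed above take over.</p>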
<p>I hope that the literature review above has already given the reader some ideas of concepts and methods to use to model an AI innovation race, but the main tools remain probably game theory, networks and computational methods such as agent-based modelling. However, I still have too narrow a view on AI governance to say how important and influential these parameters and variables are for the results of an AGI race. Still, since companies currently play an outsize role in AI R&D compared to academic or government groups/labs, the need to focus on corporate governance seems to be a crucial thing for AI governance people to consider.</p>

My favorite game theoric concepts and impossibility theorems (2022-12-22, https://angelinagentaz.github.io/posts/2022/12/blog-post-1)

<p>Game theory is the name given to the methodology of using mathematical tools to model and analyze situations of interactive decision making. It is thus an incredibly powerful tool to (i) predict the behavior of several decision makers and (ii) provide decision makers with suggestions regarding ways in which they can achieve their goals. Starting with this basic definition, the reader probably has their head in a spin by now, as the number of possible applications of this incredible tool is enormous, not to say infinite.</p>
<p>In this blog post, I present my favorite game-theoretic concepts and the resulting impossibility theorems that are particularly mind-blowing. As a self-respecting game theorist, I rank my concepts, because having a ranking of one’s preferences is always necessary to build your utility function (super useful in everyday life, I swear!). Anyway, let’s dive in!</p>
<h2 id="1st-place-social-choice-theory">1st place: social choice theory</h2>
<h3 id="definitions">Definitions</h3>
<p>Social choice theory is the study of the question of how to aggregate the preferences of a group of individuals into a single ‘social preference’.</p>
<p>It is arguably a branch of game theory that adopts, in my view, the most interesting methodology: the axiomatic approach. It is an inevitably normative approach. Typically, social choice studies situations where one is confronted with a social problem (e.g. sharing a resource, a common good, or a cost arising from the use of a common good). The modeller attempts to design a collection of logically independent properties (called axioms) that are considered desirable (in an ethical sense). The modeller then finds a solution to the problem (e.g. an allocation rule), which must satisfy the above properties, and characterises the set of solutions satisfying these properties. This is precisely what makes social choice absolutely incredible: one cannot escape a political reflection on the type of social choice criteria to adopt.</p>
<p>The funny thing about the axiomatic approach is that it can be understood from both a consequentialist and deontological point of view! Indeed, the modeller seeks to ensure that his solution satisfies certain desirable properties and is therefore directly interested in the consequences that his solution will have from a collective point of view. The axiomatic method can also be understood from a deontological perspective since the modeller is quite free to choose a certain combination of axioms that he or she deems morally good (or not, for that matter) from a very arbitrary point of view without any necessary consequentialist consideration as long as a solution can satisfy all these different axioms at once.</p>
<p>To construct a choice function that associates every strict preference profile of individual preference relations with a social preference relation, we asked what properties such a function should satisfy. Surprisingly, this led game theorists to conclude that if there are at least three alternatives, seemingly natural and reasonable properties cannot hold unless the choice function is dictatorial! Social choice is the branch par excellence of game theory that formulates super badass impossibility theorems.</p>
<h3 id="main-results-and-figures">Main results and figures</h3>
<p>Arrow’s impossibility theorem &lt;3 <br />
Gibbard-Satterthwaite’s impossibility theorem <br />
Condorcet’s paradox <br />
Other interesting stuff in voting theory (e.g. approval voting)</p>
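<p>Condorcet’s paradox is easy to see concretely. Here is a minimal Python sketch (the voters and candidates are invented for illustration): three individually transitive rankings, aggregated by pairwise majority voting, yield a cyclic and hence intransitive social preference.</p>

```python
# Three voters, each with a transitive ranking over candidates a, b, c
# (best first). This is the classic Condorcet profile.
profile = [("a", "b", "c"),
           ("b", "c", "a"),
           ("c", "a", "b")]

def majority_prefers(x, y, profile):
    """True if a strict majority of voters rank x above y."""
    wins = sum(ranking.index(x) < ranking.index(y) for ranking in profile)
    return wins > len(profile) / 2

# Pairwise majority comparisons form a cycle: a beats b, b beats c, c beats a.
assert majority_prefers("a", "b", profile)
assert majority_prefers("b", "c", profile)
assert majority_prefers("c", "a", profile)  # the cycle closes: intransitivity!
```

<p>Each individual ranking is perfectly rational; only the aggregate misbehaves.</p>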
<h2 id="2nd-place-decision-theory">2nd place: decision theory</h2>
<h3 id="definitions-1">Definitions</h3>
<p>Decision theory is the field that analyzes how and why individuals make their choices. A lot of people seem to confuse it with game theory more broadly, so let me be more precise here: <br />
DT: involves a single decision maker, for whom the single decision is the only focus. <br />
GT: tries to predict the behavior of individuals who are involved in a situation together.</p>
<p>The basis of decision theory is to define what economists call a preference relation. As I briefly explained in my blog post on rationality in economics, a preference relation must satisfy certain intuitive mathematical axioms to be called rational. We can then construct a <em>utility function</em> representation of preference relations over certain outcomes, named after the founding work of von Neumann and Morgenstern. One key feature of this utility function is that it is linear in the probabilities of the outcomes, meaning that the decision maker evaluates an uncertain outcome by its <em>expected utility</em>. For your preference relation to have the honour of being representable by a utility function in the sense of von Neumann-Morgenstern, it must however satisfy certain mathematical properties (be rational and satisfy the von Neumann-Morgenstern axioms).</p>
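<p>The linearity in probabilities can be sketched in a few lines of Python (the lotteries and utility numbers are invented for illustration): a vNM decision maker evaluates a lottery by the probability-weighted average of the utilities of its outcomes.</p>

```python
# A lottery is a list of (probability, outcome) pairs whose probabilities sum to 1.
def expected_utility(lottery, u):
    """Evaluate a lottery linearly in its probabilities (vNM style)."""
    return sum(p * u[outcome] for p, outcome in lottery)

# Hypothetical utility levels over three monetary outcomes.
u = {"win_100": 10.0, "win_10": 4.0, "win_0": 0.0}

safe  = [(1.0, "win_10")]                    # 10 for sure
risky = [(0.5, "win_100"), (0.5, "win_0")]   # fair coin flip

assert expected_utility(safe, u) == 4.0
assert expected_utility(risky, u) == 5.0     # so the risky lottery is preferred
```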
<h3 id="suggestions-and-extensions">Suggestions and extensions</h3>
<p>However, as I discussed in my previous blog post, the completeness axiom is a strong assumption because it means that you are always able to rank with 100% certainty any pair of alternatives. Agents that are not completely sure of the right thing to do or choose between two or several alternatives (which I believe is an accurate summary of the state of knowledge about ethics, both because of normative impossibility theorems, e.g. the repugnant conclusion, and the practical difficulty of predicting the consequences of actions) are thus a bit stuck. To better take this (moral) uncertainty into account, one could imagine that preferences are partially ordered (rather than totally ordered in the mathematical sense) or are represented by probability distributions over total orders!</p>
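<p>Dropping completeness can also be sketched directly (the alternatives are hypothetical): with a partial order, two options may simply be incomparable, and a choice rule can return the set of maximal, i.e. undominated, alternatives instead of a unique best one.</p>

```python
# Strict preference as a set of (better, worse) pairs; any pair not listed
# in either direction is incomparable -- completeness is dropped.
better_than = {("a", "c"), ("b", "c")}  # a and b both beat c; a vs b is undecided

def maximal_elements(alternatives, better_than):
    """Alternatives that no other alternative strictly dominates."""
    return {x for x in alternatives
            if not any((y, x) in better_than for y in alternatives)}

# Under moral uncertainty the 'choice' is a set, not a single winner.
assert maximal_elements({"a", "b", "c"}, better_than) == {"a", "b"}
```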
<p>What is fun about decision theory is that most EAs heard about it through <em>Newcomb’s paradox</em>, whereas it is not discussed at all in any decision theory course at university. This is because decision theory is often the first thing to look at before moving on to game theory, which makes full use of the concepts of utility function and expected utility maximisation, since the rational decision in economic theory is always to maximise one’s expected utility, whereas one could very well imagine that a rational decision can be defined in another way (cf. EDT vs CDT, timeless decision theory, etc.).</p>
<h2 id="3rd-place-mechanism-design">3rd place: mechanism design</h2>
<p>Well, it is great if everyone is super nice and has a rational preference relation, but what if you’re super smart and realise that you can fool the others about your preferences in order to reach your goals more easily? Mechanism design assumes that agents want to maximize their individual preferences and therefore can have an incentive to misrepresent them. A key question in mechanism design is whether an economic mechanism can implement a social choice function under some game-theoretic solution concept when agents are self-interested, their preferences are private information, and they want to maximize their payoff. The informed reader will easily recognise that mechanism design problems can therefore be modelled using Bayesian games (from the basics of non-cooperative game theory, which is easily accessible).</p>
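<p>The flavour of mechanism design is easiest to see on the classic second-price (Vickrey) auction, where bidding one’s true valuation is a dominant strategy; a small Python sketch with made-up valuations:</p>

```python
def vickrey_payoff(my_bid, my_value, other_bids):
    """Highest bid wins, but the winner pays the second-highest bid."""
    highest_other = max(other_bids)
    if my_bid > highest_other:
        return my_value - highest_other  # pay the second price, not your own bid
    return 0.0  # losing yields nothing

my_value, others = 10.0, [7.0, 4.0]
truthful = vickrey_payoff(my_value, my_value, others)

# No misreport (over- or under-bidding) ever does strictly better than the truth.
assert all(vickrey_payoff(b, my_value, others) <= truthful
           for b in [0.0, 5.0, 8.0, 12.0, 100.0])
assert truthful == 3.0
```

<p>The payment rule decouples what you pay from what you bid, which is exactly what kills the incentive to misrepresent your preferences.</p>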
<h2 id="4th-place--network-games">4th place: network games</h2>
<p>One of the assumptions underlying all the situations we have discussed so far is that all players know each other from the start and are likely to interact with each other according to rules that we have mostly assumed to be common knowledge. What happens if we now remove this assumption? What if, for example, entering into relationships with others can be beneficial to everyone but also comes at a ‘cost’? To ask this question is to enter the world of strategic network formation and graph theory. Graph theory is a super fun area of mathematics that we do not study much in an economics degree! That’s a pity. Anyway, physicists are here to save the honour (I refer here to the following <a href="https://link.springer.com/content/pdf/10.1007/978-3-540-92267-4.pdf?pdf=button">book</a>).</p>
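<p>One classic formalization of this benefit-versus-cost trade-off is Jackson and Wolinsky’s ‘connections model’; a toy Python sketch (the parameters <code>delta</code> and <code>cost</code> are invented): each agent earns <code>delta</code> raised to the graph distance from every agent they can reach, and pays a cost per direct link.</p>

```python
from collections import deque

def distances_from(node, adjacency):
    """Breadth-first-search shortest-path distances from node (undirected graph)."""
    dist, queue = {node: 0}, deque([node])
    while queue:
        u = queue.popleft()
        for v in adjacency[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                queue.append(v)
    return dist

def connections_utility(node, adjacency, delta=0.5, cost=0.3):
    """Jackson-Wolinsky payoff: benefits decay with distance, links are costly."""
    dist = distances_from(node, adjacency)
    benefit = sum(delta ** d for other, d in dist.items() if other != node)
    return benefit - cost * len(adjacency[node])

# A line network 0 - 1 - 2: the middle agent maintains (and pays for) two links.
line = {0: [1], 1: [0, 2], 2: [1]}
assert abs(connections_utility(0, line) - 0.45) < 1e-9  # 0.5 + 0.25 - 0.3
assert abs(connections_utility(1, line) - 0.40) < 1e-9  # 0.5 + 0.5  - 0.6
```

<p>Whether a link is worth forming then depends on exactly this comparison between decayed benefits and the per-link cost.</p>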
<h2 id="references">References</h2>
<p>The best game theory textbook for me is “Game Theory” by M. Maschler, E. Solan and S. Zamir (second edition). It is a rather academic book, but the exercises are varied enough to give you the basics of both cooperative and non-cooperative game theory. I’ve seen too few game theory textbooks that fully integrate the cooperative part, and this is one of them (7 chapters dedicated exclusively to it!), so go for it! The only thing missing is a chapter on mechanism design… Otherwise, for another textbook, I also recommend “Strategy: An Introduction to Game Theory” by Watson. <br />
For a chill Sunday read, I highly recommend to my French readers “La théorie des jeux” by Gael Giraud, which gives very concrete examples, is very pleasant to read, and is not heavily formalized mathematically, for those who prefer it that way.<br />
Concerning YouTube videos:</p>
<ul>
<li>the beautiful playlist « La démocratie vue sous l’angle de la théorie des jeux » by Science4All</li>
<li>the YouTube channels of Selcuk Ozyurt and William Spaniel</li>
</ul>
<p><strong>If you want to buy me a gift:</strong></p>
<ul>
<li>Set Functions, Games and Capacities in Decision Making (Michel Grabisch, 2016)</li>
<li>Econophysics and Economics of Games, Social Choices and Quantitative Techniques (Banasri Basu, 2009)</li>
<li>The Complex Networks of Economic Interactions: Essays in Agent-Based Economics and Econophysics (Akira Namatame, Taisei Kaizouji, Yuuji Aruka, 2005)</li>
<li>Flowers and chocolates work as well</li>
</ul>Angélina Gentazangelina.gentaz@gmail.comGame theory is the name given to the methodology of using mathematical tools to model and analyze situations of interactive decision making. It is thus an incredibly powerful tool to (i) predict the behavior of several decision makers and (ii) provide decision makers with suggestions regarding ways in which they can achieve their goals. Starting with this basic definition, the reader probably has his head in a twist now, as the possible applications of this incredible tool are numerous, not to say infinite.Some praise and vilification of rationality in economics2022-11-24T00:00:00-08:002022-11-24T00:00:00-08:00https://angelinagentaz.github.io/posts/2012/08/blog-post-5<p>Often when I explain to someone what I do in my studies, i.e. mathematically formalised social and economic models based on the hypothesis of the rationality of agents, I am told that the concept of rationality is something outdated and that, consequently, models based on this hypothesis are not valid. I sometimes had a tendency, perhaps a little pedantic, to mechanically quote the famous statistician <em>George Box</em>: <em>‘All models are wrong but some are useful’</em>, which is always handy to pull out to shine in society. However, after discussing it, some reflections came up that I discuss here.</p>
<h2 id="rationality-for-economists">Rationality for economists</h2>
<p>In fact the concept of rationality in economics is an old but rather debated and controversial concept. A rather incredible theorem on which the whole of modern economic theory is based is the following: a preference relation can be represented by a utility function only if it is rational. For my non-economist friends, let us detail here what a rational preference relation is. Rational preferences over a set of alternatives, say $X$, are defined by a complete and transitive binary relation, say $\succeq$. Formally:</p>
<ul>
<li>(i) completeness: $\forall x,y \in X$, either we have $x \succeq y$ (i.e $x$ is at least as good as $y$) or $y \succeq x$ (i.e $y$ is at least as good as $x$) or both, that is $x \sim y$ (i.e I am indifferent between $x$ and $y$)</li>
<li>(ii) transitivity: $\forall x,y,z \in X$, if $x \succeq y$ and $y \succeq z$, then $x \succeq z$</li>
</ul>
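<p>These two axioms can be checked mechanically on a finite set; a small Python sketch (the example relation is invented), representing the weak preference relation as a set of ordered pairs:</p>

```python
from itertools import product

def is_complete(X, R):
    """Every pair of alternatives is ranked in at least one direction."""
    return all((x, y) in R or (y, x) in R for x, y in product(X, X))

def is_transitive(X, R):
    """x >= y and y >= z must imply x >= z."""
    return all((x, z) in R
               for x, y, z in product(X, X, X)
               if (x, y) in R and (y, z) in R)

X = {"apple", "banana", "cherry"}
# apple >= banana >= cherry, with reflexive pairs included.
R = {(x, x) for x in X} | {("apple", "banana"), ("banana", "cherry"),
                           ("apple", "cherry")}

assert is_complete(X, R) and is_transitive(X, R)  # a rational preference relation

# Remove the 'shortcut' pair and transitivity fails.
assert not is_transitive(X, R - {("apple", "cherry")})
```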
<p><strong>This is an extremely strong definition.</strong> Indeed, the completeness axiom implies that an agent can always rank any alternatives among themselves, i.e. he always has some relative opinion on each of the alternatives offered to him. As it happens, we do not always have an opinion on possible alternatives, and it is difficult to make rankings, however approximate, of certain alternatives. Secondly, the axiom of transitivity is not so often verified experimentally (cf. the <em>Condorcet paradox</em>, 1785, for its collective counterpart), surprising as that is since the intuition behind this axiom is so trivial. We know, in fact, since <em>Allais</em> and his famous paradox (1953), that <em>homo sapiens sapiens</em> does not always make decisions in accordance with what orthodox theories predict. One of the ambitions of experimental game theory is to constitute a sort of data bank on the <em>real</em> behaviour of individuals that will allow the effective testing of alternative theories that theorists are likely to develop in order to compensate for the shortcomings of the theory of <em>von Neumann and Morgenstern</em> (the fathers of game theory, who made a sort of remix of the above-mentioned theorem by integrating some other fancy axioms, such as independence and continuity, so that a stylish utility function can be found to represent a preference relation over lotteries).</p>
<p>Now that the reader knows what a rational preference relation is, he or she will have no difficulty in realising that <strong>the implications of the above theorem are absolutely crazy</strong>. As a reminder, all modern economic theory, including game theory of course, makes extremely regular, if not systematic, use of these famous utility functions (for my non-econ friends, a utility function simply stands for the satisfaction of an economic agent). I invite the reader to stop reading this post for two minutes to realise the power of these assumptions.</p>
<p>The branch of economics that analyzes how and why individuals make their choices is called <strong>decision theory</strong>. However, social interactions between individuals are not <em>directly</em> studied in decision theory, and it is rather necessary to turn to the fabulous game theory to analyse these interactive decisions. A question that the reader will naturally ask is whether <em>Bentham</em>’s utilitarianism can provide a rational basis for the collective evaluation of decisions. This is indeed an interesting question, and a fabulous economist by the name of <em>Kenneth Arrow</em> thought pretty much the same by wondering whether it is possible to neglect the intensity of utilities and retain only their comparative informational content, as we have seen above (‘better than’, ‘worse than’). He demonstrates in his book <em>Social Choice and Individual Values</em> (1951) that if the evaluations of possible options by individuals are simple <strong>preferences, i.e. they are only qualitative and relative (in practice: a ranking of possible options) and therefore neglect the intensities</strong>, then collective choices cannot satisfy the criteria of rationality. Specifically, through <em>Arrow’s impossibility theorem</em>, he mathematically demonstrated that social preferences constructed by aggregating transitive binary relations are not always transitive! As a result, it is impossible to find a preference aggregation rule based on rankings alone that simultaneously verifies universality, unanimity and independence of irrelevant alternatives, while being non-dictatorial. Again, the reader probably needs to pause for two minutes to realise the folly and genius of this impossibility theorem.</p>
<p>But let’s go back to what has been driving us since the dawn of time, namely game theory. I may be speaking with more jargon here (hoping it wasn’t already the case, though on my first re-reading I unfortunately doubt it) and I apologise to the uninformed reader, but rehashing a course in strategic game theory is not the objective of this blog post. In strategic games, something of the order of common knowledge (i.e. about the payoffs of each player in all possible configurations) is necessary for equilibria to emerge. But common knowledge potentially requires infinite computing power from each player. This is obviously not realistic. Just one example to illustrate my point: chess. By applying <em>backward induction</em>, it is theoretically possible to calculate a winning strategy for one of the two players, or at least a strategy that guarantees a draw. The problem is that the number of nodes that need to be traversed within the tree structure induced by the extensive form of a chess game far exceeds the number of atoms in the observable universe! This is where <em>bounded rationality</em> comes in.</p>
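<p>Backward induction itself is just a recursion over the game tree; here is a minimal sketch on a tiny invented two-player zero-sum tree (chess is the same recursion, only with an astronomically larger tree):</p>

```python
def backward_induction(node, maximizing):
    """Value of a game-tree node under optimal play by both players.
    Leaves are payoffs to the maximizer; internal nodes are lists of children."""
    if isinstance(node, (int, float)):
        return node  # leaf: the game is over
    values = [backward_induction(child, not maximizing) for child in node]
    return max(values) if maximizing else min(values)

# Depth-2 tree: the maximizer moves first, then the minimizer replies.
tree = [[3, 5],   # left move:  the minimizer will answer with min(3, 5) = 3
        [2, 9]]   # right move: the minimizer will answer with min(2, 9) = 2
assert backward_induction(tree, maximizing=True) == 3  # so the maximizer goes left
```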
<h2 id="bounded-rationality">Bounded rationality</h2>
<p>The main idea of the theory of bounded rationality (from <em>Herbert Simon</em>) is that in a real situation, an agent is not able to find an optimal alternative or solution to a decision problem (e.g., chess, search for food by an animal, etc.) in the strict mathematical sense, because of limited resources (mainly time and energy, but also information, intelligence, memory, etc.); the search for a solution will instead stop as soon as the agent has found a satisficing solution, i.e. one bringing a sufficiently high level of satisfaction. Quoting <em>Samuel Eilon</em>:</p>
<blockquote>
<p>Optimizing is the science of the ultimate, satisficing is the art of the feasible.</p>
</blockquote>
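<p>Simon’s idea translates directly into code: instead of scanning every alternative for the optimum, stop at the first one whose value clears an aspiration level. A toy sketch with invented option values:</p>

```python
def satisfice(options, value, aspiration):
    """Return the first 'good enough' option and how many options were examined."""
    examined = 0
    for option in options:
        examined += 1
        if value(option) >= aspiration:
            return option, examined
    return None, examined  # nothing cleared the aspiration level

payoffs = [3, 7, 12, 18, 25, 4]  # hypothetical payoffs of candidate alternatives
choice, examined = satisfice(payoffs, value=lambda x: x, aspiration=10)

assert (choice, examined) == (12, 3)  # stopped after only 3 evaluations
assert max(payoffs) == 25             # the optimum would require scanning all 6
```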
<p>Paradoxically - and tautologically - the way we model bounded rationality today may not be the best; nevertheless, it is already yielding satisfactory results, the importance of which economists will no doubt soon see in their modelling.</p>
<p>However, bounded rationality is not really how AI optimizers work (I’m rather uncertain about the next few sentences; they reflect my current understanding of an optimizer, but I may be wrong). The trouble with optimizers is that they will never be satisfied that their goals are 99% achieved. In fact, even if it is already 99.9% certain that the goal is achieved, the AGI will restlessly try to increase the probability further, even if this means using the computing power of the whole earth to drive the probability it assigns to its goal being achieved from 99.9% to 99.91%. Perhaps it would be interesting to make future very powerful and optimising AI systems operate in the mode of bounded rationality in the Simonian sense. It could be something to work on, as it could potentially lower the likelihood that an optimizer AGI leads to catastrophic scenarios. I’m just droppin’ the idea; I don’t know its technical tractability at all. However, careful readers might object that, in any case, powerful and complex AI systems such as TAI and AGI will not <em>necessarily</em> be well modelled as rational optimizing agents. For instance, black-box AI models might be governed by a complex network of decision-making heuristics that are not easily captured by a utility function to maximize.</p>
<p>Finally, to finish what I started in the first paragraphs of this blog post, I think it is appropriate to quickly go back to what economists generally think about this conceptual tool of the rationality of agents. Actually, most economists, even the most liberal ones, abandoned the myth of <em>homo oeconomicus</em> a few decades ago. The perfect rationality of agents has since been discarded (recall e.g. the <em>Allais paradox</em>) and, even if many models are still based on this hypothesis, many sophistications have been implemented to better account for reality through, for example, taking into account uncertainty, the risk aversion of agents, their beliefs about the state of the world (<em>Bayes</em>), etc. A closer look at the fabulous field of game theory is enough to show that rationality can be found in quite different ways and that it is not <em>necessarily</em> and simply understood in the sense of naive maximisation of the expected value or of the expected utility (if you don’t know the difference between the two: (1) omg it’s horrible (2) I suggest you take a look at the <em>St Petersburg paradox</em> to understand), but implies taking into account possible errors on the part of the opponents of each player, and taking this consideration itself into account in the calculations attributed to each of these opponents, as is the case, for example, in the definition of perfect or sequential equilibria. Thus, the assumption of rationality should not be totally discarded simply because it is based on axioms that are not always representative of reality. For example, in voting theory, laboratory experiments show that agents tend to behave more like rational agents in the sense of <em>von Neumann-Morgenstern</em> (at least relatively more than in other situations in experimental game theory). As I mentioned above, most models in economics rely on some sophistication beyond the simple rationality of agents to come as close as possible to reality.</p>
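<p>The gap between expected value and expected utility shows up numerically in the St Petersburg lottery (a sketch, truncating the infinite sum): the lottery pays 2^k with probability 2^-k, for k = 1, 2, 3, …</p>

```python
import math

# Truncated sums over the first n_terms tosses of the St Petersburg lottery.
def expected_value(n_terms):
    """Each term contributes exactly 1, so the sum grows without bound."""
    return sum(2.0 ** -k * 2.0 ** k for k in range(1, n_terms + 1))

def expected_log_utility(n_terms):
    """With log utility the series converges (to 2 * ln 2)."""
    return sum(2.0 ** -k * math.log(2.0 ** k) for k in range(1, n_terms + 1))

assert expected_value(50) == 50.0  # keeps climbing: +1 per extra term
assert abs(expected_log_utility(50) - 2 * math.log(2)) < 1e-9
# exp(2 ln 2) = 4: a log-utility agent values the whole lottery like 4 for sure.
```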
<p>However, the trouble is that mathematicians/economists are somewhat castigated for coming to unpleasant/unrealistic conclusions because the models they develop are based on reductive assumptions, and therefore, by implication, these same mathematicians supposedly promote a model of the organisation of the social world based on a kind of Machiavellian cynicism. I think that there is a kind of confusion and intermingling between a positivist and a normative vision of what we are supposed to understand about economic models, whereas it seems to me that the two are more independent than they seem. An example to illustrate my point. It is true that in classical <em>thermodynamics</em> any isolated system tends towards a situation of equilibrium. Since mathematical economics borrows a substantial part of its analytical tools from physics, most of the dynamic theories it develops strive to show how an economic system should evolve towards a given equilibrium situation if the hypotheses we have used to describe it were correct. However, this does not mean that the equilibrium towards which our system would converge in the absence of any external intervention (by the State, for example) is desirable! I think this debate ultimately comes down to the question of <strong>whether rationality is merely instrumental, that is, agnostic about the logic of human action and its motivations (instrumental rationality), or whether it substantially informs them (substantive rationality)</strong>. Instrumental rationality is simply another formulation of <em>Nick Bostrom</em>’s <strong>orthogonality thesis</strong>: an agent’s intelligence and its final goal, i.e. its utility function, are independent. This is in contrast to the belief that, because of their intelligence, most intelligent agents always converge to the same goal.</p>
<p>That’s all for me here, thanks for reading up to this point!</p>Angélina Gentazangelina.gentaz@gmail.comMy 4th anniversary of vegetarianism2022-08-31T00:00:00-07:002022-08-31T00:00:00-07:00https://angelinagentaz.github.io/posts/2022/08/blog-post-2<p>Life, and this is so much the better, is often much more comical than we imagine. Looking at those old photos of me eating meat to my heart’s content, who could have said at that moment that I would become a convinced anti-speciesist and vegetarian a few years later?</p>
<p><img src="/images/test.png" alt="photo viande" /></p>
<p>As of today, it is my fourth anniversary of vegetarianism, and it’s a journey I will carry on explaining throughout my whole life, so I thought it might be a fun idea to write a little blog post to explain my motivations, thoughts and pathways on the subject. Especially since I’ve evolved a lot on the subject, both in the way I explain it and in the way I approach it individually and put it into practice.</p>
<h2 id="death">Death</h2>
<p>The trolley dilemma is a well-known thought experiment in moral philosophy. Imagine you are driving a trolley and the track splits into two separate branches. You cannot stop the trolley and must choose to drive on one of the two tracks. Oh but wait! On the 1st track there is one random person who will die if you choose it, and on the 2nd track, 5 other random people who will meet the same fate if you choose that one.</p>
<p><img src="/images/Picture1.jpg" alt="trolley dillema 1" /></p>
<p>What do you do? Well of course, 5 lives saved is better than 1, right? So you probably just go on the 1st track. Now let me introduce some variations. Imagine now that on the 1st track there is a grandma and on the 2nd a baby (please appreciate my artistic talents on Paint, yes, pretty wild isn’t it?).</p>
<p><img src="/images/Picture2.jpg" alt="trolley dillema 1" /></p>
<p>Well of course, you choose the 1st track again. These are very intuitive and simple thought experiments so far. The one I want to emphasize is the following: now imagine on the 1st track a pig and on the 2nd nothing.</p>
<p><img src="/images/Picture3.jpg" alt="trolley dillema 1" /></p>
<p>What would you do in this case? Well, most people choose not to kill the pig, so they choose the track where there’s nothing. So why does a large majority of people eat ham? Why kill the pig when you could not? As long as there is a vegetarian option, why not take it? Why kill the pig since you can simply choose something other than ham?</p>
<h2 id="death-vs-pain">Death vs pain</h2>
<p>Although this is one of the arguments that appealed to me at first sight, and this is where I usually start when I explain why I don’t eat meat, I have to say that it is not a correct argument in a purely philosophical and axiomatic sense. Let me now point out why I think so.</p>
<p>In fact, there is a moral distinction to be made between the issues of taking life and inflicting pain. From a purely theoretical point of view, the former does not necessarily imply the latter, and vice versa. Let me underline why I see the issue of inflicting pain as the more accurate one when it comes to saying whether it’s a good thing to eat animals. I begin with a quote from Peter Singer:</p>
<blockquote>
<p><em>The capacity for suffering—or more strictly, for suffering and/or enjoyment or happiness—is not just another characteristic like the capacity for language or higher mathematics. The capacity for suffering and enjoyment is not only necessary, but also sufficient for us to say that a being has interests—at an absolute minimum, an interest in not suffering.</em></p>
</blockquote>
<p>In Peter Singer’s view, it is the capacity to suffer that is morally important. As a consistent utilitarian, I believe that the maximisation of the welfare of all, requires that of every sentient being, regardless of their intelligence. It should be made clear that this is not about arguing for similar treatment of humans and nonhumans, but about changing the way we perceive and treat the latter. It is not a question of ‘giving pigs the vote’, but of not despising the different but real interests of non-humans such as the interest of not to suffer.</p>
<p>The wrongness of killing a being is more complicated, as evidenced by the debates around abortion and euthanasia. While self-awareness, the capacity to think ahead and have hopes and aspirations for the future, the capacity for meaningful relations with others and so on are not relevant to the question of inflicting pain (since pain is pain, whatever other capacities, beyond the capacity to feel pain, the being may have), these capacities are, I think, relevant to the question of taking life. For example, assume death does not imply suffering. Who would I save between a human and, say, a pig? That’s an interesting trolley dilemma case because it forces one to find criteria other than the capacity to suffer to determine who, between the human and the pig, should be killed. People often ask me this question because they tend to think that an anti-speciesist will answer that both have strictly the same value and that we should treat both the same way. This is a classic but common confusion between treating animals as moral patients with interests and treating animals and humans strictly equally when there is no question of inflicting suffering. But of course, I would prefer to kill the pig because, cognitively speaking, it has lesser capacities than the average human, and it is reasonable to think that cognitive abilities may in this case be the only variables to be taken into account in deciding whom to kill, provided only that death does not cause suffering, all other things being equal. However, pain and suffering are in themselves bad and should be prevented or minimized, irrespective of the race, sex, or species of the being that suffers.</p>
<p>Unfortunately, even with this new vision of not causing unnecessary suffering, the conclusion remains the same: stop eating animals. Indeed, assuming that your basic moral axiom is <em>‘I want to avoid inducing unnecessary suffering’</em> rather than <em>‘I want to avoid inducing unnecessary death’</em>, the system of animal livestock or killing as it currently stands induces suffering - and suffering that goes far beyond the local suffering that you can experience by, for example, banging your little toe against a piece of furniture. I use the qualifier ‘unnecessary’ because it is unnecessary for us humans to eat meat: the scientific consensus is now very firm on the issue and I refer to my references - more rigorously, I should say that it is counterfactually useless for us to eat meat (it is useful if and only if it is the only food available to us for our survival). To detail here the scope of animal exploitation in terms of induced suffering is beyond the scope of this blog post and I refer again to my references below for the curious and skeptical.</p>
<p>So when I stopped eating meat, this decision was nothing more than pure logic for me given the axioms involved in my moral system. I was still a child at that time but here is what I wrote down for myself in chart form:</p>
<table>
<thead>
<tr>
<th> </th>
<th> </th>
</tr>
</thead>
<tbody>
<tr>
<td>premise 1</td>
<td>To eat meat/fish you have to kill and most likely inflict pain to an animal (A)</td>
</tr>
<tr>
<td>premise 2</td>
<td>The interest of an animal to not suffer is essential (B)</td>
</tr>
<tr>
<td>premise 3</td>
<td>The interest of a human being in satisfying his taste for the flesh is incidental (C)</td>
</tr>
</tbody>
</table>
<p>The only thing missing is a moral axiom that I personally stated above (<em>I want to avoid inducing unnecessary suffering</em>). Well, the logical conclusion is pretty trivial I guess.</p>
<p>Finally, à la Ockham’s razor, one might reasonably think that the action that tends to minimise unnecessary suffering is simply to stop eating meat. A kind of paradox is that the action is, in itself, very simple: it is simply not buying meat (an action that requires no hard skill, so it is very easy to implement). But because of very strong forces of inertia, the action of not buying meat (or, equivalently, the non-action of buying meat) is actually quite hard for a lot of people.</p>
<h2 id="even-if-neither-death-nor-pain">Even if neither death nor pain</h2>
<p>Let’s put aside purely philosophical considerations; let me now put on my economist’s hat and simply have a look at some consequences of animal livestock for society as a whole. Indeed, while the local and individual gain from eating meat may be positive, although probably marginal (it depends how much utility the taste of the meat brings you), the collective gain is significantly negative (i.e. the consumption of animals brings a cost to society in the long term) due to very strong negative externalities from livestock farming.</p>
<p>But firstly, let us look at the indirect benefits of improving animal welfare, namely the altruistic utility gain of humans in making animals happy. As Romain Espinosa pointed out in his fabulous book ‘<em>Comment sauver les animaux: une économie de la condition animale</em>’, the main challenge in estimating the total surplus generated by animal welfare measures, in the same way as for environmental measures, is the absence of a market price: the end of live castration on farms is not a good that citizens can buy at the supermarket at a price from which we could infer their utility. One must resort to what economists call ‘<em>contingent valuation methods</em>’. In a study published in 2019, Richard Bennett and his co-authors use these contingent valuation methods to measure the public utility gain from the implementation of the EU Broiler Directive, which came into force in June 2010 and aims to improve the welfare of farmed chickens. This type of work paves the way for better public policy assessments of measures to improve animal welfare, taking into account the indirect utility gain in the social welfare function. Richard Bennett’s study concludes that the effects are very large. Very few public policies can boast such a rate of return on welfare for the population: for every euro invested in chicken welfare, the utility of the human population is increased by 23 euros. So this basically means that even if we take an instrumental view of animal welfare, the altruistic gains from improving the condition of farm animals can be very high and sometimes outweigh the costs of implementing the associated measures.</p>
<p>Secondly, as I said in the first paragraph, livestock farming induces strong negative externalities.</p>
<ul>
<li><strong>Livestock is a driver of global warming</strong>. According to the FAO, livestock as a whole is responsible for 14.5% of global greenhouse gas (GHG) emissions. This is slightly more than the direct emissions (fuel consumption) of the transport sector (IPCC, 2014). Livestock is a driver of deforestation. As a result of climate change and deforestation (65% of which is due to cattle ranching), much of the Amazon basin is now emitting CO2 instead of absorbing it. The weight of consumption choices was confirmed in a study published in May 2018 in the journal Science (Poore et al., 2018). Using data from 38,000 farms in 119 countries around the world, the study established the average environmental impact of producing 40 of the main foods consumed, based on 5 indicators, including GHG emissions and annual land area occupied. A scenario replacing the current diet with a 100% plant-based diet would reduce food-related GHG emissions by 49% and require 76% less land.</li>
<li><strong>Intensive livestock farms are incubators of pathogens</strong>. As an FAO report stated more than 10 years ago, ‘it is not surprising that three quarters of the new pathogens that have affected humans in the last ten years have come from animals or animal products’. Rohr et al. (2019) find that, ‘since 1940, agricultural drivers were associated with &gt;25% of all — and &gt;50% of zoonotic — infectious diseases that emerged in humans, proportions that will likely increase as agriculture expands and intensifies’. Thus, intensive livestock farming creates conditions for the emergence and amplification of epidemics.</li>
</ul>
<p>So even if we ignore the purely philosophical arguments against eating animals, i.e. even if we grant that the unnecessary death or suffering of animals is not in itself bad, we have just shown that (1) improving animal welfare generates utility (i.e. raises social welfare) and (2) livestock farming, as currently practised, cannot be considered a sustainable activity because of its strong negative externalities for society as a whole.</p>
<p>Furthermore, let me quickly mention one advantage that is not often discussed, besides the fact that it’s good for your health (I will not elaborate on this since it’s pretty trivial, but I give some references at the end in case you are sceptical): it basically reduces your cognitive load when choosing what to eat. As you can experience in daily life as a vegetarian in France, there are not that many plant-based alternatives; if you’re lucky you might find two tomato sandwiches and a tabbouleh chasing each other in the takeaway section of the supermarket. So you do not spend your time choosing between the lasagne, the chicken salad, or whatever (high entry cost because you have to change your habits, but still a win-win situation in the long run since you do not have that much choice).</p>
<h2 id="its-totally-fine-not-to-be-100-vegetarian-or-vegan">It’s totally fine not to be 100% vegetarian or vegan</h2>
<p>Much of my argument so far has centred on the fact that ‘it is fundamentally not acceptable to do harm’. However, I think this is not a very good approach to convince people, even if, empirically, it was this argument that appealed to me personally in second place (after the trolley dilemma). Rather, I think the veg(etari)an community should put more emphasis on the fact that ‘it’s good to make compromises that make the world a better place at a lower cost’. So I usually put emphasis on the following question: ‘as long as there is a vegetarian option, why not take it?’. <br />
Still, I really believe it is ok if you do not want to commit to stopping eating meat altogether. There are many inertia mechanisms, such as cultural/social norms and preferences, that can make the entry cost quite high.</p>
<h2 id="appendix">Appendix</h2>
<h3 id="faq">FAQ</h3>
<p><strong>I get you! But plants can suffer too ;))))</strong> <br />
In fact, it has not been proven that plants feel pain (no nociceptors), and they have very low cognitive capacities because they have no nervous system. So even if some doubt is allowed about the plant itself (doubt because it has not been proven), there is most probably none for the fruits and vegetables the plant produces precisely in order to be eaten (cf. biology courses in high school: seeds, evolution would have had absolutely no interest in integrating pain into fruits). Moreover, pain, just like fear or love, has a function for the survival of the species: it is probably useful for generating a rapid response, likely to provoke movement and escape. Pain therefore has no physiological meaning for a plant, because it is immobile, and there is thus strong evidence that pain would confer no advantage in terms of adaptive responses to its environment. Finally, on average, it takes 15kg of plant matter to produce 1kg of meat (far more than the amount of plant protein needed to replace that 1kg of meat). So if someone cares about alleged plant suffering, she had better stop eating meat as well.</p>
<p><strong>I get you! But the animal has already been killed so it does not matter whether I buy it or not ;)))</strong><br />
Note that the sunk cost fallacy does not apply when you simply buy meat: boycott action is still valuable from a purely consequentialist point of view (I do not elaborate on this since it’s pretty trivial).<br />
Consider my own example again. I decided to stop eating meat when I was in high school and I used to eat lunch with my friends in the school’s canteen. At the beginning of my first year of vegetarianism, food waste there was sorted, i.e. vegetables were thrown away separately from meat. There must have been a reason for this, so I assumed that sorting the waste allowed the canteen staff to adjust the menus accordingly. For example, if a lot of meat was thrown away on a daily basis, the canteen staff could anticipate that (1) the meat as cooked is not very good and adjust the recipe accordingly, or (2) the portions are too big and therefore need to be reduced, so it is not necessary to buy as much meat. The last point is important from a consequentialist point of view because my (instrumental) goal was basically to ensure that the canteen buys less meat (and therefore that potentially fewer animals suffer needlessly). However, one day the sorting of food ceased and the canteen put a grinder in its place: it was no longer possible to distinguish whether meat had been thrown away. Careful readers might have spotted the difference this grinder makes from a purely consequentialist point of view. Since the canteen staff could no longer see whether meat had been thrown away, they couldn’t adjust the next menus accordingly or, more accurately, they could no longer tell themselves that they were going to reduce meat orders because of its waste. Ok, but what’s the point?
Well, this means that if one of my friends had taken meat but bitten off more than he could chew and wanted to throw away what remained, I didn’t mind eating the leftovers at all because, from a consequentialist point of view again, it wouldn’t change anything. Whereas when there was no grinder, offering to finish their meat dish might have had an impact: there would have been less meat waste, and the canteen staff might therefore not have adjusted the meat orders accordingly.
As far as I can remember, that was one of my first ‘non-trivial’ utilitarian calculations (‘non-trivial’ because for a lot of people the definition of a vegetarian basically stops at ‘well, it’s not eating meat’, when in all rigour, stopping eating meat is not the terminal goal but only a possible instrumental goal, which is moreover not always necessary in a sunk-cost context, as I have just shown through this example).</p>
<p><strong>Are you vegetarian or vegan?</strong><br />
Well, strictly speaking I’m vegetarian; I’m maybe 80% vegan. I consider that the social cost of being vegan is very high for me, and I prefer to go to a restaurant or eat out with my friends and take the vegetarian option rather than eat alone at home because the vegan option does not exist. I’m good at cooking without animal products at home (in fact I’m often too lazy to cook much, so I often take vegan dishes as a substitute for meat, as I also like to support alternative protein initiatives), but when I have to buy takeaway food, eat at my parents’ house where butter goes on everything, or want to have a good time with my friends, I eat vegetarian rather than vegan. So even though I’m convinced of the validity of veganism and I rarely buy products with milk or eggs, I err on the side of saying that I am vegetarian, also for the following consequentialist reason: although vegetarianism is becoming increasingly popular in most Western countries, people still tend to see veganism as an extreme and radical position. Not being depicted as radical is useful for not losing ‘weirdness points’, so that I can emphasize the philosophical arguments more easily and better advocate for this cause.</p>
<h3 id="references">References</h3>
<p>I have more to say on the subject but it would be beyond the scope of this blog post so I forced myself to stop here. In addition to that, I highly recommend the reading of:</p>
<ul>
<li><em>Animal Liberation</em>, Peter Singer, 1975 (still in my top 3 favourite books of all time, as it was mind-blowing for me. Singer demonstrates almost axiomatically the validity of antispeciesism, on the same philosophical grounds that condemn all other forms of moral discrimination)</li>
<li><em>Comment sauver les animaux ? Une économie de la condition animale</em>, Romain Espinosa, 2021 (the most complete and amazing book on animal welfare economics for French readers. Even if you just want an overview of the issue, the book is so well organized that you can quickly find the information you need)</li>
</ul>