Utilitarianism is a moral philosophy which claims that the morality of an action is defined by the happiness it is expected to produce for humanity. In this article, I explain how and why I came to believe in this philosophy in the first place, and why I later abandoned it in favour of deontological ethics.
What is utilitarianism?
Utilitarianism is a moral philosophy, which means that it is a theory of what is morally right and wrong. The idea behind it is very simple: we all have the intuition that making people happy is good, and making people suffer is bad. Utilitarianism extrapolates this idea by saying that, globally, a world in which people are happier is a better world. The concept of morality is then derived from this: the moral action in a given situation is the one which produces the most happiness for everyone.
Even before learning about philosophy, I of course had moral intuitions and rules that I believed in, like “killing is wrong”, “stealing is wrong”, etc. When I learned philosophy in high school 15 years ago, I was amazed to discover Kantian ethics, which claims that morality is about following rules (like “you shall not kill” and so on) and that there is a logic to justify the “correct” rules.
However, a few years later, when I was studying Math, Computer Science and Biology at university, I discovered utilitarian philosophy and read some of the work of Peter Singer (the most famous contemporary utilitarian). I was convinced by utilitarianism because it felt more “rational” than Kantian ethics. To be more precise, I think I came to believe in it for three main reasons:
As a student with a math background, I found it quite appealing because it can easily be formulated in mathematical terms: there is a function, called the utility function, which measures global happiness. Think of it as a big imaginary counter displaying a number that measures how happy humanity is overall. The moral action in a situation can then be “calculated” by finding the action which maximizes this utility function (see the code sketch after this list).
It is a very simple theory, and scientists like simple theories, because in science we try to find the simplest theory which explains all observations.
Kantian ethics gives little advice on how to think about animals, and as a convinced evolutionist, I was not satisfied with a moral system that could not be applied to animals.
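To make the first reason concrete, here is a minimal Python sketch of that “big imaginary counter”. The candidate actions and their utility values are entirely invented for illustration; the point is only that act utilitarianism reduces the moral choice to picking the action with the highest value of the counter.

```python
# Minimal sketch of act utilitarianism as an optimization problem.
# The actions and their utility values are invented for illustration.

def utility_of(action: str) -> float:
    """Imaginary 'big counter' of global happiness after the action."""
    made_up_values = {
        "donate to charity": 10.0,
        "do nothing": 0.0,
        "break a promise for convenience": -2.0,
    }
    return made_up_values[action]

def moral_action(candidate_actions: list[str]) -> str:
    """Act utilitarianism: the moral action is the one maximizing utility."""
    return max(candidate_actions, key=utility_of)

print(moral_action(["donate to charity", "do nothing",
                    "break a promise for convenience"]))
# -> 'donate to charity'
```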
All this led me to become a convinced utilitarian 10 years ago, and I even played around with it by trying to formalize it mathematically. But now, 10 years later, I don’t believe in it anymore, and I will try to explain why in this article.
A legitimate goal
Let’s start by saying that I think the goal of utilitarian philosophers is legitimate and useful. It is in some way similar to scientific thinking. We start from an observation, which here is the moral intuition “making people happy is good, making people suffer is bad”, and we extrapolate it into a “model” of ethics: “the morality of an action is defined by how much happiness it brings to humanity”.
Having a theoretical moral philosophy can be really useful, because it allows us to question moral or legal rules which might lack justification. If we can build a model that matches most of our moral intuitions but makes some moral rules look strange, that may be an indication that the rule in question is incorrect. Jeremy Bentham, the founder of utilitarianism, called for the abolition of slavery and physical punishment, and in general advocated for fundamental freedoms. Even more surprisingly, he wrote in 1785 that homosexuality should be decriminalized, at a time when this was not questioned at all. Theorizing about what morality is allowed him to realize that some things people took to be moral were in fact immoral (like slavery), and vice versa (like homosexuality).
What I believe now
My current view of morality is closer to the Kantian ethics I believed in before switching to utilitarianism (so in a sense I came back to it). Kantian ethics is a “deontological” moral system, which basically means that it consists of a list of rules that everyone should follow in order to act morally. The hard part, of course, is choosing the correct rules. The famous German philosopher Immanuel Kant wrote that good rules are those which are universalizable, meaning rules you would want other people to apply as well:
Act only according to that maxim by which you can, at the same time, will that it should become a universal law. [Immanuel Kant]
Let’s think about it a little more, with a simple example like theft. When considering whether to steal something, imagine there are clear benefits for you and you are unlikely to get caught. Kant would then ask you: “Would you like to live in a world in which everyone could steal whatever they want? You certainly wouldn’t: you would like people in general to respect others’ property, and ideally you would like to be the only thief in the world, so that nobody steals the things you have. So you see, the action you are considering is wrong.”
See also my article on the alternative method proposed by John Rawls to define moral rules.
With this line of reasoning, we can deduce that there are some rules or “categorical imperatives” (in Kant’s terminology) like:
Don’t kill
Don’t hurt innocent people
Don’t steal
Don’t enslave people
Respect your promises
Don’t bear false witness
I have listed here the rules that we can justify with this logic, but this list is not exhaustive. It might remind you of the Ten Commandments, and indeed Kant’s goal was to rationally justify these kinds of moral rules without relying on “the Bible says so” (or any other appeal to moral rules defined by a religion).
In this system of values, people don’t have a moral obligation to make other people happy. It would be a good action, but not a mandatory one (this is called a “supererogatory” action). However, if you do want to do positive good for other people, you can:
Try to prevent other people from doing morally wrong things to other human beings, i.e. try to reduce immoral actions that are already going on but are not your fault.
Try to increase the happiness of existing people. ➡ Here I think utilitarianism can legitimately be used, in this kind of “bonus but not compulsory” way.
If you like to think with a mathematical model, you could say you have two “scores” (sketched in code after this list):
A “morality score” which represents how well you follow moral obligations and prohibitions. ➡ It is a moral obligation to get a good score here.
A “goodness score” which you improve by reducing evil in the world or by increasing global happiness. ➡ It is not a moral obligation to “score points” on this one, but doing so is a good action.
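Here is a rough sketch of this two-score idea in Python. The events, thresholds and point values are my own invention for illustration, not a formal theory; the only point is that the two scores have a different moral status.

```python
# Toy sketch of the two-score model described above.
# The events and point values are invented for illustration only.

obligations_respected = True   # no killing, stealing, broken promises, ...
charity_donations = 500        # supererogatory good deeds, in dollars

# "Morality score": binary in spirit -- you either respect your
# obligations and prohibitions or you do not.
morality_score = 1 if obligations_respected else 0

# "Goodness score": optional extra credit for making the world better.
goodness_score = charity_donations / 100  # arbitrary scale

print(f"morality: {morality_score}, goodness: {goodness_score}")
# Failing on the morality score is immoral; a low goodness score is not.
```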
What’s wrong with utilitarianism?
Note that I am talking here about “act utilitarianism”, which is basically the “regular” kind of utilitarianism, as advocated by famous utilitarians like Peter Singer. My criticisms do not all apply to “rule utilitarianism”, a variant which is essentially a mix between utilitarianism and deontological ethics.
Evil actions are badly accounted for
Let’s consider an imaginary person, Alice. She never does anything “evil” like killing or stealing. But she never gives any money to charities, although she has an average salary and could if she wanted to. Now consider Carol, who killed her neighbour but never got caught. She then gives $10,000 to a charity which can save an African child from starvation for every $1,000 it receives. Carol has therefore saved 10 lives and destroyed 1, for a net total of 9 lives saved. Alice, on the other hand, has done nothing, so her net impact is 0 lives saved. According to utilitarian reasoning, this makes Carol a better person than Alice, which clearly seems wrong.
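The naive act-utilitarian bookkeeping behind this comparison can be written out explicitly. The “net lives saved” accounting below is deliberately crude and uses only the numbers from the example above:

```python
# Crude "net lives saved" accounting for the Alice/Carol example above.
COST_TO_SAVE_ONE_LIFE = 1_000  # dollars, as in the example

alice = {"lives_destroyed": 0, "donations": 0}
carol = {"lives_destroyed": 1, "donations": 10_000}

def net_lives_saved(person: dict) -> int:
    saved = person["donations"] // COST_TO_SAVE_ONE_LIFE
    return saved - person["lives_destroyed"]

print(net_lives_saved(alice))  # 0
print(net_lives_saved(carol))  # 9 -> Carol ranks "better" than Alice,
                               #      which is exactly the problem.
```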
In my opinion, Alice is a good person because she does nothing wrong, and Carol is an evil person because she commits murder. The money Carol gives does save other people, but it does not redeem her, and it certainly does not make her a better person than Alice. This is consistent with the ethics I proposed above.
To be honest, utilitarian philosophers don’t say in practice that it’s OK to murder people. They would point out that murdering people diminishes people’s trust in each other and creates a lot of anxiety, which in the end destroys a lot of happiness. This basically makes the “cost” of Carol killing her neighbour higher (we should count it as more than “1 destroyed life”), but the reasoning still applies: if Carol saves enough people, it still outweighs the murder.
Let’s all be slaves of global utility!
Utilitarianism does not only require you to follow some moral rules; it asks you to make all your decisions so as to maximize utility. Peter Singer, probably the most famous utilitarian alive, explains that you should give essentially all your annual income above what is necessary to live (around $30,000) to charities. As he puts it: “Again, the formula is simple: whatever money you're spending on luxuries, not necessities, should be given away”.
To convince us, Singer makes an analogy with a man who spent all his savings on a Bugatti (an expensive car) but is then put in a situation where he must sacrifice it in order to save a child from being killed by a runaway train. According to Singer, it is intuitively obvious that the man is morally required to do so, and therefore, since our situation is similar, we are morally required to give all our money beyond necessities to charities.
But wait... money is just a medium of exchange that you get for your work. Earning $50,000 a year requires more work than earning $30,000, and if you follow Singer’s advice, all this extra work would be done only for charities. But then, if everyone believed in utilitarianism and did that, would people still be motivated to work harder than needed to earn the $30,000 they are morally allowed to keep? Only very few people would, I think, typically those who have already dedicated their lives to working for charities. Actually, even when choosing a job, you should pick the one that allows you to give the most to charities! If you follow this logic, all your choices should be dictated by global utility. You are basically a slave of global utility.
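To see the incentive problem in numbers, here is a tiny sketch using the salary figures above. The $30,000 threshold comes from the article; the “income kept” calculation is my own simplification of Singer’s rule:

```python
# Simplified incentive calculation for Singer's rule, using the figures above.
NECESSITIES_THRESHOLD = 30_000  # income you are "morally allowed" to keep

def personal_income_kept(salary: int) -> int:
    """Under Singer's rule, everything above the threshold goes to charity."""
    return min(salary, NECESSITIES_THRESHOLD)

print(personal_income_kept(30_000))  # 30000
print(personal_income_kept(50_000))  # 30000 -> the extra work behind the
                                     # higher salary brings you nothing personally.
```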
So it seems to me that if we all really behaved “morally” as Peter Singer wants, we would destroy the economic incentive to work hard, and our economy would collapse because everyone would become demotivated. Nobody could choose a job purely for the pleasure it gives; they would have to pick the one that earns the most money, in order to redistribute as much of it as possible. But that would certainly reduce global happiness a lot, which is bad from a utilitarian point of view. This seems paradoxical: behaving in a utilitarian way would in fact be bad for the utilitarian goal. This is actually a corollary of Adam Smith’s observation that a lot of good is created by selfish behavior: if people are never allowed to behave selfishly, they end up destroying all this good. We can either conclude that there is no problem with utilitarianism and it’s just Singer’s reasoning that is plainly wrong (too bad for the world’s most famous utilitarian!), or that utilitarianism itself is a poor guide for everyday actions.
More generally, utilitarianism is a philosophy in which every decision you make is immoral if it does not maximize global happiness. When you think about it, this is really strange, because basically everything that is good becomes mandatory. There is no notion of “this is a good action, but it is not immoral to skip it”. This is called the “demandingness objection”, if you want to learn more about it.
Unlike Singer, I believe that we have absolutely no obligation to give to charities to save people. Ayn Rand, for instance, offers the radical opposite of Singer’s view, saying that we actually should act selfishly, as long as we respect other people’s rights (see the notion of ethical egoism for more information). I think this is a rather extreme view in the opposite direction. However, I agree with her that we have no moral obligation to help other people when it requires sacrificing something of ours, such as time or money.
On my view (and in Ayn Rand’s ethics), in Singer’s story with the car, the man has no moral obligation to sacrifice his Bugatti. Of course, he can still do it for selfish reasons in practice: getting a reputation as a hero, being proud of what he has done for the rest of his life, avoiding the horror of seeing a child die, avoiding the remorse of not having helped when he was the only one who could have, etc.
The purpose of morality is to teach you, not to suffer and die, but to enjoy yourself and live. [Ayn Rand]
Note, however, that my line of reasoning is valid only for problems we are not responsible for. Global warming, for instance, is a consequence of our own actions, so I think we do bear some responsibility for its consequences, even for people far away in the world.
See also: my more recent article “A moral philosophy for free people” in which I present, among other things, the compromise proposed by John Rawls on the duty of mutual aid.
What if our intuitions are wrong?
You might have noticed that all my arguments have the form “if we assume utilitarianism, then it leads to consequence X, and X is intuitively wrong”. You might answer, “maybe it’s our intuitions that are wrong” (this seems to be what Peter Singer thinks). And it might indeed be the case. If the moral model were made to match all our intuitions perfectly, it would be useless, because we could just use our intuitions directly instead.
But remember that the “proof” of utilitarianism is itself just an extrapolation of our intuition that “making people happy is good, making people suffer is bad”. So if all intuition is useless, why believe in utilitarianism at all? Basically, I think it is reasonable to accept a model (a moral philosophy) that matches most of our intuitions, and to question whether our intuitions are correct in the cases where it does not match (as Bentham did with homosexuality). But if the model gives results we find really shocking in many cases, I think we should question the model rather than give up all our intuitions.
Note that if you really believe that intuition does not matter at all, and you believe in utilitarianism simply as an act of faith, then I cannot say anything to convince you that it’s wrong. In that case, the best I can say is that I don’t share this faith.
I have explained here two of the four major reasons why I have rejected utilitarianism. There are two other major reasons, but this article is already very long, so I will stop here. One of them is the classic trolley problem, for which utilitarianism gives a solution that most people (including me) see as immoral. You can look it up if you are curious, but I will not write about it because it has been covered a lot (you can find dozens of YouTube videos about it). The other is the question of average vs. total utilitarianism, which I think is “original” enough that I will write about it in a later article.
Overall, all this has convinced me that rule-based Kantian-type ethics match our intuitions a lot better and are a better guide to moral action in practice.