In 2013 I was done with Utilitarianism. The “this is too demanding” objection had gotten into my bones, and I expressed it in a private message to some close friends.
In 2016 I realized that the part of my behavior that attempts to help others is a deeper and different part of my mind than the part that processes moral convictions or moral intuitions. It is a preference. These two texts exhibit a kind of inescapability of the altruistic soul. Even after I dropped the moral duty card, after I dropped the moral requirement card, a few years later I was writing about how I prefer to behave in a way that closely resembles what utilitarianism prescribes, irrespective of my acceptance of utilitarianism as a moral theory. So here are these two reflections.
2013
I’ve had it with Utilitarianism.
I no longer am a Utilitarian.
Here is why:
I’m serious about things. I’m serious about doing what is rationally asked of me. Serious Utilitarianism, exemplified by “Astronomical Waste” (Bostrom) and rationalaltruist.com, leads to extremely complicated reasoning and unimaginably hard decisions.
Utilitarianism is frequently counter-intuitive to the extreme – making the care of wild animals a global priority, for instance, or dominating the cosmos before another civilization does, or requiring someone to switch into a high-earning career and donate nearly all their money to effective charities.
When reasoning in utilitarian fashion about my decisions, 99.99% of the time I end up concluding that whatever I would do will be counterfactually irrelevant. Nearly everything I care about, read about, know about, or am able to do (and that is a large set of stuff) would be substituted without loss. My papers could be written by someone else. My future career, whatever it is, will be part of an economic ensemble that will not be changed in the least by my actions.
Every time I think of doing X, consider X’s long-term effects on the world as a whole, and apply aggregative consequentialism to it, I find a reason that precludes me from doing X. Here are some examples: videogames, tennis, couchsurfing, travelling, eating well, frisbee, writing what I want because I want to, writing philosophy, writing about sociology, writing about evolutionary psychology, making something Tim Ferriss style and getting a traveller’s lifestyle, taking my girlfriend out, watching TV series with my friends. Learning to play a board game. Doing a master’s. Doing a PhD.
Basically, anything that has less expected value than making a lot of money (10^8) to donate to future people, or present people, or veganism.
Everything that I have ever valued is thrown into the trash if I reason deeply enough. Also, all my future choices are precluded from me; instead, I’m left with the duty of performing a calculation and implementing a Max algorithm on whatever the calculation yields.
Utilitarian: But….
…oh, there is always a “but”. One of the prescriptions of a pragmatic utilitarianism is that you should not do the calculation if you won’t really be able to summon the will to implement such a harsh policy. Or, in some cases, you can “allow” yourself some hours of socializing (the minimal amount) in order to keep your body going. Yet that is not how it works in the end, if you are serious about it. If you are serious you’ll always catch yourself in the meta-question: “Am I just before, or just after, the threshold of needing to relax and do my own stuff? Can I push just a little further and save just a tenth of that starving child, or of that would-be posthuman?” Utilitarianism is a ghost that haunts you, and it haunts you into deeper crevices than the Christian God.
There is no limit to the utilitarian paralysis. You can go meta as many levels as you like; it is still a paralyzing, stressful condition. You’d think there could be some compromise. But there can’t be, not for someone like me. I need my freedom back in order to relax.
So what am I trading utilitarianism for? Virtue Ethics? Kantian Ethics? No, just the same ethical theory I have always espoused: none. 90% of humans who live a fruitful and joyful life have no idea there are “ethical theories”, “ethical imperatives” and “dispositional theories of value”. I’m falling back into the mob.
Does that mean I’ll turn into an egoist? Or that I’ll stop telling people they should be utilitarians? No, and no. Only if I know someone whose commitment to extremes is as strong as mine, 8 or 80, nothing in between, shall I recommend, to that person and that person alone, to refrain from ethics. But I have known nearly none.
Utilitarianism, for me, was a prison made of stress, math, and the end of freedom of choice. I see why people need to be more utilitarian, just a little more. I don’t disagree. I just want to be out of my prison; I want to be in a psychological state in which my desire is not to suppress my pleasures, but to pursue them. Not to contain my excitement, but to display it. Not to harness my emotions into something productive, but to think that my emotions are important in themselves, qua emotions. I want to help other people and beings because that is awesome. I want to be kind because kindness makes my life good, and makes others’ lives good. I want to help the friendly Singularity because it is the most dramatic transformation that will ever happen, ever, ever. I want to live forever, and for humanity to live forever, because death is bad.
None of those reasons is the maximization of an algorithm. I have no goals whose shapes are mathematical entities of the Max(Arg) sort. None of those reasons is calculation based. None of those reasons is to increase a numerical value, or decrease another.
2016
Am I an Effective Altruist for moral reasons?
After Nakul Krishna posted the best critique of Effective Altruism so far, I did what anyone would do. I tried to steelman his opinions into their best version, and read his sources. For the third time, I was being pointed to Bernard Williams, so I conceded, and read Bernard Williams’s book Ethics and the Limits of Philosophy. It’s a great book, and I’d be curious to hear what Will, Toby, Nick, Amanda, Daniel, Geoff, Jeff and other philosophers in our group have to say about it at some point. But what I want to talk about is what it made me realize: that my reasons for Effective Altruism are not moral reasons.
When we act, there can be several reasons for our actions, and some of those reasons may be moral in kind. When a utilitarian reasons about a trolley problem, they usually save the 5 people mostly for moral reasons. They consider the situation not from the perspective of physics, or of biology, or of the entropy of the system. They consider which moral agents are participants in the scenario, they reason about how they would like those moral agents (or, in the case of animals, moral recipients) to fare in the situation, and once done, they issue a response on whether they would pull the lever or not.
This is not what got me here, and I suspect not what got many of you here either.
My reasoning process goes:
Well I could analyse this from the perspective of physics. – but that seems irrelevant.
I could analyse it from the perspective of biology. – that also doesn’t seem like the most important aspect of a trolley problem.
I could find out what my selfish preferences are in this situation. – Huh, that’s interesting, I guess my preferences, given that I don’t know any of the minds involved, are a ranking of states of affairs from best to worst: if 6 survive, I prefer that, then 5, and so on.
I could analyse what morality would have me do. – This has two parts: 1) Does morality require of me that I do something in particular? and 2) Does morality permit me to do a thing from a specific (unique) set of actions?
It seems to me that morality certainly permits that I pull the lever, possibly permits that I don’t too. Does it require that I pull it? Not so sure. Let us assume for the time being it does not.
After doing all this thinking, I pull the lever, save 5 people, kill one, and go home with the feeling of a job well done.
However, there are two confounding factors here. So far, I have been assuming that I save them for moral reasons, so I trace those reasons back to the moral theory that would make that action permissible and even sometimes demanded; I find aggregative consequentialism (usually utilitarianism) and thus conclude: “I am probably an aggregative consequentialist utilitarian.”
There is another factor, though, which is what I prefer in that situation, and that is the ranking of states of affairs I mentioned previously. Maybe I’m not a utilitarian, and I just want the most minds to be happy.
I never tried to tell those apart, until Bernard Williams came knocking. He makes several distinctions that are much more fine-grained and deeper than my understanding of ethics, or than I could explain here; he writes well and knows how to play the philosopher game. Somehow, he made me realize those confounds in my reasoning. So I proceeded to reason about situations in which there is a conflict between the part of my reasoning that says “This is what is moral” and the part that says “I want there to be the most minds having the time of their lives.”
After doing a bit of this tinkering, tweaking knobs here and there in thought experiments, I concluded that my preference for there being the most minds having the time of their lives supersedes my morals. When my mind is in conflict between those things, I will happily sacrifice the moral action to instead do the thing that makes the most minds better off the most.
So let me add one more strange label to my already elating, if not accurate, “positive utilitarian” badge:
I am an amoral Effective Altruist.
I do not help people (computers, animals and aliens) because I think this is what should be done. I do not do it because it is morally permissible or morally demanded. Like anyone, I have moral uncertainty; maybe some 5% of me is virtue ethicist, or Kantian, or some other perspective. But the point is that even if those parts were winning, I would still go there and pull that lever. Toby or Nick suggested that we use a moral parliament to think about moral uncertainty. Well, if I do, then my conclusion is that I am basically not in a parliamentary system but in some other form of government, and the parliament is not that powerful. I take Effective Altruist actions not because they are what is morally right for me to do, but in spite of what is morally right to do.
So Nakul Krishna and Bernard Williams may well have, and in fact might have, reasoned me out of the claim “utilitarianism is the right way to reason morally.” That deepened my understanding of morality a fair bit.
But I’d still pull that goddamn lever.
So much the worse for Morality.
Commentary
It seems clear from these two texts that I am not moved by moral knowledge as strongly as other people are; that is, upon realizing that something has a high probability of being a moral fact, my behavior does not change substantially. It also seems clear that I have very strong altruistic inclinations whose origin is not moral reasoning but some other drive. This drive seems to be pan-reflectionally stable (stable under reflection from many perspectives, including after reading Bernard Williams or Peter Singer or Joshua Greene) and therefore robust to manual-mode thinking, unlike the deontological judgements people make on the trolley problem, which tend not to survive manual-mode reflection.
For lack of a better term, I will call this inescapable altruism, until I better understand it.
For those of a more nuanced philosophical inclination, here is the discussion that Fabiano and I had about the second piece.
Diego_Caleiro, 16 February 2016:
I find the idea that there are valid reasons to act that are not moral reasons weird; I think some folks call them prudential reasons. It seems that your reason to be an EA is a moral reason if utilitarianism is right, and “just a reason” if it isn’t. But if not what is your reason for doing it?
My understanding of prudential reasons is that they are reasons of the same class as those I have to want to live when someone points a gun at me. They are reasons that relate me to my own preferences and survival, not as a recipient of the utilitarian good, but as the thing that I want. They are more like my desire for a back massage than like my desire for a better world. A function from my actions to my reasons to act would be partially a moral function, partially a prudential function.
If you are not acting like you think you should after having complete information and moral knowledge, perfect motivation and reasoning capacity, then it does not seem like you are acting on prudential reasons, it seems you are being unreasonable.
Appearances deceive here because “that I should X” does not imply “that I think I should X”. I agree that if both I should X and I think I should X, then by doing Y=/=X I’m just being unreasonable. But I deny that mere knowledge that I should X implies that I think I should X. I translate:
I should X = A/The moral function connects my potential actions to set X. I think I should X = The convolution of the moral function and my prudential function take my potential actions to set X.
In your desert scenario, I think I should (convolution) defend myself, though I know I should (morality) not.
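To make that notation a little more concrete, here is a rough sketch (the symbols are only illustrative shorthand; nothing in the argument hangs on them). Let $A$ be my set of potential actions, $M$ the moral function, and $P$ my prudential function:

$$\text{I should } X \;\Leftrightarrow\; M(A) \subseteq X, \qquad \text{I think I should } X \;\Leftrightarrow\; (M \ast P)(A) \subseteq X,$$

where $M \ast P$ is the “convolution”, some weighted combination of the two functions. In the desert scenario, $(M \ast P)(A) = \{\text{defend myself}\}$ even though $M(A) = \{\text{do not defend myself}\}$.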
Hence, even if impersonal reasons are all the moral reasons there are, insofar as there are impersonal reasons for people to have personal reasons these latter are moral reasons.
We are in disagreement. My understanding is that the four quadrants can be empty or full. There can be impartial reasons for personal reasons, personal reasons for impartial reasons, impartial reasons for impartial reasons and personal reasons for personal reasons. Of course not all people will share personal reasons, and depending on which moral theory is correct, there may well be distinctions in impersonal reasons as well.
Being an EA while fully knowing maximizing welfare is not the right thing to do seems like an instance of psychopathy (in the odd case EA is only about maximizing welfare). Of course, besides these two pathologies, you might have some form of cognitive dissonance or other accidental failures.
In what sense do you mean psychopathy? I can see ways in which I would agree with you, and ways in which not.
Perhaps you are not really that sure maximizing welfare is not the right thing to do.
Most of my probability mass is that maximizing welfare is not the right thing to do, but maximizing a combination of identity, complexity and welfare is.
I prefer this solution of sophisticating the way moral reasons behave than to claim that there are valid reasons to act that are not moral reasons; the latter looks, even more than the former, like shielding the system of morality from the real world. If there are objective moral truths, they better have something to do with what people want to want to do upon reflection.
One possibility is that morality is a function from person time slices to a set of person time slices, and the size to which you expand your moral circle is not determined a priori. This would entail that my reasons to act morally only when considering time slices that have personal identity 60%+ with me would look a lot like prudential reasons, whereas my reasons to act morally accounting for all time slices of minds in this quantum branch and its descendants would be very distinct. The root theory would be this function.
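In rough, purely illustrative notation: let $S$ be the set of person time slices, and let morality be a function $\mu : S \to \mathcal{P}(S)$ taking each slice to the set of slices it has moral reason to care about. My seemingly prudential reasons would then be $\mu$ restricted to $\{\, s \in S : \mathrm{id}(s, \text{me-now}) \geq 0.6 \,\}$, while the fully expanded circle is $\mu(\text{me-now}) = S$, all slices in this quantum branch and its descendants. The root theory would be $\mu$ itself.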
The right thing to do will always be an open question, and all moral reasoning can do is recommend certain actions over others, never to require. If there is more than one fundamental value, or if this one fundamental value is epistemically inaccessible, I see no other way out besides this solution.
Seems plausible to me.
Incommensurable fundamental values are incompatible with pure rationality in its classical form.
Do you just mean VNM axioms? It seems to me that at least token commensurability certainly obtains. Type commensurability quite likely obtains. The problem is that people want the commensurability ratio to be linear on measure, which I see no justification for.
It seems to me Williams made his point; or the point I wished him to make to you. You are saying “if this is morality, I reject it”. Good. Let’s look for one you can accept.
I would look for one I can accept if I were given sufficient (convoluted) reasons to do so. At the moment it seems to me that all reasonable people are either some type of utilitarian in practice, or are called Bernard Williams. As long as I don’t get pointed thrice to another piece that may overwhelm the sentiment I was left with, I see no reason to enter the exploration stage. For the time being, the EA in me is at peace.
joaolkf, 17 February 2016:
My understanding of prudential reasons is that they are reasons of the same class as those I have to want to live when someone points a gun at me. They are reasons that relate me to my own preferences and survival, not as a recipient of the utilitarian good, but as the thing that I want. They are more like my desire for a back massage than like my desire for a better world. A function from my actions to my reasons to act would be partially a moral function, partially a prudential function.
That seems about right under some moral theories. I would not want to distinguish being the recipient of the utilitarian good and getting back massages. I would want to say getting back massages instantiates the utilitarian good. According to this framework, the only thing these prudential reasons capture that is not in impersonal reasons themselves is the fact that people give more weight to themselves than to others, but I would like to argue there are impersonal reasons for allowing them to do so. If that fails, then I would call these prudential reasons pure personal reasons, but I would not remove them from the realm of moral reasons. There seem to be already established moral philosophers who tinker with apparently similar types of solutions. (I do stress the “apparently”, given that I have not read them fully or fully understood what I read.)
Appearances deceive here because “that I should X” does not imply “that I think I should X”. I agree that if both I should X and I think I should X, then by doing Y=/=X I’m just being unreasonable. But I deny that mere knowledge that I should X implies that I think I should X.
They need not imply, but I would like a framework where they do under ideal circumstances. In that framework – which I paraphrase from Lewis – if I know a certain moral fact, e.g., that something is one of my fundamental values, then I will value it (this wouldn’t obtain if you are a hypocrite, in such case it wouldn’t be knowledge). If I value it, and if I desire as I desire to desire (which wouldn’t obtain in moral akrasia), then I will desire it. If I desire it, and if this desire is not outweighed by other conflicting desires (either due to low-level desire multiplicity or high-level moral uncertainty), and if I have moral reasoning to do what serves my desires according to my beliefs (wouldn’t obtain for a psychopath), then I will pursue it. And if my relevant beliefs are near enough true, then I will pursue it as effectively as possible. I concede valuing something may not lead to pursuing it, but only if something goes wrong in this chain of deductions. Further, I claim this chain defines what value is.
I should X = A/The moral function connects my potential actions to set X. I think I should X = The convolution of the moral function and my prudential function take my potential actions to set X.
I’m unsure I got your notation. =/= means different? What is the meaning of “/” in “A/The…”?
In your desert scenario, I think I should (convolution) defend myself, though I know I should (morality) not.
I would claim you are mistaken about your moral facts in this instance.
We are in disagreement. My understanding is that the four quadrants can be empty or full. There can be impartial reasons for personal reasons, personal reasons for impartial reasons, impartial reasons for impartial reasons and personal reasons for personal reasons. Of course not all people will share personal reasons, and depending on which moral theory is correct, there may well be distinctions in impersonal reasons as well.
What leads you to believe we are in disagreement if my claim was just that one of the quadrants is full?
In what sense do you mean psychopathy? I can see ways in which I would agree with you, and ways in which not.
I mean failure to exercise moral reasoning. You would be right about what you value, you would desire as you desire to desire, have all the relevant beliefs right, have no conflicting desires or values, but you would not act to serve your desires according to your beliefs. In your instance, things would be more complicated given that it involves knowing a negation. Perhaps we can go about it like this. You would be right that maximizing welfare is not your fundamental value, you would have the motivation to stop solely desiring to desire welfare, you would cease to desire welfare, there would be no other desire inducing a desire on welfare, there would be no other value inducing a desire on welfare, but you would fail to pursue what serves your desire. This fits well with the empirical fact that psychopaths have low IQ and low levels of achievement. Personally, I would bet your problem is more with allowing yourself to have moral akrasia with the excuse of moral uncertainty.
Most of my probability mass is that maximizing welfare is not the right thing to do, but maximizing a combination of identity, complexity and welfare is.
Hence, my framework says you ought to pursue ecstatic dance every weekend.
One possibility is that morality is a function from person time slices to a set of person time slices, and the size to which you expand your moral circle is not determined a priori. This would entail that my reasons to act morally only when considering time slices that have personal identity 60%+ with me would look a lot like prudential reasons, whereas my reasons to act morally accounting for all time slices of minds in this quantum branch and its descendants would be very distinct. The root theory would be this function.
Why just minds? What determines the moral circle? Why does the core need to be excluded from morality? I claim these are worthwhile questions.
Seems plausible to me.
If this is true, maximizing welfare cannot be the fundamental value because there is not anything that can and is epistemically accessible.
Do you just mean VNM axioms? It seems to me that at least token commensurability certainly obtains. Type commensurability quite likely obtains. The problem is that people want the commensurability ratio to be linear on measure, which I see no justification for.
It is certainly true of VNM, I think it is true of a lot more of what we mean by rationality. Not sure I understood your token/type token, but it seems to me that token commensurability can only obtain if there is only one type. It does not matter if it is linear, exponential or whatever, if there is a common measure it would mean this measure is the fundamental value. It might also be that the function is not continuous, which would mean rationality has a few black spots (or that value monism has, which I claim are the same thing).
I would look for one I can accept if I were given sufficient (convoluted) reasons to do so. At the moment it seems to me that all reasonable people are either some type of utilitarian in practice, or are called Bernard Williams. As long as I don’t get pointed thrice to another piece that may overwhelm the sentiment I was left with, I see no reason to enter the exploration stage. For the time being, the EA in me is at peace.
I know a lot of reasonable philosophers that are not utilitarians, most of them are not mainstream utilitarians. I also believe the far future (e.g. Nick Beckstead) or future generations (e.g. Samuel Scheffler) is a more general concern than welfare monism, and that many utilitarians do not share this concern (I’m certain to know a few). I believe if you are more certain about the value of the future than about welfare being the single value, you ought to expand your horizons beyond utilitarianism. It would be hard to provide another Williams regarding convincingness, but you will find an abundance of all sorts of reasonable non-utilitarian proposals. I already mentioned Jonathan Dancy (e.g. http://media.philosophy.ox.ac.uk/moral/TT15_JD.mp4), my Nozick’s Cube, value pluralism and so on. Obviously, it is not recommendable to let these matters depend on being pointed.
Diego_Caleiro, 17 February 2016:
They need not imply, but I would like a framework where they do under ideal circumstances. In that framework – which I paraphrase from Lewis – if I know a certain moral fact, e.g., that something is one of my fundamental values, then I will value it (this wouldn’t obtain if you are a hypocrite, in such case it wouldn’t be knowledge).
I should X = A/The moral function connects my potential actions to set X. I think I should X = The convolution of the moral function and my prudential function take my potential actions to set X.
I’m unsure I got your notation. =/= means different? Yes. What is the meaning of “/” in “A/The…”? The same as in person/persons; it means either.
In what sense do you mean psychopathy? I can see ways in which I would agree with you, and ways in which not.
I mean failure to exercise moral reasoning. You would be right about what you value, you would desire as you desire to desire, have all the relevant beliefs right, have no conflicting desires or values, but you would not act to serve your desires according to your beliefs. In your instance, things would be more complicated given that it involves knowing a negation. Perhaps we can go about it like this. You would be right that maximizing welfare is not your fundamental value, you would have the motivation to stop solely desiring to desire welfare, you would cease to desire welfare, there would be no other desire inducing a desire on welfare, there would be no other value inducing a desire on welfare, but you would fail to pursue what serves your desire. This fits well with the empirical fact that psychopaths have low IQ and low levels of achievement. Personally, I would bet your problem is more with allowing yourself to have moral akrasia with the excuse of moral uncertainty.
I don’t think you carved reality at the joints here, so let me do the heavy lifting: the distinction between our paradigms seems to be that I am using weightings for values and you are using binaries. Either you deem something a moral value of mine or not. I, however, think I have 100% of my future actions left to do, and the question is how I allocate my future resources towards what I value. Part of it will be dedicated to moral goods, and other parts won’t. So I do think I have moral values for which I’ll pay a high opportunity cost; I just don’t find them to take a load as large as the personal values, which happen to include actually implementing some sort of Max(Worldwide Welfare) up to a Brownian distance from what is maximally good. My point, overall, is that the moral uncertainty is only part of the problem. The big problem is the amoral uncertainty, which contains the moral uncertainty as a subset.
Why just minds? What determines the moral circle? Why does the core need to be excluded from morality? I claim these are worthwhile questions.
Just minds because most of the value seems to lie in mental states; the core is excluded from morality by the definition of morality. My immediate one-second self, when thinking only about itself having an experience, simply is not a participant in the moral debate. There needs to be some possibility of reflection or debate for there to be morality; it’s a minimum complexity requirement (which, by the way, makes my Complexity value seem more reasonable).
If this is true, maximizing welfare cannot be the fundamental value because there is not anything that can and is epistemically accessible.
Approximate maximization under a penalty of distance from the maximally best outcome, and let your other values drift within that constraint/attractor.
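A rough sketch of what I have in mind (notation purely illustrative): let $W$ be welfare, $x^{\star} = \arg\max_x W(x)$ the maximally best outcome, and $d$ some distance on outcomes (a welfare gap would do as well). The rule is then to stay inside the attractor $F = \{\, x : d(x, x^{\star}) \le \varepsilon \,\}$, and let the other values, identity and complexity, drift or be optimized freely within $F$.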
Do you just mean VNM axioms? It seems to me that at least token commensurability certainly obtains. Type commensurability quite likely obtains. The problem is that people want the commensurability ratio to be linear on measure, which I see no justification for.
It is certainly true of VNM, I think it is true of a lot more of what we mean by rationality. Not sure I understood your token/type token, but it seems to me that token commensurability can only obtain if there is only one type. It does not matter if it is linear, exponential or whatever, if there is a common measure it would mean this measure is the fundamental value. It might also be that the function is not continuous, which would mean rationality has a few black spots (or that value monism has, which I claim are the same thing).
I was referring to the trivial case where the states of the world are actually better or worse in the way they are (token identity), and where, in another world that has the same properties this one has (type identity), the moral rankings would also be the same.
About black spots in value monism, it seems that dealing with infinities leads to paradoxes. I’m unaware of what else would be in this class.
I know a lot of reasonable philosophers that are not utilitarians, most of them are not mainstream utilitarians. I also believe the far future (e.g. Nick Beckstead) or future generations (e.g. Samuel Scheffler) is a more general concern than welfare monism, and that many utilitarians do not share this concern (I’m certain to know a few). I believe if you are more certain about the value of the future than about welfare being the single value, you ought to expand your horizons beyond utilitarianism. It would be hard to provide another Williams regarding convincingness, but you will find an abundance of all sorts of reasonable non-utilitarian proposals. I already mentioned Jonathan Dancy (e.g. http://media.philosophy.ox.ac.uk/moral/TT15_JD.mp4), my Nozick’s Cube, value pluralism and so on. Obviously, it is not recommendable to let these matters depend on being pointed.
My understanding is that by valuing complexity and identity in addition to happiness I am already professing to be a moral pluralist. It also seems that I have boundary-condition shadows, where the moral value of extremely small amounts of these things is undefined, in the same way that a color is undefined without tone, saturation and hue.