Maybe-Mathematical Musings

nostalgebraist

Chin-stroking anti-EA article: “but this ignores that morality is not just some bullying voice from the heavens, it is part of our own quest to self-actualize as the unique human beings with particular cares and bonds we are”

Me: “yes, and it’s a lot harder to self-actualize when you have malaria, and sometimes one of your particular cares is helping out other people with their own quests”

marcusseldon

The chin-stroking person is right: if you’re a real utilitarian then there is no room to self-actualize, have particularistic attachments, and be a morally good person. The only good act under utilitarianism is the one that maximizes utility. Self-actualizing, having human relationships, etc., instead of working a second job to donate more money to AMF, is bad under utilitarianism. Under utilitarianism, there are no good people alive and never have been. This is not uncharitable; this is actually what utilitarianism is.

Which is why I believe there is no such thing as a sincere utilitarian, not even Peter Singer, who spends/spent a great deal of money for elderly care for his mother instead of donating it to charity.

Note, though, that I’m talking about utilitarianism (which EA endorses as far as I’m aware), not consequentialism more broadly. Some EAs are not utilitarians, but most seem to identify that way.

nostalgebraist

This isn’t utilitarianism as I understand it.  For one thing, the act that maximizes utility is the best act, not the only good act.  It would be the only good act if we define “good act” as “the best one possible under the circumstances,” but this seems counter-productive, for exactly the reasons you give, and in practice I don’t see utilitarians make this claim very often (if ever?).

But, and more importantly, we live under various constraints: living a life of constant toil without particularistic attachments is extremely stressful and unpleasant in a way that will lower our cognitive abilities, make it harder to do any given job, and make it harder to morally evaluate the further tradeoffs we will face.  (It’s a common experience to be involved in a situation that is obviously awful, perhaps morally, in retrospect, but in which one was so over-worked or sleep-deprived that this wasn’t clear in the moment.)

There are some particular examples in which this becomes obvious: even a utilitarian moral saint is not going to take an extra night job if it would mean they literally have no time to sleep, ever.  Less clear-cut but more important is the fact that most people (I think) find it easier to be productive workers, and to make moral tradeoffs, when they have fulfilling human contacts and various other “particulars.”

waystatus

But EA only makes sense if you want to do the best thing and not merely a good thing. Why spend all that money on malaria when you can also do some amount of good by giving much less money to homeless people nearby? If you’re happy as long as you’re doing any good thing there’s no particular reason to care about helping people in the most effective manner.

But if you do care about doing the best thing, or even the best possible thing, then that does imply that instead of caring about your own relationships you should donate more money to malaria. You should, specifically, do things to stop malaria until the point where doing those things causes you to stop doing things to stop malaria. Where this point lies is different for different people, but I’m quite sure most EAs aren’t at it.

nostalgebraist

My own motivations for EA aren’t these ones, and in general I think these kinds of arguments for EA create needless guilt and confusion without leading to more effective action (and probably leading to less).  This isn’t to say that the EA movement doesn’t make this argument – it does, sometimes – but that there is a version of EA, espoused by some actual EAs, that doesn’t need it.

My version of EA is more like this: one of the things I care about is helping people through unusually terrible circumstances, like getting malaria or not being able to pay bail on a misdemeanor when they would show up for court anyway.  This isn’t the only thing I care about, and even if it were, I wouldn’t care about it to the exclusion of my mental health etc. because that’s self-defeating (cf. the extreme example of taking on enough jobs that you have no time to ever sleep).

But within my pre-existing set of cares, cause prioritization already makes sense, without needing some particular fetish for doing the absolute best thing.  All I need is to care more about helping people when they are in more dire straits, which I already do.  If I’m setting aside some fraction of my budget for helping people in dire straits, I’d prefer to use it to help people in as dire straits as possible.

This is not some strange philosophical add-on to my emotions/intuitions, but the same sort of thinking I use everywhere else in my life – for instance, if one of my friends is going through a tough breakup and needs a shoulder to cry on, and another one is doing a home improvement project they can do themselves but would be easier with a helping hand, I’m going to spend my time with the former and not with the latter (who will probably understand and agree, if I explain the situation).

One could propose that the “logical conclusion” of this is to spend all my time helping people in the direst straits possible, such that I should ignore both friends if I could do something else with the time that would help people with malaria (by some “equal or greater amount”).  But that, unlike any of the above, isn’t emotionally natural for me at all, which is a real difference.

ozymandias271

A possible analogy: consider a writer who wants to write as many excellent novels as possible. She probably puts less effort into her writing than she could; she isn’t at the point where any more time spent writing would make her books worse. But it is silly to object to her: “you went out to dinner with your husband instead of putting in three more hours researching Icelandic sheep herding! You might as well just write Dan Brown books.”

In pretty much any area of life, we accept that it is possible to (a) care about quality and (b) not put in the absolute theoretical maximum amount of effort you could to the exclusion of everything else you care about. This is also true of EA.

Furthermore, donating to AMF is an extremely cheap way of fulfilling my values, as opposed to giving to nearby homeless people. $3000 is enough to support a homeless person for about a hundred days, but it purchases more than fifty years of life if used for malaria nets. So donating the money to AMF instead of homeless people is equivalent to raising your donations from $1000 to $150,000, and it is significantly less painful.
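As a sanity check on the arithmetic above, here is a small sketch using only the figures stated in the post (the $3000-for-a-hundred-days and fifty-years-of-life numbers are the post’s rough estimates, not audited charity-evaluator data; the exact multiplier depends on them):

```python
# Rough cost-effectiveness comparison using the figures from the post above.
# All inputs are the post's own illustrative estimates.

donation = 3000  # dollars

# Option A: supporting a nearby homeless person.
homeless_days = 100                      # "about a hundred days" per $3000

# Option B: malaria nets via AMF.
amf_years = 50                           # "more than fifty years of life" per $3000
amf_days = amf_years * 365               # convert to person-days

# How many times more person-days does the same $3000 buy via nets?
ratio = amf_days / homeless_days

print(f"${donation} buys ~{amf_days:,} days via nets "
      f"vs {homeless_days} days of direct support: ~{ratio:.0f}x")
print(f"equivalent direct-support donation: ${1000 * ratio:,.0f}")
```

On these inputs the multiplier comes out around 180x, the same rough order of magnitude as the post’s “$1000 to $150,000” framing.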

indiveren

Ok, sorry, but I want to pick on details to clear something up. The first post talks about a “chin-stroking anti-EA” person, and then the second person immediately jumps from EA to utilitarianism as the term being discussed. From that point on, as far as I can see, EA and utilitarianism are used interchangeably.  Does this mean you can’t take utilitarianism as a moral stance unless you are also actively EA?  Also, if utilitarianism actually is supposed to be based on actively doing the best/good act you can do, then what is the proper term for someone who simply takes the moral stance that making people feel happiness is morally good, causing other people pain is morally bad, etc.?

ozymandias271

I think that if you’re a utilitarian and not an EA you are probably deeply confused, but there are lots of EAs who aren’t utilitarians. 

Utilitarianism ranks acts from best to worst; pure utilitarianism doesn’t have a concept of “morally obligatory” or “morally non-obligatory but praiseworthy”, which has led some people to conclude that under utilitarianism being maximally good is morally obligatory. I think this is silly. I think the thing you mention is utilitarianism, or possibly some other form of hedonic consequentialism.

jadagul

I usually hear utilitarianism rendered as “you ought to do the thing which maximizes utility” or “it is morally obligatory to maximize utility.”

In particular, I’m not sure I’ve ever heard a rendition of the Singer drowning-child argument that didn’t include the statement that “you are obligated to save the child, and thus you’re obligated to save anyone else at a similar tradeoff-rate.”

Hm, so I decided to skim some articles on utilitarianism and I think I can state the issue more precisely. I think people use utilitarianism (and “utilitarianism”) in two different ways. One way, which you’re describing, is as a theory of the good: utilitarianism describes what sort of situations are “good” situations and what sort are “bad” situations. (I think it fails at this but that’s unrelated to this issue).

Other people are looking at utilitarianism as a theory of morality: it tells you what you ought to do. And viewed that way, it’s incredibly demanding and restrictive. (And incredibly “should”-y, which you wrote a lovely post about the problems with).

marcusseldon

Yeah, this conflation happens. It’s the difference between axiology (theory of goodness) and normative ethical theory.

Within academic philosophy, however, “utilitarianism” is always meant in the latter sense.

jadagul

That’s what I’d thought but Wikipedia, IEP, and SEP all describe it as a theory of the good first.

ethics utilitarianism ea cw
