Sam Bankman-Fried and the effective altruism delusion (newstatesman.com)
98 points by ironyman 6 months ago | 209 comments



This “expected value” way of thinking about income and philanthropy was inherently flawed - it didn’t take into account the risk of preference shifting. If you do good now to do good later, and later you change your mind, you’ve still done good. If you smear shit now to do good later, and later you change your mind, you’re just left with a lot of smeared shit.

What are the odds of a 50 year old having the same moral framework that they did when planning their future at 20? I rank it as rather low by default, not factoring in the corrupting influence of mega-wealth, or the road taken to achieving it.


The Catholic church has an explicit teaching on this, which is, "you may not do evil so that good may come from it." Suggesting, as many already know/suspect, that ends vs means is a pretty classic philosophical question.


This is one of the values of religion. Whether you believe in an omniscient man in the clouds or not, the practical day-to-day rules to live by give people answers to these sorts of philosophical questions. This saves them the trouble, anxiety, and self-doubt that come from trying to work out the "best" answer for themselves.


That's a value of philosophy, not just religion. Religion is one type of philosophy, but many people have a non-religious framework to help quickly answer these types of questions.


Frameworks yes, dogma less so.

Dogma is a key part of religion’s power. It gives certainty in the face of these difficult questions.


In principle I agree, but it seems like in practice most religious people form their own philosophical framework like everyone else (mostly influenced by family/community and upbringing) and adjust their religious interpretation to match.

For me it feels like authoritarianism vs. democracy - in theory an authoritarian provides quick, certain decisions where a democracy might be slow and messy and equivocal, but in practice an authoritarian government is just a democracy that makes voting very difficult. If you make decisions enough people disagree with you'll be removed, just like in a democracy. Similarly, religious dogma has to be regularly updated to avoid losing followers, and the enormous variety of religions and denominations and sects and interpretations means that in practice religion is just another form of personal philosophy that is more cumbersome to change when you lose your certainty.


With religion you quickly come into contact with conflicting rules, and then you have to start relying on different peoples’ interpretations of the “rules” so any time saving is just coming from delegation, not the religion itself.


I think that is basically what they were saying.

One does also have the benefit of looking at a religious group and potentially seeing the tradeoffs/upside though. ‘Does this seem to be working out for them overall?’

If one was being rational about the whole thing, anyway.


FWIW, 'omniscient man in the clouds' is a strawman built by its enemies.


Are you saying that Christianity and Islam don’t require the belief in a supernatural being?


“Supernatural being” and “omniscient man in the clouds” are not equivalent, which is why it is a strawman caricature.


So is the problem "man" or "in the clouds"? Because it's certainly not "omniscient". Omniscience and omnipotence are foundational aspects of the Abrahamic religions, and many other religions as well.

It is the very concept of supernatural beings that's risible. That's not a strawman. Inevitably the arguments fall into:

1. My book says so. (Yeah, well, I have a different book that says your book isn't just wrong, but evil.)

2. The god of the gaps. (The ever shrinking god.)

3. So many people believe! (So what? Millions of people think the Earth is flat. That doesn't make them right.)

4. You don't know, so it's just as likely! (lol. Let's go unicorn hunting. You don't know they're not real. So they must be!)


Neither of the mentioned religions fixates solely on "believing in a man in the clouds." You're purposefully reducing the spiritual and prose content of these multifaceted, vibrant families of traditions. The person you're replying to isn't suggesting these religions don't believe in omniscient deities, they're trying to discuss them beyond the juvenile, r/atheist flat "le epic sky man" level of discourse.


Actually they literally do fixate on belief. In fact, the primacy of belief is repeatedly and explicitly stated in their respective holy texts. And we’re not just talking about belief in some abstract feel-good concept when it comes to Christianity and Islam specifically, but rather in the existence of actual physical miracles that happened to real people. The rejection of these purportedly real supernatural events is to reject the very foundation of these religions in particular.

Belief is so central to these religions in particular that there are multiple treatises arguing over the fate of good people who do not share their belief, and whether mere ignorance (as opposed to conscious rejection) is enough for them to be “saved” from eternal damnation. To deny this is either to display ignorance or a rather disingenuous attempt to shift the discussion to abstract platitudes. Furthermore, the omniscience, omnipresence, and omnipotence of their preeminent supernatural being are fundamental properties of it.

Using big words, and acting serious, doesn’t hide the fact that the very topic is unserious. It’s like holding a conference on the economy of the United Federation of Planets. Sure, it may be a fun intellectual exercise, but no matter how much you dress it up, the core concept is fiction. And treating it as serious is not only unproductive, it’s profoundly insulting.

But sure. The Emperor’s new clothes sure are fantastic, and anyone saying they don’t exist is just a hater.


"omniscience and omnipotence" The wording unfortunately implies an agentic creature of this world.

Alternatively, God created the world as an arena, with creatures as agents. God manifests in the world through His creatures, in our quest to maximize truth, goodness, beauty. One step further, the purpose of the Son is to provide a humanly comprehensible blueprint on how to act as to maximize truth, goodness, beauty. Finally, the Holy Spirit is that which guides us in following the path the Son laid out for us.

For a scientifically trained mind, this can also be pictured as the quest to identify and asymptotically aim at maximizing the objective function that encodes truth, goodness, beauty. The stakes - a life well lived.


If it’s not an agent, then why bother with it? Occam’s Razor would say you can safely ignore it, as something with no effect is indistinguishable from something that doesn’t exist. Similarly, if it has effect but no agency, then it’s unworthy of any worship (putting aside just how profoundly offensive the very concept of worshipping is). No one worships gravity, and worshiping the sun is rightfully seen as primitive superstition.

As for a deistic god, you clearly don’t believe in that, as any reference to divinely given scriptures runs counter to that concept. You can’t claim it’s hands-off while simultaneously engaging in miracles and vaguely interpreted prophecies.

It’s a weak and lousy concept, propagated by cultural baggage.


Shouldn't be "man" either because at least in Christianity he's described as the "Father".


I'll give him the benefit of chalking that up to Bronze Age societies failing to capture the transcendent nature of the sublime due to linguistic and cultural biases.


In practice they never did. Their utility (for good or bad) came from being widespread moral frameworks, shared by an entire society, that you were taught to respect from birth, plus the fact that they weren't up for debate, except through very slow processes.


Just for the sake of debating I wanted to add that the entire basis for not debating those facts is because they belonged to a dangerous being in the clouds. Not saying that they should have been debated more, nor less, just that pretty much all societies based their moral codes on very scary supernatural being(s). Maybe because only fear makes humans collaborate in good times - thus basically turning the "good times" into a permanent "dangerous times"...


>I wanted to add that the entire basis for not debating those facts is because they belonged to a dangerous being in the clouds

The being in the clouds is a red herring. The real reason for not debating those facts was that they were conventional, respected wisdom. People didn't have to believe in the actuality of it (and often did so only superficially, if at all), but they wouldn't contest the associated moral code, the same way they wouldn't contest the customs of their land.


Alternatively, 'traditions are experiments that worked', and 'fear of God' simply translates to 'fear of messing up', with an accent of 'by pridefully refusing to listen to the wisdom of tradition'. Darwinian insights apply to a wide range of human behaviors, from cooking to cheese making to moral systems. Debate is always welcome, but changing social norms willy-nilly on a whim comes with a high risk of chaos, pain and dysfunction.


A strawman is an argument used instead of another argument... what is the original argument? Who are the enemies? Who is 'its'?


...it is?


> the practical day-to-day rules to live by gives people answers to these sorts of philosophical questions. This saves them the trouble, anxiety, and self-doubt that come from trying to work out the "best" answer for themselves.

That's part of culture, not necessarily religion. Those answers come from just watching how people go about life and learning from them. Metaphysical beliefs are not needed for day to day life and are just vestigial at this point


> The Catholic church has an explicit teaching on this, which is, "you may not do evil so that good may come from it."

It is worth noting that that same doctrine holds that "doing evil" means doing an act which is evil in itself or willing the bad outcome (e.g., because the harm caused is part of the mechanism by which the good is achieved) of an act which is not itself inherently evil. Knowing a bad outcome is likely (or even certain) to occur but accepting that as a side effect (rather than a mechanism) of doing good without willing the bad outcome is permitted, for proportionate reasons. (Principle of Double Effect)


I appreciate this insight. A lot of the EA stuff really feels like Rationalists treading into philosophical problems where theologians and philosophers have been working for like, centuries, cocksure that everyone in the past has at best little to teach them.


If churches only followed their own preachings :-)


"Jesus, please protect me from your followers"


I think that the problem is even deeper: the idea of an actual concrete "universal expected moral utility" seems so shaky to me. I appreciate utilitarianism, and think it provides important and useful guidance, but the EA approach takes it way too literally. If you're trying to decide whether to fund research of vaccine A or vaccine B, sure, the outcome variable is clearly well defined, but if you're talking about the sum total of human happiness 100+ years down the line... I mean, the error bars on that are so wide any honest error propagation is going to render your conclusions useless. It's the same grift as the prosperity gospel stuff: provide tissue-thin justification for its rich participants to feel good about the horrible things they do to get rich.


> I mean, the error bars on that are so wide any honest error propagation is going to render your conclusions useless.

Are any of the leading players in the movement scientists of any kind? I have worked with many people, including engineers of many kinds and software developers, who while being obviously very smart don't have any idea about errors and the compounding of errors.

When I studied physics in the mid-70s a compulsory feature of every experimental report was the expected error in the result which had to be justified with reference to the expected errors in all of the measurement equipment and any other sources of error.

When I got into industry this discipline seemed to be pretty much absent.

In my 40 year career in electronic and electrical engineering I don't think I ever saw an engineer include any information on the expected error of a set of measurements even though most were familiar with the concepts of statistical curve fitting and so on. They were not much better than social scientists and would plot the results of an experiment in Excel, click the button to get a polynomial fit and accept the coefficients as gospel totally disregarding the R value. And most of them didn't care that the polynomial form wasn't even applicable to the data.
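
For what it's worth, the discipline described above is cheap to recover with modern tooling. A minimal sketch (Python with NumPy; the data and the cubic degree are invented for illustration):

    import numpy as np

    rng = np.random.default_rng(0)

    # Noisy data that is NOT actually polynomial (exponential decay).
    x = np.linspace(0.0, 5.0, 30)
    y = np.exp(-x) + rng.normal(scale=0.05, size=x.size)

    # cov=True also returns the coefficient covariance matrix, so each
    # coefficient can be reported with an uncertainty instead of being
    # accepted as gospel.
    coeffs, cov = np.polyfit(x, y, 3, cov=True)
    errors = np.sqrt(np.diag(cov))

    for c, e in zip(coeffs, errors):
        print(f"coefficient: {c:+.4f} +/- {e:.4f}")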


Don't even get me started on error bar usage in industry. I've seen Big 3 consultants charge several 0s for a regression analysis that I would give a C+ in a first year grad stats course. I can't imagine how any of EA's guiding lights could have used stats in a practical application and still want to hold it as the foundation of their entire moral outlook.


Like the assumption that maximizing something is "good", even QALYs...


I'm sure this will be analyzed a lot from many legitimate different perspectives, but IMO the root fallacy here is the fetish for quantification. Sam said he would take a bet with a 50-50 chance of ending the world or making it "twice as good". The meaning of ending the world is obvious, but what does it mean to make the world twice as good?


The message I got from his answer to that question was that he didn't see an ethical problem with the idea that he has the right to put everyone else at extreme risk.


It's also just a bad understanding of probability. Making bets that could end the world isn't a one-time deal; it's an obvious repeated game. You don't just need to win one 50-50 bet, humanity needs to win all of them, which has probability approaching 0.
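
A quick sketch of that point (Python; the bet counts are arbitrary):

    # Winning one 50-50 bet is easy; surviving a repeated game of them
    # requires winning every single time, and 0.5**n goes to zero fast.
    for n in (1, 5, 10, 20, 50):
        print(f"after {n:2d} bets: P(world survives) = {0.5 ** n:.2e}")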


The focus on "rational" value maximization on individual bets is doubly blind:

- First, you and I might have different utility functions which are based on valuing outcomes differently; if utility functions are different, there may not be a single "most rational" choice.

- Second, a single bet is a single bet, but investing in a company or following a specific policy is more like staking a gambler to make a sequence of bets.


Was there never any serious discussion about preferences shifting as they age?

I had always assumed there was, and the issues were addressed, for there to be so much confidence in that 'plan'.


> What are the odds of a 50 year old having the same moral framework that they did when planning their future at 20?

I'm older than 50 now, and I have certainly changed from when I was 20. I'm much more inclined (and able) to engage in philanthropic activities now than then.


The original idea of EA was to convince students to take "immoral" but good-paying jobs on Wall Street in order to donate the earned money to charities (mainly charities owned by other effective altruists).

EA is thus an excuse/incitement to commit immoral and, in the case of SBF and his co-conspirators, illegal things.


> The original idea of EA was to convince students to take "immoral" but good paying jobs on Wallstreet

No, this isn't the original idea. It's one that came up when people started trying to combine utilitarianism and arithmetic with the actual original idea: to ask the question "How can I do the most good in the world?"


I have only ever observed EA as completely dominated by hardcore utilitarian philosophy, so it's interesting to read the statement that EA originally was not utilitarian. Do you have any information about what EA was like in that early period?


TFA has a brief description.

Note that I said "utilitarianism and arithmetic". It is one thing to say "saving 2M people from disease is better than saving 200 people from death". It is something quite different to say "it is X times better".


Now tell us "true EA has never been attempted"


Not at all. My impression is that it was attempted, didn't last very long, rapidly got corrupted by arithmetical utilitarianism, perhaps inevitably.

Also, I'd probably say "simple EA", since I have no idea what "true EA" would even mean.


This just isn't true. Earn to give is not in any way the original idea of EA.


Yes, it's a way of excusing lifestyles of luxury via immoral causes.

The modern equivalent of paying indulgences to the Catholic Church; the wealthy have always sought ways of justifying the grotesqueries of their lifestyles. Indeed for many, it's the primary labor of their lives.


> equivalent of paying indulgences

Only approximately. Indulgences were used as a penance which was only possible on the precondition of true contrition. There is no such contrition in the philosophy of EA.


I don't know much about EA other than it sounds like a way for the rich to justify their existence. Indulgences, however they may have started, became grossly twisted and simplified until it was giving money to the church in exchange for absolving sins.


Just like most good ideas, it becomes corrupted by human greed.


Karl Marx: There is this thing in modernity called “alienation”...

EA: How wonderful! It shall be our moral centerpiece.


For anyone interested in charity or doing good, if this is your first time hearing about EA, I recommend you check out some of the major resources at https://www.givingwhatwecan.org/ or https://80000hours.org/. You have an opportunity in your life to alleviate massive suffering and change lives; you shouldn't let negative publicity sway you from doing good things.

EA is all about doing the most good you can, and thinking about how to do it, instead of just guessing or following fleeting emotional moments.


Yes, but you have to recognize that it matters that this movement traced this path to being almost entirely associated with a scammer and an obsession with artificial intelligence, blowing right past its (IMO) more grounded and useful focus on improving conditions for presently-existing life.

What I mean is: you can't simply ignore that outcome when evaluating the principles, because people followed those principles to this (in my opinion) bad outcome.


I wouldn't say that the movement was entirely associated with him at all! He donated to a lot of EA organizations, sure, but Bernie Madoff donated to a lot of Jewish charities yet that doesn't cause Judaism to be associated with scam artists.

Now the complaint that EA influenced SBF to do the bad things that he did, that's a much more valid complaint about the principles. But I think that's just a coincidental association rather than a causal one. Nowhere does EA suggest doing short-term bad things in exchange for long-term good things. It suggests "earning to give", but that's not at all why SBF is considered a bad person. He stole money and committed fraud, plain and simple. EA has advocated against that in the past, and continues to.

The most you can say about EA imo is that its concern about alleviating large-scale suffering allowed SBF to internally justify doing bad things in exchange for doing a lot of future good. And sure, if your message is easily corruptible, that's a problem with your message. But there are literally always going to be people misinterpreting your message, or worse yet, people intentionally using your message as cover while knowing full well they are disobeying it. That happens with social justice movements, leftist movements, rightist movements; literally every message ever has had this problem. It's a problem that should have lessons learned from it, but if it invalidated the movement I don't think there is a single valid movement left on earth.


> I wouldn't say that the movement was entirely associated with him at all!

Unfortunately, this is a bubble perspective. I (and I assume you) have known people who have been interested in or involved with or have been talking about effective altruism for many years. This is probably true of most people here on HN and in our broader techie bubbles.

But for every person like us who first heard of effective altruism years ago, there are now a couple orders of magnitude more people who have only heard of it within the past year, entirely because of SBF's rise and fall.

It's a shame! But that's just how it works when some niche thing becomes mainstream via some famous person. Sometimes that publicity is good for that niche thing, but it is fundamentally a risky proposition, because famous people are fallible and their reputations are inevitably tied into the things associated with them.


Fair point. Although just in terms of PR, that is at least fixable. But yes, you're absolutely right about the bubble.


Frankly, I very much doubt it is fixable. But perhaps it is re-brandable. Few enough people know what effective altruism actually is, that if people start talking about the same thing under a different name, not all that many people will make the connection that it's the same thing they have this hazy negative association with.


I think there's a strand of EA that goes hardcore utilitarian, because utilitarianism is the only thing that is both logical and universally fair. And that specific strand of EA is IMO very very vulnerable to ends-justify-means kind of thinking. SBF doesn't even have to be a fake EAer to have done what he did. He just has to be a hard-utilitarian EAer who thinks there's a way to rob the world and repurpose the money to the most likely path to infinite utility.


Agreed, utilitarianism definitely causes people to suffer from "I am smarter than everyone else"-itis (and I say that as a hardcore utilitarian). The good news is that hopefully this makes other utilitarians factor the odds of failure, and the chance that they are less likely to succeed than they think, into their calculus. The only difference between a utilitarian that creates a positive impact (Bill Gates for example, even though I don't think he's utilitarian) and SBF is that SBF was overconfident and suffered from delusions of grandeur.


Possibly Bill Gates' focus on solving a massive problem, without utilitarian whataboutism, is what makes him genuinely effective.


The utilitarian "whataboutism" is what allows most people to see, and be motivated to solve, (real) massive problems. Most people don't care about solving world hunger; it's only if you really understand the depth of the suffering behind that problem that you know how badly it has to be solved. You either get lucky enough to have your emotions line up with it, or you figure it out via utility. Plus, emotions can lead you into solving fake massive problems just as easily as real ones. Mormons are certainly highly motivated to solve the problem of a lack of God in our society.


> You either get lucky enough to have your emotions line up with it, or you figure it out via utility

Emotions are what enable us to decide what is the most utility. It's accounting, not finance.


One question is: how exactly did SBF come to his... "interesting" interpretation of the expected value of money?

And the vocabulary and the whole framework that he has been using to think about these issues come with their own values and consequences, AFAIK widely shared inside EA?


"this movement traced this path to being almost entirely associated with a scammer"

You have a wildly distorted view of effective altruism. Peter Singer's book, The Life You Can Save, is nearly 15 years old. It was updated (more current statistics) about four years ago. The leading thought experiment presented in the book (the child drowning in a pond) was written by Singer over 50 years ago.

The label "effective altruism" might be new (2011) but the ideas behind it have been around for a long time. There are terrible people who give money to many causes -- do you think all those causes are disreputable because of them?


It seems like a lot of people replying to me are somehow misunderstanding what I said.

In that quote I said that effective altruism is now "almost entirely associated with a scammer", I did not say "I almost entirely associate effective altruism with a scammer".

I read Singer's book around the time it was written! I have long been, and remain, a strong supporter of GiveWell's approach to effective charitable giving.

But you have to understand: The vast vast majority of people have never heard of these things, but have now heard of "effective altruism", because that phrase appeared in an article about the salacious acts of a criminal scam artist.

Yes, that's very frustrating! But what I'm saying is that people who consider themselves part of this movement could stand to reflect on how it ended up in this position.


Also, there seems to be a lot of conflation between effective altruism (the principle) and Effective Altruism (the movement). The movement seems rotten to me and perhaps dangerous. The principle seems solid.


Yeah this seems right to me.


This is like associating Christian charity with the crusades or Spanish inquisition.


Christianity (and all religions) absolutely do have this same problem! Jesus' philosophy is (IMO) quite good, but it doesn't make sense to ignore the thousands of years of what has been done in the name of the religion when evaluating it. Both things matter. A nuanced view of a movement includes both evaluating its principles, and considering what people have done with those principles as the movement has met reality and evolved over time.


May I suggest Mere Christianity by C. S. Lewis or The Everlasting Man by G. K. Chesterton?


Well, I think it's a step too far to evaluate principles based on what people do in your name. If I do something in your name, that shouldn't bring into question your principles. And if it does, that's people not thinking clearly.


I think this is just "no true scotsman", right?

I'm just saying: It's not irrelevant to consider what people who profess to be living by the principles of a movement have actually done in reality. It's fine to say "well, by my understanding of the principles, those people were not actually doing it right". (That's certainly how I feel about a bunch of stuff that has been done in the name of Christianity!) But that doesn't make their behavior irrelevant.


Point I'm making is they may well not be living by those principles. I could say Atkins doesn't work because I do Atkins and its claims don't hold up, but if I'm not actually doing it then that has zero reflection on the diet, and purely reflects on me.


No, that does reflect on the diet. And, again, this is what "no true scotsman" is all about. Or, if you like, it's the same as "communism has never been tried". If "the Atkins diet works" but it doesn't work for most people because "they just aren't doing it right", then no, it doesn't work; if it did, people would do it right, and it would work.

The actual aggregate outcome of ideas / principles / movements matters, in addition to their intentions.

For what it's worth, I think the major religions actually hold up fairly well to this level of scrutiny, despite all of them having major obvious failures in the particular, and that that's a big contributor to their longevity. For instance (just because it happens to be Christianity that I know the most about), it is notable that when Martin Luther put up his demands for Church reform, he was rejecting very specific behavior that had become ingrained in practice, rather than the foundational principles of the religion. Despite being essentially a radical reformer, he thought the principles were nonetheless good.

Maybe effective altruism will also have staying power, because maybe its principles will also stand the test of time, and its scandals fade. But I think people who care about it shouldn't just rest on their laurels and assume that will happen! I think people should be aware that if most people are "doing it wrong" then they aren't actually doing it wrong, it just is what it is.


> if it did, people would do it right, and it would work

No, that's a non sequitur. People aren't machines following rules. It's hard to stick to principles. Saying "if love thy neighbour worked, people would do it right, and it would work" is just not right. It would work if we did it, but we don't do it.


It isn't a non sequitur! The circularity you're responding to in what I said is implicit in all these "no true scotsman" issues.

Maybe I'm not being explicit enough: It is not effective to just have principles, and then when it turns out that nobody is following them, keep saying "well if people would just follow the principles, it would work!". What is effective is to develop practices that reinforce the principles.

So Christianity didn't just write down "love thy neighbor" as a principle, it developed a practice - going to church - to continuously reinforce that principle, and in practice that has worked out very imperfectly but also much better than most things (like the Atkins diet!). And indeed, I think a big part of the problem with contemporary political Christianity in the US / Christian nationalism, is that it doesn't emphasize this kind of praxis, but rather mere identity and sloganeering. And that doesn't work!


> blowing right past its (IMO) more grounded and useful focus on improving conditions for presently-existing life

Longtermism is one strand of the movement - still, many millions of dollars get donated to global health and animal welfare.


Yes, but to me it seems like that strand has sucked up most of the air in recent years, and (again, IMO) made the movement significantly less, well, effective.


If you choose to characterize every social movement by its worst scandal, you're not going to have much of value left.


For what it's worth, I think two totally separate things:

1. It doesn't really matter how I choose to characterize this, what matters is that this social movement only entered mainstream consciousness because of the scandal. And that will be fatal to its ability to ever gain a mainstream foothold. If it had already broken through into mainstream consciousness, and then there was a scandal, then maybe things would be different. But as it is, for 99% of people, this will now just be some scammy thing that a famous criminal was into.

2. But for me personally, prior to any of this scandal, I had lost patience with EA. I was a big fan of the version of it that was like "people should mostly donate to charities that are not wasteful and efficiently use their funds on things with measurable positive impacts to peoples' lives". But then it has seemed like for the last few years, they got bored with that and started advocating that people instead throw all their money into - in my view, even more wasteful - initiatives speculatively aimed at improving the lives of humanity (or our successors) thousands of years from now.


1 is totally fair. I do worry about that, especially insofar as backlash against GiveWell happens because of this.

In many respects I think longtermism is a thing that really doesn't/shouldn't scale to have everyone's attention: so often, throwing more money or non-niche skills at it is frankly worse than useless because it attracts grifters. AI safety isn't really something most people outside of the research frontier should give much attention, let alone money.


Yep, agreed.

But the sexier thing will inevitably suck up the attention of people who see themselves as the brilliant movers and shakers of the world.


> EA is all about doing the most good you can, and thinking about how to do it, instead of just guessing or following fleeting emotional moments.

The original concept is, but I deeply question the movement itself. Fortunately, you don't have to join the EA movement in order to be thoughtful about how to help your fellow man.


Yeah, at its core EA is about using data and logic to try to maximize the good that one can do through giving or taking action. Sure, some people have gone off the rails with AI doomerism and other far-flung ideas, but it's hard to really criticize either the intention or the strategy of trying to be as effective as possible when attempting to improve the situation for the planet and/or people in need.


I think the QALY criticism grazes the real fundamental problem with EA. You can't really quantify "good" well enough to create a meaningful measure. Being alive is better than being dead (in almost all cases). Being sighted is better than being blind. But is losing an arm objectively worse than losing a leg? Who has it worse, a gay man in a seriously homophobic country or a woman in a severely misogynistic country? Sure, donating money to an animal you find cute may not be the best use of money, objectively. But maybe it's better to be openly subjective, lest we end up codifying our own subjective beliefs.


This is missing the forest for the trees. When I donate to http://givewell.org , I don't care about the relative value they place on arms vs legs. What I care about is avoiding pseudo-charities. You know, the ones that pop up in exposé articles every now and then where most of the money goes to marketing and admin, a tiny sliver goes to the actual cause, and then a clown car pulls up and unloads 30 people who try to rationalize how this is actually a good thing? I want to avoid those. As long as givewell can somewhat reliably steer my money away from pseudo-charities, as far as I am concerned they are earning their 10%.


If all "effective altruism" ever meant was "avoid giving money to inefficient charities, and we'll help you figure out which ones are which", then it would be an unambiguously positive thing. But, from my perspective, it's like most of the people originally involved in the movement got bored with that, and went all in on all this other stuff, justified by philosophical shenanigans, resulting in not just different but often contradictory conclusions.

For instance, lots of wealthy techie types who bought into EA donated heavily (and I think still do) to charities targeting things like risks to far-future humans and life extension. To me, those are also "pseudo-charities". There is no way to evaluate their effectiveness. They could be 100% effective or 0% effective, we won't know for generations or millennia...

And - just like all the "pseudo-charities" researched by GiveWell - every dollar that has gone to these organizations with unknowable effectiveness could have instead gone to known-effective charities.


Rich people have been using charity to avoid taxes, rationalize power acquisition, solicit donations for dubious causes, promote blatantly self-serving policy, etc since forever. We agree on classifying those as pseudo-charities, we agree that these things happened under the EA banner, and we agree that this should trigger some self-reflection from anyone who enabled them and rebranding from anyone who didn't.

However, I'm still going to do my charity through GiveWell. GiveWell makes many people with dubious pet causes very angry and these people have taken the adjacent scandal as an opportunity to attack the very idea of vetting charity. If someone wants to propose an alternative vetting strategy, I'm all ears, but I don't see the sins of SBF & co as a reason to stop vetting. I just don't. I am of the opinion that we need vetting now more than ever, and whatever we want to call that going forward I plan to support it.


100% agree. I just think of GiveWell as a totally different thing than "effective altruism" now, even though it is where I first heard of the concept. (And that predates the SBF scandal by quite a while!)

To their credit, they never recommended these "longtermism" initiatives as effective places to direct charitable funds. So I think their credibility as an organization comes out of this intact, despite some of the people originally associated with them coming out looking much worse.

But I dunno, maybe they and/or people like us who support them should be doing more to explicitly distance them from the rest of the EA movement.


> I don't see the sins of SBF & co as a reason to stop vetting.

Who's arguing against vetting?


> earning their 10%

Note that GiveWell only gets 10% of your donation if you explicitly request that they do. https://www.givewell.org/donate/more-information#How_is_Give...


Right. I happily tick the box because I believe in the importance of what they do.


> You know, the ones that pop up in exposé articles every now and then where most of the money goes to marketing and admin

Yeah, there are even ones that buy manor houses [0].

[0] Effectivealtruism.org purchased a £15M estate for its headquarters in 2021. https://news.ycombinator.com/item?id=33903850


From https://www.astralcodexten.com/p/my-left-kidney - "conference venues kept ripping them off, having a nice venue of their own would be cheaper in the long run, and after looking at many options, the “castle” was the cheapest".

You're anchoring on 'castle'. If it was a boring office building, no-one would care.


That quote you use was from a comment on that post, and the full context of that comment is arguing the complete opposite of what you are saying here:

> You said:

> "Obviously this kind of thing is why everyone hates effective altruists. People got so mad at some British EAs who used donor money to “buy a castle”. I read the Brits’ arguments: they’d been running lots of conferences with policy-makers, researchers, etc; those conferences have gone really well and produced some of the systemic change everyone keeps wanting. But conference venues kept ripping them off, having a nice venue of their own would be cheaper in the long run, and after looking at many options, the “castle” was the cheapest. Their math checked out, and I believe them when they say this was the most effective use for that money. "

> I'm curious to see the math for Wytham Abbey. All I ever saw was Owen Cotton Barrett's response when this was brought up over on the forum. The actual cost benefit analysis for paying £15 million wasn't made clear. As one comment pointed out, the reasoning given for purchasing the castle "could have been made virtually verbatim had the cost been £1.5m or £150m."

> You might have seen more information than me though.

> Owen Cotton-Barretts comment -> https://forum.effectivealtruism.org/posts/xof7iFB3uh8Kc53bG/...

I think taking this statement ("we were getting ripped off") without any data goes exactly against the whole idea of effective altruism.


(A nit, but the quote was from the body of the post, not a comment on the post.)

A decent subthread on this on the EA Forum: https://forum.effectivealtruism.org/posts/xof7iFB3uh8Kc53bG/...

The money came from a grant from OpenPhil for the purpose. EVF didn't publish a cost-benefit analysis to my knowledge, but a back-of-the-envelope calculation [https://forum.effectivealtruism.org/posts/xof7iFB3uh8Kc53bG/...] suggests it would plausibly cost £1-2M a year to hire similar venues. That said, whether it's worth it to purchase a dedicated venue depends a lot on how much use the venue actually gets.
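
Using just the figures above, the break-even arithmetic is straightforward (a sketch that ignores upkeep, staffing, opportunity cost, and eventual resale value):

    purchase_price = 15_000_000                 # £15M purchase
    hire_low, hire_high = 1_000_000, 2_000_000  # £1-2M/year to hire similar venues

    print(purchase_price / hire_high)  # ~7.5 years to break even (high estimate)
    print(purchase_price / hire_low)   # ~15 years to break even (low estimate)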

Ultimately, "we were getting ripped off" is a financial judgement made by CEA. I don't think it really has a great deal to do with the wider effective altruist movement tbh.


I think there is no shortage of objective ways to do good. If we skip over those, seeking the more subjective ways, it is probably in service to one's self, not to the goal of doing good.


The problem with EA isn't a focus on objective METHODS to do good, but with the belief that there are potential recipients of "good" that are objectively better targets than others, and the belief that finding objectively better targets is even achievable.


EA does not believe there are objectively better recipients, only objectively more cost effective. It's not about moralizing at all. You can choose whatever objective function you want, there are some ways to achieve that objective that are more cost effective.


Does EA not believe in objectively better recipients?

    MacAskill writes that QALYs can be used to decide which charitable causes to prioritise: faced with a choice between spending $10,000 to save a 20-year-old from blindness or the same amount on antiretroviral therapy for a 30-year-old with Aids – a treatment that will improve their life and extend it by ten years – MacAskill argues it would be better to perform the sight-saving surgery, as the 20-year-old can expect to live another 50 years. He acknowledges that QALYs are an “imperfect”, “contested” measure but sees them as mostly good enough.
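
The comparison in that quote is easy to reproduce as arithmetic. A toy version (the quality weights here are illustrative assumptions, not MacAskill's numbers):

    # QALYs = years lived x quality weight (1.0 = full health).
    blindness_weight = 0.6   # assumed quality of life while blind
    years_remaining = 50     # the 20-year-old's remaining life expectancy
    qalys_surgery = years_remaining * (1.0 - blindness_weight)  # 20.0 QALYs gained

    art_weight = 0.9         # assumed quality of life on antiretrovirals
    years_extended = 10      # extra years of life from treatment
    qalys_art = years_extended * art_weight                     # 9.0 QALYs gained

    print(qalys_surgery, qalys_art)  # surgery "wins" under these assumptions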


MacAskill does, but he's doing applied ethics here. EA is a normative ethical framework. You are free to disagree with his objective function and still be EA as long as you are thinking about how to maximize your own.


So it's just as good to give 1,000 to someone with a 6 digit salary as to an African peasant?


No. In utilitarian EA terms it’s 100% the opposite, you should give it to the peasant because they’ll get a lot more utility out of it.


That was entirely my point. It's extremely hard to beat giving to the global poor.


They bought two effective altruist castles because they said the ambiance would help their thinkers and spruce up their conferences.


Citation needed. From what I know, https://www.astralcodexten.com/p/my-left-kidney says "conference venues kept ripping them off, having a nice venue of their own would be cheaper in the long run, and after looking at many options, the “castle” was the cheapest". Zero mention of ambiance and improving thinking.


This might be the explanation astralcodexten was discussing [1]. Geoffrey Miller, a commenter (I'm not involved in EA enough to know if that's a household name), says this:

    I've been to about 70 conferences in my academic career, and I've noticed that the aesthetics, antiquity, and uniqueness of the venue can have a significant effect on the seriousness with which people take ideas and conversations, and the creativity of their thinking. And, of course, it's much easier to attract great talent and busy thinkers to attractive venues. Moreover, I suspect that meeting in a building constructed in 1480 might help promote long-termism and multi-century thinking.
Of course this man didn't buy the castle, so I don't think it's fair to hold a random commenter's thoughts against CEA. I'm not big on cognitive decoupling, but I also find buying a castle for a charity event venue shocking, and I don't think there were enough numbers in this post to soothe me. If renting event venues was too expensive, couldn't they have met virtually?

[1] https://forum.effectivealtruism.org/posts/xof7iFB3uh8Kc53bG/...


Thank you for finding the original source, that's legitimately very helpful! I think this message can stand for itself much more than the parent can. You can argue with it and disagree, but at least the disagreement is more reasonable than "they did it only for the ambiance".

Although it doesn't seem to mention the issue with being unable to find other conference centers, so that reason may still be valid or might be a mistake on Scott's part.


I suspect the real explanation has way more to do with Yudkowsky's Harry Potter fanfiction and seeing the rest of the world as "Muggles" than they would be comfortable explaining.


It's right here: "having a nice venue"

Why do they need a "nice venue"? Why wouldn't any airport hotel's conference room be sufficient? They wanted something nice, more than utilitarian. They wanted luxury, and how does an "effective altruist" defend that? Because luxury makes them "more effective" of course.


I follow a business rule of thumb that relates to this. The "chandelier rule": the bigger the chandelier in the room, the worse the deal being pitched to you is.

If someone is spending lavishly on pitching to me[1], I start to really question the deal. At best, they're showing me that they don't mind wasting money on things that don't matter. If you want to impress me, wine and dine me by taking me to McDonald's.

[1] I have been picked up in expensive sports cars, I have been taken out to eye-wateringly expensive restaurants, etc. This is the sort of thing that raises big red flags to me.


TFA quotes an "EA forum", and some Googling leads to https://forum.effectivealtruism.org/posts/xof7iFB3uh8Kc53bG/...


Your attitude to citations seems strange. Scott Alexander simply says, as part of a rant against anti-EA people, that the EA people said that they bought a castle for the right reasons, and that he chooses to believe them.


As a side note, Astralcodexten blog by Scott Alexander is a fascinating read even if you are neither effective nor altruist.


Regardless of why they bought the castle, or whether it would be more accurate to call it a manorial estate, it's a long way from a simple website that sends out mosquito nets and deworming tablets.


> Regardless of why they bought the castle

Two separate castles:

https://twitter.com/paulmainwood/status/1600433194691502081

https://www.chateau-hostacov.cz/en/contact

They really ran with the exact opposite of Peter Singer's "sacrifice your fancy boots to save a child" parable.

And Effective Altruist Sam Bankman-Fried, who drove a Prius for optics, lived in what has been cited as potentially the most expensive private residence in the Bahamas. He was grilled on it a bit before the collapse and explained that worrying about what he did with 1% of his money wasn't a justifiable use of time in the big picture (still dubious given the Singer origins of the movement), but a bigger problem was that close to 100% of the money was fictional: illiquid self-issued token valuations, and double spending/self-loaning of customer funds.

Just like the charities that spend most of their money on fundraising and growth, which were the main target of the movement's criticism, they did pretty much the same, while pitching themselves as a more effective alternative to Oxfam, etc.


The second one seems to be a link to a hotel. Where's the source for the claim that it was bought by EAs?



There is a need to decouple "Effective Altruism (TM)", the culty organization, from the underlying ideas of effective altruism.

Personally, I'd describe the core as: donate regularly, and donate with your brain, not your heart - consider the ROI, and reflect on how personal income is otherwise being spent to decide a recurring donation amount.

At the same time, imo there's no need to go 100% utilitarian and overoptimize, overthink, or blindly trust someone else's QALY calculations... Perfection is the enemy of good enough.

I read one of the EA books, did my own research over a few days to pick my causes, and now just do a yearly review after every tax season. I'm happy with it. Never even read a tweet from William or the fancy organization.

Some example lines of thought:

* kids with cancer feels very bad/sad, but still maybe rethink donating to the organization with $7B in the bank and $2B in yearly income that just saves the ~8,000 easiest cases (St Jude; see the quick arithmetic after this list)...

* perhaps consider non-charitable donations to political organizations, given the impact of government policies.

* When donating to research, consider the time discount and uncertainty. The research might save more in the future if it pans out, but it might not and you might be dead before any gains. Some people can be saved today.
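
To make the St Jude point concrete, dividing the figures above (my numbers as stated in the list, not independently verified) gives an implied cost per case:

    yearly_income = 2_000_000_000  # ~$2B in yearly income
    cases_treated = 8_000          # ~8,000 cases treated per year

    print(yearly_income / cases_treated)  # -> 250000.0, i.e. ~$250k per case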


> And perhaps consider non-charitable donations to political organizations, given the impact of government policies.

This part gets wayyyyy underplayed in EA [0]. So many charities and nonprofits exist to resolve problems that are the result of policy decisions, and ceding that ground by focusing entirely on which to donate to is a losing battle from the start. For example, people being unable to afford life-saving medical care shouldn't be something that religiously affiliated groups even need to exist to resolve; it's a problem with how we've structured our economy overall.

[0] edit: or not, see comment below!


It really does not. A tooon of EA discussion is about political activism, and 80,000 hours even recommends going into politics as a major high-impact career.


To say nothing of all the policy/civil servant EAs out there!


Interesting, thanks for the correction!


People also significantly overestimate the amount of money required to make an impact. If it’s not a contentious issue like abortion, a little money to send some advocates (not expensive lobbyists but nonprofit volunteers) to DC or state capitols several times to regularly press the flesh with representatives goes a long way.

People have just gotten too cynical thanks to the culture wars and partisanship.


I think a big blocker here is that it takes a lot of time and interpersonal connection to get people on board with a political plan. EA types want to be able to spend/design/research your way to a solution (I derogatively think of this as "Magic the Gathering politics"), but sometimes all that's required is to knock on 20 neighbors' doors and let them know about an upcoming vote they might be interested in, and having a strong enough relationship with them that they might listen. Someone like SBF does not seem like the type to have an authentic, positive connection with the people living in physical space near him.


>Someone like SBF does not seem like the type to have an authentic, positive connection with the people living in physical space near him

Conversely, if you strip away all the FTX crypto hate and eat-the-rich sentiment, I actually find SBF quite charming and persuasive. I loved the interview he did with Matt Levine, and thought he was a clear and effective communicator.

He probably could have gone far in politics if he hadn't gone into finance and if he wasn't ugly.


> I actually find SBF quite charming and persuasive.

I genuinely find that fascinating, because from the first I ever saw or heard him (before the FTX stuff erupted), my reaction was the exact opposite. My most charitable reaction was that he was a scammer.


Interesting. Did you see or hear him directly, or through third party reporting?

Do you have any priors that would lead you to that reaction, or were you neutral and open to what he was saying? I know a lot of people are hostile to crypto, so I could see them reacting inherently negatively to anyone in the crypto space.


I watched him in an interview. I don't remember where, exactly. It was a while ago.

As near as I can tell, I had no prejudgement about him, other than my assumption that he was a subject matter expert. My reaction to him was entirely about how he acted and spoke during the interview. It had nothing to do with the subject matter being discussed. His manner strongly resembles a personality type that I've encountered numerous times over the decades, and in most cases those people were extremely sketchy.

I think it's that he speaks and acts in a way that appears disingenuous and calculated to manipulate.


Yeah, it could simply have to do with the topic of the interview as well. My first exposure was the Matt Levine interview, which I thought was really candid and basically laid all the cards on the table, to the extent that Matt said at one point "I think of myself as like a fairly cynical person. And that was so much more cynical than how I would've described farming. You're just like, well, I'm in the Ponzi business and it's pretty good."

I didn't think he was being deceptive at all.


> thought he was a clear and effective communicator

He would not have come off that way if he hadn't severed the connection between what he was saying and the truth.

Any dinner theater actor can give a good speech, it's giving a good speech without ignoring the messiness of reality that's difficult.

If he had been a clear and effective communicator while describing using customer funds to buy houses and pay debts then he might have become president.


I don't think he said anything dishonest in the interview I mentioned. In fact he was widely criticized for the clear description of yield farming he gave.


> thought he was a clear and effective communicator

Conmen often are very effective communicators. It comes with the package.


Interesting point. I guess on one level I feel like giving a good interview isn't really indicative of someone's actual character; the kind of political movement-building I'm thinking of is a somewhat different skillset than just "presents well on screen" (though they can overlap!). With that said, presenting well on screen does help when you're running for office, so he maybe could have gone far with that, as you say. However, whether he would have actually been a good and prudent public servant once in office seems unlikely to me, given his financial misdeeds and the narcissism that seems to be coming through all the post-hoc analysis.



I think you can question the efficiency and ethics of a system where wealth accumulation and redistribution are largely controlled by individuals or private entities, instead of being managed in a more collective or systematic way that might address social issues more directly or equitably.

Should wealthy individuals that have often accumulated wealth to the detriment of others be the power that decides which causes are worthy of support, based on their personal preferences?

It seems it could be better to develop a system where resources might be distributed in a more equitable or communal manner from the outset, reducing the need for later redistribution through individualized philanthropy and ideologies like effective altruism.


> wealth accumulation and redistribution are largely controlled by individuals or private entities

Private entities don't "control redistribution" - they spend money. "Redistribution" implies a parental "you all get the same pocket money" approach, instead of "I'll spend money on things I value, and receive it based on what others value".

There is no "collective way" - even with a government, it's relatively few people (voted in, not having proven their effectiveness by selling something lots of people value) making the decisions.

Charities, where people choose to help others, are a great option. There's no need to reduce the need for them. They also aren't "redistributing".

Having said all that, the government takes almost half your cash before you get it, and taxes you when you spend it as well, and taxes imports you buy as well, and taxes the fuel needed to move goods around, and the businesses you buy from in multiple ways, and if you're paid a salary your business also pays taxes on you as you're an employee. This is a pretty "we'll take your stuff" system already. The answer may not be to give them more stuff.


My point isn't to advocate for increased taxes, but to address how we initially distribute wealth to mitigate inequality from the start. This might include revising corporate practices, improving wages, and enhancing public services, leading to a society where wealth does not concentrate so heavily in the first place.

Democratic systems do provide a communal approach to resource allocation, which tends to be more accountable than private philanthropy. It's worth pondering whether funneling donations to government budgets, which are subject to public scrutiny and democratic processes, might actually achieve greater societal benefit.

However, the crux of my argument is to create conditions where neither heavy taxation nor philanthropy are as necessary. Consider environments with competitive markets, strong worker protections, and cultural stances against excessive wealth accumulation. These tend to foster a robust middle class, minimizing the need for later redistribution.


> The answer may not be to give them more stuff.

Is the answer to let people like Musk and Gates get so much stuff that they can pretty much do whatever they want? No one elected them. If you are incredibly successful does that mean you get to make more money than you would ever need and use it to do whatever you want? At what point are collective needs more important than the freedom of the individual to keep making more money?

These are all questions that are waved away by nature of 'they earned it'. But so what? They are particularly good at one specific thing (making money, or being lucky, or both) -- does that mean they deserve the power that one gets with having effectively infinite capital in a capitalist society?

I honestly don't know, but I heavily lean towards 'no'.


Rich people don't automatically have power. Sam Bankman-Fried is about to go to jail. If we lived in the sort of world your instincts seem to have been moulded to believe we do, that wouldn't happen. Anyone can go to jail, no matter how much money they have. Power is in the state, not in money.

Now, you can buy powerful people such as politicians, because the democratic process is not very good at detecting people who shouldn't have power. But that is not the same as power.

Private enterprise, however, is generally pretty good at giving people the right amount of responsibility and reward for value created.

The big, big problem occurs when you mix power and money. That's why tens of millions died in the last century. Mao and Stalin destroyed their own countries for the greater good, put and kept in power by people who thought that, for government officials, they could turn off their critical faculties; their instincts somehow trained to believe that government was like a good parent who would handle the family finances well and love its children. Sadly for them, government is populated by flawed people, just like everything else. To limit the blast radius of this, successful societies generally say: government has power; it shouldn't have control over the allocation of money. Venezuela, its population starving, has been experiencing the horror of the state taking over its main industry.

In this case it's definitely true that perfect (or a utopian vision sold by a skilled orator) is the enemy of good.


> Rich people don't automatically have power.

Yes, they do. In the US, anyway, money is power and the more of it you have, the more power you have. That rich people don't always get their way isn't evidence contrary to that. Not all powerful people are pulling in the same direction.

> Private enterprise, however, is generally pretty good at giving people the right amount of responsibility

I am very unconvinced of this, but it may depend on what you mean by "the right amount of responsibility".


Bankman-Fried is in jail because he is a megalomaniac who thought he could talk his way out of it. He could be in Dubai right now with the other rich crypto scammers who are wanted in other countries, but he decided to go back to the USA.

He also lost all his money, so he isn't a billionaire. I would love it if you could list some actual billionaires who ever went to prison in the USA.


This is one of those "yes, and" kind of points. Talking about The Revolution doesn't put food on people's plates.


I think this is exactly what the GP means when they discuss donating to political causes. If you can donate your $100 to buy a week's worth of food for a family in need, or donate it to a political campaign that credibly promises to end hunger in your country, it's not immediately obvious which is more effective (but the latter tends to be discounted by EA types, especially early in the history of the movement).

Perhaps the most effective altruism of all is that which makes Effective Altruism (TM) redundant.


Something like, from each according to his ability, to each according to his needs?


I like to term my database setups as "to each according to his needs"-"from each according to his ability" replication.


So, communism? Or something else along those same lines? This is not a new argument.

> more equitable or communal manner

Who, exactly, are we empowering to make that decision?


> So, communism

No, that would be another instance of the same pattern: "pool all the money and put it in the hands of a select few who get to decide if, where, and how to redistribute it".

I'm talking of a system that wouldn't first accumulate maximum wealth, and then have a select few decide where to distribute it.

I'm not sure what system can best achieve this, but it does feel like it would be better to have a system designed so that money is distributed more evenly from the outset, and where distribution decisions are made more communally.


>At the same time, imo there's no need to go 100% utilitarian and over optimize.

Exactly. Paraphrasing economist Mike Munger when interviewing EA advocates: "I'm very interested in the scientific rigor of EA, but I'm still going to buy my children birthday presents."


This doesn't even look like a failure to fully optimize on a utilitarian basis once you account for the psychological damage done to people when their expectations, desires, and happiness are sacrificed for non-proximal greater goods.

People you love need your love. If you don't give it to them -- and sometimes the means of expressing affection is material, like gifts -- then they suffer. And if they suffer out of principle, and that principle is universally applied, then you've created a whole lot of tragic, self-perpetuating suffering. That does the world no good, and it's arguably worse than passively allowing the death of someone, or many someones, you'll never meet. The reason is that if you perpetuate suffering in your children or your loved ones, they have a good chance of becoming either monsters or cripples. Either way, they will be poor custodians of the future.


Indeed. I think it is valuable to think deeply about efficacy for what you have decided to contribute towards non-proximal goods. It is probably also worthwhile to think about contributing more to non-proximal goods.

The part that fails is the utilitarian philosophy itself. Most people don't actually believe utility or happiness to be fungible in practice. Similarly, most people find the natural conclusions that arise from utilitarianism to be repugnant.


Despite SBF, I am still a proponent of effective altruism overall. You just have to be obsessive about performing your own due diligence, because there aren't any market forces that compel you to do so. I think GiveWell is a good example of this.

On the other hand, the analysis that spending money on AI risk has functionally infinite value is obviously absurd. Imagine pitching to investors that your AGI startup will have a functionally infinite market cap when humans become an intergalactic species.
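
To make the absurdity concrete, here is a toy version of that expected-value arithmetic in Python (every number is invented for illustration; this is the shape of the pitch, not anyone's actual model):

    # Toy longtermist EV pitch -- all numbers are made up.
    future_lives = 1e16       # assumed future population across all time
    risk_reduction = 1e-10    # claimed cut in extinction risk per $1,000 donated
    expected_lives_saved = future_lives * risk_reduction
    print(f"Expected lives saved per $1,000: {expected_lives_saved:,.0f}")
    # -> Expected lives saved per $1,000: 1,000,000
    # Pick a big enough numerator and any cause "wins" the comparison.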


Frankly, any moral or philosophical stance that is predicated on "imagining pitching to investors" makes me sick to my stomach.


In this context, the "pitching" would be done by government officials and charities, and the "investors" would be regular voters and donors.


Have we just invented charitable giving and democracies?


The problem is that people are giving and voting based on their feelings rather than trying to understand the impact of these things. IMO, the reason corporations win over the people is that they actually bother to do this due diligence. I'm sure Pfizer made sure their lobbyists were doing a good job ensuring that the CARES Act would be favorable to them. On the other hand, housing-first advocates were cheering for wasteful projects like $1.7 million public toilets until the tabloids started pointing out how ridiculous they were.


I think this is a generalisation, but I broadly agree. I don't think anyone has solved this problem, though. You can get analytical people to do this calculus, but they would have always done that. You can't get everyone to do it. The only effective solution to this is a cult of personality, where people get a brand around doing good things, and everyone flings money their way.

Like anything large-scale that doesn't have measurable outcomes, this is highly fakeable.


I agree with you, except that GiveWell is clearly not an example of obsessively performing your own due diligence! It's an example of the very high value of having institutions that are highly trustworthy specialists in doing a specialized kind of due diligence, so that people don't need to do it themselves.

This is why it's critical to have trusted institutions doing what institutions like the FDA and CDC do, and why it's incredibly bad when they themselves, or others, undermine their credibility.


GiveWell is incredibly transparent. IMO the proper way to use GiveWell is to first decide on your own definition of good and then read their reports on the different charities. If you just want to minimize deaths, you can use GiveWell to figure out which charities do that most effectively. If you want to maximize QALYs, you'll find that GiveWell's analysis points to different causes. Of course you can adopt GiveWell's definition of good and just donate to their top charities, but the real value is all the white papers they publish, which allow you to make informed decisions about how to maximize your own definition of good.


Well, agree to disagree on "the real value". I think your way of using it is incredibly valuable, but I think the more valuable thing is the ability to trust their recommendations, because that's a lot more scalable. For instance, if they only published the reports, refusing to make any recommendations, and directing people to use them only as a resource for their own research, then their impact would be orders of magnitude lower, because far fewer people would actually do that.

But if they only published their recommendations, without all the research and transparency, then they would not have the credibility as a trustworthy specialist to delegate one's research to. Both things are important.


Can anyone please explain to me what the train of thought was from effective altruism (of which I choose to know very little) to a "bunker/shelter" in Nauru? [1]

[1] https://observer.com/2023/07/who-was-invited-sam-bankman-fri...


I had something long typed out, but it's basically their attempt to do "Foundation."[0] It's important for EA to survive the apocalypse, so that in any possible future, EA will be around to shepherd humanity to its apotheosis.

In this view, even an apocalypse is only a speedbump as long as EA survives it. They're looking toward a million years in the future where we've colonized the galaxy as trillions of machine intelligences or whatever. It's fine for 99% of humanity to be wiped out tomorrow as long as an order of Hari Seldon-mathematician-kings can keep humanity steered toward that future.

When you're humanity's best hope for transcending Earth and achieving techno-godhood, the best money you can spend is on yourself. See? Altruism.

[0] https://en.wikipedia.org/wiki/Foundation_series


I imagine Pol Pot had similar justifications for his agrarian utopia.


It's 100% the sort of thinking that can easily justify atrocities today in the service of achieving nirvana tomorrow. Very nearly everyone, even truly evil people, think of themselves as good. "You have to break a few eggs to make an omelet" can help to maintain the illusion.


Instead of bringing 'altruism' into socioeconomic decision-making, we should instead eliminate the concept of 'externalities' from the process.

If investors and corporations were forced to subtract the very real costs of 'externalities' from their bottom lines, this alone would lead to changes in behavior. E.g. fossil fuel externalities include widespread air and water pollution as well as the steady planetary warming trend. Ensuring those costs were borne by both fossil fuel producers and users would encourage a transition to renewable energy. Yes, there are externalities involved in PV panel and wind turbine and battery manufacturing, but these costs are much, much lower than those associated with fossil fuel production and use.

Effective altruism was never much more than a fig leaf for an economic system that has tended to privatize the profits while socializing much of the costs via the fiction of externalities.


> Ord had pledged to give away all his earnings above £20,000. MacAskill followed suit.

I don't even know what that means. A pledge is like a terrible version of a futures contract: an unenforceable one.

The pledger gets the benefits of PR immediately, with a vague promise to do something in the future; why do we even pay attention to this?


Because they have, in fact, given away much of their income.

Thousands of people have taken the Giving What We Can Pledge. I suppose they could all be lying but it seems unlikely. (Happy to send you my donation receipts if you like.)

Also, the Founder's Pledge (in which entrepreneurs commit a portion of their future proceeds from an exit) is legally-binding.


Why would anyone advertise the good they're doing? I become extremely suspicious in the presence of such individuals.


To put pressure on others to also do good.


SBF is, and I think always has been, a con artist; masquerading as a clumsy nerd genius and riding the EA trend was a coup de maître. But this is beside the point.

Really effective altruism should not focus on money or medicine, but on power.

Power imbalance is and always has been the main culprit behind all other kinds of inequality.

But there is an issue: unlike money/wealth, power is zero-sum, and it is much less fungible.

Charity does not work with power, quite the opposite in practice.


Sure, EA has good goals, and even has a good philosophy. Lots of cheap mosquito nets preventing more malaria and dengue fever than more expensive options has become a modern day parable for good reason.

But, SBF and his ilk fall into the old habit of almost everyone when they get power and money. “Not now, but soon.” For someone big into altruism, I’m unaware of any philanthropy he did. Oh, he invested in companies to make himself richer, but that’s not the same.

It’s the same with all the billionaires and their “giving pledges”. “Not now, but soon.” And then of course they set of a foundation that pays their kids seven figures a year forever while occasionally throwing a ball for boneitis or naming something after themselves.

Best of all is when the altruism reveals them to be just engaging in their own hobbies. Like Marvel’s Sauron said, “But I don’t want to cure cancer! I want to turn people into dinosaurs!” Or as Jeff Bezos put it, "The only way that I can see to deploy this much financial resource is by converting my Amazon winnings into space travel. That is basically it."

It’s just cover for personal greed.

But soon. Soon. Just be patient.


Depending on your definition of philanthropy, SBF actually did a substantial amount of it.

The problem is that he was doing it with other people's money.


Well said. And of course they still want to stay super rich.


> It’s the same with all the billionaires and their “giving pledges”. “Not now, but soon.”

Warren Buffett and Bill Gates have both given away more than $50bn each. Not pledged - actually donated.


Googling net worth.

Gates: 110 billion USD (2023)

Buffett: 112.9 billion USD (2023)

Bulgaria GDP: 84.06 billion (2023)

Not

Nearly

Enough

Billionaires should not exist.


So SBF is a con-man who realized that it's easy to scam the EA community. All you need to do is learn their language and "pledge" to give away a lot of money. Then the donations will start pouring in.

I guess it's not that different from any other in-group. If you're seen as a trusted member, others will give you the benefit of the doubt as far as they can.


I always find that those who follow this effective altruism thing are very big on abstract thought.

It is accepted that philanthropy is not 100% altruistic, let alone the tax deductions and tax-shelter schemes: as a practice it's done to virtue-signal, or to feel like we are doing something meaningful, or that we are changing people's lives.

Effective altruists find pleasure in seeing the numbers turn around on a big scale and in being celebrated as philanthropists (perhaps even with the Nobel Peace Prize?). All stuff that would stimulate an abstract mind.

The opposite of an abstract mind is one in deep connection with the body and the five senses. Such a mind knows that there is no way you can hug many millions of people or see the change upon their faces, whereas if you do good now you can hug the person you are helping right now, see the change upon their face instantly, and appreciate it, as opposed to seeing them as a number 30 years down the road.


> I always find that those who follow this effective altruism thing are very big on abstract thought.

That is probably true.

> The opposite of an abstract mind is one in deep connection with the body and the five senses. Such a mind knows that there is no way you can hug many millions of people or see the change upon their faces

Ok? And it turns out that what feels right to do and what is right to do are not always the same thing. Imagine you are living in a medieval city ravaged by waterborne diseases. You can spend your time running around nursing the sick, cooking them soup, hoping that they recover. Or you can use the same energy to design a water treatment plant / sewer / aqueduct and prevent the spread of the disease in the first place.

One feels good, you are helping humans in their hour of need. Some die, but spend their last days in a bit more comfort due to your ministrations. Some recover. Hopefully more than would have recovered without your nursing.

The other is literal shit-pumping. It does not feel good at all. And yet we know that sanitary infrastructure is the key to the health of a city. You can cure at most a few hundred people per week by nursing, but with a water treatment plant you can prevent illness in millions and millions.

Will you be able to hug them and see the change upon their faces? No; they probably won't even know who you are, or that you helped them avoid getting sick.


> You can spend your time running around nursing the sick, cooking them soup, hoping that they recover. Or you can use the same energy to design a water treatment plant / sewer / aqueduct and prevent the spread of the disease in the first place.

I believe you need both the first responders and those who plan for the wider state of the emergency.

What I mean is that the so-called effective altruists are a minority who cannot extract satisfaction from the work of first responders but want the same glory and recognition. Given that such recognition doesn't come spontaneously from those being helped (they show gratitude to the first responders, not to the head of a foundation), they have to go to great lengths to create these publicity- and propaganda-fueled philosophical currents, effective altruism being one of them.

It's not something new; it's been around for a long time before effective altruism emerged, specifically in the form of prizes such as:

Key to the city for philanthropic work

Philanthropist of the year

Biggest philanthropic donation of the year

Nobel Peace Prize

etc.


You have a very dark view of effective altruists. I know a few people who describe themselves as such, and none of them is this twisted “cannot extract satisfaction from the work of first responders but want the same glory and recognition” person.

They are just regular people with analytical minds. They would like good things to happen, and they also recognise that resources are finite, so they ask how best to spend those finite resources to get the most bang for the buck.

That is basically the only commonality among them. Whatever Machiavellian machinations you ascribe to them don't seem to match my perceived reality.


> I know a few people who describe themselves as such, and none of them is this twisted “cannot extract satisfaction from the work of first responders but want the same glory and recognition”

But if you read my comment, what I said is not limited just to effective altruism, which is the new kid on the block; I have similar feelings about all the other stuff I mentioned, including the Nobel Peace Prize.

It's like sports, nobody cheers for the owner of the franchise or gets excited about meeting them or the General Manager, it's all about the players. That's the reason why certain owners and General Managers do lots of media and outreach to propagate their cult and get some love from the fanbase.


I understand what you are saying. There are rich people who have a lot of money and want to exchange some of it for "legitimacy". They don't want to be known as just the rich dude, but as the "cool" rich dude. It's a thing. It was a thing forever ago and will be a thing for the foreseeable future. I understand that.

Where I think you go wrong is that you tar and feather everyone associated with effective altruism with that image. Do you want to say that rich people washing their consciences with effective altruism is dubious? I agree with you on that. But what your words are saying is that everyone associated with effective altruism is dubious, and that is simply not true.


The delusion is simply in not realizing that logical reasoning follows a log-normal.


I never heard of effective altruism before the whole SBF trial.

Can someone fill me in / point me to somewhere with some historical context?


Wikipedia is unsurprisingly pretty good: https://en.wikipedia.org/wiki/Effective_altruism

I feel that this summary is useful as a history of the movement: https://forum.effectivealtruism.org/posts/XTBGAWAXR25atu39P/... I stopped identifying with the movement around the start of the second wave. I went from "EA is cool and great and you should read about it and donate to help save lives" to "EA has had some cool ideas but is now mostly focused on weird probability math that siphons money away from its mission of saving lives". But I digress.



This podcast by Robert Evans (of Behind the Bastards fame) covers the history and thinking of the movement pretty well.

https://omny.fm/shows/it-could-happen-here/the-effective-alt...


TFA is as good a place to start as any.


> [Effective Altruism] is self-righteous in the most literal sense. Effective altruism as distinct from what? Well, all of the rest of us, presumably—the ineffective and un-altruistic, we who either do not care about other human beings or are practicing our compassion incorrectly.

> We all tend to presume our own moral positions are the right ones, but the person who brands themselves an Effective Altruist goes so far as to adopt “being better than other people” as an identity.

Those are the opening paragraphs from a much better article on the same subject: https://www.currentaffairs.org/2022/09/defective-altruism

Where this article meekly claims that "utilitarian calculations can be co-opted to justify extremely weird and potentially harmful positions", Nathan J. Robinson correctly skewers the very fundamentals of utilitarianism:

> Patching up utilitarianism with a bunch of moral nonnegotiables is what everyone ends up having to do unless they want to sound like a maniac, as Peter Singer does with his appalling utilitarian takes on disability. (“It’s not true that I think that disabled infants ought to be killed. I think the parents ought to have that option.”)

As Robinson observes, the EA movement’s intellectual core is so poisoned by bad philosophy as to be unsalvageable.


EA has the same philosophical core as SV at large: find everything measurable and optimize the hell out of it. The philosophy ignores that something can be valuable even if it's hard to measure.

I think the deep underlying desire is to make the world maximally comprehensible, but it all eventually becomes nightmarish entropy-maximizing humanity-destroying mechanistic systematization.


That maximally comprehensible bit is so relevant. We are limited by the modes with which we communicate, each of which trades some amount of expressiveness for some amount of signal speed.

The unfortunate truth is that in the competition between systems, those that achieve higher speeds frequently win in the battle to spread the signal. The executive report that reduces the business to a set of metrics one page long, the tweet that gets shared millions of times, the quippy campaign promise.

I've been ruminating on this problem for a long time, and I don't have any answers that I'm satisfied with. The closest I've come to identifying what to do is this: if we manage to somehow reduce the global liquidity of communication and reintroduce friction into how ideas spread, perhaps the harder ideas will become more competitive relative to the fast and easy. Put more simply, if everything is made a little harder, it would benefit the stuff that's already hard. That said, I put "best" in scare quotes because I recognize that it's still really fraught with disastrous side effects, potentially impossible, and leagues away from actionable.

The more I see, the more I understand Hanson's "Great Filter" argument. It all feels pretty bleak.


> Effective altruism as distinct from what? Well, all of the rest of us, presumably—the ineffective and un-altruistic

That's exactly right. All those ineffective doctors risking their lives being parachuted into the jungle or working in war zones, they're losers! And the bankers in their private jets, those are the altruists!

EA is a thought experiment in putting morals upside-down. It's ridiculous; what's fascinating is that it almost worked.

We have to thank SBF for having shown the world what EA actually is. A sinister joke.


I mean, when the concept of "effective altruism" first came onto my radar, when GiveWell came onto the scene, it was very clear what it meant to be distinct from: charities that use barely any of the funds they raise on their core charitable work. Those charities were "altruistic", but also ineffective, so the idea was, let's keep the altruism, but improve the effectiveness.

This was then and still is a very good idea! But then it's like that good but humble mission wasn't enough for a lot of the people interested in this, and they (in my view) sort of disappeared into a black hole of inscrutable philosophizing.


SBF also, very publicly, has a very strange set of ideals. He believed that almost any amount of risk is good as long as he could convince himself it was somehow “morally net positive”.


> Effective altruism as distinct from what?

At its most prosaic, effective altruism as in 20M mosquito nets instead of 2,000 organ transplants.

As the article makes clear, a strictly utilitarian approach to assessing "effective" is at severe risk of going off the rails.


Most people are ineffective altruists. I don't see how mocking EAs for saying so makes it untrue. Almost everyone wants to do good, but most don't put much thought into how to do so effectively. EA, the philosophy, is only about encouraging those people to put a bit more effort into being effective.


Imagine an article: "Our organization found a better light bulb"

interpretation 1: (as in the quoted line) Oh, you think you're better than me, and you're criticizing my light bulb? Also you guys are super weird and I am disgusted by you, [other attempts at status-lowering]

interpretation 2: hmm, let me look at the data. If that bulb is better, I might use it! If not, maybe there's a way to fix it, or maybe I don't care. No big deal either way.

But, like, who cares if some nerd philosopher writes an article? If they're wrong, the truth will come out. It's not hurting you. And if they're right, how about reading it and checking whether you agree? And we already know social manipulators, psychopaths, etc. are attracted to intense new social movements. It doesn't mean the movement is bad.


The whole point is that you can't compare charity like you can lightbulbs. You can objectively state that a lightbulb is better because it produces the exact same light with half the electricity, but you can't objectively say that a charity helping children is better than one helping disabled adults, for example. That is a subjective moral decision, and it's fine to discuss it and debate it, but not fine to say "mine is objectively more effective, just look at my quality-adjusted life years spreadsheet".

This is not to say that all charity is perfectly interchangeable - for example, you can make very strong arguments that donating to a university with a $50 billion endowment is ineffective. Again there is nothing wrong with discussing these things, you just can't make a blanket statement that your particular philosophical framework is the best one.


> The whole point is that you can't compare charity like you can lightbulbs.

Often you can. Even charities attempting to do the same thing (e.g. stop people dying of malaria) vary wildly in their 'effectiveness' - which is really just a fancy way of saying, e.g., 'how many people we prevented from dying of malaria per $ spent'.

> but not fine to say "mine is objectively more effective, just look at my quality-adjusted life years spreadsheet".

...because? QALYs are an attempt to make these comparisons possible; it's not clear what your objection here actually is.
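
For what it's worth, here is a toy sketch in Python of the kind of comparison QALYs are meant to enable (all figures invented; real GiveWell-style analyses are far more involved):

    # Toy cost-effectiveness comparison -- all figures are made up.
    charities = {
        "bed nets":      {"cost_usd": 1_000_000, "qalys_gained": 12_000},
        "surgical wing": {"cost_usd": 1_000_000, "qalys_gained": 800},
    }
    for name, c in charities.items():
        print(f"{name}: ${c['cost_usd'] / c['qalys_gained']:,.2f} per QALY")
    # -> bed nets: $83.33 per QALY
    # -> surgical wing: $1,250.00 per QALY
    # The arithmetic is objective; whether a QALY is the right unit of
    # "good" is exactly the subjective question being debated here.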


QALYs are a legitimate attempt to make these comparisons, but they are not the only method, and there is no way to determine an objectively correct method. It is not objectively true to say that a young life is more valuable than an old life. It isn't wrong, it just isn't objective. It is a subjective personal belief.

Again I am not saying that there is no such thing as effectiveness, and I agree that charities doing the exact same thing can be reasonably compared.


I feel your sincerity, but I don't know why you're so sure of that. I think it is pretty easy to construct situations where one course of action is better than another: for example, helping more innocent people vs. hardened people with no remorse. Or, within the same group, focusing disease eradication on targets that are more harmful vs. diseases that are only small inconveniences.

I think EAs might just be trying to apply that same style of reasoning to areas which aren't commonly viewed as clear yet. I.e., to say: hey, you may not be familiar with it, but here is an analysis, with its assumptions laid out, suggesting that contributions to these two conventional charities may not actually be equivalent!

You can still take it or leave it.

I don't think they're saying they know everything perfectly. Just that they think it's important to push into the murky areas.

I think the issue with blanket statements lies with the listener. If I have an idea and somewhat boldly say "hey all, I think my idea is better than anything before, and here's why", the "get mad and attack" response, even when sincere and correct, masks many insincere responses. That is, lots of people who claim to be serious analysts frequently use the "attack" response, or the "I'm so tired of this" response. Sometimes that is justified, but it's also a crutch and tends to harden battle lines.

Personally I'd just figure out if I agree with them (and I don't) and why, and move on. I appreciate you not piling on to them, too.


And even lightbulbs have to make tradeoffs between efficiency and lifespan. Yeah, all else being equal the efficient bulb is better, but it's quite rare that all else actually is equal.

https://www.youtube.com/watch?v=zb7Bs98KmnY


The article could be interpreted as advertising if the author is perceived to have an interest. If the "effective altruism" people seem to enjoy being labeled as such (as opposed to just generic altruists/normal human beings), the articles talking about their method start sounding like advertising.

I like the utilitarian idea and the use of evidence and reason, but as an engineer I apply that anyhow to many things, so giving it a specific name for the case of "altruism" seems... strange. It never crossed my mind to write an article titled "everybody should be an engineer because then you would use more evidence and reason in your daily life".


Effective Altruists are sociopaths. Never hire them, don’t be friends with them, protect your family and wealth. Very very dangerous people. They have programmed themselves into sociopathy as a moral justification and life philosophy. Unbelievably poisonous and dangerous.


I'd personally preferentially hire and befriend effective altruists and go out of my way to avoid people who say things like this (particularly with usernames like that).


The EV calculation of hiring sociopaths is too low


What is "Effective Altruism" but a sexy re-branding of the principles and behaviours that religions and other cultural vessels have attempted to imbue in people for thousands of years now?

If your goal is truly to do good, it's really not that hard. If your goal is to feel good, that usually requires these kinds of wrappers, and with them will come the charlatans that sense opportunity in exploiting your insecurities.


This is not fair to the core idea of EA.

There are LOTS of ways to do good in the world, and sure, religion and various other cultural vessels encourage us to pursue them.

The original idea behind EA was to provide a way to choose between different ways of doing good. The fact that you wanted to and were going to "do good" was assumed.


Religions really do not focus on maximizing good at all. They are overwhelmingly Kantian, focusing more on rules that keep your community strong, whereas EA has a much larger focus on causes outside your community.



