Why I Am Sort of an Effective Altruist
Yes to the core idea, but I'm more mixed on it as actually practiced
Note: This post is likely to be truncated in e-mail inboxes; click through if you are trying to read the whole thing. Sorry, it’s quite long and has a bunch of footnotes.
During my senior year of high school, I read a book collecting Peter Singer’s columns, and then a few of his other works. Around the same time, I also discovered the website 80000 Hours’ college and career advice. And I also took part in a science bowl tournament with extra questions about existential risk. These seemed basically unrelated to me — a drowning child in a pond has little to do with grey goo or information about journalism. But I learned that they were all part of a movement called effective altruism.
Defining exactly what effective altruism is is difficult, unless you’re willing to give up and say that it’s whatever people who identify with effective altruism decide to believe. If we don’t want to do that, Wikipedia’s definition is as good a starting place as any:
“Effective altruism (EA) is a 21st-century philosophical and social movement that advocates impartially calculating benefits and prioritizing causes to provide the greatest good.”
However, after even more engagement with effective altruism, I find myself preferring a slightly different answer, one loosely inspired by the definition put forward by effective altruist philosopher/public figurehead Will MacAskill. I’d say that effective altruism is the practice of acting on a belief in a significant moral responsibility to help others as effectively as possible, without regard to personal ties.
There are a few important components of this definition. You have to actually do things to be an effective altruist, not just say you believe in it.1 There are many reasonable ways to draw the line around who counts as “others”, with animals in particular on the edge, but this responsibility is distinct from the ones we bear to people because of direct personal ties.2 And, contrary to some people’s stereotypes, effective altruism does not mandate acceptance of utilitarianism, but rather a broader view called beneficentrism, which can be supported with arguments from many philosophical and religious traditions.
I think there’s a pretty easy argument for this view with minimal philosophical content. Intuitively, we think certain things, like pain, death, and extreme poverty, are bad. We also have an intuition that it is good to make bad situations better, and that, the more the situation improves, the better it is. This is, more or less, beneficentrism. Where effective altruism comes in is that it argues that we can, in fact, make bad things better. There are quite a number of charitable interventions that have a proven track record of significantly improving the lives of people in extreme poverty, and a major portion of effective altruism is dedicated to identifying and supporting these.3
While I generally agree with this argument, I have my quibbles with how it is put into practice. These aren’t inherent failures of effective altruism, but in practice it can overemphasize easily measurable interventions and neglect political change and on-the-ground knowledge. But those concerns aren’t why I’m not sure that I’m an effective altruist by my own definition. My doubts concern whether I actually put my views into practice enough to count as an effective altruist.
There are some common ways to operationalize effective altruism, such as donating 10% of one’s income and selecting a career based on its contributions to beneficentrist societal good.4 I didn’t select my current job to maximize impact, but I am working on a topic that I think is important, and developing useful skills for altruistic careers.5 I’ve made only insignificantly small donations as a student, but I do hope to donate more now that I have an actual job. I’m not sure I’ll make it to the 10% threshold — I also want to build up some savings — but I’m planning to try.6
Thinking about this purely philosophically, I’d consider myself an effective altruist wannabe or something of that sort. I’m not quite at the point where I actually think I’ve done sufficiently effective and altruistic deeds, but I do think I’ve set myself up well to get there in a few years.7 On the other hand, even a $100 donation is 20 bednets for people at high risk of malaria infection. If I made some kind of major commitment, like taking the Giving What We Can pledge to donate 10% of income, I think that would definitely make me an effective altruist, at least in this sense of the term.8
The elephant in the room is that there’s effective altruism as a philosophy, but there’s also effective altruism as a community. For the sake of distinction, I’ll call this community EA.9 There’s a lot to like there, but it also has some deep problems. One day, I hope I’ll feel right calling myself an effective altruist philosophically, but learning more about the community around effective altruism hasn’t quelled my worry that the idea of an EA community is meaningless or even harmful.
Despite that, there’s a good bare-bones argument for EA being beneficial. Most ideologies cluster together in some way, whether that’s through churches, protests, parties, or forums. EA has an active and well-managed forum, social media communities in widely ranging states of activity, various college groups, and supposedly even in-person circles that I know little about.
Through all this discourse, EA has coalesced into three core wings, plus a flotilla of minor views advocating for radically different causes. If you think of effective altruism as moral action towards currently living humans, there’s global health and development.10 If you think future humans, or humanity as a species, needs attention more urgently, there’s longtermism, which at this point mostly means AI risk. And if you think animals matter most, there’s animal welfare.
Personally, I lean towards global health and development, as evidenced earlier in this essay. Yet none of the other wings strike me as completely unreasonable, although the arguments for prioritizing another group over the global poor are more complex than I can really shoehorn in here.11
But I’ll say this: people are against hurting dogs, and dogs share most of their plausibly morally relevant features with pigs and other animals — even insects — that are heedlessly maltreated by the factory farming industry. On the other hand, even animal welfare advocates admit that it’s really hard to know what, if anything, animals are experiencing and whether it’s morally relevant, depending on many difficult philosophical questions.
Similarly, I’m surprised that so many people are persuaded by arguments that drop the intuition that something must be good or bad for an actual, existing person in order to be good or bad at all. But even less dramatic projections of the AI future can have major economic effects and deadly real-world consequences.
However, it’s difficult to argue that the three views form a coherent community. Trying to figure out how to help animals easily leads to conclusions as varied as supporting meat-eating and supporting global poverty. Most people linked with EA want to slow down the pace of AI developments to help government and society adjust, but some think the opposite.12 It’s akin to lumping together Democrats opposed to book bans, Republicans opposed to corporate DEI verbiage, religious sects opposed to mandatory Pledge of Allegiance recitations, and Libertarians opposed to all of the above, just because they all cite free speech somewhere in making their arguments.13
The incoherency worsens once you consider that EA is not just advocates of the three main causes, but also fans of a host of other views. A large majority of self-identified EAs are left-of-center, but there’s a loud minority who support Trump, and a consistent problem with boosting dubious race “science.”14 Claims that Musk and his acolytes came out of EA are overblown, but not entirely made up; at the same time, many EAs have actively strengthened the opposition to Trump’s disastrous USAID cuts.15
On one hand, I don’t want to identify as an effective altruist if it gives any cover to ludicrously harmful right-wing schemes that have killed tens of thousands of people already. On the other hand, I might not be much of an effective altruist, but I’m still doing a vastly better job living up to the movement’s values than Elon Musk is. Why should I let the movement be associated with him? Furthermore, I haven’t actually proven that identifying with, or engaging with, the EA community affects its political positioning.
The problems with EA don’t stop there. There are sexual harassment and abuse problems. There’s employee mistreatment, although I’m not sure it’s worse than elsewhere. There are deep entanglements with the cryptocurrency movement and all the harms that brings along. There’s a persistent pattern of creating today’s AI problems in an effort to fix yesterday’s; EA funding contributed to the rise of OpenAI and Anthropic, plausibly hurting the very cause it was supposed to help.16
This is all bad, but there’s also a tremendous amount of good that EA has already done.17 It’s hard to know exactly, but effective altruists have probably saved hundreds of thousands of people from dying of malaria. They’ve increased and improved the city of Zürich’s global development aid.18 They’ve founded charities fighting lead paint worldwide. They’ve freed millions of hens from cramped cages.19 They’ve funded Nobel Prize-winning work on proteins with important implications for flu vaccines.
So I can’t dismiss EA as fake effective altruism. It’s a community that is trying to put views similar to, but not the same as, my own into practice. When I weigh up the good, the bad, and the simply pointless, I have to conclude that some engagement with EA better prepares me to achieve my goal of becoming an effective altruist. I’ve learned a lot from reading the EA forum and from my college EA group, when there was one, as is amply evidenced by the number of links in this piece that I found there.20 I don’t see any problems with further discussions with EAs, either; I don’t agree with everything the average member believes, but we share plenty of views.
There are other scenarios where I do worry that I might be shifted towards prioritizing EA, instead of effective altruism. An EA conference could be interesting, but I worry that I’d feel pressure to agree with views I don’t actually hold, think I’m doing something useful just by participating in the community, or be convinced through sheer repetitive exposure to bad arguments. These concerns are particularly acute when it comes to the possibility of applying for jobs in EA proper; good thing they’re notably hard to get! And calling myself an EA in certain contexts might put me in the position of defending a community I only partially believe in.
But I can’t will into existence my ideal effective altruist community. I can only make do with the current community formed around the perspective of effective altruism, which is useful in spite of its many flaws.
So the answer to whether I am an effective altruist, and whether you should be one, is disappointingly contextual. Yes by some definitions, no by others; yes in some scenarios, no in others. The term EA-adjacent comes in for a lot of deserved mockery. But it fits, more or less: I want to keep myself adjacent to EA, and I hope to become more than adjacent to effective altruism one of these days.
Arguably, there should be a proviso that you can make a sincere attempt to do things, even if it doesn’t work out because of youth or poverty or whatever else. I don’t think including or excluding this should affect any of the conclusions in the rest of this piece.
My impression is that almost every self-identified effective altruist acts as if they bear responsibilities due to personal ties, but some of them think that ethics does not specifically tell us to act that way. Some also wouldn’t like the word “responsibilities”, but that’s getting too deep into philosophy for my layman’s perspective.
And also effective charities supporting animals, although those tend to get more into the philosophical weeds. Note that some of the links are to charities that themselves have a good track record, and others to charities whose general approach has been demonstrated to be effective.
There’s also volunteering and part-time work, but impactful positions there do not appear to be particularly common.
My job up until about a month ago, which in large part consisted of selling people bread, is much more clearly non-altruistic. I’m being intentionally vague about this current job both to keep my work and personal lives separated and because I started it very recently.
Well, I also need to figure out where to donate to given my beliefs, which is a much harder question than what my beliefs are.
No, this isn’t inherent in being young. At least one of the charities founded by the major effective altruist organization Charity Entrepreneurship was founded by an undergraduate student.
There’s also probably some good in writing for the EA forum, but it’s too fun for it to count as a real moral commitment. In some ways I feel the same about economics research — I haven’t found anything particularly useful yet, but even if I did, I enjoy enough of the research process that it can’t prove my altruistic motivations.
As far as I know, this is something I have made up, although use of “EA” as the short form of effective altruism is anecdotally correlated with more focus on the community.
Whether for purely philosophical considerations, i.e. rejecting philosophical longtermism, or for practical reasons.
This is true of a lot of this piece; effective altruism is a notably complicated subject, and a piece of this length is nowhere near enough to do it justice. Luckily, there are plenty of other people willing to write long articles about anything related to cause prioritization.
Mechanize came to exist through EA funding indirectly, but I’m not sure if they would actually call themselves EAs.
This version of the analogy is my own, but I vaguely remember seeing similar analogies somewhere I can’t find.
And not just political views; strong suffering focus, views on “negative lives”, and moral anti-realism can also cause EAs to have a totally different idea of what effective altruism means in practice than I have.
You’re welcome to form your own views about how sincere the Trump supporters are in their various ideological positions.
This is a common claim but I can’t find a canonical argument that it is true. Somebody who prioritizes AI very strongly would be well-placed to make that argument.
There’s also a bunch of EA efforts that strike me as not awful but also short of the community’s own standards, in particular in forecasting.
Notably, this one involved politics, systemic interventions, and on-the-ground knowledge, all factors EA is often bad at. I’m not very confident in this, but maybe that’s why EA seems to rarely discuss it and, as far as I can figure out, has never tried to replicate it?
This one in particular was a collaboration with many people who don’t identify with effective altruism.
Yes, I’ve learned a lot even from people with whom I profoundly disagree on other EA-related issues.