
By Calum Carmichael.

(The full series is downloadable as a PDF: What Can the Philanthropic Sector Take from the Downfall of Samuel Bankman-Fried and His Ties to Effective Altruism, a five-part series by Calum Carmichael, 2023.)

Part 4: Questioning the analytical methods of Effective Altruism

Introduction

Late in 2022, the bankruptcy of FTX International and the criminal charges brought against the crypto entrepreneur Samuel Bankman-Fried (SBF) re-focused and intensified existing criticisms and suspicions of Effective Altruism (EA) — the approach to philanthropy with which he was closely associated. Part 1 of this series placed those criticisms under seven points: two each for the philosophical foundations and analytical methods of EA, and three for its ultimate effects. Part 2 described EA: its origins, ethos, analytical methods, priorities and evolution. Part 3 focused on the two criticisms and their rejoinders that apply to its philosophical foundations, and part 5 will do the same for the three criticisms that apply to its ultimate effects. Here in part 4, I focus on the two criticisms that apply to the analytical methods. Of those, I pay particular attention to the first, given that it introduces material picked up by the second. Before discussing each criticism, I provide several references made to it following the downfall of SBF.

Throughout, my goal isn't simply to present contending views on the foundations, methods and effects of EA, but to derive from them implications and questions for the philanthropic sector as a whole — so that regardless of our different connections to the sector, we can each take away and possibly apply something from the downfall of SBF and his association with EA.

Criticism #3: By relying on impartial reason to identify the philanthropic interventions that will do the most good, EA idealizes a methodology that quantifies and compares the value and probabilities of alternative and highly speculative outcomes — thereby mistaking mathematical precision for truth and ignoring important qualities of human life and flourishing that are not readily quantified.

“Effective altruism, perhaps because it comes out of the hothouse of the Oxford philosophy department, is a bit too taken with thought experiments and toy models of the future. Bankman-Fried was of that ilk, famously saying that he would repeatedly play a double-or-nothing game with the earth's entire population at 51-to-49 odds.” — December 2022

“The question of how to do good cannot be divorced from questions of what is just and where does power reside. This is a matter of morality: people concerned with doing good should be thinking about themselves not just as individual investors but as citizen-participants of systems that distribute suffering in the world unequally for reasons that are not natural but largely man-made.” — December 2022

As described in part 2 of this series, the analytical methods of EA rely on frameworks that distinguish and rank alternative causes and interventions. Both frameworks emphasize the quantification of outcomes and their probabilities. The first line of criticism against the analytical methods focuses on how the emphasis on quantification introduces types of methodological blindness that could variously sideline certain matters relevant to well-being, accommodate subjectivity (particularly in risk assessment), or downplay the uncertainties and debates around “doing the most good” — the stated objective of EA.

Using quantitative metrics to compare the cost-effectiveness of alternative causes and interventions automatically favours projects where data can be collected and causality tested: hence, matters of health in controlled environments get attention, whereas matters such as justice or self-determination are overlooked, as noted in part 2. Even for matters of health, preliminary studies based on, say, randomized controlled trials provide only limited evidence. Their results are specific to the scale and context of the trials, and don't readily generalize and transfer to other contexts. Moreover, the results don't capture the experiments' long-term effects, which could counteract any positive ones observed early on.

Samuel Bankman-Fried in 2022


As one critic argues, “[t]rying to put numbers on everything causes information loss and triggers anchoring and certainty biases…. Thinking in numbers, especially when those numbers are subjective ‘rough estimates’, allows one to justify anything comparatively easily, and can lead to wasteful and immoral decisions.” Expected value calculations are particularly prone to this by accommodating personal levels of risk tolerance as well as value judgments about outcomes and their probabilities. Hence, they could skew decisions if upsides are emphasized and downsides disregarded — something all the more likely in the hands of someone like SBF, who embraced extreme risk:

“[T]he way I saw it was like, ‘Let's maximize EV: whatever is the highest net expected value thing is what we should do’…. I think there are really compelling reasons to think that the ‘optimal strategy’ to follow is one that probably fails — but if it doesn't fail, it's great. But as a community, what that would imply is this weird thing where you almost celebrate cases where someone completely craps out — where things end up nowhere close to what they could have been — because that's what the majority of well-played strategies should end with.”

Indeed, leaders of the EA community could claim similar cover for their decision to tie their movement to SBF in the first place: someone known as “an aggressive businessman in a lawless industry”.

The focus on quantification contributes to a methodology susceptible not only to narrow and reckless decisions, but also to decisions that are misdirected or conflictual because of the confusion around what “doing the most good” actually entails. On that score, EA faces a dilemma. If it remains exacting in how to define and measure “the most good,” then it increases the chances of repelling most donors and simply being wrong. Alternatively, if it offers greater latitude — say, encouraging donors “to be more effective when we try to help others” or to “maximize the good you want to see in the world” — then it becomes so broad as to be toothless. “I mean, who, precisely, doesn't want to do good? Who can say no to identifying cost-effective charities? And with this general agreeableness comes a toothlessness, transforming effective altruism into merely a successful means by which to tithe secular rich people….”

Holden Karnofsky

Holden Karnofsky, a thought leader in the EA community, voiced mild concerns that utilitarianism could weaken the trustworthiness of effective altruists.

Even within the EA community there are thought leaders — Holden Karnofsky being one, Toby Ord another — who have reservations about ethical theories and analytical techniques that downplay the uncertainties and disputes around “the most good.” As Karnofsky explains: “EA is about maximizing how much good we do. What does that mean? None of us really knows. EA is about maximizing a property of the world that we're conceptually confused about, can't reliably define or measure, and have massive disagreements about even within EA. By default, …. I think it's a bad idea to embrace the core ideas of EA without limits or reservations; we as EAs need to constantly inject pluralism and moderation.”

Toby Ord


As Ord adds: “[E]ven if you were dead certain … it would be a problem if you are trying to work together in a community with other people who also want to do good, but have different conceptions of what that means … it is more cooperative and more robust to not go all the way.”

Rejoinders to criticism #3

There are rejoinders to the criticisms of what the analytical methods of EA sideline, accommodate or downplay. With respect to their sidelining broader conditions like justice or freedom: as noted in part 2, EA supports such things indirectly where there are ties to measurable indicators — say, countering inequality by alleviating the effects of poverty, or promoting justice through criminal justice reform. That said, its methods steer clear of initiatives that wouldn't improve well-being on terms and at levels greater than the alternatives at hand. This is a strength, not a weakness. Although not perfect, EA's methods help us to “sift through the detritus and decide what moral quandaries deserve our attention. Its answers won't always be right, and they will always be contestable. But even asking the questions EA asks — How many people does this affect? Is it at least millions if not billions? Is this a life-or-death matter? A wealth or destitution matter? How far can a dollar actually go in solving this problem? — is to take many steps beyond where most of our moral discourse goes.” By pushing that discourse further, EA provides a service to the philanthropic sector by “forcing us all to rethink what philanthropy should be.”

In an article in “Town & Country” magazine, Mary Childs writes that Effective Altruism picked up acolytes like Facebook billionaire Dustin Moskovitz, LinkedIn co-founder Reid Hoffman, and Twitter's Elon Musk: “To put it in pop culture terms: if Will Sharpe's character on ‘The White Lotus’ existed in real life, he would be an effective altruist.”

For donors: by what do you gauge the extent to which your contribution improves the lives of others? Are those gauges “presentable, articulable, reproducible”? For charitable organizations: what, if anything, makes you more deserving of donations than other organizations? How can that be demonstrated apart from storytelling, image promoting and heart-string tugging? To be sure, in recent years many in the charitable sector have pushed toward greater consistency in measuring impact. But this is usually only within a cause or at an organizational level. EA insists that “people think about how we decide on the causes themselves…. That type of thinking about charitable giving is becoming more public, and that's something an effective altruist can take some credit for.” But taking credit doesn't necessarily mean receiving thanks. Indeed, some of the criticism toward EA's analytical methods may simply come from those whom those methods have offended or put on the defensive: say, “donors that respond to causes that move them, regardless of their cost-effectiveness” or “activists committed to the cause of social justice” who feel offended by being asked or expected “to demonstrate that their work is effective.” Those on the defensive may also include the causes and organizations that EA leaders have held up as examples of ineffectiveness or excess, such as appeals for widely-reported disasters.

With respect to tolerating the value judgments that could skew decisions: EA is at least relatively transparent about what lies behind its decisions, thereby allowing others to question or challenge them and their associated risks. At the end of the day, however, judgements — ideally, defensible ones — need to be made. Sure enough, EA's leaders would agree there are instances where it's worth risking failure and perhaps ending up with nothing. But in taking that position, they're being consistent with appeals made to the philanthropic sector as a whole by those who see greater risk-taking as necessary if the sector is to learn and make greater change. As for SBF's bravado over extreme risk-taking, it would be unfair and inaccurate to attribute such recklessness to EA more broadly: “if SBF went to [William] MacAskill, or any of his largesse's other beneficiaries, and asked, ‘Do you think I should make incredibly risky financial bets over and over again until I'm liquidated or become a trillionaire?,’ they would have said, ‘No, please do not bankrupt our institutions.’”

And finally, with respect to downplaying the uncertainties and debates around “the most good”: in fact EA recognizes and responds to such things. Admittedly, the focus remains on the needs of the beneficiaries and the cost effectiveness of alternative interventions to address them. But within those confines, the original EA organization Giving What We Can offers a donation platform that allows donors to support the high-impact causes and organizations that best correspond to their individual views on what constitutes the most good — whether these involve, for example, improving human well-being or animal welfare, alleviating climate change and its effects, or averting catastrophic global risks in the future. And the EA foundation Open Philanthropy applies what it calls “worldview diversification”: where “worldview” refers to “a set of highly debatable (and perhaps impossible to evaluate) beliefs that favor a certain kind of giving”; and “diversification” means “putting significant resources behind each worldview that we find highly plausible”. Parentheses and italics are original. In other words, the foundation deliberately puts its eggs in multiple baskets — both to avoid rapidly diminishing returns from supporting only one or a few causes, and to avoid estranging segments of the EA community that favour different worldviews.

Criticism #3 as it applies to longtermism

Critics see extreme forms of methodological blindness affecting if not motivating EA's pursuit of so-called longtermist causes and interventions — ones that, as described in part 2, seek to reduce the “existential risk” posed by events or developments that “would either annihilate Earth-originating intelligent life or permanently and drastically curtail its potential.” Such risks include nuclear war and climate change, but now feature pandemics, whether natural or bio-engineered, as well as malicious artificial super intelligence (ASI) — a not-yet-realized state of AI that exceeds on all fronts the level of intelligence of which humans are capable.

For longtermist causes, the particular forms of methodological blindness emerge from the three-fold premise that: “Future people count. There could be a lot of them. And we can make their lives better.” First, consider the implications of “future people count.” Longtermists envisage people in the future as being both human and digital. In keeping with utilitarianism, they seek to increase the total well-being of future populations — a perspective known as the “total view”. Toward that end, they favour not simply protecting those populations but increasing them, as long as the average well-being per person, whether human or digital, doesn't fall so quickly as to reduce the total. For that reason, one longtermist argues that “[i]f future civilization will be good enough, then we should not merely try to avoid near term extinction. We should also hope that future civilization will be big…. The practical upshot of this is a moral case for space settlement.” Thus, from a longtermist perspective, a population of 10 billion where each member flourishes with a high individual well-being of 100 is only half as well off as a population of 1,000 billion where each member barely survives with a low individual well-being of 2. Heavily populated dystopias that are “good enough” are better than less populated utopias: a ranking known as the “repugnant conclusion” in population ethics, but accepted as a matter of logic by longtermists.
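The comparison above is just multiplication under the total view. A minimal sketch, assuming (as the total view holds) that a population's value is simply the sum of each member's well-being:

```python
# Under the "total view", a population's value = number of persons x average well-being.
def total_wellbeing(population: float, avg_wellbeing: float) -> float:
    return population * avg_wellbeing

flourishing_world = total_wellbeing(10e9, 100)       # 10 billion people at well-being 100
barely_surviving_world = total_wellbeing(1000e9, 2)  # 1,000 billion people at well-being 2

# The flourishing world totals 1e12, the barely-surviving one 2e12:
# on this arithmetic the first counts as only half as good as the second.
print(flourishing_world, barely_surviving_world, flourishing_world / barely_surviving_world)
```

Nothing in the arithmetic weighs how the well-being is distributed, which is exactly what the “repugnant conclusion” objection targets.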

Nick Bostrom


Now consider “there could be a lot of them.” Bostrom — introduced in part 2 as the founding and current Director of the Future of Humanity Institute — provides a range of estimates. For biological “neuronal wetware” humans, his projections for future life years range from 10^16 if biological humans remain Earth-bound, up to 10^34 if they colonize the “accessible universe”. Alternatively, if one assumes that such colonization takes place and that future minds can be “implemented in computational hardware”, then there could be at least 10^52 additional life years, including ones where “human whole brain emulations … live rich and happy lives while interacting with one another in virtual environments”. Bostrom, as well as other longtermists, assumes that such digital minds will have “at least comparable moral status [as] we may have.”

Finally, consider “we can make their lives better” and the acceptable cost of doing this. Turning again to Bostrom and his projections: applying his lowest estimate of 10^16 biological human life years left to come on Earth, he concludes that “the expected value of reducing existential risk by a mere one millionth of one percentage point is at least a hundred times the value of a million human lives” today. Alternatively, assigning “the more technologically comprehensive estimate of 10^52 human brain-emulation subjective life-years” a mere 1% chance of being correct, he concludes that “the expected value of reducing existential risk by a mere one billionth of one billionth of one percentage point is worth a hundred billion times as much as a billion human lives” today. Italics are original.
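Bostrom's first calculation is straightforward to reproduce. A sketch using only his most conservative, Earth-bound estimate (and following the quotation's loose equivalence of life years and lives):

```python
# Bostrom's conservative estimate: 1e16 future life years left to come on Earth.
future_life_years = 1e16

# "One millionth of one percentage point" = 1e-6 of 1%.
risk_reduction = 1e-6 * 0.01  # = 1e-8

# Expected value of the risk reduction, in life years.
expected_gain = risk_reduction * future_life_years
print(expected_gain)  # ~1e8: a hundred times the value of a million lives
```

The entire conclusion rides on multiplying a tiny, unmeasurable probability by an enormous, speculative population estimate — the pattern the critics single out.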

Nick Bostrom, Elon Musk, Nate Soares, and Stuart Russell talking about AI and existential risk. Photo taken at the Effective Altruism Global conference, Mountain View, CA, in August 2015.


Thus — by assuming that future populations will be and should be massive, by widening the notion of what constitutes a person in the future, by counting people who might exist in the future equally with people who definitely exist at present, and by seeing no fundamental moral difference between saving actual people today and bringing new people into existence — longtermists argue that the loss of present lives is an acceptable cost of increasing by even a minuscule amount the probability of protecting future lives.

Proponents provide a specific example of this trade-off. Working with a projection of only 10^14 future life years at stake, and acknowledging that “[t]here is no hard quantitative evidence to guide cost-effectiveness estimates for AI safety”, they nevertheless propose that “$1 billion of carefully targeted spending [on ASI safety] would suffice to avoid catastrophic outcomes in (at the very least) 1% of the scenarios where they would otherwise occur…. That would mean that every $100 spent had, on average, an impact as valuable as saving one trillion lives — far more than the near-future benefits of bed net distribution” that would prevent deaths from malaria. Parentheses are original. As they see it, such calculations “make it better in expectation … to fund AI safety rather than developing world poverty reduction.” Or as put by another longtermist, because “increasing existential safety achieves expected good on a scale many orders of magnitude greater than that of alternative contributions, we would do well to focus on this most efficient philanthropy” rather than “fritter it away on a plethora of feel-good projects of suboptimal efficacy” such as providing bed nets.


Canadian philosopher Jan Narveson, who coined the phrase “total view”, made popular the principle of “neutrality”: “We are in favour of making people happy, but neutral about making happy people.”

Such implications — and the assumptions and methods used to support them — have attracted criticism from both within the EA community and outside it. For example, how and to what extent future people should count are contested questions in population ethics. There's debate on whether digital “brain emulations” would or could be morally comparable to human or other biological forms of sentient life — and hence whether their numbers or well-being should be included in population projections. And although longtermists endorse the “total view”, others don't. Opposing it, for example, are those who endorse the principle of “neutrality” made popular by Canadian philosopher Jan Narveson (who also coined the phrase “total view”). According to that principle: “We are in favour of making people happy, but neutral about making happy people.” Neutrality can be paired with the principle of “procreation asymmetry”, whereby there's no moral imperative to bring into existence people with lives worth living (i.e., neutrality), but there is a moral imperative not to bring into existence people with lives not worth living. To be sure, these principles and their implications are themselves debated. But together, they provide a credible case for concluding that “the longtermist's mathematics rest on a mistake: extra lives don't make the world a better place, all by themselves…. We should care about making the lives of those who will exist better, or about the fate of those who will be worse off, not about increasing the number of [sufficiently] good lives there will be.”

In a recent open letter, 11,258 scientists agreed that the world's population should be stabilized and that public policies “shift from GDP growth and the pursuit of affluence toward sustaining ecosystems and improving human well-being by prioritizing basic needs and reducing inequality.”

Such a non-longtermist conclusion supports those who tie human survival not to burgeoning populations somehow maintained by space settlement, but rather to making human life on Earth sustainable. According to the open letter signed by 11,258 scientists from 153 countries, this would require that the world's population be “stabilized … and, ideally, gradually reduced” and that public policies “shift from GDP growth and the pursuit of affluence toward sustaining ecosystems and improving human well-being by prioritizing basic needs and reducing inequality.”

When it comes to the numbers of people that could exist in the future and our capacity to make their lives better, the approach longtermists use to justify their position “is always the same: let's run the numbers. And if there aren't any numbers, let's invent some.” Bostrom's projections of many trillions over the next billion years rest on assumptions about space settlement, extra-terrestrial energy sources and digital storage capacity that he gathers from a range of literatures, including science fiction. These assumptions are questionable. Moreover, the time horizon is itself questionable — given the imminent threats posed by nuclear war and climate change. As put by one observer, himself an effective altruist: “once you think like the world as we know it has a likely time horizon shorter than one thousand years, this notion of, well, what we will do in thirty thousand years … just doesn't seem very likely. The chance of it is not zero. But the whole problem … starts looking … less like a probability that should actually influence … [our] decision making.”

Radioactive smoke billows from the atomic cloud over Hiroshima, Japan, in 1945. Part of the problem with longtermists is that they’re nearly clueless about the distant future. Many of the risks the world is worried about today, including nuclear war and climate change, emerged only in the last century.

The ways by which longtermists plan to make future lives better are themselves dubious. Part of the problem lies with our “cluelessness” both about what the distant future will be like and about what differences we could possibly make to that future by our actions today. As one critic puts it: “… we cannot know what will happen 100 years into the future [let alone 30,000 years] and what would be the impact of any particular technology. Even if our actions will have drastic consequences for future generations, the dependence of the impact on our choices is likely to be chaotic and unpredictable. To put things in perspective, many of the risks we are worried about today, including nuclear war, climate change, and AI safety, only emerged in the last century or decades.”

Even when it comes to mitigating specific risks in the nearer future, it's not clear what the best actions are in terms of their feasibility and effectiveness, let alone how the philanthropy of EA can uniquely advance those actions. Consider the threat of nuclear war. As reasoned by the effective altruist quoted above: “I'm very keen on everyone doing more work in that area, but I don't think the answers are very legible … [and] I don't think effective altruism really has anything in particular to add to those debates.” With preventing malicious ASI: “I don't think it's going to work. I think the problem is if AI is powerful enough to destroy everything, it's the least safe system you have to worry about. So you might succeed with AI alignment on say 97 percent of cases … [b]ut if you failed on 3 percent of cases, that 3 percent of very evil, effective enough AIs can still reproduce and take things over….” Moreover, “artificial intelligence issues [are approached] from a national perspective, not a global perspective. So I think if you could wave a magic wand and stop all the progress of artificial intelligence …. you never have the magic wand at the global level.”

Maybe giving $1,000 to the Machine Intelligence Research Institute will reduce the probability of Artificial Intelligence killing us all by 0.00000000000000001. Or maybe it'll cut the odds by only a fraction of that. Trying to reason with such small probabilities is nonsense.

Moreover, the willingness of longtermists to sacrifice large payoffs with probabilities arbitrarily close to one in order to pursue enormously larger payoffs with probabilities arbitrarily close to zero amounts to “fanaticism” — a willingness that could possibly be justified using expected value calculations for which the numbers were reliable. But the numbers aren't: they convey “a false sense of statistical precision by slapping probability values on beliefs. But those probability values are literally just made up. Maybe giving $1,000 to the Machine Intelligence Research Institute will reduce the probability of AI killing us all by 0.00000000000000001. Or maybe it'll make it cut the odds by only 0.00000000000000000000000000000000000000000000000000000000000000001. If the latter's true, it's not a smart donation; if you multiply the odds by 10^52, you've saved an expected 0.0000000000001 lives, which is pretty miserable. But if the former's true, it's a brilliant donation, and you've saved an expected 100,000,000,000,000,000,000,000,000,000,000,000 lives.”
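The arithmetic in that passage is easy to check. A sketch, where the 10^52 figure is Bostrom's high-end projection quoted earlier and the two probability reductions are the avowedly made-up values from the quotation:

```python
future_lives = 1e52  # Bostrom's high-end projection of future (digital) lives

optimistic_reduction = 1e-17   # 0.00000000000000001
pessimistic_reduction = 1e-65  # the 65-decimal-place alternative

# Expected lives saved = reduction in extinction probability x number of future lives.
print(optimistic_reduction * future_lives)   # ~1e+35 lives: a "brilliant" donation
print(pessimistic_reduction * future_lives)  # ~1e-13 lives: a "miserable" one
```

The multiplication is trivially correct either way; the criticism is that nothing constrains which of the two inputs, 48 orders of magnitude apart, is the right one.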

Trying to reason with such small probabilities is itself futile. “Physicists know that there is no point in writing a measurement up to 3 significant digits if your measurement device has only one-digit accuracy,” writes one critic.

“Our ability to reason about events that are decades or more into the future is severely limited…. To the extent we can quantify existential risks in the far future, we can only say something like ‘extremely likely,’ ‘possible,’ or ‘can't be ruled out.’ Assigning numbers to such qualitative assessments is an exercise in futility…. [I]f you are genuinely worried about long-term risk, I suggest you spend most of your time in the present. Try to think of short-term problems whose solutions can be verified, which might advance the long-term goal…. [T]o make actual progress on solving existential risk, the topic needs to move from philosophy books and blog discussions into empirical experiments and concrete measures.”

Rejoinders to criticism #3 as it applies to longtermism

To a large extent, the replies to such criticisms are calls not to toss out the baby with the more extreme implications of the longtermist bathwater. The premise that future people count rests on the message that “the long-term future matters more than we currently give it credit,” whether in our philanthropy or public policy. That message isn't unique to EA. It ties in with the Indigenous practice of seventh-generation thinking, whereby one assesses current decisions by how they will affect persons born seven generations from now. And it underlies the 2024 UN Pact for the Future.

Effective Altruism was a harbinger of protecting humanity from natural or bio-engineered pandemics or from malevolent Artificial Super Intelligence at a time when, apart from experts in public health and computer science, such things were seen primarily as the stuff of movies, such as “The Terminator.”

Moreover, one need not “rely on the moral math [of longtermists] in order to think that human extinction is bad or that we are at a pivotal time in which technologies if left unregulated or unchanged could destroy us….” In terms of the technologies that pose such threats, EA was a harbinger of protecting humanity from natural or bio-engineered pandemics or from malevolent ASI at a time when, apart from experts in public health and computer science, such things were seen primarily as the stuff of movies (e.g., films released in 1968, 1971, 1980 and 1984) or fixations alluded to by critics as examples of EA hand wringing. However, given the global experience of COVID-19 and the recent statements and efforts concerning AI risk, the concerns publicly raised by EA now seem more prescient than far-fetched.

In terms of deciding between the use of philanthropic resources to address tangible needs in the present as opposed to hypothesized needs in the future: such decisions and their dilemmas are not unique to EA. They operate, albeit with different intensities, across the philanthropic sector — going back to at least the 18th century as noted in part 1, and continuing now, for example, in debates around so-called perpetual endowments. Indeed, versions of these debates exist both within the EA community and outside it.

Criticism #4: This methodology — bolstered by its ethical assumptions and claims of impartiality — cultivates hubris, condescension toward and dismissal of contending priorities or sources of information, and the impulse to define and control philanthropic interventions on one's own terms.

“Any honest reckoning over effective altruism now will need to recognize that the movement has been overconfident. The right antidote to that is not more math or more fancy philosophy. It's deeper intellectual humility.” — November 2022

“… there is still a strong element of elitist hubris, and technocratic fervor, in [EA's] universalistic and cocksure pronouncements…. [T]hey could benefit from integrating much more systemic humility, uncertainty, and democratic participation into their models of the world.” — January 2023

“This is a movement that [embraces] quant-focused intellectual[ism] and a distaste for people who are skeptical of suspending moral intuition and considerations of the real world…. This is a movement whose adherents … view rich people who individually donate money to the right portfolio of places as [saviors] of the world. It's almost like a professional-managerial class interpretation of Batman….” — December 2022

“Getting some of the world's richest white guys to care about the global poor? Fantastic. Convincing those same guys that they know best how to care for all of humanity? Lord help us.” — November 2022

This second line of criticism against the analytical methods of EA focuses on the overconfidence they cultivate and how that can reinforce the methodological blindness outlined above. Such hubris takes on multiple but interconnected forms, all of which contribute to a strong if informal hierarchy within EA organizations and the community in general.

In Greek myth, Sisyphus was forced to roll a boulder endlessly up a hill because of his hubris.

The hubris can be intellectual. In part, this draws from the educational pedigree of effective altruists, of whom “11.8% have attended top 10 universities, 18.4% have attended top 25 ranked universities and 38% have attended top 100 ranked universities globally.” And in part, it stems from their “[o]verly-numerical thinking [that] lends itself to homogeneity and hierarchy. This encourages undue deference and opaque/unaccountable power structures. EAs assume they are smarter/more rational than non-EAs, which allows … [them] to dismiss opposing views from outsiders even when they know far more than … [EAs] do. This generates more homogeneity, hierarchy, and insularity” from the broader academic and practitioner communities. Such insularity involves “prioritising non-peer-reviewed publications by community members…. [T]hese works commonly don't engage with areas of scholarship on the topics that they focus on, ignore work attempting to answer similar questions, nor consult with relevant experts, and in many instances use methods and/or come to conclusions that would be considered questionable within the relevant fields.”

As a consequence, EA risks becoming a movement that perpetuates an orthodoxy — one that privileges “utilitarianism, Rationalist-derived epistemics, liberal-technocratic philanthropy, Whig historiography, the prioritization frameworks [see part 2], and the Techno-Utopian Approach to existential risk.” Moreover, “contradicting orthodox positions outright gets … [one] labelled as a ‘non-value-aligned’ individual with ‘poor epistemics,’ so … [one needs] to pretend to be extremely deferential and/or stupid and ask questions in such a way that critiques are raised without actually being stated.”

Such intellectual hubris reinforces forms of donor hubris. The imagery used by EA leaders casts those who support EA causes as "the hero, the savvy consumer, and the virtuous self-improver." Such self-congratulatory images lead EA donors to respect and relate to each other, but not to the objects of their philanthropy – particularly beneficiaries and the groups representing them. There's little "effort to put the EA community in contact with activists, civil society groups, or NGOs based in poor countries," thereby cutting off the community from the insights and resources of those with lived experience, and curtailing the types of engagement that could foster local buy-in and increase the chances of change lasting beyond the immediate philanthropic interventions. By not engaging with and applying grassroots knowledge, and by forgoing the strategies such groups have taken, EA denies itself a possible means of doing more of "the most good."

And finally, the hubris and hierarchy of EA takes on managerial and governance forms.

In the words of one critic, EA "is deeply immature and myopic, … and … desperately needs to grow up. That means emulating the kinds of practices that more mature philanthropic institutions and movements have used for centuries, and becoming much more risk averse. EA needs much stronger guardrails to prevent another figure like Bankman-Fried from emerging…."

Indeed, the readiness of EA to align itself with SBF demonstrates the need for EA decision making to "be more decentralized," and points to a "lack of effective governance" that currently renders it "so top-down and so gullible." As one observer put it, "[o]ne has to wonder why so many people missed the warning signs" – particularly when, as noted earlier in this series, those signs came as explicit warnings sent by multiple parties to MacAskill and other "[l]eaders of the Effective Altruism movement … beginning in 2018 that Sam Bankman-Fried was unethical, duplicitous, and negligent in his role as CEO…." The decisions made by EA leaders to affiliate so closely with SBF underscore the need for structural reforms within EA organizations, along the lines drafted in early 2022 (see part 2) or in early 2023 by a pseudonymous critic.

In part, EA's managerial immaturity can be attributed to its rapid transition, in little over a decade, from comprising a few student-founded and student-run organizations that relied on the camaraderie and confidence of a small homogeneous group to becoming a range of diverse and well-funded philanthropic organizations that still have many of the original students at the helm (see part 2). Given that transition, EA as a movement could be subject to the limiting or destructive effects of "founder's syndrome" – a condition identified in both for-profit and nonprofit organizations, aspects of which have been noted by observers of the EA community.

Rejoinders to criticism #4

There are responses to the allegations concerning the intellectual, donor and governance hubris of EA. With respect to the intellectual forms – the ability of EA to attract smart and talented young people is to its credit, not a flaw or weakness. The criticisms of intellectual hubris could equally have been directed at EA's critics, who "typically present themselves as more thorough-going, and more principled, champions of justice for the global poor than are their effective altruist opponents." Intellectual insularity and organizational orthodoxies affect many philanthropic or mission-based organizations and movements, whether on terms that are religious, political, ethnic or methodological. Such organizations can't be all things to all people: their stands or actions have to be consistent with their identity and purpose. That said, within each there's usually room for intellectual diversity. At least that's the case for EA: witness the debates on the EA online forum. Besides, as one observer notes, "most EAs are reasonable, non-fanatical human beings, with a broad and mixed set of values like other human beings, who apply a broad sense of pluralism and moderation to much of what they do. My sense is that many EAs' writings and statements are much more one-dimensional and 'maximizy' than their actions." Italics are original.

With respect to forms of donor hubris – EA is hardly alone in cultivating these. "Many argue that the traditional models and approaches we have for philanthropy are ones which put too much emphasis on the donor's wishes and ability to choose, and give little or no recognition to the voices of recipients." More bluntly, "a lot of charitable giving is about the hubris of the donor, rather than the needs of the recipient." If anything, EA is an exception by focusing on the needs not of donors but of recipients, and by addressing only those needs over which it has competence. For GiveWell, these relate to needs that, once addressed, "empower people to make locally-driven progress on other fronts." GiveWell selects across alternative interventions by assigning quantitative "moral weights" to their good outcomes, where those weights reflect the priorities reported in a 2019 survey of persons living in extreme poverty in Kenya and Ghana. The weights, for example, place a higher value on saving lives as opposed to reducing poverty, and on averting the deaths of children under five years old as opposed to older ones. Although seeking to act on the preferences of those with lived experience, GiveWell remains wary of "letting locals drive philanthropic projects", reasoning that local elites "who least need help will be best positioned to get involved with making the key decisions".
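The weighting logic described above can be sketched as a toy calculation. Everything here – the outcome names, the weights, and the per-dollar figures – is hypothetical for illustration only, not GiveWell's actual moral weights or cost-effectiveness numbers:

```python
# Toy sketch of ranking interventions with "moral weights".
# All names and numbers are invented illustrations.

MORAL_WEIGHTS = {
    "under5_death_averted": 100.0,  # lifesaving of young children valued most
    "over5_death_averted": 80.0,
    "income_doubling": 3.0,         # poverty relief weighted below lifesaving
}

def cost_effectiveness(outcomes_per_1000_usd):
    """Weighted units of good done per $1,000 spent."""
    return sum(MORAL_WEIGHTS[o] * n for o, n in outcomes_per_1000_usd.items())

interventions = {
    "bednets": {"under5_death_averted": 0.3, "income_doubling": 1.0},  # ~33
    "cash_transfers": {"income_doubling": 9.0},                        # 27
}

ranked = sorted(interventions,
                key=lambda name: cost_effectiveness(interventions[name]),
                reverse=True)
print(ranked)  # ['bednets', 'cash_transfers']
```

The point of the sketch is simply that once weights are fixed, the ranking follows mechanically – which is both the method's appeal and, critics would say, its vulnerability: everything hinges on the weights.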

The need for reflection has been both identified by critics of EA and acknowledged by its leaders, such as William MacAskill, who said: “I had put my trust in Sam, and if he lied and misused customer funds he betrayed me, just as he betrayed his customers, his employees, his investors, & the communities he was a part of. For years, the EA community has emphasised the importance of integrity, honesty, and the respect of common-sense moral constraints.”

Finally, in terms of managerial hubris – the broad-brush criticisms, whether valid or not, are cast as if EA comprises one organization with a single founder, organizational chart or set of governance procedures. It doesn't. Instead it comprises multiple organizations: those under the auspices of the Centre for Effective Altruism (e.g., Giving What We Can and 80,000 Hours, as introduced in part 2, in addition to others), as well as a range of research organizations and foundations. Sure enough, certain individuals have longstanding and multiple ties with these organizations, and presumably have informal influence across several. Toby Ord, for example, co-founded Giving What We Can in 2009, and is a research fellow at the Future of Humanity Institute and a trustee of both 80,000 Hours and the Centre for Effective Altruism. MacAskill – described as "a co-founder of the effective altruism movement" – co-founded Giving What We Can in 2009, 80,000 Hours in 2011 and the Centre for Effective Altruism in 2012, as well as the Global Priorities Institute and the Forethought Foundation in 2017, of which he is the Director. But on the basis of these personal ties, it would be wrong to conclude that all of these diverse organizations exhibit the same management styles or governance structures, let alone that those styles and structures are somehow dysfunctional. Moreover, it would be wrong to infer such dysfunctionality from the decisions of MacAskill and others to affiliate with or tolerate SBF. As noted earlier in this series, his alleged malfeasance went unrecognized by many investors and associates. And any rumours of his bad behaviour were not evidence of criminal behaviour.

Criticism #4 as it applies to longtermism

Critics see acute forms of hubris in EA's formulation and defence of longtermist causes. They attribute the intellectual forms to the prevalence of academic philosophers among EA thought leaders (e.g., Bostrom, Greaves, MacAskill, Ord, and Singer). Philosophy is known for its "tendency to slip from sense into seeming absurdity." No doubt, "they are philosophers, which means their entire job is to test out theories and frameworks for understanding the world, and try to sort through what those theories and frameworks imply. There are professional incentives to defend surprising or counterintuitive positions, to poke at widely held pieties and components of 'common sense morality,' and to develop thought experiments that are memorable and powerful (and because of that, pretty weird)." Parentheses are original. In other words, "[t]he philosophy-based contrarian culture [of EA] means participants are incentivized to produce [what some at least would consider] 'fucking insane and bad' ideas". The types of reasoning and rhetoric that set out to be provocative may be the stuff of creative seminar discussions or fun dorm-room debates. But in their raw form, they hold little credibility beyond the inner clique of academics and those who want to be among them.

This intellectual hubris, shaping and shaped by longtermism, leads to multiple problems. First, it leads to inconsistencies if not misrepresentations in communication. In packaging their thinking and conclusions for a general audience, thought leaders deliberately soften the "more fanatical versions" in order to widen the "appeal and credibility" of longtermism. Second, intellectual insularity becomes dangerous in a "domain of high complexity and deep uncertainty, dealing with poorly-defined low-probability high-impact phenomena, sometimes covering extremely long timescales, with a huge amount of disagreement among both experts and stakeholders along theoretical, empirical, and normative lines. Ask any risk analyst, disaster researcher, foresight practitioner, or policy strategist: this is … where you maintain epistemic humility and cover all your bases" by consulting and learning from research areas that EA typically ignores (e.g., studies in "Vulnerability and Resilience, Complex Adaptive Systems, Futures and Foresight, Decision-Making under Deep Uncertainty/Robust Decision-Making, Psychology and Neuroscience, Science and Technology Studies, and the Humanities and Social Sciences in general").

Third, the "philosophers' increasing attempts to apply these kinds of thought experiments to real life – aided and abetted by the sudden burst of billions into EA, due in large part to figures like Bankman-Fried – has eroded the boundary between this kind of philosophizing and real-world decision-making…. EA made the mistake of trying to turn philosophers into the actual legislators of the future." Rephrasing this problem more formally: "[t]ying the study of a topic that fundamentally affects the whole of humanity to a niche belief system championed mainly by an unrepresentative, powerful minority of the world is undemocratic and philosophically tenuous." Or more wryly: "it does seem convenient that a group of moral philosophers and computer scientists happened to conclude that the people most likely to safeguard humanity's future are moral philosophers and computer scientists." Perhaps convenient for them, but hardly reassuring if we rely on those philosophers and computer scientists to sort out ways of safeguarding the future in accord with humanity's preferences – let alone assume they are capable of doing this.

With respect to donor hubris, critics see longtermism as catering to this directly. "As much as the effective altruist community prides itself on evidence, reason and morality, there's more than a whiff of selective rigor here. The turn to longtermism appears to be a projection of a hubris common to those in tech and finance, based on an unwarranted confidence in its adherents' ability to predict the future and shape it to their liking." Hence, it's no accident that "the areas EA focuses on most intensely (the long-term future and existential risk, and especially AI risk within that) align remarkably well with the sorts of things tech billionaires are most concerned about: longtermism is the closest thing to 'doing sci-fi in real life', existential catastrophes are one of the few ways in which wealthy people could come to harm, and AI is the threat most interesting to people who made their fortunes in computing." Parentheses are original.

Rejoinders to criticism #4 as it applies to longtermism

Not surprisingly, there are replies to the allegations of intellectual and donor hubris tied to longtermism. In terms of the intellectual forms, first note that "[i]t is appropriate for philosophers to speculate on hypothetical scenarios centuries into the future and wonder whether actions we take today could influence them." Second, "even longtermists don't wake up every morning thinking about how to reduce the chance that something terrible happens in the year 1,000,000 AD by 0.001%. Instead, many longtermists care about particular risks because they believe these risks are likely in the near-term future…." One rejoinder makes this point in arguing that the costs of protecting the future are "very small, or even nonexistent" since most of the things – disaster preparedness, climate-change mitigation, scientific research – are ones we want to do for ourselves in the near future anyway. Further, "[t]his does not mean that thinking and preparing for longer-term risks is pointless. Maintaining seed banks, monitoring asteroids, researching pathogens, designing vaccine platforms, and working toward nuclear disarmament are all essential activities that society should take. Whenever a new technology emerges, artificial intelligence included, it is crucial to consider how it can be misused or lead to unintended consequences."

Third, as noted above, one should not judge longtermism by the extreme positions found within the rhetoric or reasoning of a "philosophy-based contrarian culture." Indeed, so-called "weak longtermism" – the view that the "long-term future matters more than we're currently giving it credit for, and we should do more to help it", climate change being a case in point – may be its strongest, most persuasive and powerful form.

And fourth, by focusing on the bravado, some criticisms of longtermism verge on caricature, and most overlook signs of humility. Consider, for example, MacAskill's admission that he doesn't know the answer to the question "How much should we in the present be willing to sacrifice for future generations?" Or his confession that "[m]y No. 1 worry is: what if we're focussed on entirely the wrong things? What if we're just wrong? What if A.I. is just a distraction? … It's very, very easy to be totally mistaken."

With respect to donor hubris, billionaires from Silicon Valley – regardless of whether or where they practice philanthropy – aren't known for their modesty. Moreover, longtermist forms of donor hubris don't altogether dominate EA. Recent estimates of the funding going to "Global Health" are twice those going to "Biosecurity" and "Potential Risks of AI" combined. Further, "[m]any 'longtermists' have given generously to improve people's lives worldwide, particularly in developing countries. For example, none of the top charities of GiveWell (an organization … in which many prominent longtermists are members) focus on hypothetical future risks. Instead, they all deal with current pressing issues, including malaria, childhood vaccinations, and extreme poverty. Overall, the effective altruism movement has done much to benefit currently living people." And it still does.

What can we take from the downfall of Samuel Bankman-Fried with regard to the analytical methods of Effective Altruism?

SBF expressed great confidence in financing selective interventions to reduce the existential risk posed by pandemics and ASI, and in ranking them solely on the basis of expected value calculations, regardless of the odds. And although he supported political campaigns, he did so to promote not institutional change but rather EA and an unregulated crypto industry. Are he and the adulation he received the exceptions that prove the general rule that the analytical methods of EA are sound and unencumbered by the quantification they require, the hubris they encourage or the deflection from systemic conditions they justify? Or is he the example that demonstrates those methods are defective on those terms? Or is he neither?
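The decision rule attributed to SBF – ranking options purely by expected value, regardless of the odds – can be shown with a toy comparison. The probabilities and payoffs below are invented purely for illustration:

```python
# Toy contrast between expected-value maximization and risk aversion.
# All figures are invented for illustration.

def expected_value(p_success, payoff):
    return p_success * payoff

safe_bet = expected_value(0.99, 1_000)        # ~990: near-certain, modest
long_shot = expected_value(0.001, 2_000_000)  # ~2000: 99.9% chance of nothing

# Ranking solely by expected value picks the long shot,
# even though it almost always pays nothing.
best = max([("safe_bet", safe_bet), ("long_shot", long_shot)],
           key=lambda pair: pair[1])
print(best[0])  # long_shot
```

A risk-averse decision maker would weigh the 99.9% chance of total loss; a pure expected-value maximizer, as the calculation shows, does not.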

Regardless of how one answers such questions, the bankruptcy of FTX International and the criminal charges brought against SBF in late 2022 enlivened the existing criticisms of EA鈥檚 analytical methods. Many of us affiliated with the philanthropic sector might see the criticisms as being relevant only to EA and its approach to philanthropy and having little to do with the sector as a whole. But is that the case? Here I select six areas in which the criticisms might have implications beyond EA.

1. What data would allow the sector, your organization or you as a donor to become better at recognizing societal needs and addressing them? Do we have the skills – or even the willingness to acquire the skills – needed to interpret and apply those data?

EA has been faulted for its "quantitative culture" and "overly-numerical thinking". But could such a culture and thinking empower the philanthropic and charitable sector by strengthening its abilities to recognize societal needs and address them more effectively? Sector leaders in Canada think so – placing among their top priorities the need to acquire more data for and about the sector, and calling for the sector to "grow up" in terms of investing in the technology, the talent, and the delivery and evaluative processes that would allow it to learn from and apply those data. Rather than disparage EA's quantitative methods and talents, could we learn from and make use of them?

2. Can catering to the needs of donors – or, indeed, your own needs as a donor – impede the effectiveness of philanthropy? If so, then how can this be avoided or overcome?

Understandably, donors need to recognize their own priorities in the mission and accomplishments of the organizations to which they give. And undoubtedly, EA's track record in managing donor relations has been far from perfect. However, its starting point has been the needs of beneficiaries: first identifying the particular causes and interventions that would do the most good for them with a given amount of resources, and then recruiting the donors who wish to support this venture. As noted in part 2, those causes and interventions are ranked according to their cost effectiveness – not their donor appeal per se. Many of us work in or with charitable organizations with given missions and sets of beneficiaries. But are there ways either to manoeuvre within a mission or to amend it that would increase the cost effectiveness of the work you do? If not, are there other organizations with missions and beneficiaries that would better match your priorities?

3. What's the risk tolerance of your organization or of your charitable giving? Do you have the resources and opportunities to take greater risks – ones that could open up ways to make greater change or at least learn how to do so? If not, then what other things could enable you to make your philanthropy more effective?

As noted above, the philanthropic and charitable sector has been faulted for being too staid and risk averse, preferring safe but modest ventures that limit what the sector can achieve. How can we learn from the EA community about tolerating greater risk without falling into the recklessness evinced by SBF? What opportunities would open up if we moved in that direction? What would be the personal or organizational costs of doing so? How could those costs be overcome?

4. Are all charitable purposes equal in their potential to create social benefit where it's most needed? If so, why is that the case? If not, then what changes could allow the sector and the donations it receives to become more beneficial?

As noted, EA uses quantitative methods to rank the cost effectiveness of not only alternative interventions within a cause, but also the causes themselves, regardless of where or when the beneficiaries are located – relegating those that are less cost effective to the category of "luxury spending". A given donor might prefer giving to an opera company over a local food bank, a local food bank over the Ice Bucket Challenge, and the Ice Bucket Challenge over a campaign encouraging childhood vaccination against malaria in sub-Saharan Africa. In Canada, only the latter doesn't constitute charitable giving because the organization isn't registered here. According to GiveWell, however, only the latter doesn't constitute luxury spending based on cost effectiveness. Some jurisdictions provide higher or lower tax credits or deductions according to the causes believed to generate higher or lower social benefit. Others, although sharing Canada's common law tradition (e.g., Australia, India, Singapore), deny any tax incentives for giving to places of worship. What's your take on this?

5. How can young but maturing charitable or nonprofit organizations remain inspired by the vision and dedication of their founders but nevertheless be able to adapt, plan and decide in part by drawing upon the expertise and input of others whose views differ from or challenge those of the founders?

To endure and adapt, organizations need to become more than extensions of their founders' original vision and initiative. Such growth is not an easy or straightforward process, either for the founders whose identity may be tied to the work of the organizations, or for the organizations and their stakeholders that have become accustomed to simply trusting and following the personal decisions or priorities of the founders. Whether or not EA organizations are subject to so-called "founder's syndrome", there is still value in those organizations and others establishing decision-making procedures that are transparent and consultative, and avoiding founder or funder burnout. Have the organizations you have worked in or with been able to mature on those terms? If so, has anything been lost in the process? If not, what or who is impeding this?

6. Do the needs of future generations explicitly and regularly fit into the mission and work of your organization or the priorities that guide the directions and amounts of your charitable giving? If not, then how do you justify not taking those needs into account 鈥 beyond claiming there are already too many needs in the present day?

How should the philanthropic sector divide its resources across the needs of current and future generations? How does or should this differ from the responsibilities of government?

Conclusion

Samuel Bankman-Fried

The downfall of Samuel Bankman-Fried has renewed calls for Effective Altruism to reconsider the analytical methods it uses to rank philanthropic causes and interventions. To my mind, the criticisms around quantification, hubris and systemic conditions – and their rejoinders – raise issues and questions relevant to the philanthropic sector as a whole.

In part 4 of this series, I've summarized those criticisms and their rejoinders and have posed several related questions for those of us working in or with the sector. My intent here, as with part 3 and the forthcoming part 5, isn't to fault or exonerate EA. Instead, it's to point out that the issues on which EA is or should be reflecting – particularly after the downfall of SBF – are ones that could help more of us across the sector to reconsider and perhaps revise our own analytical methods and skills in our shared hope of being better able to benefit current and future generations.

Banner photo is courtesy of Erik Mclean.

Monday, September 11, 2023