The Effective Altruism Movement + EA Global Conference, Boston

Effective Altruism (EA) is a movement. I was at its conference in Boston last month, and needless to say, I was moved. Here’s my take.

Effective Altruism is literally what it says it is. It is doing good, as efficiently as possible. It’s optimizing philanthropy. It’s doing good, better. They would be hypocrites if they didn’t come up with an efficient name. So, for instance, if you’re an engineer by trade, and you quit your job to go to Vietnam to teach English, you’re essentially wasting your stellar engineering skill-set and not being effective. At that point, if you were an Effective Altruist, you would seriously question your decision – am I really making the most impact possible by teaching English versus, let’s say, engineering a way to build huts more efficiently?

So, how is this a movement? In my opinion, it is a movement because it urges you to break conventional thinking. Should you donate to a charity based on your emotions (I love dolphins and they are being killed)? Or should your dollar go where it makes the most impact (deworming does not sound fun, but it might be a more effective use of your money)? It is a theory, a stance, a science, a way of thinking that encourages (no, demands) that you give back in the most scientific, data-driven, efficient way possible, whether through donations or your skill-set. As a quant guy, this resonates with me wholeheartedly. Why would you not want to make the most impact possible? But if you’ve lost your little sister to cancer, it becomes a lot harder to be rational about where your donations go. And right about there, this theory gets a little extreme. It calls for “cause neutrality” and urges you to drop emotion out of the equation altogether. So, however much you are aching to prevent other little sisters around the world from developing cancer, EA tells you not to give to cancer research because it is over-funded, so every additional dollar you give to cancer has lower marginal utility. Instead, you should donate to malaria prevention. And at this point, you’re probably thinking to yourself, HELL NO.
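If you want to see the marginal-utility argument in code (because, quant guy), here’s a toy sketch in Python. The funding totals and the log-shaped impact curves below are entirely made-up numbers of my own, not anything EA or GiveWell actually publishes; the only point is that under diminishing returns, your next dollar goes further in the underfunded cause.

```python
# Toy model of "cause neutrality": where does one more dollar do the most good?
# All numbers are invented for illustration; real cost-effectiveness estimates
# come from evaluators like GiveWell, not from a two-line log curve.
import math

def impact(funding_dollars: float, scale: float) -> float:
    """Diminishing-returns impact curve: log utility, scaled per cause."""
    return scale * math.log(1 + funding_dollars)

def marginal_impact(funding_dollars: float, scale: float) -> float:
    """Extra impact bought by the next dollar, given current total funding."""
    return impact(funding_dollars + 1, scale) - impact(funding_dollars, scale)

# Hypothetical causes: (current total funding in dollars, impact scale)
causes = {
    "cancer research (heavily funded)": (5_000_000_000, 2.0),
    "malaria nets (underfunded)":       (50_000_000, 1.0),
}

for name, (funding, scale) in causes.items():
    print(f"{name}: marginal impact per extra dollar = "
          f"{marginal_impact(funding, scale):.2e}")
```

Run it, and the underfunded cause wins on the marginal dollar by roughly a factor of fifty, even though I handed the crowded cause twice the impact scale. That gap is all “cause neutrality” is asking you to notice.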

But, that’s the point of this school of thought, this science, this theory. It has to be extreme. It cannot make any compromises, any emotional exceptions. That’s how other schools of thought are as well – socialism, capitalism, democracy, communism – all these ideals, in their purest form, are extreme. And that’s okay from a theoretical perspective. But for humans, anything in extreme amounts is unhealthy. So, the most productive way forward is to take the best from all these theories, without ever treading into the extremes. Yes, I might have gone a little too far comparing EA to communism, but before you misconstrue it, that was just to make a point.

A movement must also have a conference. Enter EA Global – Boston. Not going to lie, I was as excited as a guinea pig for this one. It spanned two days at the Harvard campus, where two hundred or so eager effective altruists convened to hear what’s next and game-changing in the social impact world. Now picture this: the folks at this conference sit at the confluence of two separate MOs – on one hand, we care about social impact, and we are a passionate bunch; on the other hand, we are also quant-nerds drowning our emotions in rationality. Combine these two divergent traits, and what you get are passionate social impact enthusiasts looking for a way to quantify everything. And honestly, it is this combination that makes the atmosphere electric, if that is your cup of tea. If not, then I don’t know how you made it this far into this post. The people I met there were highly intelligent and highly motivated, people who cared about solving the world’s problems in the most rational way possible, and that is beautiful. The other thing that took me by surprise was that even though this was a social-impact-centric conference, the talks and discussions were all about game-changing technologies – AI, blockchain, genome editing, food creation. That’s the crux – everything that matters has social impact tied to it, up to the sustenance of the friggin’ human race. And it wasn’t all about how these breakthroughs are going to make the world a better place; it was also about the real dangers that come with them – the singularity, nuclear war, using data deceptively, and so on.

A girl next to me asked me what my story was, and the hero that I am, I gave her my spiel and chatted about my prospects. Then I asked her what brought her there, and pretty casually, she said, “I work in a lab where I grow meat, so we can stop animal cruelty and still enjoy a piece of steak.” Grow WHAT? She also said that she had succeeded and had tried it herself, and all I could say was, “Is it gross?” She chuckled and said it wasn’t – in fact, she trusted it more because she knew where it came from. And basically, she said we were pretty close to making real-fake meat sustainable. I was in love. These are the kinds of conversations I left with.

Most of the talks were thoroughly engaging (there were a couple I just couldn’t follow, and a couple that, umm, uh, yeah, let’s just leave it at that). The two that really stood out for me were:

  • Max Tegmark’s take on existential risk and existential hope: He gave an eye-opening perspective on how a nuclear war will likely not be an intentional act – it will just happen by mistake.

  • Vikash Mansinghka’s breakdown of AI-assisted data analysis: This one was a little personal for me because I thought I understood data; after all, that’s what I did for a living. Not quite.

It was honestly an extremely enriching experience – I was surrounded by brilliant people, and it left me wanting to read more, learn more, and grow more. If any of this resonates with you, give William MacAskill’s “Doing Good Better” a read – he does an entertaining job of explaining what effective altruism is all about. And if that hits home, go to one of these conferences.

P.S. It was eerie to actually be in the presence of a couple of people whose books I had read and strongly believe in. I did a great job turning off fanboy mode though – got to keep it cool.