7 Comments
david w

i agree with many of the ideas in EA here, but i have a few critiques about the points at the end:

on the 1600s thought experiment:

sure, in the 1600s people had wrong models about many things, but that's an argument for improving our models, not for abandoning systematic and quantitative thinking about impact. the alternative of pure techno-optimism risks ignoring tractable near-term suffering for speculative long-term gains.

on the Kant/utility split: I agree neither is perfect (Kant justifies telling a murderer the truth about where someone lives, while util justifies slavery or nuclear war whenever the calculated gains outweigh the harms, and lets s-risks outweigh everything else), but your proposed rules seem incredibly arbitrary. why is donating to charity "impersonal" but lying to get money to donate isn't? both affect close and distant others. also, what metaethics (naturalism?) are these rules derived from? and the is-ought gap means there is no such thing as universal ethics/morality or naturalism.

Zach Chen

Good points. I agree with what you're saying about the 1600s thought experiment. What I was trying to highlight with that specific example is that by advancing science and tech we almost always end up finding better solutions.

EA optimizes for the local optimum, which is a pretty good approach for the 21st century.

But what I'm perhaps trying to show is that advancing tech/science might be the best way to get out of the local optimum and into the global optimum. And I agree that EA has done a good job with research and with finding the best concrete things to do in the current day.
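To make the local/global optimum metaphor concrete, here is a tiny hill-climbing sketch (purely my own illustration; the landscape, step size, and exploration rate are made-up assumptions). A greedy climber that only accepts uphill moves gets stuck on the small nearby peak; adding occasional exploratory jumps, standing in for speculative tech/science bets, lets it reach the taller one.

```python
# Toy illustration (my analogy, not from the post): a greedy hill climber
# finds only the nearest peak; occasional random jumps can find the global one.
import math
import random

def landscape(x):
    # Two peaks: a small local optimum near x = 1, the global optimum near x = 4.
    return math.exp(-(x - 1) ** 2) + 3 * math.exp(-(x - 4) ** 2)

def climb(x, steps=2000, step_size=0.05, explore=0.0):
    for _ in range(steps):
        if random.random() < explore:
            candidate = random.uniform(-5, 10)  # exploratory jump ("new tech/science")
        else:
            candidate = x + random.choice([-1, 1]) * step_size  # local tweak
        if landscape(candidate) > landscape(x):  # only accept improvements
            x = candidate
    return x

random.seed(0)
print(f"greedy only:      x = {climb(0.0):.2f}")                # stuck near 1
print(f"with exploration: x = {climb(0.0, explore=0.05):.2f}")  # reaches ~4
```

The mapping is loose, of course: careful EA-style research is the local climbing, and the occasional jump is the bet that a new capability reshapes what's reachable.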

And yes, I agree we need to solve short-term tractable issues while also making sure we address existential risks as tech gets better (like misaligned AI).

I definitely could have had more nuance in the decision rules for Kant/util, but I think this is also very hard to put concretely. With morality in general it seems difficult to know when to do what, since you can always change the situation slightly, and there's no specific point at which you ought to switch from Kant to util. That's why I said it might be up to personal interpretation, or you could even argue for determinism (biology, neuroscience), where one's empathy and upbringing determine the choice.

I think there is a clear difference, though, between donating to charity and lying to get money. No one will bash you for donating to an effective charity, but plenty of people will lose trust in you if you lie and gain that reputation. So the long-term utility changes dramatically if you lie to get money (as with SBF and the resulting effects on EA's reputation). And lying is much less impersonal, since your reputation is tied to you and directly shapes the future actions you take.

And yeah, I think morality is really gray, so I don't know if there is a universal ethics, but it looks like humanity is improving its ethics over time, and maybe that converges to something.

radha

expresses a lot of my mixed opinions on EA, great post

Zach Chen

Thanks! I have mixed feelings about EA but I generally support it. I think the vast majority of people haven't thought enough about calculation-based welfare or applied it to their decisions and careers.

It's hard because anything we try to calculate far into the future becomes almost impossible. And at that point, what heuristic do we use to create change?

At some point, too much thinking about the optimal strategy becomes a negative force, so I think the best approach is to find a heuristic and then apply the impersonal research EA has done.

radha

agreed - i think a lot of people have a desire to make a positive impact on the world but don't necessarily end up acting on it. i think we first need a commitment to pursuing that desire, and then to balance the overcalculating because, like you said, it starts to play a negative role after a certain threshold

Sophia Zhang

big fan of this post

Zach Chen

haha thanks sophia