My AGI Bullet Points

I’ve spent way too much time indulging in podcasts/talks/etc. about AGI apocalypse scenarios.

Here are my high-level takeaways. I’m posting this as a benchmark to revisit in the future, to see if my views have changed.


As of April 2023, I believe:

  • Human civilization is on a path to de facto extinction independent of AGI - nuclear war, bioterror, etc.

  • Unaligned AI is an existential risk, but it’s on the same plane as those other threats.

  • Regulation is futile because AGI can be developed in private by state and non-state actors.

  • In the medium term (<20 years), labor-market impact is a much bigger concern than alignment.

  • AGI is probably a misnomer; intelligence is multifaceted.

  • Consciousness (and its molecular basis) is more interesting and consequential than AGI. The answer to how it arises - if we ever discover it - may be highly distressing.