Post-AGI promises
Promises usually assume there won't be a world-changing event.
When I was in kindergarten, I got engaged to another kid. As I got older, I never explicitly broke off the engagement, but I still consider it to be void. I’m not breaking a promise by not getting married to whoever they grew up to be.
If humanity survives human-level AI, then people will change so much that they will likely look back at their 2025 selves as I look back at my 4-year-old self.
Many contracts, promises, and pacts of today will be broken in the coming decades. At the same time, the coming economic irrelevance of humans will create a sudden rush to make promises on a scale that has never been seen before.
I. The individual scale
Many major decisions we make in life are some sort of promise. Accepting a job offer implies continued employment. Getting married often implies building a family together. While some promises are “for better or for worse”, there is still an implicit assumption that superintelligent AI will not arrive anytime soon.
Over the last few years, I’ve started to hedge when making major decisions. I received the offer for my current job a few months before I could start full-time. Instead of saying “Yes, I accept the offer!”, I said “In a normal world, I’d enthusiastically accept this offer, but things are changing so fast that I estimate there’s only around an 80% chance that I actually start this job full-time in June.” I was fortunate that the people who received that message were reasonable and patient, and didn’t find it insulting.
Similarly, in my personal relationships, I try not to make promises for what I’ll do in the post-AGI future. I wouldn’t want my 4-year-old self dictating my adult life, and I don’t want my pre-AGI self dictating my post-AGI life.
II. The national scale
On the national scale, the breaking of promises will be larger still. Governments do many things, including attempting to provide good lives for their citizens. Luckily, governments largely depend on the citizenry staying alive and productive, so their incentives are somewhat aligned with ours.
But pretty soon, humans will be a retired class, no longer able to contribute economically to the power of their respective governments. As such, governments will be less incentivized to provide good lives for them.
Maybe people will be caught off-guard by their economic disempowerment. Maybe they’ll see it coming and attempt to pre-empt their disempowerment by setting up a UBI — a promise from a government to its people that it will take care of them even when the people aren’t contributing to the economy anymore.
Such a promise would differ from any previous institution in that it would effectively be permanent: we’ll need to create an institution that can last for billions of years. Plausibly, a temporary intervention (UBI) coupled with an easy-to-continue institution (property rights) will do the trick.
III. The civilizational scale
Zooming out even further, humanity might change a lot over the coming decades. It’s likely that the majority of the intelligent world population won’t even be directly descended from humans by the end of the century.
Humanity is associated with many hopes and dreams. Making the world better. Sharing wonderful moments. Protecting our loved ones. Discovering the unknown. Will these dreams still remain when Earth-originating civilization navigates the stars?
Maybe all of it will disappear due to a misaligned superintelligence dictating the future. But even without a misaligned superintelligence, some things will be lost in the process. Maybe once the sensory experience of humans is vastly expanded, we will throw aside the pleasures of today for much more intense versions of them. A beautiful painting, a tasty meal, and a loving embrace might all be replaced with other experiences.
Some might find solace in enclaves: relatively small communities of humans dedicated to preserving the ways of the past. But simple physical constraints mean these groups will necessarily be outnumbered, and they will hold little sway over the future unless they are granted enormous voting power.
In some way, AI safety is the science of making sure future Earth-originating civilization fulfills the dreams of our current selves.
