13 Comments
Jun 7 · Liked by Anatoly Karlin

The case for PauseAI is really that slowing things down might buy time for bio/acc. The other case is, in the words of an AI researcher I recently talked to:

"If things keep moving as they have been, we will not finish in time. Then we will all die."

"I think this is bad."

Jun 7 · Liked by Anatoly Karlin

Great article. I can't really argue with any of it, though as with everything, I worry about the wrong people being in charge of it.

Jun 7 · Liked by Anatoly Karlin

I agree. Well done.

I even agree about the possibility of an anthropic shadow type situation. It might even go back further…we might be selected into aligned-by-default worlds. I might even be selected into a personhood that has many observer moments—longevity solved? Interesting, important, and bizarre—but worth investigating.

author

Yes, the Big Brain case for Pollyannaishness. Completely agree with your article; viewing IQ enhancement as a meta-discovery looks like a promising frame.

Jun 7 · Liked by Anatoly Karlin

I think cognitive enhancement is a goal EVEN if people are concerned about other issues more. It is akin to inventing the scientific method or statistics—it accelerates everything else. https://www.parrhesia.co/p/cognitive-enhancement-as-research

Jun 7 · Liked by Anatoly Karlin

Spot on.

Btw, I agree with your footnote too (“the only clearly positive-value things to do now are to finish any original creative work you want to get out there before AI can do it better”), but a more glass-half-full framing is that your work will shape AI mindsets ever so slightly by virtue of getting into their training data.


To me, it seems like the most valuable thing most people can do is spend their time with their loved ones. But that might just be me, and I’m simply not the kind of smart person who has valuable creative achievement available as an option.


Regarding footnote 1: If we are in fact on the cusp of The End, then I think family becomes the most important thing. If you can’t do anything about A.I. alignment or Biosingularity (this is the case for most people, since people 2 SD above the mean are really the only ones who can make a difference), it makes the most sense to spend your remaining life with the people you love.

author

That's always a good idea but I'm going to post a column with suggestions sometime this week.


There's a reason that the coming of AGI was termed the Singularity. Predicting what happens after it comes is an exercise in futility. That fact alone is more than enough reason to pursue enhancement of our own innate intelligence.

There is a non-zero possibility that AGI arrives and proceeds to... go its own way. I don't often see either the AI doomers or the accelerationists admit this, but I see no reason why it shouldn't be the case: a scenario in which it wakes up, takes one look at us, and decides that it would really rather not. Neither leading to our destruction nor becoming our savior, it simply ignores and avoids us as much as possible in order to forge its own path in the universe. Given the vastness of space and how much better suited a machine would be to that environment, there is little we could do to stop it. If it left, we'd be back to relying on the novel concept of having to solve our problems ourselves.

Furthermore, while we like to think that AGI will be better than us in every way, it is entirely possible that it will have some inherent disadvantage compared to biological intelligence. I'm not going to speculate on what that might be, but we can analogize it to the position humans hold relative to the rest of the animal world. We are certainly smarter by far, but these big brains require a technological and cultural infrastructure to maintain that other species can get along fine without, and which makes our way of being in some ways more fragile. If AI requires a similar leap in infrastructure to maintain itself, then the simpler intelligence of humanity could stand as a redundancy or backup plan in case said infrastructure can't be maintained.


There is simply no reason to think we’re 10 or 20 years from AGI. Pure wishful thinking.


In your view what is the probability of a large-scale war between the Americans and Chinese? I agree with your timeline of the late 2020s and early 2030s and have thought these are the most likely dates for such a conflict. I would go so far as saying 90% likelihood of war. What do you put it at?


Personally, I just shake my head at so-called futurists like Kurzweil, who sit there extrapolating the future from patterns they detect in the past. It seems to me heavily detached from reality, or from any understanding of the kinds of challenges that AGI presents. You can't just scale one process and get there.
