One criticism I often see levied against bio/acc - biological acceleration - goes along the lines of, “If AI timelines are short, then what does it even matter?” We build AGI sometime in the late 2020s-early 2030s, it rapidly scales to ASI, and then everything ends - ascension, death, or simulation. Biology is “slow”. Inherently, because chemical reactions are slower than electrons; epistemically, because biological systems are extremely convoluted; and practically, because progress is stymied by overarching regulatory frameworks and bioconservative values that load on disgust reflexes that don’t extend to cold silicon. There’s some amusing chance that semaglutide turns out to be the last truly “manmade” biotech breakthrough.
The trite response to this is, why do anything at all? Despite the implied apathy bordering on nihilism, this is not an irrelevant question. The shorter AI timelines are, the more logical it becomes to lie back and enjoy the ride, with traditional considerations around family, legacy, and vocation diminishing to nothing1. However, the future isn’t programmed, and as we approach the cusp of the AGI era, there have never been more opportunities for smart, enterprising individuals to exert influence over future history.
Consider a scenario in which short timelines do not pan out. The most obvious way that could happen is that the techno-optimists are simply wrong. The current dominant informed view seems to be either that continued LLM scaling is sufficient on its own to get us to AGI by the late 2020s, or that the flood of technocapital the AI boom has unleashed will ensure that AGI is reached through complementary innovations sometime during the 2030s. This would be my own main bet. But at the end of the day, these forecasts all rest on extrapolating straight lines on a log chart. Those lines don’t have to continue all the way to the Singularity. And if we don’t wake the Balrog this decade, there’s a good chance that AGI will only happen much later, if at all2.
Alternately, it is possible that certain events transpire which push back AI timelines by a lot. The immediate candidate that comes to mind is a Taiwan War - it is a truly amazing historical contingency that the bottleneck on world manufacturing of advanced chips lies not just on a tectonic but a geopolitical faultline, and the main one running between the world’s two superpowers at that. By a further coincidence, projections of the likeliest date for a Chinese attempt to coerce Taiwan back into the fold cluster around the late 2020s to early 2030s, coinciding precisely with AI timelines3. There is a concept floating around in the rationality sphere called the “anthropic shadow”, the idea that the apparent likelihood of world-ending catastrophes is skewed by the number of future observers4. Speculatively, worlds that get snuffed out early don’t have many future observers to remember or simulate them. Consequently, to the extent that the anthropic shadow has a Chekhov’s gun, it must be Taiwan.
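To make the observer-selection intuition behind the anthropic shadow concrete, here is a toy simulation - purely illustrative, with made-up parameters rather than anything from the original argument - in which observers can only ever look back on histories that happened not to end, and so systematically under-count catastrophes:

```python
import random

# Toy anthropic-shadow simulation (illustrative assumptions only).
# Each "world" faces an annual catastrophe probability P_CAT; a catastrophe
# is observer-ending with probability P_FATAL. Observers exist only in
# surviving worlds, so the catastrophe rate they infer from their own
# recorded history is biased below the true rate.

P_CAT = 0.02       # assumed true annual probability that a catastrophe occurs
P_FATAL = 0.5      # assumed chance a catastrophe leaves no observers behind
YEARS = 200        # length of history each world's observers look back over
N_WORLDS = 50_000  # number of simulated worlds

inferred_rates = []
for _ in range(N_WORLDS):
    recorded, extinct = 0, False
    for _ in range(YEARS):
        if random.random() < P_CAT:
            if random.random() < P_FATAL:
                extinct = True   # nobody left to remember or simulate this world
                break
            recorded += 1        # a survivable near-miss makes it into the record
    if not extinct:
        inferred_rates.append(recorded / YEARS)

print(f"True annual catastrophe rate:    {P_CAT:.4f}")
print(f"Mean rate inferred by survivors: {sum(inferred_rates) / len(inferred_rates):.4f}")
```

Under these made-up numbers, survivors infer roughly half the true rate - the sense in which a quiet-looking historical record can mask a live risk.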
World War III will wreck globalization, cut the legs out from under the technocapital machine, and likely set back AGI by many years. If the war escalates into a full-scale nuclear exchange between the US and China, the latter of which is rapidly scaling up its arsenal, the delay could be on the order of two or three decades as much of both the material and human capital base underlying AI progress is destroyed. However, this is not the only avenue through which a “Big Stretch” could happen. There are AI safetyist organizations such as PauseAI that are agitating for a pause or halt to the training of advanced frontier models. They are highly unlikely to succeed under current political configurations, since both technocapital and geopolitical AI race dynamics militate against it. However, if an AI catastrophe kills a lot of people, but at a sufficiently early stage that it doesn’t acquire runaway characteristics before getting shut down, it’s easy to imagine an extremely heavy-handed response in the form of a global shutdown of further AI research, along the lines of the more hawkish proposals you see on LessWrong.
None of these disparate scenarios might be individually likely, but in aggregate, the chance that one or some combination of them comes to pass isn’t exactly “out there”.
In my article on The Biosingularity is Near, I pointed out that increasing the world’s stock of elite mindpower - apart from its intrinsic personal and social benefits - can accelerate solutions to many other existentially important problems, including AI alignment. However, ideally we also want to optimize our own biology. We want life extension, because involuntary death is a bad thing, and also because much longer lifespans would increase future time orientation; immortals would not want to Leeroy Jenkins into dangerous and unproven ASI. We want to counteract dysgenic trends in reproduction to avoid programmed idiocracy if AGI proves too hard or the path to it is blocked by global regulations. We want to solve the qualia problem because we should really want to know if AIs can ever truly be our “mind children” or if they would forever remain p-zombies, clockwork constructs with no inner worlds of awe and wonder. And I would argue that above all we want to accelerate human intelligence enhancement, because it is the one thing that can help us solve all the other problems much faster.
Nvidia is on the cusp of flipping Microsoft and Apple as the world’s largest company, and trillions of US dollars are going to be allocated to GPU farms in the next decade. There are hundreds of thousands of AI scientists, and at this point, many hundreds of alignment researchers. They are supported by a vigorous global community of activists running the gamut from e/acc to AI doomers who are actively discussing and debating the future of AI. For the record, I am agnostic towards this enterprise - although I am “woke” to the reality and likely imminence of the AI threat, I recognize that the political environment isn’t amenable to a pause; that any government regulations to this effect are likely to be ineffective, where they are not actively counter-productive; and that there is some value to more crisply establishing precisely where on the OOM scale the AGI transition begins (we probably need to know approximately where that limit is if we are ever to actually attempt a serious long-term AI control regime). Finally, it’s unclear whether the world will even use an AI pause productively5, while torpedoing AI progress too early will rob the world of a great deal of progress - especially in biotech! - that could have been accomplished in relative safety.
Consequently, so far as potential utilitarian impact goes, I think there is a case to be made that working on bioacceleration - especially nooceleration (human intelligence enhancement) - is genuinely higher “leverage” than almost anything a random smart individual can do in the AI sphere, and that’s despite biology’s inherent slowness and possibly profound irrelevance if the short timelines pan out. It’s a matter of scale. There are no more than 2,000 people in the world working on life extension, and the monetary sums are in the billions, not the trillions. In intelligence enhancement, it’s 20-40 people and a few stealth startups with funding measured in the tens of millions. Its activists/bloggers can be counted on one’s fingers6. Very few people are working on this even as many of its enabling technologies approach the cusp of maturation, with the 2020s possibly being to nooceleration what, in retrospect, the 2010s were to AI.
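To put the “matter of scale” point in rough numbers, here is a back-of-the-envelope sketch using the figures above; the AI-side headcount and the exact dollar totals are my own placeholder assumptions, not claims from the article:

```python
# Back-of-the-envelope leverage comparison using the article's rough figures.
# Values marked "assumed" are illustrative placeholders, not sourced numbers.

fields = {
    #                            (people,  funding in USD)
    "AI":                        (300_000, 1_000_000_000_000),  # assumed: "hundreds of thousands", "trillions"
    "Life extension":            (2_000,   3_000_000_000),      # assumed: "no more than 2,000", "billions"
    "Intelligence enhancement":  (30,      30_000_000),         # assumed midpoints: "20-40 people", "tens of millions"
}

for name, (people, funding) in fields.items():
    # Share of the field that one additional competent person represents.
    marginal_share = 1 / people
    print(f"{name:25s} ~{people:>7,} people, ~${funding:,.0f}; "
          f"one extra person is ~{marginal_share:.4%} of the field")
```

On these placeholder figures, a marginal person is a rounding error in AI but a few percent of the entire intelligence-enhancement field, which is the leverage argument in miniature.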
I wish the AI people the best of luck in navigating the very complex and existential issues involved in creating a wish-granting genie that doubles as a new apex predator life form. I don’t think there are truly any “baddies” in the space, because many of the arguments from both sides are legitimate7. They collectively carry a heavy responsibility for the fate of humanity and, possibly, of conscious intelligent life in the cosmos. By the same token, there is extreme untapped value in insuring against a world in which AI doesn’t happen or is substantively delayed - especially if the delay comes about in a cataclysmic manner, such as a nuclear war - but where bioacceleration fails to take off. In such a world, human intelligence enhancement remains marginal in impact and niche in demographic uptake, and is unable to positively impact life extension, the reversal of dysgenics, or all the other cool cyberpunk things, from cryonics to making Deus Ex cyborgs real, in the event that current baseline humans prove too dumb to make real progress on them. Consequently, we might view bioacceleration in general, and nooceleration in particular, as an insurance policy against AI going very awry. Such a failure would be extremely disappointing, especially in light of the relative advantages that bio/acc has over AI - namely, stronger interconnections with the decentralized cryptocurrency world and greater geographical distribution (with the pop-up city of Vitalia in Próspera, on the island of Roatán, Honduras, looking set to become something of a global hub for it).
We would be back to square one, and there are no guarantees that we would even be better positioned to solve AI alignment, if and when it becomes relevant again. So let’s try not to drop the ball on the second greatest technological story of the early 21st century. And hey - Próspera is warmer and a lot less likely to get nuked than San Francisco.
My own personal take on this, which I have been repeating on X ever since the AI timelines collapse, is that the only clearly positive-value things to do now are to finish any original creative work you want to get out there before AI can do it better and erase your chance to make a unique footprint on the lightcone, and to accumulate Ethereum as the most rigorous framework for property rights in the digital realm.
Leopold Aschenbrenner on Situational Awareness, “Racing through the OOMs: It’s this decade or bust” section.
And by extension, ASI, since millions of AGIs can produce a millennium’s worth of human AI research within a year.
As Roko Mijic pointed out, it’s not like MIRI or the FHI used the 2010s productively.
For instance, Sam Altman has become this season’s main villain. But would a villain have wasted time on Worldcoin - the most rigorous Proof of Humanity system to date, and a legitimate enabler of the global UBI that would fast become an existential necessity in the event that AGI makes human labor redundant?
One case for PauseAI is that, if we can slow AI down, there might actually be time for bio/acc. The other case is, in the words of an AI researcher I recently talked to:
"If things keep moving as they have been, we will not finish in time. Then we will all die."
"I think this is bad."