September 20, 2024


Ten years ago, the Oxford philosopher Nick Bostrom published Superintelligence, a book that explores how superintelligent machines might be created and what the implications of such technology might be. One implication was that such a machine, if created, would be difficult to control and might even take over the world to achieve its goals (which in Bostrom’s celebrated thought experiment was making paper clips).

The book was a big seller, sparking lively debate but also a great deal of disagreement. Critics complained that it was based on a simplistic view of “intelligence”, that it overestimated the likelihood of superintelligent machines emerging soon, and that it failed to suggest plausible solutions to the problems it identified. But it had the great merit of getting people to think about a possibility that had hitherto been confined to the remote fringes of academia and science fiction.

Now, 10 years later, comes another shot at the same target. This time, however, it is not a book but a substantial (165-page) essay with the title Situational Awareness: The Decade Ahead. The author is a young German, Leopold Aschenbrenner, who now lives in San Francisco and hangs out with the more cerebral fringe of the Silicon Valley crowd. On paper he sounds a bit like a prodigy in the Sam Bankman-Fried mold – a math whiz who graduated from an elite American university in his teens, spent some time in Oxford with the Future of Humanity Institute crowd and worked for OpenAI’s “superalignment” team (now disbanded), before starting an investment firm focused on AGI (artificial general intelligence) with funding from the Collison brothers, Patrick and John, founders of Stripe – smart cookies who don’t back losers.

So Aschenbrenner is smart, but he also has skin in the game. The latter point may be relevant because the essential thrust of his mega-essay is that superintelligence is coming (with AGI as a stepping stone) and that the world is not ready for it.

The essay has five sections. The first outlines the path from GPT-4 (where we are now) to AGI (which he thinks could arrive as early as 2027). The second follows the hypothetical path from AGI to actual superintelligence. The third discusses four “challenges” that superintelligent machines will pose to the world. Section four outlines what he calls the “project” needed to manage a world equipped with (dominated by?) superintelligent machines. Section five is Aschenbrenner’s message to humanity in the form of three “principles” of “AGI realism”.

In his view of how AI will progress in the near future, Aschenbrenner is basically an optimistic determinist, in the sense that he extrapolates the recent past on the assumption that trends will continue. He cannot see an upward-sloping graph without extrapolating it. He grades LLMs (large language models) according to ability: GPT-2 was therefore at “pre-school” level; GPT-3 was a “primary school student”; GPT-4 is a “smart high school student”; and a massive increase in computing power will apparently take us by 2028 to “models as smart as PhDs or experts that can work as collaborators beside us”. En passant, why do AI boosters always see doctorates as the epitome of human perfection?

After 2028 comes the really big leap: from AGI to superintelligence. In Aschenbrenner’s universe, AI does not stop at human ability. “Hundreds of millions of AGIs could automate AI research and compress a decade of algorithmic progress into one year. We would quickly go from human-level to extremely superhuman AI systems. The power – and the danger – of superintelligence would be dramatic.”


The essay’s third section contains an exploration of what such a world might be like by focusing on four aspects of it: the unimaginable (and environmentally disastrous) computational requirements needed to manage it; the difficulties of maintaining AI lab security in such a world; the problem of aligning machines with human purposes (difficult but not impossible, Aschenbrenner believes); and the military consequences of a world of superintelligent machines.

It is only when he gets to the fourth of these topics that Aschenbrenner’s analysis really begins to fall apart at the seams. Running through his thinking, like the message in a stick of Blackpool rock, is the analogy of nuclear weapons. He regards the US as being at the stage with AI that it was after J Robert Oppenheimer’s first Trinity test in New Mexico – ahead of the USSR, but not for long. And in this metaphor, of course, China plays the role of the Soviet Empire.

Suddenly, superintelligence turns from being a problem for humanity into an urgent matter of American national security. “The US has an edge,” he writes. “We just have to keep it. And we are now solving it. Above all, we need to rapidly and radically lock down the AI labs, before we leak key AGI breakthroughs in the next 12-24 months… We need to build the compute clusters in the US, not in dictatorships that offer easy money. And yes, US AI labs have a duty to cooperate with the intelligence community and the military. America’s lead on AGI won’t ensure peace and freedom by just building the best AI girlfriend apps. It’s not pretty – but we need to build AI for US defense.”

All that is needed is a new Manhattan Project. And an AGI industrial complex.

What I’ve been reading

Despot shot
In the former Eastern Bloc, they fear a Trump presidency is an interesting piece in the New Republic about people who know a thing or two about living under tyranny.

Normandy revisited
The historian Adam Tooze’s D-Day 80 years later: World War II and the ‘Great Acceleration’ is a reflection on the commemoration of the war.

Legal impediment
Monopoly Round-Up: The Harvey Weinstein of Antitrust is Matt Stoller’s blog post about Joshua Wright, the attorney who had a devastating impact on antitrust enforcement in the US over many years.


