
What the AI ‘extinction’ warning gets wrong

Tyler Cowen is a Bloomberg Opinion columnist. He is a professor of economics at George Mason University and writes for the blog Marginal Revolution.

Sometimes publicity stunts backfire. A case in point may be the one-sentence warning issued this week by the Center for AI Safety: “Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.”

The list of signatories is impressive, including many executives and software engineers at leading AI companies. Yet for all the apparent sincerity on display, this statement is more likely to generate backlash than assent.

The first problem is the word “extinction.” Whether or not you think the current trajectory of AI systems poses an extinction risk – and I do not – the more you use that term, the more likely the matter will fall under the purview of the national security establishment. And its priority is to defeat foreign adversaries. The bureaucrats who staff the more mundane regulatory agencies will be shoved aside.

U.S. national security experts are properly skeptical about the idea of an international agreement to limit AI systems, as they doubt anyone could effectively monitor and sanction China, Russia or other states (even the UAE has a potentially powerful system on the way). So the more people say that AI systems can be super-powerful, the more national-security advisers will insist that U.S. technology must always be superior. I happen to agree about the need for U.S. dominance – but note that this is an argument for accelerating AI research, not slowing it down.

A second problem with the statement is that many of the signers are important players in AI development. So a common-sense objection might go like this: If you’re so concerned, why don’t you just stop working on AI? There is a perfectly legitimate response – you want to stay involved because you fear that if you leave, someone less responsible will be put in charge – but I am under no illusions that this argument would carry the day. As they say in politics, if you are explaining, you are losing.

The geographic distribution of the signatories will also create problems. Many of the best-known signers are on the West Coast, especially California and Seattle. There is a cluster from Toronto and a few from the U.K., but the U.S. Midwest and South are hardly represented. If I were chief of staff to a member of Congress, or a political lobbyist, I would be wondering: Where are the community bankers? Where are the owners of auto dealerships? Why are so few states and House districts represented on the list?

I do not myself see the AI safety movement as a left-wing political project. But if all you knew about it was this document, you might conclude that it is. In short, the petition may be doing more to signal the weakness and narrowness of the movement than its strength.

Then there is the brevity of the statement itself. Perhaps this is a bold move, and it will help stimulate debate and generate ideas. But an alternative view is that the group could not agree on anything more. There is no accompanying white paper or set of policy recommendations. I praise the signers’ humility, but not their political instincts.

Again, consider the public as well as the political perception. If some well-known and very smart players in a given area think the world might end but make no recommendations about what to do about it, might you decide just to ignore them altogether? (“Get back to me when you’ve figured it out!”) What if a group of scientists announced that a large asteroid was headed toward Earth? I suspect they would have some very specific recommendations, on such issues as how to deflect the asteroid and prepare defenses.

Finally, the petition errs by comparing AI risk to “other societal-scale risks such as pandemics and nuclear war.” Even if you agree with this comparison, it is now commonly recognized that – even after well over 1 million deaths – the U.S. is not remotely prepared for the next pandemic. That is a huge failure, but still: Why lump your cause in with a political stinker? As for nuclear war, public fears are rising, but it is hardly a major political issue. I am not denying that it is important, just questioning whether it will have the desired galvanizing effect.

Given all this, allow me to make a prediction: Existential risk from AI will be packaged and commodified, like so many other ideas under capitalism, such as existential risk from climate change. I expect to soon start seeing slogans about the need for AI safety on T-shirts. Perhaps some of them will have been created by the AIs themselves.
