By Will Knight | 07.27.23
This week, WIRED published a feature story I've been working on for a while, about how the US military is trying to use artificial intelligence to gain an edge over its adversaries. In the article, I look at the Pentagon's attempt to adapt the way it acquires new technology for the age of AI and at how one task force with a hacker ethos is circumventing bureaucratic red tape. I also report on how quickly military AI technology is advancing as startups test drone swarms capable of overwhelming air defenses in future conflicts. Reporting on the obscure world of US defense procurement gave me a better idea of why AI is widely seen as crucial to the future of military power and why the US has a distinct though not insurmountable advantage. It became clear to me that military use of AI will not be constrained, especially when the war in Ukraine has shown that conflict between powerful nations is not a thing of the past.
ChatGPT and the Lure of AI Warfare 🚢 🤖💥
Scale AI CEO Alexandr Wang, American Enterprise Institute fellow Klon Kitchen, and DataRobot global AI ethicist Haniyeh Mahmoudian testify during a House Armed Services Subcommittee on Cyber, Information Technologies, and Innovation hearing on artificial intelligence.
The United States military is not the unrivaled force it once was, but Alexandr Wang, CEO of startup Scale AI, told a congressional committee last week that it could establish a new advantage by harnessing artificial intelligence. "We have the largest fleet of military hardware in the world," Wang told the House Armed Services Subcommittee on Cyber, Information Technologies, and Innovation. "If we can properly set up and instrument this data that's being generated ... then we can create a pretty insurmountable data advantage when it comes to military use of artificial intelligence." Wang's company has a vested interest in that vision, since it regularly works with the Pentagon processing large quantities of training data for AI projects. But there is a conviction within US military circles that increased use of AI and machine learning is virtually inevitable—and essential. I recently wrote about that growing movement and how one Pentagon unit is using off-the-shelf robotics and AI software to more efficiently surveil large swaths of the ocean in the Middle East.
|
Besides the country's unparalleled military data, Wang told the congressional hearing that the US has the advantage of being home to the world's most advanced AI chipmakers, like Nvidia, and the world's best AI expertise. "America is the place of choice for the world's most talented AI scientists," he said. Wang's interest in military AI is also worth paying attention to because Scale AI is at the forefront of another AI revolution: the development of powerful large language models and advanced chatbots like ChatGPT. No one is thinking of conscripting ChatGPT into military service just yet, although there have been a few experiments involving use of large language models in military war games. But observers see US companies' recent leaps in AI performance as another key advantage that the Pentagon might exploit. Given how quickly the technology is developing—and how problematic it still is—this raises new questions about what safeguards might be needed around military AI.
|
This jump in AI capabilities comes as some people's attitudes toward the military use of AI are changing. In 2017, Google faced a backlash for helping the US Air Force use AI to interpret aerial imagery through the Pentagon's Project Maven. But Russia's invasion of Ukraine has softened public and political attitudes toward military collaboration with tech companies and demonstrated the potential of cheap autonomous drones and of commercial AI for data analysis. Ukrainian forces are using deep learning algorithms to analyze aerial imagery and footage. The US company Palantir has said that it is providing targeting software to Ukraine. And Russia is increasingly focusing on AI for autonomous systems. Despite widespread fears about "killer robots," the technology is not yet reliable enough to be used in this way. And while reporting on the Pentagon's AI ambitions, I did not come across anyone within the Department of Defense, US forces, or AI-focused startups eager to unleash fully autonomous weapons. But greater use of AI will create a growing number of military encounters in which humans are removed or abstracted from the equation. And while some people have compared AI to nuclear weapons, the more immediate risk is less the destructive power of military AI systems than their potential to deepen the fog of war and make human errors more likely.
When I spoke to John Richardson, a retired four-star admiral who served as the US Navy's chief of naval operations between 2015 and 2018, he was convinced that AI will have an effect on military power similar to the industrial revolution and the atomic age. And he pointed out that the side that harnessed those previous revolutions best won the past two world wars. But Richardson also talked about the role of human connections in managing military interactions driven by powerful technology. While serving as Navy chief he went out of his way to get to know his counterparts in the fleets of other nations. "Every time we met or talked, we got a better sense of one another," he says. "What I really wanted to do was make sure that should something happen—some kind of miscalculation or something—I could call them up on relatively short notice. You just don't want that to be your first call." Now would be a good time for the world's military leaders to start talking to each other about the risks and limitations of AI, too.
|
|
For those not yet tired of ChatGPT mania, here's an interesting in-depth piece about OpenAI and its cofounder and CEO, Sam Altman. (The Atlantic)

TikTok isn't content with just dominating US social media—it wants to take on the US ecommerce industry with a new shopping platform too. (The Wall Street Journal)

Microsoft's business is booming, thanks to interest in its AI offerings, including those built on OpenAI's GPT-4. Google is also doing quite well, despite fears that its search engine could get disrupted by Bing and ChatGPT. (The New York Times)

Here's a new way some experts fear AI could destroy human civilization: by helping to create bioweapons. (The Washington Post)
|
Drones have become the defining weapon of the war in Ukraine, but they're not made by military contractors.
That's all for this week. For more on the war in Ukraine, check out WIRED's special series on technological ingenuity and human resilience during a deadly conflict. See you next week!
|
|
|