Russia’s next generation of strategic weaponry may be a bit more distant and a bit less fearsome than Vladimir Putin recently claimed. But his March 1 speech about titanic ballistic missiles and nuclear-powered undersea drones should spur American defense and technology communities to move faster — indeed, uncomfortably so — to embrace similarly disruptive ideas such as artificial intelligence and robotics.
There are myriad ways to try to understand how robotics and autonomy will change warfare; the more creative, such as theater, the better. The short story remains one of my favorite ways to work through these kinds of questions, particularly because a carefully written narrative can help check our assumptions and biases about how we want things to unfold versus how they actually might. My latest military future-fiction short story is Operation CANDLEMAKER. It follows two frontline characters present when US forces employ autonomous weapons in combat for the first time. My favorite feedback about the story so far is that “the laws of physics and Murphy prevail.” High praise indeed.
After seventeen years in the US Navy, Commander Wayne McCabe got seasick for the first time when a robot had the helm.
Technically, there was no actual metal humanoid at the controls because the 130-foot Sea Hunter-class trimaran warship was driving itself, six miles south of Jazireh-ye Larak in the Strait of Hormuz. McCabe ground his teeth as he fought the urge to throw up yet again and wondered what he was really doing aboard the USS Nantucket. He adjusted the five-point harness on the captain’s chair by feel and looked at the spot on the console in front of him where the ship’s chief engineer had duct-taped a red “NO” plastic button from a party store. Just out of reach. Fitting.
If McCabe hadn’t been aboard, it would have essentially been a ghost ship. The nine other Sea Hunter-class ships in his squadron were unmanned and the only vessels in the mine-laden waters, making him the sole American sailor in the entire strait. The ships ran as close to silent as possible, communicating only by laser burst. They kept watch using infrared search-and-tracking sensors that flew like parasails 1,000 feet above the ship. In the middle of this summer night, the Nantucket was all but invisible.
At least it was cool, if not cold, sitting in the “fridge,” as he had jokingly dubbed the bridge because of the onboard air conditioning constantly battling to keep the floating computer within its optimum operating range. He wore a tan aviator’s flight suit and augmented-reality (AR) helmet, deepening his sense of irony over his lack of control. This deployment was going to be hard to explain to the kids; he was aboard the Nantucket, at the cutting edge of naval warfare, but he was no more than a passenger. He was technically in command of the entire squadron, yet practically, he was in charge of nothing. But you couldn’t court-martial an algorithm, so the Navy brass had to keep a human “in the loop” in case things went awry with the onboard autonomous combat system.
It was a packed house, just not the usual crowd for a think tank event.
Last week in London, an unusual evening of theater and discussion about artificial intelligence and the future of conflict brought together more than 200 people: actors and art students, military and civilian government officials, and members of the tech and defense industries, among others.
The event, “Staging the Future: Artificial Intelligence and Conflict,” was put on by the Atlantic Council and the Royal United Services Institute, in partnership with Central St. Martins and the Platform Theatre. Myriad efforts are currently underway to better understand, and prepare for, a future in which computers and other machines can operate with human-like reasoned judgment and individual initiative, but many of these reports and conferences overlook crucial questions about the human element. Because theater is an inherently analog, live activity, it focuses the audience’s attention on the actors on stage.
The audience of venture capitalists, engineers, and other tech-sector denizens chuckled as they watched a video clip of an engineer using a hockey stick to shove a box away from the Atlas robot that was trying to pick it up. Each time the humanoid robot lumbered forward, its objective moved out of reach. From my vantage point at the back of the room, the laughter began to sound uneasy, as if the engineer’s actions and his invention’s response had crossed some imaginary line.
If these tech mavens aren’t sure how to respond to increasingly life-like robots and artificial intelligence systems, I wondered, what are we in the defense community missing?