Short Stories, Long Conversations

On the latest US Naval Institute podcast, I had a great time discussing my latest short story, AUTOMATED VALOR, with Proceedings Editor-in-Chief Bill Hamblet and Director of Outreach Ward Carroll. Set in the 2030s, the story follows British forces in an urban fight in Djibouti and asks fundamental questions about the essence of leadership in the era of artificial intelligence. The podcast also covered my motivations as a writer, how to establish a credible narrative in future worlds, my journalistic background, disruptive civilian and defense technologies, and more.

Listen to the US Naval Institute Proceedings podcast episode 35.

Automated Valor

“Move, move, move!” she shouted. The closer the threat, the more her harness tightened, shielding her behind the combat couch’s blast-resistant wings. It felt as if somebody were hammering her coffin lid down while she was paralyzed but still alive. This particular fear was a well-worn track for the 24-year-old private. To suppress the panic, she angrily gloved a salvo of 30 thumb-sized diverters skyward. She quickly followed them with a pair of four-inch pulse-mortar rounds. Those would float gently down on parachutes, shorting out anything electronic within a five-meter radius until they exhausted their batteries. Her haptic suit pinched her to let her know it was overkill for the incoming threat, but it still felt right. She could answer for it when she wasn’t as worried about dying—whenever that day might come.


Read the full story at the US Naval Institute’s Proceedings magazine.

Bots, AI: Break With Convention

Russia’s next generation of strategic weaponry may be a bit more distant and a bit less fearsome than Vladimir Putin recently claimed. But his March 1 speech about titanic ballistic missiles and nuclear-powered undersea drones should spur American defense and technology communities to move faster — indeed, uncomfortably so — to embrace similarly disruptive ideas such as artificial intelligence and robotics.

Read more of my op-ed with SparkCognition CEO Amir Husain at Defense One.

When The Blood Runs Cold


SIXTH PLANET, HOTH SYSTEM – The tauntaun ran screaming across the crevasses and zig-zag trenches dug into Nev Ice Flow, fur singed black and gold and slathered in crimson.

A tauntaun doesn’t bleed red though. Rebel infantry does.

So starts the short story “When the Blood Runs Cold,” my contribution to the upcoming anthology STRATEGY STRIKES BACK: How Star Wars Explains Modern Military Conflict, due out in May from Potomac Books. It is an eyewitness news account of the Rebel retreat from Hoth, modeled on Ernie Pyle’s dispatches during World War II.

While my approach to answering the question of what the Star Wars universe can teach us about contemporary and future conflict relies on fiction, the collection of more than two dozen essays includes analysis from the smartest writers on strategy and military affairs today.

Kudos to editors John Amble, Max Brooks, Matt Cavanaugh, and Jaym Gates for producing such a valuable – and enjoyable – book. It can be read for entertainment as well as for professional development, which will give plenty of people a chance to talk at work about the Death Star’s acquisition travails or the ethics of Rebel tactics or morale within the Imperial cadre.

No writer should be shy about clamoring for pre-orders, and in that spirit here is a link: http://www.nebraskapress.unl.edu/potomac-books/9781640120334/

Sci-Fi And The Military Reader

The view that fiction belongs on modern military reading lists is increasingly accepted. One need only look at the titles on the reading lists put out by US Special Operations Command or the senior officers of the Navy and Marine Corps to see that it has a valued place in military professional development. And this is not limited to the Iliad and the Odyssey. Fiction, and specifically science fiction and future-war fiction, is going mainstream in Western militaries.

Read the full article at RUSI Journal.

Operation CANDLEMAKER

There are myriad ways to try to understand how robotics and autonomy will change warfare, and the more creative, such as theater, the better. The short story remains one of my favorite ways to work through these questions, because a carefully written narrative can help check our assumptions and biases about how we want events to unfold versus how they actually might. My latest military future-fiction short story is Operation CANDLEMAKER. It follows two frontline characters present when US forces employ autonomous weapons in combat for the first time. My favorite feedback about the story so far is that “the laws of physics and Murphy prevail.” High praise indeed.

After seventeen years in the US Navy, Commander Wayne McCabe got seasick for the first time when a robot had the helm.

Technically, there was no actual metal humanoid at the controls because the 130-foot Sea Hunter-class trimaran warship was driving itself, six miles south of Jazireh-ye Larak in the Strait of Hormuz. McCabe ground his teeth as he fought the urge to throw up yet again and wondered what he was really doing aboard the USS Nantucket. McCabe adjusted the five-point harness on the captain’s chair by feel and looked at the spot on the console in front of him where the ship’s chief engineer had duct taped a red “NO” plastic button from a party store. Just out of reach. Fitting.

If McCabe hadn’t been aboard, then it would have essentially been a ghost ship. The nine other Sea Hunter-class ships in his squadron were unmanned and were the only ships in the mine-laden waters, making him the sole American sailor in the entire strait. The ships ran as close to silent as possible, communicating just by laser burst. They kept watch using infrared search and tracking sensors that flew like parasails 1,000 feet above the ship. In the middle of this summer night, the Nantucket was all but invisible.

At least it was cool, if not cold, sitting in the “fridge,” as he had jokingly called the bridge because of the onboard air conditioning constantly battling to keep the floating computer within its optimum operating range. He wore a tan aviator’s flight suit and augmented-reality (AR) helmet, deepening his sense of irony over his lack of control. This deployment was going to be hard to explain to the kids; he was aboard the Nantucket, at the cutting edge of naval warfare, but he was no more than a passenger. He was technically in command of the entire squadron, yet practically, he was in charge of nothing. But you couldn’t court-martial an algorithm, so the Navy brass had to keep a human “in the loop” in case things went awry with the onboard autonomous combat system.

Read the full story at the Art of the Future Project website.

Staging The Future

It was a packed house, just not the usual crowd for a think tank event.

But last week in London, an unusual evening of theater and discussion about artificial intelligence and the future of conflict brought together more than 200 people, including actors and art students, military and civilian government officials, and representatives of the tech and defense industries, among others.

The event, “Staging the Future: Artificial Intelligence and Conflict,” was put on by the Atlantic Council and the Royal United Services Institute, in partnership with Central Saint Martins and the Platform Theatre. Myriad efforts are underway to better understand, and prepare for, a future in which computers and other machines can operate with human-like reasoned judgment and individual initiative, but many of these reports and conferences overlook crucial questions about the human element. Because theater is an inherently analog, and live, activity, it focuses the audience’s attention on the actors on stage.

Read more at The Art of the Future Project website.

Lifting The Lid On Military AI

Among all the voices in the debate over what role lethal autonomous capabilities should play in military and security systems, the very people who dream up and build science-fiction realities are the clearest in articulating the risks of robots run amok, or of even more devastating human-created technological disasters. The latest letter, from 116 senior robotics and AI leaders, cautions against the use of artificial intelligence in the defense domain, arguing that humanity is approaching a point of no return. “We do not have long to act. Once this Pandora’s box is opened, it will be hard to close,” they wrote.

The problem, however, is that this is an era when civilian technology innovation outstrips what is conjured up in government labs. The global “AI” revolution is already underway, and its impact will certainly shape future conflict. Don’t expect a Terminator reboot. Pandora’s box, then, may be the last one to be opened: Facebook, Google, Baidu, Alibaba, Uber, and scores of other companies have already lifted the lids on what is possible with machine-learning software and robotics, because generational, society-changing economic potential is on the line. So much so that the US wants to block Chinese investment in related technologies in certain cases. As I told RealClear Defense:

August Cole, a senior fellow at the Atlantic Council and writer at the consulting firm Avascent, said the concerns raised by tech leaders on autonomous weapons are valid, but a ban is unrealistic. “Given the proliferation of civilian machine learning and autonomy advances in everything from cars to finance to social media, a prohibition won’t work,” he said.

Setting limits on technology ultimately would hurt the military, which depends on commercial innovations, said Cole. “What needs to develop is an international legal, moral and ethical framework. … But given the unrelenting speed of commercial breakthroughs in AI, robotics and machine learning, this may be a taller order than asking for an outright ban on autonomous weapons.”

Read the full story by Sandra Erwin.

Terrible Twos

As parents well know, the terrible twos are so named for a reason. For authors, publishing-date anniversaries, or book birthdays, can be equally tricky at the age when a book’s launch buzz is long forgotten and the weight of creative expectations has moved from crawling underfoot to fluid running.

There might be tantrums.

Or cake and candles.

For Ghost Fleet, the summer of 2017 marked two years since the novel’s launch on June 30, 2015. The book’s recent addition to the professional reading lists of the Commander of US Special Operations Command and the Chief of Staff of the US Army means its “twos” aren’t terrible at all. It takes time to build an audience, particularly as readers grow busier and busier. The hope is that the connection with its characters, stories, and concepts continues to spread from person to person and organization to organization with increasing urgency and enthusiasm. The book is already on myriad military reading lists, but seeing it still endorsed as professionally relevant makes a parent proud.

On the Army’s list, Ghost Fleet joined a short list of esteemed fiction titles, including Gates of Fire by Stephen Pressfield, Matterhorn by Karl Marlantes, and Virgil’s The Aeneid. “Each of us faces busy schedules every day and finding time to read and think is a recurring challenge. But even as we train our units and physically condition our bodies, we must improve our minds through reading and critical thinking,” wrote Gen. Mark Milley, the 39th Chief of Staff of the Army, in his preface to the list.

On the concise SOCOM reading list, Ghost Fleet can be found under “Disruptive Technology” alongside 3D Printing Will Rock The World by John Hornick and The Red Web: The Struggle Between Russia’s Digital Dictators and the New Online Revolutionaries by Andrei Soldatov and Irina Borogan. It is the sole fiction title.

By next summer, work on the next book will be well underway and I hope there will be time and occasion, once again, to celebrate another Ghost Fleet birthday.

Don’t Forget The Human Element

The audience of venture capitalists, engineers and other tech-sector denizens chuckled as they watched a video clip of an engineer using a hockey stick to shove a box away from the Atlas robot that was trying to pick it up. Each time the humanoid robot lumbered forward, its objective moved out of reach. From my vantage point at the back of the room, the audience’s reaction to the situation began to sound uneasy, as if the engineer’s actions and their invention’s response had crossed some imaginary line.

If these tech mavens aren’t sure how to respond to increasingly life-like robots and artificial intelligence systems, I wondered, what are we in the defense community missing?

Read more at Defense One.