Is a Top Secret Alexa the Future of Intelligence Analysis?

The U.S. intelligence community wants to know if computer code can do what top human analysts can.

A new Amazon Echo, which is powered by Amazon's Alexa voice assistant. (AP Photo/Elaine Thompson)

By Lee Ferran

There is a possible future in which a bleary-eyed President of the United States treads into the Oval Office in the morning and addresses a seemingly empty room. “Alexa…” the leader of the free world begins.

But rather than asking the digital personal assistant for the weather or sports scores, as people do every morning across the country today, the president makes the usual request for one of the most sensitive national security documents in the world: the President’s Daily Brief (PDB).

Beyond the novelty of being answered by a digital voice, which certainly would have worn off by then, the astounding part of the interaction would be this: In this future, the PDB itself may have been written not by a team of top human intelligence analysts, but by an algorithm – computer code designed to scour not only digitized intelligence reports but the vastness of cyberspace and its dark corners to produce, in seconds, a comprehensive but concise report on the most urgent national security matters of the day, all without human involvement.

This may all be an extreme vision of a future that would give cyber and national security experts the shakes, but the idea of computer code taking on the delicate art of intelligence analysis, for which the PDB is a pinnacle product, is very much alive and growing in the intelligence community today.

Dennis Gleeson, a former director of strategy in the CIA’s Directorate of Analysis, told RealClearLife he could see an algorithm penning a rough draft of the PDB within the decade.

“I wouldn’t say 20 or 50 [years], I’d say five to 10,” said Gleeson, a self-professed “technological optimist.” “When I look at where machine learning, deep learning algorithms are, we’re finally getting to the point of having the computational power to do some really amazing things.”

In a world where neural networks, artificial intelligence and machine learning systems have long since invaded the realm of what were thought to be distinctly human endeavors, from literature to pop music to (self-gasp!) journalism, it’s only natural that people with high-level security clearances took notice of the possibilities for their own shadowy world.

Top intelligence officials are asking: Could computer code make sense of a complex and dangerous world as well as, or better than, the best human experts in America’s spy agencies?

To find out, on Monday the Office of the Director of National Intelligence announced the winners of the “Xpress” Analytic Product Generation Challenge. The contest, which opened to the public in May, tasked computer programmers with designing a system that could mimic the work of human intelligence analysts, but do it much faster.

The ODNI and Defense Department, the challenge description said, “are interested in determining just how far along we are toward achieving the goal of machine-generated finished intelligence.”

The contest was not a real-world simulation: the 387 contestants from 42 countries did not have access to classified information. In fact, they were restricted to drawing only from past issues of SIGNAL magazine, a national security publication produced by the Armed Forces Communications and Electronics Association (AFCEA).

The trick was to create a program that could digest over 15,000 news reports, extract the relevant data, make sense of it in context, combine it with other data and then produce a legible report summarizing its findings in response to a given question. The closer the report came to appearing human-generated, the better.
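As a rough illustration of the task – and emphatically not any contestant’s actual code – a bare-bones extractive approach in Python might score each sentence in a corpus against a question and stitch the top matches into a “report.” Everything here, from the scoring rule to the stand-in articles, is an invented assumption:

```python
# Illustrative sketch only: a toy extractive summarizer of the kind a
# contestant might start from. The scoring method, names and sample
# text are assumptions, not an actual Xpress challenge entry.
import re
from collections import Counter

STOPWORDS = {"the", "a", "an", "of", "to", "and", "in", "is",
             "that", "for", "on", "with", "what"}

def summarize(articles, query, top_k=3):
    """Score each sentence against a query by shared keyword
    frequency and return the top-scoring sentences as a crude report."""
    # Split the corpus into sentences.
    sentences = []
    for text in articles:
        sentences.extend(
            s.strip() for s in re.split(r"(?<=[.!?])\s+", text) if s.strip()
        )

    # Keep only the meaningful words from the question.
    query_terms = set(re.findall(r"[a-z']+", query.lower())) - STOPWORDS

    def score(sentence):
        words = Counter(re.findall(r"[a-z']+", sentence.lower()))
        return sum(words[t] for t in query_terms)

    ranked = sorted(sentences, key=score, reverse=True)
    return " ".join(ranked[:top_k])

# Example: "analyze" two tiny stand-in articles.
corpus = [
    "The exercise tested secure communications. Analysts flagged a spike in chatter.",
    "Chatter increased near the border. Officials said communications remained secure.",
]
print(summarize(corpus, "What happened with the chatter?"))
```

A real entry would of course need far more – entity extraction, deduplication, fluent generation – but even this toy version shows why speed comes cheap while human-sounding judgment does not.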

At the close of the contest, one programmer, Simon Cazals, won three of the four categories for his code’s reporting, but an ODNI official said that none of the automated reports matched the quality of those of trained humans. For now, it seems human intelligence analysts are secure in their careers, at least when it comes to the robotic threat.

Still, the official – David Isaacson, the ODNI Directorate of Science and Technology’s program manager for the challenge – said afterward that Cazals’ program was able to produce responses, drawn from thousands of SIGNAL news articles, in “about 10 seconds.” What the automated reports lack in quality, they might make up for in blinding speed.

“Ultimately, such AI-enabled approaches may afford decision-makers a parallel intelligence production model that allows them to rapidly determine if such a machine-generated output is ‘good enough’ for their pressing information needs,” Isaacson said.

Gleeson said that in the real world, which is far more complex than a single news outlet’s archive, machine learning still has a long way to go in overcoming its main obstacle: “unstructured data.”

Computer code is extremely good at reading data that comes in tidy data sets, like stock market prices or sports box scores, but it still struggles with formats that are much messier yet naturally intelligible to humans, like news reports from different outlets, speeches or emoji-laden tweetstorms.

“I think that as you begin to think about moving beyond highly structured, often quantitative data, it becomes significantly harder,” he said. “Because context matters.”

An algorithm may be able to reliably tell the overall story of a baseball game based on the stats, Gleeson said, but it can’t yet discuss the nuances of a particular pitch selection or the impact of energetic fans on a team’s spirit. Nebulous details like those, when applied to national security, could be the difference between life and death for soldiers half a world away.
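Gleeson’s baseball example maps neatly onto code. The sketch below, with invented teams and field names, shows why: a tidy box score yields a recap through simple lookups, while the same facts buried in prose do not:

```python
# Illustrative sketch: structured stats are trivial for code to
# narrate, while free text is not. All data and field names here
# are invented for illustration.
box_score = {
    "home": "Hawks", "away": "Owls",
    "home_runs": 5, "away_runs": 3,
    "winning_pitcher": "Ramos",
}

# From tidy fields, a template yields a serviceable game recap.
recap = (f"The {box_score['home']} beat the {box_score['away']} "
         f"{box_score['home_runs']}-{box_score['away_runs']}; "
         f"{box_score['winning_pitcher']} got the win.")
print(recap)

# The same facts buried in prose resist the lookup approach:
report = "Riding a raucous home crowd, Ramos outdueled the Owls' ace, 5-3."
# There is no box_score["crowd_energy"] field to read; extracting
# "raucous home crowd" and judging its impact on the game is exactly
# the unstructured-data problem Gleeson describes.
```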

Beyond the actual readability of the data, analytical algorithms would also have to attach some judgment of confidence to each source – who’s more likely to be accurate, to be lying, to have been misled, to be holding something back – all of which is part of the “tradecraft” that professional intelligence analysts take years to learn in training and on the job.
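One can imagine – and this is purely a speculative sketch, with invented sources, weights and combining rule, not actual tradecraft – a naive version of that source-weighting problem in code:

```python
# Illustrative sketch: one naive way an algorithm might fold source
# reliability into a claim's confidence. The sources, weights and
# combining rule are all assumptions for illustration.
sources = [
    {"name": "liaison service",   "reliability": 0.6, "reports_claim": True},
    {"name": "signals intercept", "reliability": 0.9, "reports_claim": True},
    {"name": "open-source post",  "reliability": 0.3, "reports_claim": False},
]

def claim_confidence(sources):
    """Weighted vote: corroborating sources contribute their
    reliability weight; the total weight normalizes the score to 0-1."""
    total = sum(s["reliability"] for s in sources)
    support = sum(s["reliability"] for s in sources if s["reports_claim"])
    return support / total if total else 0.0

print(f"confidence: {claim_confidence(sources):.2f}")  # 0.83
```

A real system would need to learn those reliability weights, detect deception and handle sources that contradict themselves over time – the very judgments analysts spend years developing.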

As such code evolves to account for all that complexity and nuance, its own complexity is likely to balloon, which presents a whole new challenge.

“When you’re dealing with intelligence analysis, it’s a pretty high-stakes game,” Gleeson said. “If you come out with a bold assessment, someone will ask you, ‘Well, how did you come up with this?’”

Right now, policy-makers can simply call up the analysts and ask how they reached a conclusion. But a highly complex algorithm may be a “black box” when it comes to its procedure, Gleeson said, making it nearly impossible to spot a mistake made along the way.

“The farther you go away from situations where you have a really good sense of the data – and a high degree of confidence in the data and the underlying methodology – the farther away you go from that, the riskier it becomes,” he said.

Gleeson’s focus late in his CIA career was on analytic modernization and transformation at the agency, and he said the intelligence community is right to pursue deep learning algorithms in order to cope with the ever-growing onslaught of data created online every day.

He said he can easily picture a world where a machine learning algorithm works alongside human analysts to more quickly produce quality analytical products. Let the machine handle the “rough draft” for the basics, let the human experts add the context and nuance.

But does he see a future where Alexa gives the President an early-morning PDB that was produced from start to finish by an algorithm – all without a human involved?

“It might not be that a customer, a policymaker ever interacts with –” he said before catching himself. “I shouldn’t say ‘not ever.’”

Lee Ferran is an Emmy Award-winning investigative journalist and the founder of Code and Dagger, a foreign affairs and national security news website.
