Using Big Data to Reduce Leaks
The latest leak of sensitive intelligence is cause for concern about how the national security community secures its reports, sources, and methods. Millions of people have access to classified information—including 21-year-old National Guardsmen—and occasionally those people make terrible decisions motivated by ego, greed, or ideology. This leak is likely not the last. It is, however, an opportunity to reexamine existing operational security measures—many of which reflect an outdated bureaucratic model—for safeguarding the nation’s most sensitive intelligence estimates. Just as businesses have adapted to using big data and analytics to secure information, the national security community needs to move into the twenty-first century and embrace the promise of novel technologies for stopping leaks before they spill.
Today’s security focus is physical: emphasis is placed on local security managers and clearance checks, an approach known as security by site. Instead, DOD needs to build security across the network itself, finding the blinking red signal in the noise of network activity.
To begin to address this gap, DOD and the intelligence community (IC) have in the past few years implemented a method called Continuous Evaluation (CE) for clearance holders. Meant to rapidly surface a development in a clearance holder’s life that could make them vulnerable to recruitment by a foreign intelligence service, CE (also sometimes called continuous vetting) uses data scrapes to flag issues like an arrest or financial trouble as soon as they occur. Under previous methods, security violations or risky behavior could sit unnoticed for years between reinvestigations.
This same mindset shift toward intensive, real-time monitoring needs to happen for unusual activity within classified systems. Agencies and departments are beginning to shift their network security practices from a moat approach to a zero trust approach. Under the moat model, once an individual passes the initial security checks to get onto a system, they have relatively free rein within the secure environment. Under zero trust, an individual must demonstrate legitimate access not just to the system but to each particular part of the system they touch. This approach can help flag actors like Edward Snowden, who stole vast quantities of information he had no need to access.
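For illustration only, the short sketch below shows what that per-resource, deny-by-default logic looks like in code. The users, compartment labels, and policy table are hypothetical stand-ins, not any real DOD or IC system.

```python
# Minimal sketch of the zero trust idea described above: access is denied by
# default and checked per resource, not just at the network boundary.
# All names and labels here are hypothetical, invented for this example.

# Hypothetical per-user entitlements: which compartments each user may read.
ENTITLEMENTS = {
    "analyst_ab": {"CYBER"},
    "analyst_cd": {"CYBER", "EASTASIA"},
}

# Hypothetical compartment label attached to each report.
DOCUMENT_LABELS = {
    "report_0417": "EASTASIA",
    "report_0935": "CYBER",
}

def may_access(user: str, document: str) -> bool:
    """Return True only if the user is entitled to this document's compartment.

    Unlike a moat model, holding a network account is not enough:
    every request is evaluated against the specific resource.
    """
    label = DOCUMENT_LABELS.get(document)
    allowed = ENTITLEMENTS.get(user, set())
    return label is not None and label in allowed

if __name__ == "__main__":
    # A cyber analyst reaching for an East Asia report is denied; the denial
    # itself becomes a signal worth reviewing.
    for user, doc in [("analyst_ab", "report_0935"), ("analyst_ab", "report_0417")]:
        decision = "ALLOW" if may_access(user, doc) else "DENY"
        print(f"{user} -> {doc}: {decision}")
```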
More is being uncovered about what types of information the alleged leaker should have been able to access. Most likely he was approved to cross the moat but had no need to know the variety of things he printed. Stopping a leak like this one would require a highly sophisticated form of internal monitoring built on a robust model of normal and abnormal behavior. Such a system could be modeled on the ones that identify credit card fraud, flagging activity that falls outside an individual’s “normal” pattern. For example, if this recent leaker did not normally print intelligence products, then one day printed an unusual quantity, that would be flagged. If he focused on cybersecurity in his normal job but suddenly began accessing reports about China or Iran, that could be flagged as suspicious. A question from a supervisor about unusual activity might be enough to dissuade the person from going further. In the recent case, perhaps the damage would have been five documents instead of several hundred. Every government employee agrees to be monitored while on government networks, and that monitoring produces big data sets with statistical patterns and trends that could be used to establish those baselines of activity.
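As a rough illustration of the fraud-detection analogy, the sketch below flags a day of printing that departs sharply from an individual’s own baseline. The history, threshold, and numbers are assumptions chosen for the example, not parameters of any actual monitoring program.

```python
# Toy illustration of baseline-deviation flagging, in the spirit of credit
# card fraud detection: score today's print volume against a user's own
# history and flag large departures. Values and thresholds are illustrative.
import statistics

def flag_unusual_printing(daily_print_counts: list[int], today: int,
                          z_threshold: float = 3.0) -> bool:
    """Return True if today's print volume sits far outside the user's normal pattern."""
    mean = statistics.mean(daily_print_counts)
    stdev = statistics.pstdev(daily_print_counts) or 1.0  # avoid dividing by zero
    z_score = (today - mean) / stdev
    return z_score > z_threshold

if __name__ == "__main__":
    # Hypothetical history: an analyst who rarely prints, then one day prints dozens of reports.
    history = [0, 1, 0, 0, 2, 0, 1, 0, 0, 1]
    print(flag_unusual_printing(history, today=45))  # True: route to a supervisor for a question
    print(flag_unusual_printing(history, today=2))   # False: within normal variation
```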
Beyond adopting industry best practices for using big data and artificial intelligence/machine learning to detect fraud, the IC should consider an even more radical approach: using a blockchain to track access and downloading. A blockchain is essentially a tamper-resistant digital ledger; the technology is most commonly associated with cryptocurrencies and nonfungible tokens (NFTs), but here each entry would record who accessed a document and when. A blockchain would give managers a record of everyone who accessed a document and let them tailor access, turning it on and off. In essence, each intelligence report becomes an entry in the blockchain where access can not only be monitored, but also granted on a temporary basis or revoked permanently. Combined with big data analytics to monitor for anomalies, this approach would provide fast and effective ways of detecting threats and streamlining access to minimize damage.
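A bare-bones sketch of the ledger concept follows: an append-only chain of access records in which each entry commits to the one before it, so tampering with the history is detectable. It is an illustration of the idea, not a production blockchain, and the documents, users, and actions are hypothetical.

```python
# Illustrative append-only access ledger: each record carries a hash that
# links back to the previous record, so altering past entries breaks the chain.
# Users, documents, and actions are invented for this example.
import hashlib
import json
import time

class AccessLedger:
    def __init__(self) -> None:
        self.entries = []  # each entry is a dict linked to its predecessor by hash

    def record(self, user: str, document: str, action: str) -> dict:
        """Append an access event (e.g., grant, view, revoke) to the chain."""
        prev_hash = self.entries[-1]["hash"] if self.entries else "0" * 64
        body = {"user": user, "document": document, "action": action,
                "time": time.time(), "prev_hash": prev_hash}
        body["hash"] = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
        self.entries.append(body)
        return body

    def history(self, document: str) -> list:
        """Everyone who touched a document, in order: the audit trail a manager would review."""
        return [e for e in self.entries if e["document"] == document]

if __name__ == "__main__":
    ledger = AccessLedger()
    ledger.record("manager_x", "report_0417", "grant_temporary")
    ledger.record("analyst_ab", "report_0417", "view")
    ledger.record("manager_x", "report_0417", "revoke")
    for entry in ledger.history("report_0417"):
        print(entry["user"], entry["action"])
```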
Not only would adopting a blockchain limit leaks, it would also provide a mechanism for sharing intelligence more rapidly with key partners. Rather than worry about permanently granting access to a network, intelligence professionals working with foreign disclosure officers could use a blockchain to grant limited and temporary access, turning access on and off for select documents, or even for portions of those documents known as tear lines, shared with allies and partners.
The key challenge is building in checks that allow near-instantaneous intervention to protect classified information without adding so much friction that intelligence professionals cannot be fast and agile in their work. Protecting secrets is important, but if an analyst on a tight deadline to write a product warning of a potential attack is suddenly unable to access important information because of badly tuned algorithm parameters, the results could be disastrous. Technologies like zero trust and blockchain, used correctly, have the potential to protect classified information while allowing intelligence professionals to do their jobs. Sadly, most leaks are followed by overcorrections that limit agile reporting and create layers of costly bureaucracy. The opportunity cost is clear: every redundant security official is one fewer analyst tracking threats. Instead, some of the rote responsibilities of a security job could be handled by embracing emerging technologies and business best practices.
It may also be time to monitor the publicly available online activity of those with high-level clearances, which would show when individuals who have already agreed to be monitored are experiencing changes in behavior that warrant deeper investigation. Most leaks are linked to ego, and data can reveal when someone is desperate for attention and validation in a manner that increases national security risk. It is a dramatic level of monitoring, but not far beyond the scrutiny those with a clearance already undergo, from financial monitoring to polygraphs to reporting travel and outside contacts. That is the cost associated with the privilege of having access to the nation’s most sensitive secrets.
Beyond CE, the IC and government at large need to learn from industry practices and embrace new technologies. First, they should take a page out of the credit card industry’s book and craft behavior-based algorithms for internal activity, piloting them with a unit that is highly respected but rarely operating under a painfully urgent deadline, like the National Intelligence Council, and, once proven, rolling them out to the larger community. Second, they should embrace the promise of blockchain: pilot test treating intelligence reports and access as nonfungible tokens that can be tracked, exchanged, and temporarily opened and closed. This test should ensure that the introduction of new methods does not slow workflows and that it allows intelligence professionals to focus on their core mission. In addition, it should include test cases for sharing timely data with partners.
The intelligence community will never stop all leaks, but it can minimize the damage and detect them early enough to preempt disaster. Everybody has the potential to lie, cheat, steal, and leak even the most sensitive secrets, often for the basest of human reasons: the fragility of ego and the desire to profit. No government will ever fully stop leaks and espionage. Wise governments will always find ways to make stealing their secrets more difficult and, most importantly, brief.
Emily Harding is the deputy director and senior fellow with the International Security Program at the Center for Strategic and International Studies (CSIS) in Washington, D.C. Benjamin Jensen is a senior fellow for future war, gaming, and strategy with the CSIS International Security Program.