Sunday, December 15, 2013

Insider Threats vs the Customer

Earlier today, on David Farber's Interesting People mailing list, Robert Anderson wrote (in part):
A quote like, “We weren’t able to flip a switch and have all of those changes made instantly,” strikes me as indicating gross incompetence by security professionals at NSA. They have known practical mitigation steps for over a decade, and didn’t take the care to assure that they were implemented in all relevant sites. Almost all writings on the subject have stated that the insider threat is the greatest threat to information security, so it should have been extremely high on anyone’s priority list.
Bob is exactly right. One reason that people on the NSA side of the Snowden disclosures are so eager to pillory Mr. Snowden is that he had the temerity to point out what we in the security community have known for decades: the emperor had no clothes. The internal security model at NSA has long been "you're on the inside or you aren't", because actually implementing "need to know" would hamper speed of response, and because it would require making much more credible assessments about which documents are sensitive. I'm sorry; a document drawn from an open, public source can't rationally be labeled secret in any responsible approach to security management. Yes, the fact that you are focusing on that document may provide information to an opposing force. The problem is that you end up labeling everything sensitive, with the result that it becomes impossible for your team to treat the notion of sensitivity appropriately. But you can't admit that, which drives the participants to an insider/outsider bunker mentality and an ever-growing pool of "cleared" people. You eventually end up in a mindset from which it appears justifiable to archive the metadata of your entire country without a warrant, because it has become necessary to destroy the Constitution in order to save it.

But that being said, it has been my experience that there are two kinds of "good guy" security professionals:
  1. Those who actually care about making things fundamentally (exponentially) harder for attackers. As near as I can tell, these either burn out or they convert to "type 2" (below). They burn out because fundamental solutions of this sort don't lend themselves to gradual deployment, so no individual customer or reasonably sized set of customers has any hope of making progress even when a technical solution exists. The result is that nobody pays for security that works, so most people don't believe that workable security is possible. The customers come to see security as an ever-increasing tax with no discernible benefit. The people with foundational technical solutions come to feel marginalized. They either give up in frustration and burn out, or they somehow acclimate themselves to the view that "patch and pray" is monetizable and better than nothing.
  2. Those who promulgate the "patch and pray" model of security. These are the folks who sell antivirus tools, packet inspection tools, firewalls, and the like. It's not that they don't care for fundamental solutions - some do, some don't. It's that they've come to recognize and accept that the customer's human nature largely precludes deploying those solutions. And however much I may hate the fact that the "patch and pray" approach extends the life of fundamentally flawed platforms, it has to be said that the customers are making the right economic decisions in the short term. As a customer, I can either buy your patch with low, known risk to my operations and some temporary benefit (however small), or I can buy a deep fix whose technical effectiveness is rarely easy to predict and whose deployment is expensive, highly disruptive, and places my business at significant risk.
The hell of it is, the customers aren't wrong in their assessment. Worse: the kinds of security standards (TCSEC, Common Criteria) that have been promulgated in the past don't offer a particularly useful framework for a solution, so nobody really knows what a "gold standard" should look like. From this perspective, it's pretty easy to see that the NSA has acted just like any other customer might act in failing utterly to deal with the insider threat. Which is tragically funny, because the NSA had the mandate to develop effective secure computing standards for 40 years, and did almost everything imaginable to ensure that no success was possible.

Meanwhile, for all the other customers, the "one of the good guys" agency that promulgated key elements of our cryptographic infrastructure is now revealed as not such a good guy after all. How does the poor customer decide who to trust in the future?

The answer, for better or worse, lies in open processes, open source code, and open validation. Solutions in which a customer (or a set of customers) can pay a second party who works for them to validate vendor assertions. Systems in which the validation of those assertions is wholly or in substantial part automated. Systems in which, by construction, the loud brayings of vested interests are unable to drown out the truth in the way they managed to do with cigarette smoke, asbestos, and global warming.
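To make the "automated validation" idea concrete, here is a minimal sketch, in Python, of one check a customer-funded validator could automate: a reproducible-build comparison, in which the vendor's published digest for a release is checked against a binary the validator rebuilt independently from the published source. The file names and digest below are hypothetical placeholders, not any real vendor's artifacts.

    # Minimal sketch of one automatable vendor-assertion check: a reproducible-build
    # comparison. The vendor asserts "this binary came from this source"; a second
    # party rebuilds from the published source and compares digests.
    # File names and the published digest are hypothetical placeholders.
    import hashlib
    import hmac

    def sha256_of(path: str) -> str:
        """Return the hex SHA-256 digest of a file, read in chunks."""
        h = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(65536), b""):
                h.update(chunk)
        return h.hexdigest()

    def verify_vendor_build(rebuilt_binary: str, vendor_published_digest: str) -> bool:
        """Check that our independent rebuild matches the digest the vendor published."""
        ours = sha256_of(rebuilt_binary)
        # Constant-time comparison; False means the vendor's assertion does not hold.
        return hmac.compare_digest(ours, vendor_published_digest.lower())

    if __name__ == "__main__":
        # Hypothetical inputs: a binary we rebuilt ourselves and the vendor's claim.
        ok = verify_vendor_build("dist/widget-1.2.3.bin",
                                 "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08")
        print("vendor assertion holds" if ok else "vendor assertion FAILED")

None of this is hard; the hard part is getting vendors to publish sources, build recipes, and digests in a form that makes such checks possible at all.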

The really unfortunate part of this is that it isn't enough to create and deploy a defensible technical framework at great expense and development risk. You also have to have a strategy for getting the message heard while you fight a patent system that stands squarely in the way of technical innovation by non-incumbents.

So the NSA does nothing effective about the insider threat and the good guys continue to burn out. Nothing to see here. Move along.

Tuesday, October 1, 2013

The Cost and Responsibility for Snowden's Breaches

The press has lately been recirculating stories about the dollar damages of the Snowden disclosures. The repudiation of key cryptography standards - the ones that underlie our electronic currency exchanges and clearinghouses, and that are present in an overwhelming number of products - may in the end cause billions of dollars in damage. Some of the press would have us believe that all of this is Snowden's fault. Better, some feel, to focus attention on the messenger and protect the perpetrator. Or even if not better, easier. It sells more papers to focus on a "David vs. Goliath" story than to examine whether Goliath was actually a Philistine.

In compromising these cryptography standards, NSA's alleged goal was to read the electronic communications of terrorists, arms dealers, and other such savory characters. In a world of open cryptography standards, the only way to do that was to compromise everybody. That includes ordinary citizens, businesses, governments (ours and others), armed forces command and control, domestic and global financial systems, and so on. This goes beyond privacy. Cryptography sits under all of our most essential electronic communications. Focusing on Snowden has people asking "How safe are my secrets from the NSA?" when a more pertinent question might be "Is my bank still safe from the Eastern Bloc mafia and the terrorist of the month?" Banks for the most part don't operate by storing dollar bills; they operate electronically. Then there is the power delivery infrastructure, or... the list goes on. That is what NSA compromised. And when you understand that, it becomes clear that the damage to us was far worse than any cost to the terrorists. In fact, the damage is proportional to your dependence on electronic infrastructure.

That's bad. Because it means that people inside our government, at the direction of government officials, sworn to protect and defend the Constitution and the country, actively conspired to undermine every segment of the United States along with our key allies. While the run-of-the-mill staff may not have understood this, the more senior people at NSA knew what they were doing. They were certainly told by people on the outside often enough. Frankly, I think some of them should hang. And I mean that literally. These decisions by NSA weren't made by extremist Muslims. They were made by people from Harvard, Yale, and Princeton (and elsewhere) right here in America.

But there is something worse. In a certain sense, the NSA's primary mission is the discovery of secrets. Being in the secret breaking business, one of the things they know very well is that the best way to break a secret is to get someone to tell you what it is. And there is always someone who will tell you, either out of conviction or out of fear of compromise. There was never a question whether the fact that NSA compromised every first world and second world country would leak. The only questions were who would leak it and how soon. It happened to be Snowden, but if not for Snowden it would have been somebody else.

So setting aside the technical damage, there is the fact that the U.S. Government is now known - and more importantly, believed - to have compromised ourselves and our allies. We need to ask what the consequences are of that. Here are some questions that suggest themselves:
  1. Cryptography is clearly too important to entrust to the government. Who can we trust?
  2. Fragmentation seems likely. Does that help or hinder us?
  3. Do the issues differ for communications cryptography vs. long-term storage cryptography? Given that communications are recorded and stored forever, I suspect not.
  4. Can our allies ever again trust an American-originated crypto system? Software system? Can we trust one from them?
  5. Can our allies ever again afford to trust an American manufacturer of communications equipment, given that every one of the major players seems to have gotten in bed with NSA when pressured to do so by the U.S. Government?
  6. What other compromised technologies have been promulgated through government-influenced standards and/or back room strong arm tactics?
One thing seems clear: we must now choose between the credibility of American technology businesses and the continuation of export controls on cryptography and computer security technology. The controls are ineffective for their alleged purpose; there are too many ways to circumvent them. The main use of these laws has been to allow government pressure to be brought to bear on vendors who won't "play ball" with U.S. Government objectives. As long as the big players in the U.S. computing and networking industries can be backdoored by their government (take that either way), only a fool would buy from them. If the goal is to destroy the American technology industry, this strategy is even better than software patents. As long as those laws remain on the books, the American tech sector has a credibility problem.

A second thing seems clear: we need to move to openly developed standards for critical systems, not just open standards. And not just openly developed standards, but standards whose "theory of operation" is explained and critically examined by the public. No more unexplained magic tables of numbers. We need fully open public review, and public reference implementations as part of the standardization process.
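To illustrate what "no more unexplained magic tables of numbers" can mean in practice, here is a minimal sketch, in Python, of the "nothing up my sleeve" approach: a standard's fixed constants are derived deterministically from a short, public, documented seed, so anyone can re-run the derivation and confirm that no hidden structure was smuggled in. The seed string and the number of constants here are invented for the example; they do not come from any actual standard.

    # Sketch of "nothing up my sleeve" constant generation: derive a standard's
    # fixed constants by hashing a short, public, explained seed, so that anyone
    # can re-derive them. The seed and count below are illustrative only.
    import hashlib

    def derive_constants(seed, count, bits=64):
        """Derive `count` integers of `bits` bits each by hashing seed || counter."""
        constants = []
        for i in range(count):
            digest = hashlib.sha256("{}/{}".format(seed, i).encode("utf-8")).digest()
            constants.append(int.from_bytes(digest, "big") >> (256 - bits))
        return constants

    if __name__ == "__main__":
        # Anyone holding the published seed can reproduce exactly these values.
        for i, c in enumerate(derive_constants("example-standard-v1 round constants", 4)):
            print("K[{}] = {:#018x}".format(i, c))

A reviewer who can re-derive every constant in a specification has far less to take on faith than one who is handed an opaque table.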

A third thing seems clear: fixing the cryptography doesn't solve the problem. Back doors or no back doors, the best place to break crypto is at the insecure endpoints. We need to develop information management methods (e.g. "zero knowledge" methods, but also others) and software architectures that let us limit the scope of damage when it occurs. The operating systems - and consequently the applications - that we are using today simply weren't designed for this. Fortunately, the hardware environment has converged enough that we can do a lot better than we have in the past. There will never be perfect security, but we can largely eliminate the exponential advantage that is currently enjoyed by the attacker.
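As a toy illustration of damage limitation (not a specific proposal from this post), here is a sketch, in Python, of one such pattern: compartmentalization, in which every record is sealed under its own independent key, so that the compromise of any one key, or of the endpoint that holds it, exposes one record rather than the whole store. It assumes the third-party "cryptography" package; the record names are invented for the example.

    # Toy sketch of damage limitation by compartmentalization: one key per record,
    # so a single compromised key exposes a single record, not everything.
    # Requires the third-party "cryptography" package (pip install cryptography).
    from cryptography.fernet import Fernet

    class CompartmentedStore:
        """Toy store in which every record is sealed under its own key."""

        def __init__(self):
            self._keys = {}         # record name -> key (in practice, held per compartment)
            self._ciphertexts = {}  # record name -> sealed bytes

        def put(self, name, plaintext):
            key = Fernet.generate_key()
            self._keys[name] = key
            self._ciphertexts[name] = Fernet(key).encrypt(plaintext)

        def get(self, name):
            return Fernet(self._keys[name]).decrypt(self._ciphertexts[name])

    if __name__ == "__main__":
        store = CompartmentedStore()
        store.put("payroll", b"salary table")
        store.put("designs", b"next year's product plans")
        # Leaking the "payroll" key reveals nothing about "designs".
        print(store.get("payroll"))

The interesting engineering problems are in where the keys live and who can wield them, which is exactly the part today's operating systems give us so little help with.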