Sunday, September 28, 2014

Comey vs. National and Personal Security

FBI Director James B. Comey has been complaining this week about new "encrypted by default" system releases from Apple (iOS 8) and Google (forthcoming Android). Though he does have the grace to acknowledge the importance of warrants in passing, his main message is classic Fear, Uncertainty, and Doubt (FUD):


"There will come a day -- well it comes every day in this business -- when it will matter a great, great deal to the lives of people of all kinds that we be able to with judicial authorization gain access to a kidnapper's or a terrorist or a criminal's device,” the Huffington Post quoted Comey as saying. “I just want to make sure we have a good conversation in this country before that day comes.”

"I'd hate to have people look at me and say, 'Well how come you can't save this kid,' 'how come you can't do this thing,'" he added.

Mr. Comey seems to forget that law enforcement did pretty well before widespread instant communication and the technology for wiretaps ever existed. Not long ago, law enforcement was accomplished with boots on the ground and good detective work.

I'm not sure I want to push law enforcement back to those days, but it does point out that access to electronic gizmos is simply not necessary to perform the functions of law enforcement. This isn't a discussion about whether law enforcement can do its job. It's a discussion about how much it costs for law enforcement to do its job. It's about whether and how we will choose to pay those costs. It's about risks to law enforcement officers, because going undercover is a lot more dangerous than stealing data off a cell phone without a warrant. It's also about the danger to society of being exposed to non-consenting, warrantless search. But it's a discussion about cost, not capability.

These technologies from Apple and Google aren't a response to law enforcement overreach. Computer security research heading in this direction - including my own - has been underway for decades. The purpose of that work has been to protect users from theft of sensitive information by anyone. And yes, law enforcement agencies and governments have been viewed as adversaries in that process, because foreign governments and criminals from many countries (including our own) engage in well-funded, systematic, penetrative data collection. We have a personal need, a business need, and a national security interest in preventing device data extraction and analysis at border checkpoints, and also on our connected infrastructure. Many countries have no effective notion of due process at all. Others have rules very different from ours. Technological means are pretty much the only way we have of dealing with that. Homeland Security has funded a lot of this work, and has certainly known the day was coming. Now it's here.

Unfortunately, you can't have it both ways. There's overwhelming evidence that if a secure device has a back door, then the device's back door will be penetrated non-consensually by multiple parties. This leaves Mr. Comey in a position that is both awkward and exposed: bent over. He can have national and personal security, or he can have technological means for non-consenting access by law enforcement, but he can't have both. There is no national security without individual security; too much of our national security relies on civilian infrastructure and operations.

We can't solve this by giving Homeland Security (or any agency) electronic keys to everything. Every government and criminal enterprise in the world would try to get at them, and any packaging of those secrets that makes them practically effective for law enforcement use would be unacceptably exposed to compromise. Once compromised, every secure device becomes scrap. Backdoor keys are what's known as a "high value target".

But the real beauty of back door keys is that you don't actually have to steal them to cause national-scale damage. If you can make people think you stole the keys, that's billions of dollars of secure devices scrapped, with all of the attendant disruption as people scramble to figure out what was lost, rebuild the national banking system, and so forth. And we all know that nobody panics in a banking crisis, right? A witch hunt would emerge, determined to find losses that don't exist. All you would have to do to prolong the damage is slowly leak a few files obtained through more conventional means to the media, sit back, relax, and enjoy the show! Ed Snowden has been showing us all how that game is played. I can't decide if this is more likely to be attempted first by a government agency or some form of organized crime group.

Meanwhile, I think it's clear by now that American law enforcement has conspired to violate our constitutional search and seizure protections in every way they can conceive. We may rein them in temporarily, but it is the nature of bureaucracies to expand their reach wherever possible. The consequences of giving up our privacy are far-reaching and very hard to understand. Where technology is concerned, abuses can be very hard for the victim to observe, and even harder to combat. The risks from data collected now may emerge for years. It doesn't take an evil government for this to happen; only a monofocused, well-intended individual who abuses a position of power. Personal data about the general populace in the hands of the government is a ticking bomb.

For law enforcement and intelligence agencies, the only real alternative is to put more boots on the ground and aggressively pursue those technologies that are constitutionally legitimate. The consequences of increasing operational personnel budgets are pretty easy to understand. It's money out of your pocket, but it's money with clearly measurable impact, benefit, and risk. As with government, there are individuals in law enforcement who abuse their power, but we can observe those abuses, and we have many well-tried mechanisms to combat them.

Protecting privacy is important, but it comes with consequences. The question isn't whether the Apple and Google decisions about encryption are good or bad. The questions are:
  1. How much are we, as a society, willing to pay to maintain an appropriate balance between safety and privacy? 
  2. In what form will we pay it? One we understand, or one with ongoing consequences that we can't possibly understand and that is readily subject to abuse? 
  3. Should we place the entire country (and perhaps much of the world) at such high risk in order to make Mr. Comey's job cheaper and easier? 
  4. If we bend over, how many times will we get back-doored, and by whom?

Sunday, December 15, 2013

Insider Threats vs the Customer

Earlier today, on David Farber's Interesting People mailing list, Robert Anderson wrote (in part):
A quote like, “We weren’t able to flip a switch and have all of those changes made instantly,” strikes me as indicating gross incompetence by security professionals at NSA. They have known practical mitigation steps for over a decade, and didn’t take the care to assure that they were implemented in all relevant sites. Almost all writings on the subject have stated that the insider threat is the greatest threat to information security, so it should have been extremely high on anyone’s priority list.
Bob is exactly right. One reason that people on the NSA side of the Snowden disclosures are so eager to pillory Mr. Snowden is that he had the temerity to point out what we in the security community have known for decades: the emperor had no clothes. The internal security model at NSA has long been "you're on the inside or you aren't", because actually implementing "need to know" would hamper speed of response, but also because it would require making much more credible assessments about which documents are sensitive. I'm sorry; a document drawn from an open, public source can't rationally be labeled secret in any responsible approach to security management. Yes, the fact that you are focusing on that document may provide information to an opposing force. The problem is that you end up labeling everything sensitive, with the result that it becomes impossible for your team to treat the notion of sensitivity appropriately. But you can't admit that, which drives the participants to an insider/outsider bunker mentality and an ever-growing pool of "cleared" people. You eventually end up in a mindset from which it appears justifiable to archive the metadata of your entire country without a warrant, because it has become necessary to destroy the constitution to save it.

But that being said, it has been my experience that there are two kinds of "good guy" security professionals:
  1. Those who actually care about making things fundamentally (exponentially) harder for attackers. As near as I can tell, these either burn out or they convert to "type 2" (below). They burn out because fundamental solutions of this sort don't lend themselves to gradual deployment, so no individual customer or reasonably sized set of customers has any hope of making progress even when a technical solution exists. The result is that nobody pays for security that works, so most people don't believe that workable security is possible. The customers come to see security as an ever-increasing tax with no discernible benefit. The people with foundational technical solutions come to feel marginalized. They either give up in frustration and burn out, or they somehow acclimate themselves to the view that "patch and pray" is monetizable and better than nothing.
  2. Those who promulgate the "patch and pray" model of security. These are the folks who sell antivirus tools, packet inspection tools, firewalls, and the like. It's not that they don't care for fundamental solutions - some do, some don't. It's that they've come to recognize and accept that the customer's human nature largely precludes deploying those solutions. And however much I may hate the fact that the "patch and pray" approach extends the life of fundamentally flawed platforms, it has to be said that the customers are making the right economic decisions in the short term. As a customer, I can either buy your patch with low, known risk to my operations and some temporary benefit (however small), or I can buy a deep fix whose technical effectiveness is rarely easy to predict and whose deployment is expensive, highly disruptive, and places my business at significant risk.
The hell of it is, the customers aren't wrong in their assessment. Worse: the kinds of security standards (TCSEC, Common Criteria) that have been promulgated in the past don't offer a particularly useful framework for a solution, so nobody really knows what a "gold standard" should look like. From this perspective, it's pretty easy to see that the NSA has acted just like any other customer might act in failing utterly to deal with insider threat. Which is tragically funny, because the NSA had the mandate to develop effective secure computing standards for 40 years, and did almost everything imaginable to ensure that no success was possible.

Meanwhile, for all the other customers, the "one of the good guys" agency that promulgated key elements of our cryptographic infrastructure is now revealed as not such a good guy after all. How does the poor customer decide who to trust in the future?

The answer, for better or worse, lies in open processes, open source code, and open validation. Solutions in which a customer (or a set of customers) can pay a second party who works for them to validate vendor assertions. Systems in which the validation of those assertions is wholly or in substantial part automated. Systems in which, by construction, the loud brayings of vested interests are unable to drown out the truth in the way they managed to do with cigarette smoke, asbestos, and global warming.
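
As a minimal sketch of what automated validation could look like - assuming the vendor publishes its source, a reproducible build is possible, and with purely hypothetical file names - the second party working for the customers might do little more than this:

    # Verify a vendor assertion mechanically: the binary the vendor ships
    # should match the binary a second party rebuilds from the open source.
    # File names are hypothetical placeholders.
    import hashlib

    def sha256_of(path: str) -> str:
        h = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(65536), b""):
                h.update(chunk)
        return h.hexdigest()

    vendor = sha256_of("vendor-firmware.bin")     # what the vendor ships
    rebuilt = sha256_of("independent-build.bin")  # what your auditor builds

    if vendor == rebuilt:
        print("assertion verified: shipped binary matches the published source")
    else:
        print("MISMATCH: shipped binary is not what the source claims")

The check itself is trivial; the hard parts are reproducible builds and an auditor who answers to the customer rather than the vendor. But nothing in it requires taking the vendor's word for anything, which is the point.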

The really unfortunate part of this is that it isn't enough to create and deploy a defensible technical framework at great expense and development risk. You also have to have a strategy to get the message heard while you fight a patent system that stands squarely in the way of technical innovation by non-incumbents.

So the NSA does nothing effective about the insider threat and the good guys continue to burn out. Nothing to see here. Move along.

Tuesday, October 1, 2013

The Cost and Responsibility for Snowden's Breaches

The press has lately been recirculating stories about the dollar damages of the Snowden disclosures. The repudiation of key cryptography standards - the ones that underlie our electronic currency exchanges and clearinghouses, and are present in an overwhelming number of products - may in the end cause billions of dollars in damage. Some of the press would have us believe that all of this is Snowden's fault. Better, some feel, to focus attention on the messenger and protect the perpetrator. Or even if not better, easier. It sells more papers to focus on a "David vs. Goliath" story than to examine whether Goliath was actually a Philistine.

In compromising these cryptography standards, the NSA's alleged goal was to read the electronic communications of terrorists, arms dealers, and other savory characters. In a world of open cryptography standards, the only way to do that was to compromise everybody. That includes ordinary citizens, businesses, governments (ours and others), armed forces command and control, domestic and global financial systems, and so on. This goes beyond privacy. Cryptography sits under all of our most essential electronic communications. Focusing on Snowden has people asking "How safe are my secrets from the NSA?" when a more pertinent question might be "Is my bank still safe from the Eastern Bloc mafia and the terrorist of the month?" Banks for the most part don't operate by storing dollar bills; they operate electronically. Then there is the power delivery infrastructure, or... the list goes on. That is what NSA compromised. And when you understand that, it becomes clear that the damage to us was far worse than any cost to the terrorists. In fact, the damage is proportional to your dependence on electronic infrastructure.

That's bad. Because it means that people inside our government, at the direction of government officials, sworn to protect and defend the constitution and the country, actively conspired to undermine every segment of the United States along with our key allies. While the run-of-the-mill staff may not have understood this, the more senior people at NSA knew what they were doing. They were certainly told by people on the outside often enough. Frankly, I think some of them should hang. And I mean that literally. These decisions by NSA weren't made by extremist Muslims. They were made by people from Harvard, Yale, and Princeton (and elsewhere) right here in America.

But there is something worse. In a certain sense, the NSA's primary mission is the discovery of secrets. Being in the secret breaking business, one of the things they know very well is that the best way to break a secret is to get someone to tell you what it is. And there is always someone who will tell you, either out of conviction or out of fear of compromise. There was never a question whether the fact that NSA compromised every first world and second world country would leak. The only questions were who would leak it and how soon. It happened to be Snowden, but if not for Snowden it would have been somebody else.

So setting aside the technical damage, there is the fact that the U.S. Government is now known - and more importantly, believed - to have compromised ourselves and our allies. We need to ask what the consequences are of that. Here are some questions that suggest themselves:
  1. Cryptography is clearly too important to entrust to the government. Who can we trust?
  2. Fragmentation seems likely. Does that help or hinder us?
  3. Do the issues differ for communications cryptography vs. long-term storage cryptography? Given that communications are recorded and stored forever, I suspect not.
  4. Can our allies ever again trust an American-originated crypto system? Software system? Can we trust one from them?
  5. Can our allies ever again afford to trust an American manufacturer of communications equipment, given that every one of the major players seems to have gotten in bed with NSA when pressured to do so by the U.S. Government?
  6. What other compromised technologies have been promulgated through government-influenced standards and/or back room strong arm tactics?
One thing seems clear: we must now choose between the credibility of American technology businesses and the continuation of export controls on cryptography and computer security technology. The controls are ineffective for their alleged purpose; there are too many ways to circumvent them. The main use of these laws has been to allow government pressure to be brought to bear on vendors who won't "play ball" with U.S. Government objectives. As long as the big players in the U.S. computing and networking industries can be backdoored by their government (take that either way), only a fool would buy from them. If the goal is to destroy the American technology industry, this strategy is even better than software patents. As long as those laws remain on the books, the American tech sector has a credibility problem.

A second thing seems clear: we need to move to openly developed standards for critical systems, not just open standards. And not just openly developed standards, but standards whose "theory of operation" is explained and critically examined by the public. No more unexplained magic tables of numbers. We need fully open public review, and public reference implementations as part of the standardization process.
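
There is a well-known example of what an explained table looks like: SHA-256's round constants are simply the first 32 bits of the fractional parts of the cube roots of the first 64 primes, so anyone can rederive them. A short sketch checking the first four against the values published in FIPS 180-4:

    # SHA-256's round constants are "nothing up my sleeve" numbers; rederive
    # the first four and compare against the published FIPS 180-4 values.
    from decimal import Decimal, getcontext

    getcontext().prec = 50  # ample precision for 32 fractional bits

    def first_primes(n):
        primes, c = [], 2
        while len(primes) < n:
            if all(c % p for p in primes):
                primes.append(c)
            c += 1
        return primes

    def round_constant(p):
        root = Decimal(p) ** (Decimal(1) / Decimal(3))  # cube root of the prime
        frac = root - int(root)                         # keep the fractional part
        return int(frac * (1 << 32))                    # first 32 bits

    published = [0x428a2f98, 0x71374491, 0xb5c0fbcf, 0xe9b5dba5]
    derived = [round_constant(p) for p in first_primes(4)]
    assert derived == published
    print("constants match their public derivation:", [hex(k) for k in derived])

The infamous Dual_EC_DRBG points came with no derivation of this kind, which is precisely the difference between a standard the public can check and one it must take on faith.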

A third thing seems clear: fixing the cryptography doesn't solve the problem. Even with back doors, the best place to break crypto is at the insecure end points. We need to develop information management methods (e.g. "zero knowledge" methods, but also others) and software architectures that let us limit the scope of damage when it occurs. The operating systems - and consequently the applications - that we are using today simply weren't designed for this. Fortunately, the hardware environment has converged enough that we can do a lot better than we have in the past. There will never be perfect security, but we can largely eliminate the exponential advantage that is currently enjoyed by the attacker.
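
To make the "limit the scope of damage" idea concrete, here is a toy illustration - not a proposal, just a familiar pattern, using the third-party Python cryptography package: give each record its own key, so that one stolen key exposes one record instead of the archive.

    # Toy damage-limitation pattern: one key per record rather than one
    # master key for everything. Requires: pip install cryptography
    from cryptography.fernet import Fernet

    records = [b"record one", b"record two", b"record three"]

    keys = [Fernet.generate_key() for _ in records]       # per-record keys
    tokens = [Fernet(k).encrypt(r) for k, r in zip(keys, records)]

    # An attacker who steals keys[0] reads exactly one record; the other
    # ciphertexts remain opaque.
    print(Fernet(keys[0]).decrypt(tokens[0]))

A real design would compartmentalize where those keys live and who can wield them, but the architectural point stands: a smaller blast radius per compromised secret.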

Thursday, March 24, 2011

Why I Use T-Mobile

Lots of people have piped up about abstract issues in the AT&T acquisition of T-Mobile. I think it's a horrible idea, and my reasons are pretty concrete.

Several years ago, I became a Cingular (now AT&T Wireless) customer for 10 days. I wasn't happy with their service, and I went back to T-Mobile. I cancelled my $20/month unlimited data plan within their 14-day cancellation period, and the monthly data fees should have been pro-rated. Instead, I got billed for over $3,700 of off-contract data usage. I had done a handset exchange on day two, and the salesperson didn't update their database properly to show that the new handset was now on the contract.

I spoke to a sequence of people in Cingular customer service, and every single one agreed that the charge was incorrect. In spite of this it took nine months, a long-standing ding on my credit report, repeated contact with their executive complaint group, and ultimately the threat of a lawsuit before they got the matter fully resolved, even though it was never in question that the error was theirs.

I'll leave aside whether their off-contract usage was sensibly priced, and whether such a pricing structure wasn't inherently usurious at a time when smart handsets were new and often offered no way to fully disable data usage, even for customers who didn't need it. AT&T, Inc.: We Scare Because We Care.

Today I'm once again a very reluctant AT&T customer for my iPad, but all of our handsets remain on T-Mobile. Setting aside my minor billing mishap, the reasons are pretty simple:
  • T-Mobile customer support answers the phone at hours when I can actually call them. AT&T doesn't.
  • AT&T's customer service has improved since my last run-in with them.
  • In my experience as a user, AT&T doesn't reliably honor their roaming agreements. I've only found this to be true where T-Mobile service was dicey and AT&T offered four bars. Thankfully, it's only a problem at Disney World and my parents' house, and only for the last decade or so.
  • T-Mobile offers handset and device options that AT&T is slow to match. No current-generation Android tablets seem to be contemplated over at AT&T. It reflects a different attitude about their respective customers.
  • T-Mobile's international roaming in Europe is priced very reasonably. That's useful when I travel.
The last point also explains why I'm not a Verizon customer - their phones don't work in Europe at all.

As a reasonably happy T-Mobile customer, my concern with the proposed merger is that as they consolidate operations between the two organizations, AT&T will retain the wrong customer service group, the wrong management team, and the wrong product management team. Other than those small inconveniences, the merger sounds like a fine idea.

Where's Lily Tomlin when we need her? One Ringy-Dingy...

Tuesday, March 22, 2011

Tethering and the Cost of Wireless

I'm a developer, which means that I look at a lot of options that aren't necessarily appropriate for consumers. I'm also an entrepreneur, which means that I care about the cost of essential services, including data.

In the last few weeks (March 2011), I found myself looking at the current and upcoming Android tablet devices. Android tablets aren't quite ready for prime time unless you are willing to install a few add-ons and put up with some teething pains. I'll take that up in another blog entry. The surprise is that certain features of Android Gingerbread (phone) and Honeycomb (tablet) are game changing in ways that haven't been examined in mainstream reviews.

Most users who buy a current-generation tablet device already own a cell phone. I currently own an iPad (Apple/AT&T), a Xoom tablet (Android/Verizon), and a Samsung Galaxy S phone (Samsung/T-Mobile, the original Vibrant handset). The iPad is an AT&T device, and I purchased it in spite of bitter negative experiences with AT&T. The Vibrant is with T-Mobile, my long-term cellular carrier, because I travel internationally. The Xoom is with Verizon. In my opinion, David Pogue (NYT) and Walt Mossberg (WSJ) overlooked significant issues in their comparisons of the iPad vs. the Android Tablets.

If you compare the one-year cost (including data plans) of these devices, you will find that the costs are identical. The Android tablets are unquestionably rougher, and they need a few cheap applications installed to achieve parity, but once you do this a direct comparison comes out closer than the Mossberg/Pogue reviews suggest. If you need a device that works right now, get the iPad. Otherwise, remember that the reviews compare a pre-release Xoom to a year-old iPad. But the Android devices offer WiFi tethering, and that changes the game. Depending on how immediate your needs may be, you may also want to consider that Android handset numbers overtook iPhone handsets in less than a year.

From a cost perspective, the important question is: how much data do you need to pay for over the first year of operation? Also: which device should you use as your source of connectivity? The Gingerbread (Android 2.3, phone) and Honeycomb (Android 3.0, tablet) devices all have the ability to serve as access points for other devices.

I am a heavy data user. I have measured my usage since the day the iPad was released on April 3, 2010 (today is March 22, 2011). Since the day I purchased the device, I have used a grand total of 5GB of download capacity. Some caveats:
  • I don't download movies when I am on the road. Movies eat capacity. I transfer movies at home, in advance of my trips.
  • My iPad is first-generation, which means that it isn't capable of video calls. Video calls require less capacity than movies, and I don't (yet) have data on the impact of this.
Subject to these two caveats, my total usage over nearly a year is 5GB. On average, that is much less than 1GB per month - closer to 0.4GB. WiFi at home covers the rest. Your usage may be different, and it will vary from month to month, so it is probably worth your effort to measure.

In my case, the data usage numbers mean that if I pay for 1GB (or more) on each of my devices, I'm paying for redundant service. If one device could serve the others, I'd be much better off paying for cellular access on just one. On current-generation Android devices this capability is standard.

Today, I am paying for three data services (a back-of-the-envelope comparison follows the list):
  • The AT&T "unlimited" plan at $35/month. This is no longer available.
  • The T-Mobile "unlimited" data plan at ~$20/month. On my phone this delivers 7Mbit/s. On current handsets it delivers 21Mbit/s.
  • The Verizon 3G plan at $35 per month for 5GB/month. I haven't calibrated the delivered speed myself.
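To put numbers on that redundancy, here is a sketch in Python using the monthly prices quoted above. It assumes the phone's unlimited plan can tether the other devices at no extra charge, and it ignores device subsidies and overage charges:

    # Annual cost of three separate data plans vs. tethering everything
    # through the phone. Prices are the monthly figures quoted in this post.
    plans = {
        "AT&T iPad (unlimited, grandfathered)": 35.00,
        "T-Mobile phone (unlimited)": 20.00,
        "Verizon Xoom (5GB)": 35.00,
    }

    separate_yearly = 12 * sum(plans.values())                  # one plan per device
    tethered_yearly = 12 * plans["T-Mobile phone (unlimited)"]  # one shared plan

    print(f"three separate plans: ${separate_yearly:,.0f}/year")  # $1,080
    print(f"tether through phone: ${tethered_yearly:,.0f}/year")  # $240
    print(f"difference:           ${separate_yearly - tethered_yearly:,.0f}/year")

Roughly $840 a year is real money, and that's before you notice that the two tablet plans mostly duplicate coverage you already carry in your pocket.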
I have tested both the iPad and the Xoom using the T-Mobile phone as an access point, with two results:
  • There is no noticeable loss of speed when these devices operate through my T-Mobile phone.
  • There is no noticeable increase in latency when these devices operate through my T-Mobile phone. In English: video calls work fine.
Given this, there are only a few reasons to maintain multiple cellular data plans:
  • You own an Apple or Microsoft device, neither of which provides tethering functionality.
  • You want the ability (as I do) to test responsiveness on different networks.
  • You are insensitive to usage-driven cost metrics.
  • You find the complexity cost of enabling the access point functionality higher than the cost of paying for multiple plans. For many users this is a legitimate concern!
There is also a more subtle consideration. Tethering burns power, which raises the question: where can you recharge? Current tablet chargers require a 120V outlet, which means a tablet cannot be recharged in your car. Most cell phones today can be recharged from USB power, and there is a wide range of car adapters for cell phones. Unless your car provides 120V AC outlets, you can't recharge a tablet.

So if you are interested in consolidating your data plans, you need to consider the problem of power. When using your tablet device in your car, it's relatively easy to plug your cell phone into a USB adapter and use it as an access point. In a hotel, or at home, you can reasonably expect to have direct access to both power and WiFi. But the unlimited data plan on your cell phone is something that you probably pay for already, while the limited data plan on your tablet device is - at least by comparison - expensive. The currently advantageous pricing and charging point is to tether your tablet through your cell phone.

It seems evident that we are looking at a pricing regime that cannot last. Third parties will make on-demand activation of access points possible, and competitive forces will make it difficult for cellular carriers to intercede. Either the Google branding effort (and consequently tethering) will fail, or the cost of service for multiple devices will drop. Which it will be seems up for grabs.

What does seem clear is that an investment in tethering as a baseline capability - and consequently an investment in Android handsets and tablets - is likely to reduce the de practico cost of data services. The more people who buy Android devices (of any flavor) today, the more downward pressure will exist on data bandwidth pricing.

Speaking for myself, I find that the battery capacity of my Vibrant, coupled with a USB adapter in my car, is more than adequate to serve my mobile data demands. Your mileage may vary, but if nothing else, try the experiment.

Edit (3/22): Peter Sahlstrom points out that tethering with a USB cable has been possible for some time. He is right, of course. It has also been available over bluetooth, though at fairly slow speeds. Both are fairly awkward solutions, and in most cases they violate your wireless agreement. The difference here is that the mechanism is both "official" and convenient. I can turn on WiFi tethering, stuff the phone back on my belt, and then use the other devices conveniently and unobtrusively, and without any cables getting in the way.