
Sensible Security: The Schneier Model

Back in 2001 there was a certain incident on September 11 that led many people to go “OMG! We are doomed! We must increase security! Do whatever it takes!” And the NSA was happy to oblige. And on 7/7/05 an attack in London added to the frenzy. I think it is fair to say that these security agencies felt they were given a mandate to “do anything as long as it stops the attacks,” and thus the assault on privacy was taken to a whole new level. To be clear, security agencies are always pushing the limits; it is in their DNA. And politicians have learned that you never lose votes by insisting on stronger security and appearing “tough.”

But the reality is that security is never 100%, and the higher the level of security the greater the costs in terms of our privacy and liberty. And it is also the case that total insistence on liberty and privacy would cause your security to go down as well. So you really should not adopt any simple-minded approach to this problem. In general, as you add layers of security, each added layer gives you less benefit. Some simple security steps can give you a lot, but as you add more and more, the added benefit drops, and we call this the Law of Diminishing Returns. By the same token, each added measure extracts an ever-increasing cost in terms of the loss of liberty and privacy. Conceptually, you could draw a couple of curves, one rising (the costs) and the other falling (the benefits), and look for where the curves cross to determine the optimum level of security that balances the costs and benefits, but in practice it is not that simple. Measuring these costs and benefits is tricky, and there is no simple equation for either curve. Nonetheless, the balance does need to be struck.
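To make the idea concrete, here is a toy sketch of those two curves. To be clear, the curve shapes below are my own invented stand-ins, just to illustrate the diminishing-returns point; as I said, nobody has the real equations:

```python
# A toy model of the trade-off described above. The curve shapes are
# invented for illustration; they are not measurements of any real system.

def benefit(level):
    # Diminishing returns: each added layer of security helps less
    # than the one before it.
    return 100 * (1 - 0.7 ** level)

def cost(level):
    # Accelerating cost: each added layer extracts more in money,
    # convenience, and liberty than the one before it.
    return 2 * level ** 2

# The "optimum" is wherever net benefit (benefit minus cost) peaks.
levels = range(21)
best = max(levels, key=lambda n: benefit(n) - cost(n))
print(f"Net benefit peaks at {best} layers "
      f"(benefit {benefit(best):.1f}, cost {cost(best):.1f})")
```

The point of the sketch is the shape, not the numbers: past some point, every added layer costs you more than it gives back.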

In the wake of the 9/11 attacks, Bruce Schneier published a book called Beyond Fear: Thinking Sensibly About Security in an Uncertain World (2003). In this book he shows that hysteria is not a good approach to security, and that you need to ask yourself some questions to see what the cost vs. benefit calculation looks like for you. I am going to draw on his model to talk about security as we are discussing it in this series.

There is an old joke about what constitutes a secure computer. The answer is that it has to be locked in a vault, with no network connection, and no power connection, and even then you need to worry about who can access the vault. It is a joke, of course, because no one would ever do this. We use computers and the Internet because of the benefits they give us, and having a computer in a vault is just a waste of money. We accept a certain degree of risk because that is the only way to get the benefits we want.

Schneier’s Five-Step Process

For any security measure you are contemplating, you need to take a clear-eyed, rational look at the costs and benefits, and Schneier offers a Five-Step Process to accomplish this. It is a series of questions you need to ask in order to figure out whether a particular measure makes any sense:

  • What assets are you trying to protect? This is what defines the initial problem. Any proposed countermeasure needs to specifically protect these assets. You need to understand why these assets are valuable, how they work, and what attackers are going after and why.
  • What are the risks against these assets? To do this you need to analyze who threatens the assets, what their goals are, and how they might try to attack your assets to achieve those goals. You need to be on the lookout for how changes in technology might affect this analysis.
  • How well does the security solution mitigate the risks? To answer this, you need to understand how the countermeasure will protect the asset when it works properly, but also take into account what happens when it fails. No security measure is 100% foolproof, and every one will fail at some point in some circumstances. A fragile system fails badly; a resilient system handles failure well. A security measure that is slightly less effective under ideal conditions, but which handles failure much better, can be the optimum choice. And a measure that guards against one risk may increase vulnerability somewhere else. You also really need to watch out for the False Positive vs. False Negative trade-off (illustrated in the sketch after this list). It is a truism that any set of measures designed to reduce the number of false negatives will increase the number of false positives, and vice-versa.
  • What other risks does the security solution cause? Security countermeasures always interact with each other, and the rule is that all security countermeasures cause additional security risks.
  • What trade-offs does the security solution require? Every security countermeasure affects everything else in the system. It affects the functionality of the assets being protected, and it affects all related or connected systems. And every countermeasure has a cost, frequently (but not always) financial, but also in terms of usability, convenience, and freedom.
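That false positive vs. false negative trade-off is easy to see with a toy detector. The numbers below are invented for illustration: a screening score where attacks tend to score higher than normal activity, with a single threshold deciding what gets flagged:

```python
# Toy illustration of the false positive / false negative trade-off.
# The scores are invented: legitimate events cluster low, attacks
# cluster high, but the distributions overlap, so no threshold is perfect.
import random

random.seed(42)
legit = [random.gauss(40, 10) for _ in range(1000)]   # normal activity
attacks = [random.gauss(70, 10) for _ in range(50)]   # hostile activity

for threshold in (50, 60, 70):
    false_positives = sum(1 for s in legit if s >= threshold)
    false_negatives = sum(1 for s in attacks if s < threshold)
    print(f"threshold {threshold}: "
          f"{false_positives} legit events flagged (false positives), "
          f"{false_negatives} attacks missed (false negatives)")
```

Lower the threshold and you catch more attacks but drown in false alarms; raise it and the alarms quiet down but attacks slip through. You cannot improve one side without paying on the other.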

And going through this process once is not the end. You need to re-evaluate your choices as systems evolve, as technology changes, etc.

Example: Passwords

I have a cartoon on the wall of my cubicle that shows an alert box that says “Password must contain an uppercase letter, a punctuation mark, a 3-digit prime number, and a Sanskrit hieroglyph”. We’ve all encountered this, and it does get frustrating. This is a humorous take on something that is an accepted best practice. I recall a story about a fellow who worked at a company that insisted he regularly change his password, and would also remember his 8 previous passwords and not let him use any of them again. But he liked the one he had, so he spent a few minutes changing his password 9 times in a row, the last change being back to his favored password. The sketch below shows why his trick worked.
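The policy in the story amounts to a simple history check: the system remembers the last 8 passwords and rejects any reuse. Here is a minimal sketch of that logic (a simplification; real systems store salted hashes rather than plaintext passwords, but the logic is the same):

```python
# Simplified password-history check, as in the story above.
# The history only remembers the last 8 passwords, so 9 quick
# changes walk right around it.
from collections import deque

HISTORY_SIZE = 8

class PasswordHistory:
    def __init__(self):
        self.recent = deque(maxlen=HISTORY_SIZE)

    def change_password(self, new_password):
        if new_password in self.recent:
            raise ValueError("password was used recently")
        self.recent.append(new_password)

account = PasswordHistory()
account.change_password("favorite")       # his current password, now in history
for i in range(8):                        # changes 1-8: throwaway passwords...
    account.change_password(f"junk-{i}")  # ...that push "favorite" out of history
account.change_password("favorite")       # change 9: back to the favorite, accepted
print("Policy satisfied, security gained: none.")
```

Was he a threat to security, or was the corporate policy misguided? Let’s try Bruce’s model and see where we get.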

  • What assets is the company trying to protect? I think this has several possible answers. The company may want to prevent unauthorized access to corporate data on its network. Or the company may want to prevent unauthorized use of its resources, possibly with legal implications. And the company may want to prevent damage to its network. All of these are good reasons to try and control who has access to this asset, and to protect it. But knowing which of these is being targeted may matter when we get to trade-offs and effectiveness of the proposed countermeasures. For now, let’s assume the primary interest is in preventing unauthorized access to the data, such as credit card numbers on an e-commerce site.
  • What are the risks against these assets? Well, if we are talking about credit card numbers, the risk is that criminals could get their hands on these numbers. From the company’s standpoint, though, the risk is what can happen to them if this occurs. Will this cause them to incur financial penalties? Will the CEO be hauled in front of legislative committees? Will their insurance premiums rise as a result? This is the sort of thing companies really care about. And when you understand this, you begin to see why companies all adopt the same policies. When people talk about “Best Practices”, you should not assume that anyone has actually determined in a rational manner what the best practices should be. It only means that companies are “protected” in some sense when things go wrong. After all, they followed the industry “best practices”. The biggest failure of security is when companies or organizations just apply a standard set of rules instead of creating a process of security. I see this criticized constantly in my daily newsletter from the SANS Institute.
  • How well does the security solution mitigate the risks? This becomes a question of whether forcing people to change their passwords frequently is a significantly effective measure in preventing unauthorized access to computer networks. And here is where things really start to break down. It is very difficult to come up with many examples of cases where a password in use for a long time led to unauthorized access. That is simply not how these things work. We know that the majority of these cases derive from one of two problems: social engineering that gets people to give up their password, and malware that people manage to get on their computer one way or another. I suppose you could make an argument that forcing people to frequently change passwords might in rare cases actually do some good, but there is no way to say that this is in general an effective countermeasure against unauthorized access.
  • What other risks does the security solution cause? There are several possible risks that come out of this. First, since all security measures require a variety of resources (and people’s time and attention is one of those resources), emphasizing one security measure may take resources away from more effective measures that don’t get sufficient attention. But there are also risks from how people act in response to this policy. In the ideal world of the security department, each person with access would choose a long, complicated password each time, chosen for maximum entropy, and then memorize it but never write it down. Sadly for the security department, they have to deal with actual human beings, who do not do any of these things. Most people at the very least consider this an annoyance. Some may actively subvert the system, like the fellow in our story who changed his password 9 times in a row to get back to the one he liked. But even without this type of subversion, we know what people will do. If you let them, they will choose something that is easy to remember as their first attempt, and that means they will most likely choose a password that can easily be cracked in a dictionary attack (see the sketch after this list for just how cheap that attack is). If you instead insist that each password contain letters, numbers, upper and lower case, a Sanskrit hieroglyph, and two squirrel noises, they will write it down, probably on a yellow sticky note attached to their monitor. If the person in question is a top-level executive, it gets even worse, because they won’t put up with the BS ordinary worker bees have to tolerate.
  • What trade-offs does the security solution require? This policy has a major impact on usability and convenience, and all of this for a policy that, as we saw above, actually accomplishes very little. In the majority of organizations the IT department is viewed with a certain amount of hostility, and this is part of it. In addition, anyone on an IT Help Desk can tell you that they get a lot of calls from people who cannot log in because they forgot their password, which is a natural consequence of forcing people to keep changing it.
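And that dictionary attack really is as cheap as it sounds. Here is a minimal sketch of one; the stolen hash and the tiny word list are invented for illustration, but real attackers run lists of millions of leaked passwords at enormous speed:

```python
# Minimal dictionary attack, as mentioned in the list above.
# The "stolen" hash and tiny word list are invented for illustration;
# real attacks test millions of leaked passwords per second.
import hashlib

def md5_hex(password):
    return hashlib.md5(password.encode()).hexdigest()

# An unsalted hash of a "memorable" password, as an attacker might
# find it in a stolen credential dump.
stolen_hash = md5_hex("sunshine1")

wordlist = ["password", "letmein", "dragon", "sunshine", "qwerty"]

# Try each word, plus the common "append a digit" habit.
for word in wordlist:
    for suffix in [""] + [str(d) for d in range(10)]:
        guess = word + suffix
        if md5_hex(guess) == stolen_hash:
            print(f"Cracked: {guess!r}")
```

The memorable password falls in a handful of guesses, and no amount of forced rotation changes that.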

Bottom Line

So what does all of this mean in the final analysis? I think it means that you need to carefully consider which measures are actually worth taking. And this is, at least in part, a cost vs. benefit analysis. For instance, as I write this the Heartbleed vulnerability is in the news a good deal, and I got to hear Bruce Schneier discuss how people should react. And he did not say “OMG! Change all of your passwords right now!” He said you should assess each case. If it is the password you use to log in to your bank, that is probably something you want to change. But if it is for some social network you access once every two weeks, you needn’t bother. And that seems reasonable.

And as another example, although I have discussed how to encrypt e-mails and digitally sign them, that does not mean I open up GPG every time I send an e-mail. It is something of a pain in the posterior to do, and I use it judiciously. I don’t see the point in digitally signing every email when a lot of it is just stupid stuff anyway.
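For reference, clear-signing a message programmatically looks roughly like the sketch below, using the third-party python-gnupg wrapper. The key ID and passphrase here are placeholders, and this assumes you already have a keypair in your GnuPG keyring:

```python
# Rough sketch of clear-signing a message with python-gnupg
# (pip install python-gnupg). Key ID and passphrase are placeholders;
# a keypair is assumed to already exist in the default keyring.
import gnupg

gpg = gnupg.GPG()  # uses the default ~/.gnupg keyring

message = "This is the body of my e-mail."
signed = gpg.sign(message,
                  keyid="0xDEADBEEF",          # placeholder key ID
                  passphrase="my-passphrase")  # placeholder passphrase
if signed.data:
    print(signed.data.decode())  # the clear-signed message
else:
    print("Signing failed:", signed.status)
```

Even with a wrapper like this, it is an extra step on every message, which is exactly the usability cost the Five-Step Process tells you to weigh.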

Three Final Rules from Bruce Schneier

We will finish this discussion with Bruce’s final three rules from Beyond Fear:

  • Risk Demystification: You need to take the time to understand what the actual risk is, and understand just how effective any proposed security countermeasure would be. There will always be a trade-off. If the risk is low, and the countermeasure not particularly effective, why are you doing this? Saying “we must do everything in our power to prevent…” about a risk that is unlikely, with countermeasures that are not likely to work, is how you get to what Snowden revealed.
  • Secrecy Demystification: Secrecy is the enemy of security. Security can only happen when problems are discussed, not when discussions are forbidden. Secrecy will always break down at some point. This is the failure mode of Security by Obscurity. Most often, secrecy is used to cover up incompetence or malfeasance.
  • Agenda Demystification: People have agendas, and will often use security as an excuse for something that is not primarily a security measure. And emotions can lead people to make irrational trade-offs.

Listen to the audio version of this post on Hacker Public Radio!
