SD SECURE START
In this Issue:
>>ATTENTION, BLOGGERS!
Did you miss out on SD West? Curious about opinions overheard, techniques taught, and the tools and products demoed? Check out SD's blog and read the impressions of Christian Gross, Elliotte Rusty Harold, David Dossot, JP Morgenthal, Paul Tyma, Rick Wayne, Scott W. Ambler, the Braidy Tester and Alexandra Weber Morales on this year's classes, keynotes and news.
>>PRINCIPLE #9: TRUST RELUCTANTLY
It's easy to be shortsighted about our own code.
People commonly hide secrets in client code, assuming those secrets will be safe, but this is a foolish gamble: Talented end users can abuse the client and steal its secrets. Instead of making assumptions, you should be reluctant to trust. Servers should be designed not to trust clients, and vice versa, since both clients and servers get hacked. A reluctance to trust can help with compartmentalization, and can thus shore up your software's overall security.
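To make this concrete, here's a minimal Java sketch (Java 8 or later; every name in it is hypothetical, not drawn from any real system): the server re-derives the price from its own catalog and treats the client's figure as untrustworthy.

    // All names here are hypothetical. The point: recompute anything
    // security-relevant on the server instead of trusting the client.
    interface PriceCatalog {
        double lookupPrice(String itemId);
    }

    class Order {
        final String itemId;
        final int quantity;
        final double unitPrice;

        Order(String itemId, int quantity, double unitPrice) {
            this.itemId = itemId;
            this.quantity = quantity;
            this.unitPrice = unitPrice;
        }
    }

    class OrderHandler {
        private final PriceCatalog catalog;

        OrderHandler(PriceCatalog catalog) {
            this.catalog = catalog;
        }

        Order placeOrder(String itemId, int quantity, double clientPrice) {
            if (quantity <= 0) {
                throw new IllegalArgumentException("bad quantity");
            }
            // The client-supplied price is ignored; the server's own
            // catalog is the only authority on what an item costs.
            return new Order(itemId, quantity, catalog.lookupPrice(itemId));
        }
    }

    public class OrderDemo {
        public static void main(String[] args) {
            OrderHandler handler = new OrderHandler(itemId -> 9.99); // toy catalog
            Order order = handler.placeOrder("sku-42", 2, 0.01);     // client lies
            System.out.println(order.unitPrice);                     // 9.99, not 0.01
        }
    }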
While shrink-wrapped software can help keep designs and implementations simple, can any off-the-shelf component be trusted to be secure? Ironically, many developers, architects and managers in the security tool business don't know very much about writing secure code, and each year, vendors offer scores of products with gaping security holes. Many security products introduce more risk than they address. An excellent example is the flawed buffer overflow protection mechanism that Microsoft built into the Visual C++ .NET compiler in early 2002. Social engineering attacks are also easy to launch against unsuspecting customer support agents, who have a proclivity to trust since it makes their jobs easier.
"Following the herd" presents similar problems. Just because a security feature is an emerging standard doesn't mean it makes sense—and even if competitors don't follow good security practices, you should still institute them. For example, we often hear people deny the necessity of encrypting sensitive data because their competitors aren't encrypting their data. This argument holds up only as long as their customers aren't hacked.
All too often, security vendors promulgate suspect data to sell products. Several common warning signs can help to alert you to this practice. One of our favorites is the advertising of "million-bit keys" for a secret-key encryption algorithm. Mathematics tells us that 256 bits is a big enough symmetric key to protect any number of messages, assuming the algorithm using the key is of high quality. People advertising more know too little about the theory of cryptography to sell worthwhile security products. Before buying, do lots of research—and use a little common sense.
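The common sense here is largely arithmetic. A rough Java sketch of why 256 bits suffices (the rate of 10^18 guesses per second is a deliberately generous invention):

    import java.math.BigInteger;

    public class KeyspaceMath {
        public static void main(String[] args) {
            BigInteger keys = BigInteger.valueOf(2).pow(256); // 256-bit keyspace
            BigInteger perSecond = BigInteger.TEN.pow(18);    // generous guess rate
            BigInteger secondsPerYear = BigInteger.valueOf(31536000L);
            // Prints a number on the order of 10^51 years; brute force is
            // hopeless, so "million-bit keys" buy nothing.
            System.out.println(keys.divide(perSecond).divide(secondsPerYear)
                + " years to try every key");
        }
    }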
Sometimes, it's prudent not to trust even yourself. Everyone would like to believe their own code is perfect, but it's wise to get an outside opinion.
Remember, trust is transitive. Once you dole out some of it, you're implicitly extending it to anyone the trusted entity may trust. For this reason, trusted programs should never invoke untrusted programs. Be careful; trust can come back to bite you.
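When a trusted program must run another program anyway, extend as little trust as you can: screen the one untrusted value against a tight allowlist and pass a fixed argument vector, with no shell in between. A minimal Java sketch (the gzip path and the allowlist pattern are illustrative assumptions):

    import java.io.IOException;
    import java.util.regex.Pattern;

    public class SafeInvoke {
        // Tight allowlist: plain file names only, nothing resembling
        // options, paths or shell metacharacters.
        private static final Pattern SAFE_NAME =
            Pattern.compile("[A-Za-z0-9._-]{1,64}");

        public static Process compress(String fileName) throws IOException {
            if (!SAFE_NAME.matcher(fileName).matches()) {
                throw new IllegalArgumentException("refusing unsafe file name");
            }
            // Fixed argument vector and no shell: the untrusted value can't
            // smuggle in extra commands. The path and the "--" end-of-options
            // marker are assumptions about the local gzip.
            return new ProcessBuilder("/usr/bin/gzip", "--", fileName).start();
        }
    }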
—Gary McGraw and John Viega
>>PRINCIPLES TO BUILD BY
1. Secure the weakest link
2. Practice defense in depth
3. Fail securely
4. Allow least privilege
5. Compartmentalize
6. Keep it simple
7. Promote privacy
8. Know that hiding is hard
==> 9. Trust reluctantly
10. Use community resources
—GM and JV
>>WHAT GOES WRONG
Developers like to know what's going on in their programs, especially when a program does something unforeseen, like causing an error. For this reason, some coders put diagnostic information about errors directly into the error messages displayed to the user. Attackers can and will use this information to exploit your code. In fact, clever attackers combine trusted-input risks with blabbermouth error reporting to exploit software. Instead, put diagnostic information that could aid an exploit somewhere safe, such as an error log file.
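A minimal Java sketch of the pattern (the logger setup and the chatty failure message are stand-ins): full details go to the server-side log, and the user, along with any attacker, sees only a generic message.

    import java.util.logging.Level;
    import java.util.logging.Logger;

    public class SafeErrors {
        private static final Logger LOG =
            Logger.getLogger(SafeErrors.class.getName());

        public static String handleLogin(String user, String password) {
            try {
                authenticate(user, password);
                return "Welcome!";
            } catch (Exception e) {
                // Full diagnostics belong in the server-side log only.
                LOG.log(Level.WARNING, "login failed for " + user, e);
                // The user sees nothing an attacker could mine.
                return "Login failed. Please try again.";
            }
        }

        // Stand-in for a real check; imagine a chatty SQL failure here.
        private static void authenticate(String user, String password)
                throws Exception {
            throw new Exception("no such column: PASSWD");
        }
    }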
—GM and JV
These articles originally appeared in the August 2003 issue of Software Development. Read more at http://www.sdmagazine.com/documents/sdm0308g/