By Kurt Seifried, [email protected]
In a previous article I covered the basics of format string attacks. This time I've interviewed Chris Evans, whom I quoted in the last article. Without further ado, here is the interview. Read it - you will learn something. (I did.)
Q: It appears to me that these format string bugs have been present a very long time. A CERT advisory mentioned them being in WuFTPD since 1993. Do you think attackers have known about them and been using them? (This certainly would be a convenient explanation for many mysterious unsolved break-ins.)
A: This is a very interesting question. It
depends what you mean by "attackers." I doubt
this problem was widely known in the underground cracker
community. When that is the case, the exploit usually
leaks to the public. I can happily entertain that a few
highly skilled individuals knew about this issue, though.
Finally, we should be wary of attributing any unsolved
break-ins to format string bugs. Even if a compromised
site was running daemons containing format string bugs,
there is still the potential for undiscovered security
bugs that are not of a format string nature. As we know, string/buffer handling has traditionally been very buggy. The most obvious example of buggy buffer handling is the classic buffer overflow. The problem is that programmers tend to treat a string or buffer as a "chunk of memory." This saddles the programmer with tedious chores such as calculating lengths, sizing destination buffers, and checking bounds before every copy or concatenation.
All the calculations and checks involved are easy to get wrong, and code readability suffers: it can be difficult to analyze the logic of a program when it is buried in buffer parsing and management. The solution is to break the association between string/buffer and "chunk of memory." Instead of manipulating and copying raw memory all over the place, we can manipulate an opaque object which just happens to manage a buffer internally. Suddenly, instead of:

    {
        char buf[BUFSIZE];
        strcpy(buf, data);
        strcat(buf, more_data);
    }

we work through an opaque "safe_buf" API. The former case is easy to get wrong. The latter case introduces an API which is very hard to use in an insecure manner. That is the key thing. The safe_buf thing in fact boils down to little more than a C version of a classic C++ string or buffer class. You can happily push some string parsing into the safe_buf code too, e.g. string splitting, string substitutions, etc. It is my understanding that the highly secure mail server "qmail" employs this technique for secure string handling (although I haven't looked at the source to confirm). Have you seen a string/buffer handling bug in qmail lately ;-)
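To make the idea concrete, here is a minimal sketch of what such an opaque buffer object might look like. The names (safe_buf_new, safe_buf_append, and so on) and the layout are my own illustration, not qmail's or Chris's actual code; the point is only that the bounds logic lives in exactly one place.

```c
#include <assert.h>
#include <stdlib.h>
#include <string.h>

/* Opaque, self-sizing buffer: callers never touch raw memory. */
typedef struct {
    char *data;
    size_t len;   /* bytes in use, excluding the NUL terminator */
    size_t cap;   /* bytes allocated */
} safe_buf;

safe_buf *safe_buf_new(void) {
    safe_buf *b = malloc(sizeof *b);
    if (!b) return NULL;
    b->cap = 16;
    b->len = 0;
    b->data = malloc(b->cap);
    if (!b->data) { free(b); return NULL; }
    b->data[0] = '\0';
    return b;
}

/* Append a C string, growing as needed. The size calculation and
 * bounds check are written once, here, instead of at every call site. */
int safe_buf_append(safe_buf *b, const char *s) {
    size_t n = strlen(s);
    if (b->len + n + 1 > b->cap) {
        size_t newcap = b->cap;
        while (b->len + n + 1 > newcap) newcap *= 2;
        char *p = realloc(b->data, newcap);
        if (!p) return -1;
        b->data = p;
        b->cap = newcap;
    }
    memcpy(b->data + b->len, s, n + 1);
    b->len += n;
    return 0;
}

const char *safe_buf_str(const safe_buf *b) { return b->data; }

void safe_buf_free(safe_buf *b) {
    if (b) { free(b->data); free(b); }
}
```

The strcpy/strcat fragment above then becomes two calls to safe_buf_append, with no BUFSIZE to get wrong.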
Q: Like buffer overflows, these bugs come down to programmer mistakes. Do you think programmers will ever break themselves of these habits? Or are development tools, such as source code scanners like ITS4, the answer?
A: Broadly, I think highly secure code requires two things: a high quality of implementation, and a fault tolerant design.
Unfortunately, even the best and most security-aware programmers are only human, and make mistakes. High(er) quality implementations can be achieved with a number of measures, the most important being careful and repeated code auditing.
Once we have taken steps to maximize the quality of code, there may unfortunately still be problems left. The recent "rpc.statd" exploit illustrates this. rpc.statd is a well audited piece of code, but a format string bug slipped through because at the time of the audits, no-one was looking for them. This is where a fault tolerant design comes in. A fault tolerant design essentially minimizes the degree of privilege that bugs are able to give attackers. Obviously, one way this can be achieved under UNIX is by running as little code as possible as root. Other tools available to UNIX programmers include chroot()/jailing facilities and capabilities. Unfortunately, most code isn't fault tolerant. A lot of daemons and services just permanently keep high levels of privilege - not because they need it, but because it makes coding easier. It takes a non-trivial amount of effort to factor out the parts of a program which genuinely _need_ privilege. Take something like the OpenSSH server. It has a good quality of implementation, largely because its security critical nature has inspired many audits. However, if a security hole is ever found in OpenSSH, its severity is likely to be a full machine compromise. That really need not be the case. In summary, the way to ensure security despite the always fallible programmer is through auditing and fault tolerant design.
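The rpc.statd hole mentioned above was exactly the class of bug covered in the previous article: attacker-controlled data ended up as the format argument to a printf-style call. Here is a contrived but well-defined C illustration of the pattern (the function names are mine, and the explicit extra argument stands in for whatever happens to sit on the stack in a real attack):

```c
#include <assert.h>
#include <stdio.h>
#include <string.h>

/* BROKEN: user-controlled data is used as the format string, so
 * directives like %x in the input are interpreted, leaking memory
 * (or, with %n, writing to it). rpc.statd made essentially this
 * mistake with syslog(). */
void log_vulnerable(char *out, size_t n, const char *user, unsigned stack_word) {
    snprintf(out, n, user, stack_word);
}

/* FIXED: a constant format string; user data is only ever an argument. */
void log_fixed(char *out, size_t n, const char *user) {
    snprintf(out, n, "%s", user);
}
```

With the input "%08x", the vulnerable version prints the neighboring word as hex instead of the literal text; the fixed version logs the input verbatim.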
Q: I'm curious to know if you have seen SubDomain (from WireX). If so, what do you think of it?
A: I have heard of SubDomain. Since it comes from the same people, I expect it may be the same thing as CoDomain (http://immunix.org/products.html gives some blurb). I don't know all the gory details, but SubDomain is another application of the "principle of least privilege," giving much better system resilience in the presence of security holes you don't yet know about. The SubDomain documentation, at the above URL, mentions enhanced file access control a lot. Hopefully, not just files will be protected, but also network access and process access. To generalize, the entire kernel API could use some decent protection. I like to visualize this protection as a "syscall firewall." The existence of multiple projects with similar goals (SubDomain, Janus) shows a frustration that chroot() and, more recently, Linux kernel capabilities are a step in the right direction, but simply not fine grained enough to apply the principle of least privilege properly.
Q: Do you think that, like buffer overflows and format strings, there are more of these "fundamental" problems lying around, waiting to be found?
A: In the security world, we have to assume
that yes, there are more undiscovered fundamental
problems lying around. It would be foolish and
short-sighted not to. If we keep this threat in mind while writing code, it should help lead to fault tolerant solutions. The "remote-root" severity of the recent holes in WuFTPD, BSD-ftpd, rpc.statd and, probably, LPRng simply would not have occurred in code with a good design from a security point of view.
Q: You mention fault tolerance several times. What do you think of things such as Linux's kernel capabilities, the Openwall kernel patch, and StackGuard from WireX? Are they an answer, or do we need more?
A: I'll take these three things separately. Linux kernel capabilities - these are a useful tool for reducing the impact of a vulnerability in a program or daemon. A superb example is a network time daemon: why should it run with full privilege when all it needs is the single privilege "change the system clock time"? Capabilities solve this nicely. Unfortunately, for some other uses, they are not really fine grained enough. Two quick examples are
Openwall kernel patch - an interesting collection of patches. Some of these are geared towards making certain types of exploit harder or impossible, which is good - classic security hardening. By far the most discussed component of the patchset, though, is the "non-exec stack" patch. This is essentially a defense against the traditional stack buffer overflow. Unfortunately, stack overflows are still exploitable with this patch in place; it just requires a different exploit. People concentrate on why this is bad, i.e. you are just as exploitable with or without the patch. However, another important facet is that your average script kiddie will be foiled by it, and will move on to another target which is not as well protected (obfuscated, some might say). This will remain true for as long as the number of installations without this protection is relatively large. If the protection became commonplace, exploits would simply be released targeting systems with the non-exec stack patch in place.

StackGuard - another solution to stack based buffer overflows, but one which will make some stack overflow bugs unexploitable. This is a very useful layer of security to add. If your extensive code audit were to miss a stack overflow (and this happens), then you may still not be exploitable.

The direction I'd like to see things take is better OS support for applications to describe the precise set of privileges they need. If done properly, most applications would run with sufficiently low privilege that a compromise would be only a fraction as serious as one is today. Things like StackGuard have a place too - namely, targeting a common flaw and preventing it from being a problem. In an ideal world, we would just nail the problem in the first place. However, this is not an ideal world, and some instances of "problem" get missed during auditing.
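Both the non-exec stack patch and StackGuard target the same underlying flaw: an unchecked copy into a fixed-size stack buffer. At the source level, the fix is simply to check before copying; here is a minimal illustration of my own (not code from either project):

```c
#include <assert.h>
#include <string.h>

/* The bug class both patches mitigate:
 *
 *     char buf[16];
 *     strcpy(buf, input);   // overruns buf if input is long: stack smash
 *
 * The source-level fix: refuse (or truncate) instead of overrunning. */
int copy_checked(char *dst, size_t dstsz, const char *src) {
    size_t n = strlen(src);
    if (n >= dstsz) return -1;   /* would not fit with its NUL: reject */
    memcpy(dst, src, n + 1);
    return 0;
}
```

StackGuard and the non-exec stack exist precisely because checks like this one get forgotten; they catch (some of) the forgotten cases at run time rather than at the source.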
Q: I have seen other projects in progress. One that intrigued me was the ability to assign a port to a user/group (a la file ownership) so that, say, "dnssrvr" could use port 53 and not ever need to touch root. If you had a wish list of such items, what would the top three be?
A: Nice question :)
Last updated on 3/30/2002
Copyright Kurt Seifried 2002