Paranoid Penguin

Linux Security Challenges 2010

Mick Bauer

Issue #189, January 2010

Security challenges and worries for 2010: we live in interesting times indeed!

In August 2005, I wrote a Paranoid Penguin column titled “The Future of Linux Security”, in which I described what I thought were the biggest challenges of Linux security in 2005 and the most promising new technologies for addressing them.

In that 2005 column, I suggested that virtualization might become a more important tool for isolating vulnerable applications and services than Mandatory Access Controls (MACs), such as SELinux and AppArmor. I also predicted that anomaly detection would become much more important than signature-matching as the underlying engine behind most antivirus (AV) and intrusion detection/prevention systems (IDS/IPS).

So far, neither of those predictions has come to pass. We're still stuck with predominantly signature-based AV and IDS/IPS technologies that are largely incapable of detecting “zero-day” malware too new for anyone to have created a corresponding signature, or polymorphic malware that alters itself from generation to generation.

Virtualization has been driven overwhelmingly by hardware resource management and other operational and economic concerns rather than by security. In fact, virtualization as most commonly deployed nowadays is arguably a bigger source of security issues than it is a security tool (for example, for isolating vulnerable applications or services from other parts of a given system).

Am I embarrassed about those predictions not panning out? Not as much as I am disappointed. I still believe that AV and IDS/IPS need to evolve past signature-matching, and I still think virtualization has the potential to be a bigger part of security solutions than it is of security problems.

This month, more than five years since my last such overview, I'm devoting a column to my thoughts on what constitute some of the biggest Linux and Internet security challenges for 2010 and to my ideas on how we might address those challenges. This is by no means a comprehensive survey (time and space didn't permit me even to touch on mobile computing or embedded Linux, for example), but I think you'll agree that the issues I do cover represent some of the most far-reaching security challenges that affect not only the Linux community in particular, but also the Internet community at large.

Assets and Attackers

Before I zero in on specific technical areas, a quick word about the things we're defending and the people who are attacking them is in order, because both have changed significantly since I started writing Paranoid Penguin. In the old days, we were concerned primarily with preserving network and system integrity against intruders who, we assumed, were most likely bored suburban teenagers or industrial spies.

Governments, of course, worried about other types of spies, but I'm talking about the civilian and corporate space (and generalizing heavily at that). The point is that the classic attack scenario involved someone trying to gain remote root on an Internet-facing system in order to deface your Web site, steal proprietary information or use that system as a platform for launching attacks on other systems, possibly including systems “deeper inside” your internal corporate network.

We still worry about that scenario, of course. But over the past decade, there has been an explosion in identity theft across a wide spectrum: simple e-mail-address harvesting for the purpose of spamming; stealing, trafficking in or illegally generating credit-card numbers to make fraudulent purchases; full-blown assumption of other people's names, social-security numbers (or their non-US equivalents), bank account numbers and so forth in order to open new credit accounts fraudulently; laundering money gained in other criminal activity; and so on.

Sometimes identity theft is achieved through the old-school, console-intensive attacks of yore against databases housing dense concentrations of such data. Much more commonly nowadays, it involves sophisticated malware that either infiltrates a given bank or merchant and works its way to its databases, or harvests data at the client level, possibly even by capturing individual users' keystrokes.

Because spam, fraud and identity theft in general are so lucrative (amounting to billions of dollars annually), it should be no surprise that organized crime is behind a lot, if not most, of it. I'm speaking not only of traditional crime organizations that also run prostitution, illegal drug and gambling operations, but also of completely new organizations focused solely on credit-card trafficking (“carding”) and other electronic crimes.

College students and teenagers still fit into the equation, but in many cases, they're working for scary people, for real money. The people writing the trojans, worms and viruses that do so much of the heavy lifting in these attacks are, in many cases, highly skilled programmers earning much more than the people who write anti-malware and firewall software!

This is our new security landscape. The situation is no more or less unwinnable than it was ten years ago, and sure enough, e-commerce and Internet traffic in general are still churning along more or less smoothly. But we need to pay attention to these trends for that to continue to be the case.

So, how do these trends in the asset and attacker equation affect the defense equation?

Web Application Security and SSL/TLS Weaknesses

If Internet security is a war, Web applications surely constitute the front line. Nothing has changed the way we interact with computers, and the places from which we interact with them, like the World Wide Web—that is, the use of Web browsers to access data and even entire networks that are mediated by Web servers. In fact, the term World Wide Web is all but obsolete. The Web is now so ubiquitous, it's become synonymous with the Internet and even to some extent with user interface.

Web browsers now do things that used to be done by entire operating systems. Whereas the primary function of Web browsers used to be to format and display data correctly (Web pages originally being, in real terms, brochures with text, images and links to other pages), for some time now, we've used our browsers to download and execute code transparently. This code can be as simple as a script that takes form data from us, such as an on-line order form, and transmits it back to the server. Or, it can be as complex as an entire remote-desktop application that lets us control a computer on the other side of the world.

Most of the same things an attacker might attempt to subvert in an operating system, therefore, are now worth attempting against a Web browser. In the world of OS security, we worry about viruses—executables that end users might be tricked into running that change system behavior or steal data on the system. In the world of browser security, we worry about hostile Web sites—hostile Web content that can change browser behavior or steal data the browser is processing or has stored on the system.

And, that's just on the client side of the Web application equation! On the server side, we worry not only about hostile Web sites, but also about flaws in our own Web applications that might allow attackers to gain unauthorized access to back-end systems and data, or to attack other users.
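To make that server-side worry concrete, here is a minimal, hypothetical sketch (in Python, with an invented users table) of the classic injection flaw I'm alluding to: the same database lookup written both unsafely and safely. It's an illustration of the idea, not a recipe for any particular Web framework.

import sqlite3  # stand-in for whatever back-end database the application uses

def find_user_unsafe(conn, username):
    # Vulnerable: the username is pasted into the SQL text, so input such as
    # "x' OR '1'='1" changes the query's meaning and can expose other users' rows.
    return conn.execute(
        "SELECT id, email FROM users WHERE name = '%s'" % username
    ).fetchall()

def find_user_safe(conn, username):
    # Parameterized: the driver treats the username strictly as data,
    # so hostile input cannot alter the structure of the query.
    return conn.execute(
        "SELECT id, email FROM users WHERE name = ?", (username,)
    ).fetchall()

The point of the comparison is that the flaw lives in our own application code, not in the Web server or the browser, which is exactly why attackers probe it so relentlessly.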

What about SSL/TLS? Doesn't that provide a reliable means of cryptographically signing and verifying active content (code), authenticating transactions and preventing eavesdropping? Obviously, yes. It does so well enough for most of us to shop on-line, do on-line banking and so forth, with a reasonable level of safety and confidence. However, as I reported in my November 2009 DEFCON column, there has been a marked increase lately in man-in-the-middle attacks against SSL/TLS-protected transactions.

Some of these attacks exploit the ways commercially signed digital certificates are issued, maintained and verified by major issuers, such as VeriSign. That is, they exploit weaknesses in commercial public key infrastructures. Others exploit the ways Web servers and browsers handle SSL/TLS functions and the ways they alert (or don't alert) end users of suspicious behavior.

The good news is that the actual cryptosystems on which SSL/TLS is built remain sound. Most of these problems stem from the way Web server and browser developers (and, to a lesser extent, Web application developers) implement them and from the way large Certificate Authorities manage certificates. The server/browser development and PKI communities have their work cut out for them: they must keep SSL/TLS mechanisms transparent enough for ordinary users to accept and use successfully, while closing these new, serious security gaps. Even getting those communities to acknowledge their respective responsibilities and roles in fixing these issues is a big challenge, and it's not at all clear that they have done so or will.
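To illustrate the client side of the problem, here's a minimal sketch, using Python's ssl module, of the strict certificate and host-name verification a well-behaved TLS client is supposed to perform (the host name is just a placeholder). Many of the attacks I'm describing work precisely because some client or server softens one of these checks, or fails to tell the user clearly when a check fails.

import socket
import ssl

HOST = "www.example.com"  # placeholder host name for illustration

# Load the system's trusted CA bundle; require and verify the server's certificate.
context = ssl.create_default_context()
context.verify_mode = ssl.CERT_REQUIRED
context.check_hostname = True  # reject certificates issued to some other name

with socket.create_connection((HOST, 443)) as raw_sock:
    # wrap_socket() performs the TLS handshake and verifies the server's
    # certificate chain against the trusted CAs; server_hostname enables
    # both SNI and host-name checking.
    with context.wrap_socket(raw_sock, server_hostname=HOST) as tls_sock:
        print(tls_sock.version())
        print(tls_sock.getpeercert()["subject"])

If any of those steps fails, the connection should be refused outright rather than quietly downgraded, which is the behavior the man-in-the-middle techniques I mentioned depend on.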

But, at least the suite of algorithms and other systems comprising TLS itself is sound. This is a solvable problem!

Cloud Computing

As Internet connectivity has gotten faster, cheaper and more ubiquitous, people have begun to question the need to rely on local computing power and storage when so much of one's daily computing experience depends on Internet connectivity anyhow. If your major applications are all Internet applications, why not run them over the Internet, on remote servers, rather than on your local CPU or your local IT infrastructure?

Why not subscribe to a framework in which external providers host enormous farms of servers and storage arrays on which anybody can host virtual servers running massively multiuser on-line applications? Heck, why not just use applications written and maintained by the provider? Should end users even care where and how these applications are being run or who wrote them in the first place?

This is the promise of cloud computing: not just the major systems in your data center, but the data center itself (from the floor to which the racks are bolted, up to the applications running on top of it all) can become someone else's problem to maintain, for much less than it would cost to maintain any of it yourself. All you need is a bunch of users with ordinary desktop systems (or Netbooks, for that matter) and Internet connectivity.

Maybe I've been a network engineer for too long, but I have a little trouble seeing the appeal of being completely dependent on network connectivity to do all my work. Even though I don't do much computing off-line nowadays, I certainly like to think I could. (I definitely would have written this article much faster without the distraction of an open browser window!)

My real problem with cloud computing, however, is the difficulty of protecting data that is not just flowing through, but being processed and stored by, applications owned and maintained by someone else, on hardware and bandwidth completely outside my control. Frankly, I'm amazed that in an era when identity theft is the fastest-growing type of computer crime, any organization would be in such a big hurry to put such dense concentrations of its critical data in the hands of strangers.

Do I think cloud computing is a boondoggle? Not at all, but my attitude is the same as with IT outsourcing in general. It probably makes sense for certain applications in certain organizations, but I think the whole concept is being grossly oversold, and I think people are overlooking substantial trade-offs and hidden costs, both operational and security-related.

Malware

Malware has been with us a long time, and some of the things that scare us now, like polymorphic code that alters itself to thwart signature-based antivirus methods, actually have been around a while. What's changed recently is the emergence of “targeted malware”: worms, trojans and viruses designed to attack specific parts of specific target organizations.

Targeted malware is probably the scariest new threat that we as security professionals and system/network administrators face. By definition, it's always “zero-day”: you can't hope that your antivirus software provider has signatures for code that not only has never been released into the wild, but that may not even function against anyone's network and systems but yours. Targeted malware is almost never written from scratch, however. In fact, it's frequently generated using sophisticated, slick “malware construction” software written by the aforementioned highly skilled, highly paid malware authors of the underworld.

But although you might think there's some potential for detecting common characteristics between hostile applications targeting different organizations but originating from the same development tools, these tools are in fact specifically designed to write code that evades detection. In fact, at a recent security conference, a forensics specialist whose presentation I attended commented that it's not uncommon for his team to fail to isolate fully the source of attacker activity on a compromised network beyond identifying infected systems. Much of the code he encounters nowadays is too deeply embedded into other applications, DLLs and even the kernel itself to be identified and isolated easily.

Equally scary is how it's propagated. You may think that firewalls, application proxies and other defenses on your network's perimeter should minimize the chance for worms to penetrate your internal systems in the first place. You may even be correct. But frequently, targeted malware is installed directly onto one or more internal systems at a target site by either a corrupted insider or a crook who's obtained a job at the target organization for the specific purpose of placing the malware.

It's already hard enough to ensure proper physical security, OS-level access controls and application-level authorization controls for systems that handle or store sensitive data. But to do so uniformly across all systems or local networks that merely interact with such systems, and may have been compromised by malware, is a much bigger problem.

Furthermore, even if the back end is well secured, what about targeted malware that harvests data from end users? Your customer service representatives who handle customer account information may be perfectly trustworthy, but what if their systems become infested with keystroke loggers that transmit customer information back to some criminal's servers, over an SSL-encrypted network stream that's nearly indistinguishable from ordinary Web surfing? It's easy to imagine scenarios in which data handled by your organization's end users might be harvested by bad guys, if they were able to achieve even a small foothold on even one system in your internal network.

Is the targeted malware threat unstoppable? To some extent, yes. In practical terms, it's a particular type of insider attack, and insider attacks can never be prevented completely. The good news is that we already know how to manage insider threats: background checks, system/application/employee monitoring, granular access controls at all levels, good physical security and so forth. The more broadly and consistently we apply these varied, layered controls, the less likely it is that any given targeted attack will succeed, and the more limited the damage it is likely to cause.

Like so much else in security, it's a game less about preventing attacks than about increasing the cost and effort required for an attack to succeed.

Virtualization

And, now we come to virtualization, which both on its own and in tandem with cloud computing is the focus of so much buzz and hype. Virtualization has unquestionably altered the way we think about computers. By making the notion of “computing hardware” almost completely abstract relative to operating systems and applications, virtualization can free us from certain types of physical and even geographical limitations, or more accurately, it can shift those limitations to a different part of the resource planning process.

Perhaps overly idealistically, I used to think virtualization could free us from the “winner take all” phenomenon in operating system security. On any system under attack, attackers frequently need to find only one vulnerability in one application to compromise the entire system completely. But what if the most vulnerable application on a given server is the only network listener on that system?

Suppose I need to run an SMTP relay using Sendmail, and I normally also would run a network time protocol (NTP) dæmon, the Secure Shell dæmon (sshd) and RealVNC on that same system. That's four different attack vectors on one system. But what if I run Sendmail in its own virtual machine on that host, allowing access to it from the outside world, and have the three other dæmons, running on the underlying host, accept connections only from the IP address of some internal access point?

Sure, I could achieve something similar without virtualization by using TCP Wrappers or a local iptables policy. But if all four dæmons run on the same system and attackers gain only a partial foothold via Sendmail, perhaps resulting in nonroot remote access, they may be able to attack one or more of the three other dæmons in an attempt to escalate their privileges to root. If, instead, those dæmons are running on the Sendmail virtual machine's host system and are configured to reject connection attempts from the Sendmail virtual machine, that second attack will fail.
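You can't express an iptables policy in a few lines of Python, but this toy sketch (with a made-up port and a hypothetical internal address) shows the same TCP Wrappers-style idea on the host: a service that drops any connection not coming from the designated internal access point, which would include connections originating from the Sendmail virtual machine.

import socketserver

# Hypothetical address of the internal access point; the Sendmail
# virtual machine's address is deliberately not on this list.
ALLOWED_PEERS = {"10.0.5.10"}

class RestrictedHandler(socketserver.StreamRequestHandler):
    def handle(self):
        peer_ip = self.client_address[0]
        if peer_ip not in ALLOWED_PEERS:
            # Same effect as a TCP Wrappers deny or an iptables DROP:
            # the connection is closed without being serviced.
            return
        self.wfile.write(b"hello from a host-only service\n")

if __name__ == "__main__":
    # Port 9999 is for illustration only.
    with socketserver.TCPServer(("0.0.0.0", 9999), RestrictedHandler) as server:
        server.serve_forever()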

Unless, that is, our assumptions about virtualization don't hold. This brings me to the dark underbelly of virtualization, which in our headlong rush to maximize hardware resource utilization, I fear may not be under close enough inspection.

We assume that one virtual machine can't see or gain access to the resources (disk space, memory and so on) used by other virtual machines running on the same host. Virtual machines are supposed to be isolated by, among other things, a hypervisor or monitor program. We also assume that it isn't feasible or possible for any userspace application running on a guest virtual machine to speak directly to any process or resource on the underlying host.

If you write hypervisor code, there are strong incentives for you to maintain these assumptions and write a secure hypervisor. Pretty much anything that can subvert hypervisor security will have a negative impact on system performance, availability and overall reliability. For example, a bug that allows one virtual machine to access another's memory, while potentially calamitous if discovered by an attacker, is at least as likely to result in one virtual machine's impairing another's performance by unintentionally overwriting its memory.

But recent history has shown that both theoretical and demonstrable attacks are possible against popular system virtualization environments, such as VMware (see the link to Michael Kemp's presentation in Resources).

Does this mean we shouldn't use virtualization? Of course not. This is a powerful and useful technology. But it's also very new, at least in many contexts in which we're deploying it nowadays, and until hypervisor security is better understood and more mature, I do think we should be careful about which virtual machines we run on the same host. It seems prudent to me to colocate only systems handling similar data and representing similar levels of risk (for example, Internet-reachability) on the same host system.

In other words, we probably shouldn't rely on hypervisors to protect virtual machines from each other any more than we have to.

Conclusion

The explosive proliferation of new types of Web applications, cloud computing services and virtualization solutions is exposing our data, systems and networks in ever-bigger, ever-further-reaching ways. Targeted malware, man-in-the-middle techniques and similar attacks against SSL/TLS, the involvement of organized identity-theft rings, and other nasty trends on the attack side of the equation only make it harder for those of us concerned with security to protect these emerging applications, services and infrastructures.

But what is a crisis, if not a job for experts? Interesting times call for creative, technology-obsessed types like Linux Journal's readers (and columnists), and I have no doubt that we, the geek community, are amply up to the challenge. So, here's wishing you a safe, productive and interesting (in the good sense, not just the scary sense) 2010!

Mick Bauer (darth.elmo@wiremonkeys.org) is Network Security Architect for one of the US's largest banks. He is the author of the O'Reilly book Linux Server Security, 2nd edition (formerly called Building Secure Servers With Linux), an occasional presenter at information security conferences and composer of the “Network Engineering Polka”.