Hack and /

Simple Server Hardening, Part II

Kyle Rankin

Issue #271, November 2016

Expand past specific hardening steps into more general practices you can apply to any environment.

In my last article, I talked about the classic, complicated approach to server hardening you typically will find in many hardening documents and countered it with some specific, simple hardening steps that are much more effective and take only a few minutes. While discussing how best to harden SSH and sudo can be useful, in a real infrastructure you rely on any number of other services that you also want to harden.

So instead of choosing specific databases, application servers or web servers, in this follow-up article, I'm going to extend the topic of simple hardening past specific services and talk about more general approaches to hardening that you can apply to software you already have running as well as to your infrastructure as a whole. I start with some general security best practices, then talk about some things to avoid and finally finish by looking at some areas where sysadmin and security best practices combine.

General Best Practices

I won't dwell too long on general security best practices, because I've discussed them in other articles in the past, and you likely have heard of them before. That said, it's still worth mentioning a few, because these are the principles you should apply when you evaluate which practices to put in place and which to avoid. As someone who likes running a tight ship when it comes to systems administration, I find it convenient that security best practices often correspond with general best practices. In both cases, you generally can't cut corners, and shortcuts have a tendency to bite you later on.

The first security best practice worth covering is the principle of least privilege. This principle states that people should have the minimum level of privilege on a system they need to do their jobs, and no more. So for instance, if you don't need to grant all engineers in your organization sudo root privileges on your servers, you shouldn't. Instead, give them privileges to perform only the tasks they need. If some classes of engineers don't really need accounts at all, it's better not to create accounts for them. Some environments even get by without any developer accounts in production.
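
Least privilege is easy to express in a sudoers policy. As a minimal sketch (the group name and command here are hypothetical, not from the original article), rather than granting blanket root, you might allow a deploy group to restart a single service and nothing else:

    # /etc/sudoers.d/deploy -- always edit with visudo -f
    # Members of the hypothetical 'deploy' group may restart nginx, nothing more
    %deploy ALL=(root) /usr/sbin/service nginx restart

Engineers in that group can run exactly that one command through sudo; everything else is denied.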

The simpler a system, the easier it should be to secure. Complexity not only makes troubleshooting more difficult, it also makes security harder as you try to think through all of the different attack scenarios and ways to prevent them. Along with that simplicity, you should add layers of defense and not rely on any individual security measure. For instance, traditionally organizations would harden a network by adding a firewall in front of everything and calling it a day. These days, security experts advise that the internal network also should be treated as a threat. Attackers sometimes can bypass a security measure due to a security bug, so if you have layers of defense, they may get past one measure but then have to deal with another.
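
To make the "internal network as a threat" idea concrete, here is a minimal iptables sketch (the addresses and port are hypothetical) that restricts a database to its known application servers even when a perimeter firewall already exists:

    # Only the app servers may reach PostgreSQL, even from inside the network
    iptables -A INPUT -p tcp --dport 5432 -s 10.0.1.10 -j ACCEPT
    iptables -A INPUT -p tcp --dport 5432 -s 10.0.1.11 -j ACCEPT
    iptables -A INPUT -p tcp --dport 5432 -j DROP

If an attacker gets a foothold on some unrelated internal host, the perimeter firewall no longer matters, but this second layer still does.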

On the subject of security bugs, keeping the software on your systems patched for security bugs is now more important than ever. The time between a security bug's discovery and its active exploitation on the internet keeps shrinking, so if you don't already have a system in place that makes upgrading software throughout your environment quick and easy, you should invest in one.
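
On a single Debian or Ubuntu host, applying pending patches is a two-step affair; the orchestration tools I discuss later make these same commands practical across a whole fleet:

    # Refresh the package lists, then apply all pending updates
    apt-get update
    apt-get upgrade

Debian-based systems also ship an unattended-upgrades package that can apply security updates automatically, if that fits your environment's risk tolerance.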

Finally, you should encrypt as much as possible. Encrypt data at rest via encrypted filesystems. Encrypt network traffic between systems. And when possible, encrypt secrets as they are stored on disk.
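
For instance, encrypting a data volume at rest with LUKS takes only a few commands. A sketch, assuming a spare /dev/sdb1 partition and a /srv/data mountpoint (both hypothetical):

    # Format the partition as a LUKS container, open it, then build a filesystem
    cryptsetup luksFormat /dev/sdb1
    cryptsetup luksOpen /dev/sdb1 securedata
    mkfs.ext4 /dev/mapper/securedata
    mount /dev/mapper/securedata /srv/data

Anything written under /srv/data is now encrypted on disk, so a stolen or discarded drive by itself gives an attacker nothing.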

What to Avoid

Along with best practices, some security practices are best avoided. The first is security by obscurity. This means securing something merely by hiding it instead of hardening it. Obscurity should be avoided because it doesn't actually stop an attack; it just makes something harder to find and can give you a false sense of security.

A great example of this is the practice of moving your SSH port from the default (22) to something more obscure. Although moving SSH to port 60022 might lower the number of brute-force attempts in your logs, if you have a weak password, any halfway-decent attacker will find your SSH port with a port scan and service discovery and get in anyway.
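
It takes an attacker one command to defeat that obscurity. A full-range nmap scan with service detection (the hostname here is hypothetical) will report SSH no matter which port it listens on:

    # Scan all 65535 TCP ports and fingerprint whatever answers
    nmap -p- -sV server.example.com

The resulting report will show something like "60022/tcp open ssh" regardless of the nonstandard port.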

Port knocking (the practice of requiring a client to access a sequence of ports on the server before the firewall allows it through, like a combination lock made of ports) also falls into this category. The knock sequence isn't a secret: any router between the client and server can see exactly which ports the client hits, yet the practice gives you a false sense of security that your service is firewalled off from attack. If you are that concerned about SSH brute-force attacks, just follow my hardening steps from the first part of this series in the October 2016 issue of LJ to eliminate that attack completely.

Many of the other practices to avoid are essentially the opposites of the best practices. You should avoid complexity whenever possible, and avoid reliance on any individual security measure (they all end up having a security bug or failing eventually). In particular, when choosing network software, you should avoid software that doesn't support encrypted communication. I treat network software that doesn't support encryption in this day and age as a sign that it's still a bit too immature for production use.

Where Sysadmin and Security Best Practices Collide

Earlier, I mentioned that general best practices and security best practices often are the same, and this first tip is a great example. Centralized configuration management tools like Puppet, Chef and SaltStack are tools systems administrators have used for quite some time to make it easier to deploy configuration files and other changes throughout their infrastructure. It turns out that configuration management also makes hardening simpler, because you can define your gold standard, hardened configuration files and have them enforced throughout your environment with ease.

For instance, if you use configuration management to control your web server configuration, you can define the set of approved, secure, modern TLS cipher suites and deploy them to all of your servers. If down the road one of those ciphers proves to be insecure, you can make the change in one place and know that it will go out to all relevant servers in your environment.
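
A minimal Puppet sketch of that idea, assuming nginx with a conf.d include directory (the file path, cipher list and service name are illustrative, not a recommendation):

    # One approved cipher list, enforced everywhere this code is applied
    file { '/etc/nginx/conf.d/ssl_ciphers.conf':
      ensure  => file,
      content => "ssl_ciphers ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256;\nssl_prefer_server_ciphers on;\n",
      notify  => Service['nginx'],
    }

    service { 'nginx':
      ensure => running,
    }

Changing that one content line changes the cipher policy on every server the next time the agent runs.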

Another best practice is checking the configuration management code itself into a source control system like Git. This “configuration as code” approach has all kinds of benefits for systems administrators, including the ability to roll back mistakes and the benefit of peer review. From a security standpoint, it also provides a nice audit trail of all changes in your environment, especially if you make a point of changing your systems only through configuration management.
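
Once the configuration code lives in Git, that audit trail is a query away. For example, to see who changed your SSH configuration and when (the module path here is hypothetical):

    # Every change to the ssh module: author, date and full diff
    git log -p -- modules/ssh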

Along with configuration management, another DevOps practice that greatly aids security is orchestration, whether via MCollective, Ansible or a simple SSH for loop. Orchestration tools make it easy to launch commands from a central location that apply to particular hosts in your environment in a specific order, and they often are used to stage software updates. This ease of deploying software also provides a great security benefit, because staying up to date on security patches is so important.

With an orchestration tool like MCollective, for instance, if you find out there's been a new bug in ImageMagick, you can get a report of the ImageMagick versions in your environment with one command, and with another command, you can update all of them. Regular security updates become simpler, which means you are more likely to stay up to date on them, and more involved security updates (like kernel updates that require a reboot) at least become more manageable, and you can use the orchestration software to tell for sure when all systems are patched.
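
Even the humble SSH for loop version of this is only a few lines. A sketch for Debian-based hosts (the hostnames are hypothetical):

    # Report the installed ImageMagick version on every host...
    for h in web1 web2 app1; do
        ssh "$h" 'dpkg-query -W imagemagick'
    done

    # ...then push the updated package everywhere
    for h in web1 web2 app1; do
        ssh "$h" 'sudo apt-get update && sudo apt-get install -y --only-upgrade imagemagick'
    done

Dedicated tools like MCollective add parallelism, host discovery and reporting on top of this same basic pattern.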

Finally, set up some sort of centralized logging system. Although you can get really far with grep, it just doesn't scale when you have a large number of hosts generating a large number of logs. Centralized logging systems like Splunk and ELK (Elasticsearch, Logstash and Kibana) allow you to collect your logs in a central place, index them and then search through them quickly. This provides great benefits for general sysadmin troubleshooting, but from a security perspective, centralized logging means attackers who compromise a system have a much harder time covering their tracks: now they have to compromise the logging systems too. With all of your logs searchable in one place, you also get the ability to build queries that highlight (and, with many systems, graph) suspicious log events for review.
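
Whatever central system you pick, the first step is shipping logs off each host. With rsyslog, already present on most Linux distributions, a one-line drop-in forwards everything to a collector (the collector hostname is hypothetical):

    # /etc/rsyslog.d/90-forward.conf -- '@@' means forward over TCP
    *.* @@loghost.example.com:514

An attacker who compromises the host can still tamper with the local copy of the logs, but the forwarded copy is already out of reach.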

Kyle Rankin is a Sr. Systems Administrator in the San Francisco Bay Area and the author of a number of books, including The Official Ubuntu Server Book, Knoppix Hacks and Ubuntu Hacks. He is currently the president of the North Bay Linux Users' Group.