Point/Counterpoint

/opt vs. /usr/local

Kyle Rankin

Bill Childers

Issue #191, March 2010

Should a sysadmin put additional software in /usr/local or /opt? Bill and Kyle argue over the one true location for third-party software.

This month, Bill and I take on one of the classic holy wars between Linux geeks: /opt vs. /usr/local. If you look at the current Linux Filesystem Hierarchy Standard, you will see that both /opt and /usr/local are represented there. If you read further, you will see that they both seem to have a similar purpose: a place to put software that's not part of the standard system. Even though both directories are designed for similar purposes, they each go about that task in different ways. Well, there's nothing quite like two very similar things with subtle nit-picky differences to spark a debate between geeks.

Bill: So what's wrong with /opt?

Kyle: There was nothing wrong with /opt, back when it was set up. You know, back in the days when tar was your package manager and dinosaurs roamed the earth.

Bill: “Back when it was set up.” Oh, here we go again, another “Bill's older than dirt” comment.

Kyle: I'm not saying you're older than dirt, I'm just saying I've seen your yearbook pictures with dirt. Anyway, back then, it made sense for someone to package everything up in one big directory and dump it under /opt. But some time in the last two decades, most Linux distributions started using more sophisticated package managers.

Bill: Ahem. I rather like using /opt. It's nice having a distinct delineation as to what's installed by the distribution and what's installed by the admins after the fact.

Kyle: Wow, I totally agree with half of that statement.

Bill: Hey, there's a first. And it's in print too. Whoohoo!

Kyle: It is nice having a distinct delineation of what's part of the distribution and what the admin installs—for me, it's just in /usr/local.

Bill: This is the first time I've heard you, of all people, advocate more typing.

Kyle: Your system packages can dump their files in /usr, and any third-party packages can put things in an identical directory structure under /usr/local. Because these days we aren't using tar but programs like rpm and dpkg (and their yum and apt front ends) to install packages, we have a much more sophisticated way to see what is installed and where it's installed, beyond just the ls command. Even then, using ls, I can see that a particular binary is in /usr/local/bin and, therefore, must be part of a third-party package.
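
For example, on a dpkg- or rpm-based system, a quick ownership check might look something like the following sketch (the /usr/local binary here is purely hypothetical):

    # Ask the package manager which package owns a file under /usr
    dpkg -S /usr/bin/rsync      # Debian/Ubuntu
    rpm -qf /usr/bin/rsync      # Red Hat/Fedora/SUSE

    # A file under /usr/local doesn't belong to any distribution package,
    # so the same query simply comes back empty
    dpkg -S /usr/local/bin/myapp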

Bill: I may be arguing semantics here, but that's what Point/Counterpoint is about. To me, /opt is for “options”: stuff that's added after the initial install, while /usr/local implies that it's local to that machine. Your “ls” point also applies to /opt, except the path is shorter, and you can't assume that everyone will be using rpm and dpkg.

Kyle: The path is shorter, eh? Well, Bill, thanks for the setup.

Bill: What if you compile things from source and don't want to go through the added steps of making a .deb? The bottom line is that there is no real “standard.” All the admins I've seen tend to have their own spin on this.

Kyle: Once you start using /opt, you can count on your system paths increasing exponentially. With /usr/local, my library paths and my binary paths need to add only one entry (an entry that probably is already there).
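
As a rough sketch of Kyle's point (the package names here are hypothetical), the /opt approach tends to grow one PATH entry per application, while /usr/local needs only the single entry most distributions already ship with:

    # /opt: one entry per package, plus similar additions to the library path
    export PATH=/opt/jdk/bin:/opt/apache/bin:/opt/mytool/bin:$PATH

    # /usr/local: one entry, and it's probably already in your default PATH
    export PATH=/usr/local/bin:$PATH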

Bill: Exponential? Only if you're installing a crazy amount of software, man. I rather like knowing that if I'm going to be building, say, a Java application server, my JDK is always in /opt/jdk (I typically have a symlink that points to the real JDK, like /opt/jdk_sun_1.6.0.17). That way, JAVA_HOME is always /opt/jdk. Any other packages, say a custom-compiled apache, can live in /opt/apache.
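
Bill's arrangement might look roughly like this (the JDK version comes from his example; the rest is just a sketch):

    # Unpack the JDK under /opt, then point a stable symlink at it
    ln -s /opt/jdk_sun_1.6.0.17 /opt/jdk

    # JAVA_HOME never changes, even when the underlying JDK is upgraded
    export JAVA_HOME=/opt/jdk
    export PATH=$JAVA_HOME/bin:$PATH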

Kyle: But if you installed the JDK in /usr/local (not that Sun would ever approve), you could have all the libraries in /usr/local/lib and Java binaries in /usr/local/bin, and you could just use your regular package manager to see where things are installed.

Bill: That's only a couple of paths. You're assuming that these things are packaged by the software maintainer or that I want to go to the trouble of making packages. Lots of times, software's not packaged.

Kyle: It's an extra (and, in my opinion, proper) step when you are deploying software to your servers, but it's a step that makes sure you can handle dependencies automatically and use standard tools, not tar, cp and rm, to add and remove packages.

Bill: Whoa, you're calling tar and cp “not standard tools”?

Kyle: Standard packaging tools. Let's take your apache example. If I wanted a custom apache, I'm going to have to compile it, right? All I have to do is use the --prefix option to change where it is installed from /usr to /usr/local. In my opinion, that's easier than the /opt approach.
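
A minimal sketch of what Kyle describes, assuming a stock apache source tree:

    # Build from source, but install under /usr/local instead of /usr
    ./configure --prefix=/usr/local
    make
    make install   # files land under /usr/local/bin, /usr/local/lib and so on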

Bill: It's rather nice to be able to take a completely working server and just rsync its directory to another box.
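
Something like the following, presumably (the hostname is hypothetical):

    # Copy a self-contained /opt application tree to another server as-is
    rsync -a /opt/apache/ otherbox:/opt/apache/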

Kyle: Again, I suppose if you are a closet Solaris or Slackware admin, tar, cp and rm are your packaging tools, but if your add-on programs are in packages, you can just use a standard package manager.

Bill: Yes, if there's a package for it, or you want to go through the work of making one.

Kyle: That's what I think this argument ultimately comes down to: the traditional (and ancient) approach of installing software before proper package managers came on the scene versus the modern way to deploy software on a server.

Bill: There are times when packaging is appropriate. If you've got lots of time to prepare, or require a lot of validation and control, then sure, package away.

Kyle: The modern way is to use package managers, so dependencies are handled automatically, adding, removing and updating packages is managed, and there's an audit trail. The traditional way is just to untar or copy files around and hope they work. Plus, with the traditional way, you tie up extra space by sharing fewer libraries and just bundling everything together in each package, even if another package might use the same libraries. The work of doing it “right” is work up front that saves you a lot of work down the road. I think what it comes down to is that Bill has a soft spot for /opt from all his years at Sun.

Bill: Hey, just because I feel nostalgic when I see an /opt/SUNW type path doesn't mean I don't have a point. On a development system, or a sandbox, having an /opt directory where you can just toss things and see if they work makes a whole lot of sense. I know I'm not going to go through the effort of packaging things myself to try them out. If the app doesn't work out, you can simply rm the /opt/mytestapp directory and that application is history. Packaging may make sense when you're running a large deployment (there are times when I do package apps), but lots of times, it's nice to toss stuff in /opt.

Kyle Rankin is a Systems Architect in the San Francisco Bay Area and the author of a number of books, including The Official Ubuntu Server Book, Knoppix Hacks and Ubuntu Hacks. He is currently the president of the North Bay Linux Users' Group.

Bill Childers is an IT Manager in Silicon Valley, where he lives with his wife and two children. He enjoys Linux far too much, and he probably should get more sun from time to time. In his spare time, he does work with the Gilroy Garlic Festival, but he does not smell like garlic.