Several tools provide new opportunities for game developers.
Over the past year, there has been a drastic change in how gamers view Linux. No longer relegating it to the role of dedicated server, gamers in increasing numbers are using Linux as their primary OS and their gaming platform. And as if using Linux weren't challenge enough in itself, anyone trying to set up a Linux box to play a 3-D game will find the path strewn with confusing acronyms and names. From OpenGL to DRI to DGA, each term refers to a particular piece of the Linux graphics picture. Here we put those pieces together to give our readers a broader view of what Linux has to offer now and what it will offer in the near future.
We start by looking at OpenGL, a name that has gotten a lot of use lately, especially in reference to games and professional design applications. Descended from SGI's earlier IRIS GL, OpenGL (Open Graphics Library) is a programming library that provides a rich array of graphics functions, both 2-D and 3-D, allowing programmers to render the objects they design on the screen. It was developed by Silicon Graphics (SGI) and has become a standard graphics application programming interface (API) on many platforms, including UNIX, Linux, Microsoft Windows and the Apple Macintosh. While OpenGL is the standard for high-end graphics applications, where programmers have found it powerful and easy to use, most people have heard of it in reference to the Quake series of games from id Software. It has been part of the Quake graphics renderer since the original Quake, where it has shown itself to be a powerful game graphics API as well. The standard for OpenGL is an open one, independent of hardware platforms, windowing systems and operating systems; thus the word “Open” in the name.
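To give a flavor of what the API looks like, here is a minimal sketch, in C, of the classic immediate-mode calls that draw a single shaded triangle. It assumes an OpenGL context has already been made current (the GLX and GLUT listings later in this article show two ways to get one), and the function name draw_triangle is simply our own.
    #include <GL/gl.h>

    /* Draw one shaded triangle with OpenGL 1.x immediate-mode calls.
       Assumes a current OpenGL context (see the GLX and GLUT listings below). */
    void draw_triangle(void)
    {
        glClearColor(0.0f, 0.0f, 0.0f, 1.0f);    /* black background */
        glClear(GL_COLOR_BUFFER_BIT);

        glBegin(GL_TRIANGLES);                   /* one vertex per glVertex call */
            glColor3f(1.0f, 0.0f, 0.0f); glVertex2f(-0.8f, -0.8f);
            glColor3f(0.0f, 1.0f, 0.0f); glVertex2f( 0.8f, -0.8f);
            glColor3f(0.0f, 0.0f, 1.0f); glVertex2f( 0.0f,  0.8f);
        glEnd();

        glFlush();                               /* push the commands to the renderer */
    }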
In order for OpenGL to be used on an OS, someone must create a library that implements the function calls OpenGL programs make. However, to call your implementation by the name “OpenGL”, you must obtain a license from SGI (or Microsoft). Getting a license will earn you a package of code written in C, called a Sample Implementation (SI), from which you can build your OpenGL library on your platform. But since the standard for OpenGL is an open one, working from the SI isn't required; that's what Mesa is all about: an implementation created from scratch, without a license from SGI and without any SGI code. To use the OpenGL trademark, however, you must get a license from SGI, and your implementation must be robust enough to pass a set of conformance tests developed by the OpenGL Architecture Review Board (ARB). The ARB historically consisted primarily of high-end hardware manufacturers and SGI itself, but now includes some game-hardware manufacturers. It is worth noting that, while SGI created OpenGL, the future of the API is controlled by the ARB, where SGI sits as one of many members.
Recently, SGI released its Sample Implementation for use in implementing the API on any hardware platform with an appropriate compiler. This source code is very similar to the code sold to hardware vendors that implement OpenGL drivers for their video cards. While it has been said that the SI is now open-source software, this isn't entirely true: as of this writing, there are still some issues with the license attached to the SI that prevent it from being truly open.
Many people associate OpenGL with hardware acceleration. While it is true that OpenGL runs very quickly when it has hardware assistance, that acceleration is not required to run an OpenGL application. With a software OpenGL library, OpenGL applications can run and render the same image as they would with hardware acceleration, albeit significantly slower unless the program is very simple. When hardware acceleration is available (in the form of a 3-D graphics card and a driver written for that card), OpenGL applications can run very quickly and smoothly, since most of the intense operations have been offloaded from the CPU to the dedicated graphics board. However, not every accelerator can perform every feature of OpenGL. When a feature is not supported in hardware, the library may fall back to a software implementation that uses your CPU. For example, very few consumer-class video cards perform transform and lighting (T&L) in hardware, so that part of the pipeline usually runs on the CPU while triangle setup and rasterization are handled by the card.
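A program can ask the library which implementation and renderer it actually got. The sketch below, again assuming a current context and a function name of our own choosing, prints the standard identification strings; the exact text varies from driver to driver, but a software-only Mesa will typically identify itself here.
    #include <stdio.h>
    #include <GL/gl.h>

    /* Report which OpenGL implementation is actually doing the rendering.
       Must be called with a current context; the strings vary by driver. */
    void report_renderer(void)
    {
        printf("GL_VENDOR:   %s\n", (const char *) glGetString(GL_VENDOR));
        printf("GL_RENDERER: %s\n", (const char *) glGetString(GL_RENDERER));
        printf("GL_VERSION:  %s\n", (const char *) glGetString(GL_VERSION));
    }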
Since its initial creation, OpenGL has undergone several revisions and now stands at version 1.2.
In Linux, the implementation of OpenGL used most often is Mesa, created by Brian Paul.
Mesa is an unlicensed implementation of the OpenGL standard that is available on several platforms, including Linux, Windows and the Macintosh. Started in 1995, it has been under development ever since by a team of developers led by Brian Paul. Mesa is freely available and open source, so anyone can work with the source code and port it to another operating system. For the most part, OpenGL applications compile and run against a Mesa library just as they would against a licensed OpenGL library.
Mesa incorporates provisions for hardware acceleration. For example, owners of a 3dfx card may choose to download and install the Glide SDK from 3dfx and then recompile Mesa from its source code. When Mesa configures itself for compilation, it should detect the installed Glide headers and libraries and build in the code that allows the 3dfx card to accelerate many of the OpenGL functions (via Glide 2.x).
Like OpenGL, Mesa has undergone several revisions. The Mesa 2.x series supports the OpenGL 1.1 standard, while the later Mesa 3.x library implements the OpenGL 1.2 standard and thus should be nearly as current as OpenGL itself. Mesa also includes support for the GLU and GLUT companion libraries.
So now we have OpenGL, a programming interface for creating 3-D graphics, and an open-source implementation called Mesa. The next piece of the puzzle is the glue that joins OpenGL and the X Window System.
Because OpenGL is platform- and operating-system-independent, it is also window-system-independent. It therefore needs a window-system binding to interact with the window system: finding the location of a window on the screen, processing input and so on. On UNIX and Linux systems, that binding is GLX, a library that lets OpenGL and X work together. (On Microsoft Windows, it's called WGL.) With GLX, OpenGL can use an X window for its output. Even when you're using Mesa (full-screen or non-DRI; we'll explain DRI in just a moment), a fake GLX implementation makes the system think it is running under the normal window-system binding. The GLX currently used in Linux is based on the source code released by SGI in February 1999.
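To make the binding concrete, here is a pared-down sketch of the GLX calls an application (or a toolkit on its behalf) makes to put OpenGL output into an X window. Error handling is kept to a bare minimum, and the window size and visual attributes are arbitrary choices of ours.
    #include <stdio.h>
    #include <unistd.h>
    #include <X11/Xlib.h>
    #include <GL/gl.h>
    #include <GL/glx.h>

    /* Create an X window, attach an OpenGL context to it via GLX,
       clear it to a color and swap buffers. */
    int main(void)
    {
        Display *dpy = XOpenDisplay(NULL);
        if (!dpy) { fprintf(stderr, "cannot open display\n"); return 1; }

        int attribs[] = { GLX_RGBA, GLX_DOUBLEBUFFER, GLX_DEPTH_SIZE, 16, None };
        XVisualInfo *vi = glXChooseVisual(dpy, DefaultScreen(dpy), attribs);
        if (!vi) { fprintf(stderr, "no suitable visual\n"); return 1; }

        Colormap cmap = XCreateColormap(dpy, RootWindow(dpy, vi->screen),
                                        vi->visual, AllocNone);
        XSetWindowAttributes swa;
        swa.colormap = cmap;
        swa.event_mask = ExposureMask | KeyPressMask;
        Window win = XCreateWindow(dpy, RootWindow(dpy, vi->screen),
                                   0, 0, 300, 300, 0, vi->depth, InputOutput,
                                   vi->visual, CWColormap | CWEventMask, &swa);
        XMapWindow(dpy, win);

        /* The GLX part: create a context and make it current on the window. */
        GLXContext ctx = glXCreateContext(dpy, vi, NULL, True);
        glXMakeCurrent(dpy, win, ctx);

        glClearColor(0.2f, 0.2f, 0.8f, 1.0f);
        glClear(GL_COLOR_BUFFER_BIT);
        glXSwapBuffers(dpy, win);        /* display the back buffer */

        sleep(3);                        /* keep the window up briefly */

        glXMakeCurrent(dpy, None, NULL);
        glXDestroyContext(dpy, ctx);
        XDestroyWindow(dpy, win);
        XCloseDisplay(dpy);
        return 0;
    }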
Some of you might even have run across the term Utah-GLX. What is Utah-GLX, and what part does it play in this puzzle?
Utah-GLX is a project to add OpenGL capabilities to some current video accelerators, like the Matrox G400/G200 cards and the ATI Rage Pro and Rage 128 cards, while still using XFree86 3.3.x. The Utah-GLX driver uses indirect rendering (and in some cases, a form of direct rendering) to provide this kind of acceleration. However, as will be explained shortly, this rendering incurs a performance penalty that prevents the hardware from reaching its full potential.
Among other accomplishments, the Utah-GLX project has led to the first hardware acceleration on the Linux PPC platform as well as the first hardware acceleration for a laptop. In both of these cases, the video card is the ATI Rage Pro.
Now that XFree86 4.0 has been released, the hope is that much of the work in the Utah-GLX project can be ported to the DRI. While there has been some talk of starting this move, as of this writing it hasn't happened yet.
This talk of indirect and direct rendering naturally leaves some unanswered questions. Let's take a closer look and see how the two differ and where they are used.
The difference between indirect and direct rendering is the number of levels through which the data must pass, that is, how much the data must be massaged before it is actually placed in the video card's frame buffer and displayed on the monitor. As one would expect, the fewer the levels, the faster the image appears, hence the emphasis on direct rendering (via the DRI) in XFree86 4.0.
When indirect rendering is used, data is copied from the application issuing the graphics output to the X server, and from there to the hardware. This incurs a performance penalty, since the output from the application must be packaged into a form the X server understands and then, once X has done its job, packaged again and sent to the hardware. For a normal 2-D application, the speed of this process is adequate. For today's CPU- and memory-intensive 3-D applications, however, that overhead is too great for adequate performance.
Direct rendering streamlines this flow of data and lets the application access the hardware more directly. That is, the application issues its drawing commands straight to the graphics hardware, with only the minimum necessary intervention by the X server. This ability exists in XFree86 as the Direct Rendering Infrastructure (DRI), developed by Precision Insight.
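An application can ask GLX which path it ended up with. The small sketch below uses the standard glXIsDirect call; the dpy and ctx parameters are assumed to come from a GLX setup like the one shown earlier.
    #include <stdio.h>
    #include <GL/glx.h>

    /* Ask GLX whether a context renders directly to the hardware or
       indirectly through the X server. */
    void report_rendering_path(Display *dpy, GLXContext ctx)
    {
        if (glXIsDirect(dpy, ctx))
            printf("direct rendering: commands go to the hardware\n");
        else
            printf("indirect rendering: commands pass through the X server\n");
    }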
Finally, we get to talk about this mysterious DRI that has come up in our discussion a couple of times. So without further ado, read on to learn more about DRI and what it means for Linux users.
The Direct Rendering Infrastructure (DRI) is a technology developed by Precision Insight to add a direct rendering ability to an X server. In more common terms, DRI allows applications to safely and efficiently access the hardware directly while cooperating with the X server. Unlike Utah-GLX, it doesn't require privileged users or privileged programs. Furthermore, the work takes place in the application instead of in the X server.
More technically, one can think of the DRI as three parts: a DRI-aware X server, a direct-rendering client and a kernel-level device driver. The DRI-aware X server for Linux is XFree86 4.0, whose code contains the DRI modifications. The direct-rendering client consists of several smaller components, the most important being an OpenGL implementation (in most cases Mesa), GLX as developed by Precision Insight (PI) from SGI's code, and PI's Linux-specific (but hardware-independent) driver libraries. The kernel-level device driver is a module that abstracts direct memory access to the hardware; it is designed so that only minimal kernel modification should be needed in the future.
In short, the DRI is hardware acceleration “done right”. Since it doesn't require special programs or privileges, it has an inherent level of security that other current hardware-acceleration schemes may lack. Because it is developed as part of the XFree86 server and has undergone a significant amount of testing, the hope is that users can count on the usual stability associated with XFree86. Finally, direct access to the hardware gives Linux and XFree86 the chance to act as a solid, fast platform for professional graphics applications and demanding 3-D games.
In addition to DRI, OpenGL, Mesa and the other terms, there are some smaller parts of the picture that you, faithful reader, are likely to have come across. Let's take a minute to go over each of them briefly.
GLUT, the OpenGL Utility Toolkit, was developed by Mark Kilgard (formerly of SGI, now at NVIDIA) as an improved successor to the older aux library, in part to make the programming examples easier in his book OpenGL Programming for the X Window System. It gives OpenGL programs a simple, portable way to open windows and handle input without talking to the window system directly.
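For completeness, here is a minimal sketch of a GLUT program. All of the window-system work from the GLX listing above disappears behind a handful of glut calls; the window title and size are arbitrary.
    #include <GL/glut.h>

    /* The display callback: GLUT calls this whenever the window needs redrawing. */
    static void display(void)
    {
        glClear(GL_COLOR_BUFFER_BIT);
        glBegin(GL_TRIANGLES);
            glVertex2f(-0.5f, -0.5f);
            glVertex2f( 0.5f, -0.5f);
            glVertex2f( 0.0f,  0.5f);
        glEnd();
        glutSwapBuffers();
    }

    int main(int argc, char **argv)
    {
        glutInit(&argc, argv);                      /* let GLUT parse X options */
        glutInitDisplayMode(GLUT_DOUBLE | GLUT_RGB);
        glutInitWindowSize(300, 300);
        glutCreateWindow("GLUT example");
        glutDisplayFunc(display);
        glutMainLoop();                             /* never returns */
        return 0;
    }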
DGA is the direct graphics architecture. It is a very simple extension to the X server that allows applications to gain direct access to the frame buffer. Users and applications need privileges to use DGA; with those privileges, they can access the frame buffer memory and draw, unaccelerated, on the screen. With the release of XFree86 4.0, DGA 2.0 is included, bringing some hardware acceleration to basic 2-D drawing functions. In addition to its graphics abilities, DGA allows an application to read peripherals like the mouse and keyboard more directly; normally, input from those devices passes through the X server before an application can use it.
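As a small illustration, the sketch below merely asks the X server whether the DGA extension is present and which version it speaks; actually mapping the frame buffer requires root privileges and is left out. The header and library names here are the ones shipped with XFree86 and may differ between releases.
    #include <stdio.h>
    #include <X11/Xlib.h>
    #include <X11/extensions/xf86dga.h>

    /* Query the DGA extension and its version (2.0 brings the accelerated
       2-D entry points).  Link with -lXxf86dga. */
    int main(void)
    {
        Display *dpy = XOpenDisplay(NULL);
        int event_base, error_base, major, minor;

        if (!dpy || !XF86DGAQueryExtension(dpy, &event_base, &error_base)) {
            printf("DGA extension not available\n");
            return 1;
        }
        XF86DGAQueryVersion(dpy, &major, &minor);
        printf("DGA version %d.%d\n", major, minor);
        XCloseDisplay(dpy);
        return 0;
    }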
VidMode is the XFree86 extension that allows an application to query the X server for the resolutions it supports and to tell the server to switch to a specific one. It is often used by games that want to run at a particular size when your desktop is set to a different resolution.
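Here is a hedged sketch of the first step a game would take with this extension: listing the mode lines the server knows about before calling XF86VidModeSwitchToMode to change resolution. The library name (-lXxf86vm) may vary between XFree86 releases.
    #include <stdio.h>
    #include <X11/Xlib.h>
    #include <X11/extensions/xf86vmode.h>

    /* List the mode lines known to the X server.  Link with -lXxf86vm. */
    int main(void)
    {
        Display *dpy = XOpenDisplay(NULL);
        int screen, count, i;
        XF86VidModeModeInfo **modes;

        if (!dpy) return 1;
        screen = DefaultScreen(dpy);

        if (XF86VidModeGetAllModeLines(dpy, screen, &count, &modes)) {
            for (i = 0; i < count; i++)
                printf("%d x %d\n", modes[i]->hdisplay, modes[i]->vdisplay);
            XFree(modes);
        }
        XCloseDisplay(dpy);
        return 0;
    }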
Glide is the 3-D API developed by 3dfx for their Voodoo line of cards to provide fast, full-screen hardware acceleration. Its functionality was designed to match that of the Voodoo hardware 3dfx had created. Many early 3-D accelerated games, like Tomb Raider, used Glide. With the widespread acceptance of OpenGL for both productivity and entertainment applications, the importance of Glide has diminished; consequently, 3dfx opted to release the source code and specifications to the public, a move that allowed Glide to spread to other platforms.
Let's review: OpenGL is a library that facilitates the creation of 3-D graphics. Under Linux, an open-source library called Mesa provides OpenGL compatibility. GLX provides a standard way for OpenGL applications to work with X. For people still using XFree86 3.3.x with certain video cards, Utah-GLX provides hardware-accelerated OpenGL along the lines of what XFree86 4.0 and the DRI offer. With XFree86 4.0, however, users get the best 2-D and 3-D performance from their hardware, better integration and more secure operation.
So there you have it: how all the parts fit together and how they depend on one another.