At the Forge

Incremental Form Submission

Reuven M. Lerner

Issue #162, October 2007

A creative solution to Web service performance bottlenecks, or “ix-nay on the ottleneck-bay”.

Computers are amazingly fast. Think about it—we measure raw processor speed by the number of instructions a processor can execute each second, and that number has gotten so large that we round off to the nearest hundred million.

Of course, it's often hard to feel that computers are all that speedy, especially when you're sitting around waiting for them to complete a task. Sometimes that wait has to do with complex algorithms that take a while to execute. But in many cases, the problem is a delay further down in the system, which forces your end-user application to wait.

This, I believe, is the major Achilles' heel of the world of Web services—Web-based APIs that are making it increasingly easy to combine (and manipulate) data from multiple sources. Web services may be revolutionizing distributed application development and deployment, but they make it tempting (and too easy, sometimes) to create software whose performance depends on someone else's system.

For example, let's assume you are offering a Web service and that your program depends, in turn, on a second Web service. Users of your system might encounter delays at two different points: your Web service (due to computational complexity, a lack of system resources or too many simultaneous requests), or the second Web service on which yours depends.

Several commercial companies, such as Google, eBay and Amazon, offer Web services to the general public. But, these services lack any sort of uptime or response guarantee and often restrict the number of requests you can make. If you write a Web service that depends on one of these others, a rise in requests to your service might well mean that you temporarily go over your limit with these services.

This is particularly true if you allow users to enter one or more inputs at a time. For example, if you're running a Web-based store, you want to let people put multiple items in their shopping baskets. It's easy to imagine a scenario in which each item in the shopping basket requires a call to one or more Web services. If each call takes one second, and if you are allowed to access the Web service only every six seconds, a user trying to buy ten items might end up waiting one minute just to see the final checkout screen. If a brick-and-mortar store were to keep you waiting for one minute, you would be frustrated. If an on-line store were to do the same thing, you probably would just pick up and leave.

So, what should you do? Well, you could simply throw up your hands and blame the lower-level service. Or, you could contact the lower-level service and try to negotiate a faster, better deal for yourself. Another option is to try to predict what inputs your users will be handing to you and try to preprocess them, perhaps at night, when fewer users are on your system.

I've recently come across this problem myself on some of the sites I've been developing in my consulting work. And, I believe I've found a technique that solves this problem without too much trouble and that demonstrates how Ajax programming techniques can not only add pizazz to a Web site, but also make it more functional. This month, we take a look at the technique I've developed, which I call (for lack of a better term) incremental form submission.

The Problem

Before we continue, let's define the problem we are trying to solve. Users visiting our site are presented with an HTML form. The form contains a textarea widget, into which users can enter one or more words. When a user clicks on the submit button, a server-side program takes the contents of the textarea and sends them to a Web service that turns each word into its Pig Latin equivalent. The server-side program retrieves the results from the Web service and displays the Pig Latin on the user's screen as HTML.

It goes without saying that this example is somewhat contrived; although it might be nice to have a Web service that handles translations into Pig Latin, it takes so little time to do that translation (really, a simple text transformation), that storing or caching this information would be foolish. That said, this example is meant to provide food for thought, rather than a production-ready piece of software.

Let's start with our HTML file, shown in Listing 1. It contains a short HTML form with a textarea widget (named words) and a submit button.
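
Listing 1 isn't reproduced here, but a minimal sketch of such a page might look like the following (the page title, the action URL and the form's id attribute, which the JavaScript added later relies on, are my assumptions, not the magazine's actual listing):

<!-- Minimal sketch of a form like Listing 1; the title, action URL and
     id="form" are assumptions. -->
<html>
  <head><title>Pig Latin translator</title></head>
  <body>
    <form id="form" method="POST" action="/pl-words.cgi">
      <textarea name="words" rows="5" cols="40"></textarea>
      <input type="submit" value="Translate"/>
    </form>
  </body>
</html>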

Clicking on that submit button submits the form's contents to a CGI program written in Ruby, named pl-words.cgi (Listing 2). There are two sections to Listing 2. In the first part of the program, we define a method, pl_sentence, that takes a sentence (that is, a string), turns it into an array of strings (one word per string), and then passes that array to our Web service (via XML-RPC). The second half of the program takes the input from our POST request, passes it to the pl_sentence routine, and then uses the output from pl_sentence to create a bit of nicely formatted (if Spartan) output for the user.
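
Listing 2 itself isn't shown here, but a rough sketch of such a CGI program, assuming the XML-RPC server listens locally on port 9000 and exposes a handler named pl.sentence, might look like this:

#!/usr/bin/env ruby
# Rough sketch of a pl-words.cgi-style program (not the actual Listing 2).
# The server URL, port and handler name ("pl.sentence") are assumptions.
require 'cgi'
require 'xmlrpc/client'

# Split a sentence into words and ask the XML-RPC service to translate them.
def pl_sentence(sentence)
  words = sentence.to_s.split
  server = XMLRPC::Client.new2('http://localhost:9000/')
  server.call('pl.sentence', words)
end

cgi = CGI.new('html4')
translated = pl_sentence(cgi['words'])

# Emit a minimal HTML page containing the translated words.
cgi.out do
  cgi.html do
    cgi.body do
      cgi.p { translated.join(' ') }
    end
  end
end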

The key to making all this work is shown in Listing 3, which provides the code for our XML-RPC server. We begin by reading from a simple cache of English words and their Pig Latin equivalents. Again, it seems silly to store things in this way, when it's much faster simply to write the code that handles the Pig Latin rules. But if you imagine that each translation takes several seconds, you can see how quickly requests would pile up without a cache.

There are several things to notice in this program. One of the first is the use of an on-disk cache to store recently processed inputs. (Please don't try to emulate the simple and foolish way in which I implemented this; I ignored locking and permission issues.) The cache itself is a simple text file containing name-value pairs. Before computing the Pig Latin translation of each item, the Web service consults the cache. If the word is in the cache, the service grabs the stored translation and returns it almost immediately.

If the word isn't in the cache, the service translates the English into Pig Latin and stores the result for the next time around. Again, this ensures that we have to work hard (that is, translate the word into Pig Latin) only if it fails to appear in the cache.
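
The actual Listing 3 isn't reproduced here, but a stripped-down sketch of such a server, assuming a handler registered as pl.sentence, a whitespace-separated cache file and a deliberately naive Pig Latin rule, could look like this:

#!/usr/bin/env ruby
# Stripped-down sketch of a pl-server.rb-style XML-RPC server (not the
# actual Listing 3). The cache file name, port and one-line Pig Latin
# rule are assumptions; as the text admits, locking and permissions
# are ignored.
require 'xmlrpc/server'

CACHE_FILE = '/tmp/pl-cache.txt'

# Load the cache: one "english piglatin" pair per line.
cache = {}
if File.exist?(CACHE_FILE)
  File.readlines(CACHE_FILE).each do |line|
    english, pig_latin = line.split
    cache[english] = pig_latin
  end
end

server = XMLRPC::Server.new(9000)

server.add_handler('pl.sentence') do |words|
  results = []
  words.map {|word| word.to_s}.each do |word|
    next if word.empty?                            # skip degenerate input
    if cache[word]
      results << cache[word]                       # cache hit: nearly instant
    else
      translated = word[1..-1] + word[0, 1] + 'ay' # stand-in for a slow translation
      cache[word] = translated                     # remember it for next time
      File.open(CACHE_FILE, 'a') {|f| f.puts "#{word} #{translated}" }
      results << translated
    end
  end
  results
end

server.serve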

If you've never programmed in Ruby before, you might be put off a bit by this line:

words.map {|word| word.to_s}.each do |word|

This tells Ruby that it should take the array named words and turn each of its elements into a string. (If the element already is a string, it is unaffected.) We then iterate over each string (word) in the array, assigning the local variable word to each element in sequence.
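
For example, in an interactive Ruby (irb) session, the conversion and iteration behave like this (the sample array is mine, not from the listings):

words = [:hello, "world", 42]
words.map {|word| word.to_s}        # => ["hello", "world", "42"]
words.map {|word| word.to_s}.each do |word|
  puts word                         # prints each converted word on its own line
end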

With Listings 1, 2 and 3 in place, you should be able to translate sentences from English into Pig Latin without too much difficulty. You enter the English words into the HTML form, the server-side program calls the Web service, and the Web service takes care of things quickly.

Improving Performance

Now we come to the hard, or interesting, part of this project. If you imagine that each Pig Latin translation takes ten seconds to execute, but less than one second to retrieve from the cache, you would want the cache to be used as much as possible. Moreover, given how long each new translation takes, users will need a great deal of patience to deal with it.

The solution? Use Prototype, a popular JavaScript framework. With it, we can submit the contents of the textarea widget to a URL of our choice automatically—in this case, the same program that handles the form's POST—in the background, each time the text area changes. That way, most words are translated (and cached) while the user is still filling out the form, dramatically reducing the time spent waiting at submission.

In other words, I'm betting that it will take users long enough to enter the entire sentence that I can collect and translate most or all of the words while they're typing. Also, because I know that the Web service is caching results, I can pass the contents of the entire textarea every few seconds, knowing that retrieving items from the cache is extremely rapid.

The key to this functionality is the use of the Form.Element.Observer object in JavaScript. This object allows us to monitor any form element over time, submitting the form's contents to an arbitrary URL when the form element changes. We will use this, along with our knowledge that the Pig Latin server (pl-server.rb) caches words it has already translated, to submit the form every few seconds, even before the user clicks the submit button.

We do this by adding an id attribute, whose value is words, to our textarea, and also by adding the following JavaScript code:

new Form.Element.Observer($("words"), 3, translateFunction);

In other words, we will check the words textarea for changes every three seconds. If something has changed, the browser invokes the method translateFunction. This function is defined as follows:

function translateFunction() {
    var myAjax = new Ajax.Request(
        '/pl-words.cgi',
        {
            parameters: Form.serialize('form')
        });
}

In other words, translateFunction creates a new Ajax request in the background, submitting the contents of the form to the URL /pl-words.cgi—the same program to which the form will be submitted at the end of the process. But, for our incremental submissions, we care more about the side effects (that is, the cached translations) than the resulting HTML. So, we ignore the output from pl-words.cgi.

Because of how we built our server-side programs, they don't need to change at all in order for this Ajax-style addition to take effect. All we need to do is modify the HTML file, adding a few lines of JavaScript.
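
Putting the pieces together, the Ajax-enabled version of the page might look something like this sketch (the path to prototype.js is an assumption; the rest matches the snippets above):

<!-- Sketch of the Ajax-enabled form; the prototype.js path is an
     assumption, and this is not the magazine's actual listing. -->
<html>
  <head>
    <title>Pig Latin translator</title>
    <script type="text/javascript" src="/javascripts/prototype.js"></script>
    <script type="text/javascript">
      function translateFunction() {
          var myAjax = new Ajax.Request(
              '/pl-words.cgi',
              { parameters: Form.serialize('form') });
      }
    </script>
  </head>
  <body>
    <form id="form" method="POST" action="/pl-words.cgi">
      <textarea name="words" id="words" rows="5" cols="40"></textarea>
      <input type="submit" value="Translate"/>
    </form>
    <script type="text/javascript">
      // Register the observer only after the textarea exists in the DOM.
      new Form.Element.Observer($("words"), 3, translateFunction);
    </script>
  </body>
</html>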

Now, of course, this doesn't change the amount of time it takes to translate each word or even an entire sentence. But, that's not the point. Rather, what we're doing is taking advantage of the fact that many people tend to type slowly and that they'll take their time entering words into a textarea widget.

If users type quickly, or enter a very short sentence, we haven't really lost anything at all. It'll take a long time to translate those people's sentences, and they'll just have to wait it out. If people change their minds a great deal, it's possible we'll end up with all sorts of cached, translated words that are never going to be used again. But, given that the cache is shared across all users, it seems like a relatively small risk to take.

There are some things to consider if you're thinking of going this route—that is, combining an incremental form submission with a cache. First, notice we are iterating over each word in the textarea. This means there's the potential for someone to launch a denial-of-service attack against your server, simply by entering ridiculously long text strings into your textarea widget. One way to prevent this is to limit the number of words you check from any given textarea widget. You can, of course, limit the number of words you're willing to translate from the incremental submission, rather than from the complete and final submission.
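
In terms of the server sketch shown earlier, such a cap could be as small as the following; the limit of 100 words is arbitrary, and translate_with_cache is a hypothetical stand-in for the cache lookup and translation logic above:

# Hypothetical cap on the per-request workload. MAX_WORDS is arbitrary,
# and translate_with_cache stands in for the caching logic sketched earlier.
MAX_WORDS = 100

server.add_handler('pl.sentence') do |words|
  words.first(MAX_WORDS).map {|word| translate_with_cache(word.to_s) }
end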

Another item to remember is that you should not expose your internal APIs to the outside world; the moment people know your internal data structures and methods, they might use them against you. These examples didn't include any cleaning or testing of the data that was passed to the server; in a real-world case, you probably would want to do that before simply passing it along to another program.
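
On the CGI side, a minimal (and assumed) cleaning step before calling the Web service might keep only short, purely alphabetic words:

# Assumed cleanup for the CGI program: pass along only alphabetic words
# of reasonable length. Both restrictions are my assumptions, not part
# of the original listings.
def clean_words(sentence)
  sentence.to_s.split.select {|word| word =~ /\A[A-Za-z]{1,50}\z/ }
end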

Finally, if your site becomes popular, you might need more than one server to handle Web services. That's fine, and it's even a good idea. But, how many servers should you get, and how should they store their data? One possibility, and something that I expect to write about in the coming months, is Amazon's EC2 (Elastic Compute Cloud) technology, which allows you to launch an almost limitless number of Web servers quickly and for a reasonable price. Combining EC2 with this sort of caching Web service might work well, especially if you have a good method for sharing dynamic data among the servers.

Conclusion

Web services are a wonderful way for servers to share data. But, when a Web service becomes a bottleneck, and when we lack control over the size of that bottleneck, we must find creative solutions. This month, we looked at something I call incremental form submission, designed to spread the burden over time, as a user is typing. Even if this solution isn't quite right for you, perhaps you'll be inspired to incorporate this, or other Ajax techniques, into your own sites.

Reuven M. Lerner, a longtime Web/database developer and consultant, is a PhD candidate in learning sciences at Northwestern University, studying on-line learning communities. He recently returned (with his wife and three children) to their home in Modi'in, Israel, after four years in the Chicago area.