JavaScript All the Way Down

Federico Kereki

Issue #250, February 2015

Use JavaScript for server and client programming.

There is a well-known story about a scientist who gave a talk about the Earth and its place in the solar system. At the end of the talk, a woman refuted him with “That's rubbish; the Earth is really like a flat dish, supported on the back of a turtle.” The scientist smiled and asked back “But what's the turtle standing on?”, to which the woman, sensing the logical trap, answered, “It's very simple: it's turtles all the way down!” No matter the verity of the anecdote, the identity of the scientist (Bertrand Russell and William James are sometimes mentioned), or even whether they were turtles or tortoises, today we may apply a similar solution to Web development, with “JavaScript all the way down”.

If you are going to develop a Web site, on the client side you could opt for Java applets, ActiveX controls, Adobe Flash animations and, of course, plain JavaScript. On the server side, you could go with C# (.Net), Java, Perl, PHP and more, running on servers such as Apache, Internet Information Server, Nginx, Tomcat and the like. Currently, JavaScript allows you to do away with most of this and use a single programming language on both the client and server sides, with even a JavaScript-based server. This way of working has even produced a totally JavaScript-oriented acronym along the lines of the old LAMP (Linux+Apache+MySQL+PHP) one: MEAN, which stands for MongoDB (a NoSQL database you can access with JavaScript), Express (a Node.js module to structure your server-side code), AngularJS (Google's Web development framework for client-side code) and Node.js.

In this article, I cover several JavaScript tools for writing, testing and deploying Web applications, so you can consider whether you want to give a twirl to a “JavaScript all the way down” Web stack.

Why JavaScript?

Although stacks like LAMP or its Java, Ruby or .Net peers do power many Web applications today, using a single language both for client- and server-side development has several advantages, and companies like Groupon, LinkedIn, Netflix, PayPal and Walmart, among many more, are proof of it.

Modern Web development is split between client-side and server-side (or front-end and back-end) coding, and striving for the best balance is more easily attained if your developers can work both sides with the same ease. Of course, plenty of developers are familiar with all the languages needed for both sides of coding, but in any case, it's quite probable that they will be more productive at one end or the other.

Many tools are available for JavaScript (building, testing, deploying and more), and you'll be able to use them for all components in your system (Figure 1). So, by going with the same single set of tools, your experienced JavaScript developers will be able to play both sides, and you'll have fewer problems getting the needed programmers for your company.

Figure 1. JavaScript can be used everywhere, on the client and the server sides.

Of course, being able to use a single language isn't the only key point. In the “old days” (just a few years ago!), JavaScript lived exclusively in browsers, which read and interpreted JavaScript source code. (Okay, if you want to be precise, that's not exactly true; Netscape Enterprise Server ran server-side JavaScript code, but it wasn't widely adopted.) About five years ago, when Firefox and Chrome started competing seriously with (by then) the most popular Internet Explorer, new JavaScript engines were developed, separate from the layout engines that actually draw the HTML pages seen in browsers. Given the rising popularity of AJAX-based applications, which required more processing power on the client side, a competition to provide the fastest JavaScript started, and it hasn't stopped yet. With the higher performance achieved, it became possible to use JavaScript far more widely (Table 1).

Table 1. The Current Browsers and Their JavaScript Engines

Browser | JavaScript Engine
Chrome | V8
Firefox | SpiderMonkey
Opera | Carakan
Safari | Nitro

Some of these engines apply advanced techniques to get the most speed and power. For example, V8 compiles JavaScript to native machine code before executing it (this is called JIT, Just In Time compilation, and it's done on the run instead of pre-translating the whole program as is traditional with compilers) and also applies several optimization and caching techniques for even higher throughput. SpiderMonkey includes IonMonkey, which also is capable of compiling JavaScript code to object code, although working in a more traditional way. So, accepting that modern JavaScript engines have enough power to do whatever you may need, let's now start a review of the Web stack with a server that wouldn't have existed if it weren't for that high-level language performance: Node.js.

Node.js: a New Kind of Server

Node.js (or plain Node, as it's usually called) is a Web server, itself written mainly in JavaScript, which uses that language for all scripting. It originally was developed to simplify the development of real-time Web sites with push capabilities—so instead of all communications being client-originated, the server can start a connection with a client by itself. Node can handle lots of live connections, because it's very lightweight in terms of requirements. There are two key concepts behind Node: it runs a single thread (instead of one per request), and all I/O (database queries, file accesses and so on) is implemented in a non-blocking, asynchronous way.

Let's go a little deeper and examine the main difference between Node and more traditional servers like Apache. Whenever Apache receives a request, it starts a new, separate thread (or process, depending on configuration) that uses RAM and CPU processing power of its own. (If too many threads are running, the request may have to wait a bit longer until it can be started.) When the thread produces its answer, the thread is done. The maximum number of possible threads depends on the average RAM requirements for a process; it might be a few thousand at the same time, although numbers vary depending on server size (Figure 2).

Figure 2. Apache and traditional Web servers run a separate thread for each request.

On the other hand, Node runs a single thread. Whenever a request is received, it is processed as soon as possible, and it runs continuously until some I/O is required. Then, while the code waits for the I/O results to become available, Node can process other waiting requests (Figure 3). Because all requests are served by a single process, the possible number of running requests rises, and there have been experiments with more than one million concurrent connections—not shabby at all! This shows that an ideal use case for Node is server processes that are light on CPU processing but heavy on I/O, which allows more requests to run at the same time; CPU-intensive server processes would block all other waiting requests and cause a sharp drop in throughput.

Figure 3. Node runs a single thread for all requests.

A great asset of Node is that there are many available modules (an estimate ran in the thousands) that help you get to production more quickly. Though I obviously can't list all of them, you probably should consider some of the modules listed in Table 2.

Table 2. Some widely used Node.js modules that will help your development and operation.

Module | Description
async | Simplifies asynchronous work; a possible alternative to promises.
cluster | Improves concurrency in multicore systems by forking worker processes. (For further scalability, you also could set up a reverse proxy and run several Node.js instances, but that goes beyond the objective of this article.)
connect | Provides “middleware” for common tasks, such as error handling, logging, serving static files and more.
ejs, handlebars or jade | Templating engines.
express | A minimal Web framework—the E in MEAN.
forever | A command-line tool that will keep your server up, restarting if needed after a crash or other problem.
mongoose, cradle, sequelize | Database ORMs: mongoose for MongoDB, cradle for CouchDB and sequelize for relational databases, such as MySQL.
passport | Authentication middleware, which can work with OAuth providers, such as Facebook, Twitter, Google and more.
request or superagent | HTTP clients, quite useful for interacting with RESTful APIs.
underscore or lodash | Tools for functional programming and for extending the JavaScript core objects.

Of course, there are some caveats when using Node.js. An obvious one is that no process should do heavy computations, which would “choke” Node's single processing thread. If such a process is needed, it should be done by an external process (you might want to consider using a message queue for this) so as not to block other requests. Also, care must be taken with error processing. An unhandled exception might cause the whole server to crash eventually, which wouldn't bode well for the server as a whole. On the other hand, having a large community of users and plenty of fully available, production-level, tested code already on hand can save you quite a bit of development time and let you set up a modern, fast server environment.
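One illustrative workaround sketch (the function and names here are invented, not from the article): a long-running computation can be split into chunks with setImmediate, yielding to the event loop between chunks so other requests aren't starved:

```javascript
// Sum the integers in [from, to), processing at most 10,000 per turn
// of the event loop; done(total) is called with the final result.
function sumRange(from, to, done) {
  var acc = 0;
  function chunk(i) {
    var end = Math.min(i + 10000, to);
    for (; i < end; i++) acc += i;
    if (i < to) {
      setImmediate(function () { chunk(i); }); // yield to the event loop
    } else {
      done(acc);
    }
  }
  chunk(from);
}
```

For truly heavy work, though, the external-process (or message-queue) approach mentioned above remains the safer choice.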

Planning and Organizing Your Application

When starting out with a new project, you could set up your code from zero and program everything from scratch, but several frameworks can help you with much of the work and provide clear structure and organization to your Web application. Choosing the right framework will have an important impact on your development time, on your testing and on the maintainability of your site. Of course, there is no single answer to the question “What framework is best?”, and new frameworks appear almost on a daily basis, so I'm just going with three of the top solutions that are available today: AngularJS, Backbone and Ember. Basically, all of these frameworks are available under permissive licenses and give you a head start on developing modern SPAs (single-page applications). For the server side, several packages (such as Sails, to give just one example) work with all frameworks.

AngularJS (or Angular.JS or just plain Angular—take your pick) was developed in 2009 by Google, and its current version is 1.3.4, dated November 2014. The framework is based on the idea that declarative programming is best for interfaces (and imperative programming for the business logic), so it extends HTML with custom tag attributes that are used to bind input and output data to a JavaScript model. In this fashion, programmers don't have to manipulate the Web page directly, because it is updated automatically. Angular also focuses on testing, because the difficulty of automatic testing heavily depends upon the code structure. Note that Angular is the A in MEAN, so there are some other frameworks that expand on it, such as MEAN.IO or MEAN.JS.

Backbone is a lighter, leaner framework, dating from 2010, which uses a RESTful JSON interface to update the server side automatically. (Fun fact: Backbone was created by Jeremy Ashkenas, who also developed CoffeeScript; see the “What's in a Name?” sidebar.) In terms of community size, it's second only to Angular, and in code size, it's by far the smallest one. Backbone doesn't include a templating engine of its own, but it works fine with Underscore's templating, and given that this library is included by default, it is a simple choice to make. It's considered to be less “opinionated” than other frameworks and to have a quite shallow learning curve, which means you'll be able to start working quickly. A deficiency is that Backbone lacks two-way data binding, so you'll have to write code to update the view whenever the model changes and vice versa. Also, you'll probably be manipulating the Web page directly, which will make your code harder to unit test.
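Backbone's own API isn't reproduced here; the following framework-free sketch just shows the kind of manual wiring you end up writing when two-way binding is absent (Model, rendered and the rest are invented for illustration):

```javascript
// A minimal observable model: views must subscribe to changes by hand.
function Model(attrs) {
  this.attrs = attrs;
  this.listeners = [];
}
Model.prototype.on = function (fn) { this.listeners.push(fn); };
Model.prototype.set = function (key, value) {
  this.attrs[key] = value;
  this.listeners.forEach(function (fn) { fn(key, value); }); // notify views
};

var user = new Model({ name: 'Ada' });
var rendered = '';
user.on(function (key, value) {  // the "view" re-renders itself by hand
  rendered = key + ': ' + value;
});
user.set('name', 'Grace');       // without this wiring, the view goes stale
```

With a two-way-binding framework, this subscribe-and-update boilerplate disappears.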

Finally, Ember probably is harder to learn than the other frameworks, but it rewards the coder with higher performance. It favors “convention over configuration”, which likely will make Ruby on Rails or Symfony users feel right at home. It integrates easily with a RESTful server side, using JSON for communication. Ember includes Handlebars (see Table 2) for templating and provides two-way updates. A negative point is the usage of <script> tags for markers, in order to keep templates up to date with the model. If you try to debug a running application, you'll find plenty of unexpected elements!

Simplify and Empower Your Coding

It's a sure bet that your application will need to work with HTML, handle all kinds of events and do AJAX calls to connect with the server. This should be reasonably easy—although it might be plenty of work—but even today, browsers do not have exactly the same features. Thus, you might have to go overboard with specific browser-detection techniques, so your code will adapt and work everywhere. Modern application users have grown accustomed to working with different events (tap, double tap, long tap, drag and drop, and more), and you should be able to include that kind of processing in your code, possibly with appropriate animations. Finally, connecting to a server is a must, so you'll be using AJAX functions all the time, and it shouldn't be a painful experience.

The most probable candidate library to help you with all these functions is jQuery. Arguably, it's the most popular JavaScript library in use today, employed at more than 60% of the most visited Web sites. jQuery provides tools for navigating your application's Web document, handles events with ease, applies animations and uses AJAX (Listing 1). Its current version is 2.1.1 (or 1.11.1, if you want to support older browsers), and it weighs in at only around 32K. Some frameworks (Angular, for example) even will use it if available.

Other, somewhat less-used possibilities could be Prototype (current version 1.7.2), MooTools (version 1.5.1) or the Dojo Toolkit (version 1.10). One of the key selling points of all these libraries is the abstraction of the differences between browsers, so you can write your code without worrying whether it will run on this or that browser. You probably should take a look at all of them to find which one best fits your programming style.

Also, there's one more kind of library you may want. Callbacks are familiar to JavaScript programmers who need them for AJAX calls, but when programming for Node, there certainly will be plenty of them! You should be looking at “promises”, a way of programming that will make callback programming more readable and save you from “callback hell”—a situation in which you need a callback, and that callback also needs a callback, which also needs one and so on, making code really hard to follow. See Listing 2, which also shows the growing indentation that your code will need. I'm omitting error-processing code, which would make the example even messier!
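The article's listings aren't reproduced here, but a minimal sketch of the problem looks like this; getUser, getOrders and getTotal are invented stand-ins (synchronous here, just to show the shape):

```javascript
// Each step needs the previous step's result, so the callbacks nest
// and the code drifts steadily to the right.
function getUser(id, cb) { cb({ id: id, name: 'user' + id }); }
function getOrders(user, cb) { cb([10, 20, 30]); }
function getTotal(orders, cb) {
  cb(orders.reduce(function (a, b) { return a + b; }, 0));
}

var report;
getUser(1, function (user) {
  getOrders(user, function (orders) {
    getTotal(orders, function (total) {
      report = user.name + ' spent ' + total; // already three levels deep
    });
  });
});
```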

The behavior of promises is standardized through the “Promises/A+” open specification. Several packages provide promises (jQuery and Dojo already include some support for them), and in general, they even can interact, processing each other's promises. A promise is an object that represents the future value of a (usually asynchronous) operation. You can process this value through the promise's .then(...) method and handle exceptions with its .catch(...) method. Promises can be chained, and a promise can produce a new promise, the value of which will be processed in the next .then(...). With this style, the callback hell example of Listing 2 would be converted into more understandable code; see Listing 3. Code, instead of being more and more indented, stays aligned to the left. Callbacks still are being (internally) used, but your code doesn't explicitly work with them. Error handling is also simpler; you simply would add appropriate .catch(...) calls.
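As a hedged sketch (not the article's Listing 3), here is a three-step sequence written with promises; getUser, getOrders and getTotal are made-up stand-ins that return already-resolved promises:

```javascript
function getUser(id) { return Promise.resolve({ id: id, name: 'user' + id }); }
function getOrders(user) { return Promise.resolve([10, 20, 30]); }
function getTotal(orders) {
  return Promise.resolve(orders.reduce(function (a, b) { return a + b; }, 0));
}

// The chain stays flat: each .then() hands its value to the next one,
// and a single .catch() handles a failure in any step.
var report = getUser(1)
  .then(function (user) { return getOrders(user); })
  .then(function (orders) { return getTotal(orders); })
  .then(function (total) { return 'total: ' + total; })
  .catch(function (err) { return 'failed: ' + err.message; });
```

Compare the left-aligned chain with the ever-deepening indentation of nested callbacks.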

You also can build promises out of more promises—for example, a service might need the results of three different callbacks before producing an answer. In this case, you could build a new single promise out of the three individual promises and specify that the new one will be fulfilled only when the other three have been fulfilled. There also are other constructs that let you fulfill a promise when a given number (possibly just one) of “sub-promises” have been fulfilled. See the Resources section for several possible libraries you might want to try.
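An illustrative sketch of that combination (the three values are invented), using the standard Promise.all, which fulfills only when every sub-promise has been fulfilled:

```javascript
var p1 = Promise.resolve('config');
var p2 = Promise.resolve('user');
var p3 = Promise.resolve('stats');

// combined settles only after p1, p2 and p3 have all been fulfilled;
// if any of them rejects, combined rejects too.
var combined = Promise.all([p1, p2, p3]).then(function (values) {
  return values.join('+');
});
```

Promise.race is the standard counterpart for the "fulfill on the first sub-promise" case.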

I have commented on several tools you might use to write your application, so now let's consider the final steps: building the application, testing it and eventually deploying it for operation.

Testing Your Application

No matter whether you program on your own or as a part of a large development group, testing your code is a basic need, and doing it in an automated way is a must. Several frameworks can help you with this, such as Intern, Jasmine or Mocha (see Resources). In essence, they are really similar. You define “suites”, each of which runs one or more “test cases”, which test that your code does some specific function. To test results and see if they satisfy your expectations, you write “assertions”, which basically are conditions that must be satisfied (see Listing 4 for a simple example). You can run test suites as part of the build process (which I explain below) to see if anything was broken before attempting to deploy the newer version of your code.

Tests can be written in “fluent” style, using many matchers (see Listing 5 for some examples). Several libraries provide different ways to write your tests, including Chai, Unit.js, Should.js and Expect.js; check them out to decide which one suits you best.
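A hand-rolled sketch of that fluent style (real libraries such as Chai offer far richer matchers; the two shown here are just illustrations):

```javascript
// Each matcher returns this, so assertions can be chained fluently.
function expect(actual) {
  return {
    toEqual: function (expected) {
      if (actual !== expected) throw new Error(actual + ' !== ' + expected);
      return this;
    },
    toBeAbove: function (n) {
      if (!(actual > n)) throw new Error(actual + ' is not above ' + n);
      return this;
    }
  };
}

expect(2 + 2).toEqual(4).toBeAbove(3);
expect('MEAN'.length).toEqual(4);
```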

If you want to run tests that involve a browser, PhantomJS and Zombie provide a fake Web environment, so you can run tests with greater speed than using tools like Selenium, which would be more appropriate for final acceptance tests.

Building and Deploying

Whenever your code is ready for deployment, you almost certainly will have to do several repetitive tasks, and you'd better automate them. Of course, you could go with classic tools like make or Apache's ant, but keeping to the “JavaScript all the way down” idea, let's look at a pair of tools, Grunt and Gulp, which work well.

Grunt can be installed with npm: do sudo npm install -g grunt-cli. But this isn't enough; you'll have to prepare a gruntfile to let it know what it should do. Basically, you require a package.json file that describes the packages your system requires and a Gruntfile.js file that describes the tasks you need. Tasks may have subtasks of their own, and you may choose to run the whole task or a specific subtask. For each task, you define (in JavaScript, of course) what needs to be done (Listing 7). Running grunt with no parameters runs the default task (if one is defined) or the whole gamut of tasks.

Gulp is somewhat simpler to set up (in fact, it was created to simplify Grunt's configuration files), and it depends on what its authors call “code-over-configuration”. Gulp works in “stream” or “pipeline” fashion, along the lines of Linux's command line, but with JavaScript plugins. Each plugin takes one input and produces one output, which automatically is fed to the next plugin in the queue. This is simpler to understand and set up, and it even may be faster for tasks involving several steps. On the other hand, being a newer project implies a smaller community of users and fewer available plugins, although both situations are likely to work out in the near future.
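A comparable gulpfile.js sketch in the stream style, using the gulp 3.x-era API that matches the article's time frame (it assumes gulp and the gulp-uglify plugin are installed; the paths are placeholders):

```javascript
var gulp = require('gulp');
var uglify = require('gulp-uglify');

gulp.task('scripts', function () {
  return gulp.src('src/*.js')   // read source files as a stream
    .pipe(uglify())             // each plugin transforms the stream...
    .pipe(gulp.dest('dist'));   // ...and pipes it on to the next one
});

gulp.task('default', ['scripts']);
```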

You can use them either from within a development environment (think Eclipse or NetBeans, for example), from the command line or as “watchers”, setting them up to monitor specific files or directories and run certain tasks whenever changes are detected to streamline your development process further in a completely automatic way. You can set up things so that templates will be processed, code will be minified, SASS or LESS styles will be converted in pure CSS, and the resulting files will be moved to the server, wherever it is appropriate for them. Both tools have their fans, and you should try your hand at both to decide which you prefer.

Conclusion

Modern, fast JavaScript engines, plus the availability of plenty of specific tools to help you structure, test and deploy your systems, make it possible to create Web applications with “JavaScript all the way down”, helping your developers be more productive and letting them work on both the client and server sides with the same tools they already are proficient with. For modern development, you certainly should give this a thought.

Federico Kereki is a Uruguayan systems engineer with more than 25 years of experience developing systems, doing consulting work and teaching at universities. He currently is working as a UI Architect at Globant, using a good mixture of development frameworks, programming tools and operating systems—and FLOSS, whenever possible! A couple years ago, he wrote the Essential GWT book, in which you also can find some security concerns for Web applications. You can reach Federico at fkereki@gmail.com.