I love unit tests, but they're only useful if they get run. For one of my projects at work, I have a set of server-side unit tests and a set of browser-side unit tests. The server-side unit tests get run automatically on “git push” via Buildbot, but the browser-side tests haven't been run for a long time because they don't work in Firefox, which is my primary browser, due to differences in the way it iterates through object keys.
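To illustrate the kind of difference involved: engines implementing ES2015+ property ordering iterate integer-like keys first, in ascending numeric order, followed by the remaining string keys in insertion order, while older engines made no such guarantee. A quick sketch (not from my test suite, just an illustration):

```javascript
// In ES2015+ engines, integer-like keys come first in ascending numeric
// order, then the remaining string keys in insertion order. Older
// engines did not guarantee any particular order, so code that depends
// on key order behaves differently across browsers.
var obj = {b: 1, a: 2};
obj["10"] = 3;
obj["2"] = 4;
console.log(Object.keys(obj)); // [ '2', '10', 'b', 'a' ] in Node.js
```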
Of course, automation would help, in the same way that automating the server-side tests ensured that they were run regularly. Enter PhantomJS, which is a scriptable headless WebKit environment. Unfortunately, even though PhantomJS can support many different testing frameworks, there is no existing support for nodeunit, which is the testing framework that I'm using in this particular project. Fortunately, it isn't hard to script support for nodeunit.
nodeunit's built-in browser support just dynamically builds a web page with the test results and a test summary. If we ran it as-is in PhantomJS, it would happily run the tests for us, but we wouldn't be able to see the results, and it would sit there doing nothing when it was done. What we want is for the test results to be output to the console, and for the script to exit when the tests are done (with an error code if any tests failed). To do this, we will create a custom nodeunit reporter that communicates with PhantomJS.
First, let's deal with the PhantomJS side. Our custom nodeunit reporter will use console.log to print the test results, so we will pass through console messages in PhantomJS.
page.onConsoleMessage = function (msg) { console.log(msg); };
We will use PhantomJS's callback functionality to signal the end of the tests. The callback data will just be an object containing the total number of assertions, the number of failed assertions, and the time taken.
page.onCallback = function (data) {
    if (data.failures) {
        console.log("FAILURES: " + data.failures + "/" + data.length +
            " assertions failed (" + data.duration + "ms)");
    } else {
        console.log("OK: " + data.length + " assertions (" + data.duration + "ms)");
    }
    phantom.exit(data.failures);
};
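The summary formatting is simple enough to pull out into a pure function of the callback data (a hypothetical refactoring on my part, not part of the original script), which also makes it easy to exercise outside PhantomJS:

```javascript
// Hypothetical refactoring: the summary line as a pure function of the
// callback data ({failures, length, duration}), so it can be tested
// without a PhantomJS environment.
function formatSummary(data) {
    if (data.failures) {
        return "FAILURES: " + data.failures + "/" + data.length +
            " assertions failed (" + data.duration + "ms)";
    }
    return "OK: " + data.length + " assertions (" + data.duration + "ms)";
}

console.log(formatSummary({failures: 0, length: 5, duration: 12}));
// OK: 5 assertions (12ms)
```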
(Warning: the callback API is marked as experimental, so may be subject to change.)
If the test page fails to load for whatever reason, PhantomJS will just sit there doing nothing, which is not desirable behaviour, so we will exit with an error if something fails.
function reportError(msg, trace) {
    console.log("ERROR: " + msg);
    for (var i = 0; i < trace.length; i++) {
        var t = trace[i];
        console.log(i + ": " + (t.file || t.sourceURL) + ": " + t.line +
            (t.function ? " (in function " + t.function + ")" : ""));
    }
    phantom.exit(1);
}
phantom.onError = reportError;
page.onError = reportError;

page.onLoadFinished = function (status) {
    if (status !== "success") {
        console.log("ERROR: page failed to load");
        phantom.exit(1);
    }
};

page.onResourceError = function (resourceError) {
    console.log("ERROR: failed to load " + resourceError.url + ": " +
        resourceError.errorString + " (" + resourceError.errorCode + ")");
    phantom.exit(1);
};
Now for the nodeunit side. The normal test page looks like this:
<!DOCTYPE html>
<html xmlns="http://www.w3.org/1999/xhtml">
<head>
<title>ML Editor Test Suite</title>
<link rel="stylesheet" href="stylesheets/nodeunit.css" type="text/css" />
<script src="javascripts/module-requirejs.js" type="text/javascript"></script>
<script src="javascripts/requirejs-config.js" type="text/javascript"></script>
<script data-main="test" src="javascripts/require.js" type="text/javascript"></script>
</head>
<body>
<h1 id="nodeunit-header">ML Editor Test Suite</h1>
</body>
</html>
If you're not familiar with RequireJS pages, the <script data-main="test" src="javascripts/require.js" type="text/javascript"></script> line means that the main JavaScript file is called "test.js". We want to use the same script file for both a normal browser test and the PhantomJS-based test, so in PhantomJS, we will set window.nodeunit_reporter to our custom reporter. In "test.js", then, we will check for window.nodeunit_reporter, and if it is present, we will replace nodeunit's default reporter. Although there's no documented way of changing the reporter in the browser version of nodeunit, looking at the code, it's pretty easy to do.
if (window.nodeunit_reporter) {
    nodeunit.reporter = window.nodeunit_reporter;
    nodeunit.run = window.nodeunit_reporter.run;
}
(Disclaimer: since this uses an undocumented interface, it may break some time in the future.)
So what does a nodeunit reporter look like? It's just an object with two properties: info, a textual description, and run, a function that calls the nodeunit runner with a set of callbacks. I based my reporter on a combination of nodeunit's default console reporter and its browser reporter.
window.nodeunit_reporter = {
    info: "PhantomJS-based test reporter",
    run: function (modules, options) {
        var opts = {
            moduleStart: function (name) {
                console.log("\n" + name);
            },
            testDone: function (name, assertions) {
                if (!assertions.failures()) {
                    console.log("✔ " + name);
                } else {
                    console.log("✖ " + name);
                    assertions.forEach(function (a) {
                        if (a.failed()) {
                            console.log(a.message || a.method || "no message");
                            console.log(a.error.stack || a.error);
                        }
                    });
                }
            },
            done: function (assertions) {
                window.callPhantom({
                    failures: assertions.failures(),
                    duration: assertions.duration,
                    length: assertions.length
                });
            }
        };
        nodeunit.runModules(modules, opts);
    }
};
Now in PhantomJS, I just need to get it to load a modified test page that sets window.nodeunit_reporter before loading "test.js", and voilà, I have browser tests running on the console. All that I need to do now is to add it to my Buildbot configuration, and then I will be alerted whenever I break a browser test.
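The modified test page is the same as the original, with one extra script tag that defines window.nodeunit_reporter before require.js loads "test.js". Something along these lines (the "javascripts/phantom-reporter.js" filename is my own invention; use whatever file holds the reporter above):

```html
<!DOCTYPE html>
<html xmlns="http://www.w3.org/1999/xhtml">
<head>
<title>ML Editor Test Suite</title>
<link rel="stylesheet" href="stylesheets/nodeunit.css" type="text/css" />
<!-- Defines window.nodeunit_reporter; must come before require.js
     loads "test.js". The filename is illustrative. -->
<script src="javascripts/phantom-reporter.js" type="text/javascript"></script>
<script src="javascripts/module-requirejs.js" type="text/javascript"></script>
<script src="javascripts/requirejs-config.js" type="text/javascript"></script>
<script data-main="test" src="javascripts/require.js" type="text/javascript"></script>
</head>
<body>
<h1 id="nodeunit-header">ML Editor Test Suite</h1>
</body>
</html>
```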
The script may or may not work in SlimerJS, allowing the tests to be run in a Gecko-based rendering engine, but I have not tried it since, as I said before, my tests don't work in Firefox. One main difference, though, is that SlimerJS doesn't honour the exit code, so Buildbot would need to parse the output to determine whether the tests passed or failed.