Improve Test Reliability for your Angular Apps with Protractor Flake

Prerequisite Knowledge

This article is geared towards programmers who are writing Jasmine tests for AngularJS apps using Protractor, a testing framework built on top of selenium-webdriver.
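
If you haven’t seen this stack in action, a Protractor spec is just a Jasmine spec that drives a real browser through selenium-webdriver, with some AngularJS-aware helpers layered on top. The page and locators below are hypothetical, but the shape is what the rest of this article assumes:

describe('the login page', function () {
  it('greets the signed-in user', function () {
    // browser is Protractor's global wrapper around selenium-webdriver
    browser.get('https://example.com/login');

    // by.model and by.binding are AngularJS-specific locators Protractor provides
    element(by.model('credentials.email')).sendKeys('user@example.com');
    element(by.model('credentials.password')).sendKeys('hunter2');
    element(by.buttonText('Sign in')).click();

    // Protractor waits for AngularJS to settle before resolving the expectation
    expect(element(by.binding('user.firstName')).getText()).toBe('Jane');
  });
});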

Introduction

Every programmer knows there is immense value in automated testing. When, at the click of a button, you can prove critical business workflows are operational, you gain a great deal of assurance that your code is working now, and will continue to work as the code changes. With these critical tests tied into your build pipeline, every build and code check-in can kick off the automated test suite. Inevitably something will go wrong, and that instant feedback lets the problem be identified and addressed quickly. The bottom line is, solid testing lets you deliver code more quickly and confidently. On top of the safety and speed, I find there is simply a great deal of intrinsic joy in seeing tests launch off in a fluttering array of browsers, fly through tedious-to-test flows, and then seeing the PASSED lettering at the bottom of your console.

Serenity is all your tests passing

Still, it’d be a bit dishonest if I failed to mention that getting to this point is a fairly lofty dream. It’s not easy; it’s fraught with difficulties, both cultural and technical. Between getting fellow developers to write tests in the first place, not ignoring or disabling failing tests, integrating the tests into your build pipeline, running in multiple environments and browsers, and simply getting consistent results, it takes a lot of effort to reach testing Nirvana. Until you’ve established a solid testing culture, any existing test efforts will be tenuous and ungroomed, vulnerable to falling fallow and festering in the long run.

Sadly, I cannot devise a generic solution to meet every company’s testing needs, but I can share a solution to one of the common problems: inconsistent test results.

The Problem of Inconsistent Tests

Whatever could you mean by inconsistent test results?

Take this common scenario: you run a test once, and it fails. But then you re-run it once, twice, ten times in a row, and it passes every time.

To some, this is just a minor inconvenience. But in truth, it is a non-trivial cost that adds up to some fairly insidious problems. Two big ones are:
1. You delay your build process
2. You erode confidence in the tests

To add a little context, here is a high level overview of how our automated tests run here at Nitro:

  1. New code is checked in to the master branch
  2. Jenkins (our Continuous Integration system) detects the new source control revision, and rebuilds the testing environment using the latest code
  3. Once finished, the end-to-end test suite is sent off to https://saucelabs.com/
  4. The tests run in parallel, across multiple browsers and operating systems (see the config sketch after this list)
  5. Based on the test suite results, the build is flagged as passed or failed

* Only builds tagged as passed may be promoted to production
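
As a rough sketch of steps 3 and 4 (the file name, browsers, and values here are illustrative, not our actual config), a Protractor config that hands sessions off to Sauce Labs and fans out across browsers looks something like this:

// protractorSauce.js - an illustrative sketch, not our production config
exports.config = {
  // Send sessions to Sauce Labs instead of a local selenium server
  sauceUser: process.env.SAUCE_USERNAME,
  sauceKey: process.env.SAUCE_ACCESS_KEY,

  // Run the suite in parallel across multiple browsers and operating systems
  multiCapabilities: [
    { browserName: 'chrome', platform: 'Windows 10' },
    { browserName: 'firefox', platform: 'Windows 10' },
    { browserName: 'safari', platform: 'macOS 10.12' }
  ],

  framework: 'jasmine',
  specs: ['specs/**/*.spec.js'],

  jasmineNodeOpts: {
    // Sauce Labs runs need more breathing room than a fast local run
    defaultTimeoutInterval: 60000
  }
};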

Early on, we came to realize that our testing solution was fairly brittle due to problems arising in step #4. Oftentimes, with our end-to-end tests running in multiple browsers, a single test, in a single browser, would fail, causing the entire build to be marked as a failure. Upon investigation, the problems were often one-off random failures: a 3rd-party library not loading, a service being temporarily unavailable, a collision between randomly generated email addresses, or a test timing out because it runs very fast locally but slowly on Sauce Labs.

Rerunning the end-to-end tests would usually clear the failure, but if another sporadic error occurred, we were back in the same problematic situation, and rerunning the entire test suite takes time. This is where protractor-flake comes in: it wraps a normal Protractor run, detects which spec files failed, and re-runs only those files, up to a configurable number of attempts.
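
To demonstrate, the randomFail.js spec referenced in the log below is a deliberately contrived stand-in for a flaky test; something along these lines (purely illustrative, not our real spec) would produce the “Expected 3 to be 1” failures shown:

// randomFail.js - a contrived spec that fails roughly 75% of the time
describe('Given a flakey test', function () {
  it('should randomly fail, 75% of the time', function () {
    // Roll a number from 1 to 4; only a 1 satisfies the expectation
    var roll = Math.floor(Math.random() * 4) + 1;
    expect(roll).toBe(1);
  });
});

Here is what a protractor-flake run looks like against that spec: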

protractor-flake --maxAttempts=10 -- protractorLocalChrome.js --specs=randomFail.js
Using the selenium server at http://localhost:4444/wd/hub
[launcher] Running 1 instances of WebDriver
Started
F

Failures:
1) Given a flakey test it should randomly fail, 75% of the time
Message:
Expected 3 to be 1.
Stack:
Error: Expected 3 to be 1.

1 spec, 1 failure
Re-running tests: test attempt 2
Re-running the following test files:
/Users/bbrandes/Desktop/nitroapps/account-portal/randomFail.js

Failures:
1) Given a flakey test it should randomly fail, 75% of the time
Message:
Expected 3 to be 1.

1 spec, 1 failure
Re-running tests: test attempt 3
Re-running the following test files:
/Users/bbrandes/Desktop/nitroapps/account-portal/randomFail.js

[launcher] Running 1 instances of WebDriver
Started
.

1 spec, 0 failures
Finished in 0.086 seconds
[launcher] 0 instance(s) of WebDriver still running
[launcher] chrome #01 passed

As you can see in the log above, the test failed its first two runs, but the third time was the charm, and the whole test suite was marked as passing. Had the test gotten unlucky and failed all 10 attempts, the whole suite would have been marked as a failure.

Closing Thoughts

Hopefully, if you’re experiencing similar woes with sporadic test failures, protractor-flake can help you out. It’s an open source project (one I’ve contributed to, and the author is very responsive!) that I definitely encourage you to try out.
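
If you want to give it a try, the wiring is small: install it from npm and put it in front of your normal Protractor invocation. The script names, config file name, and attempt count below are placeholders, not our actual setup:

npm install --save-dev protractor-flake

{
  "scripts": {
    "e2e": "protractor protractorConf.js",
    "e2e-ci": "protractor-flake --maxAttempts=3 -- protractorConf.js"
  }
}

Everything after the -- is passed straight through to Protractor, so the CI job runs the e2e-ci script and a one-off flake only fails the build if it keeps failing past the attempt limit.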

If there’s any other Protractor-testing-related content you’d like to see, feel free to reach out! Maybe next time I’ll share some tips and tricks for common Protractor problems at the test-case level.