The problems we wanted to address:
- Scalability of test coverage: joint ownership of the creation and maintenance of test cases.
- Moving test runs further upstream in the build process, so that test runs and reports are hooked into a build process the dev team already uses and monitors.
Presently, Web QA’s MDN tests are a Python-based suite that runs on a Jenkins instance firewalled from the public. Both of these factors make accessing and monitoring our test runs complex, and an event separate from the developer workflow.
Developer-level unit tests run on a Travis CI instance, and presently there is no hook that runs the Python-based end-to-end tests when their build passes. The end-to-end tests run on a cron job and can also be started by an IRC bot.
Additionally, our Python-based tests live outside of the developers’ project -- the tests are in github.com/mozilla/mdn-tests while the application lives in github.com/mozilla/kuma -- adding another hurdle to maintainability and visibility.
Working closely with the MDN team, I helped identify the areas an end-to-end testing framework needed to take into account and solve. Between David’s and my experience with testing, we were able to narrow our search to a concise set of requirements:
- Uses real browsers, both on a test developer’s computer and via external services such as Sauce Labs
- Generate actionable reports that quickly identify why and where a regression occurred
- An automatic mechanism that re-runs a test which fails due to a transient problem, such as a network hiccup
- Choose an open source project that has a vibrant and active community
- Choose a framework and harness that is fun to use and provides quick results
- Hook the end-to-end tests into the build process
- Ability to run a single test while developing a new test case
- Ability to multithread and run n tests concurrently
David Walsh suggested the Intern (theintern.io) and, after evaluating it, we
quickly realized that it hit all of our prerequisites. I was able to configure
the MDN Travis CI instance to use Sauce Labs, and using the included Dig Dug
library for tunneling, tests were soon running on Sauce Labs’ infrastructure.
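For reference, an Intern configuration wired up for Sauce Labs through Dig Dug looks roughly like the sketch below. This is a hedged example: the suite module IDs, environments, and ports are placeholders, not MDN’s actual configuration.

```javascript
// Sketch of an Intern configuration (AMD module) using Dig Dug's
// Sauce Labs tunnel. All values below are illustrative placeholders.
define({
  proxyPort: 9000,
  proxyUrl: 'http://localhost:9000/',

  // Dig Dug tunnel class that brokers the connection to Sauce Labs
  tunnel: 'SauceLabsTunnel',

  // Run several browser sessions at once -- the "n tests concurrently"
  // requirement from the list above
  maxConcurrency: 3,

  environments: [
    { browserName: 'firefox' },
    { browserName: 'chrome' }
  ],

  // Functional (end-to-end) suites; the module ID is hypothetical
  functionalSuites: [ 'tests/functional/index' ]
});
```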
- The important isDisplayed method of Intern’s Leadfoot API, paired with the pollUntil helper, returns inconsistent results. The cause may be as simple as our misusing isDisplayed() or the polling; this will take more investigation.
- Due to security restrictions, Travis CI cannot run the Webdriver-based end-to-end tests on a per-pull-request basis (encrypted credentials, such as our Sauce Labs keys, are not exposed to pull request builds). Test jobs are limited to merges to master only, so we need to update the .travis.yml file to exclude the end-to-end tests when a pull request is made.
- Refactor the current tests to use the Page Object Model. This was a fairly large deliverable to piece together and get working, so David and I decided not to worry about best coding practices while getting a minimally viable example running.
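The pull-request exclusion mentioned above could be expressed in .travis.yml roughly as follows. `TRAVIS_PULL_REQUEST` and `TRAVIS_BRANCH` are standard Travis CI environment variables; the `test-e2e` script name is a placeholder.

```yaml
# Sketch of a .travis.yml guard: run unit tests everywhere, but only run
# the Webdriver-based end-to-end suite on pushes to master, never on pull
# requests (where Sauce Labs credentials are unavailable).
script:
  - npm test
  - 'if [ "$TRAVIS_PULL_REQUEST" = "false" ] && [ "$TRAVIS_BRANCH" = "master" ]; then npm run test-e2e; fi'
```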
Before the team is ready to trust Leadfoot and Dig Dug we need to
solve the outstanding issues. If you are interested in getting involved,
please let us know.
- Remove the sleep statements from the code base -- this will likely involve a better understanding of how to use Promises.
- Understand why some tests fail when run against Sauce Labs -- the failures may be legitimate, we may be misusing the API, or there may be a defect somewhere within the test harness.
- Refactor tests to use the Page Object Model.
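On the first point, the usual replacement for a fixed sleep is a poll that resolves as soon as a condition holds, in the spirit of Leadfoot’s pollUntil helper. The sketch below is plain JavaScript with invented names, just to illustrate the Promise pattern:

```javascript
// A minimal sketch of replacing a fixed sleep with a condition-based wait.
// pollFor resolves as soon as check() returns a truthy value, or rejects
// after `timeout` ms -- similar in spirit to Leadfoot's pollUntil helper.
function pollFor(check, timeout, interval) {
  return new Promise(function (resolve, reject) {
    var start = Date.now();
    (function attempt() {
      var result = check();
      if (result) {
        resolve(result);
      } else if (Date.now() - start >= timeout) {
        reject(new Error('Timed out after ' + timeout + 'ms'));
      } else {
        setTimeout(attempt, interval);
      }
    })();
  });
}

// Example: wait for a flag that flips after ~50ms instead of sleeping
// for a fixed five seconds.
var ready = false;
setTimeout(function () { ready = true; }, 50);

pollFor(function () { return ready; }, 2000, 10).then(function () {
  console.log('condition met'); // → prints "condition met" after ~50ms
});
```

The test finishes as soon as the condition is true rather than always paying the worst-case sleep, and a genuine failure surfaces as a timeout error instead of a silent stall.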
After these few problems are solved, the team can include the end-to-end tests
in their build process. This will give developers timely, relevant feedback
on every feature merge.
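As an illustration of the Page Object Model refactor on the list above, a page object in JavaScript might look roughly like this; the page name, selector, and `remote` API are invented for the sketch, not taken from MDN’s tests:

```javascript
// A rough sketch of the Page Object Model (the page name, selector, and
// remote API are invented for illustration). Each page wraps its selectors
// and interactions behind methods, so test cases never touch raw CSS
// selectors directly.
function SearchPage(remote) {
  // `remote` stands in for the browser session object (Leadfoot's command
  // object in Intern); here it only needs a `type(selector, text)` method.
  this.remote = remote;
}

SearchPage.prototype.SEARCH_BOX = '#search-q';

SearchPage.prototype.search = function (term) {
  // Tests call page.search('foo') instead of repeating the selector.
  return this.remote.type(this.SEARCH_BOX, term);
};
```

A test would then call `new SearchPage(remote).search('css')` rather than repeating the selector, so a markup change touches one page object instead of every test.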