SkillsCast

Hive CI: Making Automation Scale for iPlayer

29th October 2015 in London at Business Design Centre

There are 92 other SkillsCasts available from Droidcon 2015


Our remit at the BBC means we have to reach as wide an audience as possible. For our iPlayer mobile app, that means supporting a huge variety of device and OS combinations. Daunted by the amount of manual testing we would have to perform, we invested heavily in automation to reduce our manual effort.

We quickly built up a large and successful suite of automated tests and could run these on a single device driven by our CI system. When it came to making this scale to the large number of devices we wanted to support, we really struggled.

The difficulties we faced were:

  • how to manage and run tests on multiple devices
  • keeping devices stable and ready to run tests
  • dealing with false positives and intermittent failures
  • managing the huge number of tests we’d accrued and the GBs of results we were generating

We struggled to scale our approach using conventional CI tools. We wanted a system that could help us in three areas:

  • managing the physical devices and keeping them in a ready-state to run tests
  • scheduling and running tests across multiple connected devices
  • collating and interpreting results across a single build (see the sketch after this list).
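
To make those three areas concrete, here is a minimal sketch in Python of how a device-aware scheduler could keep devices ready, hand out jobs, and collate results per build. It is not Hive CI's actual implementation; the Device and TestJob models, the adb readiness probe, and the function names are assumptions made purely for illustration.

```python
import queue
import subprocess
from dataclasses import dataclass


@dataclass
class Device:
    """A physical device attached to the rig (hypothetical model)."""
    serial: str
    os_version: str
    ready: bool = False


@dataclass
class TestJob:
    """One suite to run against one device for a given build (hypothetical model)."""
    build_id: str
    suite: str


def check_ready(device: Device) -> bool:
    """Keep devices in a ready state: here, a simple adb responsiveness probe."""
    try:
        result = subprocess.run(
            ["adb", "-s", device.serial, "shell", "echo", "ok"],
            capture_output=True, text=True, timeout=30,
        )
        device.ready = result.returncode == 0 and "ok" in result.stdout
    except (subprocess.TimeoutExpired, OSError):
        device.ready = False
    return device.ready


def run_suite(device: Device, job: TestJob) -> dict:
    """Run one suite on one device and return a result record.
    (Placeholder: a real runner would invoke the team's chosen test framework.)"""
    return {"build": job.build_id, "suite": job.suite,
            "device": device.serial, "status": "passed"}


def schedule(devices: list[Device], jobs: "queue.Queue[TestJob]") -> dict[str, list[dict]]:
    """Hand queued jobs to ready devices and collate results per build."""
    results: dict[str, list[dict]] = {}
    while not jobs.empty():
        job = jobs.get()
        device = next((d for d in devices if check_ready(d)), None)
        if device is None:
            jobs.put(job)  # no healthy device right now; try again later
            break
        results.setdefault(job.build_id, []).append(run_suite(device, job))
    return results
```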

We built a custom CI system, which we call Hive CI, to help us overcome these problems. Hive CI was designed to be device-aware and test-aware, giving us greater control over how we run our tests, which tests we run, and which devices they run on. Our system could be used by any team in the BBC, for any mobile testing project, with any testing framework.
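
One way to picture "device-aware and test-aware" is a small capability-matching step: each suite declares what it needs and each device advertises what it offers, so the system decides which tests run on which devices. The field names and matching rule below are illustrative guesses, not Hive CI's real schema.

```python
from dataclasses import dataclass


@dataclass
class DeviceInfo:
    """What a connected device reports about itself (illustrative fields)."""
    serial: str
    platform: str                  # e.g. "android" or "ios"
    os_version: tuple[int, ...]    # e.g. (5, 1)


@dataclass
class SuiteRequirements:
    """What a test suite declares it needs in order to run (illustrative fields)."""
    platform: str
    min_os: tuple[int, ...] = (0,)


def eligible(device: DeviceInfo, req: SuiteRequirements) -> bool:
    """Device-aware, test-aware matching: run a suite only where it is supported."""
    return device.platform == req.platform and device.os_version >= req.min_os


# Example: an Android-only suite that needs OS 5.0 or newer
devices = [DeviceInfo("A1", "android", (4, 4)), DeviceInfo("B2", "android", (5, 1))]
targets = [d for d in devices if eligible(d, SuiteRequirements("android", (5, 0)))]
# targets contains only the (5, 1) device
```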

Now that we could run all our tests on all our devices, we found ourselves with a completely new set of problems. The effort of maintaining over 200 tests across all our physical devices was a full-time job and left no time to investigate the millions of test results we generated every day. We needed a more intelligent approach to what we ran. We solved this in three ways:

  • breaking our test suites into smaller suites focusing on specific domains (core journeys, statistics, accessibility) and reducing our on-commit tests to a core set of journeys, which we call PUMAs
  • using our monitoring to identify the highest reach devices and operating systems
  • expanding the Hive result engine so it could differentiate between genuine failures and intermittent problems (see the sketch after this list).
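
As an illustration of that last point, a result engine can separate genuine failures from intermittent ones by re-running a failed test and consulting its recent history. The retry count, the pass-rate threshold, and the function names below are assumptions made for the sake of the sketch, not details of the Hive result engine.

```python
from collections import defaultdict
from typing import Callable

# Rolling record of recent outcomes per test, newest last (illustrative store).
history: dict[str, list[bool]] = defaultdict(list)


def record(test_name: str, passed: bool, window: int = 20) -> None:
    """Keep a bounded window of recent outcomes for each test."""
    runs = history[test_name]
    runs.append(passed)
    del runs[:-window]


def classify_failure(test_name: str, rerun: Callable[[], bool], retries: int = 2) -> str:
    """Classify a failed test as 'intermittent' or 'genuine'.

    A test that passes on re-run, or that usually passes in its recent history,
    is treated as intermittent; a test that keeps failing with a poor recent
    record is treated as a genuine failure worth investigating.
    """
    for _ in range(retries):
        if rerun():
            record(test_name, True)
            return "intermittent"
    record(test_name, False)
    runs = history[test_name]
    pass_rate = sum(runs) / len(runs)
    return "intermittent" if pass_rate >= 0.8 else "genuine"


# Example: a test that fails once but passes on re-run is flagged as intermittent
print(classify_failure("player_starts_playback", rerun=lambda: True))  # -> intermittent
```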


Jitesh Gosai

Jitesh currently works with Mobile Platforms teams within the BBC, such as iPlayer Mobile and iPlayer Radio, to identify and establish test automation approaches that have helped those teams move faster and release more quickly than before.

David Buckhurst

David Buckhurst is a Technical Architect at the BBC, leading the Test Engineering Team and providing strategic thinking on wider testing problems across the corporation.